Ron Favali

How Should PR Pros Respond to Deep Fake “News” That Affects Clients?

Like most PR professionals, I’m a news junkie. Not only do I love news, but I also have deep admiration and respect for the journalists who produce it.

I am also mesmerized by generative AI tools and have benefited from using ChatGPT for research in creating client content. In some cases, it has saved me hours in researching topics compared to traditional search. However, I would never copy anything directly from ChatGPT for client work. Honestly, the quality just isn’t that good.

There’s a definite line in the sand between news and passing off AI-generated content as news. Earlier this year, CNET got dinged for attempting to pass off AI-generated stories as news, then its staff pushed back. Just this week, it was reported that Google is trying to push AI technology to produce news content for major news organizations. The reception was mixed at best.

This morning, for the first time, I had to navigate this tricky issue that directly impacted a client. I work with a couple of scientific organizations focused on commercializing extremely complex technologies for enterprise use. There are a limited number of companies developing these technologies, so when any news runs or content is posted to social media on these subjects, I know about it almost instantly.

I noticed a post from one of my client’s competitors as part of my morning news consumption. The post linked to a “story” on Medium. While the post gave this story credibility, red flags immediately became apparent.

It was a deep fake.

The “author” on Medium opened the account within the past month and has already published a dozen or so stories on entirely different subjects.

A Google search on the author’s name came up blank.

These two issues were relatively easy to investigate. However, there was more. As a PR pro who has worked with this client for several years, I found the content about their industry came off as odd. It was dry. I’d seen it all before. There was nothing new.

For anyone who has used ChatGPT, this article replicated the format ChatGPT uses in long responses to questions. Introduction. Many subheads. Lots of bulleted content. Generic conclusion.

Also, scientific advances are rapid. The article didn’t mention advancements from the past couple of years, another potential indication that it was created using ChatGPT, which doesn’t know anything that happened after 2021.

But honestly, these points still weren't enough to prove much of anything.

One of the most significant red flags was a sentence that pointed readers to a Coursera course to learn more about this topic. AHA! There it is. There are a handful of companies working on this technology. There are maybe 100 scientists in the world advancing it. There’s no way any information on this topic can be accessed through a platform like Coursera. Although the course title used a specific term from the field, the course wasn’t even close to the same thing.

After noticing that, it only took me a few minutes to replicate the entire article on ChatGPT by asking questions based on the subsections of the article.

So, what’s the big deal? Why is this a problem?

First, the “author” positioned himself as an industry expert and engaged with commenters on Medium as one. Second, a competitor was promoting the content, indirectly implying a level of capability in this field that they don’t really have. I don’t believe this was done intentionally. I think someone on their social media team has a search set up for specific terms and posts stories to social feeds whenever stories including those terms pop up.

Unfortunately, the competitor post is getting some traction and perhaps some undue credit for being further along in this technology than they are.

I knew my client would never want to get into a senseless social media war on anything, and it’s not something I would ever suggest they do anyway. There’s no point to that.

So what did I do about it?

I informed my client and proved the story was a deep fake generated entirely by ChatGPT. I provided a response they could use if anyone asked them about the story or why they weren’t included in it.

I really don’t think any reputable company would knowingly promote deep fake content. I wasn’t able to message the original poster; the platform where it was posted no longer allows that. So I sent them a very professional message on another platform pointing out the issues with the story they were promoting and explained how they could verify it was a deep fake.

I created a blinded account on Medium and posted that I proved ChatGPT produced the story. Minutes later, the blinded account—which in no way can be traced back to me—was blocked by the “author.”

This was the first time I came across a deep fake like this. I think I handled it appropriately. As PR professionals, is there anything you would have done differently? I’m interested in any input anyone has.
