Bad behaviour: when PR AI habits poison the news

The first thing you notice is the smell.

Not the wholesome whiff of printer’s ink and deadline sweat, but something metallic and opportunistic, like a slot machine that’s learned to speak.

On the screens in front of us the ‘experts’ multiply: bright teeth, clean headshots, bios brimming with accomplishment…and not a single pulse among them.

Belgian readers found this out the hard way when glossy mags Elle and Marie Claire were caught publishing hundreds of pieces under fabricated personas — complete with headshots and bios — and, in one case, an AI writer styled as a psychologist. After public outcry (and intervention from Belgium’s psychology watchdog), the publisher replaced the fake bylines and added AI disclaimers.

Elsewhere, an eager new species of PR tool sallies forth from Lithuania, promising to respond to journalists’ requests at scale — instant quotes, instant authority…just add logo.

It is the age of the counterfeit adult, and the email inbox is its natural habitat.

This is how it works: brief goes in, content comes out. Fast. Faster than a human, certainly. Faster than any notion of ethics, usually. The PR shop that plugs its pipeline into this machinery thinks it’s bought a Ferrari. What it’s really hitched up to is a fairground ride, drip-fed with plausible prose.

The pitch sounds fine. Too fine, a little too smooth. And somewhere a journalist — undercaffeinated, underpaid, invariably overclocked — has to work an extra hour to find out whether your Dr X has ever written anything more demanding than a shopping list.

Bad AI, bad faith

To be clear, I am pro AI. But I’m pro ‘good’ AI, used in good faith. As in every other aspect of our work, we need to behave ethically and transparently.

Unfortunately, across the industry, we’re seeing bad behaviour. Some have crossed the Rubicon from augmenting spokespeople to automating them entirely.

In April, a UK trade investigation found national outlets had been quoting ‘experts’ who may not exist at all: a case centred on the widely cited Barbara Santini, whose credentials couldn’t be verified and whose quotes reached everyone from tabloids to the BBC via journalist–expert marketplaces.

Platforms suspended the profile and publishers scrubbed pieces. It was a wake-up call. If PRs fabricate expertise, newsrooms end up laundering it to the public.

There are honest uses of AI in comms — we’ll get to them, they’re my favourite — but the grifters got here first. They always do. The low road is paved with good intentions, or something; mostly, though, it’s paved with automated expert mills that scrape journalist call-outs and spray back quotations wrapped in paper credentials.

And caught in the crossfire are the actual journalists, who now have to reverse-image search every headshot and call three switchboards just to confirm your expert isn’t a mannequin with a National Insurance number. You are siphoning effort from reporting to babysitting, turning the daily information flow into a sewer and wondering why the readers look ill.

Trust recedes while social eats distribution

I say readers, but maybe it’s more appropriate to call them scrollers, given that Ofcom’s latest survey shows half (51%) of adults now use social media for news.

Intermediaries (social, search, aggregators) now squat between publisher and person, often stripping away context and fingerprints. You remember the headline, maybe the outrage, rarely the source. That’s the worst terrain imaginable to sow confusion with poor AI practices.

Perhaps that’s why trust is running on fumes. The latest Reuters numbers put UK trust in news “most of the time” at a measly 35%, among the bottom third of 48 markets. And the public’s stomach for AI-made news is at its most delicate precisely where it matters: politics, power — anything that might ruin your Tuesday.

While some institutions are tinkering with synthetic presenters — Ukraine rolled out an AI avatar to read human-written statements, clearly labelled as such — that trick only works with aggressive transparency and tight containment. The PR standard should be higher, not lower, than the minimum viable disclosure.

Ultimately, readers (scrollers?) don’t separate the sins of PR from the sins of publishers. They just decide the whole lot is dodgy and get their ‘news’ from a bloke with a ring light and discounted protein powder. Once that cynicism sets in, all the news you make arrives pre-tainted. Nobody wants to open a bottle marked ‘maybe water’.

Using AI without shame, or lawyers

Beyond goodwill, which is the only real currency we have(!), the law is taking an interest.

The EU AI Act is sharpening the knives on transparency for synthetic media. Regulators want labels and a paper trail. Tech platforms are wiring content credentials into the plumbing. The direction is obvious: show your workings or get shown the door. Do it now because it’s right. Or, you know, do it later with a lawyer at your shoulder. Your call.

As I said, I am unapologetically pro AI. This stuff is magic to me.

But ditch the mischief.

Yes, turn a brief into headline options, hooks, and rough storyboards. No, don’t use it to manufacture fake news or mask the origin of claims. If a fact matters, source it.

Yes, transcribe brainstorms, summarise and tag notes, convert whiteboard photos into clean action lists. No, don’t use those artefacts to populate a zombie bank of ‘spokespeople’. Real expertise is messy and reachable and, most importantly, accountable. If your ‘source’ can’t hop on camera or reply from a corporate email, you don’t have a source.

And yes, draft video scripts and social copy, spin up placeholder visuals to test concepts, and reshape content for every channel. But oh-my-God please no fake people pushing fake products — and especially not politicians, while we’re on it.

I’ve said before that AI helps with almost everything. That includes bad behaviour. It will happily scale your worst instincts. That’s why process matters.

We’ve got plenty of ethics codes (CIPR/PRCA/IPRA). I’ve even got a template for an internal AI policy if you need one. So behave yourself.

Written by

Luke Proctor, senior account manager and AI lead at Wildfire
