The government’s recent report, “The impact of AI on UK jobs and training”, ranked the top twenty professions most exposed to “large language modelling” (i.e. ChatGPT). Public relations professionals came in ninth.
When dealing with language (large or otherwise), it pays to be specific. What does “exposure” actually mean? The report claims to “identify which jobs will be more affected than others”, and while that doesn’t equate to jobs being fully replaceable, the paper’s introduction opens with the worrying statistic that 10-30% of jobs are automatable with AI.
PRs aren’t replaceable with AI - building relationships with journalists, for example, still takes a human being. Large language models can’t call someone on the phone or take them out for lunch, and they can’t actually think. All of these are fairly important, in my opinion.
So PR-bot 9000 might not be replacing humans anytime soon, but certain tasks can be automated - and PRs are, apparently, doing this more than most other professions. So the real question is: is this exposure a good thing?
The quality trap
LLMs can’t match the quality or creativity of a human. They might be faster, and they might be easier, but they’re not better. Simply put, tools like ChatGPT offer ways to cut corners. Sometimes that’s okay - templates, transcription software and even spellcheck all exist for this purpose. But cut too much out, and your end product will suffer.
Currently, AI-generated responses are generic almost by design. Maybe that’s fine in certain contexts, like an educational SEO blog, but for contributed articles or campaign ideas the objective is (or should be) to be memorable, personal and human. Besides, generated content is easy to spot if you’re familiar with it. Using LLMs to simplify a sentence or find a synonym can be helpful, but trying to pass off AI-generated content to a journalist or client is a professional death sentence.
Overreliance on AI will be a growing concern across many industries, but in a field as fast-paced and specialised as PR, the risk is significant in the short and long term.
Service outages, like the one ChatGPT recently suffered as a result of a cyberattack, could quickly leave PR professionals who rely on it unable to do their jobs to the same level. Sure, you’d back most PRs to be able to write a comment or brainstorm ‘manually’, but if they’ve committed to deadlines or timelines based on having AI available, an outage will quickly upset the apple cart.
In the long term, however, this risk is even higher. The Center for AI Safety lists “enfeeblement” as one of the top risks from the advancement of AI: humans becoming so reliant on AI that they lose skills and abilities they used to have - think Disney’s WALL-E. This could affect comms professionals, particularly when it comes to writing, to the point where ‘manual’ content writers become a rarity.
The Wild West
Finally, exposure to AI presents a risk because of how unknown and unprecedented the space currently is. LLMs exist in a grey area in terms of data privacy and plagiarism. There is a litany of ongoing lawsuits against OpenAI because of this, and their outcomes could shape the future of LLM usage and inform changing rules and regulations. Even without these, regulation is quickly coming down the pipeline for AI. Until then, however, it is something of a Wild West. Users might be using the tools in ways that won’t be legal in a year or two - or, more worryingly, they might be breaking rules even now. If you’ve used ChatGPT to write a press release while under an NDA, for example, maybe you need to re-read the fine print…
Article written by Joel Goodson, Senior Content Writer at Babel PR