
I’m in a state of cognitive dissonance around AI - who else is?

As someone in tech PR, I’ve worked with AI-led companies for a long time now, so I’m more than aware of the productivity arguments around AI.

And as part of the technology and creative communities, it’s important that my tech communications agency uses and experiments with AI tools, whilst being sensitive to the implications for creators.

This often puts me in a state of cognitive dissonance. Essentially, I want to pursue an AI strategy whilst knowing it’s fraught with challenges.

I am not alone

The CIPR’s latest report into AI in PR suggests I’m not alone: “[AI tools] are not yet being widely adopted, despite forward momentum in their uptake. Indeed, survey after survey indicates a low take up with more rapid progress being stalled by job anxiety and fears about new technology. Yet it is inevitable: Public relations will have to accelerate adoption of AI and automation tools, especially those which offer a productivity boost.”

The industry’s reticence and the inevitability of adoption seem to be at odds. But the report itself offers sound advice: agencies should seriously consider the risks and ethical implications before investing in AI, and upskill in the right areas.

Risks of AI

Assessing the risks around AI is something I’ve been doing for a while. As a small business owner, I’m looking to international regulations, guidelines and codes of conduct to help me out. And frankly, it’s all in flux. Don’t get me wrong, updates are on their way in some form, but the lack of clarity requires careful consideration.

I’m paying attention to any regulatory scheme coming into play in the UK, and how it diverges from the pending EU AI Act. Because if the UK says yes but the EU says no, that’s a problem: work that’s permissible for UK clients may not be for European ones.

GenAI and copyright

Let’s take the issue of intellectual property and the role of generative AI (GenAI) as an example. This is a particularly important example for agencies - as advocates for creative communities, as creators of client content and as guardians of clients’ reputations.

The EU’s AI Act is the bloc’s attempt at regulating AI. But it doesn’t include any intellectual property provisions, with the exception of a requirement that GenAI developers declare summaries of the copyrighted material used to train their systems. Yet there are no definitions of what these summaries should include, which isn’t helpful when trying to assess IP risk.

In the UK, when our Intellectual Property Office (IPO) and its new AI working group proposed changes to copyright practices to accommodate GenAI’s data mining, creators heavily opposed them. That means uninhibited data mining for commercial use is still blocked in the UK. And any future AI copyright codes of conduct (hopefully coming this autumn) may not give AI systems unlimited access to material without copyright consequence.

And let’s not forget we have a pending AI Safety Summit to define the UK’s regulatory policy, as well as to encourage international cooperation around AI. But nothing is set in stone.

Before AI, agencies always had to ensure they were not exposing themselves to legal challenge, particularly when creating new IP for clients - rigorous checks are needed before things go live. That’s nothing new. But the task just got harder. Without legal clarity, time will be spent on legal verification of third-party AI tools, and on ensuring an AI tool hasn’t overstepped copyright or misrepresented something. We’d even have to audit our own prompts to ensure we didn’t lead an AI tool in a questionable direction - ethically and legally.

Tread carefully

That means communicating transparently about how content was created, fact-checking AI outputs and validating prompts, spending time negotiating indemnification with third-party AI providers and with clients, and checking and redoing insurance policies. All of it requires an AI strategy grounded in specific legal and ethical certainties.

I’m a pragmatic person. I run a small business, like the majority of other agency owners, and can’t afford to take on too many risks without clear guidance. So yes, we will experiment internally, and be incredibly careful about where we invest. But, like the rest of the industry, we will approach adoption with a keen eye on regulatory developments and on protections for the creative communities we care so much about - and are part of ourselves.


Article written by Grace Keeling, co-founder and comms growth partner at B2B agency Made By Giants
