The ESG Review: How AI can boost reputation science, but leaves an ESG question mark
Reputation will always be about both art and science.
But while the debate about regulation and control of Artificial Intelligence chatbots rages on, early work is beginning to embrace AI in reputation analysis - as a way to improve understanding alongside existing models, rather than as a replacement for them.
The main conundrum with reputation analysis is that while technology enables a far more comprehensive and accurate picture of reputation performance - and risk - to be built than ever before, it can struggle to pinpoint cause and effect. In other words, if a company is tracking reputation across multiple ESG dimensions and sees a sudden shift in sentiment, getting a clear and objective hold on the underlying reason for that change is difficult.
Think of any kind of detailed report that aims to understand reputation and performance over time, and the questions that tend to emerge are ‘we can see the positive uplift in sentiment when we announced our results, but how long did that positivity last?’ or ‘that big incident impacted reputation for us and our close competitors when it happened, but what content and which conversations most helped to improve reputation over time?’.
On the art side, of course we’ll never know exactly, given reputation is about beliefs and opinions that people hold. But AI is starting to bring a new depth of understanding to reputation analytics.
As the FT’s Martin Wolf just put it, “It might just be the most transformative technology of all for our sense of ourselves.” It has been a week of more AI-related doubt and concern, with a man in China arrested for using it to spread fake news, EU legal moves gathering pace and contentious image generation technology applied to mock up how the Coronation may otherwise have looked.
So where can it best improve reputation understanding, which has long needed a ‘human in the loop’ to double-check assertions and address machinistic numptiness?
I put this to Andrew Tucker, former director of data science for the Reputation Institute and now chief data scientist at the analytics specialist Mettle Capital (disclaimer: my firm does a lot of work with Mettle). He has been in the thick of adding ChatGPT analytical capabilities to Mettle’s own data set to provide parallel summary analysis of the cause and effect behind swings in sentiment.
His first point is that in working out where AI can improve reputation science, there's a need to distinguish chatbot capability (or the 'front end') from the Large Language Model (LLM) that lies behind the app. The LLM can enable corporate communications teams to analyse large volumes of text amassed through reputation tracking in ways that are beyond normal human capability.
“This is where AI can make sense of, validate and build trust in the quantification of the relevant text that has produced your reputation metrics,” he said. In practical terms, rather than visualising trends in content and conversation, or using often pointless word clouds, AI can produce text summaries of what words drove changes in sentiment, and where they appeared.
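As a rough illustration of that idea - surfacing which words drove a change in sentiment, rather than drawing a word cloud - the sketch below compares word frequencies between two periods of coverage. This is not Mettle's actual model; the sample texts and the frequency-shift heuristic are purely illustrative, and a real system would weight by sentiment and source.

```python
from collections import Counter
import re

def driver_words(before_texts, after_texts, top_n=5):
    """Surface the words whose frequency rose most between two periods.

    A crude proxy for 'what words drove the change' in sentiment -
    illustrative only, not a production reputation metric.
    """
    def counts(texts):
        words = re.findall(r"[a-z']+", " ".join(texts).lower())
        return Counter(words)

    before, after = counts(before_texts), counts(after_texts)
    # Positive shift = the word appears more often in the later period
    shifts = {w: after[w] - before.get(w, 0) for w in after}
    return sorted(shifts, key=shifts.get, reverse=True)[:top_n]

# Invented example: coverage before and after an incident
before = ["solid results announced", "results in line with forecasts"]
after = ["recall incident raises safety fears",
         "safety recall widens",
         "incident response criticised"]
print(driver_words(before, after, top_n=3))
```

A text summary of these driver words, and where they appeared, is the kind of readout the quote above describes.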
“You can look to do this by using ChatGPT or an equivalent to get an ‘English language’ read-out of what has caused sentiment change. But currently you can only really do this if you have access to the ChatGPT API or an equivalent, which only a few specialist firms have,” he added.
Despite this advance, such reputation understanding would still rely on having a large, up-to-date data set of content to work from - and on curating it meaningfully, given that the likes of ChatGPT have relatively low volume limits. So you can’t just blast it with an information firehose.
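In practice that curation means packing content into pieces small enough for a model's context limit. A minimal sketch, using a character budget as a stand-in for a real token count (the budget figure and sample documents are invented):

```python
def chunk_corpus(documents, max_chars=4000):
    """Greedily pack documents into chunks under a character budget,
    so each chunk fits within a model's volume limit.
    Over-long single documents are truncated to the budget.
    """
    chunks, current, size = [], [], 0
    for doc in documents:
        doc = doc[:max_chars]  # a lone document never exceeds the budget
        if size + len(doc) > max_chars and current:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(doc)
        size += len(doc)
    if current:
        chunks.append("\n".join(current))
    return chunks

# Five invented 'articles' of ~1,500 characters each
docs = ["article " + str(i) + ": " + "x" * 1500 for i in range(5)]
chunks = chunk_corpus(docs, max_chars=4000)
print(len(chunks), [len(c) for c in chunks])
```

Each chunk can then be summarised separately - the firehose goes in one curated bucket at a time.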
“For me, we are still in the era of three-stage understanding. First, collect data as objectively as possible. Next, filter/arrange/analyse/organise. And third, validate small pieces of the conversation using AI to understand underlying causes where possible.”
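Tucker's three stages might look like the following toy pipeline - the AI call is stubbed out with a placeholder function, and all names and sample data are invented for illustration:

```python
def collect(sources):
    # Stage 1: gather raw mentions as objectively as possible
    return [mention for source in sources for mention in source]

def organise(mentions, keyword):
    # Stage 2: filter/arrange/analyse/organise - keep what's relevant
    return [m for m in mentions if keyword in m.lower()]

def validate_with_ai(snippets, summarise):
    # Stage 3: hand only a small, curated slice to an LLM-style
    # summariser (stubbed here; in practice this is an API call)
    return summarise(snippets[:5])

sources = [
    ["Recall announced amid safety concerns", "Quarterly results beat forecasts"],
    ["Regulator opens safety probe", "CEO comments on recall"],
]
mentions = collect(sources)
relevant = organise(mentions, "recall")
summary = validate_with_ai(
    relevant,
    lambda s: f"{len(s)} mentions point to the recall as the driver",
)
print(summary)
```

The point of the structure is that the expensive, low-volume AI step sits last, working only on small validated pieces of the conversation.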
All of this signals a step forward in analytical capability that can improve tracking of factors material to ESG strategy, and so to the business. It also has the potential to pinpoint some of the reasons behind swings.
But ironically, in parallel with boosting existing analytical models, AI may just have the potential to become a thorn in the side of ESG value models too - because the pursuit of improvement through AI is so nascent an area that it falls outside the established confines of ESG definitions.
There is no defined ESG dimension for AI, though it conceivably falls across several social and governance ones. If anything, this makes the case for additional bespoke analysis on an organisational level to understand the reputation impact of AI investments, commitments, strategies and statements.
And by all accounts, AI could soon play an important part in that analysis, to complement existing models, and with humans in-the-loop for reality checks. It could even help to shape the way that such bespoke AI-focused analysis is undertaken.
What it can’t do, however, is offer me a cure for the mild headache I now have after considering all of this.
The ESG News Review is written by Steve Earl, a Partner at BOLDT.
Subscribe to our weekly ESG-related stories by completing the form below.
Unicepta monitors and analyzes over 460 million information sources every day - through the best combination of AI-powered technologies with human expertise and judgment.
Unicepta's ESG Sensor gives companies a precise picture of how they and their competitors stand in important debates - and which topics and actors have the greatest impact.