With 170 written submissions and evidence from 73 witnesses, who were asked more than 4,000 questions, the scale of the Digital, Culture, Media and Sport (DCMS) select committee's investigation into Facebook is unprecedented – and it's pretty damning too.
The 108-page report described Facebook as a “digital gangster” and, at the end of the 18-month investigation, the company has come under criticism for violating privacy laws and failing to prevent the spread of disinformation. This follows warnings to Instagram influencers earlier this year from the Competition and Markets Authority (CMA) that their paid-for posts could be misleading users and breaking consumer law – both of which have big implications for brands and businesses.
The Government’s proposed legislation recommends a stricter compulsory code of ethics and the appointment of an independent regulator to ensure users’ rights are protected. Damian Collins MP, chair of the DCMS Committee, said: “The age of inadequate self-regulation must come to an end.”
End of an era?
But can there ever be regulation comprehensive enough?
The internet is so vast and so difficult to police that the government faces substantial obstacles in its efforts to regulate it. One of those obstacles is Facebook itself, which has so far been accused of failing to comply with the inquiry. However, there are encouraging signs that the platform is listening to criticisms of its content approval processes.
In a recent statement, a spokesperson said: “We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for seven years.” Facebook's UK public policy manager Karim Palant followed this up by announcing that the company had tripled the size of the team working on detecting “bad content”, as well as investing in machine learning and artificial intelligence to detect abuse.
Although a step in the right direction, these changes don't go far enough for some. What about non-political content and advertising? What about the other platforms in Facebook’s ecosystem? What about the agencies and influencers who use them for business?
If there is to be the kind of significant change the industry needs – the kind that could bring about the end of the age of inadequate self-regulation – then all businesses working with social media must assume their individual responsibilities to safeguard users too.
The impact on branded content
Much like the way Facebook is now labelling political ads, Instagram influencers are also coming under intense scrutiny from the CMA to label any sponsored content accordingly. Since Takumi launched in 2015, transparency and regulation have been priorities for us, and we are trying to shape a better-monitored industry. We have worked alongside regulators in our markets to clarify rules and guidance for all involved, and we recommend PR agencies follow their reports too.
Similarly to Facebook, we are also using technology to help purge Instagram marketing campaigns of fake engagement, ‘bot’ audiences and insufficiently labelled paid-for content – to protect advertisers and consumers. Our technology notifies us if ad-labelling is removed from influencer content, which would make posts non-compliant with regulation in the legal territory in question. We also have a strict policy against paying out fees to influencers unless posts are legally compliant. But this is our specialism – brand and agency side marketers have a much harder job on their hands…
Direct-to-brand influencer marketing deals are harder to regulate, but should be held to the same regulatory standards or brands will also start paying the price – literally, with fines from the ASA. In the vast majority of cases, agency and brand-side marketers are great practitioners and are up to date on regulation – but enforcing this on content is no easy job if you’re part of a busy PR team (which is where we increasingly see budgets lie).
Up-and-coming brands that go direct to influencers or through small agencies often make up the minority that skirt regulation on labelling paid-for content on Instagram. In doing so, they not only risk conning their target audiences, but also themselves, by increasing their chances of using influencers who have disingenuously accrued followers through bots and fake likes.
Public relations is an industry built on strong relationships. PR agencies need and deserve to take the lead on this important issue in order to restore consumer trust in the brands and individuals they represent. This means taking a tough stance against data privacy violations, harmful content and ‘fake news’. Agencies must equip themselves with the right knowledge and skills, whether through third-party support or training, to uphold the highest ethical standards of transparency and authenticity – vetting influencer and third-party content effectively.