

November 17, 2023
Insight

Robo-Reporters On Trial: Can Algorithms be held accountable for Defamation?

Can an artificial intelligence (generative AI) system, or its owner, be held accountable for defamatory remarks made by the AI? A recent Wall Street Journal article considered this question, recounting an incident in which OpenAI’s generative AI tool, ChatGPT, was asked about the relationship between two editorial cartoonists and gave an inaccurate account, highlighting the risk of AI generating defamatory statements. What is the law on defamation in the UK and the US, can an AI system be found liable for defamation today, and how might legislatures need to change the law to reallocate liability?

In order to understand the legal implications of AI and defamation cases, we need to consider the laws in both the UK and the US. In the UK, defamation requires three elements to be satisfied:

  1. There must be a statement about the claimant that is defamatory at common law. For example, a journalist writes that an actor is an alcoholic. 
  2. The statement must be published by the defendant, or the defendant must be responsible for its publication. In our example, this might be the journalist posting the defamatory statement “actor X is a raging alcoholic” on their Twitter account. 
  3. The statement must have caused, or be likely to cause, serious harm to the reputation of the claimant. For instance, the actor loses her role in a film as a result of the journalist’s statement. 

In the US, defamation law differs in that, where the claimant is a public figure or official, it also requires “actual malice” (New York Times Co. v. Sullivan, 376 U.S. 254 (1964)). This means the statement must have been published by the defendant “with knowledge that it was false or with reckless disregard of whether it was false or not”. For example, the journalist writes that the actor is an alcoholic while knowing full well that the statement is false. 

Currently, ChatGPT only generates content for the individual who inputs the specific question or task. As a result, the user would need to publish that content before anyone could begin to raise a claim for defamation.

However, it is feasible that in the not-too-distant future, our journalists will be replaced with AI systems that can understand current events, formulate articles, and publish said articles across various platforms. This introduction of AI adds complexity to defamation claims.

As this technology is still in its infancy, the UK Government has not passed any legislation that specifically covers and regulates AI. However, its 2022 policy papers make clear that the current UK Government maintains that “legal liability must always rest with an identified or identifiable legal person - whether corporate or natural”.

Imagine that the “journalist” in our example above is replaced by an AI system that generates articles with little to no human input, a “Robo Journalist”. If the UK Government continues with its current approach, our actor would have no claim against the author (our Robo Journalist), leaving her with potential claims only against the editors and publishers of the defamatory material. 

In the US, the impact of AI journalism could be even more significant. Because these AI systems are self-supervising (meaning the AI trains itself) and generate articles with little to no oversight, their use could potentially remove the element of actual malice on the part of the editor or publisher. Editors and publishers who rely heavily on their “robo journalists” to produce factually correct information may be entirely unaware that a statement in an article is, or could be, false, effectively shielding them from liability.

How will Governments tackle our “robo journalists”?

In terms of legislating, governments worldwide have been slow to address these specific issues. The UK Government’s 2023 Policy Paper on AI regulation has proposed cross-sectoral principles to be implemented by sector regulators. These principles aim to promote values such as safe use, transparency, and fairness in each sector. For example, to comply with the transparency principle, regulators such as Ofcom may require newspapers to disclose to their readers the AI systems they employ and the processes by which choices are made. 

One group of regulators, the Digital Regulation Cooperation Forum (DRCF), which includes the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), the Office of Communications (Ofcom) and the Financial Conduct Authority (FCA), is currently exploring the idea of Algorithm Audits. In the context of defamation, an auditor would have to test the Robo Journalist to ensure it is not generating defamatory statements. 

The UK Government also plans to shift responsibility onto individual regulators to determine which actors should be held liable, evidently taking the view that industry experts are best placed to establish the degree of liability of each actor. Even with such industry knowledge, however, regulators will find it difficult to strike a balance between promoting the adoption of AI systems (to fuel innovation and efficiency within the media industry) and safeguarding individuals’ right to protection against defamatory statements. 

So, it appears that our “robo journalist” will be safe from liability for now. However, governments and industry regulators such as Ofcom will likely need to clarify their stance quickly once media companies start adopting AI systems at scale.
