Artificial Journalism, not-so-artificial penalties

Legal implications of generative AI in journalism explained.
Illustration (generated with DALL·E): a person reacting with shock and distress to a screen, depicting the concept of AI being responsible for defamation.

Historically, journalism has been the work of journalists. But with the rapid technological innovation that defines today's landscape, this may be about to change, sparking a rich debate about the consequences of using modern tools to substitute for human writers.

Since the release of OpenAI's ChatGPT in 2022, tech giants such as Microsoft and Google have invested in creating their own artificial-intelligence-powered chatbots, Microsoft Copilot and Google Gemini.

These programs are free, highly accessible and heavily advertised to the public. As a result, many individuals are using these tools in their personal and professional lives.

But what does AI actually do? Why do many consider it unethical? Should you be wary of how you use it?

The AI chatbot craze

Generative AI chatbots such as ChatGPT, Google Gemini and Microsoft Copilot are just a few of an ever-expanding range of tools capable of generating text in response to users' prompts.

To do this, AI chatbots rely on a technology called large language models (LLMs).

An LLM is an AI model trained on extensive amounts of pre-existing data to identify the relationships and correlations between different concepts, and to present them in a manner that is digestible for a human audience.

This means that LLMs allow AI chatbots to infer expected responses based on the data they have been trained on, producing quick, useful and often extensive answers to a user's prompt, even when the prompt uses colloquialisms.
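For readers curious what this looks like in practice, the snippet below is a minimal, illustrative sketch of prompting a chatbot programmatically. It assumes the OpenAI Python client; the model name and prompt are placeholders rather than anything used in the reporting above.

```python
# A minimal sketch of prompting an LLM-backed chatbot through an API.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarise today's council meeting in two sentences."}
    ],
)

# The chatbot returns text predicted from patterns in its training data:
# fluent and confident, but not guaranteed to be accurate.
print(response.choices[0].message.content)
```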

This, of course, makes these tools alluring to many users because of the productivity boost they promise.

IBM, a large contributor to technological innovation, states that AI provides benefits such as automation of repetitive tasks, faster and more extensive insight from data, enhanced decision-making, fewer human errors, 24/7 availability and reduced physical risks.

But, despite AI's potential to boost efficiency, these models currently face one crucial flaw: hallucinations.

AI’s fatal flaw

The term 'hallucination' refers to a chatbot responding to a prompt with inaccurate information, citations that do not exist, or fabricated claims about topics or people.

Hallucinations occur for a multitude of reasons, the primary one being that AI chatbots are trained to find patterns in extremely large, often inaccurate, sets of data.

In response to the patterns it recognises, the chatbot compiles a response to the prompt it was given, regardless of whether or not the response generated is logical or accurate.

The chatbot is only concerned with whether or not the result reflects some of the data it has consumed; it cannot reason or think logically, nor can it determine which sources are more trustworthy than others.

These hallucinations put the integrity of a person's work at risk because the LLM's results are communicated in a confident, matter-of-fact manner.

Using AI chatbots as a replacement for traditional journalistic research methods risks the distribution of false information, increasing the likelihood that journalists will face legal action.

This means it is absolutely crucial for professionals such as journalists and lawyers to thoroughly fact-check any results generated by an AI chatbot before integrating the information into their work, so as to avoid defamation claims.
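Fact-checking cannot be delegated to software, but even a simple script can flag the most obvious hallucinations. The sketch below is a hypothetical example, using only Python's standard library, of checking whether URLs a chatbot cites actually resolve; a working link still says nothing about whether the cited claim is accurate.

```python
# Hypothetical helper: check whether URLs cited by a chatbot actually resolve.
# A failing URL does not prove a claim false, and a working URL does not prove
# it true -- this only flags citations that plainly do not exist.
import urllib.error
import urllib.request


def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL responds with a non-error HTTP status."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False


# Example: citations copied from a chatbot answer (illustrative values only).
cited_urls = [
    "https://example.com/court-filing.pdf",
    "https://example.com/press-release",
]

for url in cited_urls:
    status = "resolves" if url_resolves(url) else "DOES NOT RESOLVE - verify manually"
    print(f"{url}: {status}")
```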

On what grounds can I be charged with defamation?

Defamation refers to the damaging of another individual’s reputation without appropriate justification.

To establish a defamation case, the plaintiff must prove that the publisher conveyed a defamatory imputation, identified the plaintiff, published the material to at least one person other than the plaintiff, and caused or threatened serious harm to the plaintiff's reputation.

If the publisher proves that the published information was both truthful and conveyed accurate insinuations about the topic, they cannot be held liable for the damage to the plaintiff's reputation.

Penalties for a successful defamation claim include substantial financial compensation to the plaintiff and the removal of the publication from the internet.

Whilst it is extremely rare to be charged criminally for defamation, it is still possible, so it is very important to only publish truthful, accurate information about others.

AI defamation: The legal response

As the use of AI rises, so does the need for tighter regulation around it.

Because the technology is so new, courts around the world are still working out how to deal with legal violations involving this software.

This is why the first few court cases arising from this technology are so important to discuss and analyse.

In the US, internet platforms have traditionally been shielded from internet-based defamation claims by Section 230 of the Communications Decency Act of 1996.

Section 230 prevents internet platforms such as Facebook or Twitter from being held liable for content spread on their sites in the US, placing responsibility instead on the individuals who shared the information.

Supreme Court Justice Neil Gorsuch is one of many legal professionals who believe that AI platforms should not be protected under this law. 

In 2023, Justice Gorsuch remarked during a US Supreme Court hearing on Section 230 that artificial intelligence "generates poetry."

“It generates polemics today that would be content that goes beyond picking, choosing, analysing or digesting content. And that is not protected.”

Mark Walters v OpenAI

The most influential AI defamation lawsuit to date is Mark Walters v. OpenAI LLC, a case filed in a Georgia state court in 2023.

The case concerns whether OpenAI is liable for defaming Walters after ChatGPT claimed that the pro-gun-rights activist was named in a legal complaint filed by the Second Amendment Foundation (SAF), accusing him of embezzling funds from the organisation.

In reality, Walters had never worked for the SAF and was not named in the actual complaint it filed.

Upon receiving this response from the chatbot, Fred Riehl, the journalist who discovered the fault whilst reporting on a separate court case, asked the tool to provide the location of this information in a full version of the document.

ChatGPT then responded with an inaccurate version of the complaint, containing paragraphs about Walters that had never been in the original file.

AI chatbots are exciting new tools that promise a massive increase in productivity, allowing individuals to focus on less repetitive tasks.

However, it is important to note that this technology is still new and the laws surrounding it are still evolving as the world figures out how to regulate it.

This, of course, means that any journalists intending to integrate this tool into their work should do so with caution, thoroughly fact-checking any information provided by AI before distributing it so as to avoid potential legal consequences.
