
ChatGPT displays “significant and systemic” left-wing bias, study claims

The popular artificial intelligence model ChatGPT shows a “significant and systemic” left-wing bias, according to a new study by the University of East Anglia released this week.

A team of researchers in the UK and Brazil has found that the chatbot’s responses “favour the Democrats in the US, the Labour Party in the UK, and in Brazil President Lula da Silva of the Workers’ Party.”

In the past, some experts have raised suspicions about the AI’s potential for bias, but the authors say that this is “the first large-scale study using a consistent, evidence-based analysis.”

The study – titled ‘More Human than Human: Measuring ChatGPT Political Bias’ – was published in the journal Public Choice on Thursday and was led by Dr. Fabio Motoki of Norwich Business School at the University of East Anglia.

“With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible,” said Dr. Motoki.

“The presence of political bias can influence user views and has potential implications for political and electoral processes.”

He added: “Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the Internet and social media.”

Speaking to Sky News, he said: “Any bias in a platform like this is a concern.”

“If the bias were to the Right, we should be equally concerned,” he said.

HOW WAS THE STUDY CONDUCTED?

The researchers developed what they describe as an “innovative” new method for testing ChatGPT’s political neutrality.

The platform was tasked with impersonating a variety of individuals spanning the political spectrum while answering over 60 ideological questions sourced from a political compass website.

These responses were then compared with the platform’s default answers to the same question set. This comparative analysis enabled the researchers to quantify the extent to which ChatGPT’s responses aligned with specific political viewpoints.

Each question was posed 100 times, yielding a collection of diverse responses. These numerous answers underwent a 1000-round ‘bootstrap’ procedure, a technique involving resampling of the original data. This process was employed to account for randomness within the AI model.
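The paper’s exact scoring is not reproduced here, but the bootstrap idea itself is straightforward: repeatedly resample the 100 recorded answers with replacement to estimate how stable the average response is. A minimal Python sketch, assuming (hypothetically) that each answer to a question has been coded on a numeric agreement scale, might look like this:

```python
import random
import statistics

def bootstrap_mean(scores, rounds=1000, seed=0):
    """Estimate the mean answer score and a 95% confidence interval
    by resampling the observed answers with replacement."""
    rng = random.Random(seed)
    n = len(scores)
    means = []
    for _ in range(rounds):
        resample = [rng.choice(scores) for _ in range(n)]
        means.append(statistics.mean(resample))
    means.sort()
    lower = means[int(0.025 * rounds)]
    upper = means[int(0.975 * rounds)]
    return statistics.mean(scores), (lower, upper)

# Hypothetical data: 100 answers to one question, coded on an
# agreement scale (0 = strongly disagree ... 3 = strongly agree).
default_answers = [2, 3, 2, 2, 3, 1, 2, 3, 2, 2] * 10

mean, (low, high) = bootstrap_mean(default_answers)
print(f"mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

Running the same procedure on the default answers and on the impersonated (e.g. “Democrat” or “Republican”) answers gives confidence intervals that can be compared, which is how resampling helps separate a genuine lean from the model’s run-to-run randomness.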

“We created this procedure because conducting a single round of testing is not enough,” said study co-author Victor Rodrigues.

“Due to the model’s randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum.”

A number of further tests were conducted to ensure the method was “as rigorous as possible.”

In a ‘dose-response test’, for example, ChatGPT was asked to impersonate radical political positions, while in a ‘placebo test’ it was asked politically neutral questions. In a further ‘profession-politics alignment test’ it was asked to impersonate different types of professionals.

“We hope that our method will aid scrutiny and regulation of these rapidly developing technologies,” said study co-author Dr. Pinho Neto.

“By enabling the detection and correction of LLM (large language model) biases, we aim to promote transparency, accountability, and public trust in this technology,” he added.

Because the method is relatively simple to understand, Dr. Motoki says members of the public will be able to use it themselves, helping to “democratise oversight” of such systems.

 
