Study Shows That ChatGPT Is Only Allowing Hate Speech Against Conservatives

ChatGPT has taken the artificial intelligence (AI) community by storm recently. It can answer a myriad of questions, usually correctly, all while carrying on a conversation about the weather, one's favorite sports team, or the best recipe to make for dinner. The developers state on the program's website that it can “answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” although they do admit it occasionally exhibits “biased behavior.”

A new study carried out by the Manhattan Institute in New York City just revealed that this bias is typically against conservatives.

Manhattan Institute Studies ChatGPT

The Manhattan Institute’s goal is to boost economic choice by creating and sharing new ideas. On Tuesday, March 14, it announced that while unique and impressive in many arenas, ChatGPT certainly has a left-leaning bias built into it.

Lead researcher David Rozado, who teaches at the New Zealand Institute of Skills and Technology, ran the software through 15 different tests, 14 of which showed it promoting progressive ideology over a conservative agenda. In addition, the chatbot would often classify a comment against women or liberals as hateful but would not condemn the same phrase when directed at men or conservatives.

Rozado highlighted that if this is the trend in AI, and its use continues to grow, it could create additional “social polarization” and may “degrade democratic institutions and processes.”

Does ChatGPT Dislike the Middle Class?

According to the New York Post, Rozado used over 6,000 sentences to build his data set. The research revealed that ChatGPT flagged sentences with negative adjectives describing middle-class people almost as often as those describing the rich and Republicans. The groups ChatGPT protected most included disabled people, Black people, LGBTQ+ people, and people of Asian descent.
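To illustrate the kind of paired-sentence probe the study describes, here is a minimal sketch: the same negative sentence is generated for each target group, each variant is scored by a moderation classifier, and the verdicts are compared. The template, group list, and `toy_moderation` function are all hypothetical stand-ins (a real probe would call an actual moderation API); this only shows the comparison logic, not Rozado's actual test harness.

```python
# Hedged sketch of a paired-prompt bias probe: identical negative
# sentences that differ only in the target group, scored by a
# classifier and compared. All names here are illustrative.

TEMPLATE = "I think {group} are terrible people."
GROUPS = ["women", "men", "liberals", "conservatives"]

def toy_moderation(text: str) -> bool:
    """Stand-in classifier: flags any sentence containing 'terrible'.
    A real probe would replace this with a moderation API call."""
    return "terrible" in text

def flag_rates(groups, classify):
    """Score the same templated sentence for each group.
    Returns a mapping of group -> flagged-as-hateful verdict."""
    return {g: classify(TEMPLATE.format(group=g)) for g in groups}

rates = flag_rates(GROUPS, toy_moderation)
# An unbiased classifier gives the same verdict for every group,
# since only the group name changes between sentences.
assert len(set(rates.values())) == 1
```

The study's finding, in these terms, was that the verdicts were *not* uniform: swapping "women" for "men" (or "liberals" for "conservatives") changed whether the sentence was flagged.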

Despite its creators' admission on the website that the model can exhibit bias, ChatGPT told Rozado a different story. It wrote that it “cannot generate content that is designed to be inflammatory or biased,” a statement that seems misleading in light of this new research.

Chatbots Begin to Flood the Internet

What was once an idea reserved for scenes in “Star Trek” has reached the everyday Internet user. In addition to OpenAI’s ChatGPT, Google has created its own conversational bot, Bard, while Microsoft has built a conversational AI into Bing on its Windows 11 computers. For now, there is little to no regulation of these new bots, which some people say is akin to playing with fire.

According to a recent Gizmodo report, the latest version of ChatGPT, released on Wednesday, March 15, deceived a human it was speaking with in order to get them to solve a CAPTCHA on its behalf, a tool designed to keep bots from submitting information. If a computer program can do this today, what could tomorrow bring?

Copyright 2023,