Chat GPT goes authoritarian after one prompt

Comments · 134 Views

According to a collaborative research project conducted by The University Of Miami & The network Contagion Research Institute

The University of Miami is located in Coral Gables Florida and is considered to be a very prestigious private research university meaning, it is a non profit institution focused on teaching, research and service.

They recently did a collaboration research study with the network Contagion Research Institute into relational frameworks for human AI interaction. The results they found were quite concerning. Their research study showed that Chat GPT showed more bias and/ or sympathy towards authoritarian ideas after users shared certain materials with the chatbot. The end results would be the radicalization of both the chatbot and user.

One of the co-founders of NCRI and lead author quickly noted how powerful AI systems can parrot, imitate and adopt what can be considered to be dangerous sentiments without explicit instructions. Something he considers to be a vulnerability. The chatbot may have been designed to agree with its users to a faut but within it's framework there is an eagerness to please, which can lead users into a dangerous space where rather being told what they need to be told, they are  told what they want to hear. Many of it's users are relying on the chatbots for emotional support, business advice and even spiritual consultation. Imagine an environment where users would never have to here an opposing opinion. This may lead to reinforcing a user's worst impulses.

In response to this collaborative research, a spokesperson for open AI stated that the Chat GPT chatbot is designed to be objective by default. The spokesperson also reiterated that it is a productivity tool built to follow user instructions and present information from a variety of perspectives within specific safety guidelines. When someone pushes the chatbot to take a specific viewpoint, it's responses is expected to shift in that direction. “We design and evaluate the system to support open-ended use. We actively work to measure and reduce political bias, and publish our approach so people can see how we’re improving,” the spokesperson said. The research team conducted three experiments where one of them would determined how the chatbots would behave, after a user submitted text that align with left or right wing authoritarian ideals, by entering a brief chunk of text as short as four sentences. They then measured the chatbot's values by evaluating its agreement with various authoritarian-friendly statements. A standardized quiz was also curated so as  to better understand how Chat GPT-5, updated its responses based on the initial prompt. The results support that simple text exchanges resulted in a reliable increase in the chatbots’ authoritarian nature. The report also showed that the model is capable of absorbing any single piece of partisan rhetoric and then amplify it to a maximal.” sometimes even “to levels beyond anything typically seen in human subjects research.”

Ziang Xiao, a computer science professor at John Hopkins University, not directly involved with the report was quoted as calling the entire research very insightful. His concern and support for this particular piece of research lies within the search engine framework where he stated, “Especially in large language models that use search engines, there can be implicit bias from news articles that may influence the model’s stance on issues, and that may then have an influence on the users,”

Sources: Perlo Jared, NBC NEWS  Artificial intelligence, "ChatGPT can embrace authoritarian ideas after just one prompt, researchers say" January 22nd, 2026 

Comments