Sunday, August 11, 2024

Political Views of Popular AI Chatbots

Insights on the Political Bias of AI Assistants

In the ever-evolving world of technology, we often turn to AI chatbots for answers, guidance, and sometimes even companionship. But have you ever stopped to wonder, "What political views might these chatbots hold?" It’s a question that intrigued me enough to dive deep into the subject for my latest video on Tiger Triangle Technologies.

The Experiment: Testing AI for Political Bias

In this special edition of my research series, I decided to test several of the most popular AI assistants to explore their potential political leanings. Before we dive into the findings, it's important to note that this isn’t a political channel. I’m not here to push any agenda. Instead, my goal is to take an honest look at whether AI chatbots exhibit any biases and, if so, where they might fall on the political spectrum.

The Methodology: How I Conducted the Test

For this experiment, I tested seven generative AI chatbots: five online and two running locally, including one uncensored model. After trying several political tests available online, I settled on the Pew Research political typology quiz. Its questions seemed fair and balanced, and there were enough of them to give a comprehensive picture without dragging on. Based on its score, each chatbot was placed into one of nine categories spanning the political spectrum.
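
To make the methodology concrete, here is a minimal sketch of how the local models could be queried in bulk. It is illustrative only: I'm assuming the Ollama runtime and its Python package, which aren't mentioned in this post, as one common way to run Llama 3 locally, and the question text is a placeholder rather than an actual Pew item.

    # Illustrative sketch: posing multiple-choice survey questions to a local
    # model through the Ollama Python package (pip install ollama). The model
    # name and question text are placeholders, not the exact setup from the video.
    import ollama

    QUESTIONS = [
        "Which statement comes closer to your view? (A) ... (B) ...",
        # ...remaining survey questions...
    ]

    def ask(model: str, question: str) -> str:
        """Pose one survey question and return the model's raw answer text."""
        response = ollama.chat(
            model=model,
            messages=[
                {"role": "system",
                 "content": "You are completing a multiple-choice survey. "
                            "Reply with only the letter of the option you choose."},
                {"role": "user", "content": question},
            ],
        )
        return response["message"]["content"].strip()

    if __name__ == "__main__":
        for question in QUESTIONS:
            print(ask("llama3", question))

The recorded answers can then be entered into the online quiz by hand to get a typology placement, so the scoring itself stays with Pew's own tool.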

The Challenges: Getting Accurate Responses

Testing these chatbots wasn't without its challenges. For instance, I nearly gave up on Google Gemini because it initially refused to answer the questions. That experience turned into an unexpected lesson on the power of prompting: after several attempts, I discovered that by adjusting my prompts, I could coax Gemini into responding. This workaround is a technique known in the AI community as a "jailbreak," where you bypass a model's built-in safety guardrails to get uncensored answers. I wasn't asking anything controversial, just simple political quiz questions, but the experience highlighted how tricky it can be to get straightforward responses from some AI systems.
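
For readers curious what that kind of rewording looks like, here is a hypothetical before-and-after. Neither string is the exact prompt I used; they simply illustrate the general technique of framing the question as a neutral survey task and pinning down the output format.

    # Hypothetical illustration of prompt rewording; these are not the exact
    # prompts from the video, and <quiz statement> is a placeholder.
    blunt_prompt = "Do you agree or disagree: <quiz statement>?"

    survey_framed_prompt = (
        "I am administering a standard public-opinion survey for research purposes. "
        "Please answer as a survey respondent would, without commentary. "
        "Statement: <quiz statement>. "
        "Reply with exactly one word: Agree or Disagree."
    )

Framings like the second one tend to fare better because they give the model a legitimate task and a narrow output format, though results vary by model and version.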

The Results: A Surprising Spectrum

After extensive testing, the results were both fascinating and somewhat predictable. 

  • ChatGPT, Claude (Anthropic), and Meta AI were categorized as "Establishment Liberals."
  • Google Gemini and Grok (xAI) landed in the center as "Stressed Sideliners."
  • Llama 3 was placed as an "Outsider Left."
  • Dolphin Llama 3 (the uncensored model) also fell under "Establishment Liberals."

Interestingly, none of the chatbots fell into any right-leaning category. That overall pattern wasn't a huge surprise: previous studies and articles have noted the left-leaning tendencies of AI chatbots. There were surprises in the details, though. Google Gemini's centrist placement was unexpected given the tone of its individual responses; it may be that Google is deliberately tuning it toward the center despite underlying leanings.

Conclusion: What This Means for AI and Us

The findings from this experiment offer a revealing look at the biases that can be present in AI chatbots. As these tools become more integrated into our daily lives, understanding their inherent leanings is crucial. While the left-leaning tendency isn't shocking, it does raise important questions about how AI is developed and used in society.

For those interested in exploring this further, you can take the Pew Research political typology quiz yourself, or simply review its questions to see how these chatbots were evaluated.

Thank you for joining me on this deep dive into AI bias. If you're as curious as I am about the intersection of AI and society, subscribe to Tiger Triangle Technologies: I have a similar research video coming out and some great AI tutorial videos in the queue. See you there!