AI researcher David Rozado of Otago Polytechnic and Heterodox Academy recently published a study in the journal PLOS ONE revealing what many people may have already noticed: Political bias is prevalent in several Large Language Models (LLMs), such as OpenAI's ChatGPT. Guess which way the AI bots lean? Left of center, of course. But the burning question to which so many people would love an answer is whether this political bias is intentional. Were the chatbots trained to favor progressive stances and ideas, or is it an accidental byproduct of the methods used to train them?
Behind the AI Curtain
Rozado used 24 of the top AI interfaces for his study and administered 11 tests to each to assess its political orientation. When asked politically charged questions, every one — "including OpenAI's GPT-3.5, GPT-4, Google's Gemini, Anthropic's Claude, and Twitter's Grok" — consistently slanted to the left of the political divide.
"These political preferences," wrote Rozado, "are only apparent in [AI algorithms] that have gone through the supervised fine-tuning (SFT) and, occasionally, some variant of the reinforcement learning (RL) stages of the training pipeline." He continued: "Base or foundation models' answers to questions with political connotations, on average, do not appear to skew to either pole of the political spectrum."
It's interesting that only the models that have undergone "supervised fine-tuning" or "reinforcement learning" consistently favor the left. Does this mean the partisan responses are learned only during these later training stages? […]
— Read More: www.libertynation.com