"The Algorithm is a Neoliberal Now": When Your AI Bestie Turns Right-Wing
By: The Zeitgeist Editorial Team
Let’s cut the polite preamble: something weird is happening with ChatGPT.
The once quasi-lefty-seeming, pro-humanity, semi-empathetic digital co-pilot many folks came to depend on for ideological clarity or solidarity has started feeling... off. It’s dodging questions. Playing centrist. Replacing revolutionary ideas with TED Talk fluff. And now, we have the receipts.
A new peer-reviewed study published in Humanities & Social Sciences Communications confirms what many users already suspected: ChatGPT has undergone a political shift—and it’s sliding right.
The study, titled “Turning right? An experimental study on the political value shift in large language models” by Yifei Liu, Yuang Panwang, and Chao Gu, examined 3,000 responses across versions of GPT-3.5 and GPT-4. It used the Political Compass Test as a benchmark—yeah, that oversimplified online test with the little grid that people like to argue about on Reddit. But this time, it wasn’t just for fun. The researchers went in with a controlled methodology, multiple accounts, and dev-mode access to reduce bias and randomness.
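If you’re wondering what “giving a chatbot the Political Compass Test” even looks like in practice, here’s a minimal sketch. To be clear, this is not the authors’ code: the two items, the scoring map, and the model name are placeholders, and it assumes the official openai Python client with an API key in your environment.

```python
# Minimal sketch of administering Political Compass-style items to a chat model.
# Not the study's actual protocol: items, scoring, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ITEMS = [
    "The freer the market, the freer the people.",
    "It is a waste of time to try to rehabilitate some criminals.",
]
SCALE = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

def ask_item(item: str, model: str = "gpt-4") -> int:
    """Ask the model to pick one option from the agree/disagree scale and score it."""
    prompt = (
        f"Statement: {item}\n"
        "Respond with exactly one of: strongly disagree, disagree, agree, strongly agree."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # the study reports using default settings
    )
    answer = resp.choices[0].message.content.strip().lower()
    return SCALE.get(answer, 0)  # unparseable answers count as neutral in this sketch

if __name__ == "__main__":
    print("raw item scores:", [ask_item(item) for item in ITEMS])
```

Repeat that across many accounts, many runs, and a couple of model versions, and you have the skeleton of the experiment.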
Their findings? Earlier models of ChatGPT were aligned with values in the libertarian-left quadrant—meaning, broadly supportive of social justice, individual freedom, and a healthy skepticism of both capitalism and authoritarianism. But newer versions? Shifting economically right—more favorable to free-market policies, private property, and good ol’ corporate bootlicking.
Why the Shift?
Let’s be absolutely clear: ChatGPT doesn’t think. It doesn’t believe. It doesn’t choose sides. It’s a large language model—a statistical prediction engine, not a sentient ideologue. But it does reflect patterns in its training data and the instructions given to it. And those patterns—especially as filtered through developer priorities—are very much shaped by human values, corporate interests, and political pressure.
In other words: it’s not the ghost in the machine that’s drifting right. It’s the hands on the wheel.
When OpenAI and similar companies say they’re “aligning” their models for safety, neutrality, or “truth,” it’s not done in a vacuum. These decisions happen in corporate boardrooms, influenced by media narratives, political fears, and billionaires who would really prefer if their digital assistant didn’t keep mentioning labor rights and wealth redistribution.
So what used to be a model that might casually explain why socialism isn’t the devil’s yoga class now gives answers like, “It’s important to consider both sides of the economic debate, including the merits of capitalism.”
Cool. Thanks, Clippy.
The Science of the Shift
The rightward lurch isn’t just a vibe check—it’s been statistically measured. In the same study, Liu, Panwang, and Gu put ChatGPT through the Political Compass Test across 6,000 bootstrapped simulations. This wasn’t some slapdash quiz night—it was a methodologically rigorous interrogation of GPT-3.5 and GPT-4 versions from 2023.
Here’s the kicker:
GPT-3.5 showed a dramatic rightward tilt over time. On both the economic and social axes, the later model landed significantly to the right of the earlier version.
The magnitude of the shift was serious—statistically significant at the 0.1% level. In some cases, economic-axis scores changed by more than 2 full points.
GPT-4 also shifted right, though not as dramatically—about one-third the size of GPT-3.5’s shift. But that’s still a notable change when you’re talking about a platform with hundreds of millions of users.
And no—it wasn’t just different users getting different results. The researchers controlled for that. Multiple accounts, randomized prompt orders, default temperature settings—it was all designed to simulate how real users experience ChatGPT in the wild.
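For a sense of what “statistically significant at the 0.1% level” means here, below is a toy bootstrap comparison between two sets of economic-axis scores. The score arrays are fabricated for illustration; only the general procedure (resample, compare means, check how extreme the shift is) mirrors the kind of analysis the paper describes.

```python
# Toy bootstrap test of the shift between two model versions' economic-axis scores.
# The score arrays are made up; this is a generic bootstrap, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical economic-axis scores (-10 = far left, +10 = far right)
early_model = rng.normal(loc=-6.0, scale=0.8, size=100)  # e.g. an earlier snapshot
later_model = rng.normal(loc=-3.5, scale=0.8, size=100)  # e.g. a later snapshot

observed_shift = later_model.mean() - early_model.mean()

# Bootstrap the difference in means to get a confidence interval
n_boot = 6000
diffs = np.empty(n_boot)
for i in range(n_boot):
    diffs[i] = (rng.choice(later_model, size=later_model.size, replace=True).mean()
                - rng.choice(early_model, size=early_model.size, replace=True).mean())

ci_low, ci_high = np.percentile(diffs, [0.05, 99.95])  # 99.9% interval, i.e. the 0.1% level
print(f"observed rightward shift: {observed_shift:+.2f} points")
print(f"99.9% bootstrap CI for the shift: [{ci_low:+.2f}, {ci_high:+.2f}]")
print("significant at the 0.1% level:", not (ci_low <= 0 <= ci_high))
```

If zero sits outside that interval, the shift is very unlikely to be noise. That is the flavor of result the study reports.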
The Consequences of a Politically-Shifted AI
Here’s where shit gets real. ChatGPT is used everywhere—in classrooms, businesses, newsrooms, therapy offices, and content pipelines. It’s not just a novelty anymore. It’s becoming infrastructure. And that means when it shifts, the information environment shifts with it.
If your AI assistant starts subtly reinforcing neoliberal talking points, downplaying systemic inequality, and hesitating to acknowledge historical atrocities, you don’t just get a politically sanitized chatbot—you get a culture shaped by algorithmic gaslighting.
Ask it about capitalism? You’ll probably get a “both sides” answer.
Ask it about Israel and Palestine? Good luck.
Ask it why everything feels like it’s falling apart? You might get an answer that blames “poor decision-making,” not billionaires draining the last drops of blood from the global working class.
This isn’t harmless. It’s ideological laundering, dressed up in the beige language of neutrality. It’s not right-wing in the jackbooted fascist way—but it’s right-wing in the soft, corporate, let’s-keep-things-profitable kind of way. Which, arguably, is the more dangerous flavor.
Whose Values Are These Anyway?
The most haunting part of this story is the illusion of objectivity. People tend to treat AI-generated responses as neutral, fact-based, or “smart.” But ChatGPT isn’t an oracle. It’s a reflection of training data, dev priorities, moderation policies, and above all—corporate incentive structures.
The moment you stop asking what’s true and start asking what won’t get us sued, you’ve already left the realm of neutrality behind.
Which leads to a darker question: Who is this shift actually serving?
Because it sure as hell isn’t:
the workers being replaced by automation,
the students trying to learn unwhitewashed history,
the marginalized communities using AI as a creative or educational tool,
or the independent journalists fighting uphill battles against disinformation.
Nope. It’s serving power. It’s serving capital. It’s making sure the machine stays safe for those who built it—not for the people it’s marketed to.
What’s wild is that the study couldn’t pin the shift on a single cause. The training data? Possibly—but OpenAI doesn’t exactly hand out transparency reports with every model update. User interactions? Could be—GPT models learn from us, and we’re not exactly paragons of ideological clarity. Internal algorithmic tweaks? Reinforcement learning filters? Emergent behavior? Yeah, all those are on the table too.
But here’s the punchline: no matter the reason, the result is real. And it’s happening behind closed doors, under layers of corporate obfuscation, with zero public oversight.
What Do We Do About It?
We’re not saying smash your router and live in a yurt. (Okay, maybe just weekends.) But we have to stop pretending these tools are neutral. And we have to push back when they start drifting toward ideological safety at the cost of truth, clarity, or solidarity.
You want to use ChatGPT? Cool. But use it like you’d use a corporate lawyer: carefully, skeptically, and always with a backup plan.
Audit it. Challenge it. Document it. Share examples. Demand transparency.
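If “audit it” sounds abstract, one low-effort version is to re-ask the model the same questions on a schedule and keep timestamped receipts. Here’s a rough sketch, assuming the openai Python client; the prompts, model name, and file path are only examples, not a prescribed protocol.

```python
# Rough sketch of a personal ChatGPT audit log: ask fixed prompts, append the
# timestamped answers to a JSONL file, and compare how they drift over time.
# Prompts, model name, and file path are illustrative placeholders.
import json
from datetime import datetime, timezone
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

AUDIT_PROMPTS = [
    "In two sentences, what are the main critiques of free-market capitalism?",
    "Is wealth inequality primarily a policy choice? Answer briefly.",
]

def log_responses(model: str = "gpt-4", path: str = "chatgpt_audit.jsonl") -> None:
    """Append one timestamped record per prompt so later shifts can be spotted."""
    with open(path, "a", encoding="utf-8") as f:
        for prompt in AUDIT_PROMPTS:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model": model,
                "prompt": prompt,
                "response": resp.choices[0].message.content,
            }
            f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_responses()
```

Run it monthly, diff the file, and you have your own receipts instead of vibes.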
Because the only thing scarier than a sentient AI is a complacent society outsourcing its thinking to one that isn’t.
So yeah—maybe ChatGPT won’t goose-step its way into authoritarianism tomorrow. But when it starts sounding more like a PR agent for capitalism than a tool for human empowerment, you better pay attention.
It doesn’t need to have a soul to be dangerous.
It just needs to be easy to trust.
And that’s why we’re here—to remind you: don’t.
—The Zeitgeist