Why AI Polls Still Can’t Match Real Human Opinions
Polling has always been a tricky game, but now some companies are trying to cut costs by using artificial intelligence instead of asking real people. The idea sounds promising—why pay for surveys when AI can do the work? But recent research suggests that AI isn’t ready to replace traditional polling methods just yet.
A white paper from the survey platform Verasight looked into how well AI models can mimic human responses. Data journalist G. Elliott Morris compared answers from 1,500 real people with those generated by six different large language models, including several versions of GPT-4. Morris asked the AI to respond as different types of voters, like a 61-year-old woman from Florida who considers herself a moderate and earns between $50,000 and $75,000 a year. Then, he asked about political opinions, such as approval ratings for Donald Trump.
AI’s Struggles with Reflecting Real Opinions
The results weren’t very encouraging. The worst AI model missed the mark by 23 points compared to what real respondents said. The best, GPT-4o-mini, was only four points off, but even that gap matters for polling. And when Morris looked closer at specific groups, especially demographic groups that make up a smaller share of the U.S. population, the AI responses became even less accurate.
For example, when asked to predict Trump’s disapproval rate among Black voters, the AI significantly overestimated the figure relative to what real respondents reported. If a political campaign relied on this kind of data, it might target messaging based on false assumptions, reacting to a distorted picture created by AI instead of understanding how actual voters feel.
The Limitations of AI in Political Surveys
Morris highlighted that the AI-generated samples are too flawed to be useful for serious research. The errors are often several percentage points off, which is too much for polls that aim to inform campaigns or academic studies. At the subgroup level—like specific racial or age groups—the inaccuracies grow even larger, making AI responses unreliable for understanding key voter segments.
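The comparison described above boils down to measuring how far AI-generated percentages drift from a human benchmark, both overall and within subgroups. Here is a minimal sketch of that kind of check; the numbers are hypothetical placeholders for illustration, not Verasight’s actual data.

```python
# Hypothetical sketch: compare AI-generated disapproval percentages against
# a human survey benchmark, overall and by subgroup. All figures below are
# invented for illustration; they are not the white paper's real numbers.

human = {"overall": 44.0, "black_voters": 83.0}  # % disapproval, real respondents
ai = {"overall": 48.0, "black_voters": 95.0}     # % disapproval, LLM "sample"

for group in human:
    error = ai[group] - human[group]
    print(f"{group}: AI off by {error:+.1f} points")
```

Note how the subgroup error can dwarf the topline error: an AI sample that looks only a few points off in aggregate can still be badly wrong for the very segments a campaign most wants to understand.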
Despite these problems, some AI polling startups continue to push the idea that their methods are better than traditional polls. A startup called Aaru claimed that even though it predicted a different winner in the last presidential election, its approach was still statistically valid because the margin of error was similar to that of traditional polls. But as Morris points out, just because the numbers are close doesn’t mean they’re accurate.
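Morris’s objection can be made concrete: a poll’s margin of error only quantifies random sampling noise for a given sample size. It says nothing about systematic bias, so an estimate can carry a perfectly ordinary margin of error and still miss the true value by more than that margin. The sketch below uses the standard formula for a proportion’s 95% margin of error with hypothetical numbers (the true share, the estimate, and the sample size are all invented for illustration).

```python
# Why a "similar margin of error" doesn't imply accuracy: the margin of error
# measures random sampling noise, not systematic bias. Numbers are hypothetical.
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1500               # hypothetical sample size
true_share = 0.52      # hypothetical true vote share
biased_estimate = 0.47 # hypothetical AI estimate: picks the wrong winner

moe = margin_of_error(biased_estimate, n)
print(f"Margin of error: +/-{moe * 100:.1f} points")
print(f"Miss vs. truth:  {abs(true_share - biased_estimate) * 100:.1f} points")
```

Here the margin of error works out to roughly two and a half points, yet the estimate misses the truth by five, twice the stated margin, and calls the wrong winner. That is exactly why "our margin of error was similar to traditional polls" is not, by itself, evidence of validity.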
Why Relying on AI for Polls Is Still Risky
The push for using AI in polling is driven by the hope of saving money and time. But the truth is, AI models still struggle to capture the complexity and diversity of human opinions. They tend to produce biased or inaccurate responses, especially when trying to represent minority groups or nuanced viewpoints.
This raises questions about how much trust we can put in AI-generated data for critical decisions, like campaign strategies or policy making. As Morris notes, AI responses are currently too error-prone to replace traditional survey methods. While AI might be good for some tasks, understanding what real people think is not one of them—at least not yet.
In the end, AI’s role in political polling remains limited. It can supplement traditional methods, but it shouldn’t be the primary source for gauging public opinion. Until AI improves significantly, human respondents still hold the key to accurate, reliable polls.