A good analysis that mirrors what I mean when I say the election is "uncertain more than close":
Polling paradox: what actually shapes the numbers?
Likely voters, past voters, demographics and more – how pollsters weight 2024 election poll data matters a great deal.
Good Authority (goodauthority.org)
The author (a political scientist at Vanderbilt) takes a very reliable survey and shows how, with different sets of reasonable assumptions, you can get results anywhere from Harris+0.9% to Harris+9%. The exact values aren't important; it could just as well be Trump+3% to Harris+5%. What matters is the range.
The point is that it is a hugely uncertain range, based on a lot we don't know.
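To make that concrete, here's a minimal sketch (invented numbers, not the article's data) of how the same raw responses yield different toplines under two equally defensible turnout assumptions:

```python
# Hypothetical illustration: one set of raw poll responses, weighted two ways.
# All numbers are invented; only the mechanism matters.

# Unweighted Harris share within two demographic groups
# (say, college vs. non-college respondents).
harris_share = {"college": 0.58, "non_college": 0.44}

def weighted_margin(electorate_mix):
    """Topline Harris-minus-Trump margin under an assumed electorate mix."""
    harris = sum(harris_share[g] * w for g, w in electorate_mix.items())
    trump = 1.0 - harris  # treat it as a two-way race for simplicity
    return (harris - trump) * 100

# Two reasonable assumptions about who actually turns out:
high_college_turnout = {"college": 0.45, "non_college": 0.55}
low_college_turnout = {"college": 0.35, "non_college": 0.65}

print(f"{weighted_margin(high_college_turnout):+.1f}")  # +0.6 (Harris ahead)
print(f"{weighted_margin(low_college_turnout):+.1f}")   # -2.2 (Trump ahead)
```

Same respondents, same answers; only the assumed electorate changed, and the winner flipped.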
#politics -
Nire Bryce replied to Hrefna (DHC)
@hrefna
I'm always surprised by how readily people throw out medical papers for not having huge sample sizes, while electoral polling is accepted at sample sizes that are often < 20,000.
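For scale, here's what the standard 95% margin-of-error formula for a simple random sample says about pure sampling noise at those sizes (real polls apply design effects that widen this somewhat):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a proportion
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (1_000, 20_000):
    print(f"n = {n:>6}: ±{margin_of_error(n):.1f} points")
# n =   1000: ±3.1 points
# n =  20000: ±0.7 points
```

Sampling noise at n = 1,000 is only about ±3 points; the far wider Harris+0.9% to Harris+9% spread above comes from weighting and turnout assumptions, not from sample size.
-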
Basically you can think of it as a feed-forward model.
Population > sampled by pollsters who try to match the demographics > modeled to estimate who in that group is a likely voter > aggregated by forecast models that may, in turn, apply their own assumptions about the quality or accuracy of those predictions.
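A loose sketch of those stages as code; the function and field names are hypothetical, and real pipelines are far more elaborate:

```python
import random

def sample_population(population, n):
    """Stage 1: the pollster draws n respondents from the population."""
    return random.sample(population, n)

def weight_demographics(respondents, target_mix):
    """Stage 2: reweight respondents so the sample matches the
    pollster's assumed demographic mix."""
    counts = {}
    for r in respondents:
        counts[r["group"]] = counts.get(r["group"], 0) + 1
    return [
        {**r, "weight": target_mix[r["group"]] * len(respondents) / counts[r["group"]]}
        for r in respondents
    ]

def likely_voter_screen(respondents):
    """Stage 3: keep only those the pollster models as likely voters
    (real screens downweight rather than drop, and use many signals)."""
    return [r for r in respondents if r["likely_voter"]]

def aggregate(poll_margins, house_effects):
    """Stage 4: an aggregator averages many polls, adjusting each for
    the house effect it attributes to that pollster."""
    adjusted = [margin - house_effects.get(pollster, 0.0)
                for pollster, margin in poll_margins]
    return sum(adjusted) / len(adjusted)

# e.g. aggregate([("A", 2.0), ("B", -1.0)], {"A": 0.5}) == 0.25
```

Every arrow in that chain is a place where an assumption enters.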
Prior to 2016 or so, the high-quality pollsters' assumptions were mostly in line with each other, and their demographic sampling was largely similar.
-
So your big, high-quality names were accurately predicting elections (and had a history of doing so), and averaging their results together made a great deal of sense.
You of course had your Zogbys (low-quality sampling) and your Research2000s (flat-out making things up) and others with a more partisan bent… but _in general_ the system was accurate, and often precise as well. There were some noted exceptions, but on the whole it worked.
But now… the major pollsters are making different assumptions.
-
The old ways of sampling have also been breaking down, and finding inexpensive methods that do work has been challenging.
Enter the "unskewing" pollsters: pollsters who aren't interested in accurate predictions, but in creating or reinforcing a narrative.
But who _are_ those pollsters?
Sometimes there's obvious evidence to that effect (Trafalgar); often you're just trying to infer it from their results. But is it:
1. Bad data collection
2. A bad theory
3. Deliberate bias