49 Comments
SFHaine:

I am shocked (but totally unsurprised) at the Iowa poll. Women - all colors, all political parties, and all ages - have been engaged & enraged post-Dobbs. Oh, and lots of folks haven't forgotten January 6th.

Yet the pollsters couldn't seem to figure out that perhaps more women would turn out. Or perhaps there'd be a ton of Republican crossover vote for Harris.

I had to read a TON of articles about the "bro vote" and the importance of Joe Rogan. Yet even I know the "young bro" demo is the least likely to vote.

But women? Especially women born in the 1950s and '60s, like me? WE VOTE. And we don't vote for corrupt traitors, rapists, or fraudsters. Especially when they're the same person.

Kris:

Older women are dismissed, invisible in so many ways. People think it's Dobbs, and it is, but it's also the entire way they talk about women at every chance they get. If they think Puerto Ricans are angry about the garbage comment, they have no idea how angry women are after 8 years of this crap. We protested in the MILLIONS on DJT's Inauguration Day. Did they think it got better in the intervening years? And it's not just what they're saying - it's all the visuals: his relationship with Melania, who she is, what that says about women. It's the young women who worked in his administration, like Cassidy Hutchinson and Sarah Matthews. The long list of women he has assaulted. It's the entirety of how he is, how he acts, what he says - and all the other misogynistic bros like Elon. Our society is ill, or this man would've never been elected.

Heather Coon:

Thank you for this. I have been trying to put my finger on what feels so wrong about the poll aggregators, especially after reading Nate's article on herding today. I don't have the stats background to fully delve into the issues, but this analysis captures the conclusion that's been rumbling around in my head: it's unscientific, but with a veneer of science to try to pass as such. The same as the polls themselves.

Carl Allen:

"if my forecast is bad it's not my fault" is currently accepted and it needs to stop

Frank Canzolino:

Here's how I described Silver's work to a former gubernatorial candidate and a member of his staff:

Here's the backup article. I finished it a few minutes ago. He's hedging his bets because everybody he uses in his models is hedging their bets...

Michael Fell:

This is again what drives me nuts about this constant narrative of Atlas Intel being "the highest-rated pollster," when they literally have one presidential election under their belt. Additionally, it's interesting that their MOE is being used to talk about how great they are, but I can't remember seeing anything from the aggregators that goes into why their methods would lead to good results, or how those methods ensure better accuracy.

It is also interesting to me how Atlas touts their incredible successes (while ignoring their incredible misses, of which there are more than a few) based on their MOE accuracy, when the difference in MOE miss between them and another pollster is often in the tenths of a percentage point (1.2% off for them versus 1.4% off for another pollster, in one comparison they posted showing how great they are). That is essentially meaningless in the realm of statistics with large sample sizes (if I'm remembering my many stats classes correctly). And the fact that they consistently get almost exactly the same numbers for each candidate (over six polls from July through October, Trump ranged from 49.6% to 50.8% and Harris from 48% to 48.3% - a 0.3% spread across six polls) seems a little suspicious to me, as random chance would likely create more variance.
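
The variance suspicion above can be checked with a quick simulation. Every number here is an assumption for illustration (a true share of 48% and a sample size of 2,000 per poll are guesses, not Atlas Intel's actual figures):

```python
import math
import random

random.seed(0)

# Assumed for illustration only - not Atlas Intel's real parameters:
p = 0.48              # hypothetical true candidate share
n = 2000              # hypothetical sample size per poll
polls = 6             # six polls, July through October
observed_range = 0.3  # reported spread: 48.0% to 48.3%

# Sampling standard error of a single poll's estimate, in percentage points
se = 100 * math.sqrt(p * (1 - p) / n)

# Monte Carlo: how often would six independent polls of the same true
# value land within 0.3 points of each other by chance alone?
trials = 100_000
hits = 0
for _ in range(trials):
    draws = [random.gauss(100 * p, se) for _ in range(polls)]
    if max(draws) - min(draws) <= observed_range:
        hits += 1

print(f"per-poll sampling SE: {se:.2f} points")
print(f"P(six-poll range <= {observed_range} pts): {hits / trials:.5f}")
```

Under these assumptions the six-poll spread should span a couple of points, not three-tenths of one - which is the commenter's point about missing variance.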

Carl Allen:

Correct on all points here regarding Atlas Intel

The fact is, the 'best pollster' in any given election is determined mostly by literal chance. This is mathematically irrefutable.

Yet the quants in this field don't understand that, it seems.
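
The "determined mostly by chance" claim can be sketched directly. All parameters here are assumptions: 20 equally skilled, unbiased pollsters, a true share of 50%, and samples of 1,000:

```python
import math
import random

random.seed(1)

# Assumed setup: every pollster is identical and unbiased, so any
# "best pollster" title in a single election can only be luck.
true_share = 0.50
se = math.sqrt(true_share * (1 - true_share) / 1000)  # sampling error, n = 1000
pollsters, elections = 20, 10_000

wins = [0] * pollsters
for _ in range(elections):
    estimates = [random.gauss(true_share, se) for _ in range(pollsters)]
    best = min(range(pollsters), key=lambda i: abs(estimates[i] - true_share))
    wins[best] += 1

shares = [w / elections for w in wins]
# Each pollster ends up "most accurate" about 1/20 of the time
print(f"win shares: min {min(shares):.3f}, max {max(shares):.3f}")
```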

Constantin:

This bias is no different from the managed funds that various companies used to tout - start 100 analysts with small funds, and each year keep only the ones that exceed the market average, such that after 3 years you can tout a "track record" of someone allegedly knowledgeable versus the likely reality of someone who just got lucky. In these infrequent elections there is a similar survivorship bias going on.
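
The 100-analyst selection story above fits in a few lines of simulation; the coin-flip probability encodes the assumption that the managers have no skill at all:

```python
import random

random.seed(2)

managers, years = 100, 3

# A manager "survives" only by beating the market average every year,
# which for a no-skill manager is a 50/50 coin flip each year.
survivors = sum(
    all(random.random() < 0.5 for _ in range(years))
    for _ in range(managers)
)

# Expectation: 100 * 0.5**3 = 12.5 pure-luck survivors, each now
# sporting a flawless 3-year "track record" to tout.
print(f"lucky survivors with a perfect 3-year record: {survivors}")
```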

The problem with Nate Silver's analysis is twofold. For one, his methodology punishes pollsters that go out on limbs but get it wrong. Secondly, it *assumes* that the methods, weighting, etc. used by the "excellent" pollster in the last election will carry the day in this year's election. That is very unlikely to be the case, as the candidates, voting population, issues, and so on typically change from election cycle to election cycle.

Not only do you have a roulette wheel of entropy going on, the available scope of numbers on said wheel(s) may also be changing based on how well the weightings, demographics, etc. properly capture the likely voting population in a given election cycle.

Carl Allen:

"This bias is no different than the managed funds the various companies used to tout - start 100 analysts with small funds, and each year only keep the ones that exceed the market average such that after 3 years you can tout a โ€œtrack recordโ€ of someone allegedly knowledgeable vs. likely reality of someone who just got lucky."

I used this exact analogy in my book as a threat to the field's future and legitimacy. Everyone wants an oracle, but we just keep hopping on the cycle of oracle --> hack --> appoint new oracle.

James Vornov:

I think it's true that the poll aggregators (and Nate was not the first) have created a perverse incentive for pollsters to avoid publishing outlier data, which we know should occur but is lacking.

In the medical literature we have the opposite perverse incentive, where outlier data is rewarded and confirmatory results are devalued. You get published by being novel and different, not by confirming others. So we have an industry of meta-analysis where the outliers get averaged together. The aggregate data, not surprisingly, ends up in the middle, often with the caveat that none of the trials were really well conducted to begin with.

Pollster preregistration and data availability are suggestions similar to what we have in clinical studies as a result of replication problems.

Carl Allen:

That's a great point, and I know that other scientific fields have ways of dealing with these incentives and disincentives. I'm actively looking to collaborate with people who have backgrounds in other fields because, in my experience, they're the ones who offer the best paths forward.

Poll data analysis seems to be a fledgling field stuck on tradition instead of improving its methods.

Constantin:

Nate Silver's approach reminds me of similar pseudo-scientific attempts at ranking colleges. Every college president starts gaming the statistics to juice their rankings, whether it actually benefits students or not.

It ends in an arms race where the better-endowed universities usually prevail, and the less endowed ones can get felled by one or two years of less-than-stellar enrollment.

Rankings can tank schools, and the desire of schools to appear selective while keeping a high enrollment rate among accepted students means that most kids have to apply to 2-3x as many schools as their parents did in order to have a good probability of getting in somewhere they'd like to go. It's nuts.

mcsvbff bebh:

This article is super strange in that it suggests reality revolves entirely around Nate Silver - a mistake Nate often makes as well, though it's more understandable in his case.

> But if your reputation is being largely formed on a tiny sample of one election, for the one or two polls you conduct closest to the election, and your poll produces results three or four points off the average, what exactly is your incentive for reporting that data as such?

There are a ton of incentives to report this data. It is the honest and morally correct thing to do. Pollsters rely on trust, and lying about or hiding your results in this way is quite literally bad for business. There are just more things in the world than Nate, and believe it or not, people in the polling industry care about what they do for a living. If you don't like Nate Silver, you can stop reading him.

Carl Allen:

"There are a ton of incentives to report this data. It is the honest and morally correct thing to do."

That's not an incentive, that's a motivation, at best.

"Pollsters should rely on trust and lying or hiding your results in this way is quite literally bad for business."

This does not agree with reality. Good pollsters have gotten out of the business for fear of "getting it wrong"

mcsvbff bebh:

It seems that you're making this really personal and emotional, and I think it's clouding your judgment.

Carl Allen:

"it seems you're making this really personal and..."

Do you have anything substantive to say or are you just going to talk about your feelings and try to psychoanalyze me?

mcsvbff bebh:

I did reply quite substantively to you. The things you're saying are just incorrect.

Carl Allen:

"This article is super strange in that it suggests reality is entirely based around nate silver"

Nate Silver is the most often cited analyst in the field

mcsvbff bebh:

Most often cited doesn't mean the entire fabric of reality bends around him.

Carl Allen:

"doesn't mean entire fabric of reality..."

He influences other analysts to adopt his flawed methods.

If you don't understand that then I'm not interested in explaining it lol

mcsvbff bebh:

Your assertion, presented without evidence, that one person is driving an entire industry into wildly unethical behavior so they stay in his good graces is just batshit insane. It's QAnon levels of crazy. I don't think you can possibly understand how anything works if you truly believe this to be true.

Paul Stone:

Mc, this is a straw man argument.

mcsvbff bebh:

This blog post is wrong. It's just a crazy rant that would've been better suited to a tweet. I engaged in good faith and gave a substantive response, but the entire idea is ridiculous, and I gave it far more credence than it deserved, tbh.

Carl Allen:

"the blog post is wrong"

What is wrong about it, specifically? Use quotes.

Lee Reyes-Fournier, PhD:

So Nate's method of grading pollsters is like an influencer rating food on an arbitrary 10-point scale.

Still_Independent:

I don't work for a campaign. I'm not selling ads to campaigns. As an amateur but rabid consumer of polling data, why would I care about any metric other than how close your poll in a given state was to the final results compared to other pollsters? I agree that if you're going to rate pollsters it should reflect multiple elections, but in the end, I want to know who has a good track record and who has a persistent bias (mathematically speaking - nothing intentional) toward one side.

Carl Allen:

"why would I care about any metric other than how close your poll in a given state was to the final results compared to other pollsters?"

Because if you're going to consume poll data you should understand what the data you're consuming means.

Many people who approach the roulette wheel see that something like 15 of the last 20 spins have been black, and thus conclude red is due.

Sometimes data is not useful in the way you want it to be. And understanding it better makes it more useful than the current calculation, which is based on luck, not skill.
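
The roulette point can be verified directly; this sketch assumes a fair 50/50 wheel and ignores the green pockets:

```python
import random

random.seed(3)

# Simulate many spins, then check whether red becomes more likely
# after a streak of three blacks. It doesn't - spins are independent.
spins = [random.choice("RB") for _ in range(200_000)]

streaks = reds_after = 0
for i in range(3, len(spins)):
    if spins[i - 3:i] == ["B", "B", "B"]:
        streaks += 1
        reds_after += spins[i] == "R"

print(f"P(red | three blacks in a row) ~ {reds_after / streaks:.3f}")
```

The conditional frequency comes out at roughly one half, the same as on any other spin.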

Still_Independent:

Maybe I'm reading too much into your analogy, but unless you're trying to tell me that pollster accuracy is purely random, I'm still at a loss as to a better metric. If a pollster's collection or adjustment methods reliably lead to bias in one direction or another, or if a pollster is consistently 5 or 6 points off every election (even worse if it's in differing directions), then I want to know that. As an example, I knew how to interpret Rasmussen polls (even before they were caught working with the campaign) because of their consistent bias. I get binomial distributions and the variation of polls even for an individual pollster. But in the end, short of simply ditching polls and declaring them useless, it's still the only metric I care about.

Carl Allen:

It's not purely random, but for any individual election it's so close to random that saying whose poll is "best" is statistical nonsense

Carl Allen:

"So you're telling me the pollsters who are graded as most accurate are just luck"

Yes, literally by definition. There's a factor of randomness that no current poll error calculation accounts for - it is simply ignored.

No one who understands how polls work would grade or rate pollsters as "most accurate" for any individual election.

It's statistical nonsense

Carl Allen:

I understand where you're coming from re: "the only metric I care about," but here you're judging the accuracy of polls - which are tools, not predictions - by how well they predict something.

That's a misplaced analysis, guided by the field's current standards, which I'm trying to correct.

Still_Independent:

I agree with what you're saying... to a point. Recency to the election is important. I also agree that polls are not predictions in and of themselves. There's little-to-no practical difference between Trump 48-47 and Harris 48-47 insofar as what will happen on Election Day. However, if a poll taken a few days out shows candidate A up 49-46 with 3% undecided, and candidate B wins by 5, then barring a natural disaster, something was horribly wrong either with the poll, or it was just extremely bad luck. If it happens in multiple states, then there is something amiss with the pollster. I want to know this. So do I care who's the "best"? No. Do I want to know, at least under a specific set of conditions, whose results come closer to the final results? Yes, I do.

Thomas Elliot:

This feels relevant here.

https://www.acsh.org/news/2019/12/14/election-polls-should-report-confidence-intervals-not-just-margins-error-14452

Just because a poll gets updated doesn't make the change statistically significant. Most polls are not providing new information, just repeated samples within the confidence intervals. I feel like this is especially true in an election with a strong known entity like Trump.
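
For readers who want the arithmetic behind that suggestion, here is a minimal confidence-interval sketch. The share and sample size are hypothetical, and it assumes a simple random sample (real polls weight their samples, which widens the interval):

```python
import math

p_hat = 0.48  # hypothetical reported candidate share
n = 1000      # hypothetical sample size

# Normal-approximation 95% confidence interval for one candidate's share
se = math.sqrt(p_hat * (1 - p_hat) / n)
z = 1.96  # 95% critical value
lo, hi = p_hat - z * se, p_hat + z * se

print(f"95% CI: {100 * lo:.1f}% to {100 * hi:.1f}%")
```

Two successive polls at, say, 47% and 49% both sit inside this interval, so the "movement" between them carries no new information.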

Julius:

I am not well versed in statistics. However, there is a margin of error in polls. In your scenario of 49-46 with 3 percent undecided (which still leaves 2 percent out there), if the margin of error is 4 percent per candidate and the outstanding 2 percent vote for B, then the outcome could be 52-46 in favor of B. That's within the margin of error of the poll, I believe, and the poll is still "good."

I think, anyway. Someone correct me if I have it wrong, please.

Carl Allen:

Yeah, it's almost certainly still a very good poll in terms of accuracy

Still_Independent:

And as Substack still has no 🤬 edit, thank you for responding in the first place.

Paul Stone:

You can edit comments on a computer, just not in the smartphone app.

disinterested:

I hope I'm understanding Carl's argument here, but another problem is how "accuracy" is being judged by Silver - that is, by margins. For example, if a pollster releases a poll right before the election saying the final result is 49-46, but the election actually comes out 46-43 with a big chunk of third-party support, Silver would judge that as perfectly accurate, because the margin was correct. Of course, it was way off: it got the result correct, but the numbers totally wrong.

Carl Allen:

"if a poll says 49-46, but then the election is 46-43 with lots of third-party voters, Silver would judge that as perfectly accurate because it got the margin right"

That's absolutely correct. Not just Silver, but the consensus of experts in this field.

This happens all the time in primary elections, where a candidate polls at 65, ends at 58, but still wins by a margin close to the original poll.

The bigger problem is that this "spread method" assumes undecideds must always split 50-50 (and any third-party defectors must split evenly between the major candidates), which is stupid on its face, but it is literally how the entire field currently measures poll error.
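
The spread-versus-levels distinction can be made concrete with the hypothetical 49-46 poll and 46-43 result discussed in this thread:

```python
# Hypothetical numbers from the discussion above
poll = {"A": 49, "B": 46}
result = {"A": 46, "B": 43}

# Spread method: compare margins only. Both margins are +3 for A,
# so this metric reports zero error.
spread_error = abs((poll["A"] - poll["B"]) - (result["A"] - result["B"]))

# Per-candidate method: each estimate was actually 3 points too high.
candidate_errors = {c: abs(poll[c] - result[c]) for c in poll}

print(spread_error)      # 0
print(candidate_errors)  # {'A': 3, 'B': 3}
```

The same poll scores as perfect under one metric and 3 points off per candidate under the other, which is the disagreement being described.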

Jim Caserta:

When I saw Selzer's poll, my first thought was "methodology matters." I do not think she considers herself brave for publishing her results; I think she is very confident in how she obtained them. From what I'm reading, she does much more extensive questioning than the average pollster. Also, and this is huge, she is local and trusted. She has to get a higher response rate than nearly every other pollster (response rate should be reported in every poll, as well as how many responses come from repeat respondents). I also have zero clue about her political leanings, so to respondents, she is seen as neutral.

I think how pollsters are achieving this herding is also a very key question. They're adjusting their weighting factors to "correct" for the "expected demographics" of who votes. First, I think average pollsters do not have nearly as much insight into this as they think they do, and second, I don't think average pollsters are even trying to determine that. I think Selzer recognizes the first and is doing the second.

In another world I've tried to explain to people the difference between independent and dependent random variables, and I'll just say it's challenging. Also, one mistake of 2016 was assuming state-to-state results are independent - they are not! Why someone in MI votes one way is likely to be similar to why someone in WI votes that way!
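
The MI/WI correlation point can be illustrated with a toy simulation. The error magnitudes (a shared 2-point national error plus 1-point state-specific noise, versus independent state errors of the same total size) are assumptions chosen only to show the effect:

```python
import random

random.seed(4)

trials = 100_000
corr_both = ind_both = 0
for _ in range(trials):
    # Correlated case: both states share one national polling error
    shared = random.gauss(0, 2.0)
    mi = shared + random.gauss(0, 1.0)
    wi = shared + random.gauss(0, 1.0)
    corr_both += (mi > 2) and (wi > 2)

    # Independent case: same total spread (sqrt(4 + 1) ~ 2.24 points)
    mi_i = random.gauss(0, 2.24)
    wi_i = random.gauss(0, 2.24)
    ind_both += (mi_i > 2) and (wi_i > 2)

print(f"P(both states off by >2 pts), correlated:  {corr_both / trials:.3f}")
print(f"P(both states off by >2 pts), independent: {ind_both / trials:.3f}")
```

Treating the states as independent makes a simultaneous miss in both look far rarer than it really is, which is exactly the 2016 mistake described above.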

lunafaer (she/they):

if nate silver passed from this earthly plane i would dance on his grave and then piss on it.

[Comment deleted, Nov 3]

mcsvbff bebh:

Yes, this article seems to assume the entire industry thinks about nothing except Nate Silver's personal opinion. It's quite off-base.

[Comment deleted, Nov 3]

Carl Allen:

I literally said both pollsters and Silver act in their own self interest in the article

And that Silver gets pissy when pollsters don't do what he wants

Hence

Stfu

Moose:

It is in Silver's best interest to complain about herding.

[Comment deleted, Nov 3]

Carl Allen:

"please consider that it might be productive to stop taking personally the things that should be very impersonal"

You're confusing the fact that I've provided supporting evidence that someone is wrong, stupid, or otherwise unjustified in their work with something personal.

Maybe you should stop taking the fact that I don't care what you think about my work so personally.

You can talk about the content of my work, or you have nothing to offer.

Carl Allen:

"our pollster ratings actually include a penalty for herding"

Ohhhhh noeeeez not his RATINGS!

Lmao do you hear yourself?

Paul Stone:

Jonathon, you rather come across as trying to pick a fight, with your emotional arguments, condescension, and name-calling.

Your argument is that Nate Silver tries to correct for the problem he admits exists. You don't have any basis for saying that Silver is able to correct for it. How could you? It seems to me rather like trying to get detail out of an overdeveloped negative: a lot of the information is gone.

But I'm not an expert. Anyway, it doesn't actually have much bearing on Carl Allen's point, which relates to the problem itself and its proximate cause.

Carl Allen:

Silver rightly identified herding as a problem.

Silver uses his platform to chastise pollsters he believes are herding.

Silver rightly, again, points out that if polls herd, individual pollsters will appear less wrong (by his unscientific metric) on average.

At no point does Silver stop to consider the possibility that his own unscientific calculation for "poll accuracy" (which he proudly publishes, and uses to reward "good" pollsters and punish "bad" ones) is at least partially responsible for the problem he laments.

Hence... the article.
