I am shocked (but totally unsurprised) at the Iowa poll. Women (all colors, all political parties, and all ages) have been engaged & enraged post-Dobbs. Oh, and lots of folks haven't forgotten January 6th.
Yet the pollsters couldn't seem to figure out that perhaps more women would turn out. Or perhaps there'd be a ton of Republican crossover vote for Harris.
I had to read a TON of articles about the "bro vote" and the importance of Joe Rogan. Yet even I know the "young bro" demo is the least likely to vote.
But women? Especially women born in the 1950s and '60s like me? WE VOTE. And we don't vote for corrupt traitors, rapists, or fraudsters. Especially when they're the same person.
Older women are dismissed, invisible in so many ways. People think it's Dobbs, and it is, but it's also the entire way they talk about women at every chance they get. If they think Puerto Ricans are angry about the garbage comment, they have no idea how angry women are after 8 years of this crap. We protested in the MILLIONS on DJT's Inauguration Day. Did they think it got better in the intervening years? And it's not just what they're saying, it's all the visuals: his relationship with Melania, who she is, what that says about women. It's the young women who worked in his administration, like Cassidy Hutchinson and Sarah Matthews. The long list of women he has assaulted. It's the entirety of how he is, how he acts, what he says, and all the other misogynistic bros like Elon. Our society is ill, or this man would've never been elected.
Thank you for this. I have been trying to put my finger on what feels so wrong about the poll aggregators, especially after reading Nate's article on herding today. I don't have the stats background to fully delve into the issues, but this analysis captures the conclusion that's been rumbling around in my head: it's unscientific, but with a veneer of science to try to pass as such. The same as the polls themselves.
"if my forecast is bad it's not my fault" is currently accepted and it needs to stop
Here's how I described Silver's work to a former gubernatorial candidate and a member of his staff:
Here's the backup article. I finished it a few minutes ago. He's hedging his bets because everybody he uses in his models is hedging their bets…
This is again what drives me nuts about this constant narrative of Atlas Intel being "the highest-rated pollster," when they literally have one presidential election under their belt. Additionally, it's interesting that their MOE is being used to talk about how great they are, but I can't remember seeing anything from the aggregators that goes into why their methods would lead to good results, or how those methods ensure better accuracy.
It is also interesting to me how Atlas touts their incredible successes (while ignoring their incredible misses, of which there are more than a few) based on their MOE accuracy, when often the difference in the MOE miss between them and other pollsters is in the tenths of a percentage point (1.2% off for them and 1.4% off for another pollster, in one example they posted showing how great they are). This is essentially meaningless in the realm of statistics with large sample sizes (if I'm remembering my many stats classes correctly). And the fact that they consistently get almost exactly the same numbers for each candidate (from July through October, across six polls, Trump ranged from 49.6% to 50.8% and Harris from 48% to 48.3%, a 0.3-point spread) is a little suspicious to me, as random chance would likely create more variance.
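To put a rough number on that intuition, here's a quick simulation of pure sampling error in Python. The 48% true-support level and the 2,000-person sample size are my assumptions (I don't know Atlas's actual n), so treat it as a sketch, not a recreation of their polls:

```python
import random

# Sketch: if Harris's true support were 48%, how much should six
# independent polls vary from sampling error alone? Sample size of
# 2,000 per poll is an assumption, not Atlas's actual n.
random.seed(1)

TRUE_SUPPORT = 0.48
SAMPLE_SIZE = 2000
NUM_POLLS = 6

estimates = []
for _ in range(NUM_POLLS):
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    estimates.append(100 * hits / SAMPLE_SIZE)

print([f"{e:.1f}" for e in estimates])
print(f"spread: {max(estimates) - min(estimates):.1f} points")
# Typical runs show a spread of roughly 2-3 points across six polls,
# far more than the 0.3-point range Atlas reported, before any
# real-world noise (weighting, nonresponse) is even added.
```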
Correct on all points here regarding Atlas Intel
The fact is, the "best pollster" in any given election is determined mostly by literal chance. This is mathematically irrefutable.
Yet the quants in this field don't understand that, it seems.
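Here's a minimal sketch of what "determined by chance" means: twenty hypothetical pollsters with identical skill poll the same race, and ranking them by error crowns a "winner" anyway. The true margin and error size below are assumptions chosen only for illustration:

```python
import random

# Twenty pollsters with *identical* methodology and skill each poll the
# same race. Rank them by final-poll error and one "wins" regardless.
random.seed(42)

TRUE_MARGIN = 2.0   # assumed true margin, in points
POLL_SD = 3.0       # assumed total poll error (sampling + design), in points

for election in range(3):
    errors = {f"pollster_{i}": abs(random.gauss(TRUE_MARGIN, POLL_SD) - TRUE_MARGIN)
              for i in range(20)}
    best = min(errors, key=errors.get)
    print(f"election {election}: 'most accurate' = {best}, "
          f"error = {errors[best]:.2f} pts")
# Whichever pollster "wins" does so purely by the luck of the draw,
# since every pollster here samples from the same error distribution.
```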
This bias is no different from the managed funds the various companies used to tout: start 100 analysts with small funds, and each year keep only the ones that exceed the market average, such that after 3 years you can tout the "track record" of someone allegedly knowledgeable vs. the likely reality of someone who just got lucky. In these infrequent elections there is a similar survivorship bias going on.
The problem with Nate Silver's analysis is twofold. For one, his methodology punishes pollsters that go out on limbs but get it wrong. Secondly, it *assumes* that the methods / weighting / etc. used by the "excellent" pollster in the last election will carry the day in this year's election. That is very unlikely to be the case, as the candidates, voting population, issues, etc. typically change from election cycle to election cycle.
Not only do you have a roulette wheel of entropy going on, but the available scope of numbers on said wheel(s) may also be changing, based on how well the weightings / demographics / etc. properly capture the likely voting population in a given election cycle.
"This bias is no different than the managed funds the various companies used to tout - start 100 analysts with small funds, and each year only keep the ones that exceed the market average such that after 3 years you can tout a โtrack recordโ of someone allegedly knowledgeable vs. likely reality of someone who just got lucky."
I used this exact analogy in my book as a threat to the field's future and legitimacy. Everyone wants an Oracle, but we just keep hopping on the cycle of Oracle --> hack --> appoint new Oracle.
I think it's true that the poll aggregators (and Nate was not the first) have created a perverse incentive for pollsters to avoid publishing outlier data, which we know should occur but is lacking.
In the medical literature we have the opposite perverse incentive, where outlier data is rewarded and confirmatory results are devalued. You get published by being novel and different, not by confirming others. So we have an industry of meta-analysis where the outliers get averaged together. The aggregate data, not surprisingly, ends up in the middle, often with the caveat that none of the trials were really well conducted to begin with.
Pollster preregistration and data availability are suggestions similar to what we have adopted in studies due to replication problems.
That's a great point, and I know that other scientific fields have ways of dealing with these incentives and disincentives. I'm actively looking to collaborate with people who have backgrounds in other fields because, in my experience, they're the ones who offer the best paths forward.
Poll data analysis seems to be a fledgling field, stuck on tradition instead of improving its methods.
Nate Silver's approach reminds me of similar attempts to appear pseudo-scientific re: ranking colleges. Every college president starts to game the statistics to juice their rankings, whether or not that actually benefits students.
It ends in an arms race where the better-endowed universities usually prevail, and the less-endowed ones can be felled by one or two years of less-than-stellar enrollment.
Rankings can tank schools, and schools' desire to appear selective, with a high yield of enrollments per acceptance, means that most kids have to apply to 2-3x as many schools as their parents did in order to have a good probability of getting in somewhere they'd like to go. It's nuts.
This article is super strange in that it suggests reality is entirely based around Nate Silver, a mistake Nate often makes as well, though it's more understandable in his case.
> But if your reputation is being largely formed on a tiny sample of one election, for the one or two polls you conduct closest to the election, and your poll produces results three or four points off the average, what exactly is your incentive for reporting that data as such?
There are a ton of incentives to report this data. It is the honest and morally correct thing to do. Pollsters rely on trust, and lying about or hiding your results in this way is quite literally bad for business. There are just more things in the world than Nate, and, believe it or not, people in the polling industry care about what they do for a living. If you don't like Nate Silver, you can stop reading him.
"There are a ton of incentives to report this data. It is the honest and morally correct thing to do."
That's not an incentive, that's a motivation, at best.
"Pollsters should rely on trust and lying or hiding your results in this way is quite literally bad for business."
This does not agree with reality. Good pollsters have gotten out of the business for fear of "getting it wrong."
It seems that you're making this really personal and emotional and I think it's clouding your judgement
"it seems you're making this really personal and..."
Do you have anything substantive to say or are you just going to talk about your feelings and try to psychoanalyze me?
I did reply quite substantively to you. The things you're saying are just incorrect.
"This article is super strange in that it suggests reality is entirely based around nate silver"
Nate Silver is the most often cited analyst in the field
Most often cited doesn't mean the entire fabric of reality bends around him.
"doesn't mean entire fabric of reality..."
He influences other analysts to adopt his flawed methods.
If you don't understand that then I'm not interested in explaining it lol
Your assertion, presented without evidence, that one person is driving an entire industry into wildly unethical behavior so they stay in his good graces is just batshit insane. It's QAnon levels of crazy. I don't think you can possibly understand how anything works if you truly believe this to be true.
Mc, this is a straw man argument.
This blog post is wrong. It's just a crazy rant that would've been better suited to a tweet. I engaged in good faith and gave a substantive response, but the entire idea is ridiculous, and I gave it far more credence than it deserved, tbh.
"the blog post is wrong"
What is wrong about it, specifically? Use quotes.
So Nate's method of grading pollsters is like an influencer rating food on an arbitrary 10-point scale.
I don't work for a campaign. I'm not selling ads to campaigns. As an amateur but rabid consumer of polling data, why would I care about any metric other than how close your poll in a given state was to the final results compared to other pollsters? I agree that if you're going to rate pollsters it should reflect multiple elections, but in the end, I want to know who has a good track record and who has a persistent bias (mathematically, nothing intentional) towards one side.
"why would I care about any metric other than how close your poll in a given state was to the final results compared to other pollsters?"
Because if you're going to consume poll data you should understand what the data you're consuming means.
Many people who approach the roulette wheel see that something like 15 of the 20 most recent spins have been black, and thus conclude red is due.
Sometimes data is not useful in the way you want it to be. And understanding it better makes it more useful than the current calculation, which is based in luck, not skill.
Maybe I'm reading too much into your analogy, but unless you're trying to tell me that pollster accuracy is purely random, I'm still at a loss as to a better metric. If a pollster's collection or adjustment methods reliably lead to bias in one direction or another, or if a pollster is consistently 5 or 6 points off every election (even worse if it's in differing directions), then I want to know that. As an example, I knew how to interpret Rasmussen polls (even before they were caught working with the campaign) because of their consistent bias. I get binomial distributions and the variation of polls, even for an individual pollster. But in the end, short of simply ditching polls and declaring them useless, it's still the only metric I care about.
It's not purely random, but for any individual election it's so close to random that saying whose poll is "best" is statistical nonsense.
Strongly recommend this article
https://open.substack.com/pub/realcarlallen/p/a-quick-poll-math-lesson?r=1tl3in&utm_campaign=post&utm_medium=web
"So you're telling me the pollsters who are graded as most accurate are just luck"
Yes, literally by definition. There's a factor of randomness that cannot be accounted for by any poll error calculation, and it is currently ignored.
No one who understands how polls work would grade or rate pollsters as "most accurate" for any individual election.
It's statistical nonsense
I understand where you're coming from re: "the only metric I care about" but here you're judging the accuracy of polls - which are tools that are not intended to be predictions - by how well they predict something.
That's misplaced analysis, guided by the field's current standards, which I'm trying to correct.
I agree with what you're saying… to a point. Recency to the election is important. I also agree that polls are not predictions in and of themselves. There's little-to-no practical difference between Trump 48-47 and Harris 48-47 insofar as what will happen on Election Day. However, if a poll taken a few days out shows candidate A up 49-46 with 3% undecided, and candidate B wins by 5, barring a natural disaster, either something was horribly wrong with the poll, or it was just extremely bad luck. If it happens in multiple states, then there is something amiss with the pollster. I want to know this. So do I care who's the "best"? No. Do I want to know, at least under a specific set of conditions, whose results come closer to the final results? Yes, I do.
This feels relevant here.
https://www.acsh.org/news/2019/12/14/election-polls-should-report-confidence-intervals-not-just-margins-error-14452
Just because a poll gets updated doesn't make it statistically significant. Most polls are not providing new information, just repeated samples within the confidence intervals. I feel like this is especially true in an election with a strong known entity like Trump.
I am not well versed in statistics. However, there is a margin of error in polls. In your scenario of 49-46 with 3 percent undecided (that leaves 2 percent out there, still), if the margin of error is 4 percent per candidate and the 2 percent undecided vote for B, then the outcome could be 52-46 in favor of B. That's within the margin of error of the poll, I believe, and the poll is still "good."
I think, anyway. Someone correct me if I have it wrong, please.
Yeah, it's almost certainly still a very good poll in terms of accuracy
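For concreteness, here's the margin-of-error arithmetic as a rough sketch; the 1,000-person sample size is my assumption, since the hypothetical poll doesn't specify one:

```python
import math

# 95% margin of error for a candidate polling near 49%, assuming a
# simple random sample of 1,000 (the hypothetical poll gives no n).
n = 1000
p = 0.49
moe = 1.96 * math.sqrt(p * (1 - p) / n) * 100
print(f"95% MOE: +/- {moe:.1f} points per candidate")  # about +/- 3.1
# So a 49-46 poll with n = 1,000 is consistent with shares anywhere
# from roughly 46 to 52 for A and 43 to 49 for B on sampling error
# alone, before the undecideds break either way.
```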
And as Substack still has no 🤬 edit, thank you for responding in the first place.
You can edit comments on a computer, just not in the smartphone app.
I hope I'm understanding Carl's argument here, but another problem is how "accuracy" is being judged by Silver, that is, by margins. For example, if a pollster releases a poll right before the election that says the final result is 49-46, but the election actually shows 46-43 with a big chunk of third-party support, Silver would judge that as perfectly accurate, because the margin was correct. Of course, it was way off. It got the result correct, but the numbers totally wrong.
"if a poll says 49-46, but then the election is 46-43 with lots of third-party voters, Silver would judge that as perfectly accurate because it got the margin right"
That's absolutely correct. Not just Silver, but the consensus of experts in this field.
This happens all the time in primary elections, where a candidate polls at 65, ends at 58, but still wins by a margin close to the original poll.
The bigger problem is that this "spread method" assumes undecideds must always split 50-50 (and any third-party defectors must split evenly between the major candidates), which is stupid on its face, but it is literally how the entire field currently measures poll error.
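To see the difference in one calculation, here's the hypothetical 49-46 poll and 46-43 result from above, scored both ways:

```python
# The hypothetical 49-46 poll and 46-43 result from the comment above.
poll = {"A": 49.0, "B": 46.0}
result = {"A": 46.0, "B": 43.0}

# Spread method: compare the poll's margin to the result's margin.
spread_error = abs((poll["A"] - poll["B"]) - (result["A"] - result["B"]))

# Per-candidate view: total miss on each candidate's actual share.
candidate_error = sum(abs(poll[c] - result[c]) for c in poll)

print(f"spread-method error: {spread_error:.1f} points")    # 0.0 -> "perfect"
print(f"per-candidate error: {candidate_error:.1f} points") # 6.0 -> badly off
```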
When I saw Selzer's poll, my first thought was 'Methodology Matters.' I do not think she considers herself brave for publishing her results; I think she is very confident in how she obtained them. From what I'm reading, she does much more extensive questioning than the average pollster. Also, and this is huge, she is local and trusted. She has to get a higher response rate than nearly every other pollster (response rate should be reported in every poll, as well as how many responses come from repeat responders). I also have zero clue about her political leanings, so to respondents she is seen as neutral.
I think how pollsters are achieving this herding is also a very key question. They're adjusting their weighting factors to 'correct' for 'expected demographics' of who votes. First, I think average pollsters do not have nearly as much insight into this as they think they do, and second, I don't think average pollsters are trying to determine that. I think Selzer recognizes the first and is doing the second.
In another world I've tried to explain to people the difference between independent and dependent random variables, and I'll just say it's challenging. Also, one mistake of 2016 was assuming that state-to-state results are independent; they're not! The reasons someone in MI votes one way are likely to be similar to the reasons someone in WI votes that way!
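A small simulation shows why that matters. The +1 polled margin and the error sizes below are assumptions for illustration only:

```python
import random

# Sketch: a candidate polls +1 in MI, WI, and PA. How often do they lose
# all three if state errors are independent versus mostly shared?
random.seed(7)

POLLED_MARGIN = 1.0   # assumed polled lead in each state, in points
TRIALS = 100_000

def lose_all_three(shared_sd, state_sd):
    losses = 0
    for _ in range(TRIALS):
        shared = random.gauss(0, shared_sd)  # error common to all three states
        if all(POLLED_MARGIN + shared + random.gauss(0, state_sd) < 0
               for _ in range(3)):
            losses += 1
    return losses / TRIALS

print(f"independent errors: P(lose all 3) = {lose_all_three(0.0, 3.0):.3f}")
print(f"correlated errors:  P(lose all 3) = {lose_all_three(2.5, 1.7):.3f}")
# With a shared error component, the sweep probability roughly doubles,
# even though each state's total error is about the same size.
```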
If Nate Silver passed from this earthly plane, I would dance on his grave and then piss on it.
Yes, this article seems to assume the entire industry thinks about nothing except Nate Silver's personal opinion. It's quite off-base.
I literally said in the article that both pollsters and Silver act in their own self-interest
And that Silver gets pissy when pollsters don't do what he wants
Hence
Stfu
It is in Silver's best interest to complain about herding.
"please consider that it might be productive to stop taking personally the things that should be very impersonal"
You're confusing the fact that I've provided supporting evidence that someone is wrong, stupid, or otherwise unjustified in their work with something personal.
Maybe you should stop taking the fact that I don't care what you think about my work so personally.
You can talk about the content of my work or you have nothing to offer.
"our pollster ratings actually include a penalty for herding"
Ohhhhh noeeeez not his RATINGS!
Lmao do you hear yourself?
Jonathon, you rather come across as trying to pick a fight, with your emotional arguments and condescension and name-calling.
Your argument is that Nate Silver tries to correct for the problem, which he admits exists. You don't have any basis for saying that Silver is able to correct for the problem. How could you? It seems to me rather like trying to get detail out of an overdeveloped negative. A lot of the information is gone.
But I'm not an expert. Anyway, it doesn't actually have much bearing on Carl Allen's point, which relates to the problem itself and its proximate cause.
Silver rightly identified herding as a problem.
Silver uses his platform to chastise pollsters he believes are herding.
Silver rightly points out, again, that if polls herd, individual pollsters will appear less wrong (by his unscientific metric) on average.
At no point does Silver stop to consider the possibility that his own unscientific calculation for "poll accuracy" (which he proudly publishes, uses to reward "good" pollsters and punish "bad" ones) is at least partially responsible for the problem he laments.
Hence... the article.