5 Comments
Sep 25 · Liked by Carl Allen

Thanks, Carl! Very competently explained!

Excellent

Sep 24 · Liked by Carl Allen

Great explainer!

Sep 24 · Liked by Carl Allen

Thanks!

Sep 26 · edited Sep 26

Carl, as always, you make a lot of sense, and I enjoy your commentary very much.

I do follow polls and aggregates quite a bit, but I always have nagging doubts:

1) In light of the high cost of polling and other pressures on pollsters, what percentage of the polls we get are actually high quality? Further, there seem to be considerably fewer polls now than in past cycles. How do we know we have a large enough database of high-quality polls to draw scientific conclusions from the aggregates? The smaller the pool and the lower the quality of the data, the less reliable the outcome.

2) Many partisan pollsters seem to be attempting to game the averages to create a narrative favorable to their patrons. This appears to have been the case with R pollsters in 2022 (creating the false "red wave" narrative) and once again in 2024. How do you correct for bias from pollsters who are delivering results specifically to alter public perceptions? One case now is the Montana Senate. I may be wrong, but I'm not sure we've had any real non-partisan polling there in recent weeks. How do you create reasonable polling aggregates in the absence of actual scientific data there, or in other races where there are very few non-partisan polls?

3) The old "garbage in, garbage out" problem: one's conclusions are only as good as the quality of the data one collects. Even unintentional bias or other errors can leave you with a pool of flawed or tainted data. Yes, I know the idea in statistics that random errors cancel each other out, but what if the errors aren't random? What if the errors in polling come from consistent assumptions simply being incorrect? I would guess a lot of political polling errors fall into that category (see the short sketch after this comment).

4) How subjective is the "grading" of pollsters? How do we know someone didn't just get lucky for a cycle or two? Can we really agree on what makes a good poll? Yes, I understand transparency, but is that all there is to a good poll? For example, NYT polls are said to be transparent in their methodology, but do we really trust the assumptions they're making in weighting, especially in light of the serious problems with their news coverage? Does it really make me a poll denialist to wonder about that?

5) Why are polls the 'holy grail' of political data collection? What about fundraising data, candidate experience, registration info, trending searches on search engines, trending hashtags on social media, and so much more? What makes polling an inherently better measurement than other data (other than the fact that we have more practice analyzing polling data)?

I certainly still believe in using polling (especially directionally), I'm far from a poll denier, and I love numbers, but it seems as if anyone who asks critical questions of any kind gets slammed. Honestly, I don't want that. I just want to understand what's going on. I'm a scholar, and I know that some of the conclusions automatically accepted in the polling world would never survive the kind of scrutiny I expect. But this isn't my field of study, so maybe I'm just missing something. I certainly don't have any good answers here.
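To make the random-versus-shared-error point in item 3 concrete, here is a minimal Python sketch; the numbers are made up (the true margin, poll count, noise level, and shared bias are all hypothetical, not from Carl's post or the comment). It shows that averaging many polls shrinks independent noise, but a systematic error shared by every poll passes straight through to the aggregate.

import random

random.seed(1)

TRUE_MARGIN = 2.0    # hypothetical true margin, in points
N_POLLS = 15         # number of polls in the aggregate
NOISE_SD = 3.0       # independent per-poll sampling/house noise (std. dev., points)
SHARED_BIAS = 2.5    # systematic error shared by every poll, e.g. one wrong turnout assumption

def poll_average(shared_bias):
    # Each poll = truth + shared bias + its own independent noise.
    polls = [TRUE_MARGIN + shared_bias + random.gauss(0, NOISE_SD) for _ in range(N_POLLS)]
    return sum(polls) / len(polls)

# Independent errors only: the average lands close to the true margin.
print("random errors only:", round(poll_average(0.0), 2), "vs truth", TRUE_MARGIN)

# Every poll shares the same flawed assumption: averaging cannot remove it.
print("with shared bias:  ", round(poll_average(SHARED_BIAS), 2), "vs truth", TRUE_MARGIN)

With many polls the first average will typically sit close to the truth, while the second stays off by roughly the full shared bias no matter how many polls are added, which is the kind of correlated miss the comment describes.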
