Several people have raised or hinted at concerns about 538's presidential odds model, which has Biden favored even though Trump is leading in the polls. For example, https://twitter.com/RiverTamYDN/status/1813906369676001752.
If they just release an update to the model that, e.g., quietly changes the methodology or weakens Biden's chances without admitting anything, this would resolve NO. I'm looking for, e.g., blog posts or tweets from 538 or their employees that admit an error.
I will be the judge of what "substantial" means: "we fixed coding errors," change in which polls are included, would all resolve YES. I am not going to bet.
Producing unusual results is categorically NOT an error. Einstein's theory of general relativity predicted black holes. Those are an unusual result, but not an error.
An election model is useful BECAUSE it might predict unusual results!!! If it only predicted obvious results, it would be no better than the sources of information we already have!
@KevinBurke I have no shares in this market (so hopefully unbiased or whatever), but that's patently absurd, I'll be honest. I think you should consult with other users / moderators before resolving this market.
Let's say I think I can make a weather forecasting model that uses insight from birds' flight patterns, and I code up a bunch of stuff tracking bird movements and use that to predict weather patterns.
Now after a month, it turns out my model is showing really high odds of hurricanes in North Dakota. So, I tweak the model to give much less weight to bird movements.
I did not have an "error" in my model. It worked precisely as it should have. I simply created a bad model. Weather didn't actually correlate well with bird movements!!!
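To make the distinction concrete, here's a minimal sketch of the bird model (a hypothetical; all names and weights are invented for illustration). The code runs exactly as written, with no bug anywhere, yet the forecast is bad because the weight encodes a wrong belief about the world:

```python
# Hypothetical sketch of the bird model above. All names and numbers
# are invented for illustration; note that there is no bug here.

def hurricane_odds(bird_signal: float, pressure_signal: float,
                   w_birds: float = 0.8) -> float:
    """Blend two signals into a hurricane probability.

    The code does exactly what it was written to do. The problem is
    the modeling choice: w_birds=0.8 assumes bird movements are highly
    predictive of hurricanes, which turns out to be false.
    """
    blended = w_birds * bird_signal + (1 - w_birds) * pressure_signal
    return max(0.0, min(1.0, blended))

# Birds are behaving oddly over North Dakota (high bird_signal), but
# barometric pressure is completely normal (low pressure_signal).
print(hurricane_odds(bird_signal=0.9, pressure_signal=0.05))  # ~0.73

# The "fix" is not a bugfix; it is a change of modeling assumptions:
print(hurricane_odds(bird_signal=0.9, pressure_signal=0.05,
                     w_birds=0.1))  # ~0.14
```

A debugger would find nothing wrong with either version; the only thing that changed is the belief baked into w_birds.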
That is precisely what happened. GEM and 538 made a model that weighted fundamentals heavily, but this turned out not to be a very accurate method for predicting this election. There was no "error" in their model; it was just a bad model! They adjusted components of it to make the model more accurate in the future. That fixed the issue of the model being bad and useless, but it didn't explicitly correct any "error" in the model.
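For a rough sense of how a fundamentals-heavy model can diverge from the polls without any bug, here's a hedged sketch (invented numbers and a generic linear weight schedule, not 538's actual code or parameters): many fundamentals-plus-polls forecasts weight the prior heavily early in the campaign and shift weight toward the polls as election day approaches.

```python
# Illustrative only: invented numbers and a generic weighting scheme,
# not 538's actual model or parameters.

def blended_forecast(fundamentals_prior: float, poll_average: float,
                     days_until_election: int, horizon: int = 200) -> float:
    """Linearly shift weight from the fundamentals prior to the polls
    as election day approaches."""
    w_prior = min(1.0, days_until_election / horizon)
    return w_prior * fundamentals_prior + (1 - w_prior) * poll_average

# Invented inputs: fundamentals favor the incumbent, current polls don't.
prior, polls = 0.65, 0.42

for days in (180, 90, 10):
    print(days, round(blended_forecast(prior, polls, days), 2))
# 180 days out: ~0.63 (the prior dominates, incumbent favored)
#  90 days out: ~0.52
#  10 days out: ~0.43 (the polls dominate)
```

In midsummer, a model like this can show the poll-trailing candidate ahead purely because of the prior's weight; changing that schedule afterward is a modeling decision, not a bugfix.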
@benshindel I think it's genuinely ambiguous. Morris stopped a little short of explicitly "admitting error" in that post, but he did say "there were some issues, we fixed them," which is basically admitting error in more words (and in a face-saving way). I think either resolution is reasonable, though I would still disagree that this counts.
Good thread explaining why there wasn't an error causing the thing that everyone assumed must be caused by an error:
https://x.com/gelliottmorris/status/1827908201150349406?s=46&t=62uT9IruD1-YP-SHFkVEPg
I really don't think there's any evidence there was an "error". I don't think changing methodologies from one with a heavy political-science-y approach to priors to one that's more of a straightforward polling aggregator like the Silver Bulletin is "admitting an error", tbh. It's more like "admitting a change in approach" or something. Is there an "error" in Lichtman's keys, for example? Not really, it's just a dumb model.
@KevinBurke I don't understand the clause ", change in which polls are included,". Why would a change to their inclusion policy on polls be a "substantial error in their model"? Do you mean an error that caused a change in poll inclusion? I don't get this.
@nikki care to share why you believe there was an error? Versus just an over-reliance on some sort of incumbency advantage base rate?
@KevinBurke would a data error caused by 538 count? would a data error caused by a poll/economic indicator count?
If the data error affected other forecasters as well, NO; if it only affected 538, YES.