Will Five Thirty Eight admit substantial errors in their model before the end of August 2024?
Resolved YES (Sep 3)

Several people have raised or hinted at concerns about 538's presidential odds model, which has Biden favored even though Trump is leading in the polls. For example, https://twitter.com/RiverTamYDN/status/1813906369676001752.

If they just release an update that changes the model or weakens Biden's chances without admitting anything, this resolves NO. I'm looking for e.g. blog posts or tweets from 538 or their employees that admit an error.

I will be the judge of what "substantial" means: "we fixed coding errors" or an admitted change in which polls are included would both resolve YES. I am not going to bet.



There we have it, they finally admitted their issues.

This is not an error.

They even say explicitly "while this is how the complex statistics powering our model were supposed to work in theory"!!!

Producing unusual results and "addressing issues" is an error

The math for the model was correct, but the output was unusual

Producing unusual results is categorically NOT an error. Einstein's theory of general relativity predicted black holes. Those are an unusual result, but not an error.

An election model is useful BECAUSE it might predict unusual results!!! If it only predicted obvious results, it would be no better than the sources of information we already have!

> Producing unusual results and "addressing issues" is an error

Yeah this is my inclination as well. I closed the question because I didn't want people to bet purely based on how they thought I'd resolve the question, but I am tempted to resolve YES based on this evidence.

Can you share a link? I can't find that on their Twitter or website.

@KevinBurke I have no shares in this market (so hopefully unbiased or whatever), but that's patently absurd, I'll be honest. I think you should consult with other users / moderators before resolving this market.

There's just plainly a massive difference between a model having "ERRORS" and being "BAD AT PREDICTING THINGS". The latter is what they "fixed", not the former!

Let's say I think I can make a weather forecasting model that uses insight from birds' movement patterns, and I code a bunch of stuff tracking bird movements and use that to predict the weather.

Now after a month, it turns out my model is showing really high odds of hurricanes in North Dakota. So, I tweak the model to give much less weight to bird movements.

I did not have an "error" in my model. It worked precisely as it should have. I simply created a bad model. Weather didn't actually correlate well with bird movements!!!

That is precisely what happened. GEM and 538 made a model that weighted fundamentals heavily but this turned out not to be a very accurate method for predicting this election. There was no "error" in their model, it was just a bad model! They adjusted components of this to make their model more accurate in the future. This fixed the issue of their model being bad and useless, but didn't explicitly correct any "error" in the model.
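To make the "correct code, bad model" distinction concrete, here is a hypothetical sketch (the function, weights, and numbers are all invented for illustration, not 538's actual model): a forecast that blends a fundamentals-based prior with a polling average. The code computes exactly what it was told to compute; any strange output comes from the weighting choice, so the "fix" is a reweighting, not a bug fix.

```python
def forecast_margin(fundamentals_prior: float, poll_average: float,
                    prior_weight: float) -> float:
    """Blend a fundamentals prior with a poll average (both expressed as
    points of margin for the incumbent). prior_weight must be in [0, 1]."""
    assert 0.0 <= prior_weight <= 1.0
    # Simple convex combination -- no bugs here, just a modeling choice.
    return prior_weight * fundamentals_prior + (1 - prior_weight) * poll_average

# Illustrative inputs: fundamentals say the incumbent "should" lead by 2,
# while the polling average shows him trailing by 3.
fundamentals, polls = 2.0, -3.0

# "Bad model": heavy reliance on fundamentals keeps the incumbent favored
# even though he trails in every poll.
heavy = forecast_margin(fundamentals, polls, prior_weight=0.8)   # ≈ +1.0

# Reweighting toward the polls flips the forecast. The old code was never
# "wrong" in the coding-error sense; the prior was just weighted too heavily.
light = forecast_margin(fundamentals, polls, prior_weight=0.2)   # ≈ -2.0
```

Under this framing, 538's update reads as changing `prior_weight`, not as fixing a line of broken code.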

@benshindel I think it's genuinely ambiguous. Morris stopped a little bit short of explicitly "admitting error" in that post, but he did say "there were some issues, we fixed them" which is basically admitting error in more words (and in a face saving way). I think either resolution is reasonable, though I do think I would still disagree that this counts.

I think the admission is the key part: they have to actually acknowledge a problem with the old model, or someone involved has to in their personal capacity, under their own name (i.e. not off-the-record comments to a journalist).

Good thread explaining why there wasn't an error causing the thing that everyone thought must be caused by an error:

https://x.com/gelliottmorris/status/1827908201150349406?s=46&t=62uT9IruD1-YP-SHFkVEPg

Sigh. They won't admit their error.

I really don't think there's any evidence there was an "error". I don't think changing methodologies from one with a heavy political-science-y approach to priors vs one that's more of a straightforward polling aggregator like the Silver Bulletin is "admitting an error" tbh. It's more like "admitting a change in approach" or something. Is there an "error" in Lichtman's keys, for example? Not really, it's just a dumb model.

@KevinBurke I don't understand the clause ", change in which polls are included,". Why would a change to their inclusion policy on polls be a "substantial error in their model"? Do you mean an error that caused a change in poll inclusion? I don't get this.


@nikki care to share why you believe there was an error? Versus just an over-reliance on some sort of incumbency advantage base rate?

They fixed some of their data. Now it's a matter of whether anyone admits it.

What do you mean by that?

@KevinBurke would a data error caused by 538 count? would a data error caused by a poll/economic indicator count?

if the data error affected other forecasters as well - NO, if it only affected 538 - YES

Does it have to be a specific employee (Morris, who headed up the model) or does any 538 employee count?

I think someone who works on the model

Man, so it's not just me who thought that didn't look right.