Created by Jack

This is an unintended duplicate. See here instead:

https://manifold.markets/post/comparing-election-forecast-accurac

It's interesting to compare forecasts between different prediction platforms, but it's rare for them to have questions that are identical enough to compare easily. Elections offer one great opportunity.

I will score several prediction platforms on the following set of questions on the outcome of the 2022 US midterm elections.

For each prediction platform, I will take the predicted probabilities as of Monday evening (the night before Election Day) and compute the average log score on these questions. The log score is a measure of prediction accuracy: a higher (closer to zero) log score means better accuracy.
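As a sketch of the scoring described above: for each race, take the probability the platform assigned to the outcome that actually occurred, and average the natural logs. The probabilities below are made up purely for illustration.

```python
import math

def average_log_score(probs):
    """Average log score over the probabilities each platform
    assigned to the outcomes that actually occurred.
    Scores are <= 0; closer to 0 means better accuracy."""
    return sum(math.log(p) for p in probs) / len(probs)

# Hypothetical probabilities a platform gave the eventual winners:
example_probs = [0.90, 0.70, 0.55, 0.80]
score = average_log_score(example_probs)
```

Note that a single confident miss (a low probability on the actual outcome) drags the average down sharply, which is why the log score rewards calibration and not just picking winners.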

However, note that the election results are highly correlated, so the platform that scores best may not actually have had the best set of predictions. The best-scoring forecast will probably be the one that happened to best predict the broader question of how far left or right the entire election skewed, and some of that is "luck" that might not repeat across other election years. To truly measure accuracy, we'd need to run this experiment over several election cycles.

Of course, I've also created meta prediction markets on which prediction platform will be the most accurate (linked at the end of this post).

I've selected this set of 10 questions to compare across prediction platforms:

  • Senate control

  • House control

  • Senate races

    • Pennsylvania - Mehmet Oz R vs John Fetterman D

    • Nevada - Adam Laxalt R vs Catherine Cortez Masto D

    • Georgia - Herschel Walker R vs Raphael Warnock D

    • Wisconsin - Ron Johnson R vs Mandela Barnes D

    • Ohio - J. D. Vance R vs Tim Ryan D

    • Arizona - Blake Masters R vs Mark Kelly D

  • Governor races

    • Texas - Greg Abbott R vs Beto O'Rourke D

    • Pennsylvania - Doug Mastriano R vs Josh Shapiro D

These races were selected because they drew a high amount of interest across the prediction platforms. They are not all highly competitive, which is actually useful: it lets us examine how accurate and well-calibrated predictions are across a range of competitiveness. The main reason for using a limited set of questions is that not all platforms made forecasts on all races; it also makes data collection easier for me.

I plan to compare these prediction platforms:

  • Manifold

  • 538

  • Polymarket

  • PredictIt

  • Election Betting Odds (an aggregate of a few prediction markets)

  • Metaculus

  • Manifold Salem Center Tournament

Others can be added; just leave a comment with the data on their predictions for each of the questions above.

Fine print:

  • In the event that the winner of an election is not one of the current major-party candidates, I will exclude that race from the calculation. This is to normalize slightly different questions between platforms - some ask which candidate will win, others ask which party will win.

  • For PredictIt, I will use the last YES prices: the inferred Republican probability will be the average of the Republican YES price and 1 minus the Democratic YES price. I will not use the NO prices, because the YES prices are what the platform displays most prominently.

  • For Metaculus, I will use the Metaculus Prediction. I will also score the Metaculus Community Prediction for comparison.

  • Manifold Salem Center Tournament does not have a question on the Texas governor race. I will substitute Manifold's prediction there.
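The PredictIt rule in the fine print above can be sketched as follows. The prices here are hypothetical, purely for illustration.

```python
def inferred_republican_prob(rep_yes_price, dem_yes_price):
    """Infer the Republican win probability from PredictIt YES prices:
    average the Republican YES price with the complement of the
    Democratic YES price (NO prices are ignored)."""
    return (rep_yes_price + (1 - dem_yes_price)) / 2

# Hypothetical example: Republican YES trades at $0.60,
# Democratic YES trades at $0.45.
p_rep = inferred_republican_prob(0.60, 0.45)
```

Averaging the two YES prices this way partially cancels the overround (the two YES prices typically sum to more than $1), giving a single probability comparable to the other platforms' forecasts.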

Other notes:

See prediction questions on which platforms will be most accurate here: https://manifold.markets/group/election-forecast-comparison