Will the 2024 Atlantic Hurricane Season be worse than 2017?
Dec 1 · 13% chance

Resolves "YES" if the 2024 Atlantic hurricane season meets a majority of the following criteria:

(1) Total economic damage estimate of at least $295 billion
(2) At least 17 named storms
(3) At least 10 hurricanes
(4) At least 6 major hurricanes
(5) At least 2 category 5 hurricanes
(6) At least 3,000 estimated deaths
(7) At least 224 units of Accumulated Cyclone Energy

Only storms occurring during hurricane season (June 1 - November 30) will count towards the total. Resolution may take into account reporting/estimates that become public after November 30 if necessary, as long as the reporting is of events occurring during hurricane season.
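The majority rule above can be sketched in code. This is an illustrative sketch, not the official resolution procedure: the threshold values are the seven criteria listed above (which mirror 2017's season totals), and the `season_2017` dict is an assumed input format, not an official data feed.

```python
# Thresholds from the seven criteria above (which mirror 2017's season totals)
THRESHOLDS = {
    "damage_billion_usd": 295,
    "named_storms": 17,
    "hurricanes": 10,
    "major_hurricanes": 6,
    "category_5_hurricanes": 2,
    "estimated_deaths": 3000,
    "ace": 224,
}

def resolves_yes(season: dict) -> bool:
    """YES iff a majority (at least 4 of the 7) criteria are met."""
    met = sum(season[key] >= threshold for key, threshold in THRESHOLDS.items())
    return met >= len(THRESHOLDS) // 2 + 1

# 2017's own totals meet every criterion, so 2017 itself would resolve YES
season_2017 = {"damage_billion_usd": 295, "named_storms": 17, "hurricanes": 10,
               "major_hurricanes": 6, "category_5_hurricanes": 2,
               "estimated_deaths": 3000, "ace": 224}
print(resolves_yes(season_2017))  # True
```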

2017 was the costliest hurricane season on record thus far. The latest forecast from CSU (FORECAST OF ATLANTIC HURRICANE ACTIVITY FOR 2024 (colostate.edu)) suggests that 2024 might exceed it.


September 9th update:
At least $9.04 billion in damages (mainly from Beryl and Debby)
6 named storms (Francine in Gulf now)
3 hurricanes (Beryl, Debby, Ernesto)
1 major hurricane (Beryl)
1 category 5 hurricane (Beryl)
94 estimated fatalities

Accumulated Cyclone Energy 55.1

@JakeLowery Edit: I redid my calculations, found an error in the largest probability, criterion (3), and reduced it quite a bit...

I now get the following probabilities for (1)-(7), adding in the IVCN model forecast for Francine's ACE (about 5 more units) and assuming it peaks at only category 2 (per the NHC forecast):

[0.004, 0.11, 0.2, 0.03, 0.24, 0.067, 0.03]

This still yields only a ~0.1% chance.

If you assume Francine becomes a category 3, the 0.03 probability for the major-hurricanes criterion rises to ~0.09, and the overall probability increases to a ~0.2% chance.
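These Monte Carlo figures can be cross-checked exactly (still assuming the criteria are independent) with a Poisson-binomial recursion, which has no sampling noise. A sketch; the probability vectors are the ones quoted above, and `prob_majority_exact` is my naming, not from the original comment.

```python
import numpy as np

def prob_majority_exact(probs, k=4):
    """Exact P(at least k criteria met), assuming independence,
    via the Poisson-binomial recursion (no Monte Carlo noise)."""
    dist = np.array([1.0])  # P(count = j), built up one criterion at a time
    for p in probs:
        # either this criterion misses (shift nothing) or hits (shift count by 1)
        dist = np.append(dist, 0.0) * (1 - p) + np.insert(dist, 0, 0.0) * p
    return dist[k:].sum()

base = [0.004, 0.11, 0.2, 0.03, 0.24, 0.067, 0.03]  # Francine stays category 2
bumped = base.copy()
bumped[3] = 0.09                                     # Francine reaches category 3

print(prob_majority_exact(base))    # ~0.0011, i.e. the ~0.1% figure
print(prob_majority_exact(bumped))  # ~0.0019, i.e. the ~0.2% figure
```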

@parhizj Nice work. It is amazing how calm it was for the long stretch from August until now; it really seems like it would take something out of distribution to cause a YES here.

bought Ṁ782 NO

This is one of the trickier markets to get a good estimate given its complexity, so I've put it off. Tackling it piecemeal with some sketchy analysis...

(Dated) forecasts from different groups had a range of values on the high end. I'm going to rely on a 1991-2023 climatological baseline, conditioned on the remaining days, when I can, instead of those dated forecasts. Where I have the sketchiest of data, I take the high end.

(1) I only have the sketchiest of models, which puts the exceedance probability at 2% on the high end (using my ACE forecast, Wikipedia data, and some basic modeling -- modeling that doesn't explain most of the variance), with a middle value of <<1%. A far better model for this and (7) would rely on expected storm surge or other variables, using some better predictor or a more realistic, complicated model.

(2) I think this is no longer likely to be met given the next month might be "normal" or even relatively quiet. Moreover, climatologically (1991-2023) I get 5% for >= 17 named storms.

(3) I put this at 54% (this is too precise, but it's a rough guess).

(4) 6% (we are only 1/6 of the way there with only Beryl).

(5) 28%; given we have already had Beryl, this is not as bad as (4).

(6) Similar to (1), a very sketchy model, putting the exceedance probability at 15% on the high end, with a middle value of 1%.

(7) 224 ACE seems very unlikely. Even https://weathertiger.substack.com/p/real-time-2024-atlantic-hurricane presently yields a value of ~5%. My own climatological notebook puts the season around 125 ACE (224 ACE would be ~0% in my notebook). So I will use ~3% (the average of these two estimates).

That gives the probabilities for (1)-(7):

[0.02, 0.05, 0.54, 0.06, 0.28, 0.15, 0.03]

Given the logistic nature of the question, I treat each criterion as an independent Bernoulli trial with the above probabilities and run a number of simulations, thresholding on the sum of the trials in each simulation to get that simulation's result. This assumes the independence of the criteria, which is itself a bit sketchy. Code below for reference if anyone wants to try different probabilities.

From the code below I get <1% for the above probabilities. It's hard for me to say at face value whether it is the extremeness of the 2017 season or the conjunction of the very specific criteria (potentially giving 2017 a very narrow profile in probability space) that is contributing to the low probability, as none of the PDFs for criteria (2)-(6) are going to be linear. If someone plugged a range of reasonable values into the simulation, maybe they could come up with an answer.

For reference, as a sanity check: if we treat (1) and (6) -- the sketchiest probabilities -- as given by setting them to 1, we get ~28% back, which roughly matches the lower of the two highest remaining criteria, (5).

import numpy as np

def calculate_prompt_probability(criteria_probabilities, num_simulations=1000000):
    N = len(criteria_probabilities)
    k = N // 2 + 1  # majority threshold (4 of 7)
    # one Bernoulli draw per criterion per simulation
    simulations = np.random.binomial(1, criteria_probabilities, size=(num_simulations, N))
    majority_met = np.sum(simulations, axis=1) >= k
    return np.mean(majority_met)

question_probs = [0.02, 0.05, 0.54, 0.06, 0.28, 0.15, 0.03]
calculate_prompt_probability(question_probs)
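As an exact cross-check on the Monte Carlo above (still under the independence assumption): with only 7 criteria there are just 2^7 = 128 met/unmet combinations, so the majority probability can be enumerated directly. A sketch; `prob_majority_enumerated` is my naming, not from the original comment.

```python
from itertools import product

def prob_majority_enumerated(probs, k=4):
    """Exact P(at least k criteria met) under independence,
    by summing over all 2^n met/unmet combinations."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(probs)):
        if sum(outcome) >= k:
            # probability of this exact combination of met/unmet criteria
            weight = 1.0
            for met, p in zip(outcome, probs):
                weight *= p if met else (1 - p)
            total += weight
    return total

question_probs = [0.02, 0.05, 0.54, 0.06, 0.28, 0.15, 0.03]
print(prob_majority_enumerated(question_probs))  # ~0.0053, consistent with the "<1%" figure
```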

@parhizj I love the detailed analysis, but surely the probabilities are not close to independent, right?

@JakeLowery Right. I did say that was itself another assumption when treating them as independent Bernoulli trials and combining them.

@JakeLowery I'm open to ideas on how to recombine them if anyone has any... Without the independence assumption required to combine them simply, I think it also makes sense to just take the lowest of the four highest probabilities as an upper probability (since that fourth criterion must be met in order to reach a majority). That gives an upper probability of 6% among the highest four (54%, 28%, 15%, 6%).

Edit: a downside of this is that the estimate is only as good as ALL of the piecewise probabilities you come up with... otherwise you won't get the correct '4th' probability.
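One way to formalize that heuristic (my framing, not the commenter's): at the opposite extreme from independence, assume the criteria are perfectly correlated -- a single latent "season severity" draw determines them all. Under that comonotone assumption, the majority probability is exactly the 4th-highest criterion probability, here 6%. A sketch:

```python
import numpy as np

def prob_majority_comonotone(probs, k=4, num_simulations=1_000_000, seed=0):
    """P(at least k criteria met) when all criteria share one latent draw:
    criterion i is met iff u < probs[i], so the count met is a step function
    of u and P(majority) equals the k-th highest criterion probability."""
    rng = np.random.default_rng(seed)
    u = rng.random(num_simulations)                 # one shared draw per simulated season
    met = u[:, None] < np.asarray(probs)[None, :]   # criterion i met iff u < p_i
    return np.mean(met.sum(axis=1) >= k)

probs = [0.02, 0.05, 0.54, 0.06, 0.28, 0.15, 0.03]
print(prob_majority_comonotone(probs))  # ~0.06, matching sorted(probs)[-4]
print(sorted(probs)[-4])                # 0.06, the closed-form value
```

Independence and comonotonicity bracket the answer under positive dependence, which is one reason the ~6% figure works as an upper estimate.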

August 26th update:
It has been a very quiet couple of weeks -- estimated casualties have climbed to 88, with no other meaningful activity to note. However, meteorologists have noted shifting weather patterns that point towards a big acceleration in storm activity for September: Supercharged September: Atlantic hurricane season to intensify dramatically (accuweather.com)

August 15th update:
At least $8.915 billion in damages (mainly from Beryl and Debby)
5 named storms (latest: Ernesto)
3 hurricanes (Beryl, Debby, Ernesto)
1 major hurricane (Beryl, although Ernesto is threatening to get there)
1 category 5 hurricane (Beryl)
85 estimated fatalities

Accumulated Cyclone Energy 44 (35.1 of which from Beryl, source Real-Time North Atlantic Ocean Statistics compared with climatology (colostate.edu))

Might have been better as a multi to break up the seven…

Re. Maria mortality…

https://pubmed.ncbi.nlm.nih.gov/30318387/

I don’t recall this study ever being reported widely in the US media … 😦

Regarding the criteria, it would be good to know what your resolution sources will be (and an approximate timeline), as this is ambiguous regarding timelines… (Economic damage estimates might take months; similarly for deaths -- Puerto Rico is a case in point, where it took a year for the study to come out, and the NHC report on Maria (https://www.nhc.noaa.gov/data/tcr/AL152017_Maria.pdf) was only revised last year to include the study, 5 years later.)

I will resolve based on death estimates from no later than a month after the last significant storm, if that is the last input needed to resolve the question. Economic damage estimates are harder, but can be ranged reasonably within a similar timeframe. If consensus reporting on the range of economic damages is ambiguous for this question's resolution AND the economic damages are the decisive factor for the overall resolution, I will wait months for something more authoritative, but in no case a year or longer.