It is my birthday tomorrow. I'm turning 23. I don't know if I'll survive the next 15.
I'm physically healthy. I don't drink, I don't drive, I won't commit suicide. The base rates from actuarial tables put my expected death around 2085.
But given existential risks from transformative AGI, there is a >1% chance that everybody dies.
So please bet on this market about p(doom) on short timelines!
@Gigacasting, here is my reasoning:
• My baseline for all-cause mortality for my age group and region comes from https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid
• I cross-reference that against historical CDC metrics (cited at the bottom)
• I calculate confidence intervals for the conditional probability of each leading cause of death based on my gender, ethnicity, personality, income, and lifestyle.
• I JOIN data FROM prediction markets for transformative AI timelines ON markets for x-risk from unfriendly AI.
• I find that for a wide range of estimates, the likelihood of “AGI kills me” dominates as the primary threat factor (a rough sketch of this comparison follows this list).
• I sanity check my numbers, and do a “common sense” pass. My inner 5-year-old says that people usually die when they are so sick that their body starts breaking down, and that it is unusual for people in their prime to tragically pass away.
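To make the comparison concrete, here is a minimal sketch of the arithmetic in Python. All the specific numbers (annual baseline mortality, market-implied AGI and doom probabilities) are illustrative placeholders, not the values from my actual model:

```python
# Minimal sketch: compare 15-year baseline mortality for a healthy 23-year-old
# against an AGI-doom estimate built from prediction-market odds.
# All numbers below are illustrative placeholders, not real model inputs.

annual_baseline_mortality = 0.0008  # placeholder ~0.08%/yr; the real figure comes from the StatCan/CDC tables cited below
years = 15

# Probability of dying from ordinary causes over the window,
# treating each year as an independent Bernoulli trial.
p_baseline = 1 - (1 - annual_baseline_mortality) ** years

# Hypothetical market-implied inputs: P(transformative AGI within 15 years)
# and P(everybody dies | transformative AGI).
p_agi_in_window = 0.30
p_doom_given_agi = 0.10
p_agi_doom = p_agi_in_window * p_doom_given_agi

print(f"Baseline 15-year mortality: {p_baseline:.2%}")   # ~1.19% with these placeholders
print(f"AGI-doom 15-year estimate:  {p_agi_doom:.2%}")   # 3.00% with these placeholders
print("AGI risk dominates" if p_agi_doom > p_baseline else "Baseline risk dominates")
```

The point is only that the conclusion holds across a wide range of plausible inputs, not that these particular placeholders are right.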
Here are some criticisms I have of my model:
• It was thrown together quickly and without epistemic rigor. I was idly pondering my birthday tomorrow.
• I did not add worst-case “what-if” scenarios for traumatic events, e.g. violent rape or forced drug addiction, which may severely impact my distribution of outcomes. I find it uncomfortable to contemplate morbid scenes such as these in depth, and statistically, the chance that they occur is minor.
• I do not consider my genetics or family history. I am unaware of anything out of the ordinary, but it is possible I may be prone to some illness. I do not believe getting a comprehensive medical screening is a priority at this time. It would be a paranoid, hypochondriac waste of resources to sign up for expensive scans when I have no reason to think the information would be useful. (This would change if I noticed any symptoms that warranted a full diagnosis.)
• My actual lifespan is not independent of how long I expect to live. In particular, there is a direct relationship between my understanding of reality and the actions I choose to take. For instance, if I were delusional and believed I could fly, then I might attempt to touch the sky. More realistically, if I falsely believed I was in danger, then I would move to mitigate that risk, and the cost of the steps I'd take to defend myself could prove more hazardous than the illusory peril I'd imagined. Conversely, an overly confident sense of safety would invite taking on more risk than is optimal, as I would have miscalculated the point that maximizes reward with no possibility of ruin outside of uncertain long tails that EITHER lie outside my locus of control OR are not worth eliminating.
I appreciate the feedback! Some thoughts:
• Making a perfectly calibrated personal mortality model is not relevant to my goals. Neither would it be a feasible undertaking given the irreducible loss involved in predicting what will happen 15 years from now. At that scale, there are always going to be issues. We must concentrate our efforts EITHER on questioning the overall architecture OR on removing the most egregious sources of massive error. I am not convinced that analysis at higher resolution would lead to a meaningfully different conclusion.
• I am open to other perspectives.
• If your objection is that all of the prediction markets are wrong, you are empowered to bet against the consensus position.
• If your objection is that the comparison of verified and speculative risks is irresponsible, I can recommend some resources on security mindset that I have found compelling (If you’ve already read them and still disagree, I would love to know why!).
• If your objection is that my measurements for non-AI mortality are too low by several orders of magnitude, I invite you to share WHERE my methods are mistaken AND sketch out an outline of what a corrected version would look like.
• If your objection is that I should not be rounding off the contribution of all leading causes of early death just because they aren’t as “popular” as AI risk, I assure you that I take them quite seriously and am committed to employing sensible precautions against them.
• If the shape of your objection does not fit into the shapes I have already described, pretty please help me understand what your actual crux is.
• If your objection is “TL;DR”, let me know HOW far in you got BEFORE I lost you. i.e., WHERE did I lose your interest? WHAT part was too boring OR effortful TO BE worth reading?
Xu JQ, Murphy SL, Kochanek KD, Arias E. Mortality in the United States, 2021. NCHS Data Brief, no 456. Hyattsville, MD: National Center for Health Statistics. 2022. DOI: https://dx.doi.org/10.15620/cdc:122516.
@SheikhAbdurRaheemAli One line summary: if it’s ambiguous whether I killed myself or not, resolve as N/A
@AshleyDavies Correct. We build off of life insurance invalidation criteria (a rough decision sketch follows these lists). The market resolves N/A if I:
• kill myself (by any method)
• die in a preventable accident (motor vehicle collision, falls, drowning)
• perish due to participating in a high-risk activity
• become a homicide victim
• am declared missing
• suffer liver/kidney failure
However, the market still resolves NO if the cause of death is:
• Cardiovascular disease, including stress-induced myocardial infarction (heart attack)
• Stroke or aneurysm
• Malignant neoplasm (cancer)
• Infectious disease
• War or natural disaster
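In code terms, roughly this (the category strings are my own shorthand for the bullets above, and the “still alive resolves NO” and “otherwise resolves YES” branches are my reading of how the market is meant to work, not official wording):

```python
from typing import Optional

# Rough encoding of the resolution rules above. Category strings are my own
# shorthand for the bullet points, not official market terminology.

NA_CAUSES = {
    "suicide", "preventable accident", "high-risk activity",
    "homicide", "declared missing", "liver/kidney failure",
}
NO_CAUSES = {
    "cardiovascular disease", "stroke or aneurysm", "cancer",
    "infectious disease", "war or natural disaster",
}

def resolve(cause_of_death: Optional[str]) -> str:
    """Map a cause of death (or survival) to a market resolution."""
    if cause_of_death is None:
        return "NO"        # assumed: surviving to market close resolves NO
    if cause_of_death in NA_CAUSES:
        return "N/A"       # life-insurance-style invalidation criteria
    if cause_of_death in NO_CAUSES:
        return "NO"        # ordinary mortality, not the x-risk scenario
    return "YES"           # assumed: death attributable to AI doom resolves YES
```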
Hope that clarifies the resolution criteria. Thank you for the birthday wish!