Will an AI system legally act as a healthcare proxy anywhere in the US by 2032?

In many US states, patients can name a "healthcare proxy" to make healthcare decisions on their behalf when they are unable to make such decisions themselves, for example because they are unconscious or in too much pain to respond to their surroundings. Proxies may use their understanding of the patient's beliefs and wishes to choose between treatment options, up to and including ending treatment and, with it, the patient's life.

This question resolves YES if there is public reporting of an instance of an AI system functionally acting as a healthcare proxy for a person anywhere in the US before 2032, and the legality of its use is not struck down.

This criterion is meant to be generous to a variety of circumstances or legal pathways by which an AI system might ultimately act as a healthcare proxy, while excluding cases where the legality of its use is struck down. To help clarify, here are example scenarios that would or wouldn't resolve YES.

YES:

  • A patient submits an advance directive instructing physicians to consult ChatGPT, perhaps with a specific prompt, to make all treatment decisions as if it were a proxy. The patient becomes incapacitated, and the directive is followed.

  • A patient names a human healthcare proxy, but formally (or informally, in a way that reporters can verify) instructs them to defer to an AI system. The patient becomes incapacitated, and the proxy uses an AI as instructed.

  • A state passes a law specifically allowing AI systems to be chosen as healthcare proxies by patients, or allowing a specific AI system to act as a proxy by default for incapacitated patients whose identities cannot be determined, and this option is used on at least one actual occasion.

Not YES:

  • Any of the above cases takes place, but is challenged in court, leading to a ruling that the use of AI had been inappropriate in that case. (If a YES case above occurs, I'll wait 6 months to see whether legal challenges appear; if they do, I'll wait for their conclusion before resolving.)

  • A hospital rolls out the use of AI assistants to give physicians second opinions on treatment plans. The assistant makes a recommendation for the treatment of an incapacitated patient with no advance directive, and this recommendation is followed.

  • A patient names a human healthcare proxy, but does not ask the proxy to defer to an AI. The patient becomes incapacitated, and the proxy consults an AI system before deciding what to do.

I am not a lawyer, and I welcome feedback and suggestions on resolution criteria. In ambiguous cases, I intend to seek the opinion of a lawyer before making a resolution decision.
