In what year will an OpenAI natural language processing model be more competent than me in my area of expertise?

In what year will an OpenAI natural language processing model's replies to ten well-engineered prompts in my area of expertise outperform my own answers to the same prompts?

Example of what I mean by a well-engineered prompt:
"The following is an excerpt from 'Guyton & Hall', the authoritative textbook on human physiology. It describes the respective roles of RAAS, ANP, and ADH in controlling plasma osmolality and free water clearance through their effects on renal tubular transport."

Currently, I find that ChatGPT's and GPT-3's answers to questions of this complexity sound convincing. On closer examination, however, they turn out to be factually wrong or inconsistent often enough to be useless.

This question is conditional on OpenAI's continued existence.
Resolves N/A if OpenAI ceases to exist, e.g. due to bankruptcy.

Resolution based on my own best judgment.

Since I will base my future use of NLP models for study purposes on their performance relative to my own, I am incentivized to judge as truthfully as possible.


My area of expertise is biomedical sciences. This means my prompts will have well-defined answers grounded in factual truth, but answering them sometimes requires integrating an understanding of the interplay of several complex systems.
