Will GPT-5's replies to ten well-engineered prompts in my area of expertise outperform my own answers to them?
Example of what I mean by a well-engineered prompt:
"The following is an excerpt from Guyton & Hall's authoritative textbook on human physiology. It describes the respective roles of RAAS, ANP, and ADH in controlling plasma osmolality and free water clearance through their effects on renal tubular transport."
Currently, I find that ChatGPT's and GPT-3's answers to questions of this complexity sound convincing. On closer examination, however, they turn out to be factually wrong or inconsistent often enough to be useless.
Resolves N/A if there is nothing that can reasonably be called GPT-5.
Resolution is based on the best judgment of a friend of mine who has tech expertise and enough understanding of the biomedical sciences to judge competently. (I was convinced that I might be biased, so I changed this from resolving it based on my own best judgment.)
Since I will base my future use of GPT for study purposes on its performance relative to my own, I am incentivized to judge as truthfully as possible.
@vluzko Fair enough. Do you think this would be better if I asked a friend of mine who has tech expertise and enough understanding of the biomedical sciences to judge competently?
@L Do you think there will be no GPT-5, or do you think the resolution criteria are too murky? If it's the latter, help me improve this.