
Will a large language model be trained by Dec 31, 2024, for the following task: parsing history books or similar material to identify potential natural experiments?
By natural experiments, I mean configurations of the kind leveraged in econometrics to determine causal relations between variables, using techniques such as difference-in-differences, regression discontinuity, etc.
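For concreteness, here is a minimal sketch of the kind of analysis such a configuration enables, a difference-in-differences regression on simulated data. The toy data, variable names, and effect size are my own assumptions for illustration, not anything from the question itself.

```python
# Minimal difference-in-differences sketch on simulated data (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Half the units are "treated" (e.g. regions affected by a historical policy
# change), and each unit is observed either before or after the event.
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})

# Outcome = group effect + common time trend + a true treatment effect of 2.0
# on treated units after the event, plus noise.
df["outcome"] = (
    1.0 * df["treated"]
    + 0.5 * df["post"]
    + 2.0 * df["treated"] * df["post"]
    + rng.normal(0, 1, n)
)

# Under the parallel-trends assumption, the coefficient on treated:post
# estimates the causal effect of the event.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.summary().tables[1])
```

The coefficient on the interaction term should come out close to 2.0, which is the point of the design: the pre/post comparison in the untreated group nets out the common time trend.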
Resolves YES if, by midnight Dec 31, 2024, at least one paper has appeared claiming to have done this. The paper must have received at least 2 citations by that date, excluding self-citations (defined as citations by papers whose first author is the same as the first author of the original paper).
I will consider arXiv preprints or IDEAS preprints as papers for the purposes of this question, in addition to papers officially published in refereed journals.
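To make the task in the question concrete, here is a hypothetical sketch of how one might prompt an LLM to flag candidate natural experiments in a passage from a history book. The prompt wording, the helper function, and the model name are all assumptions of mine, not an existing tool or the method any paper would need to use; the OpenAI client call assumes the `openai>=1.0` Python package.

```python
# Hypothetical sketch: ask an LLM to flag candidate natural experiments in a
# historical passage. Prompt text and model choice are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are an econometrics research assistant.
Read the passage below and list any events that could serve as natural
experiments: e.g. a policy applied to some regions but not others, an
arbitrary eligibility threshold, or a sudden exogenous shock. For each
candidate, name the technique that could exploit it (difference-in-differences,
regression discontinuity, instrumental variables, ...).

Passage:
{passage}
"""

def find_natural_experiments(passage: str) -> str:
    """Return the model's list of candidate natural experiments for a passage."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption
        messages=[{"role": "user", "content": PROMPT.format(passage=passage)}],
    )
    return response.choices[0].message.content
```

A paper satisfying the question would of course need to run something like this at scale over books and validate the extracted candidates, not just prompt once.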
🏅 Top traders
| # | Name | Total profit |
|---|---|---|
| 1 | | Ṁ97 |
| 2 | | Ṁ25 |
| 3 | | Ṁ14 |
| 4 | | Ṁ12 |
This does not pass the bar, but it's still impressive IMO: https://chat.openai.com/share/00ab94be-50e1-4fbd-88e4-51b1c5d960ff