I've written some stuff about this topic here: https://forum.effectivealtruism.org/s/sC8KoZx9jAdrEtmHj
By "satisfactory" I mean from my perspective given the research I'm doing this year.
Mar 28, 3:40pm: In a year from today, will I have a satisfactory framework for describing the epistemology of AI alignment?
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ5 |
| 2 | | Ṁ2 |
| 3 | | Ṁ0 |
@traders The creator has deleted her account. This will resolve N/A unless someone posts a link to a clear post or similar thing from the creator that answers the question.
@MarkIngraham extraordinary claims require extraordinary evidence
There is so much low-hanging fruit that I cannot even envision AI progress slowing down for three years. So many things have only recently become possible, and beyond that there are mass-adoption waves you're not accounting for.