
Current AI agents (circa Jan 2024) are quite bad at clicking, reading screenshots, and interpreting the layout of webpages and GUIs. This is expected to change in the near future, with AI agents becoming able to navigate an arbitrary GUI about as well as a human.
Example of an early system of this type: https://github.com/OthersideAI/self-operating-computer/tree/main?tab=readme-ov-file#demo
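For context, here is a minimal sketch of the kind of loop such a screen-operating agent runs: capture the screen, ask a multimodal model for the next action, then perform it with OS-level mouse/keyboard control. This is not the linked project's actual code; `ask_model` is a hypothetical placeholder for whatever vision model the agent calls.

```python
# Illustrative sketch only; assumes a screenshot-in, mouse/keyboard-out loop.
import pyautogui


def ask_model(screenshot, goal):
    """Hypothetical call to a multimodal model.

    Would return an action dict such as {"type": "click", "x": 640, "y": 360},
    {"type": "type", "text": "..."}, or {"type": "done"}.
    """
    raise NotImplementedError


def run_agent(goal: str, max_steps: int = 50) -> None:
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()   # read the current screen
        action = ask_model(screenshot, goal)  # decide the next step
        if action["type"] == "done":
            break
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.typewrite(action["text"])


# Example: run_agent("Open a browser, log in to Manifold, and resolve this market YES")
```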
Resolution criteria:
This question resolves YES if, on the day after 2024 ends, I can direct an AI agent to resolve this market as YES using only voice commands while blindfolded. It resolves NO if the attempt takes more than 30 minutes.
Update:
There are no restrictions on whether the AI agent is free, open source, proprietary, local, remote, etcetera.
Update:
If someone else on Manifold can demonstrate an AI agent resolving a Manifold market as YES (while following the same restrictions that I would have followed), then I'll resolve this one as YES too. This is in case I'm not able to get access to the AI agent myself for testing.
Update:
The agent will need to be able to open a web browser and log in to Manifold on its own.
Update 2025-02-01 (PST) (AI summary of creator comment): Additional Resolution Criteria:
The AI agent must not require modification with custom code (e.g., writing scripts).
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ1,091 |
| 2 | | Ṁ479 |
| 3 | | Ṁ351 |
| 4 | | Ṁ310 |
| 5 | | Ṁ290 |