
By 2030, can we convert at least 10% of an AI's weights to C code, enhancing interpretability?
40% chance
This question is managed and resolved by Manifold.
The framing is suspicious: converting weights to C code may not improve interpretability by itself, and could even decrease it. How would you judge that the C code is more interpretable than the weights?
Related questions
Will AI agents be able to regularly code small features for us in a year?
77% chance
In 2029, will any AI be able to construct "reasonably" bug-free code of >= 10k LOC from a natural language specification? (Gary Marcus benchmark #4)
78% chance
Will AI be able to write, compile, and unit test a single .c file to reproduce GPT-2 training from PyTorch code by 2025?
29% chance
Will an AI system be able to fully refactor a 10k+ line codebase before 2026?
47% chance
Will AI/ChatGPT technology be good enough to auto-convert MATLAB code to runnable Python code by EOY 2025?
85% chance
On Dec 31, 2025, will a widely available AI model be able to write a sophisticated 2000 line program?
61% chance
By the end of the year, what percentage of the code on GitHub will be AI-generated?
Will AI be able to write, compile, and unit test a single .c file to reproduce GPT-2 training from PyTorch code by 2026?
70% chance
By 2030, will over 50% of software development projects be primarily created by AI, with minimal human coding?
56% chance
Will there be entry-level AI coders by 2026?
67% chance