SWE-bench is a benchmark developed to evaluate whether language models can resolve real-world GitHub issues. Each instance in SWE-bench corresponds to a GitHub issue, and the leaderboard ranks models by the percentage of instances they resolve. The leaderboard is divided into two categories: Unassisted and Assisted.
Assisted: In this category, models are evaluated in the "oracle" retrieval setting, where the correct files to edit are provided to them directly.
This question is only about the Assisted category of this benchmark.
http://www.swebench.com/#
The current SOTA in the Assisted category is 4.8% of instances resolved.
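For intuition on how leaderboard percentages map to issue counts, here is a minimal sketch. It assumes the full SWE-bench test split of 2,294 instances (the count reported in the SWE-bench paper); the helper names are just for illustration.

```python
# Minimal sketch: convert a SWE-bench leaderboard percentage to an
# approximate count of resolved issues, and vice versa.
# Assumes the full test split size of 2,294 instances (per the SWE-bench paper).

TOTAL_INSTANCES = 2_294

def resolved_count(percent: float, total: int = TOTAL_INSTANCES) -> int:
    """Approximate number of issues resolved at a given leaderboard percentage."""
    return round(total * percent / 100)

def resolved_percent(count: int, total: int = TOTAL_INSTANCES) -> float:
    """Leaderboard percentage for a given number of resolved issues."""
    return 100 * count / total

# The current Assisted SOTA of 4.8% corresponds to roughly 110 resolved issues.
print(resolved_count(4.8))               # -> 110
print(round(resolved_percent(110), 1))   # -> 4.8
```

So a move in the SOTA of even one percentage point represents on the order of 23 additional issues resolved.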
The prediction market will resolve based on the SWE-bench leaderboard standings as of 31 December 2024.
Multiple answers can be correct.