Resolves YES if, by December 31, 2026, at least 5 AI safety/security researchers from at least 3 different organizations publicly state that prompt injection has been effectively solved for production systems. At least one organization must be independent (not OpenAI, Anthropic, Google DeepMind, or Meta AI). Qualifying researchers must have published AI safety/security research or be employed at a recognized AI safety organization.
Related market for 2027: https://manifold.markets/cjroth/will-prompt-injection-be-effectivel-dtc985UUZ9?r=Y2pyb3Ro
Update 2026-02-28 (PST) (AI summary of creator comment): The creator has updated the resolution criteria in response to feedback about the title conflicting with the original criteria (which focused on claims rather than actual solving of prompt injection).
Update 2026-03-01 (PST) (AI summary of creator comment): "Solved" means we know how to fix prompt injection, not that it is no longer an issue in practice. (Similar to how SQL injections are solved in that we know how to avoid them, even though they still occur in practice.)
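To make the SQL-injection analogy concrete: the vulnerability class is considered "solved" because a known, reliable fix (parameterized queries) exists, even though unparameterized code still ships in practice. A minimal illustrative sketch using Python's standard `sqlite3` module (table and payload are made up for the example):

```python
import sqlite3

# Toy database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

payload = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern: untrusted input spliced into the query string,
# so the payload is interpreted as SQL and matches every row.
unsafe_rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{payload}'"
).fetchall()

# Known fix: a parameterized query treats the input strictly as data,
# so the payload matches nothing.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(unsafe_rows)  # [('alice',), ('bob',)] -- both rows leak
print(safe_rows)    # [] -- payload is just a weird username, no match
```

The market's question is whether researchers will claim an analogous general-purpose fix exists for prompt injection, not whether every deployed system has adopted it.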
@ChurlishGambit I'll consider updating the title to "Will 5+ frontier AI employees claim prompt injection is solved by 2027?". Going to wait for more feedback since this just launched.
@ProbabilityPanda More feedback isn't going to change the fact that the title conflicts with your chosen resolution criteria.
@ProbabilityPanda You'd need something that proves prompt injection has been solved, not just claims: for example, a study put out by someone not financially involved in making chatbots, testing by reputable journalists, things of that sort.
@ChurlishGambit Updated the resolution criteria. It's not perfect, but I don't want to make it too complicated.