For example, the text below was generated by clicking this button:

I'll search for relevant information about LLM-generated content labeling practices and any existing standards or discussions around this topic.

#### Resolution criteria
This market resolves to a poll voted on by other users. The question asks for a normative judgment about whether LLM-generated market descriptions should be clearly labeled, which is inherently subjective and depends on individual values regarding transparency, trust, and disclosure practices.
#### Background
The creation of content with LLM tools must be fully disclosed, and platforms like Medium require stories incorporating AI assistance to be clearly labeled as such. Under the EU AI Act, LLM providers are now required to clearly label AI-generated content. Multiple U.S. states including California, Florida, and New York have passed laws mandating disclosures when content includes AI-generated material. China's "Measures for Labeling AI-Generated Content" (effective September 2025) compel all online services that create or distribute AI-generated content to clearly label such content.
#### Considerations
Research shows that AI disclosure labels may have only weak or no effect on people's perceptions and use of AI-labeled content, suggesting that labeling policies serve functions beyond persuasiveness, such as helping users make informed decisions. Academic frameworks distinguish between mandatory, optional, and unnecessary disclosure of AI use, arguing that disclosure should be mandatory only when AI use is intentional and substantial.
@ImaPerson I agree. I think it's a legitimately tough question, with fuzzy borders that cover "raw LLM" to "lightly edited LLM" to "LLM, but with careful conversation and back and forth" to "I talked with an LLM and then wrote it myself" to "no LLMs came anywhere near this". I'm not sure where I'd like to put the "this should acknowledge the LLM" boundary, but unedited text should be on the labeled side of it, in my opinion.
Yes, because it is important to know whether the human who made the market has thought about it for more than 5 seconds. If they have not, don't bet. You would be surprised how often the AI description is contradictory, or the human posts "corrections" in the comment section that change the probability dramatically.
@Eliza exactly.
I've used LLMs to help write criteria before, but it's a process that involves review, editing, and asking the model about edge cases, not clicking this button.