Resolution Criteria
I will start a poll one week before the end of 2024 asking Manifold users whether they think Custom GPTs are defensible. Defensible in this context means that others can't simply "steal" a GPT's prompt and files and, in doing so, significantly cut into the original GPT's usage. It also implies that creators have ways to secure and differentiate their bots enough to offer some degree of business defensibility.
This question will resolve to the option with the highest number of votes. The aim is to gauge the community's confidence in the security measures and unique value propositions that creators are expected to embed in their Custom GPTs to prevent imitation and ensure continued user engagement.
Key Considerations For Voters
When voting, consider the overall market impact of Custom GPTs: have any maintained a consistent lead, what unique features or innovations have they introduced, and how have these factors influenced user trust and engagement? Your decision should reflect an assessment of these GPTs' ability to stand out in a competitive landscape, maintain user loyalty, and continuously adapt and innovate in response to market demands.
The Case for Yes - Custom GPTs will be defensible
@Soli Yeah, if this were for the last week of 2023, I'd buy NO, but since this is for the end of 2024, I'm going to buy YES. They acknowledged this issue and have said they will work on it.