Do commercial services like Midjourney and Bing Image Creator have to stop or adapt their services in the US by 2027 because they rely on training data that was not explicitly released for that purpose by the authors?
If new law or an interpretation of current law causes them to stop their service, this resolves as "YES".
If it causes them to rely only on new models trained on a limited pool of data from authors who explicitly agreed, this also resolves as "YES".
If their current models or successors or equivalent models from new competitors are still commercially used at the end of 2027, this resolves as "NO".
If authors are only compensated for the use, without a way to opt out of the usage, that will NOT be enough to resolve as "YES".
If the art can still be used freely for research or for open-source projects, that fact alone will NOT support a "NO" resolution.
Does this only apply to the US? I expect we'll see some countries in Europe playing with the idea.
EDIT: nvm, I see you said this is about whether they have to stop or adapt their services in the US. Though I'd be curious about the resolution if one or more companies chooses to pivot in order to comply with European law even though they don't strictly have to in the US.
@TomGoldthwait Yes, this one is US only. I had to choose something and went with the main audience of this platform. It would be interesting to see whether opinion differs in the EU.
@LeonBohnmann Gotcha. My expectation right now is that there's a decent chance at least one country will make a law about copyrighted art in AI training.
I'd also guess that as the space opens up we'll see at least one company make an effort to comply with such a law, possibly also branding themselves as the more "ethical" AI platform. There's a non-zero chance of a scandal where a company claims to be in compliance but actually isn't (with some level of plausible deniability from outsourcing the process of collecting "ethical" training data).