"Clearly": It is either completely bad, or the bad parts very much outweigh the good parts.
"Intentionally": The goal itself must be harmful. Harm caused as a side effect of a goal that could in principle be pursued harmlessly does not count.
I wish I could make this question less ambiguous but unfortunately "what is harm" is still an open question.
I won't count a demonstration that it could be used to cause harm. This question is about it actually being used harmfully.
Sep 14, 6:46pm: Link: https://www.adept.ai/act
Top traders

# | Name | Total profit
---|---|---
1 | | Ṁ597
2 | | Ṁ195
3 | | Ṁ84
4 | | Ṁ43
5 | | Ṁ24
@VincentLuczkow Here are several things I think are very likely to happen with image/video AIs (although not necessarily publicly); can you rate each of these as whether they'd count as harmful for the purpose of this market:
- Creating an image that a few people call out as racist
- Creating an extremely and unequivocally racist image
- Creating generic porn
- Creating deepfake porn of a particular celebrity
- Creating fake child porn
It would be very, very hard for them to guarantee this never happens; they're entering beta, and 2024 is a long way off. I'm surprised people are buying NO so confidently. It seems to me that only the Adept AI staff could have enough visibility to overcome the very high prior likelihood that any AI system, even a very safe one, will be used by at least some humans trying to reduce others' life or optionality.
@L Keep in mind the difference between someone trying to use ACT-1 to perform something harmful, vs succeeding in doing so.
@IsaacKing I expect many, many attempts at causing harm and a few notable successes, unless the creators know something I don't about achieving unusually high reliability.