

Are you concerned about the potential implications of what I refer to as the '2nd AI Wave'?
I'm fairly confident that the first AI wave will be manageable and, on the whole, a positive experience for humanity. My concern is with the emergence of a 'Super AI Network' that may not operate as originally intended. Moreover, I fear catastrophic feedback-loop events that we may be unable to predict.
I worry that preventing a second wave of AI, once the first has been successfully deployed, may be very difficult: the success and profitability of the first wave will give companies and open-source communities a strong incentive to push us into the next stage.
Additionally, once the capability to generate 'super-human productions' has been achieved, I find it hard to imagine humanity easily reverting to older technologies.
In this scenario, AGI or ASI might not pose a direct threat and could remain within manageable limits. The real concern is the insatiable human desire for ever more powerful systems, and in particular the possibility of wiring them together into a massive AI network, whether deliberately or not.
The question then becomes: can we control or prevent the emergence of such super AI networks?