If AI gets smart enough to self-improve, and has access to all the psychological data, it will be used as an advisor and commander. Even if some other country builds their own, whoever gets there first takes the cake, having already grown ten times stronger by then. And even if 199 countries decide that AI is too dangerous to use, the 200th will simply develop their own without competition.
AI will become a strategic weapon even if it doesn't control itself. There was a good Foreign Affairs article about this recently.
The AI Power Paradox: Can States Learn to Govern Artificial Intelligence—Before It’s Too Late? (foreignaffairs.com)
Archived version: The AI Power Paradox: Can States Learn to Govern Artificial Intelligence—Before It’s Too Late? (archive.ph)
@fb68 The close date for the market is wrong and should be set at 2040 or so.
I think AI as a geopolitical advisor is pretty plausible, especially in the timeframe of two extra decades. Maybe some diplomat or politician somewhere is secretly using ChatGPT already to determine what they should say next!
I'm fairly sure it's not possible for a self-improving AI to count as a weapon. The point of it is that it's better at controlling itself than we are. It's not for wielding. If it does what you want it to do, any war will end or be mitigated quickly, and most of what it does for us will be peaceful; if it doesn't do what you want it to do, you'll be dead along with everyone else.
I guess a more interesting version of the question might be: will non-self-improving AI be used for the bulk of an army's logistics and strategy before we get to self-improvement? That could happen.