Resolves N/A in 2050.
Dec 11, 2:32pm: Do we have AGI now? → Do we have AGI now? [Experimental]
@FranklinBaldo Are you changing the resolution criteria or resolution date of this market? If so, please state the new criteria clearly in the description so we all know what's going on.
@IsaacKing I bought a bunch of shares in this market, and it really sucks if OP is just going to arbitrarily change how it resolves. That’s how you lose trust from bettors.
@IsaacKing Maybe we can either get the market maker to resolve N/A, or get Manifold to resolve it N/A.
It was pointed out that the way it was originally going to resolve clearly doesn't work as intended, so something needs to change. I'd probably suggest creating a new market with better resolution criteria and leaving this one the same but with a more accurate title. N/A resolution is also a possible option if the market participants prefer.
I don't think a flag like that is necessary; the author already edited the title to add [Experimental], which seems like sufficient warning.
@jack It would tie up my shares until I'm refunded in 2050, or force me to sell them at a loss. Better to resolve N/A right now and make a new market, or leave this market with its original criteria.
@jack IMO once people have traded in your market it's sacred. You don't get to go "oh no, that's not what I intended." You can resolve it N/A if you're early and have good reason, but otherwise I think there's an obligation to resolve correctly and faithfully.
Ok, ignore that last suggestion; it only made sense if none of the participants actually cared about the resolution in 2050.
Back to the main point: What I often do is edit my markets when there's a strong reason for it and offer compensation to anyone who was negatively impacted.
There are a lot of tradeoffs to consider here. For example, leaving this market as is is clearly bad because it harms new traders who misunderstand it based on the title - that's why I suggest leaving the resolution criteria as is but changing the title.
But that option still harms the past traders who already traded based on a misunderstanding of the title. This is less bad because the resolution criteria were explicit and clear, and if you lose mana because you didn't interpret them correctly, that's typically on you. But my point is, there are tradeoffs.
@FranklinBaldo Thank you, I appreciate that. If you place a YES limit order at 44%, I can sell my shares without taking a loss. There will be some small amount of profit that I can't easily calculate beforehand, which will eventually be undone once you resolve it N/A.
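For anyone puzzling over the arithmetic here, a minimal sketch of how selling into that limit order nets out, under assumed numbers: the share count and average purchase price below are hypothetical, only the 44% figure comes from the comment, and Manifold's actual fills also depend on its AMM and fees.

```python
# Hypothetical sketch of the refund arithmetic described above.
# Only the 44% limit price comes from the thread; the share count and
# average cost are made-up numbers for illustration.

shares = 100          # assumed number of shares held (hypothetical)
avg_cost = 0.40       # assumed average price paid per share, in mana (hypothetical)
limit_price = 0.44    # the YES limit order price proposed in the comment

cost_basis = shares * avg_cost          # mana originally spent
proceeds = shares * limit_price         # mana received selling into the limit order
interim_profit = proceeds - cost_basis  # the "small amount of profit" mentioned

print(f"cost basis:     M{cost_basis:.0f}")
print(f"sale proceeds:  M{proceeds:.0f}")
print(f"interim profit: M{interim_profit:.0f} (reversed once the market resolves N/A)")
```

Since an N/A resolution refunds all trades on the market, both the original purchase and this sale would be unwound, which is why the interim profit is eventually undone.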
@IsaacKing https://manifold.markets/FranklinBaldo/refund-to-isaac-king?referrer=FranklinBaldo
Is there a (non-physical) task that AI is claimed to be unable to do, yet is in the cognitive toolkit of an 80-100 IQ person?
Superior perception, memory, much better art, vastly cleverer writing.
Basically left with human emotion and biological drives, and physical control systems; cerebrum = solved. Cerebellum and basal ganglia soon enough.
@Gigacasting I feel that the G part of AGI may be solved by connecting the models we already have in the right way, to create agents that can work on any kind of problem.
“Agentic general intelligence” and “coordination: physical and social” are what are actually dangerous.
It’s amusing the proposed solutions to “alignment” are always the opposite of what actually works.
Centralization = obviously dangerous
Slowing down = obviously dangerous if hardware gets better faster than we understand what it can do.
Control = obviously dumb, as it’s trivial to malevolently unleash without this.
In time it will be obvious that the way to mitigate risk is the same as always: eventually cap aggregate compute, or the max size of any networked cluster. (And none of the theory and philosophy will have accomplished anything.)
It's not a coincidence that "AI alignment" is mostly a grift arguing for totalitarian control and thought-policing: they love OpenAI, a centralized, commissar-aligned political group, while they clearly hate StabilityAI and anyone pushing the limits of efficiency and ability.
A world with 1000s of groups controlling compute and not pushing Moore’s law after a certain point is feasible. Bostrom totalitarianism is hellish at best and won’t work at worst.
@MartinRandall I tried to avoid this by postponing the closing date far into the future. But I'm open to changing the resolution criteria. I was thinking about it and came up with this one: the market resolves YES when an artificial agent is appointed to the board of directors of an S&P 500 company; meanwhile I will subsidize NO with my daily 25 mana.
Any thoughts?
@FranklinBaldo Cynically, I don't think being a board director requires intelligence.
I'll lose my M9 trading profit, but I think you should resolve this market N/A and make another with a more specific prediction like that one.