Will Carl Shulman soon claim that AI forecasting systems will play an integral role in the post-AGI economy?
Resolved YES on Jul 11

Robert Wiblin will soon release an interview with Carl Shulman (https://twitter.com/robertwiblin/status/1806070670154993811). This market resolves YES if, during the interview, Carl expresses confidence that AI-based judgmental forecasting systems (including AIs participating in prediction markets) will be integral parts of world economy and governance post-AGI.

To resolve YES, this must include confident claims (conditional on the assumptions behind the trajectories he imagines) similar to the following:

  • Most people will generally defer to AI forecasting systems on questions they haven’t thought carefully about

  • AI forecasting systems will closely guide the decision-making of large and important civil institutions, such as businesses or governments

  • AI systems will make predictions about personal events in people’s lives, with impressive historical track records, for instance by offering insurance contracts tied to specific personal events

I’ll resolve conservatively, leaning towards NO in cases where Carl’s discussion of AI forecasting is short, remote, tangential, or framed as highly speculative relative to the main trajectories he considers.

I’m mostly making this market because it’s clear that Carl is thinking carefully about how new epistemic systems will affect society, and I want to preregister my expectation that he will publicly present a vision of what they might be like.


@traders The second part of the interview has been released (https://80000hours.org/podcast/episodes/carl-shulman-society-agi/), and I think it clearly warrants a YES resolution. Carl spends basically the entire interview discussing the effects that superhuman AI forecasting systems will have on the world. It is clear that he is very confident that such systems will become superhuman at predicting the future, and will play a large role in society and governance. Some quotes:

  • "So I don’t think we should doubt that any really advanced technological society will have the germ theory of disease or quantum mechanics. So we shouldn’t doubt that it will be technologically feasible to have very good social epistemology, very good science in every area, very good forecasting of the future by AIs within the limits of what is possible."

  • [In the context of discussing how powerful AI forecasting tools would have affected governance during COVID-19]: "And the advanced AI stuff is very good at determining what is going on at the lower levels, interpreting the mixed data and reports. It can read all the newspaper articles, it can do all those things. And then at the lower level, AI advisors to the local officials can be telling them what’s the sensible thing to do here in light of the objectives of the country as a whole. And then instead of the local official being maybe incentivised to minimise the story, maybe from a perspective of protecting themselves from a stink being made, they can just follow the truthful, honest AI that makes clear this is the reasonable thing to do in light of the larger objectives, and therefore that covers the rear end of the parties who follow it locally."

  • [On voters consulting AI advisors]: "It would result in a systematic incentive of politicians for winning elections to try and make people feel better off. Because if they asked their AI advisors, 'If I vote for party one, will I be better off in four years, as I judge it or in these respects, than if I vote for party two?'"

  • "And my actual best guess is that the result of these technologies comes out hugely in favour of improved epistemology, and we get largely convergence on empirical truth wherever it exists."

  • "It could be the case that on many sort of controversial, highly politicised factual disputes, that you get a pretty solid, univocal answer from AIs trained and scaffolded in such a way as to do reliable truth tracking, and then that makes for quite drastic differences in public policymaking around the things, rather than having basically highly distorted views from every which direction because of basically agenda-based belief formation or belief propagation."

  • [On how forecasting AI systems will affect norms in journalism and information dissemination]: "But still, some institutions and organisations and whatnot are potentially able to do that, follow the same procedures, get to the same truths themselves, and that would move just endless categories of things to objectivity. And so you have a newspaper and it talks about event X, and so they can say, 'Interested party A claims X; interested party B with different interests, claims not X.' But then if they also say 'Truth-tracking AI says X,' then that’s the kind of norm that can be like, don’t make up your sources, don’t put citations that don’t exist in your bibliography.
    And then furthermore, it just reduces the expense. So it makes it possible for a watchdog organisation to just check a billion claims with this sort of procedure. And there are small amounts of resources available today for journalistic watchdog things, for auditing, for Tetlockian forecasting. And then when the effectiveness of those things — you’ve got an incredibly large amount of the product for less money — then, A, people may spend more; there’s tremendously greater wealth and resources, so more of the activity happens. And running on all the most important cases, having the equivalent of sort of shoe leather local investigative journalism in a world where there’s effectively trillions of super sophisticated AI minds who could act as journalists, is enough to kind of police all of the issues that relate to each of the 10 billion humans, for example."

  • "There are other aspects of the world that are independent. If you ask about what happened to the romantic lives of 100 million people, say: you get datasets from social media or something, the individualistic factors that are uncorrelated between people; you could get very large datasets for that, so you could show AI that is great at long-term forecasting with respect to these uncorrelated things between people."

By default I'll resolve YES in a few days, but feel free to discuss if you disagree with this resolution.

@traders One day’s notice before I resolve YES. If there are disagreements on whether this is an appropriate resolution, feel free to start a discussion.

Extended close time

The first part of the interview is now out on the 80,000 Hours podcast (https://80000hours.org/podcast/episodes/carl-shulman-economy-agi/). I’ll wait for the second part of the interview to be released before resolving. During the interview, Rob mentions that they are saving a discussion of epistemics for the second part.

@AdamK your level of confidence surprises me and makes it seem like you basically know exactly what’s going to be talked about

This might serve as interesting context (https://nickbostrom.com/propositions.pdf) It's a working paper with Carl Shulman. They discuss new epistemic systems and their potential in a post-AGI society. I think it's decent evidence that Carl believes AI-based forecasting will have a transformative effect on decision making. I'm mostly interested to see whether Carl will publicly expound on his ideas in this upcoming interview (I'd say: likely given its length), and whether the details I'm imagining match up to his expectations. If this market feels serendipitous in retrospect, then take it to be evidence that I correctly inferred his thinking. I don't have private information.

