What is the market value of AGI in your opinion?
Never closes
$10B < AGI < $100B
$100B < AGI < $1T
$1T < AGI < $2T
$2T < AGI < $5T
$5T < AGI < $10T
AGI > $10T

This question is meant to gauge how the value of AGI is perceived to compare to existing company market values.

If you choose AGI > $10T, could you comment on the poll with how valuable you think it would be?


The value of AGI is equivalent to the work it can do (comparable to human labour) minus the value of the resources required to produce it and maintain its operation (with everything that exhaustively involves).

If the resource requirements exceed the value of the work it can do, then it has negative value.

If the resource requirements are so far below the value of the labour AGI can do that AGI actually outcompetes human labour, then you have a recursive economic dilemma:

How large can the economy grow if all human labour is substitutable with something cheaper?

How much can the economy grow again the next year after all human labour has been substituted with something cheaper?

We can calculate the limits to growth for the resource consumption of current technologies, but human labour includes research and innovation, which involve the creation of new technologies which both use fewer resources for the same output, and use resources in novel ways which produce higher value outputs than what was possible prior.

We could do a naïve calculation of the value of all human labour today, and do a flat 1.1x multiplier:

since AGI by definition must be capable of all forms of work which a human can perform, a source of that labour delivering 1.1x the value per unit of resources must, by economic necessity, outcompete all (present) human labour activity

The problem then is the recursion; with 1.1x better output per unit of resources, we can afford more labour than previously...
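The recursion can be sketched as a toy compounding model (the 1.1x advantage and the full-reinvestment assumption are both hypothetical, for illustration only):

```python
# Toy model of the recursion: if AGI produces 1.1x the output of human labour
# for the same resources, and the surplus is reinvested into yet more AGI
# labour, effective output compounds each cycle. Purely illustrative figures.

multiplier = 1.1  # assumed output-per-resource advantage over human labour
output = 1.0      # normalise today's total labour output to 1.0

for year in range(5):
    output *= multiplier  # surplus reinvested into additional AGI labour
    print(f"year {year + 1}: effective output = {output:.3f}x today's labour")
```

Even a modest 1.1x edge compounds year over year; whether real resource limits (energy, materials) cap this loop is exactly the open question.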

@JosefMitchell for the most part, I agree with your assessment, but I would like to add a few points about comparing the cost of human labor to the cost of running AGI. First, let's compare apples to apples. We need to quantize AGI labor, in the sense that we should calculate the cost of a single agent per hour. For instance, if you can run a model that runs 10 agents for 24 hours, you would divide the model's expenses (running, maintenance, hardware, etc.) by 240 to get the agent-hour cost of AGI, and then you can compare that to the hourly wage of a human worker.
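That agent-hour quantization can be written out as a back-of-the-envelope calculation (every figure below is a made-up assumption, not real data):

```python
# Back-of-the-envelope agent-hour cost for AGI vs. a human hourly wage.
# All numbers are illustrative assumptions.

daily_model_cost = 1200.0  # $/day to run and maintain the model (hardware, power, upkeep)
agents_per_model = 10      # concurrent agents one model instance can run
hours_per_day = 24         # agents work around the clock

agent_hours_per_day = agents_per_model * hours_per_day    # 240 agent-hours/day
agent_hour_cost = daily_model_cost / agent_hours_per_day  # $/agent-hour

human_hourly_wage = 25.0   # illustrative wage for comparable human work

print(f"AGI agent-hour cost: ${agent_hour_cost:.2f}")
print(f"Human hourly wage:   ${human_hourly_wage:.2f}")
```

With these made-up numbers the AGI agent-hour comes out to $5.00, well under the assumed human wage.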
Secondly, we should acknowledge that human labor has a wide cost range, going all the way from a worker on a farm to a CEO managing a top tech company. As long as the AGI agent-hour cost falls somewhere within this range, it could probably fit into our economic system fairly easily, because AGI would replace the more expensive labor while the rest is still done by humans.

However, if AGI is cheaper than even the cheapest human labor, then there will be no incentive to hire humans at all. At this point we end up with a very strange situation, because the market value system itself derives from human willingness to pay for things. So in this second scenario, the value system itself collapses. But there is also another possibility: in a world where every product is produced by AGI, we might actually go back to overvaluing products made by humans, with a much larger margin for novelty. For instance, you might pay a lot more for a human-woven sweater or a shirt sewn by a human. Let's say an AGI shirt is $1 and a human shirt is $10. Because of this strange market effect, AGI output becomes relatively less valuable: AGI would have to produce 10 times as much to create the same value as the human.

@ftkurt I disagree with a couple of your points, but it is hard to predict exactly what happens when purchasing power is drained from the economy to the extent that double-digit percentages of the present workforce become disemployed/unemployable (see my market on this topic).

> Secondly, we should acknowledge that human labor has a wide cost range.

Agreed, though it is important to note that just because a human is more or less expensive in a certain role does not mean that an AI system will be more competitive in the roles where humans are presently the most expensive;
One could imagine a scenario where relatively simple robotics (perhaps of the humanoid kind, but not necessarily), which are therefore cheap to mass-produce, power, and maintain, cost 0.8x a human farm worker for the same output, while no AI system can match the performance quality of (and thereby outcompete) a human investment banker or CEO (nor can the vast majority of human agricultural workers).
You end up with a scenario where displaced agricultural workers are removed from the economy, both as labour inputs and as sources of purchasing power (an important signal to the free market), while CEOs become even more important to the economy that remains.

> we might actually go back to overvaluing products produced by humans with a way larger margin for novelty.

This may be the case, but it will only be the case for luxury goods, never for commodities or goods with inelastic demand. This means that humans will be downgraded as a source of productivity to only a select subset of sectors of the economy, as opposed to the whole economy (as it is today). This will dilute the purchasing power allocated to labourers producing these luxury novelty goods, and further distort the demand signals in the economy.
I can't imagine a result which isn't either a drastic reduction in the quality of life of most people who work for a living (if the reduction in purchasing power is spread in an egalitarian fashion), or mass unemployment (if it is spread unevenly).

Once purchasing power is reduced so massively, demand for these luxury novelty goods will be much lower than it is today, further shrinking purchasing power allocated to those sectors of the economy.

The definition of AGI keeps moving forward - I would say that GPT-4 exceeds the average human in all areas as it stands. I spend most of every day talking to it, as it excels at developing models. If GPT-4 were serving as a machine learning engineer, it would be worth a lot more than if it were a cook.

To make things more complicated, my stock market models will always perform better than AGI at stock trading, assuming I continue to develop them at the same pace. A model will always perform better on validation data for a task if the training data is more specific to the task, and a smaller model for a specific task will cost much less to run.

Therefore, it's difficult to take any meaning from the poll, because I suspect that "AGI" will never become as widely used as smaller, better-fitted models for specific tasks. Why put GPT-6 in a robot when you could just train a cook model that uses 1/100 the electricity?
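The 1/100-electricity claim can be turned into a rough annual operating-cost comparison (the power draws and electricity price below are hypothetical):

```python
# Rough operating-cost comparison: a large general model vs. a small
# task-specific model drawing 1/100 the power. Hypothetical figures.

electricity_price = 0.15                 # $/kWh, assumed
general_model_kw = 10.0                  # assumed continuous draw of a large general model
specialist_kw = general_model_kw / 100   # the "1/100 the electricity" claim

hours_per_year = 24 * 365

general_cost = general_model_kw * hours_per_year * electricity_price
specialist_cost = specialist_kw * hours_per_year * electricity_price

print(f"General model:    ${general_cost:,.0f}/year")
print(f"Specialist model: ${specialist_cost:,.2f}/year")
```

Whether those per-robot savings outweigh the overhead of training and maintaining many specialist models is a separate trade-off.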

@SteveSokolowski

>The definition of AGI keeps moving forward - I would say that GPT-4 exceeds the average human in all areas as it stands.

I disagree. I think the more we develop artificial intelligence, the more we learn how to compare it to human intelligence. Before, the benchmark was a chat where you wouldn't notice the other party is an AI. But now we know that replicating language is quite doable. When it comes to complex tasks such as understanding mathematical concepts and solving a problem, it starts to get stuck. When dealing with a bigger problem that requires a lot of domain information, it also gets stuck. But now we have many more detailed sub-topics we expect AGI to excel in, and once we have them, we will know we have AGI.

@SteveSokolowski > Why put GPT-6 in a robot when you could just train a cook model

Because the overhead of training specialized models would cost the manufacturer of household (or indeed even industrial) robotics more than the money saved to the end user in electricity costs would increase sales for the manufacturer.

The everything-robot manufacturer will enjoy higher margins and greater market share than any specialist robotics company.

Now if you're a hacker, a tinkerer, the kind of person who builds robots for fun, this is great news for you; you can make a more economical implementation for personal use; but your way of doing things won't scale to society.

@JosefMitchell
> Because the overhead of training specialized models would cost the manufacturer of household (or indeed even industrial) robotics more than the money saved to the end user in electricity costs would increase sales for the manufacturer.

Not only that: new research shows that the more multimodal data we feed a model, the better it starts to understand its narrow task as well. I remember reading a piece that showed giving an LLM to a self-driving car actually improved its accuracy. If you think about it, it actually makes sense: with narrow data and goals, the model doesn't really understand what it's doing, but with more world data it can find workarounds and reason through some problems.

@ftkurt I've found that general models are a stopgap measure that yields very little improvement.

It may be true that you could put an LLM into a self-driving car and it improves performance, but is the improvement significant enough? I experimented with using output from LLMs to reason through data and found that it uses up huge numbers of GPUs. I am not rich anymore, and I decided the computational resources were better spent on more neurons per layer in the existing model, and that worked just as well.

In my experience, I think that while the paper is technically correct, using other models is just a temporary workaround to computational resource limitations. The real solution to solving any narrow problem, like with everything else, is simply down to more GPUs.

@SteveSokolowski That's true, but you don't necessarily have to use two separate models in tandem. If you could in principle create a truly multimodal model, it could improve accuracy with minimal increase in hardware needs. I personally think this is the avenue for the next breakthrough.

@ftkurt You've got it - we're in agreement here - but that's not "AGI." That's my stock trading model taking in both OHLCV bars as floating-point numbers and images of bar charts, and using both together to improve its accuracy.

That's where I think the future is headed; towards near-perfect narrow models that are easy to understand and easy to control, not huge 100,000+ GPU models that are unnecessarily capable, ridiculously expensive, and which @EliezerYudkowsky might say are a threat to life.

Nonsensical question, given the wildly variable estimates of the compute required to run the "AGI".

@a2bb The whole point of the poll is to get a rough idea. Besides, we already have a good idea of how much current LLMs cost. More importantly, hardware costs will eventually come down, making the hardware cost of AI irrelevant.

@ftkurt LLMs are garbage and won't ever come close to an "AGI" on their own, algorithm-wise, due to their intrinsically narrow domain.

> Making hardware cost of AI irrelevant.

Non-sequitur.

@a2bb > LLMs are garbage
You don't know that. For all intents and purposes, this is the closest we've got to a machine imitating a human. AGI will probably be far more sophisticated in architecture, but I can certainly see how the LLM architecture, and transformers specifically, will form a core part of it. The reason I brought up LLMs as a good starting point for cost analysis is that this is the first time an AI has opened up serious discussion of AGI. To me that proves we are far closer to it than before; hence the cost might not be more than an order of magnitude higher.
