Here's an ASK HN thread I posted on whether Moore's Law is dead or not:
https://news.ycombinator.com/item?id=35997975
It seems to descend a bit into a semantic argument about what you mean by "Moore's Law". If you go by transistor count, the consensus in that particular thread is that we're still meeting the technical predictions every two years. But if you go by cost and power, those seem to have come unglued over the last decade, and so what "Moore's Law" meant to the average computer enthusiast in 1999 no longer seems to be operating and is thus "dead."
So for the purposes of THIS market I define "The Spirit of Moore's Law" as being:
About every two years, computers get about twice as powerful for the same price
About every two years, computers get about twice as cheap for the same power
This is important because it means you can just rely on hardware advancement to make your old code faster.
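As a rough sketch of what that compounding implies (my own back-of-the-envelope illustration, not part of the resolution criteria):

```python
# The "Spirit of Moore's Law" as defined above: performance per dollar
# doubles roughly every two years.

def spirit_multiplier(years: float, doubling_period_years: float = 2.0) -> float:
    """Expected performance-per-dollar multiplier after `years`, assuming a clean doubling."""
    return 2 ** (years / doubling_period_years)

print(spirit_multiplier(4))    # 4.0:  a 4-year-old machine "should" be ~4x behind
print(spirit_multiplier(10))   # 32.0: code from 10 years ago "should" run ~32x faster/cheaper
```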
Also, whether the "Spirit" of Moore's Law is dead or not has important implications for AI timelines, even if the literal transistor count part is still on track.
Essentially this market is looking for a definitive answer to whether the ongoing free lunch of cheap computational advancement I experienced in my childhood and young adulthood has substantially slowed down.
On January 1, 2024, I'll review the evidence. If I'm convinced the party is over, this resolves YES. If I'm not convinced or I'm only kinda convinced but there's a strong lingering reasonable doubt, this resolves NO.
I won't bet in this market, but I will subsidize it.
To be clear, the way to profit on this market is: bet in the direction you think is true, and then go gather as much evidence as you can to convince me that's correct, and post it here.
Thank you to everyone who posted answers here.
A lot of factors have gone into my thinking on this one, but a recent decision was pretty salient: I was considering whether to buy a new laptop. Back in the late '90s and early 2000s, I would try to hold off on that decision for as long as possible, until the moment I was poised to use the machine heavily in my daily life, knowing it would quickly become obsolete and I'd be back in the market for a new one in short order. This time I realized I barely felt that way at all; sure, future laptops will be a little better, but even if I wasn't planning to put it to super heavy use until the latter half of the year, it could still be a good idea to pick it up now.
That's just one anecdote, but I have a thousand others. The world of 2024 feels vastly different from 1989, 1999, or even 2009 in this regard.
Whether or not we are still cramming the right number of transistors onto an ever shrinking surface, in terms that actually matter to me personally, the "Spirit of Moore's Law" feels pretty dead.
Resolves YES.
Anecdotal evidence, but I haven't seen any comments to this effect:
I work at a large tech company. We absolutely do not take the view that our code will just passively get faster, and haven't for many years (I can't actually remember a concrete instance of it happening). More and more in recent years, the push has been toward spending more time on performance work. Some systems have had big gains from accelerators, but those are special-purpose and require tons of work to migrate workloads onto.
If even the CEO of Intel is now saying "yeah, we're not seeing this doubling every two years", I'd say this resolves YES.
@robotnik I am such a good predictor that I can see into anons' brains and take your mana with my prescience
re: silicon-based processors, moore's law is grinding to a halt. this is a major reason why the future of moore's law lies in alternative semiconductors. specifically, 2D materials seem to hold a lot of promise here. not only do they allow for smaller pitch and lower-power operation, there's also the ability to build much more 3D processors with 2D materials
if you want to be a little more speculative, all-optical processing seems to be fairly close (~2 decades out), and that is a 5-6 order of magnitude speedup, maybe at a small space disadvantage.
for current technologies, moore’s law probably still applies. custom hardware is the biggest driver here, with NN accelerators and dedicated encoding hardware speeding up most processes significantly.
e.g. the 4090 gpu doubles the performance of the 3090 gpu with little price increase
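rough sketch of that perf-per-dollar claim in code (the launch prices and FP32 throughput numbers below are ballpark assumptions on my part, not exact specs):

```python
# Perf-per-dollar comparison; the prices and TFLOP/s figures are assumed
# ballpark launch numbers, not authoritative specs.

def perf_per_dollar(tflops: float, price_usd: float) -> float:
    return tflops / price_usd

rtx_3090 = perf_per_dollar(tflops=36.0, price_usd=1500.0)   # assumption
rtx_4090 = perf_per_dollar(tflops=83.0, price_usd=1600.0)   # assumption

print(f"perf/$ improvement: {rtx_4090 / rtx_3090:.2f}x")    # roughly 2.2x over ~2 years
```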
The cost of an equal amount of CPU cycles has not been declining by 50% every two years on the major cloud service providers. That's where most Internet traffic is routed, though it's hard to say what percentage of overall compute it represents. It's quite a bit, though. It's not getting cheaper for most people.
@ArmandodiMatteo (Could we have trained GPT-4 ten years ago for the same cost? I don't think so, and I don't think that's because of software.)
Those new transistors are being put to use. CPUs continue to have more cores, which directly translates to performance for multithreaded software. CPUs have more cache; current CPUs have more on-die cache than the disk space on the first two systems I used. And new CPU cores get more done even at the same clock speed. Plus, since you said "computers" and not "CPUs", consider things like RAM capacity and NVMe storage.
@josh Addition of extra cores is highly constrained by power, SRAM is no longer scaling much, and IPC gains are increasingly tiny.
The more I think about it, I really think more needs to be made of how unusual the last few years have been in terms of production and sales of computer parts. We’ve seen demand suddenly explode whilst production got harder due to lockdowns, sickness and logistical breakdowns. That’s a big external force on every metric we measure Moore’s Law on. Did Moore’s Law die before/during COVID? Or did COVID merely make it look that way?
@jacksonpolack That is a reason to bet NO, because the market author doesn't seem to have noticed yet, so unlikely they will notice this year either.
@jacksonpolack It was admittedly more of a spur of the moment bet than deep insight. But I feel like there has been lots of progress for ML workloads. You wrote this earlier:
I remember this being true. An older friend of mine would describe how amazing it felt to take code you wrote ten years ago, run it, and watch it go ten times faster. The same friend, recently, noted that was no longer true.
I personally feel like this is totally still the case for neural networks when getting a new GPU. Maybe I’m too taken in by the sales pitch, but NVIDIA certainly still claims impressive progress: https://www.nvidia.com/en-us/data-center/h100/
And notably GPUs are getting exponentially cheaper: https://epochai.org/blog/trends-in-gpu-price-performance
But yeah, there's a bit of a semantic argument about what we mean by Moore's law's spirit. I do of course agree that the speed-up for general-purpose single-threaded code isn't great currently. But even phones and laptops (at least sometimes) can do signal processing and ML that just wouldn't have been possible hardware-wise a few years ago.
Lastly, I read the question description as being biased towards a NO resolution since that’s the default if OP isn’t sure. I mean, “If I'm convinced the party is over, this resolves YES”? The nature of the party might be shifting but there clearly is a party going on currently.
Thanks for the link! It looks like price per unit of performance for GPUs is falling exponentially.
The OP's description of the Spirit of Moore's Law is
About every two years, computers get about twice as powerful for the same price
About every two years, computers get about twice as cheap for the same power
I don't think just neural networks doing this counts. 90% of what we do with computers isn't neural networks. And he doesn't say chips, he says computers. "Twice as cheap for the same power": I'd love it if something with the same power as a 2021 laptop or desktop were available today at half the price, or if something twice as powerful were available for the same price ... if there's a secret 2x faster architecture I'm not aware of, I'd love a link!
It'd be cool to have markets on 'how much will a petaflop cost in 2030', though.
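For what it's worth, here's the kind of extrapolation such a market would hinge on (purely illustrative: the starting cost is normalized and the doubling times are assumptions, not real price data):

```python
# Illustrative only: normalized starting cost and assumed doubling times.

def projected_cost(cost_today: float, years: float, doubling_time_years: float) -> float:
    """Cost of a fixed amount of compute after `years`, if price/perf keeps doubling."""
    return cost_today / (2 ** (years / doubling_time_years))

# Compare 2030 cost per petaFLOP/s under different doubling-time scenarios,
# relative to a normalized cost of 1.0 today (2024).
for doubling_time in (2.0, 2.5, 4.0):
    frac = projected_cost(1.0, years=6, doubling_time_years=doubling_time)
    print(f"doubling every {doubling_time} yrs -> {frac:.2f}x today's cost in 2030")
```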
I agree though it is biased towards a NO resolution, since:
On January 1, 2024, I'll review the evidence. If I'm convinced the party is over, this resolves YES. If I'm not convinced or I'm only kinda convinced but there's a strong lingering reasonable doubt, this resolves NO.
I'd also be interested in hearing from lars how he's feeling so far!
But even phones and laptops (at least sometimes) can do signal processing and ML that just wouldn’t have been possible hardware-wise a few years ago.
"at least sometimes" is true!
Essentially this market is looking for a definitive answer to whether the ongoing free lunch of cheap computational advancement I experienced in my childhood and young adulthood has substantially slowed down.
But the 'free lunch' is over. I don't think anyone would describe GPU programming as a 'free lunch', compared to everything you write going from 100 megahertz to a gigahertz. Computational advancement comes at the expense of ever-narrowing specialization.
Fundamentally, you can't rely on hardware advancement to make your old code faster, because the hardware advancement is specialized.
@jacksonpolack I think we agree on everything except on how we define ‘power’. I guess in terms of FLOP/s the multiplier of 2x from 2021 to 2023 might work. Maybe also for graphics-heavy workloads like games or multimedia processing? But yeah, single-thread workloads only got moderately faster.
So it depends on how we average the uneven speedup across workloads. It's not like I'm sure of my position, but here are two arguments for why I think we should count the GPU at least somewhat:
A lot of software isn't bottlenecked by computing the business logic any more. Unless you're doing something demanding like multimedia, games or ML, the CPU on a decent modern computer is already mostly idle. And these intensive workloads do disproportionately benefit from acceleration.
Programmers don't have to adapt their programs for the GPU themselves. People mostly use libraries or other dependencies that do the heavy lifting on GUIs and image manipulation and benefit from GPUs without even knowing about it. For example, lots of websites are now much smoother without the owners changing anything. Try opening Manifold on an old phone lol
Eh, by the market description I think that 'twice as powerful' means 'twice as powerful for most tasks', and consumer GPU FLOPS doubling doesn't do much for me beyond making my video game FPS higher and making AI work a bit better. I think it's reasonable to count GPUs a bit. But then you should also count CPUs and hard disks and memory a bit. And:
A lot of software isn't bottlenecked by computing the business logic any more. Unless you're doing something demanding like multimedia, games or ML, the CPU on a decent modern computer is already mostly idle
Yeah! But what are they bottlenecked on? Memory and I/O. And growth in those has slowed down too. This is a graph of the price of DRAM per gigabyte, and it shows the same post-2010 plateau. Hard drive cost per gigabyte has also plateaued.
People mostly use libraries or other dependencies that do the heavy lifting on GUIs and image manipulation and benefit from GPUs without even knowing about it. For example, lots of websites are now much smoother without the owners changing anything
I think if you measured website smoothness over time, you wouldn't observe a doubling every two years :)
@1941159478 Nvidia's marketing pitch is not that accurate. H100s cost roughly three times as much as A100s. From what I've heard, the price/performance running existing code is roughly the same as with A100s, but you can get somewhat better price/performance if you go to some effort to change your code to use FP8 and to deal with their even more compute-skewed memory-to-compute ratio.
@jacksonpolack HDDs may not be getting much cheaper but SSDs have gotten massively cheaper, faster and more common. I think this is more relevant to consumers.
@jacksonpolack to be clear, @jacksonpolack's interpretation is in line with my reasoning here. I'll still need to review the evidence at resolution time to decide, but just having one part of computers massively speed up, while the normal day-to-day part that normies actually rely on stagnates, doesn't count. Back in the '80s and '90s it was unambiguously clear that pretty much every aspect of computing, especially the parts an everyday user cared about, was getting massively faster and cheaper, all the time. The pace was such a ridiculous frenzy that I remember it being hard to time when to buy a computer, because a faster and cheaper one would already be out just a few months later. And it was exhilarating getting a new PC and watching all my old stuff magically get faster without any change to the code. That's what I mean by the Spirit of Moore's Law for the purposes of this market. Now, is this what everyone means by Moore's Law? No, and it is perfectly valid to take issue with my particular definition, but in that case it's best to just make your own market and set up different criteria that you prefer.