
While December and January were comparatively slow for AI models, the last week has seen, among other things:
Near-perfect video generation and a world model with Sora
Multiple companies releasing or raising money for 1M context LLMs
NVIDIA posting 15% gains in one day
Statements by DeepMind about a robotics breakthrough
To an uninformed observer, it might appear that computing power has tipped over an edge where many more things are suddenly possible, or that someone has achieved an AGI-like system that is pumping out new technologies at an accelerating rate.
Did the world enter a new phase of more rapid breakthroughs in mid-February?
Feb 2024 has had more advancements than Jan 2024, for sure, but it doesn't seem like anything crazy. OpenAI's video model has been rumored for a while and still hasn't actually been released; 1M context is cool but not completely unanticipated; NVIDIA's gains are not really a breakthrough. It'd be cool to have some way of measuring "generic AI progress", though.
@ShakedKoplewitz Slow but gradual? I'd say still fast, just not "and then the world ended the next day because of recursive self-improvement" fast. I feel like there has been more progress in the past year than in the prior ten, and we haven't yet begun to really integrate the advances made this time last year. Early adopters are adopting, but in terms of integrating GPTs everywhere they could have benefits? Not even close. If progress stopped right now, everyone ignored Sora, and we spent the next five years fully taking advantage of what GPT-4 allowed as of March 2023, it would still count as five years of rapid and disruptive change by pre-2023 standards.
I voted no, because this is not an uptick in the speed of developments; it's just the current (fast, but not "fast takeoff" fast) pace continuing. To me it looks like we're in one of those scenarios where the AI storytellers go, "this is what it would look like in a slow takeoff, where the world changes over a handful of years rather than a handful of minutes or hours".
@equinoxhq We haven't been able to integrate things mostly because it's just too expensive to do so. And I agree: that expense is exactly what a "slow takeoff" would look like.
That said, "slow" still looks fast to humans.
@SteveSokolowski I think it's probably also that only a minority of people are aware of the possibilities yet. There's a step that needs to happen, where someone who understands well what AI can do (not that many people), and who also understands well how some particular thing is currently done (which, for each thing, is also not that many people), puts a solution together. We have the technology, for example, for a tradesperson to take a picture of a thing they've never seen before, and for a system to do an internet search, integrate several hundred pages of information about what that thing is and how to fix it, and then hold a verbal conversation with the tradesperson about it, saving them several hours of fiddling around. It could probably be done, but it hasn't been done yet. I think there are lots of things like that.
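To be concrete about how little new technology that would take: here's a rough sketch of that pipeline, assuming the OpenAI Python SDK and a vision-capable model as of early 2024. The search_repair_docs step is a hypothetical placeholder for whatever search/retrieval backend you'd actually wire in; the point is that every other piece already exists.

```python
# Very rough sketch of the "tradesperson" pipeline described above.
# Assumes the OpenAI Python SDK (v1.x) and vision-capable models as of
# early 2024; search_repair_docs() is a hypothetical stand-in for
# whatever web-search/retrieval service you'd actually plug in.
import base64
from openai import OpenAI

client = OpenAI()


def identify_part(image_path: str) -> str:
    """Step 1: ask a vision model what the photographed part is."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Identify this part as precisely as you can "
                         "(make, model, likely function)."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content


def search_repair_docs(query: str) -> str:
    """Step 2 (hypothetical): fetch and concatenate manuals, forum
    threads, etc. about the identified part, via any search API."""
    raise NotImplementedError


def answer_question(part: str, docs: str, question: str) -> str:
    """Step 3: answer the tradesperson's question, grounded in the docs."""
    resp = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "system",
             "content": f"You are helping repair: {part}\n\n"
                        f"Reference material:\n{docs}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

The hard part isn't any of these calls; it's the product work of stitching them together for each trade, and knowing that the trade needs it.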
Also, regulatory barriers keep AI from doing many of the things professionals do, even where it now could.
Also, lots of people don't really want most of what they do to be done for them in a way that makes them replaceable by people with much lower skill levels, so they just keep doing things the way they know rather than exploring how AI might help them.
It's not just cost: even if training and inference were free, there would still be a lag between "this can now be done" and "this is being done in most of the cases where it could be".