- Aditya's Newsletter
There Is No Wall. Or Is There?
We are now a community of 275! Thank you❤️
This newsletter is free and I don’t use paid advertising. I completely rely on organic growth through users who like my content and share it.
So, if you like today’s edition, please take a moment to share this newsletter on social media or forward this email to someone you know.
If this email was forwarded to you, you can subscribe here.
If you want to create a newsletter with Beehiiv, you can sign up here.
there is no wall
— Sam Altman (@sama)
6:06 AM • Nov 14, 2024
Sam Altman, CEO of OpenAI, posted this on X on November 14: “there is no wall.”
This was an apparent reference to an alleged plateau in the performance gains of the best AI models.
GPT-2 was unknown outside the AI world. GPT-3.5, via ChatGPT, took the world by storm and ushered in the AI era. GPT-4 was significantly better (and launched just 4 months after ChatGPT!). But it looks like GPT-5 won’t even be released this year.
The story of improvement - from mediocre, hallucination-prone, unhelpful models to industry-changing, transformative ones - has been similar at other companies such as Anthropic (Claude), Google (Gemini), and Meta (Llama).
The speed at which AI models were evolving led to speculation that we might reach superintelligence before the end of the decade - AI systems that learn on their own and go beyond the sum of human knowledge.
More recently, reports suggest that AI might be plateauing.
This story by The Information reports that OpenAI and other companies are changing their strategies to expand horizontally - examples include ChatGPT being able to ‘see’ your IDE on macOS and Anthropic’s computer use API for Claude (as mentioned in my last edition).
To keep the revenue kicking in, companies are now exploring better use cases for their AI models and not just better models every few months.
Companies developing and making money from LLMs would hate for such news to reach ‘normie’ circles. The idea that AI will keep improving exponentially forever (or at least for many years to come) is necessary for attracting ever more investment and higher valuations.
AI skeptic Gary Marcus has been warning about this for years, if not decades. It should be noted that some of his predictions about AI are not well-received by many, including Sam Altman, Elon Musk, and Yann LeCun.
He strongly reiterated his point recently.
In fact, he wrote an article titled “Deep Learning Is Hitting A Wall,” published in Nautilus back in 2022. Incidentally, ChatGPT launched 20 days later. Not the best timing for such a headline to reach the wider world.
The economics are likely to be grim. Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence. As I have always warned, that’s just a fantasy. There is no principled solution to hallucinations in systems that traffic only in the statistics of language without explicit representation of facts and explicit tools to reason over those facts.
LLMs will not disappear, even if improvements diminish, but the economics will likely never make sense: additional training is expensive, the more scaling, the more costly. And, as I have been warning, everyone is landing in more or less the same place, which leaves nobody with a moat. LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive. When everyone realizes this, the financial bubble may burst quickly; even NVidia might take a hit, when people realize the extent to which its valuation was based on a false premise.
Sam Altman is confident that he’s right.
Dario Amodei, CEO of Anthropic, is confident that he’s right.
One of the reasons I’m bullish about powerful AI happening so soon is just that if you extrapolate the next few points on the curve, we’re very quickly getting towards human-level ability.
Some new models that have come from other companies, they are starting at what we’d call PhD or professional level, right? If you look at their coding ability.
The latest model we released, Sonnet 3.5, the new or updated version - it gets something like 50% on SWE-bench.
At the beginning of the year, I think state-of-the-art was three or four percent. So in 10 months, we’ve gone from three to 50%, and I think in another year we’ll probably be at 90%.
I mean, I don’t know, it might even be less than that.
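Out of curiosity, we can check what a naive extrapolation of Amodei’s rough figures implies. Here is a toy sketch (my own construction, not Anthropic’s methodology) that assumes benchmark scores grow linearly in log-odds, so the score traces an S-curve that saturates near 100%:

```python
import math

def logit(p):
    """Log-odds transform; linear growth here produces an S-curve in p."""
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Amodei's rough figures: ~3% at month 0, ~50% at month 10 (SWE-bench).
p0, p1, months = 0.03, 0.50, 10
slope = (logit(p1) - logit(p0)) / months  # log-odds gained per month

def project(month):
    """Projected score under the linear-in-log-odds assumption."""
    return sigmoid(logit(p0) + slope * month)

print(f"month 22 projection: {project(22):.0%}")  # prints "month 22 projection: 98%"
```

Under this toy model, the trend Amodei describes would actually overshoot his 90% figure a year out - which is exactly the kind of curve-fitting optimism the skeptics below push back on.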
Gary Marcus is confident that he’s right.
Matei Zaharia, CTO of Databricks, is confident that he’s right.
Whenever you double your training costs, you increase quality by only around 1% or something like that.
Beyond the scaling law itself, there’s also the fact that we kind of put in all the data on the internet in these models already.
Even if you repeat it and add more data, maybe you can modify it and create variants of it. However, it’s not clear if it would be that much more informative.
It is possible that we’re kind of maxing out the general consumer thing.
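Taking Zaharia’s claim at face value, the arithmetic of logarithmic returns is stark: if each doubling of cost buys about one point of quality, large gains require exponentially larger budgets. A minimal illustration (my numbers, purely for intuition, not Databricks’ data):

```python
# Toy model of Zaharia's claim: each doubling of training cost
# buys roughly one point of quality.
def cost_multiplier(points_gained: int) -> int:
    """Cost factor needed to gain N quality points, at one point per doubling."""
    return 2 ** points_gained

print(cost_multiplier(5))    # prints 32   -> +5 points costs 32x as much
print(cost_multiplier(10))   # prints 1024 -> +10 points costs 1024x as much
```

If the claim is even roughly right, squeezing out the next ten points costs three orders of magnitude more than the current training run - which is why the economics, not just the science, drive the plateau debate.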
Only time will tell the real answer.
What do you think?
Did you like today’s newsletter? Feel free to reply to this mail.
This newsletter is free but you can support me here.