
Veteran technology investor: Without a breakthrough in Scaling Laws, AI would have crashed in 2024

Technology investor Gavin Baker argued that, because of the delay of NVIDIA's Blackwell chip, the AI industry would otherwise have stagnated for roughly 18 months starting in mid-2024. However, the emergence of two new Scaling Laws, Reinforcement Learning with Verified Rewards and Test-Time Compute, has allowed AI reasoning capabilities to leap forward despite the hardware constraint (ARC-AGI scores jumped from 8% to 95%), averting a market crash and laying the groundwork for another explosion in capability once the new hardware arrives.
Gavin Baker pointed out that the release of Gemini 3 proves that the scaling law for large models is still valid.
On Tuesday, veteran technology investor Gavin Baker said in a podcast interview that the launch of Google's Gemini 3 model shows that, even in periods of constrained hardware computing power, AI can achieve leaps in capability through new reasoning mechanisms.
He emphasized that without the timely emergence of model reasoning capabilities, the global AI industry would have stalled completely from mid-2024 until the release of Gemini 3. With NVIDIA's next-generation Blackwell chip caught in what he described as the most complex and most delayed product transition in tech history, this "gap period" in hardware computing power could have triggered severe turbulence in capital markets.
Baker noted that over the past few months, with next-generation computing power not yet genuinely online, AI progress has relied mainly on two new methods: Reinforcement Learning with Verified Rewards and Test-Time Compute.
He believes it is these two techniques that have enabled models to achieve significant gains in intelligence on existing hardware, thereby underpinning the high valuations of today's tech stocks.
A Make-or-Break Moment for the Pre-training Scaling Law
Regarding the pre-training scaling law, Baker emphasized that the release of Gemini 3 is a milestone because it clearly confirms that the law is still valid.
Prior to this, no one could fully explain why the scaling law works from a theoretical standpoint; it was more of an "empirical observation" similar to how ancient Egyptians observed celestial phenomena—while they could accurately measure the alignment of pyramid axes with celestial bodies, they did not understand the underlying orbital mechanics.
For investors, each confirmation of the scaling law is crucial. If this empirical law were to fail, it would mean that massive capital expenditures no longer translate into stronger intelligence.
Gemini 3 shows that, even on the current hardware architecture, adding more computing power and data still improves base-model capabilities. However, Baker also pointed out that the pre-training scaling law alone cannot explain the market boom of the past six months.
In fact, if AI progress had depended solely on stacking hardware computing power for pre-training, the industry would have faced a "vacuum period" of up to 18 months starting in mid-2024.
Two New Laws Save the Market
What spared the global market from this disaster is the emergence of reasoning capabilities.
Baker cited ARC-AGI benchmark data: AI scores climbed from 0 to 8% over the previous four years, but after OpenAI launched the first model with reasoning capabilities, they skyrocketed from 8% to 95% in just three months. This leap stems from two new Scaling Laws:
Reinforcement Learning with Verified Rewards: As Andrej Karpathy put it, "Anything that can be verified can be automated." As long as there are clearly right and wrong outcomes, AI can improve itself through reinforcement learning.
Test-Time Compute: letting the model "think" longer before answering, trading additional inference compute for higher intelligence.
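To make these two ideas concrete, here is a minimal, purely illustrative Python sketch (not from Baker's interview; the toy model, function names, and numbers are assumptions): verified_reward scores an answer against an automatically checkable ground truth, and best_of_n spends extra test-time compute by sampling a stub model repeatedly and taking a majority vote.

```python
import random
from collections import Counter

def toy_model(prompt: str) -> str:
    """Stand-in for an LLM sample: answers '17 + 25' correctly about 60% of the time."""
    return "42" if random.random() < 0.6 else random.choice(["41", "43", "52"])

def verified_reward(answer: str, ground_truth: str) -> float:
    """Verified reward: an automatic checker scores the answer, with no human labeling.
    The answer either matches the verifiable ground truth or it does not."""
    return 1.0 if answer.strip() == ground_truth else 0.0

def best_of_n(prompt: str, n: int) -> str:
    """Test-time compute: sample the same fixed model n times and keep the majority answer."""
    votes = Counter(toy_model(prompt) for _ in range(n))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    prompt = "What is 17 + 25?"
    cheap = best_of_n(prompt, n=1)       # minimal test-time compute
    expensive = best_of_n(prompt, n=33)  # roughly 33x more test-time compute
    print("1 sample  :", cheap, "reward =", verified_reward(cheap, "42"))
    print("33 samples:", expensive, "reward =", verified_reward(expensive, "42"))
```

In a real system, a reward like verified_reward would feed a reinforcement-learning update and the repeated sampling would be replaced by longer chains of reasoning; the sketch only illustrates how more verification and more inference can raise accuracy on a fixed model without new training hardware.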
In the absence of NVIDIA's Blackwell, these two laws have kept Moore's-Law-style growth going. They not only fill the gap between hardware generations; more importantly, they have a multiplier effect.
Finally, Baker emphasized that AI has moved past the growth bottleneck of simply stacking more graphics cards and entered a new stage in which leaps in value come from logical reasoning and verification. He predicts that when these new Scaling Laws are applied on top of more powerful Blackwell-trained foundation models, AI capabilities will explode once again.
