AI Expert: Doubts About AI Are "Self-Deception" About the "Exponential Growth Trend"

Wallstreetcn
2025.09.30 02:10

AI researcher Julian Schrittwieser argues that the current "AI bubble theory" reflects a failure to grasp the exponential growth trend of the technology, much like the misjudgments at the start of the COVID-19 pandemic. Research shows that AI performance in software engineering, cross-industry occupational tasks, and other areas is growing exponentially; he predicts that by mid-2026 AI will be able to autonomously complete a full 8-hour workday of tasks, and that by the end of 2026 it will reach the level of human experts in multiple industries.

An expert from the front lines of AI research has firmly rebutted the currently prevalent "AI bubble theory."

Julian Schrittwieser, a researcher at the high-profile AI company Anthropic, warned in a post on his personal blog that widespread talk of an AI "bubble" or "plateau" is a serious misreading of the technology's exponential growth trend, akin to the widespread neglect of exponential spread in the early days of the COVID-19 pandemic.

The current discussions around AI progress and the so-called "bubble" remind me of the first few weeks of the COVID-19 pandemic. Even as the exponential trend already clearly indicated the arrival and scale of a global pandemic, politicians, journalists, and most public commentators still treated it as a distant possibility or a localized phenomenon.

He pointed out that although AI still makes mistakes on tasks like programming or website design, jumping from those errors to the conclusion that it will never reach human level, or will have only minimal impact, is "a strange phenomenon," much as people dismissed AI programming as "science fiction" just a few years ago.

People notice that while AI can now write programs, design websites, and so on, it still often makes mistakes or heads in the wrong direction, and then somehow conclude that AI will never complete these tasks at a human level, or will only ever have a minimal impact.

Schrittwieser's core argument rests on two key pieces of evidence: a study by METR and OpenAI's GDPval benchmark. The data show that the length of complex tasks AI models can complete autonomously is doubling on a steady exponential schedule, with the latest models able to handle software engineering tasks lasting more than two hours. More importantly, in the GDPval assessment covering 44 professions, top AI models already perform "astonishingly close" to human level and are beginning to challenge the capabilities of industry experts.

In the blog post, titled "Failing to Understand the Exponential, Again," Schrittwieser likens current AI skepticism to "self-deception," arguing that by fixating on today's imperfections people underestimate the scale of the transformation that is coming.

Software Task Capability: Doubling Every 7 Months

To counter the "plateau" argument, Schrittwieser first cited "Measuring AI Ability to Complete Long Tasks," a study published by the independent evaluation organization METR. The study measures the length of software engineering tasks AI models can execute autonomously, and it reveals a "clear exponential trend."

According to the study, Claude Sonnet 3.7, released about seven months earlier, could complete tasks of up to roughly one hour at a 50% success rate. The latest charts on the METR website confirm that the trend has continued.

Schrittwieser pointed out that the newest models, including Grok 4, Opus 4.1, and GPT-5, not only continue the trend but sit slightly above it: "these latest models are actually slightly above the trend and can now perform tasks for over 2 hours!"
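As a rough illustration of what such a trend implies (the notation here is ours, not the article's or METR's): if H_0 is the task length a model can handle at a 50% success rate today, and d ≈ 7 months is the doubling time, then a model released t months later should handle tasks of length

  H(t) = H_0 · 2^(t/d)

In other words, every 7 months the autonomous task horizon doubles, so 14 months out it is roughly 4× today's.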

Beyond Code: Catching Up with Human Experts in 44 Professions

In response to the objection that "AI only excels at software engineering," Schrittwieser cited GDPval, an assessment released by OpenAI. It measures model performance on a broader range of economically valuable work, covering 44 professions across 9 industries, with tasks written by industry experts averaging 14 years of experience.

The results showed a similar trend. Schrittwieser wrote that the latest GPT-5 is "astonishingly close to human performance."

More strikingly, Claude Opus 4.1, released before GPT-5, did even better in this assessment, with performance "almost matching that of industry experts." Schrittwieser commented: "I want to particularly commend OpenAI for releasing an assessment showing that another lab's model surpassed their own; this is a good sign of integrity and concern for beneficial AI outcomes!"

Looking Ahead to 2026: A “Key Year” for AI Economic Integration

Given exponential growth that has held for multiple years and across industries, Schrittwieser argues it would be "extremely surprising" if the improvements suddenly stopped. Extrapolating the trend, he offers a clear set of predictions (a back-of-the-envelope check of the arithmetic follows the list):

  • By mid-2026, models will be able to work autonomously for an entire workday (8 hours).

  • By the end of 2026, at least one model will reach the performance level of human experts across many industries.

  • By the end of 2027, models will frequently surpass experts in many tasks.
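As a sanity check using the illustrative formula above (our arithmetic, not the article's): solving H_0 · 2^(t/d) = 8 hours with H_0 ≈ 2 hours gives t = d · log2(4) = 2d, i.e. two doublings. At the headline 7-month doubling time that lands in late 2026, so the mid-2026 figure implicitly leans on the slightly faster, above-trend pace Schrittwieser notes in the newest models.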

He concluded that simply extrapolating the trend lines may beat expert forecasts:

This may sound overly simplistic, but extrapolating a straight line on the chart may give you a better prediction of future models than most "experts" can offer, even genuine experts in the field!