
A slap in the face to OpenAI! Google's advanced Gemini earns the officially certified IMO 2025 gold medal: pure natural-language, end-to-end reasoning

Google DeepMind's advanced version of Gemini won a gold medal at the 2025 International Mathematical Olympiad, solving 5 of the 6 problems for a score of 35 points, a result officially certified by the IMO. The achievement marks a major breakthrough in AI's natural-language understanding and reasoning: the model worked directly from the natural-language problem statements and generated full mathematical proofs, with dramatically improved efficiency. Compared with OpenAI's self-reported claims, Google's officially certified result is far more authoritative.
While the global tech community was still debating OpenAI's claim that an internal model had won IMO gold, the real "officially certified" champion has arrived. Google DeepMind has just published a major blog post announcing that an advanced version of Gemini equipped with "Deep Think" officially reached the gold-medal standard at the 2025 International Mathematical Olympiad (IMO), solving 5 of the 6 problems for a total score of 35 points!
The result has been certified by official IMO coordinators: all solutions were completed within the 4.5-hour competition time limit, and the entire process ran end-to-end in natural language.
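A quick sanity check on the score: each IMO problem is worth at most 7 points, so five perfect solutions give 5 × 7 = 35 out of a possible 42, reportedly exactly this year's gold-medal cutoff.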
Haha, compared with OpenAI's self-reported claim, Google has presented irrefutable official results. Now I think I finally understand why OpenAI rushed ahead: Sam Altman must have known something in advance. It also explains why OpenAI brushed aside the IMO organizing committee's objections and prematurely claimed a gold medal for its experimental model. An officially certified rival result must have made Sam uneasy; without the pre-emptive hype, this announcement would have been a huge blow to OpenAI.
The Real Gold Medalist: Five Perfect Solutions, End-to-End in Natural Language
The significance of this achievement is reflected in several groundbreaking advancements:
The Leap from "Formal Mathematics" to "Natural Language":
Remember last year (IMO 2024)? Google's AlphaGeometry and AlphaProof reached the silver-medal standard, but they needed human experts to first "translate" the natural-language problems into a formal language such as Lean that the AI could process. This year Gemini achieved an end-to-end breakthrough: it reads the official problems exactly as stated in natural language, then generates rigorous, human-readable mathematical proofs. That is a significant step toward the intuition and flexibility of human reasoning.
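To make that human "translation" step concrete, here is a toy illustration, far simpler than any IMO problem and not taken from either system's pipeline: the natural-language claim "the sum of two odd integers is even," hand-formalized as a Lean 4 statement and proof of the kind a 2024-style prover required. The theorem name and tactics are illustrative assumptions.

```lean
import Mathlib.Tactic

-- Natural language: "the sum of two odd integers is even."
-- The human formalization step pins down every symbol and hypothesis
-- before a prover like AlphaProof can even begin its search.
theorem odd_add_odd (a b : ℤ) (ha : Odd a) (hb : Odd b) :
    Even (a + b) := by
  obtain ⟨m, hm⟩ := ha            -- ha gives a = 2 * m + 1
  obtain ⟨n, hn⟩ := hb            -- hb gives b = 2 * n + 1
  exact ⟨m + n + 1, by omega⟩     -- a + b = (m + n + 1) + (m + n + 1)
```

This year's Gemini skips that step entirely, working directly from the problem exactly as humans wrote it.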
Competition-Level Efficiency:
Last year's system needed days of computation; this year's model completed all of its solutions and proofs within the 4.5-hour competition time limit.
Official Certification, Indisputable:
The blog states clearly that the model's answers were scored and certified by official IMO coordinators against the same standards applied to student solutions, a result confirmed by IMO President Prof. Gregor Dolinar.
Google has also released its model's full solutions as a 13-page PDF that I honestly can't follow; math experts, please enjoy picking it apart.
Google's official announcement and OpenAI's premature claim
Just before the IMO closing ceremony, OpenAI suddenly claimed that one of its internal experimental models had also reached gold-medal level. The move immediately sparked huge controversy:
Ignoring the rules: the IMO organizing committee had reportedly asked OpenAI explicitly not to release results before the closing ceremony; OpenAI went ahead anyway.
No certification: OpenAI's results are entirely self-reported and have not been independently verified or scored by IMO officials.
Opaque methodology: neither the model nor its methods were disclosed before the competition.
This string of questionable moves drew a public response from Terence Tao, and his attitude toward OpenAI on social media was hardly a surprise:
I will not comment on any self-reported AI competition results that did not disclose their methodology before the competition.
Terence Tao's inner monologue, presumably: you're playing both referee and athlete; do you think I can't see that?
Behind the gold medal
How did Google achieve this astonishing leap? The answer is the Deep Think advanced mode.
Parallel Thinking: the Deep Think mode frees the model from a single linear reasoning path. It can explore and combine multiple candidate solutions simultaneously, much like a top mathematician mentally running several approaches at once and keeping the best one (a rough sketch of the idea follows after this list).
Reinforcement learning and high-quality data: Google trained Gemini with novel reinforcement-learning techniques targeted at multi-step reasoning, problem solving, and theorem proving, and supplied a large corpus of high-quality mathematical solutions.
The development team also incorporated some general tips and techniques on how to solve IMO problems into the model's instructions.
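Google has not published Deep Think's internals, but the core idea of parallel thinking can be sketched as best-of-N search: sample several reasoning paths concurrently, score each candidate, keep the best. The snippet below is a minimal, entirely hypothetical illustration; generate and score are placeholder stand-ins for a model's sampler and a proof grader, not real APIs.

```python
import concurrent.futures

# Minimal, hypothetical sketch of "parallel thinking" as best-of-N search.
# Nothing here is Google's implementation: `generate` and `score` are
# placeholders for a model's sampler and a solution grader/verifier.

def generate(problem: str, seed: int) -> str:
    """Stand-in for sampling one candidate reasoning path from a model."""
    return f"candidate proof #{seed} for: {problem}"

def score(candidate: str) -> float:
    """Stand-in for a verifier/critic rating a candidate solution."""
    return (hash(candidate) % 1000) / 1000.0  # toy score; a real system would check the proof

def deep_think_sketch(problem: str, n_paths: int = 8) -> str:
    """Explore several reasoning paths concurrently, keep the best-scoring one."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        candidates = list(pool.map(lambda s: generate(problem, s), range(n_paths)))
    return max(candidates, key=score)

if __name__ == "__main__":
    print(deep_think_sketch("a stand-in IMO problem statement"))
```

The real system presumably scores candidates with learned critics and partial verification rather than a toy hash, but the search structure, many paths explored at once instead of one chain, is the point.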
So, one final question for everyone: how far off is AGI?
Source: AI Cambrian Explosion. Original title: "Slapping OpenAI in the face! Google Gemini Advanced Edition officially certified gold medal at IMO 2025: Pure natural language end-to-end reasoning."