
After becoming the "first stock of large models," the chairman of KNOWLEDGE ATLAS speaks publicly for the first time: on "2513" and on "burning money while generating blood"

KNOWLEDGE ATLAS was listed on the Hong Kong Stock Exchange on January 8 under the stock code "2513," rising 3.27% on its first day to a market capitalization of HKD 52.8 billion. The IPO raised over HKD 4.3 billion, with the Hong Kong public offering subscribed 1,159.46 times. KNOWLEDGE ATLAS was founded in 2019, originating from AMiner, a research platform established at Tsinghua University in 2006. The company carries the ambitions of China's AI industry while facing the challenges of high computing costs and an unproven business model.
On January 8, KNOWLEDGE ATLAS began trading in Hong Kong; the last four digits of its stock code, "2513," sound in Chinese like "AI I live my life."
KNOWLEDGE ATLAS opened its first day up 3.27% at HKD 120 per share, for a market capitalization of HKD 52.8 billion. In the IPO, the Hong Kong public offering was subscribed 1,159.46 times and the international offering 15.28 times. At the issue price of HKD 116.20 per share, KNOWLEDGE ATLAS raised over HKD 4.3 billion (before exercise of the "greenshoe" over-allotment option).
KNOWLEDGE ATLAS was officially established in 2019, but its story traces back to 2006. That year, Tsinghua University's KEG (Knowledge Engineering Group) released AMiner, a research-intelligence mining platform that used artificial intelligence to mine the objective laws of scientific development. The system covers 220 countries and regions, has accumulated more than 10 million visits, and has become an important tool for researchers worldwide.
This system eventually emerged from the laboratory in 2019, becoming the technical gene of KNOWLEDGE ATLAS AI.
At the beginning of 2026, KNOWLEDGE ATLAS's story reached a highlight moment.
Carrying these technical genes, the company used a string of numbers to make its promise, "a lifetime devoted to the AGI dream," and, under the halo of being "the world's first stock of large models," awaited the reality check of the capital market.
But this was destined not to be an easy coronation.
From the moment the prospectus was released, the world gained its first opportunity to transparently and comprehensively examine the business model of a large model enterprise.
Under the pressure of high computing costs, can the marginal cost of Token achieve the scale effect of the internet? What exactly is the core offering of a foundational large model company—scarce "intelligence" or a disguised "resale of computing power"? Will the MaaS (Model as a Service) of the AI era fall into the growth dilemma once faced by SaaS (Software as a Service)? When will the huge R&D investment and current massive losses be transformed into positive commercial returns?
Beyond its commercial significance, KNOWLEDGE ATLAS also carries, to some extent, the ambitions of China's AI industry, and has been half-jokingly called the "captain of the global AI competition." KNOWLEDGE ATLAS AI has closely tracked OpenAI's rhythm of model releases and aligned its model matrix with OpenAI's layout. In a report titled "Chinese Progress at the Front," OpenAI explicitly noted that KNOWLEDGE ATLAS has made significant progress along multiple dimensions and listed it as a core competitor in the sovereign AI competition.
Going public is an important milestone; "no matter how much money KNOWLEDGE ATLAS raises or how much profit it makes, it is actually just the fare on the road to AGI." This statement made by KNOWLEDGE ATLAS's management during external communications has left a deep impression on the industry.
With the title of "the world's first stock of large models," KNOWLEDGE ATLAS also assembled a star-studded cornerstone investor lineup, including core Beijing state-owned capital, leading insurance funds, large public mutual funds, star private equity funds, and industrial investors. A total of 11 cornerstone investors, including JSC International Investment Fund SPC, JinYi Capital Multi-Strategy Fund SPC, and Perseverance Asset Management, collectively subscribed for HKD 2.98 billion.
As the new year of 2026 begins, under the spotlight and pressure, KNOWLEDGE ATLAS stands at a critical turning point in transforming large models from "technically usable" to "practically usable." Before officially ringing the bell, Chairman Liu Debing of KNOWLEDGE ATLAS shared in-depth how the company plans to build a verifiable and sustainable business story in the future.

01 The AGI Marathon Has Reached L3
Q: The code for KNOWLEDGE ATLAS is quite interesting, 2513, which sounds like "AI I live my life"?
Liu Debing: We hope this is an Easter egg from KNOWLEDGE ATLAS for all AGI believers. On the long journey from L1 to L5, we still need this romantic belief. At the same time, we want to tell the world that we are marathon runners who will dedicate our lives to the cause of "making machines think like humans and using reliable AI to make humanity better."
Q: From KNOWLEDGE ATLAS's definition, what stage is AGI currently at?
Liu Debing: We are currently at the L3 (Level 3) stage. The core feature of this stage is the initial manifestation of agents with autonomous learning capabilities. AI has not only become "usable" in multiple fields but is also becoming increasingly "practically usable."
Although we are still some distance from fully achieving the AGI goal of thinking like humans, we are in this critical transition process.

Q: KNOWLEDGE ATLAS has become the "first stock of global large models." What are the key metrics behind achieving excellent results in this major test?
Liu Debing: Regarding the IPO, we believe the most important aspect is the practical test of whether "technical logic can run through commercial logic." When we review internally, we mainly look at the performance in these three dimensions:
First, we examine revenue structure and growth quality. From 2024 to 2025, KNOWLEDGE ATLAS's compound growth rate reached 130%. Behind this, three lines of business gained momentum at once: cloud MaaS, subscription services, and demand for enterprise localization (on-premises) deployment.
Second, we look at the position of the underlying technology. For example, in terms of evaluation data, GLM-4.7 scored 68 points on the Artificial Analysis leaderboard, currently ranking first among domestic models and open-source models, and sixth globally.
Third, we assess the actual penetration of the ecosystem. In 2025, we stepped up our open-source efforts, especially around core capabilities like AutoGLM. The cumulative download volume of the KNOWLEDGE ATLAS open-source series has now exceeded 60 million worldwide. This figure means the model has truly entered developers' workflows.
We will continue to monitor these indicators in the future.
Q: There’s a half-joking saying in the industry that the benchmark for large models has been "worn out." Besides "ranking," what else do you think can reflect the technical value?
Liu Debing: Benchmarks still have reference value; a good foundational model will certainly not perform poorly on them. But we also weigh more comprehensive standards: a model's performance in real-world applications, and whether it is consistently chosen and validated by global developers, matters more.
Q: What is the international developer's evaluation of the KNOWLEDGE ATLAS model?
Liu Debing: The feedback has been quite positive. Products such as the coding application Windsurf and the cloud platform Vercel have integrated KNOWLEDGE ATLAS's GLM models. On OpenRouter, the global model-invocation leaderboard, KNOWLEDGE ATLAS ranks first among domestic models by paid invocation volume. The GLM Coding Plan has been live for only two months, yet more than 150,000 developers worldwide are paying for it, with annual recurring revenue (ARR) already exceeding 100 million.
Q: Is this due to cost-effectiveness or technical value?
Liu Debing: I think it's a combination of both. The technology must be good first, at least reaching first-tier levels, and only then do people weigh cost. For developers, who are in a sense the "ticket holders" of the large-model field, model capability matters most; for commercial use, cost-effectiveness carries more weight.
KNOWLEDGE ATLAS's pricing remains very advantageous; for example, its API invocation price is only about one-seventh of Claude's.
Q: In the open-source community, after releasing a new model, is the traffic organic, or is there systematic proactive operation?
Liu Debing: It's mostly organic; we won't invest too many resources in operations.
Q: KNOWLEDGE ATLAS once fully benchmarked OpenAI on the model matrix, basically keeping pace with their release of new models. In the future, how does KNOWLEDGE ATLAS consider its differentiated path for the Chinese market?
Liu Debing: This is a very critical question. In the early stages, we did benchmark OpenAI comprehensively because, at present, large models are still the cutting-edge paradigm in AGI technology, and OpenAI is at the forefront of this paradigm.
But in the future, we won't make a single choice between globalization and differentiation in the Chinese market.
China has the most complex and dense real application scenarios in the world, which determines that we will naturally form a technical orientation different from overseas companies in terms of model safety, low hallucination rates, and industry adaptability. This general capability honed in complex scenarios can constitute our unique advantage.
On the other hand, large model companies must have a global perspective from the very beginning. The essence of AGI is general capability; foundational models cannot exist for a single market alone. Currently, we can provide code generation capabilities close to international first-tier levels at prices significantly lower than comparable closed-source models, and we already possess international competitiveness in cost, efficiency, and engineering capability.

In terms of going global, we initiated and lead the "International Co-construction Alliance for Autonomous Large Models," collaborating with ASEAN and several countries along the "Belt and Road" to jointly build controllable national-level AI infrastructure. We help friendly countries create their own "digital sovereignty large models," which have already been deployed in multiple countries.
Q: How do you understand "digital sovereignty large models"?
Liu Debing: The core logic is consistent with the global AI cooperation initiative initiated by our country: emphasizing respect for the sovereignty and cultural values of each country, and on this basis, promoting AI to benefit the world, allowing all humanity to share the dividends of the technological revolution.
Under this principle, we are not just exporting models, but promoting comprehensive solutions that include models, computing power, data, and applications, engaging in deep cooperation with friendly countries.
Q: Is there a selection of regions for going global?
Liu Debing: We are looking at the global market, but will prioritize countries with closer cooperation like ASEAN and the Belt and Road, and will promote globally in the future.
Q: Has KNOWLEDGE ATLAS narrowed the gap with the world's top models?
Liu Debing: From the beginning, we said that achieving AGI is a long-distance race, and there are still many technical gaps in this field. However, we have discovered a new pattern: new models are emerging one after another, and the iteration speed is extremely fast. Whenever a leading model is released, it often only maintains a short-term advantage before a new model surpasses it.
Currently, there are indeed differences in the technical levels among mainstream AI companies, but there has not been a situation where one has "pulled far ahead, making it impossible for others to catch up."
In this process, Chinese large model companies have performed excellently, and we have already matched international mainstream models, with no significant gap in technical levels, always keeping pace with the world's forefront.
Of course, we also feel the pressure, especially in terms of computing power, data resources, and the scale of financial investment, where foreign models have superior foundational conditions. However, our advantages are also evident; China has richer application scenarios.
Q: Does the narrowing of this gap also indirectly prove that the technological iteration curve of pre-trained models has slowed down? Is there still cost-effectiveness in the huge investment in pre-training?
Liu Debing: There is actually no factual basis for this. We see that pre-training still brings significant performance improvements, and the flagship models continuously released by leading companies recently confirm this.
However, the current market is different from the "hundred model battle" explosion period of 2023; it is entering a stage of accumulation and differentiation. Companies that excel in the foundational layer will continue to deepen their pre-training efforts, while those skilled in applications will shift towards the application layer, which is a reasonable differentiation.
But we will definitely continue to invest firmly in pre-training. Foundational pre-trained models determine the upper limit of intelligence, and the long-term returns on that investment are clear. At the same time, we are indeed prioritizing scaling on the inference side, since models need stronger "online inference" and "slow thinking" capabilities to find optimal solutions in unsupervised tasks or complex environments.

Q: You have mentioned "self-adjusting parameters" multiple times before. Do you think this is a distant vision, or can we expect preliminary implementations in the near future (for example, by 2026)?
Liu Debing: "Self-adjusting parameters" is a crucial step in the evolution of models and can even be seen as a core indicator of L4-level intelligence. Currently, many deep applications still require tuning by technical personnel from large model companies to achieve ideal results. Once models gain the capability of self-adjusting parameters, users can drive the model's autonomous iteration through continuous interaction and feedback during actual use.
This self-evolution capability could potentially trigger explosive growth in applications.
However, there is no clear timeline yet; this is one of the core technologies KNOWLEDGE ATLAS is currently striving to tackle.
02 MaaS Is "Selling Intelligence," Not "Selling Computing Power"
Q: From a business model perspective, is MaaS seen as an important growth pole for KNOWLEDGE ATLAS in the future? Is its essence selling computing power or selling intelligence?
Liu Debing: Definitely, because it is the path with the lowest marginal cost and the strongest scalability effect in the commercialization of large models.
I believe the essence of MaaS is still selling intelligence, not computing power. If it were selling computing power, it would be a business model of the previous cloud infrastructure, which follows a heavy asset return logic.
The core value of MaaS lies in the fact that customers pay to obtain the model's understanding, reasoning, or decision-making capabilities for complex logic. This is actually quite understandable; I think of it like electricity or water.
Computing power is more like the operational equipment of hydropower stations, while AI capability is the "water" and "electricity" flowing within it. Although the two are closely intertwined, water and electricity themselves are the core value independent of the infrastructure.
This is our core thinking logic: AI capability must be output to every terminal through MaaS, making it the most essential production factor in the future intelligent society, rather than just a single computing resource.
Q: But this actually involves two challenges. In the era of large models, generating tokens has a clear "hard cost" of computing power. This means that its marginal cost reduction may not be as significant as in traditional internet or software industries. Will this business model make profitability more difficult?
Liu Debing: Currently, the core cost of large models is indeed computing power. However, from the logic of MaaS (Model as a Service) itself, its marginal cost is actually very low because it has the liquidity of "water flow," allowing for rapid and unlimited replication.
Regarding computing power costs, with the improvement of domestic computing capabilities and the increasing efficiency of computing chips, the computing power cost required to generate one token is rapidly decreasing.
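The arithmetic behind that claim can be made concrete. Below is a minimal sketch of token unit economics; all figures are hypothetical illustrations, not KNOWLEDGE ATLAS or market data:

```python
# Illustrative unit economics for token generation (hypothetical numbers):
# cost per token = hourly accelerator cost / tokens served per hour.

def cost_per_million_tokens(gpu_hourly_cost_usd: float,
                            tokens_per_second: float) -> float:
    """Serving cost in USD per one million generated tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_cost_usd / tokens_per_hour * 1_000_000

# A hypothetical accelerator at $2.00/hour sustaining 2,000 tokens/s:
baseline = cost_per_million_tokens(2.00, 2000)   # ≈ $0.28 per 1M tokens

# Doubling throughput via more efficient chips or kernels halves the
# per-token cost, the "rapidly decreasing" dynamic described above.
improved = cost_per_million_tokens(2.00, 4000)
assert abs(improved - baseline / 2) < 1e-9
```

The same relation shows why model-chip co-optimization matters: per-token cost falls in direct proportion to throughput gains, so an order-of-magnitude efficiency improvement yields an order-of-magnitude cost reduction.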
There is also a more ultimate idea. When the model architecture stabilizes, we can conduct proprietary optimizations for specific models, which is what we are promoting with our "integrated computing" work. By deeply binding the model to computing chips, it is possible to reduce costs by tens or even hundreds of times.

Q: Do you think MaaS will encounter the same challenges as the SaaS era, such as poor payment habits and difficulty in scaling?
Liu Debing: I believe there are significant differences between the two, primarily because in the AI era, "AI" will become the foundational infrastructure for all devices, potentially changing many things.
When AI deeply integrates into production and daily life and evolves into an indispensable infrastructure, users' willingness to pay will undergo a fundamental shift. This payment model is likely to be embedded within specific application scenarios. Due to the massive business flow and deep integration into daily processes, users' "perception" of payment will not be particularly strong, much like how you currently pay for call charges.
In the past internet era, many SaaS products offered tool-level cooperation at the application layer. Although useful, these tools never reached "must-buy" necessity, so users preferred free alternatives, and vendors sometimes had to fall back on a "the wool comes from the pig" model, monetizing users indirectly through third parties.
Q: For enterprises, is it really a pressing moment of "if you don't adopt AI, you will fall behind"?
Liu Debing: You can feel this urgency just by looking at the current growth data of AI applications. AI has already become a core national strategy. Not just AI companies, but various industries are rapidly considering the introduction of AI. Many enterprises have already experienced tangible benefits after trying it out.
A few days ago, I noticed data from the China Bidding Network showing that the number of bidding projects related to AI applications has increased by about 390%. From our own MaaS business, usage has also grown tenfold.
This is an "explosive" trend.
The internet, finance, and education sectors are progressing very quickly due to a solid digital foundation. Traditional sectors like energy and manufacturing are also starting to gain momentum, and related applications are becoming increasingly numerous. The actual demands we encounter in daily life are also very broad.
Overall, I believe that applying AI across various fields is already a settled matter, and this process will only accelerate.
Q: In the future, will the process of intelligence slow down in industries without a digital foundation?
Liu Debing: It does take time for AI to penetrate traditional industries, but AI has a very important characteristic that is completely different from traditional software services.
In the past, when developing software, if you created a system for one industry or scenario and wanted to switch to another, you basically had to start from scratch. However, the greatest capability of AI lies in its generalizability. Once we run a model successfully in one industry, its core capabilities can be transferred across industries.
Therefore, the speed at which AI penetrates traditional industries will be faster than many expect. We are even considering longer-term "unmanned industries," such as unmanned agriculture and deep-sea exploration. These high-risk or highly repetitive fields are precisely where AI can shine and demonstrate its core value.

Q: For small enterprises that are sensitive to costs, will their willingness to pay, and the decision-making cost of paying, be higher?
Liu Debing: In fact, cost-sensitive enterprises should, in the long run, pay even more attention to AI, because one of AI's most important features is improving the efficiency of production and daily life, which translates directly into lower costs.
Deep industry transformation does require heavy R&D investment, but for use in a specific scenario, open-source models with simple adaptation can often yield very good results.
Q: Should we choose open-source models or directly choose MaaS?
Liu Debing: Each has its advantages. The biggest benefit of choosing open-source models is the ability to iterate independently. If certain open-source models are highly compatible with specific business scenarios, enterprises can deploy them directly and carry out secondary development.
Using a MaaS platform mainly allows you to enjoy the benefits of rapid technological iteration at any time. Whenever a new flagship model is released, it will be integrated into the platform immediately. Some business points that previously did not perform well on old models often see immediate improvements when switching to new models.
Q: Currently, the revenue share from localized industrial cooperation is higher. Can we expect an explosion of MaaS in the next year?
Liu Debing: Industrial cooperation itself can bring considerable cash flow, which is quite different from previous software customization, as mentioned earlier regarding "universality." If I can apply it in one scenario within an industry, I can scale it up within that industry, which brings very significant gains.
Additionally, we also believe that AI, as a future infrastructure, does not only serve our daily office needs in a C-end manner.
It also has a significant role in impacting our production and manufacturing processes, and at this point, it is necessary to enter the industry, which I think is also a very important direction.
Currently, in terms of commercial revenue, the industrial side is actually larger. MaaS is currently priced relatively low, aiming to increase volume and attract more users, but the growth of MaaS is very rapid.
Q: Will the future revenue structure change to have a higher proportion of MaaS?
Liu Debing: From the company's development principles, these two areas are currently balanced. In terms of trends, the growth of MaaS is very rapid, and it is entirely possible that it will exceed localization, accounting for more than 50% or even more.
Q: Is the "localization" model a necessary path for future scalable growth, or is it just a "detour" taken at this stage to obtain revenue and cash flow?
Liu Debing: I believe both are true. First, localized projects have extremely high practical value. Industries such as finance, electricity, and government affairs place the highest demands on large models, which must not only pass muster in understanding, reasoning, and stability, but also meet hard requirements such as safety compliance, auditability, and low hallucination. When a model is repeatedly validated in these industries, it essentially completes high-intensity training of its general capabilities. Then, by solving the execution challenges of complex business flows, it can further reach massive numbers of terminals and drive large-scale use by developers.
More importantly, there exists a positive iterative cycle between localization and scaling: through the in-depth validation of localized applications, the real user feedback we collect can effectively drive the capability evolution of the MaaS (Model as a Service) platform and the underlying base model.
As the model's capabilities enhance, its adaptability and suitability for different scenarios will also improve.
When this iteration continues, the originally heavily customized demands will gradually become standardized, significantly reducing the difficulty and cost of research and development. Therefore, localization is not an isolated "detour."
03 Model as Product, Intelligence Level is the Core Metric for Measuring Model Capability
Q: In the past, people often criticized general large models for their homogeneity, but after a year of development, we see that various models have begun to differentiate with highly recognizable characteristics. From the perspective of KNOWLEDGE ATLAS, what aspects will the product strength of large models mainly reflect in the future?
Liu Debing: We have always believed that "model is product," and the goal is to enable general models to adapt to various complex application scenarios. In this process, the core metric for measuring the enhancement of model capabilities has always been the evolution of intelligence level.
This enhancement of intelligence level is specifically reflected in the model's deep understanding of human intentions, precise perception of complex scenarios, and the ability to interact efficiently with the environment while executing tasks. These constitute the core competitiveness of general models.
On top of this core capability, we believe that combining general models with specific industries or application scenarios with scaling potential is a highly valuable direction. By implementing necessary constraints and targeted optimizations within specific fields, large models can achieve more ideal results in practical applications.
Q: What is your view on the controversy surrounding the massive losses of large model companies?
Liu Debing: The main reason for the losses lies in huge R&D investment and purchases of computing power services; KNOWLEDGE ATLAS's prospectus discloses detailed data. R&D investment in the first half of 2025 was 1.5947 billion, and cumulative R&D investment over the reporting period was about 4.4 billion. R&D spending goes mainly to purchasing computing power, which accounts for 71.8% of total R&D expenditure.
However, this is also the norm in the industry. Domestic listed internet companies maintained a high growth trend in capital expenditure in the first half of 2025. For example, Alibaba plans to invest over 380 billion in cloud and AI hardware infrastructure over the next three years, which exceeds the total of the past decade.
Therefore, the cost of computing power is one of the main reasons for strategic losses. However, the cost of computing power is continuously decreasing, which is also a trend.
Q: What is the current "AI concentration" level across the industry? From the perspective of the Chinese market, will the growth slope remain steep?

Liu Debing: We believe the industry is at a "critical point" of transformation from quantitative to qualitative change. If large models are viewed merely as a technological wave, there will be peaks and valleys; but viewed as a technological revolution, they will open unprecedented new spaces and markets. We firmly believe 2026 will be a key year for the development of AGI, and AI development will accelerate thereafter. Not only will the concentration of AI in industry keep rising, but a large number of AI-native applications will also emerge.
Q: What technical challenges need to be tackled for large models to evolve from "usable" to "user-friendly" by 2026?
Liu Debing: We need to promote the evolution of models from L3 to L4, enhancing the model's intent understanding ability, self-adjustment capability, and achieving self-iteration in applications. While iterating the base model, we can directly optimize through the application layer. For example, by increasing the knowledge base and setting business logic, we can make the intelligent agent "user-friendly" in specific scenarios.
Q: Recently, the open-sourcing of KNOWLEDGE ATLAS's AutoGLM has attracted attention. Will mobile phones be the first client-side scenario to explode? Will hardware devices be launched in the future?
Liu Debing: The open-source of AutoGLM indeed provides developers with great freedom. They can now deploy it locally, fully control their data and processes, or use it in the cloud immediately. They can conduct secondary development based on specific scenarios, deeply integrating it into their own products to create assistants that can truly "execute tasks."
Client-side models are closer to customers and scenarios, an important way to make AI tangible. KNOWLEDGE ATLAS was also among the earliest large-model vendors in China to invest in client-side models.
I believe that scenarios such as mobile phones, smart cars, smart homes, and smart offices all have the potential to explode as long as they incorporate Agents (intelligent agents). As for which field will explode first, it has the flavor of "experimental science." The development of AI this time cannot rely solely on theoretical deduction; it places more emphasis on practical execution. As long as the direction is reliable and sufficient resources and talents are invested to tackle challenges, breakthroughs are possible; conversely, if investment is insufficient or only superficial, even the best opportunities may be missed.
Our core strategy has always been to define and enhance the intelligence ceiling of models. Our current positioning is very clear: we mainly act as a foundational technology enabler, collaborating with terminal hardware manufacturers to inject our model capabilities into their products, thereby generating better application effects. We prefer to empower partners rather than develop hardware terminals ourselves.
Q: Talent is scarce in the field of large models. What type of talent does KNOWLEDGE ATLAS value the most?
Liu Debing: I agree that for large model companies, what truly determines the ceiling is not the size of the team, but the density of talent.
KNOWLEDGE ATLAS currently values the combination of three types of capability: originality, the ability to propose new paradigms at the level of algorithmic architecture; engineering capability, the ability to deploy cutting-edge models stably and efficiently in real, complex scenarios; and technical faith, meaning long-termism and the willingness to invest continuously toward the long-term goal of AGI.

Q: Are you worried about talent loss? Global companies are recruiting talent at "high prices."
Liu Debing: Competition for top talent has always existed, but the stability of KNOWLEDGE ATLAS's core team is very high. We have always believed that retaining talent cannot rely on compensation alone. KNOWLEDGE ATLAS has a very pure atmosphere, and the team's genes come from Tsinghua's Knowledge Engineering Group, which has always fostered an environment of freedom, truth-seeking, and flat, open exploration.
Additionally, there is certainly a mechanism for profit sharing, such as highly competitive salaries and a comprehensive long-term equity incentive plan.
We also provide ample computing power support for R&D personnel, as well as a complete feedback loop from the laboratory to the users. Scientists here have a high degree of freedom in exploring cutting-edge technologies and can access a full-stack technology system.
Q: Five years from now, when people mention KNOWLEDGE ATLAS's code "2513," what do you hope they think of?
Liu Debing: We hope "2513" becomes synonymous with inclusive intelligence in the AI era. It is not just a stock code; it is an AGI system that can self-evolve and is full of human warmth. We hope that five years from now, complex intelligence will no longer be the privilege of a few but a right accessible to everyone.
When people think of 2513, they should think of it as a representative of Chinese strength and an original technology company moving towards the future of AGI.
Risk Warning and Disclaimer
The market has risks, and investment requires caution. This article does not constitute personal investment advice and does not take into account the specific investment goals, financial situation, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article align with their specific circumstances. Investment based on this article is at the user's own risk.
