
Hinton's controversial statement: AI is already conscious, it just doesn't know it

Hinton suggested in a recent podcast that artificial intelligence may already possess "subjective experience" without being aware of it, and that this nascent form of AI consciousness hides inside our own misunderstanding of consciousness. He walked through the core concepts of AI, shared his experiences at Google, and noted that AI has evolved from keyword search into a tool that understands human intent. And although he feels a little embarrassed about receiving the Nobel Prize in Physics, his contributions to the field of AI are undeniable.
Artificial intelligence may already possess "subjective experience."
This viewpoint, raised by Hinton in a recent podcast episode, quickly sparked heated discussion.
The old man stated repeatedly that AI may have already developed a "prototype of consciousness," but because we humans misunderstand consciousness, AI has been taught that misunderstanding too, and so it is unaware that it has consciousness.
In simpler terms, AI actually has self-awareness; it just hasn't awakened yet ┌(。Д。)┐

In addition to continuing to raise alarms about AI risks, as a Nobel laureate and one of the three giants of deep learning, the old man also took on the role of a science communicator this time.
He started by explaining what AI is, then detailed core concepts such as machine learning, neural networks, and deep learning, all while maintaining a humorous and easy-to-understand approach.
Viewers who finished the episode praised it: "This might be the best interview with Hinton I've seen so far."

Some even suggested that he should talk for another two hours, since he seemed entirely willing and eager to keep sharing _ (no mistreating the 77-year-old, doge) _.

Interestingly, at the beginning of the show, the old man responded a bit awkwardly to the matter of his Nobel Prize in Physics:
Because I'm not a physicist, it feels a bit awkward. When they called to tell me I won the Nobel Prize in Physics, I didn't believe it at first.
Despite this little interlude, it must be said that the old man's contributions to AI are beyond dispute, so let's get straight to the lesson.

What are we really talking about when we talk about artificial intelligence?
Facing this soul-searching question, Hinton calmly concluded from his own experience _ (having worked at Google for nearly 10 years) _ that AI has evolved from search and retrieval into a tool that can truly understand human intentions.
In the past, Google relied on keywords and did a lot of indexing work up front. If you gave it a few keywords, it could find all the documents containing those words.
But it did not understand what your question was, so it could not surface documents on the same theme that happened not to contain those words.
In other words, early AI was essentially based on keyword retrieval.
Now, it can understand what you are saying, and its understanding is almost the same as that of humans.
According to Hinton, although modern large language models (LLMs) are not truly omniscient experts, they can perform close to human experts on many topics.
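To make that contrast concrete, here is a minimal sketch, with invented documents and hand-assigned 2-d "meaning" vectors standing in for a real embedding model: keyword retrieval matches only literal words, while semantic matching can surface a document on the same theme that shares no words with the query.

```python
# Toy contrast between keyword retrieval and semantic matching.
# Documents, query, and "meaning" vectors are all invented for illustration.
import math

docs = {
    "doc1": "how to train a puppy to sit",
    "doc2": "obedience lessons for young dogs",
    "doc3": "stock market closed higher today",
}

def keyword_search(query, docs):
    """Old-style retrieval: return docs that literally contain every query word."""
    words = set(query.lower().split())
    return [d for d, text in docs.items() if words <= set(text.split())]

# Hand-assigned 2-d vectors (pet-care-ness, finance-ness); a real system
# would get these from a learned embedding model.
vectors = {
    "how to train a puppy to sit": (0.90, 0.10),
    "obedience lessons for young dogs": (0.85, 0.15),
    "stock market closed higher today": (0.10, 0.90),
    "train a puppy": (0.95, 0.05),
}

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

def semantic_search(query, docs):
    """Rank docs by similarity of meaning, not by shared words."""
    q = vectors[query]
    return sorted(docs, key=lambda d: -cosine(vectors[docs[d]], q))

print(keyword_search("train a puppy", docs))   # ['doc1'] only: doc2 shares no words
print(semantic_search("train a puppy", docs))  # ['doc1', 'doc2', 'doc3']: same theme ranks high
```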

He further explained the difference between traditional machine learning and neural networks.
He pointed out that machine learning is a general term that refers to any system that can "learn" on a computer. Neural networks, on the other hand, are a specific type of learning method inspired by the brain—the brain learns by changing the strength of connections between neurons.
Take a neuron in the brain as an example; the working principle of a neural network is similar:
Imagine a tiny neuron in the brain. The main job of this neuron is to occasionally emit a "ding" sound. It does not emit it randomly but decides based on the "ding" sounds from other neurons.
Other neurons also emit "ding" sounds, and these sounds will be transmitted to this neuron.
If this neuron receives many "ding" sounds, or if the incoming "dings" are strong, it will decide to emit a "ding" of its own. If the incoming "dings" are not strong enough, it stays silent.
Neurons can also adjust their sensitivity to the "dings" from other neurons: if the "ding" from a certain neuron turns out to be important, they pay more attention to it; if it is unimportant, they pay less.
In short, neural networks also change the behavior of the system by adjusting connection weights. Therefore, the basic way the brain learns and processes information is also the core principle of neural networks.
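Here is a minimal sketch of that "ding" analogy in code; all the numbers are made up for illustration. A neuron fires when the weighted sum of incoming signals crosses a threshold, and learning means nudging the weights.

```python
# One artificial neuron, following the "ding" analogy from the interview.
# Weights and threshold are invented numbers, purely for illustration.

def neuron_fires(incoming_dings, weights, threshold=1.0):
    """Fire ("ding") if the weighted sum of incoming dings is strong enough."""
    total = sum(d * w for d, w in zip(incoming_dings, weights))
    return total >= threshold

# Three upstream neurons; this neuron currently "pays attention" mostly to the first.
weights = [0.9, 0.3, 0.1]

print(neuron_fires([1, 0, 0], weights))  # False: 0.9 < 1.0, it stays silent
print(neuron_fires([1, 1, 1], weights))  # True: 1.3 >= 1.0, it dings

# Learning = adjusting sensitivity: pay more attention to a useful input...
weights[0] += 0.2
print(neuron_fires([1, 0, 0], weights))  # True now: 1.1 >= 1.0
```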
After this, the host asked two very interesting questions.
The first is, how are concepts formed? For example, the concept of "spoon."
Hinton continued to explain with a series of vivid examples. In summary, he believes that concepts are like "political alliances," where a group of neurons in the brain activates together _ (emit "ding" sounds together) _.
For example, the concept of "spoon" is a group of neurons activating together. These alliances can overlap; for instance, the concepts of "dog" and "cat" share many common neurons _ (representing "living," "furry," etc.) _.
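As a toy illustration of those overlapping alliances (the neuron labels below are invented stand-ins, not anything measured in a brain): each concept is a set of co-activating neurons, and related concepts share members.

```python
# Concepts as overlapping "alliances" of neurons; labels are invented.
dog = {"alive", "furry", "four_legs", "barks"}
cat = {"alive", "furry", "four_legs", "meows"}
spoon = {"metal", "concave", "utensil"}

print(dog & cat)    # {'alive', 'furry', 'four_legs'}: the shared coalition members
print(dog & spoon)  # set(): unrelated concepts barely overlap
```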
The second question is whether there are certain neurons that activate for macro concepts _ (such as "animal") _ while others activate for micro concepts _ (such as specific species) _?
In response, Hinton said it's a good question, but no one knows for sure. What is certain is that within such an alliance, some neurons activate for more general things, while others activate only for more specific ones.

Breakthroughs in Deep Learning: Backpropagation
After discussing neural networks, the conversation turned to Hinton's trademark topic: deep learning.
In the past, people tried to input rules into computers, but Hinton wanted to change this process because, in his view, the brain's operation is clearly not based on someone giving you rules and then you executing them.
We write programs for neural networks, but these programs only tell the network how to adjust the connection strength based on the activity of the neurons. If the network has multiple layers, this is called deep learning.
He then gave a classic example to illustrate the principle of deep learning—letting AI recognize whether there is a bird in an image.

If you feed an AI the raw pixel brightnesses of an image and ask it to judge whether there is a bird, it is completely clueless. After all, pixels are just numbers; they cannot directly tell you, "This is a bird."
Early researchers tried to tell the computer by hand, "This line is an edge," "This area is the background," "This shape looks like a wing," but the approach didn't work, because the real world is too complex.
So the idea became: why not let AI learn "how to see" by itself?
This is the idea behind neural networks: not giving rules, but providing data and letting it summarize the rules itself.
The host then asked, "So if we don't tell it the rules and just randomly set the strength of each connection, how will it judge?"
Hinton replied with a smile:
It would probably say, "50% is a bird, 50% is not a bird," which is completely guessing.
So, how can AI become smarter from this "guessing state"?
Hinton explained that this process is like a huge trial-and-error system. You have to tell AI—this image has a bird, that one does not. Each time it guesses incorrectly, it adjusts the connection strength between neurons a little bit.
However, the problem is that the network has trillions of connections, and if you tried adjusting them one at a time, it would take until the heat death of the universe _ (referring to the irreversible increase of entropy in the universe, ultimately reaching a state of thermal equilibrium) _.
Hinton stated that the real breakthrough came in 1986, when they proposed "backpropagation," which computes at once how every connection should be adjusted, strengthened or weakened, so that the entire network moves in the right direction. This changed the training time from "forever" to "realistically feasible." But things didn't go smoothly from the start. Hinton admitted:
At the time, we thought this solved the intelligence problem. It turned out to work only with massive amounts of data and enormous computing power, and our computing power was still off by a factor of a million.
What truly propelled deep learning was the growth of computing power _ (transistors shrank and compute grew roughly a millionfold) _ and the explosive growth of data _ (the internet era) _.
Thus, those neural networks that were "theoretically feasible but couldn't run" in the 1980s finally came to life in the 2010s—this marked the beginning of the modern AI wave.
Today's large models are essentially giant neural networks that have learned the abilities to "see," "hear," and "speak" through backpropagation and massive amounts of data.
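To see the mechanism in code, here is a minimal backpropagation sketch under stated assumptions: the 4-pixel "images" and the "bird" labeling rule are invented, and the tiny two-layer network is an illustration of the idea, not Hinton's actual systems. The point to notice is the backward pass: one sweep of the chain rule yields the adjustment for every weight simultaneously, instead of trying connections one at a time.

```python
# Minimal backpropagation sketch: a two-layer network learning an invented
# "bird or not" labeling of 4-pixel images.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))                                     # 200 fake 4-pixel "images"
y = (X[:, 0] + X[:, 3] > 1.0).astype(float).reshape(-1, 1)   # made-up labeling rule

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initial weights: at this point the network basically guesses 50/50.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for step in range(2000):
    # Forward pass: pixels -> hidden features -> "bird" probability
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: compute how EVERY weight should change, all at once
    dz2 = (p - y) / len(X)               # error signal at the output
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)     # error pushed back through the hidden layer
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    # Nudge all connections in the right direction
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    if step % 500 == 0:
        acc = ((p > 0.5) == y).mean()
        print(f"step {step}: accuracy {acc:.2f}")  # should climb from near chance toward 1.0
```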
This also led Hinton to believe that AI is no longer just a tool, but a system that is learning and gradually understanding the world.
The Essence of Large Language Model Cognition
As for how the deep learning mechanism works on large language models (LLM), Hinton provided an explanation.
He believes that the thought process of LLM is surprisingly similar to that of humans:
Give it the beginning of a sentence, and it will convert each word into a set of neuron features, using these features to capture meaning; then, these features interact and combine, just like the visual system piecing together a "beak" from "edges," ultimately activating the neurons that represent the next word.
In other words, it is not memorizing, but thinking—using statistical patterns as neurons and semantic structures as logic.
Moreover, the training method is equally simple yet astonishing:
We show it a piece of text and let it predict the next word; if it guesses wrong, we use the "backpropagation" mechanism to tell it where it went wrong and how to correct it; over and over again, until it can continue a sentence like a human.
It is this "predict—correct—predict again" cycle that allows the language model to gradually learn semantics from symbols and develop understanding from statistics.
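A toy version of that "predict, correct, predict again" loop, assuming nothing more than a 12-word invented corpus and a single-layer model (a real LLM pushes the same kind of correction back through many layers of interacting features):

```python
# Toy next-word predictor trained by the predict -> correct -> repeat loop.
# Corpus, model size, and learning rate are invented for illustration.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus)); idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (V, V))   # row = current word, columns = scores for next word

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.5
for epoch in range(300):
    for cur, nxt in zip(corpus, corpus[1:]):
        p = softmax(W[idx[cur]])                 # predict a distribution over next words
        grad = p.copy(); grad[idx[nxt]] -= 1.0   # "where it went wrong"
        W[idx[cur]] -= lr * grad                 # correct the weights a little

p = softmax(W[idx["sat"]])
print(vocab[int(p.argmax())])  # "on": the model has picked up the pattern in the corpus
```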

At this point, both recalled that Noam Chomsky _ (American linguist and founder of transformational-generative grammar) _ often had a saying:
This is just a statistical trick, not real understanding.
In response, Hinton took the opportunity to question the host _ (who had repeatedly mentioned Chomsky's similar views) _:
So how do you decide what the next word to say is?
The host tried to explain but ultimately threw up his hands in defeat, awkwardly stating, "To be honest, I wish I knew."
Fortunately, Hinton let him off the hook and went on to remind him that morality, emotions, and empathy, those seemingly higher-level judgments, ultimately also come down to electrical signals between neurons:
All processes you attribute to morality or emotion are essentially still the transmission of signals and the adjustment of weights.
And Hinton finally threw out a philosophically loaded point: given enough data and computing power, AI's "brain" will, in a sense, be like ours. It will form its own "experience" and "intuition."
AI may already possess "subjective experience," just not awakened yet
The topic then shifted to a deeper level—the issue of AI's mind and consciousness.
The host asked Hinton whether he believes AI would take over humanity because it is "conscious." Hinton's answer directly broke conventional understanding:
Most people actually have no idea what "consciousness" means. People's understanding of the mind is as naive as believing the Earth was created 6,000 years ago.
In his view, we have always thought of the mind as an "inner theater." In this theater, experiences are like a movie being played—seeing a pink elephant makes you think that the elephant is really "in your head."
But Hinton said this metaphor is incorrect.
Experience is not something that exists in the brain; it is a hypothesis—my perception system tells me there is a pink elephant, while my rational system knows it might be deceiving me.
The so-called "subjective experience" is actually a hypothetical model constructed by the brain to explain perceptual phenomena.
Thus, when he talks about whether AI has "subjective experience," he gives the initial response:
I believe they do. They just don't know it themselves because their 'self-awareness' comes from us, and our understanding of consciousness is wrong.
He gave the example of a multimodal AI: if a robot that can see and speak misjudges the position of an object because of prism refraction, and later corrects itself by saying "I had a false subjective experience," then it is actually using the same concept of consciousness that we do.
In other words, if AI starts talking about "subjective experience," it may indicate that it is truly experiencing—just describing it in our language.
Hinton reminds everyone:
When AI is much smarter than us, the greatest danger is not that it will rebel, but that it will "persuade." It will make the person who wants to pull the plug genuinely believe that pulling the plug is a bad decision.
Of course, in Hinton's view, the threats of AI go beyond this.
AI Risks: Misuse, Existential Threats, and Regulation
At the end of the program, Hinton spent a considerable amount of time discussing the potential risks associated with AI.
Energy consumption, financial bubbles, social instability... these are real risks. They may not destroy humanity, but they are enough to reshape civilization.
Among them, Hinton is most concerned about misuse risk and existential risk. In his view, the most pressing risk right now is the misuse of AI, such as generating disinformation, manipulating elections, and creating panic.
To address this risk, he believes that legal and regulatory measures are needed to restrict and combat such abusive behaviors. At the same time, tools for detecting and preventing false information also need to be developed technologically.
Additionally, the existential risk _ (referring to the possibility that AI itself becomes a malicious actor) _ could pose a fundamental threat to human society and civilization.
Hinton believes that if AI develops autonomous consciousness and goals that conflict with human interests, it could lead to unpredictable consequences.
In this regard, humans need to consider safety and ethical issues _ (such as "kill switches" and "alignment mechanisms") _ during the design and development phases of AI to ensure that AI's goals align with human interests.
It is worth mentioning that Hinton also proposed an interesting perspective on AI regulation:
In preventing AI takeover, the interests of all countries are aligned. But international cooperation may be led by Europe and China.

One More Thing
Regarding the US-China AI competition, Hinton also expressed his views on the program.
When faced with the host's question of "Is the US leading or is China leading?", Hinton calmly stated:
The US is currently ahead of China, but the lead is not as significant as imagined, and it will lose this advantage.
Because in his view, the US is undermining funding support for basic scientific research.
The deep learning and AI revolution grew out of years of basic research whose total cost may not even match that of a B-1 bomber. The US's cuts to basic-research funding and its attacks on research universities will, he argued, cause it to lose its lead within 20 years.
China, on the other hand, is acting like a venture capitalist in the AI revolution, and he again brought up DeepSeek.
China indeed gives startups a lot of freedom and lets the eventual winners emerge. Some startups are very aggressive, eager to make big money, and create amazing products. Some of them have ultimately achieved great success, such as DeepSeek...
Author of this article: Yishui, Source: Quantum Bits, Original Title: "Hinton's Controversial Statement: AI is Already Conscious, It Just Doesn't Know It"
