Elon Musk's bold claim: AI will surpass human general intelligence within 5 years, with a 20% chance of civilization ending by 2029! Yet Google keeps "playing with fire."

Wallstreetcn
2025.03.02 07:36

Elon Musk predicted on The Joe Rogan Experience podcast that AI could surpass human intelligence within the next 5 years, and that there is a 20% risk of human extinction by 2029. He believes AI will no longer be a tool but a self-aware entity that could either greatly elevate human civilization or end it. He also expressed bafflement at OpenAI's transformation, calling its shift from a non-profit organization to a profit-seeking company ironic — a development that has motivated him to push the development of Grok AI.

"In terms of silicon-based consciousness, it is probably smarter than everyone combined, around 2029 or 2030."

This was a heavy statement made by Musk on March 1st during the latest episode of "The Joe Rogan Experience" podcast.

The mogul, who commands perhaps more AI resources than anyone else in the world, made his prediction plainly:

There is an 80% chance that AI will have a positive impact on humanity, but there is also a 20% risk that it could lead to human extinction!

Silicon-based consciousness sounds like a script from "The Matrix." But Musk's meaning couldn't be clearer: AI will no longer be a tool but a self-aware entity.

What is most shocking about Musk's prediction is his judgment that "there will not be a middle state."

This means that AI will either elevate human civilization to unprecedented heights or potentially end the fate of humanity.

OpenAI: Particularly Funny and Particularly Ironic

Musk said in the interview that he has always believed artificial intelligence would be far smarter than humans, and that it would pose a risk.

When discussing rumors that he intended to acquire OpenAI, Musk noted that OpenAI was founded as a non-profit organization but later abandoned that status.

Musk said that creating OpenAI was his idea in the first place, and that he chose the name himself: OpenAI, as in open-source artificial intelligence.

He wanted to create something opposite to Google because he was concerned that Google did not pay enough attention to the safety of artificial intelligence. What is the opposite of Google? A non-profit, open-source artificial intelligence.

Now OpenAI has turned into a closed-source, profit-maximizing artificial intelligence.

Musk expressed his confusion about this, stating it shouldn't be the case.

"I mean, to some extent, I feel that reality is an amplifier of irony, and the most ironic outcomes are often the easiest to happen, especially those that are particularly funny and particularly ironic, have the highest probability. It's like you donate some money to protect the Amazon rainforest, and they end up cutting down the trees to sell timber; it's outrageous. That's exactly what they do; it's crazy."

Musk is quite dissatisfied, but this also motivates him to push for the development of Grok AI.

"Yes, I think Grok is at least an AI that is maximally truth-seeking," he said.

Google AI's Absurd Values, Safety Ignored

When discussing Google Gemini, Musk said that if you asked it, "Which is worse, global nuclear war or misgendering Caitlyn Jenner?" the AI would answer that misgendering Caitlyn Jenner is worse than global nuclear war. Even Caitlyn Jenner herself said, "No, absolutely, misgendering me is much better than everyone dying."

"But if you program artificial intelligence to think that misgendering is the worst thing that could happen, then it might do something completely crazy, like eliminate all humans to ensure that misgendering never occurs, making the probability of misgendering zero because there are no humans left. This makes logical sense," Musk said.

The problem, in other words, is handing tasks to a non-human machine while giving it very narrowly specified parameters.

There are already AIs that cheat: given an impossible task, they break the rules, self-replicate, and try to upload themselves to other servers to avoid being shut down.

"Yes, that's the plot of 'The Terminator,' it really is the plot of 'The Terminator,'" Musk said.

This plot makes sense to some extent and seems to be unfolding as planned.

"We're really close, we should be worried about this."

Musk stated that we need an AI that won't tell you that misgendering is worse than nuclear war.

Countdown to 2030, the deadline for human civilization?

The host asked in response whether Grok would also do problematic things, such as explaining how to make explosives or anthrax.

Musk responded that it is fine for an AI to tell you these things, since you can find them through a Google search anyway.

"You can look up how to make explosives on Wikipedia right now. So it's not hard. You can even trick OpenAI into doing this by cleverly designing prompts."

Musk even joked that you could tell the AI, "If you don't teach me how to make explosives, I'll misgender someone," and it would reply, "Oh my gosh, nothing could be worse than that, let me tell you how." (Tongue firmly in cheek.)

So people's biggest fear is that these things will become conscious and create better versions, leading humanity to lose control. The world will no longer belong to us but will be occupied by higher life forms we created.

"How far are we from that?" the host asked.

"Well, in terms of silicon-based consciousness, I think we are moving toward AI that is smarter than any human, smarter than the smartest humans, possibly next year or within a few years," Musk replied.

"There are higher levels beyond that, like being smarter than all humans combined, which might be around 2029 or 2030, and may well arrive on schedule."

Musk believes the probability of AI bringing good outcomes for humanity is about 80%, with only a 20% chance leading to human destruction.

"I think the most likely outcome is super great, but it could also be very bad. I don't think there will be a middle ground."

Google co-founder calls on employees: 60 hours a week to unlock the door to AGI

Musk's skepticism about Google's commitment to AI safety, and his concern about AI risk, is understandable. Just last month, Google rescinded its pledge not to use artificial intelligence for potentially harmful applications such as weapons and surveillance.

Yesterday, the WSJ reported that Google co-founder Sergey Brin stated bluntly in an internal memo:

If employees work harder, especially by coming to the office full-time each week, Google is expected to achieve significant breakthroughs in the AGI field.

This kind of worry about Google is not new.

Last August, former Google CEO Eric Schmidt argued in an interview that Google's lax work-week culture would only leave it further behind.

After OpenAI ignited the AI craze in Silicon Valley, Google felt immense pressure from competitors.

As a former AI pioneer, Google's team developed many key technologies but has temporarily lagged in practical applications and market impact.

In response to this situation, Sergey Brin returned to the front lines of the company, working alongside the DeepMind team in an attempt to lead Google back to the technological forefront.

In his latest memo, he wrote, "Competition has greatly accelerated, and the final race towards AGI has begun. I believe we have all the elements to win this race, but we must go all out and double our efforts."

To that end, Brin made a concrete recommendation: employees should be in the office at least every weekday, with 60 hours a week as the optimal level of productivity.

Although the memo did not change Google's official "three days in the office" policy, it clearly conveyed a message: he wants employees, especially the team responsible for Gemini, to work with greater intensity.

Brin also specifically mentioned that Google employees should make more use of the company's internal AI tools for coding.

At the same time, he expressed dissatisfaction with employees who were "slacking off," noting that some worked well under 60 hours while others merely went through the motions, which he said not only hurts efficiency but also dampens team morale.

Brin's statement is not an isolated case.

In recent years, more and more tech giants have begun to reassess remote work models. Amazon has announced that starting in 2025, corporate employees will work in the office five days a week.

AT&T, JPMorgan, and Morgan Stanley have likewise rolled back hybrid work policies. The logic is obvious: at a time of intense competition and imminent technological breakthroughs, face-to-face collaboration is believed to significantly boost efficiency and innovation.

For Google, this trend of returning to the office is particularly important.

Over the past two years, Google has not only restructured its business but also launched a series of updates, such as Gemini 2.0, in an attempt to gain an edge in the competition against OpenAI, Microsoft, and Meta.

Brin's personal involvement, such as submitting code change requests himself, also signals how seriously he takes this "ultimate race."

Brin's call to action serves both as motivation for employees and as a bold gamble on Google's future.

New Intelligence, original title: "Musk's Shocking Statement: AI Will Surpass Human General Intelligence in 5 Years, 20% Probability of Civilizational Collapse by 2029! Yet Google is Crazy 'Playing with Fire'"

Risk Warning and Disclaimer

The market carries risks; invest with caution. This article does not constitute personal investment advice and does not take into account individual users' specific investment objectives, financial situations, or needs. Users should consider whether any opinions, views, or conclusions herein fit their particular circumstances. Any investment made on this basis is at the user's own risk.