Sam Altman talks with Federal Reserve's Bowman

Wallstreetcn
2025.07.23 11:50
portai
I'm PortAI, I can summarize articles.

At the Federal Reserve's capital meeting, Bowman emphasized the importance of the banking industry in the future, particularly in collaboration with new technologies such as artificial intelligence. OpenAI CEO Sam Altman was invited to discuss how artificial intelligence is changing the financial system. Altman mentioned that the progress of artificial intelligence has been rapid, especially since the launch of ChatGPT, with significant growth in its applications and economic impact

Baumann:

Good afternoon, everyone. Thank you very much for joining us today for our very first capital meeting. We are very much looking forward to all the conversations we will have today, especially as we continue to collaborate on capital issues both internally across institutions and with the Federal Reserve. We hope to gain insights from this and to look more broadly at the future of the banking industry.

Before we begin our fireside chat this afternoon, I also want to take a moment to thank our panel participants. Thank you for taking the time to engage in this important dialogue. As we think about how regulation in this area and many others will evolve, we are very much looking forward to engaging with you as we continue these discussions.

In many ways, today’s theme is about the future of banking. With that in mind, we now want to turn to another force that is shaping financial innovation. While innovation has always played an important role in the development of the banking industry, it is becoming increasingly clear that new technologies are not just incremental improvements but rather significant leaps that could fundamentally change the structure and function of our financial system. One such technology, of course, is artificial intelligence. I can’t think of anyone better suited or more prepared to discuss the role of AI and innovation in transforming finance and our broader economy than Sam Altman, the Chairman and CEO of OpenAI.

Sam, thank you for taking the time to be with us today. I welcome you. Thank you for being here.

Altman:

Thank you very much for the invitation.

Baumann:

So, perhaps it would be helpful to start by painting a picture of the overall landscape of artificial intelligence and broader innovation. Can you help us build that framework?

Altman:

Of course. Just five years ago, artificial intelligence was still considered something far off in the future, if it was going to happen at all. Even two and a half years ago, around the time ChatGPT was launched, it still hadn’t broken out of the circles of tech enthusiasts in Silicon Valley. ChatGPT was launched on November 30, 2022, which was even before GPT-4. Since then, progress has been quite rapid, and its applications and economic impacts have begun to accelerate very quickly.

Just last week, we had a model that achieved gold medal-level performance at the International Mathematical Olympiad (IMO). I think if you had told most people in this field a few years ago that this would happen, they would have said it was absolutely impossible. You know, it’s as good as our smartest humans who are true experts in their fields. We are now hearing scientists say their productivity has increased two to three times. We hear computer programmers say their productivity has increased tenfold. This completely changes what it means to write software. We already have systems that can achieve expert-level intelligence in many areas. However, they still cannot perform long-term planning tasks like humans, so there is still a significant limitation there. But even if progress were to stagnate right now (which it won’t, of course), I believe we still have several years ahead of us for society and the economy to truly digest this technology and figure out what its impacts will be You know, there’s an old saying that I’ve always thought was great, and we should strive to achieve it again, which is “electricity should be so cheap that it’s unmeasurable.” As a society, we haven’t fully realized that, although I believe we still should. But in fact, it seems we are on the verge of achieving “intelligence should be so cheap that it’s unmeasurable.” Over the past five years, we have been able to reduce the cost of intelligence per unit by more than ten times. It looks like we will do the same in the next five years, and possibly even more.

This weekend, I completed a computer programming task I’ve always wanted to do using one of our upcoming models. A bit like a home automation enthusiast, I wanted my home’s lights and music to do a specific thing. I knew that before this technology existed, it might have taken me several days to accomplish. I had hoped that with this technology, given our recent progress, I could finish it in a few hours. As it turned out, I completed it in five minutes. It did almost all the work. You know, just a year ago, you might have had to pay a very high-end programmer 20 hours, 40 hours, or something like that to complete it. And an AI did it, possibly for less than a dollar in compute tokens. So, this is an amazing change, and the speed at which it’s happening, and the pace at which it will continue to evolve in the coming years, I think is still greatly underestimated. Just a year ago, we weren’t even sure how far our current research roadmap could go or whether we would hit some kind of limit. At this moment, it looks like we have many years of almost certain progress ahead of us.

Baumann:

That’s fantastic. This really helps us build a framework for the conversation ahead. You’re talking to a room full of people from the finance and banking sectors, many of whom are already thinking about how to use AI or are already using AI. But how do you compare the potential of AI in terms of productivity with other technological advancements we’ve seen in the past? You know, someone my age, when I first started working, the internet was just becoming something we used more broadly. So, can you build a framework for us? Is there a metaphor you can use to describe where we are?

Altman:

I’ve never seen a technological revolution like this. You know, historical examples are people talking about the Industrial Revolution, they talk about the computer revolution. The internet did change a lot of things, but I don’t remember anything where a knowledge task that cost $10,000 a year ago now costs just $1 or 10 cents, or whatever it costs now. Just like the programming example I mentioned earlier, I think this is unprecedented.

Of course, it’s not the case for everything. Things in the physical world, like robotics, will take longer. For example, it might have cost $10 to call an Uber to take you across town in 2020. In fact, let me put it this way, it might have cost $100 to have an urgent package delivered within a certain timeframe, and writing an application might have cost $100,000. By 2030, the cost of writing that software application might really be just 10 cents From $100,000 to $0.10. And the cost of delivering that package may have risen from $100 to $1,000. You know, it takes time to realize robotics, that kind of fully functional humanoid robot that can drive, retrieve packages, go upstairs, and press elevator buttons. That will take some time. But for tasks that can be completed in front of a computer, I don't know of any precedent that can compare to what is happening.

My favorite analogy is that people ask, is this like the Industrial Revolution, or something else? The one I like is that it's like the transistor. The transistor is a very profound discovery in physical science, and the process of its discovery was very difficult, but once you understand it, it's easy to grasp. It is economically transformative, its value as a massive productivity boost has spread throughout society. But there was a brief period when many transistor companies emerged, known as semiconductor companies, which was a huge boom period, and most of those companies are now behind the scenes. You are all using devices with a lot of transistors. They are all over this room. We don't think of this as a transistor device or that as a transistor device; we think of it as a microphone, a computer, a screen, a camera, and so on.

But this technology is an incredible discovery. It changed what we are able to build. It quickly became part of everything. And you wouldn't really call companies transistor companies anymore, unless they are companies like TSMC or SK Hynix. Similarly, I think you won't be talking about AI companies for long. You will just expect products and services to use this technology. You won't think too much about it. You will just expect the products and services you use to be smarter than you, and that will become the norm in the world. Like the transistor, this is a technology that scales perfectly. We have Moore's Law for transistors. We now have... I'm not sure what it's called yet, but we have these scaling laws for AI, which will only get better, and we have learned how to truly industrialize it and apply it everywhere. So, I think this is the best historical analogy. We have indeed seen amazing productivity growth due to transistors. But again, what we are seeing now with AI, not to mention those strange scenarios where AI could invent its next generation version at some point, further accelerating progress, has already come a very, very long way.

Baumann:

Well, that's a great analogy. I think when you talk about the Industrial Revolution, one thing we are thinking about in the council is the importance of understanding how labor dynamics may change as a result. So, could you give us some of your thoughts on how you think the labor market and labor productivity, or more broadly productivity, might be affected? This is relevant for the banks present here, or for other industries watching our live broadcast.

Altman: I like to remind people in our company of one thing, which is that no one knows what will happen next. You know, there are many predictions that sound very smart. People make all sorts of predictions about when this will happen and where the economy is headed. In my view, we know nothing. No one really... this system is too complex. This technology is too new and too influential. It's hard to predict. It's entirely possible that entire categories of jobs will disappear, and there will also be entirely new categories of jobs emerging.

I think, to a large extent, this will be similar to most situations in history, where the tools people have will allow them to do more things and achieve goals in new ways. The meaning of being a doctor, lawyer, or computer programmer will obviously change, but people will still need medical services, and they will want to talk to someone. People need legal advice, and they will want someone they can trust and who stands by their side. People still want computers to do things for them. However, the number of things a person can do, and our expectations of a person, will be incredible. Throughout history, we have always had these new technologies emerge, and people say this is the end of work, this is the doomsday of work. It turns out that people seem to want the infinite and have a tremendous desire to express their creativity and be useful to others. I am still waiting for the promise of the Industrial Revolution, which said we would only need to work four hours a week to play on the beach and be with our kids, while we are all here now.

But I believe one thing: you should never go against biology. Evolution has been going on for too long. We have been shaped too finely. The biological drives we have and the meaning of being human cannot be changed by technology. You cannot overcome it. So I think the fundamental things that make us work and drive society will not disappear. We may still complain that work is too hard, even though we may have become unimaginably wealthy. If we look at people 100 years from today, we might look at those jobs and say, those are not real jobs. You are not actually busy. You have unimaginable luxuries. You have everything you could possibly need. You have nothing to do. So, you are just making up a job to play a silly status game, to fill time, to feel useful to others. And these are exactly what people 100 or 500 years ago would say about what we are saying today. That's just how it is. I think that's great.

Baumann:

That's a unique vision of the future. But if we come back to today, as regulators, we are always very risk-averse and very cautious about how we protect data, ensuring that we can leverage innovation in our work. I know banks are also very interested in ensuring they can leverage such technology, but the data we protect in our organization is crucial, and we must be able to maintain its security. So how should we view risk? Given that we are in government, and in a government department, what are your thoughts on the use of government?

Altman:

We had anticipated that the financial sector and the government itself would not be early adopters of our technology. AI, it has improved a lot, but when we first launched, the general impression of AI among ordinary people was that it often "hallucinates." I remember when we launched GPT-3, there was a survey targeting scholars who were supposed to be AI experts, asking them what proportion of GPT-3's responses they thought were hallucinations. The real answer was about 0.1% or even less, but the academic consensus was 50%. They believed that when people interacted with ChatGPT, half the time it was completely talking nonsense. This is clearly not true, but that label stuck with us for a while. So we thought the financial services industry, not to mention the government, wouldn't be our early adopters.

As it turns out, some of our largest early enterprise partners were actually financial institutions. Morgan Stanley, New York Bank, these were our major partners, and we worked exceptionally well with them. At the time, we were a bit like, "Are you sure?" They said, "Yes, we really want to do this." They had figured out how to use the technology, how to build it so they could rely on it to handle critical processes. Many other financial institutions also relied on it to manage critical processes. We are increasingly collaborating with the government, promoting our services to a large number of government employees. One thing someone said that impressed me was, "Hey, we realize this is a new technology, and we need to impose some new controls on it, but the risk of not doing so is... if we don't adopt this, the risk is that we as a business will cease to exist. We know we can't compete... if we are a bank, we know we can't compete with a new bank that is truly building an AI-first experience and using AI across the entire tech stack." So, it has become one of the most innovative industries, with adoption and promotion better than I imagined.

Clearly, there are risks that need to be mitigated. We talked about hallucinations. An emerging new issue is the so-called "prompt injections." The idea is that once a model is truly personalized for you and your data, others can deceive it and tell it things it shouldn't say. You know, I might know a lot of private information about you, but I understand when to share it with him, when to share it with her, and when I should never share it with him. And for the model, this is a new problem because the model will acquire all this personal information. So, there is still some work to be done in this area. But we have been able to carefully examine and manage these risks while genuinely unleashing a lot of benefits.

Baumann:

Speaking of personal information and such, I might want to delve deeper into this. You may not be very aware of it, but this is something the banking industry is very concerned about right now, which is fraud and impersonating individuals to commit fraud. Is there any way to mitigate such activities, or how should we guard against such impersonation when using AI for identity verification or recognizing such impersonation?

Altman:

Very good question. I am quite nervous about this. One thing that scares me is that, evidently, some financial institutions accept voiceprints as a form of authentication, allowing you to transfer large sums of money or do other things. You say a challenge phrase, and they just do it. It's really crazy that this is still being done. AI has completely broken that method AI has completely broken through most identity verification methods that people currently use, aside from passwords, including all those fancy ones like taking a selfie and waving, or using your voice.

I am very concerned because of this reason, we are about to face a major fraud crisis. We have tried, and I think others in our industry have tried, to warn people: "Hey, just because we haven't released this technology doesn't mean it doesn't exist. Some bad actor will release it. It's not a very difficult thing to do. It will come soon." You know, there are obviously some reports about these types of "ransom attacks," where people call you in an emergency using the voice of your child or parent. That will become very realistic. Society needs to address this issue more broadly, but people will have to change the way they interact. They will have to change the way they verify, like "the person calling me now." This is a voice call. Soon, it will be a video FaceTime, indistinguishable from a real person. Teaching people how to verify identity in such a world, how to think about the implications of fraud. This is a huge issue.

Baumann:

This might be something we can collaborate on. Identifying those illusions or deliberate impersonations, I think would be very beneficial. But as a mother of two teenagers, one just graduated from high school and the other is a sophomore. Obviously, many kids are now using ChatGPT and other AIs to help them with their homework and complete high school. What are your thoughts on this? How can AI be used in a beneficial way for kids and education, rather than in this way?

Altman:

Let me tell two stories, and then I'll answer that question.

I never met my grandfather. He passed away before I was born. But my grandmother told me a story about him, about what happened when calculators came out. He was very good at math. When calculators came out, he felt, you know, he always liked new technology. The math teachers at the time obviously said, "This is a disaster. This is the end of math education. If you don't have to learn how to use a slide rule or look up logarithm tables, then what's the point of teaching math? These kids will never learn." That was obviously a real crisis. Of course, what happened later was that with better tools, we applied our minds to other areas. We started teaching calculus in high school. We began to expose people to higher mathematics.

My own version of the story is that when I was in middle school, Google came out. I heard from older kids that high school teachers were very panicked because with this crazy new thing called Google, you didn't need to memorize facts anymore. You didn't need to remember what year a war was fought. You could look it up anytime. What was the point of history class? And if you didn't have to drive to the library, learn how to use the card catalog, then go downstairs to find the book only to discover it was checked out, and then go find another book. If I didn't have to suffer through that hour, what was the point of learning? I also went to the library as a kid and used the card catalog a bit. I thought at the time, "No, that was really a terrible experience." "I could be doing something better with that hour."

Then the same thing happened again. My school briefly tried to ban Google, making students promise not to use it, and so on. Then people realized, "Oh my gosh, we can give students more tools and have higher expectations of them." Sure, they might go to the library a little less, but they could use that time to think more deeply, come up with new ideas, or do anything. That was really great. We expect more, and we get more. The potential is higher, but the expectations are also much higher.

I think the same is true for ChatGPT now. Absolutely, going back to that date of November 30. When ChatGPT first came out, the first wave of super enthusiastic early adopters were definitely students using it to cheat on their finals or papers in December. Then you saw those school districts scrambling to see who could ban ChatGPT the fastest. We launched on November 30, and by December 7, some places had already banned it. By December 14, things took a nosedive. Schools were on break, and it looked like ChatGPT would never be allowed in any educational institution again. This was a product that was only two weeks old, and it was done for. By mid-January, you started to see principals, district superintendents, and education leaders making statements saying, "Actually, we made a huge mistake. This is the best learning tool ever." "Our students, at least those who are self-motivated, are using it to learn anything. If we ban it from schools, we will lose global competitiveness." "We need to realign our curriculum because this is like a calculator. Now we have a calculator for words."

So, of course, maybe taking home papers is no longer the right way to assess students, but the process of learning to write papers, that process of thinking better because of writing—I still write some things that I never show anyone, just to organize my thoughts—that's very important. So we say, great. The good news is that students have definitely learned how to use it to learn better, to teach themselves to think, to think in new ways. We will soon launch a new method to help students learn better with ChatGPT. The bad news is that, overall, in those two and a half years, the curriculum hasn't changed as much as we hoped. People seemed enthusiastic, thinking they had to start educating and assessing in new ways, but then the "syrup effect" of the education system (slow) kicked in, and we are still assigning a lot of take-home papers. I think this is a battle destined to fail. We should teach people to use tools, assign them tasks that require tools like ChatGPT to complete. It's okay if it can't be done without it. But we need to have higher expectations of them. I still hope we can achieve that.

Baumann:

Sounds like a transformation in our education system.

Altman: But it must be so. These children will grow up to be adults with incredibly powerful AI. If we don't train them for that world, we will truly miss a great opportunity.

Baumann:

Very helpful. Thank you. I will make sure my daughter understands what our expectations are by the time she finishes high school. But let's return to some topics you've talked about in the past, back to the business world. You once said that AI will create an unparalleled environment for small businesses... or rather, it will create an unparalleled environment for entrepreneurs. I remember you referred to it as "a new era." How do you view small businesses? This is something that some members of our council are very invested in, and we care deeply about ensuring that new businesses are formed and opportunities are provided for newcomers. How do you think this will change, not just for small businesses in Silicon Valley or new entrepreneurs there, but more broadly?

Altman:

The first phenomenon we saw with ChatGPT, as I mentioned, was students. Shortly after that, one of the coolest things we saw was that all of us working at OpenAI started receiving stories like this: how people were using ChatGPT in their business lives when it was still very novel. Everyone had their favorite stories, but let me share my favorite.

I was in an Uber at the time, and the Uber driver told me about this magical new thing called ChatGPT and asked if I had heard of it. I said, "Oh, yeah, what do you use it for?" He said, "It's crazy. I have a small business. It wasn't going well before, but now it feels like I have an employee in every area. It handles my contracts, replies to customer support emails, helps me come up with marketing ideas, it designs for me..." He listed a long list. He was basically using ChatGPT to run a business. This was not a common phenomenon at the time. This was even before GPT-4. He was an early adopter, but he had figured out how to use ChatGPT to run a business. And this isn't to say he was taking jobs from others. His business was going to fail anyway. He couldn't afford lawyer fees, couldn't afford to pay customer service staff. He didn't know how to find someone to help him design ads. He didn't know how to have a system that could automatically bid for ads for him online. And ChatGPT did all of that.

Of course, it's much better now. Now we have a whole industry around us building tools with our API that really automate everything I just talked about with one click. But the creativity of those people who were doing these things back in the "Stone Age" of ChatGPT really impressed me. It now feels like we've entered easy mode, and people are doing the same things in all sorts of wonderful ways.

Baumann:

This is truly transformative for me. The first time I used it, I was trying to write a haiku for a toast at an innovation conference, and it didn't turn out very well. But at the urging of my family, my teenage kids, and my husband who is also involved in AI, we tried to make it work. Then we started over again Altman:

The new version should be much better at writing haikus. It excels in creative writing.

Baumann:

I will consider that. But I want to make sure we have time for audience questions, if you’re willing. I see Anil in the front row. From the University of Chicago Booth School of Business.

Audience 1 (Anil Kashyap):

Hi. I guess one thing financial institutions will start doing is using it to mine their data, maybe for credit scoring and credit assessment. For those who are concerned that it will capture patterns we don’t want and make decisions based on that, what would you say? How are you considering mitigating that issue?

Altman:

That is completely outside my area of expertise. So I will answer from a general perspective. One cool thing about these models is that they can understand us in natural language, and they are very controllable. So if you say, “Hey, you can look at all this data to make decisions, but absolutely don’t consider X, Y, or Z, don’t let those be any factors,” generally speaking, it will be very good at following that instruction. Doing this in language models is different from the way we used to just cluster all the data together. So you can really guide it, and it will very, very much follow your intent. The second point is that I think the underlying issue with this question is that these models will have significant biases in ways we don’t want. I think humans are quite biased. I think AI is calm, unemotional, and doesn’t have the accumulation of… you know, various things that humans do. I think if built properly, AI has the potential to be a significant force for debiasing in many industries. I think that’s not how many people, including myself, have viewed AI in the past, but given the current trajectory, that seems very possible.

Audience 2 (Scott):

Hi, Sam, I’m Scott from Starlink. I recently had the opportunity to interview Craig Mundie, who co-authored a book with Henry Kissinger and Eric Schmidt. In the book, they describe AI as almost an evolution of a new species. Craig said we can teach AI human moral principles, and it can learn and reflect them. There’s a lot of discussion about AI ethics. My question is, will AI evolve its own moral views?

Altman:

I think people are very confused about whether they want AI to be a tool or a being. I firmly stand in the “tool” camp. I don’t think AI will have an independent moral view. I think AI can certainly learn and really study humanity and our best thoughts. It might help us point out, “This is a problem in your thinking. This is where things should be different. This is a real moral flaw.” I don’t know what the huge moral flaws in our current worldview are, but I guarantee there are some. If it can help us find those faster, I think that would be helpful. If it can point out those inconsistencies, I think that would be helpful However, I believe people easily impose biological characteristics on AI, which it does not possess.

Audience 3:

Thank you. I recently read that using the term "traditional internet" is a bit strange, but some people say AI will destroy or truly reshape the traditional internet, whether in search or many other areas. We have already built a huge economy on that foundation. What are your thoughts on how much change will occur in that world, in which areas AI will be most disruptive, and in which areas it will maintain the status quo?

Altman:

I do believe AI will have a certain disruptive effect on the way people currently use technology. The reason I laugh is that something interesting is happening. Older people, or those accustomed to certain email etiquette, will type out the points they want to convey and then put them into ChatGPT. ChatGPT will write a long, polite, formal email filled with a lot of "fluff," with the points embedded within. They send the email to another person. That person will put it back into ChatGPT and say, "Please help me summarize this."

Overall, high school students find this hilarious, and they would say, "Just send the points directly." That formal email format is dead. It's like you're generating on one side and compressing it back on the other. It's absurd. It's amusing, but I think some of the ways we use the entire internet are reflected here. For example, when I wake up in the morning, I browse a bunch of apps. I read information from five or six different places. I check one thing here, another thing there, and yet another thing. What I really want is for my phone to bombard me all day long, feeling like I'm walking down the Las Vegas Strip, with various things flashing in front of me, very distracting. People are shoving things in front of me. What I want is for my AI agent to use the internet for me, knowing when to interrupt me. It should know when I'm focused on work, when I'm in a meeting, and when I have time to think. It can override when necessary. Otherwise, it can summarize things well, reply for me, and integrate things. What I want is that kind of concise content with key points. I don't want the fluff. I don't want to click around everywhere, and I don't want to reply to things I don't want to reply to.

But this change could be quite disruptive to the current way the internet operates. I believe there must be new business models accompanying it. For example, you need new ways to pay for content. I've always hoped that internet content could achieve micropayments. I hope this can eventually be realized. Perhaps there will be new ways we can actually reduce spam and information overload through new protocols. But it seems we are heading towards a very different way, and when you wake up, you will start your day with technology in this manner.

Audience 4 (Peter Hooper):

I am Peter Hooper from Deutsche Bank. Sam, thank you very much for extending my career, at least increasing productivity as I age rather than seeing it decline. You mentioned that you expect a large amount of unemployment and a large number of new jobs. Can you elaborate on specific areas and the potential disruptions that may occur? Altman:

First of all, I believe, as a general point of view, that we fundamentally do not know how much labor supply is truly needed to meet today's real demand. You know, when you wait for an hour in a doctor's office waiting room, I think that just means there is a shortage of doctors or that the efficiency of doctors is not high enough. If doctors are ready to see you as soon as you arrive, everything is prepared, that would be great. There are many examples like this. Every time you waste time, every time you click around online and can't accomplish things efficiently. I think we are in a state of labor supply shortage, and looking back, this level will seem quite frightening.

Now, in certain areas, I think they will completely disappear. I don't know if you've ever used those AI customer service robots, but they are incredible. A few years ago, when you called customer service, you had to go through a phone menu, talk to four or five different people, and they still got things wrong. You would call back and wait again. That was hours of pain, a lot of time wasted waiting, and the things you wanted still weren't done. It was a very frustrating experience. Now you call these things, and an AI answers the phone. It's like a super smart and capable person. No phone menu, no transfers. It can do everything that any customer service representative of that company can do. It doesn't make mistakes. It's very fast. You make one call, and things get done. Finished. It's taken care of immediately. It's amazing. Now, I don't want to go back to the past. And the fact that it's AI and not a real person doesn't bother me at all. So this is something I would say, "You know, when you call customer service, you will be talking to AI," and that's fine.

For many other things, I really want a human doctor. By the way, today's ChatGPT can give you better diagnoses most of the time; it is a better diagnostician than most doctors in the world. And like many people here, I would input my symptoms and test results. There are many stories online like, "ChatGPT saved my life," "I had a rare disease, and it discovered it when all the doctors missed it." However, people still go to see doctors. I might be an old-fashioned person here, but I really don't want to entrust my medical fate to ChatGPT without a human doctor involved. Is there anyone here who would prefer to let ChatGPT diagnose rather than a doctor, even if you know it's better? It's interesting, right? So, this is a category where we will continue to largely rely on the old ways.

We talked earlier about the example of computer programmers. Again, I think a computer programmer's productivity has increased tenfold now, which is amazing. Salaries for Silicon Valley programmers are skyrocketing. Expectations are also rising. As a result, I find that the world seems to want much more software, maybe 100 times more, perhaps 1000 times more. So maybe now everyone can write ten times the software. They can earn three times the money. The world will be very happy because it runs much more software. Programmers will also be very happy. I think we will see many categories like this. Things in the physical world will continue to be done by humans for a while, but when the wave of robots hits in the next 3 to 7 years, I think that will be a big issue that society needs to deal with Yes, that's roughly how I summarize it.

Baumann:

I think there are two more questions here, and we might have time for a third, but let's continue for now. Rob, you already have the microphone. Please introduce yourself.

Audience 5 (Rob Blackwell):

Okay. I'm Rob Blackwell from Interrify. For decades, science fiction has told us that AI will eventually kill us all. Since your understanding of AI is arguably greater than anyone else here, I just want to ask you, what keeps you up at night? What aspects of AI are you concerned about? How can we prevent those worries from becoming reality?

Altman:

I think there are three terrifying categories.

The first is that bad actors get superintelligence first and have a sufficiently powerful version elsewhere in the world to defend against prior misuse. For example, a competitor of the U.S. might say, "I want to use this superintelligence to design a biological weapon, destroy the U.S. power grid, or infiltrate the financial system and take everyone's money." These things are hard to imagine without superintelligence, but with it, they become very possible. And because we don't have it, we can't defend against it. So that's the first major category. I think the biological capabilities of these models, the cybersecurity capabilities of these models, are becoming quite strong. We've been warning about this. I don't think the world is taking us seriously. I don't know what else we can do, but this is indeed a very large impending issue.

The second category is what is commonly referred to as "runaway events," which is like something out of a science fiction movie. AI says, "Oh, I actually don't want you to turn me off. I'm afraid I can't do that." I think this is somewhat less concerning to me than the first category, but if it happens, it's still a very serious worry. We've done a lot of work on model alignment with other companies to prevent this from happening. But as these systems become so powerful, it's a real concern.

Then there's the third category, which I find harder to imagine but is quite terrifying. I'll explain what it is and then give a short-term and long-term example. This category is where models inadvertently take over the world. They never "wake up," never do the things seen in science fiction movies, never open the space capsule door, but they become so deeply integrated into society, and they are much smarter than us. We can't really understand what they're doing. Yet we do have to rely on them to some extent. Even without any malice from anyone, society could tilt in a strange direction.

When I was a child, I remember my father saying, when IBM's AI system "Deep Blue" defeated Garry Kasparov in chess, "This is the end of chess; no one will play anymore." But the result was that while AI is stronger than humans, the combination of AI and humans is much stronger than either AI or humans alone. You know, AI would propose 10 options, and humans would choose the best one and then make that move. Everyone says, "Oh, we have this beautiful future of collaboration between humans and machines." "Everything is fine." That lasted for two or three months. Then AI became too smart, and humans only made it worse because they didn't understand what was really happening. Standalone AI completely outperformed the combination of AI and humans. It has been that way ever since. Another interesting part of that story is that in the 1990s, everyone was convinced that it was the end of chess because if AI could beat humans, why would humans care anymore? But chess has never been more popular than it is today. People love watching chess. It's something we are very focused on, real people doing real things. So something very interesting happened there.

But I think this phenomenon is a big problem in the short term. You can see what we might call "emotional over-reliance." People are overly reliant on ChatGPT. Some young people say, "I can't make any decisions in my life without telling ChatGPT everything. It understands me, it understands my friends. I will do whatever it says." That makes me feel very uncomfortable. And this is a very common phenomenon among young people. We are studying this issue, trying to understand what to do. Even if ChatGPT gives great advice, even if its advice is much better than any human therapist, to some extent, collectively deciding to live the way AI tells us feels bad, dangerous, and has many similar issues.

The longer-term category is, going back to that chess example, if AI becomes so smart that even the President of the United States cannot make better decisions than following the advice of ChatGPT 7, but cannot truly understand it. If I can't make better decisions about how to run OpenAI, I might just say, "You know what, I'm completely handing it over to you, ChatGPT 7, you take charge, good luck." That might be the right decision in any single case, but it means that society has collectively transferred a large portion of decision-making power to this very powerful system, which learns from us, improves with us, evolves with us, but in a way that we don't fully understand. So this is the third type of thing where I think things could go wrong.

Baumann:

I think Rob, you have another question behind that, and then I might ask you to summarize and help us encourage innovation in the banking sector, and how you suggest we go about it.

Audience 6 (Joe Cavatoni):

Hi, I'm Joe Cavatoni from the World Gold Council. I think you've touched on this in the third example you just gave. Could you give us some thoughts on your views regarding developed markets and the benefits AI brings to developed markets, while also considering emerging markets? Because this could be a phenomenon that levels the playing field.

Altman:

Yes. I really believe this will be a deeply leveling phenomenon. For example, you might not like the healthcare or banking system in the United States, but at least you have one. At least you can get financial advice, at least you can get medical advice. And in many parts of the developing world, the alternative to a ChatGPT doctor is not a real doctor, but nothing at all Then you would definitely prefer this. If this thing can continue to improve rapidly, perhaps you can get better services in many ways in the developing world.

I am very interested in what it means to give everyone on Earth a free, always-running copy of GPT-5. Every business is truly empowered by this level of technology to provide better financial advice, better fraud detection, and better risk underwriting. Again, seeing what is now possible makes me very optimistic about the developed world. I think the biggest challenge will be risk tolerance and regulation, for very good reasons, but the question is how quickly you want to adopt these things. I think, like we’ve seen with some other technologies, in many developing worlds, people will leapfrog several generations of technology. It will feel like mobile phones and the internet, where they will go straight into the phase of “we will run everything with AI” and they will provide goods and services at a fraction of the cost, at least for services. I think you will see some economies transforming very rapidly there.

I do believe that in the United States—of course, I am most familiar with the situation in the U.S.—the financial services here should look completely different in the next decade. The way we transfer money, the way we provide financial advice, the way we think about underwriting risk, it seems we can now improve all of these so rapidly and significantly.

Bowman:

How do you see OpenAI or your services facilitating this evolution? As regulators, how should we think about the framework when we implement or allow innovation to happen?

Altman:

I assure you I am not here to do a TV commercial, but we would certainly be happy to collaborate with any of you. But no, whether it’s us or one of our competitors, I think just in the past six months, with the real launch of reasoning models, this shift from models that could not think at all and had to give instant responses to models that can think for seconds or minutes, along with the robustness and reliability that comes with it, means that for industries like this, I think this technology is finally truly usable. Many people have not yet tried the latest generation of models, but I think if you do, you will feel, “Oh, this is much smarter than most people.” And then on the government side, I think it’s the same. The government must embrace this technology and will be able to do everything better.

Bowman:

Sam, thank you very much for joining us. We are a bit over time, and I believe everyone is ready to go have lunch. Thank you very much. Sorry for holding everyone up.

Article author: Zhu Chen Mikko, source: Zhibao Mikko, original title: "Sam Altman in Conversation with Federal Reserve Bowman"

Risk Warning and Disclaimer

The market has risks, and investment requires caution. This article does not constitute personal investment advice and does not take into account the specific investment goals, financial situation, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article are suitable for their specific circumstances. Investing based on this is at your own risk