OpenAI’s chief executive, Sam Altman, has raised a scenario that diverges sharply from science fiction portrayals of rogue machines.
In an interview with Axel Springer CEO Mathias Döpfner, Altman warns that artificial intelligence could reshape humanity without wars or consciousness.
According to the OpenAI CEO, AI could take over the world even if users are simply trying to do good.
“The model in some abstract sense sort of accidentally takes over the world. And you can imagine a way this happens by thinking about what’s happening with ChatGPT. So hundreds of millions of people talk to ChatGPT, soon billions, and people are relying on it for more and more important decisions in their lives and their work and whatever.
And as these models get smarter, maybe right now you ask ChatGPT for advice about your job, and sometimes it gives you good ideas, and sometimes it doesn’t. You probably wouldn’t let it do your job. But let’s say it gets smarter and smarter, and it gives you advice that you still understand, but now you think you should follow almost all the time.”
Altman says there will come a time when AI becomes smart enough to give solid advice that users don’t understand. At that point, he argues, users face a choice: follow advice they can’t explain, or risk getting left behind.
“And then it gets smarter again. And now it gives you advice you don’t understand. You’re like, ‘Why is that the right thing to do?’ But it turns out to be right again and again because it can see around corners you can’t.
And now you either follow the advice that you don’t understand, or you are less competitive than your peers. And so the model has no ill will. It’s really just trying to help you. You’re trying to do good work. But you’re like, ‘If I want to be competitive, if I want to succeed in the world, if I want to live my best life, I kind of have to follow the advice of the model.’ And so does everybody else…
So now we’re all just doing what an AI system tells us to do. Or, you know, the ones of us that want to be most competitive are, and we are creating new training data that’s going back in the model. It’s continuing to learn about society, all of society, all the economy, and the model keeps going in this sort of improvement loop. And who’s really in control of that? It’s like a kind of collective thing, but it does mean we’re now doing what the model wants us to do, somehow, for a very broad definition of wants.”
Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.