Tucker Carlson wanted to see the “angst-filled” Sam Altman: He wanted to hear him admit he was tormented by the power he holds. After about half an hour of couching his fears in technical language and cautious caveats, the OpenAI CEO finally did.
“I haven’t had a good night’s sleep since ChatGPT launched,” Altman told Carlson. He laughed wryly.
In the wide-ranging interview, the OpenAI CEO described the weight of overseeing a technology that hundreds of millions of people now use daily. It’s less about the Terminator-esque scenarios or rogue robots. Rather, for Altman, it’s the ordinary, almost invisible tweaks and trade-offs his team makes every day: when the model refuses a question, how it frames an answer, when it decides to push back, and when it lets something pass.
Those small design choices, Altman explained, are replicated billions of times across the globe, shaping how people think and act in ways he can’t fully track.
“What I lose sleep over is that very small decisions we make about how a model may behave slightly differently are probably touching hundreds of millions of people,” he said. “That impact is so big.”
One example that weighs heavily: suicide. Altman noted roughly 15,000 people take their lives each week worldwide, and if 10% of them are ChatGPT users, roughly 1,500 people with suicidal thoughts may have spoken to the system—and then killed themselves anyway. (World Health Organization data confirms about 720,000 people per year worldwide take their own lives).
“We probably didn’t save their lives,” he admitted. “Maybe we could have said something better. Maybe we could have been more proactive.”
OpenAI was recently sued by parents who claim ChatGPT encouraged their 16-year-old son, Adam Raine, to kill himself. Altman told Carlson the case was a “tragedy,” and said the platform is now exploring an option under which, if a minor talks seriously to ChatGPT about suicide and the system cannot get in touch with their parents, it would call the authorities.
Altman added it wasn’t a “final position” of OpenAI’s, and that it would come into tension with user privacy.
In countries where assisted suicide is legal, such as Canada or Germany, Altman said he could imagine ChatGPT telling terminally ill, suffering adults that suicide was “in their option space.” But ChatGPT shouldn’t be for or against anything at all, he added.
That trade-off between freedom and safety runs through all of Altman’s thinking. Broadly, he said adult users should be treated “like adults,” with wide latitude to explore ideas. But there are red lines.
“It’s not in society’s interest for ChatGPT to help people build bioweapons,” he said flatly. For him, the hardest questions are the ones in the gray areas, when curiosity blurs into risk.
Carlson pressed him on what moral framework governs those decisions. Altman said the base model reflects “the collective of humanity, good and bad.”
OpenAI then layers on a behavioral code—what he called the “model spec”—informed by philosophers and ethicists, but ultimately decided by him and the board.
“The person you should hold accountable is me,” Altman said. He stressed his aim isn’t to impose his own beliefs but to reflect a “weighted average of humanity’s moral view.”
That, he conceded, is an impossible balance to get perfectly right.
The interview also touched on questions of power. Altman said he once worried AI would concentrate influence in the hands of a few corporations, but now believes widespread adoption has “up-leveled” billions of people, making them more productive and creative. Still, he acknowledged the trajectory could shift, and that vigilance is necessary.
Yet, for all the focus now on jobs or geopolitical effects of his technology, what unsettles Altman most are the unknown unknowns: the subtle, almost imperceptible cultural shifts that spread when millions of people interact with the same system every day. He pointed to something as trivial as ChatGPT’s cadence or overuse of em dashes, which has already seeped into human writing styles. If such quirks can ripple through society, what else might follow?
Altman, grey-haired and often looking down, came across as a Frankenstein-esque character, haunted by the scale of what he has unleashed.
“I have to hold these two simultaneous ideas in my head,” Altman said. “One is, all of this stuff is happening because a big computer, very quickly, is multiplying large numbers in these big, huge matrices together, and those are correlated with words that are being put out one after the other.
“On the other hand, the subjective experience of using that feels like it’s beyond just a really fancy calculator, and it is surprising to me in ways that are beyond what that mathematical reality would seem.”
OpenAI didn’t immediately respond to Fortune’s request for comment.