CEO Portraits

Ilya Sutskever: The Man Who Fired Sam Altman and Started Over

How Ilya Sutskever co-built AlexNet, co-founded OpenAI, tried to oust Sam Altman, and launched Safe Superintelligence — now valued at $32 billion.

Ilya Sutskever, founder of Safe Superintelligence
  • Safe Superintelligence raised $2 billion in 2025 at a $32 billion valuation — with no product, no customers, and no revenue.
  • Ilya Sutskever co-created AlexNet in 2012, the neural network that triggered the modern deep learning revolution.
  • He served as chief scientist at OpenAI for nearly a decade before orchestrating the failed board coup against Sam Altman in November 2023.
  • Sutskever now serves as CEO of SSI after co-founder Daniel Gross departed in mid-2025 to join Meta.

$32 Billion and Zero Products: The Most Expensive Bet in AI

Safe Superintelligence is the strangest company in Silicon Valley. It has no product roadmap. It has no customers. It has no announced release dates. What it has is Ilya Sutskever — and for investors including Andreessen Horowitz, Sequoia Capital, Greenoaks, and DST Global, that is apparently worth $32 billion.

The company’s valuation surged sixfold in under a year, from $5 billion in September 2024 to $32 billion by early 2025. Alphabet and Nvidia have also backed SSI. Google Cloud provides the TPU infrastructure for its research. The mandate is deceptively simple: build superintelligent AI that doesn’t kill anyone.

Sutskever, now in his late thirties, has spent his entire adult life trying to make machines think. His path to running a $32 billion startup with no product began in a country that no longer exists.

Born in Gorky, Raised in Jerusalem, Coded in Toronto

Ilya Sutskever was born in 1986 in Nizhny Novgorod — then called Gorky — in the Soviet Union. When he was five, his family emigrated to Israel and settled in Jerusalem. The move shaped him. Growing up between cultures and languages, he developed an early fascination with computers and mathematics.

At 16, his family relocated again, this time to Canada. Sutskever attended a Toronto high school for exactly one month before being admitted directly into the University of Toronto as a third-year undergraduate. He was, by any measure, not a typical student. He earned a bachelor’s degree in mathematics in 2005, a master’s in computer science in 2007, and a PhD in 2013 — all at Toronto, all under the supervision of Geoffrey Hinton, the godfather of deep learning.

AlexNet, Two GPUs, and the Moment Everything Changed

The PhD years produced the breakthrough that would define a generation. In 2012, Sutskever convinced fellow student Alex Krizhevsky to train a deep convolutional neural network on the ImageNet dataset. The result was AlexNet, trained on two Nvidia GTX 580 GPUs in Krizhevsky’s bedroom at his parents’ house. It achieved a top-5 error rate of 15.3% — a 10.8 percentage point improvement over the previous best.

Researcher Yann LeCun called it “an unequivocal turning point in the history of computer vision.” Hinton himself later summarized it with characteristic dryness: “Ilya thought we should do it, Alex made it work, and I got the Nobel Prize.”

After graduating, Sutskever spent two months at Stanford with Andrew Ng, then joined Google Brain. In 2015, he co-founded OpenAI alongside Sam Altman, Greg Brockman, Elon Musk, and others. As chief scientist, he oversaw the research that led to GPT-2, GPT-3, and ultimately ChatGPT — the product that made AI a household word.

November 17, 2023: The 72 Hours That Broke OpenAI

At noon PST on November 17, 2023, OpenAI’s board fired Sam Altman. The statement said Altman had not been “consistently candid in his communications.” Behind the scenes, the driving force was Sutskever. He had written a 52-page memo accusing Altman of lying, manipulating executives, and fostering internal division. When later asked how long he had been considering the move, Sutskever answered: “At least a year.”

"It was the board doing its duty.”

That’s what Sutskever told colleagues in an all-hands meeting the day after the firing. But the conviction didn’t last. Within 72 hours, 738 of OpenAI’s roughly 770 employees signed a petition demanding Altman’s return. Several senior executives resigned. Microsoft, OpenAI’s largest investor, moved to hire Altman outright. By November 21, Altman was back as CEO with a new board, and Sutskever had publicly expressed regret. The coup had failed. The relationship was irreparable. On May 14, 2024, Sutskever announced he was leaving OpenAI.

SSI: One Goal, One Product, No Distractions

One month after his departure, in June 2024, Sutskever announced Safe Superintelligence Inc. alongside Daniel Gross, who previously led Apple’s AI and search efforts, and Daniel Levy, a former OpenAI researcher. The company set up offices in Palo Alto and Tel Aviv.

The founding statement was one sentence: “Our mission is simple and our name is our mission. We will build safe superintelligence.”

"Superintelligence is a technology that could end human history. We should treat it with the seriousness it deserves.”

SSI raised $1 billion in its first round in September 2024, backed by SV Angel, DST Global, Sequoia, and Andreessen Horowitz, at a $5 billion valuation. By early 2025, it closed a $2 billion round led by Greenoaks at $32 billion. No product demo. No technical paper. No API. Just the reputation of the man who co-built AlexNet, shaped GPT, and tried to stop the person he believed was moving too fast.

From Co-Founder to CEO: Sutskever Takes Full Control

In mid-2025, Daniel Gross departed SSI to join Meta, leaving Sutskever as CEO and Levy as president. The transition was clean. Sutskever posted on X: “We are grateful for his early contributions to the company and wish him well.” The subtext was clearer — SSI is Sutskever’s company now, fully and without compromise.

The team remains small by design. SSI has hired selectively, pulling researchers from Google DeepMind, OpenAI, and academic labs. At least one early engineer has already left to start his own company. The pace is deliberate. Sutskever has argued publicly that AI’s bottleneck is no longer compute — it’s ideas. In a November 2025 interview with Dwarkesh Patel, he declared: “The age of scaling is over. It’s back to the age of research again, just with big computers.”

The Hardest Problem in the History of Science

Sutskever’s bet is that the AI industry has its priorities backwards. Everyone is racing to ship products. He is racing to solve alignment — the problem of ensuring a superintelligent system does what humans actually want. He compares it to nuclear reactor safety: you build the containment before you split the atom.

“AI is going to be both extremely unpredictable and unimaginable.”

Whether SSI delivers on its promise remains entirely unknown. The company has published nothing. It has demonstrated nothing. But Sutskever’s track record — AlexNet, GPT, ChatGPT — is arguably the strongest in the history of artificial intelligence research. He is the rare figure who has both built the revolution and warned that it might destroy everything. For investors betting $32 billion on that paradox, the logic is simple: if anyone can build safe superintelligence, it’s the person who understands most deeply why it’s dangerous. If Dario Amodei built Anthropic on the promise of restraint, Sutskever is building SSI on something even more radical — the promise of patience.

Ilya Sutskever on X | Safe Superintelligence

Tags

#AI #safety #research #startups #leadership
