I finished Empire of AI two weeks ago, and I liked it very much. Karen Hao threads together the personalities, incentives, and consequences of this formative AI era with a clarity that makes the book feel like an instant classic for early-stage AI history.
The opening—Sam Altman’s firing and swift return to OpenAI—sets the tone: a high-stakes collision of mission, money, and governance. From there, Hao builds a people-first narrative. I especially appreciated how she traces Mira Murati’s long, steady positioning—quietly competent, consistently present, and increasingly pivotal. In a field crowded with loud voices, her approach reads like durable leadership.
Hao also sketches Ilya Sutskever as more of an AI philosopher than an operator—someone animated by first principles and long-term risks. That portrait sits alongside OpenAI’s institutional fascination with AGI as a north star, a framing that concentrates effort and capital even as it compresses timelines.
What lingered most for me is the book’s steady drumbeat about safety versus speed. Hao’s reporting portrays a pattern: safety language up front, velocity in practice. Safety orgs and processes exist, but they’re repeatedly subordinated to product cadence, competitive positioning, and narrative control. Evaluations and governance rituals are there, yet the gravitational pull is toward shipping, partnerships, and market momentum. If Hao’s account is accurate, OpenAI’s internal equilibrium has tipped over time from “move carefully and test” toward “move fast and align later.” I found that sobering—and frustrating—because trust in this domain is built on visible, enforced brakes, not just promises.
The human texture is compelling too. The book depicts Altman as someone who will verbally agree to avoid conflict and then proceed as he intends anyway. I really disliked reading that; if true, it’s a trust-eroding habit and, to me, a clear leadership gap.
Hao situates OpenAI within the wider constellation of power. Cameos from Elon Musk, Bill Gates, Reid Hoffman, Peter Thiel, Reed Hastings, and other familiar figures aren’t just name-drops; they trace how narrative, capital, and policy braid together. The emergence of Anthropic gets thoughtful treatment as a counterpoint—an institution trying to put safety at the center and, in doing so, sharpening the whole field’s arguments about alignment, governance, and pace.
The book also pulls back the curtain on externalities. There’s the environmental side, including controversy over hyperscale data centers and freshwater stress in Chile—reminders that “the cloud” is intensely physical. There’s the hidden labor behind models: content moderation and data labeling in places like Kenya and Venezuela, where workers have alleged low pay and difficult conditions. And there’s the frayed boundary between academia and big tech: Hao recounts disputes over data-center energy scrutiny and accounts of academics who say they faced retaliation—up to dismissals—after raising concerns. Whatever the particulars in each case, the chilling effect on open debate is worrying.
On the legal and governance front, Hao uses Elon Musk’s lawsuit challenging OpenAI’s shift from its nonprofit ideal to a capped-profit structure to explore how founding promises collide with competitive pressures. She also shows how Silicon Valley and Washington, D.C., can reinforce each other—through national-security framing, procurement, export controls, and soft power—to co-produce empires of capability and influence.
A few takeaways I’m carrying with me:
- Safety needs structural power, not just messaging. If the people with brake authority don’t have independence and vetoes, speed will win.
- The AGI narrative is a double-edged sword. It aligns talent and capital, but it can also justify timeline compression and governance shortcuts.
- Quiet operators matter. Mira Murati’s long-game presence shows how influence can be built without theatrics.
- Externalities are core constraints. Water, energy, and global labor conditions belong in the product roadmap, not the appendix.
- Pluralism is protective. Rival institutions like Anthropic, critical journalism, and academic scrutiny help counterbalance concentrated power.
- Trust is strategy. Verbal alignment that isn’t honored later corrodes the coordination that complex projects require.
If I wanted more of anything, it would be an even deeper look at how these leadership choices ripple outside the Bay Area and Beltway—researchers abroad, startups riding changing APIs, and communities living with the environmental and labor costs. But that’s less a flaw than a sign of how much ground the book already covers.
Empire of AI touches so many areas of AI that I expect to revisit it for years. Two weeks on, the scenes and dilemmas—especially the slow tilt from safety toward speed—still linger. That, to me, is the mark of a book that matters.