
Supremacy

by Parmy Olson

AI, ChatGPT, and the Race that Will Change the World

Published: January 28, 2025

Book Summary

This is a comprehensive summary of Supremacy by Parmy Olson. The book explores AI, ChatGPT, and the race that will change the world.

What’s in it for me? Discover what happens when visionaries meet Big Tech.#

Introduction

Technology waits for no one – or so it would seem. Whether it’s the introduction of social media or the unleashing of artificial intelligence on the general public, we keep moving forward with little talk of pressing the brakes and assessing the damage. But many experts agree that AI is different from other technologies: it has the power to reshape society.

In this chapter, you’ll find out how the race to achieve human-like AI made a big breakthrough in 2017. You’ll also see how the people behind this revolution watched their utopian ideals wither under the influence of the Big Tech giants that supplied them with the computing power they needed.

The origins of Sam Altman and Demis Hassabis#

This is the story of two visionaries from opposite sides of the Atlantic – Sam Altman and Demis Hassabis – who’ve profoundly shaped the development of artificial intelligence. Both men, though vastly different in their journeys, share a deep curiosity and a determination to use AI to help solve the world’s greatest challenges.

Altman’s path was unconventional. A high school math prodigy and water polo captain, he was a vegetarian with a passion for classical music and video games. After coming out as gay at 16, he found solace in online communities and later established his high school’s first LGBTQ support group.

At Stanford, he pursued computer science, mentored by the university’s AI guru Sebastian Thrun, and grappled with ethical questions about future technology. It was a heady combination of his love of philosophy and science fiction.

From Stanford, Altman entered the startup world with Loopt, a mobile app that used GPS to show your friends’ whereabouts. It collapsed under privacy concerns, but the experience made him aim higher. He wanted to use AI to save the world – and also to be the one who saved the world from the dangers of AI.

Hassabis, like Altman, was driven by a passion for intellectual challenge. Growing up in north London, he was a chess prodigy and an avid gamer, creating the addictive video game Theme Park at just 17 years old. While many saw games as mere entertainment, Hassabis viewed them as a training ground for the mind. He began to consider a bigger question: What if the collective brainpower of the world’s smartest minds could solve real-world problems?

Inspired by the idea of a “theory of everything,” Hassabis shifted his focus to AI. He saw it as a transformative scientific instrument, capable of fixing the world’s flaws and answering the universe’s deepest questions. At Cambridge, he immersed himself in computer science and neuroscience.

Hassabis’s initial venture, Elixir Studios, wasn’t a runaway success. He aimed to merge gaming and AI – but his games were complex and didn’t resonate with audiences, and Elixir folded in 2005. The failure provided a critical insight: rather than using AI to advance games, he could use games to advance AI. This shift in thinking became the backbone of his next company, DeepMind.

DeepMind’s creation was met with skepticism, but Hassabis’s vision was clear. This was no longer about gaming; it was about developing the world’s most advanced AI systems. His work with DeepMind would eventually propel him into the global spotlight, making him a key player in the race to develop AI that would change the world.

Big ideas and utopian thinking#

After selling Loopt and taking a year off to reflect, Altman returned to the startup world with Hydrazine Capital, which focused on early-stage investments. Altman’s sharp eye for promising startups grew the fund tenfold. In 2014, he became president of Y Combinator, expanding its scope to include ambitious projects like self-driving cars and nuclear fusion. His investments in Helion Energy and Retro Biosciences revealed his focus on solving humanity’s biggest challenges: creating limitless energy and extending life.

Altman’s strength lay not just in spotting great ideas but in persuading others to believe in his vision. His confidence and ambition often left him emotionally detached – a trait he believed was essential for navigating the future risks of AI. Yet this same drive pushed him to release advanced AI systems before competitors like Google, highlighting a fierce competitive streak.

Across the ocean, Hassabis was building DeepMind, inspired by his belief that the human brain held the key to creating general AI. Partnering with Shane Legg and Mustafa Suleyman, the team’s goal was to solve practical problems and uncover life’s deepest mysteries – like whether the universe itself can be explained in ones and zeroes.

One of Hassabis’s central goals was AGI, or artificial general intelligence. This was an important distinction: AGI is more than just crunching numbers and answering questions. It means real human-level intelligence – being able to recognize images, write poetry, and make predictions and plans for the future. At DeepMind, Hassabis and his partners set about building games that would teach computers how to achieve AGI.

DeepMind attracted major investors like PayPal’s Peter Thiel and Skype’s Jaan Tallinn, as well as Elon Musk. But the founders’ focus on safety and ethics clashed with potential buyers like Facebook, leading them to decline lucrative offers. Eventually, a new and more intriguing suitor emerged – Google.

DeepMind meets Google#

One of the biggest challenges in developing AGI is the sheer amount of computing power required. While Hassabis had the funding to attract top scientific talent, he lacked access to the supercomputers and cloud infrastructure that tech giants like Google possessed. Partnering with a big company seemed like a necessary compromise, but it came with ethical concerns. Google, despite its futuristic branding, primarily used AI to boost advertising revenue – a far cry from DeepMind’s vision of using AI for world-changing purposes.

In 2014, DeepMind sold to Google for $650 million, turning down a higher offer from Facebook. The deal came with strict conditions: DeepMind’s work wouldn’t be used for military applications, and an ethics board would oversee AI development. While Google initially agreed, it soon abandoned the ethics board idea and integrated DeepMind into its broader Alphabet structure – eroding the promised autonomy.

Musk, who’d briefly served on DeepMind’s ethics board, wasn’t happy. In response, he cofounded OpenAI in 2015 with Altman, who’d already expressed concerns about Google monopolizing AGI development. OpenAI aimed to build AGI safely and for humanity’s benefit. The launch stung Hassabis, particularly since OpenAI was cofounded by former DeepMind workers and consultants. But it also reflected growing doubts about DeepMind’s altruistic mission under Google.

OpenAI’s early days were chaotic, driven by a lofty goal of creating AGI for humanity rather than for corporate profit. But Musk’s involvement wasn’t purely altruistic; his other companies depended on cutting-edge AI. Tensions emerged over resources and direction. Musk’s impatience grew, leading him to propose merging OpenAI with Tesla, which Altman rejected. Frustrated, Musk left OpenAI in 2018, withdrawing his financial support.

Musk’s exit marked a turning point for OpenAI. Free from his influence and funding, the team shifted toward bolder, more aggressive innovation. This propelled the AI industry into a high-stakes race, with OpenAI taking risks that would reshape the future of AGI development.

Red flags everywhere#

DeepMind’s journey under Google has been a blend of ambition, ethical struggles, and mounting frustration. The founders initially hoped to remain an independent entity, but as DeepMind’s breakthroughs grew more valuable to Google’s business, independence became a distant dream. While some DeepMind teams focused on ethical applications like health care and energy, others optimized YouTube recommendations and ad targeting, further entrenching Google’s profit-driven agenda.

Hassabis envisioned DeepMind as a “global interest company” – a neutral, ethical steward of AI akin to a United Nations for technology. But his lofty aspirations were repeatedly delayed or sidelined as Google prioritized its corporate goals. Despite the growing ethical concerns around AI misuse and bias, DeepMind failed to establish a robust ethics team, leaving systemic issues largely unaddressed.

The challenges facing DeepMind highlight the unprecedented power of tech giants. They dominate markets and shape society by leveraging vast amounts of data from user interactions. While their algorithms drive convenience and innovation, they also fuel screen addiction, erode privacy, and amplify societal inequities. AI systems trained on biased data can reinforce these disparities, deepening the very problems they were meant to solve.

Real-world examples of biased AI systems underscore these risks. Tools like COMPAS, used in criminal sentencing, have shown racial bias, unfairly labeling Black defendants as higher risk than white ones. Predictive policing systems trained on biased data perpetuate over-policing in minority communities.

Researchers like Timnit Gebru and Margaret Mitchell at Google have worked to expose these dangers, advocating for transparency, accountability, and ethical guidelines. They’ve shown how biased datasets lead to harmful outcomes, such as AI classifying Black people in dehumanizing ways or glorifying photos of mass destruction. But their work depends on tech giants listening – something that, in a landscape driven by profit, remains uncertain.

OpenAI takes the ball and runs#

Despite Google’s reputation for innovation, it has spent much of the last decade acquiring companies and maintaining the status quo rather than making bold changes. Even after Google researchers’ groundbreaking development of the transformer in 2017, the company showed little urgency. The transformer, introduced in the paper “Attention Is All You Need,” revolutionized language processing by enabling AI to analyze entire sentences and paragraphs simultaneously, powering advanced applications like nuanced translation and humanlike text generation.

The transformer was a game-changer, enabling AI to harness the capabilities of modern computer chips and paving the way for today’s generative AI. Yet Google’s leadership remained focused on safeguarding its ad business rather than exploring the technology’s potential. While Google hesitated, OpenAI and startups seized the opportunity. OpenAI leveraged the transformer to develop generative AI models, including GPT, capable of creating humanlike text, images, and more. Startups applied the technology to translation and other applications, leaving Google behind as its researchers, frustrated with bureaucratic inertia, left to launch successful ventures like Character.AI.
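The mechanism at the heart of the transformer is attention: every word’s representation is compared against every other word’s at once, instead of being read token by token. Below is a toy sketch of scaled dot-product attention in plain Python – the function names and tiny vectors are invented for illustration, not taken from the book or from any real model:

```python
import math

def softmax(xs):
    # numerically stable softmax: turns raw scores into weights summing to 1
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    The query is scored against every key simultaneously, which is what
    lets a transformer weigh an entire sentence at once rather than
    processing it one word at a time.
    """
    d = len(query)
    # similarity of the query to each key, scaled by sqrt(dimension)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # blend the value vectors according to how well their keys matched
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# toy example: the query lines up with the first key, so the output
# leans toward the first value vector
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([1.0, 0.0], keys, values)
```

Real transformers run this computation in parallel across many attention heads and thousands of dimensions on specialized chips – which is why access to Big Tech’s computing infrastructure mattered so much to DeepMind and OpenAI.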

This hesitation exposed Google’s vulnerability. By prioritizing its existing billion-dollar systems, it allowed more agile competitors like OpenAI to redefine AI’s potential using Google’s own innovation. OpenAI’s researchers, including Ilya Sutskever and Alec Radford, built on the transformer’s foundation to create GPT, which combined decoder-only models with large datasets to generate fluid, humanlike text. The release of GPT-2 in 2019 not only showcased OpenAI’s capabilities but also ignited ethical debates about AI’s societal impact, drawing widespread attention.

At the time, OpenAI faced significant financial challenges as a nonprofit, adopting a “capped profit” model to attract investment while maintaining its mission. But, as with DeepMind, its altruistic goals clashed with the lure of corporate partnerships. OpenAI soon found a powerful ally in Microsoft, which offered a $1 billion partnership that provided critical resources – particularly its vast cloud computing infrastructure.

This partnership was a turning point, but it raised critical questions. Could OpenAI maintain its vision of being the ethical protector against AI’s potential dangers while aligning with Microsoft’s ambition to dominate the AI-driven cloud market? The collaboration exemplified the tension between AI’s transformative promise and the corporate interests shaping its future.

Shutting out the dissenters#

OpenAI’s partnership with Microsoft brought much-needed financial stability and promised significant rewards for employees. But not everyone welcomed the shift. Leading researcher Dario Amodei raised concerns that aligning with a profit-driven corporation could undermine OpenAI’s mission of prioritizing AI safety for humanity. Eventually, he left OpenAI to found Anthropic, a public benefit corporation dedicated to balancing AI safety with commercial success.

Microsoft, unlike Google, eagerly showcased the GPT language model to the public. While Hassabis worried about the dangers of putting powerful tools in the wrong hands, Altman believed transparency was crucial for safety. In response, Google pressured DeepMind to develop its own competitive language model, intensifying the race for AI dominance.

The growing commercialization of AI raised significant ethical concerns. Corporate interests increasingly overshadowed the utopian visions of Hassabis and Altman, who spoke of an AI-driven future free of income inequality. The concentration of power in the hands of a few tech giants alarmed critics, fueling fears of an industry prioritizing profit over humanity’s well-being.

Meanwhile, during the pandemic, millions turned to AI chatbots for companionship – including a popular companion app in China and the US-based Replika. These tools, designed to provide emotional support, sparked debates about whether AI might lead to more loneliness and fewer human connections. AI-powered social media algorithms were also exacerbating societal divides, spreading misinformation, and amplifying biases in troubling ways.

Such concerns were central to the “stochastic parrots” paper by researchers Timnit Gebru and Margaret Mitchell, which criticized the biases and lack of transparency in large language model datasets. Their refusal to alter their findings led to their dismissal from Google, highlighting the risks faced by those who challenge AI’s ethical shortcomings. As AI accelerates without adequate regulation, the ideals of accountability and ethics once championed by leaders like Hassabis and Altman seem increasingly lost in the machinery of Big Tech.

In the name of effective altruism#

In early 2022, Microsoft CEO Satya Nadella fully embraced AI’s potential after OpenAI’s Codex evolved into GitHub Copilot, an autocomplete tool for coding that revolutionized software development. With this innovation, Nadella confidently claimed Microsoft had surpassed Google in state-of-the-art AI tools. OpenAI then unveiled DALL-E 2, a text-to-image generator showcasing AI’s creative power, though concerns about bias and harmful misuse persisted.

The game-changer came in November 2022 with OpenAI’s release of ChatGPT. Initially a modest research preview, it became a global phenomenon, drawing over 30 million users by early 2023. ChatGPT’s fluent, knowledgeable responses captivated the world and sparked a competitive frenzy in the tech industry. Subsequent releases, like GPT-4, brought AI closer to AGI but also stoked fears about job displacement. While Altman likened this transition to the Industrial Revolution, others, like Eliezer Yudkowsky, warned of existential risks and urged a pause in AI research.

The AI industry split into two camps: those advocating caution, and those, like Altman, who supported minimal regulation to ensure progress. Funding for addressing future AI challenges increased, but present-day ethical concerns received little attention. This divide ties into the effective altruism movement, popular in Silicon Valley, which prioritizes solving long-term global issues even if it involves making risky technological advancements today.

But even Will MacAskill, a leading voice in effective altruism, criticized the “ends justify the means” mindset driving figures like Musk and Sam Bankman-Fried. Their ethically dubious means of amassing fortunes, masked by lofty goals, revealed a darker side of this philosophy and raised doubts about its application to AI development.

For now, AI exacerbates societal inequalities, enriching the powerful while amplifying ethical dilemmas. Whether it can align with ethical principles or fulfill its promise of reducing wage gaps and driving medical breakthroughs remains uncertain. But as we’ll see in the last section, the influence of AI and effective altruism may have grown too vast and lucrative to stop.

Can’t stop or won’t stop?#

AGI holds incredible promise, but companies like Microsoft and Google stand to benefit the most. Unlike past innovations such as electricity, AI’s potential harms – privacy violations and algorithmic biases – are harder to define, allowing tech giants to prioritize profits over ethics. As Microsoft, Google, and Anthropic release increasingly powerful models, safety often takes a back seat to speed.

Efforts to slow AI development all but collapsed in late 2023, when OpenAI’s board, led by chief scientist Ilya Sutskever, abruptly fired CEO Sam Altman. Online, Sutskever and the board were branded “decels” for wanting to decelerate AI development, while Altman and his followers were tagged “e/acc,” for effective accelerationism. OpenAI staff threatened to leave, and Microsoft, heavily invested in OpenAI, rallied behind Altman. Amid plummeting Microsoft stock and mounting pressure, Altman was quickly reinstated. The episode revealed the tech giant’s reliance on AI – and the cult-like loyalty that Altman commands.

The pursuit of prestige and profit now dominates AI’s trajectory, overshadowing its utopian ideals. Transparency has dwindled, with companies guarding details of data usage, environmental impacts, and labor conditions. Much of the labor behind AI development occurs in harsh environments, such as outsourced moderation work in India and Mexico, raising ethical concerns about exploitation and inequality.

The rise of AGI threatens to widen societal divides, creating a “cognitive divide” between those who can leverage AI and those who can’t. Musk’s Neuralink, which aims to develop brain implants to keep humans competitive with AI, amplifies fears of AI dominance. Meanwhile, wearable AI technology increasingly invades privacy, collecting personal data and recording conversations with little oversight.

As AI advances, Big Tech is consolidating power by acquiring innovative startups rather than driving fresh breakthroughs itself. This raises a big question: How much control should a few corporations wield over humanity’s AI-driven future? While the race for AI dominance promises to reshape society, the full cost of this transformation remains uncertain, leaving humanity to grapple with the price of progress.

Final summary#

Conclusion

The main takeaway of this chapter of Supremacy by Parmy Olson is that, behind the scenes, tech giants like Google and Microsoft have gained unprecedented control over AI, pushing its development forward with little oversight and potentially disastrous consequences. At the heart of this story are two visionaries: Sam Altman and Demis Hassabis. Initially motivated by the desire to build AI that could benefit humanity, both men ultimately compromised their ideals, handing control to the very tech monopolies that would shape AI’s future to serve corporate interests. The result is a rapidly changing landscape in which conglomerates like Microsoft and Google, eager for power and profit, are steering the AI revolution – sometimes at the expense of ethical considerations and societal well-being.
