Nexus
by Yuval Noah Harari
A Brief History of Information Networks from the Stone Age to AI
Book Summary
This is a comprehensive summary of “Nexus” by Yuval Noah Harari. The book traces the history of information networks from the Stone Age to AI.
What’s in It for Me? An Incisive Analysis of Networked Information
Introduction
Stone tablets laid the groundwork for record-keeping and governance. Books moved stories and information beyond oral tradition, and the printing press democratized knowledge, allowing it to be distributed on a vast scale.
Now, humans are navigating the latest revolution in information. With the rise of computers, AI, and the internet, our information networks have been transformed – and are poised to evolve even further. This prompts important questions: Have we unleashed a force beyond our control? And how can we mitigate its potential risks?
To manage these new technologies wisely, we must first understand the deep relationship between information and power – how the two have always been intertwined within networks, how those networks must balance truth and order, and how this new information revolution is reshaping them. From the Stone Age to Bitcoin, this chapter explores how information networks generate power and why that matters.
Information and Power Are Networked Entities
In 1797, the German writer Johann Wolfgang von Goethe penned a poem that captures a persistent view of the relationship between information and power: “The Sorcerer’s Apprentice.” In the poem, a young boy, eager to prove himself, uses his master’s magic to animate a broom to fetch water. However, he quickly loses control of the spell. In a panic, he chops the broom in two, but the splinters turn into even more enchanted brooms. Ultimately, it is the sorcerer who uses his magic to undo the chaos.
There are two key takeaways from this story. First, the apprentice misuses power after getting a taste of it, suggesting that individual psychology drives us to misuse power. Second, the sorcerer steps in to fix the mess, implying that when we summon forces beyond our control, a superhuman figure can solve the problem. Unfortunately, both of these takeaways are flawed. As long as humanity follows the “sorcerer’s apprentice” model of power, we are heading toward destruction.
Humankind has unleashed countless metaphorical “enchanted brooms.” We’ve tipped the climate off balance, summoned technologies like drones and AI, and fueled destructive ideologies like fascism – yet we still act as if a sorcerer figure will fix everything. The truth is, no sorcerer is coming, and no single person or group is solely responsible for solving these problems. Our tendency to summon uncontrollable forces arises from how we act together, as part of larger networks.
Take, for example, 1930s Germany. Not all Germans were fascists, but as a network, they collectively supported Hitler’s rise to power. Information has always been the glue that holds these networks together, and in today’s age of big data, this is clearer than ever. There is no doubt that generating and sharing information has benefited humanity: advances in medicine, for instance, have dramatically reduced child mortality. But despite having more data than ever, we still destroy the climate, pollute the earth, and engage in conflict.
As we stand on the brink of an AI revolution, we are about to unleash even more potentially destructive forces. These are not problems created by one entity, but by complex information networks. To save ourselves, we need to rethink the relationship between information and power. We must understand how networks operate, how information moves through them, and how we might channel them to contain the enchanted brooms we’ve unleashed.
We Are Connected Through Intersubjective Stories
Humans aren’t the only species that cooperates – ants, bees, and chimpanzees work together as well. However, while these animals build hives or hunt for food collectively, they don’t create empires, religions, or ideologies. So, what sets us apart? Evolution endowed humans with the ability to tell stories and be profoundly moved by them. This storytelling capability allowed our networks to grow exponentially – connecting not just person-to-person, but person-to-story. We no longer needed to know someone personally to help or connect with them; we only had to share the same narrative, whether it was the story of the Bible or the story of communism.
If we examine humanity’s most powerful figures – kings, popes, emperors – it becomes clear that their power wasn’t derived solely from personal charisma. Instead, it came from the stories they embodied. The same is true of modern social media influencers: they don’t personally connect with millions of followers, but they represent powerful, branded stories that resonate across their networks.
There are three levels of reality to consider. First, there’s objective reality – the undeniable facts of the world, like a pizza containing around 2,000 calories. Second, there’s subjective reality – how we individually experience the world, like finding pizza delicious. Then there’s intersubjective reality – the stories we share that exist in the collective minds of large groups. For instance, in 2010, programmer Laszlo Hanyecz spent 10,000 bitcoin to buy two pizzas. At the time, 10,000 bitcoin was worth $41. Today, that same amount of bitcoin is worth nearly $690 million. The value of bitcoin isn’t an inherent truth; it exists because enough people agree it has value. This is intersubjective reality.
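To put the pizza anecdote in concrete numbers, here’s a back-of-the-envelope calculation in Python using only the figures quoted above (the “today” value is, of course, a moving target):

```python
# Implied price per bitcoin at each point, using the figures quoted above.
btc_spent = 10_000             # bitcoin paid for two pizzas in 2010
value_2010 = 41                # USD value of those coins at the time
value_today = 690_000_000      # approximate USD value cited above

price_2010 = value_2010 / btc_spent    # ~$0.0041 per bitcoin
price_today = value_today / btc_spent  # ~$69,000 per bitcoin
multiplier = value_today / value_2010  # roughly a 17-million-fold increase

print(f"2010: ${price_2010:.4f}/BTC; today: ~${price_today:,.0f}/BTC")
print(f"Increase: ~{multiplier:,.0f}x")
```

Nothing about the coins themselves changed over those years; only the shared story about their value did.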
We like to think that our systems of social order, like politics and law, are grounded purely in truth and fact. Yet they also rely heavily on shared stories. For example, while slavery is universally condemned today, early U.S. society was built on the belief that slavery was justified. The original U.S. Constitution even sanctioned it, along with the subjugation of women and Indigenous peoples. The divine right of kings? Another intersubjective story. The very concept of nation-states? Yet another.
Returning to the idea of information networks: a simplistic view might suggest that more information naturally leads to more power and wisdom. A more cynical view, often embraced by populists and conspiracy theorists, argues that information is a weapon to be distrusted. The truth lies somewhere in the middle. Information networks are tools for both uncovering truth (facts) and creating order (through stories). When a network prioritizes truth over order, it risks destabilization – just as Darwin’s theory of evolution once destabilized Christian society in the Victorian era. On the other hand, when a network values order over truth, it can amass great power but become vulnerable to abuse – look no further than Stalinist Russia.
Today, companies like Meta and Google are focused on increasing the speed at which networks gather information. However, by concentrating on optimizing efficiency, we overlook a far more critical challenge: the need to strike a balance between truth and order in these networks.
Free-Flowing Information Isn’t Intrinsically Good
Humans are fallible, yet totalitarian systems must present themselves as infallible. Take the Christian church, for instance: its ideology revolves around an infallible God correcting the original sin of Adam and Eve. Similarly, Marxist doctrine argues that the working class can mistakenly identify with their oppressors, which is why they need the benevolent guidance of party rulers. Both systems restrict the flow of information, assuming their followers aren’t equipped to interpret it correctly. If information were too freely available in these systems, the original doctrine might be questioned or exposed as flawed.
So what’s the opposite of this model? A free market of information, where errors are uncovered and replaced with truth. In theory, this sounds ideal. Let’s consider one of the most significant milestones in the history of information networks: Johannes Gutenberg’s invention of the printing press in the mid-fifteenth century. Before its invention, Europe had produced around 11 million hand-copied volumes of text over a millennium. Yet, within just 46 years of the printing press, over 12 million printed volumes were distributed.
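The scale of that shift is easier to grasp as a rate. Here is a quick sketch using the round numbers above – both are historians’ estimates, not precise counts:

```python
# Rough production-rate comparison, using the round figures quoted above.
hand_copied_volumes = 11_000_000   # over roughly a millennium
hand_copy_years = 1_000
printed_volumes = 12_000_000       # in the press's first 46 years
printing_years = 46

hand_rate = hand_copied_volumes / hand_copy_years  # ~11,000 volumes/year
print_rate = printed_volumes / printing_years      # ~261,000 volumes/year

print(f"Hand copying: ~{hand_rate:,.0f} volumes/year")
print(f"Printing:     ~{print_rate:,.0f} volumes/year "
      f"(~{print_rate / hand_rate:.0f}x faster)")
```

A roughly 24-fold jump in annual output – and, as the next paragraphs show, the press made no distinction about what it accelerated.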
What followed was a profound shift. Information spread rapidly, and people began to question long-held doctrines, paving the way for the scientific revolution as thinkers like Galileo and Copernicus circulated their ideas. This marked the beginning of a golden age of truth and enlightenment – or so it seemed. It turns out it wasn’t that simple. While the printing press democratized information, it democratized all information – both true and false.
The same printing press that spread Galileo’s discoveries about the moon’s craters also gave traction to another text: Malleus Maleficarum (The Hammer of Witches), written by Heinrich Kramer in 1487. Kramer, a Dominican inquisitor, had been expelled from the church due to his erratic behavior and obsession with satanic conspiracies. In response, he wrote Malleus Maleficarum, a witch-hunting guide that fueled a wave of witch hunts across Europe, leading to the execution of an estimated 12,000 “witches” – though some scholars suggest the true number was much higher. The printing press may have liberated information, but it didn’t guarantee the spread of truth, let alone of order.
For truth to prevail, we need more than just an open market of information. Institutions play a vital role in tilting the balance toward facts. However, institutions themselves aren’t infallible – they need self-correcting mechanisms. Nature offers a good analogy: think of how our bodies self-correct as we learn to walk. The church, for example, gained power by proclaiming it could never be wrong – a stance that left it with only the crudest of self-correcting mechanisms, and one that ultimately proved flawed. By contrast, the scientific establishment derives authority in part from its willingness to admit and correct its own errors.
Scientific institutions routinely publish papers that debunk previous theories. Take eugenics, which was once used to justify colonialism and genocide but has since been discredited by science. In psychiatry, the Diagnostic and Statistical Manual of Mental Disorders (DSM), often referred to as the “bible” of the field, is revised every decade or so to reflect new understandings – unlike religious texts, which remain static.
So, are self-correcting mechanisms the key to balancing truth in an information network? They are crucial, but they’re not the only factor. While these mechanisms can shift the balance toward truth, they sometimes do so at the cost of order and stability.
Computers Intervene in Our Information Networks
Let’s fast-forward from the printing press to another pivotal moment in the history of information: the invention of the computer. The first computers, built in the 1940s, were bulky machines designed for mathematical calculations. Yet visionaries like Alan Turing foresaw their potential to become so advanced that they could one day mimic human intelligence. Since then, computers have fueled groundbreaking innovations such as smartphones, social media, blockchain, and AI, all of which are dramatically transforming the way we live.
Consider the impact of social media algorithms on politics, particularly in Myanmar during the Rohingya crisis. After decades of military rule, Myanmar began to democratize. Facebook, which arrived in the country shortly before the transition, played a role in this democratization by enabling people to share and access information previously controlled by the state. However, alongside the rise of democracy, violence against the Rohingya, a marginalized ethnic minority, also escalated.
Extremists from the Arakan Rohingya Salvation Army launched attacks aimed at establishing a Rohingya state. In retaliation, the government and Buddhist extremists carried out ethnic cleansing, resulting in the deaths of between 7,000 and 25,000 Rohingya and the displacement of roughly 700,000 more. Facebook’s role in this crisis was troubling. Burmese Facebook accounts were flooded with fake news, conspiracy theories, and anti-Rohingya propaganda. Facebook’s algorithm, designed to maximize engagement, amplified inflammatory content. The platform failed to moderate these posts and even autoplayed certain incendiary videos – including one from an extremist monk that gained 70% of its views from autoplay, meaning users didn’t need to click on it to be exposed to its harmful message.
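Facebook’s actual ranking system is proprietary, so what follows is only a minimal, hypothetical sketch of the failure mode described above: when a feed ranks posts purely by predicted engagement, and inflammatory material reliably earns more clicks and shares, the ranking itself does the amplifying.

```python
# Minimal illustrative sketch of an engagement-maximizing feed ranker.
# This is NOT Facebook's algorithm; all posts and scores are made up.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float   # hypothetical model outputs
    predicted_shares: float

def engagement_score(post: Post) -> float:
    # The objective rewards engagement alone; there is no truth or harm signal.
    return post.predicted_clicks + 2.0 * post.predicted_shares

feed = [
    Post("Local weather report", 0.10, 0.01),
    Post("Inflammatory conspiracy video", 0.55, 0.40),
    Post("Fact-checked news story", 0.20, 0.05),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
# The conspiracy video ranks first: the objective rewards outrage,
# not accuracy -- exactly the imbalance described above.
```

No one at such a platform needs to choose the inflammatory post; the optimization target chooses it for them.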
The Myanmar crisis of 2016–2017 serves as a stark warning. During that time, algorithms were driving the flow of information between humans. Today, we are entering an era where AIs can communicate without human intervention. Imagine this: an AI writes an article, another algorithm shares it, a third flags it as fake, a fourth analyzes it as the beginning of a political crisis, and other AI systems trigger stock sell-offs – all within seconds, and without any human involvement.
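That chain can be made concrete with a toy simulation. Every stage below is a stub invented for illustration – no real system works this simply – but it shows the essential point: the loop from publication to market reaction can close with no human in it.

```python
# Toy simulation of the machine-to-machine chain described above.
# Each stage is a hypothetical stub; the point is only that the loop
# closes without any human involvement.
def write_article():
    return {"text": "AI-written article", "fake": True}

def share(article):
    return {**article, "reach": 1_000_000}      # amplified by a feed algorithm

def fact_check(article):
    return {**article, "flagged": article["fake"]}

def assess(article):
    widespread = article["reach"] > 100_000
    return "political crisis" if article["flagged"] and widespread else "noise"

def trade(assessment):
    return "SELL" if assessment == "political crisis" else "HOLD"

# Publication -> amplification -> flagging -> analysis -> market reaction:
print(trade(assess(fact_check(share(write_article())))))  # SELL
```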
Computers today are far more advanced than earlier information technologies. A stone tablet could record taxes but not calculate them. A printing press could copy a book but not rewrite it. A radio could broadcast music but not choose the songs. Computer technology fundamentally changes how information networks function – not just how information moves through them, but how it is created, shared, and interpreted.
For the first time, we have information networks that don’t require human input. What does this mean for the power of computers? In capitalist societies, power is often measured by how many entities work with you, how well you understand law and finance, and how capable you are of invention. These are all things that computers can now do, or will soon do, better than humans. As computers continue to reshape information networks, they will accumulate more power and influence than we do.
AI Is Revolutionary. Is That a Good Thing?
Move 37. If you’re an AI expert or a fan of the Chinese board game Go, you know the significance of those two words. In Go, players place stones on a grid to capture territory in a game of immense complexity – far more complex than chess. During a landmark match in 2016 between Google’s AI, AlphaGo, and Lee Sedol, a world-champion Go player, the AI made a move that stunned the world: move 37. This bold, unprecedented move defied centuries of human strategy, and no one saw it coming. Yet it set up AlphaGo’s victory.
Move 37 wasn’t just about one player defeating another. It represented a pivotal moment when AI outperformed humans in a distinctly non-human way. For many experts watching the game, it raised a broader question: What does it mean if AI surpasses not only our skills but also our understanding?
Let’s explore this through the lens of AI’s impact on democracy. Democracy, while imperfect, is perhaps humanity’s best attempt to create an information network that balances truth and order. Mechanisms like voting and institutional checks (such as the judicial system and a free press) help maintain that balance. But will AI’s growing influence destabilize democracy beyond repair? This isn’t just a theoretical question – AI is already making decisions about who gets imprisoned, who gets a job interview, and who’s admitted to college.
Take the case of Eric Loomis. In 2013, Loomis was convicted of eluding police, and during sentencing the judge used an AI algorithm called COMPAS to assess his likelihood of reoffending. Based on the AI’s assessment, Loomis was deemed “high risk” and sentenced to six years, even though neither Loomis nor his legal team had access to how the AI arrived at that decision. He challenged the sentence, arguing that it violated his rights, but the courts upheld it.
This brings up a critical issue: we don’t know how AI reaches its conclusions. If an AI rejects your loan application, its reasoning is hidden in a “black box.” Even if it provided an explanation, it would likely be pages of complex calculations – some based on relevant factors (like payment history) and others on seemingly irrelevant details (like the time of day you applied or the battery level of your device). How can we establish a self-correcting mechanism when we don’t even understand what needs to be corrected?
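To see why even an honest “explanation” would not help much, consider a minimal sketch of such a scorer. The features and weights below are entirely made up for illustration – no real lender’s model is implied – and real credit models are vastly larger, which is precisely the problem.

```python
import math

# Hypothetical learned weights. Note the model leans on signals a human
# would consider irrelevant, like application hour and battery level.
weights = {
    "missed_payments":        -1.8,   # plausibly relevant
    "income_thousands":        0.02,  # plausibly relevant
    "applied_after_midnight": -0.7,   # spurious correlation absorbed in training
    "battery_below_20pct":    -0.4,   # spurious correlation absorbed in training
}
bias = 0.5

def approval_probability(applicant: dict) -> float:
    z = bias + sum(weights[k] * applicant[k] for k in weights)
    return 1 / (1 + math.exp(-z))   # logistic score

applicant = {"missed_payments": 1, "income_thousands": 55,
             "applied_after_midnight": 1, "battery_below_20pct": 1}
print(f"Approval probability: {approval_probability(applicant):.2f}")
# Reporting the final score explains nothing: the decision blends
# legitimate and spurious signals, and no single factor is "the" reason.
```

A self-correcting mechanism would need to know which of those weights is wrong – and with millions of parameters instead of four, even the engineers often can’t say.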
As AI’s decisions become more opaque, trust in democratic and civic systems could erode. This creates fertile ground for populists, conspiracy theorists, and charismatic leaders to exploit the uncertainty. We’re already witnessing this in political discourse, with some Americans still unable to agree on the basic fact of who won the 2020 election.
So, is AI yet another “enchanted broom” – a force beyond our control that could lead to societal collapse? Optimists like Ray Kurzweil, who predicts AI will revolutionize education and healthcare and even help prevent ecological disasters, would say no. But history offers a more complicated perspective. During the Industrial Revolution, the Luddites feared that new technologies would destroy jobs and the social order. While their predictions didn’t come true in exactly the way they imagined, the revolution did bring environmental and human costs – like climate change – that we are still grappling with today.
Throughout history, new technologies have reshaped human information networks, from the ancient Mesopotamian writing tools that recorded taxes and organized city-states to the computers that now produce and disseminate information at lightning speed. But while these networks have produced power, they haven’t always produced wisdom. Our salvation may lie in realizing, unlike the sorcerer’s apprentice, that just because we can do something doesn’t mean we should.
Final Summary
Conclusion
In this chapter on Nexus by Yuval Noah Harari, you’ve learned that throughout human history, we have created information networks that intertwine power and knowledge. Computer technology has revolutionized how these networks are built and sustained, but it also threatens to undermine the self-correcting mechanisms that balance truth with order. To navigate this new reality, we need to deepen our understanding of how power flows through these networks and find ways to keep truth and order in harmony, ensuring that technology serves us without causing harm.
Okay, that’s it for this chapter. We hope you enjoyed it. If you can, please take the time to leave us a rating – we always appreciate your feedback. See you in the next chapter.