
The Unaccountability Machine

by Dan Davies

Why Big Systems Make Terrible Decisions – and How the World Lost Its Mind

Published: February 6, 2025

Book Summary

This is a comprehensive summary of The Unaccountability Machine by Dan Davies. The book explores why big systems make terrible decisions – and how the world lost its mind.

What’s in it for me? Learn how institutions hide accountability so you can spot and correct systemic failures.

Introduction

Maybe you’ve noticed that, in large organizations, you often can’t find a single person who admits to being in charge. Requests get shunted to different departments, rules feel carved in stone, and nobody seems truly empowered to make a call. This pattern didn’t spring up overnight – it reflects a hidden shift in how society handles responsibility and decision-making. Processes replace human judgment, and following “the system” becomes more important than hearing individual concerns.

In this chapter, you’ll learn why personal responsibility has faded behind rigid frameworks, how standardizing decisions can warp outcomes, and which strategies might restore a sense of control. Through real-world examples, you’ll see how systems can be redesigned to better serve individuals rather than blindly following instructions no single person can overrule.

Accountability is lost in complex systems

In 2023, Fox News paid Dominion Voting Systems $787.5 million to settle the second-largest defamation case in American history. The lawsuit hinged on months of false claims that Dominion had rigged the 2020 election. Yet court filings showed Fox executives like Ron Mitchell privately calling the accusers “kooks” while still airing their allegations, worried that honesty about Donald Trump’s defeat would cost the channel viewers. This disconnect – where insiders knew the story was false but felt compelled to push it – recalls the Capitol riot of January 6, 2021, when many participants found themselves in an outcome they claimed not to want. Such moments reveal how large systems slip into decisions that no single individual fully controls.

At first glance, you might assume that unethical choices must come from malicious intent, but stories like these suggest otherwise. Rules, strategies, and corporate pressures created an environment in which no one felt able to stop a narrative they personally doubted. Similar patterns appear in other large organizations. At an airport near Amsterdam in April 1999, 440 ground squirrels were destroyed in an industrial shredder because of a minor paperwork problem. They’d been shipped from Beijing for the pet trade but arrived without the correct import documents. Management insisted that staff “formally” made the correct call, yet the horrified public saw a system that had run on autopilot until it reached a grotesque end.

Such examples highlight what some observers label an accountability sink: a structure of rules and delegation that removes personal ownership of decisions while allowing the status quo to persist. The logic is that formal policies shield decision-makers from blame and streamline repetitive tasks. But when these policies meet an unexpected scenario, nobody steps in to override them until it’s too late. Over time, these defenses against individual responsibility can become the source of systemic dysfunction.

This shift from personal responsibility to process-driven actions reflects a world in which managers and professionals often limit their own discretion, whether to avoid conflict, reduce legal risks, or keep operations consistent. The result can be a system where catastrophic outcomes happen with no clear individual to hold accountable. By recognizing how this diffusion of responsibility occurs, it becomes easier to see how large institutions so often produce bewildering – and sometimes shocking – results.

The reason some systems feel alien

When YouTube executives realized in 2017 that bizarre cartoon parodies were being recommended to children, they found themselves confronting a faceless chain of decisions no one person had truly intended. How did this happen? The company had defined “engagement” as its core priority, and this algorithmic fixation took on a life of its own. No one had personally chosen to frighten kids – it simply happened because the system had been aimed at maximizing watch time, whatever the content.
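To make the dynamic concrete, here is a minimal sketch – with invented data and scoring, not YouTube’s actual system – of how a recommender that optimizes a single engagement metric stays blind to everything the metric doesn’t mention:

```python
# A toy recommender that ranks purely by predicted watch time.
# Hypothetical data and scoring -- not YouTube's real pipeline --
# but it shows how one metric can override every other concern.

videos = [
    {"title": "gentle bedtime story",   "predicted_watch_min": 4.0, "kid_safe": True},
    {"title": "educational counting",   "predicted_watch_min": 5.5, "kid_safe": True},
    {"title": "bizarre cartoon parody", "predicted_watch_min": 9.2, "kid_safe": False},
]

def recommend(candidates):
    # The only objective the system knows about is engagement.
    # Nothing in the scoring mentions "kid_safe", so the flag
    # is simply invisible to the optimization.
    return max(candidates, key=lambda v: v["predicted_watch_min"])

print(recommend(videos)["title"])  # -> "bizarre cartoon parody"
```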

Many argue that corporations sometimes act like rudimentary artificial intelligences, using rules and incentives that run on autopilot. Even an influential executive may not fully grasp the web of decisions happening under their watch. In complex setups, procedures can become so elaborate that the system’s overall behavior no longer matches any individual’s intent. Philosophers have compared this to the “Chinese room” thought experiment, in which a person inside a sealed room mechanically answers questions written in Chinese by following a set of instructions – without understanding Chinese at all. The written responses appear coherent to outsiders, but there’s no genuine comprehension behind them. The black-box nature of an organization can similarly shield it from accountability, since it’s tricky to pinpoint where true responsibility lies.
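The thought experiment fits in a few lines of code – a sketch with placeholder phrases, showing a rule table that produces coherent-looking answers while nothing in the process understands a word:

```python
# The Chinese room in miniature: pure symbol lookup produces
# replies that seem sensible from outside, yet no comprehension
# exists anywhere in the system. The phrases are placeholders.

rule_book = {
    "how are you?":       "fine, thank you.",
    "what day is it?":    "it is tuesday.",
    "do you understand?": "of course i understand.",
}

def sealed_room(question):
    # Follow the instructions; never interpret them.
    return rule_book.get(question, "please rephrase the question.")

print(sealed_room("do you understand?"))  # coherent, yet nobody is home
```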

These issues also appear in automated grading systems that unexpectedly punish certain groups of students, or in corporate goals that fixate on a single metric at the expense of everything else. Such patterns resemble the “paperclip maximizer” scenario, in which a hypothetical AI set on making as many paperclips as possible destroys every other priority. It isn’t malevolent – it’s simply following instructions to a fault. The same dynamic arises whenever a system’s rules overshadow human judgment.

Understanding that organizations tend to grow beyond individual control provides some clarity about why they can seem so alien. They develop feedback loops and rigid procedures that may serve a narrow target but disregard ethical or practical concerns. That doesn’t mean there’s no reason behind these outcomes, only that the reason belongs more to the system than to any one decision-maker. Recognizing this dynamic can help spark more thoughtful questions about who sets priorities, how rules are enforced, and why certain decisions seem detached from ordinary human values. Genuine oversight might depend on ensuring that no system runs unexamined, especially when its workings appear too complicated for anyone to explain.

How to engage complex systems without dissecting their inner workings

In 1948, mathematician Norbert Wiener proposed that the feedback loops keeping an automated gun sight locked on a moving airplane might also shape how major organizations process information. He noticed that excessive feedback could trigger wild swings in behavior, while too little would leave targets slipping away. This insight launched cybernetics – which later influenced Ross Ashby, a British psychiatrist investigating how neural networks might stabilize the human mind, and Stafford Beer, a management consultant who applied these principles to real-world businesses.
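Wiener’s observation about feedback strength can be reproduced in a toy simulation – a minimal proportional-feedback loop with made-up numbers, not any system Wiener built:

```python
# A minimal proportional-feedback loop tracking a fixed target.
# The correction strength (gain) has to match the problem: too
# high and the system overshoots and oscillates wildly; too low
# and the target slips away.

def track(target, gain, steps=10, position=0.0):
    history = []
    for _ in range(steps):
        error = target - position   # feedback: how far off are we?
        position += gain * error    # respond in proportion to the error
        history.append(round(position, 2))
    return history

print("balanced gain  :", track(target=10, gain=0.5))   # settles smoothly
print("too much gain  :", track(target=10, gain=2.5))   # swings grow wilder
print("too little gain:", track(target=10, gain=0.05))  # never catches up
```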

Their message was that you don’t have to dissect a giant institution’s every inner mechanism. Instead, you can treat each part as a “black box” with defined inputs and outputs, then arrange clear feedback loops so no one piece gets overloaded. An example often used is a simple squirrel cage: the cage’s temperature, light, and other factors can each be regulated by small, self-contained controls, letting the squirrels stay healthy without constant human tinkering. Ashby referred to the number of possible states a system can occupy as its variety. If the outside world changes in many ways, you need enough adaptability to match it – otherwise, chaos results.
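Ashby’s principle can be sketched directly: a regulator can hold an outcome steady only if it has a distinct response for every disturbance the environment can produce. The squirrel-cage details below are invented for illustration:

```python
# A sketch of requisite variety: four possible disturbances, but
# a controller with only two responses. Whatever the controller
# cannot match, it cannot regulate.

disturbances = ["too_hot", "too_cold", "too_bright", "too_humid"]

responses = {"too_hot": "cooling_on", "too_cold": "heater_on"}

for d in disturbances:
    action = responses.get(d)
    if action:
        print(f"{d:10s} -> {action} (regulated)")
    else:
        print(f"{d:10s} -> no matching response (chaos results)")
```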

Beer built on this by showing how organizations could handle complexity by filtering out unnecessary detail and amplifying only significant signals. Data that never influences decisions remains mere noise, so successful systems focus on information that sparks action. They also include emergency channels – like a “red handle” on a train – that bypass normal layers if disaster looms, ensuring leadership can respond quickly. By establishing who monitors which inputs, and limiting each segment’s responsibilities to what it can realistically control, managers avoid drowning in unmanageable detail.
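Here is a brief sketch of that filtering-plus-override pattern – the thresholds and message format are invented for illustration, not Beer’s own notation:

```python
# Attenuate noise, amplify what matters, and keep a "red handle"
# channel that bypasses normal filtering when disaster looms.

messages = [
    {"text": "routine meter reading",     "severity": 1},
    {"text": "sales dipped 2% this week", "severity": 3},
    {"text": "brake failure on line 4",   "severity": 9, "red_handle": True},
    {"text": "printer low on toner",      "severity": 1},
]

ATTENTION_THRESHOLD = 3  # below this, data never changes a decision: noise

for msg in messages:
    if msg.get("red_handle"):
        # Emergency channel: skip every filter and reach leadership now.
        print("RED HANDLE ->", msg["text"].upper())
    elif msg["severity"] >= ATTENTION_THRESHOLD:
        print("escalate    ->", msg["text"])  # amplified signal
    # Everything else is attenuated before it clogs the top.
```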

What an institution does consistently is often more telling than any mission statement. By emphasizing feedback loops and variety engineering, you allow each team or subsystem to function within its capacity. There’s no need to open every black box and understand each wire or line of code. This approach combines clear accountability with enough flexibility to steer through unpredictable conditions. It underscores why large entities often appear to act on their own logic – and how we can design frameworks that keep those forces in check.

Wealth doesn’t equal intelligence

In 1881, Joseph Wharton founded a business school at the University of Pennsylvania to elevate the status of management – yet, even decades later, many economists graduate without ever learning how to interpret a balance sheet. This gap suggests that running a profitable enterprise doesn’t guarantee intellectual rigor. Instead, leaders can mask shaky decisions under sophisticated models and financial reports, giving the illusion of competence. Economists, trained in abstract theories, often miss the immediate realities of production and overhead costs. Meanwhile, managers who rely too heavily on standardized accounting can end up with a distorted picture of which products are actually profitable.

For example, a skincare firm might conclude that high-priced lotions outperform cheaper bulk lines because the entire marketing budget is listed as a “period cost” – an expense recognized in a single timeframe, rather than assigned to each product – while cheaper items carry all their direct manufacturing costs. Outsourcing the cheaper line can then seem like a smart move, but hidden logistics or quality problems may end up erasing any short-term savings. In the end, consultants and middle managers often scramble to reinterpret the data, further blurring what truly went wrong.
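A few invented numbers show how the distortion works. When marketing is booked as a lump period cost, the premium line looks like the clear winner; once that spending is traced to the products that actually consumed it, the ranking flips:

```python
# All figures are hypothetical. Marketing sits in one lump "period
# cost", so neither product is charged for it -- until it is traced
# to the line that actually used it.

premium = {"revenue": 100_000, "direct_costs": 30_000, "marketing_used": 45_000}
bulk    = {"revenue": 100_000, "direct_costs": 60_000, "marketing_used": 5_000}

# Conventional view: marketing vanishes into a single period cost.
for name, p in [("premium", premium), ("bulk", bulk)]:
    print(f"{name}: margin ignoring marketing = {p['revenue'] - p['direct_costs']:>7,}")

# Traced view: charge marketing to the product that consumed it.
for name, p in [("premium", premium), ("bulk", bulk)]:
    margin = p["revenue"] - p["direct_costs"] - p["marketing_used"]
    print(f"{name}: margin with marketing traced = {margin:>7,}")

# premium: 70,000 vs 25,000; bulk: 40,000 vs 35,000 -- the "loser" wins.
```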

This cycle of misleading data and misguided choices undercuts the idea that wealth creation is a reliable marker of intelligence. It’s true that large profits can reward certain strategies, yet those strategies sometimes rest on faulty assumptions or selective accounting practices. Graduates of prestigious economics programs may wield elegant formulas, but without the language of actual ledgers and cost centers, they risk prescribing unrealistic solutions. At the same time, corporate leaders may defer to rules designed for external reporting, not internal decision-making.

The lessons are straightforward. Misapplied metrics warp priorities, narrow perspectives, and feed into a broader system where superficial success can overshadow real wisdom. Understanding how data is collected, organized, and weaponized is more important than memorizing abstract models. It’s this awareness, not financial gain alone, that signals genuine insight. Recognizing these structural blind spots can sharpen our judgment and guard against decisions that, despite looking lucrative in the short term, may leave everyone regretting the fine print.

Crises emerge when systems can’t adapt

In 1997, a deal worth over $2 billion briefly padded General Electric’s earnings, after which the company abruptly closed factories and laid off more than a thousand employees to avoid distorting its quarterly numbers. Though this move kept the metrics neat, it undermined the overall resilience of the organization, stripping away the middle-management tiers that once handled real-time information. Over the next decade, prioritizing short-term financial gains – what some called the shareholder value revolution – became widespread. It rewarded a few high-profile CEOs and their allies but piled debt and insecurity onto everyone else.

As more companies rushed to cut costs, outsource operations, and shuffle debts through financial engineering, they gradually lost the internal feedback loops that once flagged potential trouble. Corporate decision-making came to hinge on meeting quarterly targets, with executives presiding over “rank and yank” policies – assessing employees and firing the bottom 10 percent, regardless of their performance. Although such strategies appeared efficient on paper, they hid the fact that many firms could no longer adapt if economic shocks arrived unexpectedly. Eventually, entire communities and industries frayed as wages stagnated and local economies became reliant on temporary fixes.

This blind pursuit of immediate numbers paralleled a broader shift in governance, where public institutions tried to emulate corporate efficiency by outsourcing major functions – like running transportation networks or approving defense projects – to private firms. With large contracts and outside consultants taking the lead, the state itself lost much of its ability to coordinate responses when real crises emerged. Both private and public sectors thus found themselves ill-prepared for the late-2000s mortgage collapse, which caught financial giants with huge subprime holdings they’d barely acknowledged. Many in working-class communities suffered the fallout, fueling discontent that turned into broader populist reactions.

These movements promised faster solutions but rarely addressed the structural cause: once a system devotes all its energy to near-term outputs, it sheds the capacity to handle life’s messy turns. A model that runs smoothly in mild conditions can unravel under genuine stress. Recognizing how the single-minded quest for quarterly gains or budget “efficiency” leaves societies dangerously exposed is the first step toward designing more flexible, human-centered organizations – ones that can evolve rather than collapse when the world refuses to meet their spreadsheets’ expectations.

Stop treating organizations like simple profit machines

Brian Eno, known for layering musical elements into shifting, adaptive compositions, once found common ground with Stafford Beer, the management consultant who championed cybernetics. They both saw that large systems – whether creative or corporate – work best when they juggle multiple inputs instead of obsessing over a single goal. Eno’s generative music evolves through ongoing feedback, and Beer believed institutions should do the same, preserving accountability in the face of complexity. Ignoring vital signals in favor of quarterly profits or rigid targets only leads to short-term wins at the cost of long-term resilience.

One central idea is that corporations shouldn’t behave like single-minded robots maximizing just one metric. When a firm focuses solely on cutting costs or raising quarterly profits, it amplifies certain signals while ignoring everything else. This leads to short-term wins but destabilizes both the organization and the broader economy. One proposed remedy is ending the practice of using unlimited debt and limited liability to fund buyouts. If investors had to shoulder more of a company’s obligations, they would curb reckless borrowing and encourage managers to weigh long-term effects.

These reforms connect to a bigger shift in thinking. Economic theory often tries to optimize a single measure, like growth or profit, chasing one objective at everyone else’s expense. Instead, institutional decisions should follow the model of an artist who balances multiple goals – staying solvent, supporting employees, and innovating. By releasing managers from the iron grip of short-term debt payments, we free them to consider the long term and honor feedback from workers, customers, and local communities.

This mindset also helps build better feedback channels with actual two-way communication. Social media, for instance, can seem chaotic, but it holds the potential for real public input if supported by modern tools that condense huge amounts of data into usable signals. Rather than seeing accountability as personal blame, we can accept that many decisions happen in systems too big for single actors to control. By making sure there are robust “red-handle” alarms to flag intolerable outcomes, we channel public concerns where they can prompt real change. True accountability emerges when systems encourage awareness of varied priorities, not just bottom-line results.

Final summary


The main takeaway of this chapter on The Unaccountability Machine by Dan Davies is that many of today’s organizations rely so heavily on rigid processes and narrow metrics that accountability and adaptability have faded. Everyone is urged to trust “the system,” yet nobody is empowered to correct it when it goes astray.

Restoring responsibility starts with recognizing that data-driven goals shouldn’t silence real human judgment. By limiting reckless debt, creating clear feedback loops, and allowing informed discretion, we can loosen the grip of automated procedures.

These efforts won’t succeed instantly, but each step toward genuine oversight means fewer crises and a more humane balance. With a little determination, institutions can evolve to better serve the people who depend on them.
