Politics · 15 min read
Atlas of AI
by Kate Crawford
Power, Politics, and the Planetary Costs of Artificial Intelligence
Published: May 26, 2024
3.4 (129 ratings)
Book Summary
This is a comprehensive summary of “Atlas of AI” by Kate Crawford. The book explores power, politics, and the planetary costs of artificial intelligence.
What's in it for me? Uncover the hidden costs of artificial intelligence.#
Introduction
Kate Crawford, Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence.
Picture this: a vast network of mines, factories and data centres stretching across the globe, from the Nevada desert to the mountains of Inner Mongolia.
This is the hidden infrastructure of artificial intelligence, a technological system built on the extraction of minerals, data and human labour.
In this chapter, we take a critical look at the true costs and ethical implications of the AI revolution.
By tracing the complex web that undergirds the development of AI systems, we reveal the often-overlooked material realities of this seemingly abstract technology.
So, if you're ready to pull back the curtain on how our AI systems are currently being made possible, let's begin.
Artificial Hype#
What does a German horse have to do with artificial intelligence?
Meet Clever Hans.
In the late 19th century, this particular horse, an Orlov Trotter to be precise, captivated audiences across Europe with his astonishing intelligence.
Clever Hans could tell time, identify the correct date, differentiate musical tones and even solve math problems, tapping out the correct answers with his hoof.
Or so it seemed.
The truth behind Clever Hans's apparent intelligence was revealed through careful investigation by the psychologist Oskar Pfungst.
He discovered that the horse was not actually reasoning independently but was instead responding to subtle, unintentional cues from his questioners.
These cues, such as changes in posture, breathing and facial expressions, would unconsciously signal to Hans when he had reached the correct answer.
This phenomenon, now known as the observer-expectancy effect, or indeed the Clever Hans effect, demonstrates how easily experimenters' biases can influence their subjects, leading to flawed conclusions.
The story of Clever Hans serves as a cautionary tale, illustrating the dangers of anthropomorphizing non-human entities and the importance of recognizing the influence of our own biases.
Proponents of artificial intelligence believe that human intelligence can be formalized and reproduced by machines.
But the author argues that this thinking is fundamentally flawed.
AI systems are not intelligent in the ways we often imagine.
They are not capable of autonomous reasoning or understanding.
Instead, they rely on extensive training with large datasets and predefined rules to perform specific tasks.
Their outputs are ultimately shaped by the biases and goals of their human creators.
On top of this, they lack the contextual awareness, fluidity and adaptability that characterize human intelligence.
The Material Roots of Artificial Intelligence#
In the heart of the Nevada desert sits an unassuming town called Silver Peak.
Silver Peak sits on the edge of a vast lithium deposit, a crucial component in the batteries that power our smartphones, laptops and electric vehicles.
This small mining community, with its evaporation ponds shimmering an eerie green, is just one of the many hidden places that form the backbone of the artificial intelligence industry.
As you explore the true origins of AI, you discover a complex web of extraction, exploitation and environmental destruction stretching across the globe.
From the rare-earth mines of Inner Mongolia to the tin-rich islands of Indonesia, an intricate network of supply chains feeds the minerals and metals essential to modern computing.
The human cost of this extraction is significant, from the unregulated and dangerous working conditions faced by miners to the displacement of local communities and the destruction of fragile ecosystems.
The current AI boom shares striking parallels with previous eras of resource extraction.
In the 19th century, the growth of cities like San Francisco was fueled by wealth generated by gold and silver mining in territories seized from Mexico, a process that forcibly displaced thousands of people from their homes.
Just as the true costs of that extraction were hidden from view, the environmental and human toll of the AI industry remains largely invisible to the public.
The giants of big tech, with their sleek campuses and billion-dollar valuations, keep themselves far removed from the scarred landscapes and impoverished communities that make their success possible.
But extraction doesn't end with minerals and metals.
The AI industry is also built on the exploitation of human labour.
This includes the low-paid workers who label the vast datasets used to train machine-learning algorithms, as well as the factory employees who toil under harsh conditions to assemble our electronic devices.
Even the energy that powers the industry's data centres, which together consume more electricity than entire countries, is often generated by burning fossil fuels, contributing to the global climate crisis.
Despite the tech industry's promises of a cleaner, greener future, the shift to AI is only accelerating the environmental destruction and social inequality that have long been the hidden costs of progress.
The myth of clean tech obscures the reality of a system that is fundamentally unsustainable, relying on the continuous extraction of finite resources and the exploitation of human labour.
As we become increasingly dependent on AI, from virtual assistants to the algorithms that shape our social and political lives, we will need to confront the true costs of this technology.
A radical rethinking of our relationship with computing is necessary, one that prioritizes sustainability, equity and social justice over the relentless pursuit of growth and profit.
The Data Gold Rush#
Imagine a world where your every move, word and facial expression is quietly harvested and fed into massive databases.
Welcome to another side of the AI revolution, where the tech industry's appetite for data knows no bounds.
As AI has exploded in recent years, so too has the scramble to amass the enormous training datasets needed to fuel machine-learning models.
But where does all this data come from?
More often than not, it's scraped from the internet without the knowledge or consent of those concerned.
Whether in the form of videos, images or text, this corpus of human expression is treated as little more than raw material to be mined and refined, stripped of its context and meaning.
The extractive logic that now pervades the field of artificial intelligence has deep roots.
From the early days of speech- and facial-recognition research, the hunt was on for training data.
IBM's speech team in the 1980s scoured legal transcripts and other documents to build early language models.
The US government's face-recognition technology program in the 1990s created a dataset of facial photographs with the explicit aim of developing facial-recognition systems for law enforcement and surveillance.
But it was the explosive growth of the internet that truly kicked the data gold rush into high gear.
Suddenly, the web offered an almost limitless supply of images and text ripe for the taking.
One of the most influential datasets to emerge from this era was ImageNet, created in 2009.
The ImageNet team scraped over 14 million photos from online sources and employed legions of crowd workers to categorize them.
Troublingly, the dataset contained all manner of offensive and prejudiced labels.
But ImageNet set a powerful precedent.
Training data came to be seen as something to be acquired by any means necessary, even if that meant ignoring issues of privacy and consent.
Today, the author argues, this mentality is deeply entrenched in the culture and incentive structures of the tech industry.
Metaphors abound comparing data to oil, a natural resource to be extracted and exploited.
The competitive pressure to build ever larger and more sophisticated AI systems has unleashed an all-out arms race to capture as much data as possible.
Lost in this frenzy is any serious reckoning with the ethical implications.
Many university review boards deem machine-learning research exempt from the oversight usually applied to human-subject experiments.
Datasets rife with errors and biases are routinely used to train AI systems.
These risk having real-world impacts as AI systems are deployed in areas ranging from predictive policing to automated hiring.
Tech giants now control huge troves of data extracted from the public commons, data generated by and about all of us, but whose value is siphoned away from the public sphere and into private coffers.
The pervasive data-collection and surveillance regimes erected in the name of AI advancement threaten to erode our privacy and autonomy.
The path forward requires a fundamental rethinking of the extractive mindset that has taken hold in AI.
We need a new paradigm, one that prioritises transparency, accountability and respect for personal dignity over the impulse to harvest data at any cost.
Only then can we build an AI ecosystem that genuinely serves the public good rather than concentrating power and wealth in the hands of a few.
The Politics of Classification#
Imagine a room filled with 500 human skulls, each carefully measured, labelled and catalogued.
This macabre collection, amassed by the 19th-century physician and natural historian Samuel Morton, was used to promote the pseudoscientific notion that intelligence and character could be determined by skull size and shape.
Morton's work, which claimed that white people had the largest skulls and were therefore the most intelligent race, was hailed as objective science and used to justify slavery and racial segregation.
This story illustrates how classification systems can encode and perpetuate power imbalances and social inequalities.
With the rise of machine learning, this phenomenon has taken on new urgency.
As AI systems are trained on vast datasets and tasked with classifying everything from objects to human beings, we are seeing the growth of a new politics of categorisation.
Let's look again at ImageNet, the image database widely used to train computer-vision models.
ImageNet's images are organised according to a complex hierarchy of noun categories derived from WordNet, a lexical database of the English language.
The dataset's creators selected a subset of WordNet's classifications and tasked online workers with populating these categories with relevant images sourced from search engines and online repositories.
While ImageNet has been instrumental in advancing the field of computer vision, its labyrinthine taxonomy has come under scrutiny for the biases and assumptions it encodes.
Among the thousands of categories used to classify images are many that attempt to judge people's character, morality and worth based solely on their appearance.
Women are reduced to derogatory labels like "kleptomaniac", "slut" or "wanton", perpetuating harmful stereotypes and reinforcing gender-based discrimination.
Similarly, people of colour are subjected to a range of offensive and racist classifications.
The construction of race and gender designations themselves is another concern in AI systems.
Training datasets like UTKFace treat age, gender and race as fixed, objective qualities rather than the fluid, socially constructed concepts they are.
Such essentialist thinking has a long and troubling history.
By building this classificatory logic into AI, we risk perpetuating harm and constraining the range of identities and experiences that are deemed valid or normal.
As we grapple with the ethical implications of AI, it's crucial to recognise that justice cannot be achieved through technical fixes alone.
Striving for more diverse or inclusive datasets fails to address the underlying power dynamics at play.
Instead, we must interrogate the very act of classification itself and ask who benefits from these systems and who is harmed.
This requires a fundamental shift in how we approach the design and deployment of AI, one that prioritises transparency, accountability and the lived experiences of those most impacted by these technologies.
The main takeaway of this chapter of Atlas of AI by Kate Crawford is that the development and deployment of artificial intelligence systems are not purely technical endeavours but are deeply entangled with issues of power, politics and ethics.
Final Summary#
Conclusion
By tracing the material realities and hidden costs of AI, from the extraction of minerals and data to the exploitation of human labour and the erosion of privacy, we are challenged to look beyond the hype and confront the complex implications of this transformative technology.
Okay, that's it for this chapter.
We hope you enjoyed it.
If you can, please take the time to leave us a rating.
We always appreciate your feedback.
See you in the next chapter.