
AI Snake Oil

by Arvind Narayanan and Sayash Kapoor

What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference

Published: October 31, 2024

Book Summary

This is a comprehensive summary of AI Snake Oil by Arvind Narayanan and Sayash Kapoor. The book explores what artificial intelligence can do, what it can’t, and how to tell the difference.

What’s in it for me? Uncover the myths and misconceptions of AI.

Introduction

Artificial intelligence – or AI – is transforming the world, promising to revolutionize industries, reshape economies, and change how we live. As these technologies become increasingly integrated into our day-to-day lives, understanding their limitations, in addition to their potential, grows ever more crucial.

For AI to truly benefit society, we need to navigate these issues carefully. Artificial intelligence offers many exciting possibilities, from producing creative content to automating complex decisions to making the internet a safer space. Equally, AI raises serious concerns about ethics, inequality, and privacy, evident in systems that miss nuance, deepen existing biases, and are misused by those focused solely on profit.

In this chapter, you’ll explore three distinct types of AI – generative, predictive, and content moderation – and the myths and misconceptions associated with each. You’ll then learn about the steps we need to take to ensure that artificial intelligence complements, not competes with, human intelligence in a way that promotes the public good.

Ready to dive in? Let’s get started!

Generative AI

Generative AI, which creates media such as text, images, and video, is quickly becoming a part of daily life. Although still in its relative infancy, it is already reshaping culture and the economy. Yet its effects are mixed, with significant contributions in some areas but equally significant concerns in others.

In terms of accessibility, generative AI has shown great promise. For instance, Be My Eyes, an app for people with visual impairments, uses AI to describe images, helping users better understand and navigate their surroundings. While the AI feature doesn’t match the accuracy – or social benefit – of human volunteers, its constant availability makes it a valuable tool nonetheless.

For those of us whose first direct encounter with generative AI came via ChatGPT or Midjourney, the technology’s recent rise might seem sudden. In fact, generative AI’s roots stretch back decades. Today’s popular applications, such as the prominent chatbots and image generators, share the same fundamental algorithms, differing mainly in their data and architecture. Image generators, for instance, typically use diffusion models, which transform random noise into coherent visuals by learning from extensive datasets. This is also where the trouble begins: the large-scale use of copyrighted images without permission raises ethical concerns about creative ownership.
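To make the diffusion process concrete, here is a minimal, illustrative sketch of the generation loop: start from pure random noise, then repeatedly apply a learned denoising step until an image emerges. The `denoise_step` function is a hypothetical stand-in for a trained neural network, not any real library’s API.

```python
import numpy as np

def toy_reverse_diffusion(denoise_step, shape, num_steps=50, seed=0):
    """Illustrative reverse-diffusion loop: begin with random noise and
    repeatedly apply a denoising step until a coherent image emerges."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)        # start from pure noise
    for t in reversed(range(num_steps)):  # walk the noise schedule backwards
        x = denoise_step(x, t)            # each step removes a little noise
    return x

# A trained model would predict and subtract the noise at each step; this
# placeholder, which merely shrinks values toward zero, keeps the demo runnable.
image = toy_reverse_diffusion(lambda x, t: 0.95 * x, shape=(8, 8))
```

The learning happens elsewhere: the denoiser is fit on enormous image datasets to undo artificially added noise, which is exactly where the copyright concerns above arise.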

One key issue in this context is the unregulated use of artists’ work. Image generator companies often train AI on billions of pieces of online content without crediting or compensating creators, exploiting loopholes in outdated copyright laws. Understandably, many artists fear that AI-generated content could replace human-made art in routine tasks. This has led to passionate advocacy for more ethical practices, such as consent-seeking and fair compensation.

Privacy risks also emerge as AI’s capabilities expand. While some AI tools, like predictive models – as we’ll explore in the next section – struggle with accuracy, image classification works exceptionally well, making it efficient and effective for surveillance. The same AI technologies used for object recognition can also be applied to monitoring people, raising serious concerns about potential privacy violations by state actors and private individuals alike.

Chatbots present yet more challenges. Despite their sophisticated and wholly convincing responses, chatbots actually generate text by predicting word sequences, not by understanding context. This makes them prone to producing plausible but inaccurate statements, leaving them largely unreliable for tasks requiring factual accuracy.
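A rough sketch of this word-by-word generation loop, with a toy stand-in for the language model, makes the limitation visible: nothing in the loop checks facts; it only samples whatever continuation looks statistically likely.

```python
import random

def generate(model, prompt_tokens, max_new_tokens=10, seed=0):
    """Illustrative next-token loop: extend the text one token at a time by
    sampling from the model's probability distribution over continuations."""
    rng = random.Random(seed)
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)  # distribution over candidate next tokens
        next_token = rng.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(next_token)  # no step here verifies factual accuracy
    return tokens

# Hypothetical stand-in model: always returns the same fixed distribution.
toy_model = lambda tokens: {"plausible": 0.6, "fluent": 0.3, "unverified": 0.1}
print(" ".join(generate(toy_model, ["chatbots", "sound", "confident"])))
```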

Finally, it is crucial to highlight that developing generative AI involves labor-intensive data labeling, often outsourced to countries outside North America and Europe, where companies can get away with paying low wages for heavy workloads. Going forward, stronger protections and fair labor practices will be critical to the sustainability and equity of these digital technologies.

Clearly, as generative AI continues to evolve, it will bring both opportunities and risks to the fore. Its potential is undoubtedly vast, but addressing its ethical, legal, and social implications is essential to ensuring it serves society’s best interests while minimizing harm.

Predictive AI

Humans have long been drawn to the possibility of predicting the future, whether by consulting oracles in ancient times or visiting fortune-tellers. Today, predictive AI has become the modern-day tool for forecasting, offering to analyze data and predict outcomes. However, many claims about its effectiveness are overhyped, and predictive AI is not without serious limitations.

One key issue is that sound predictions don’t always lead to sound decisions. AI models often fail to account for how their own choices change the circumstances they predict. Randomized controlled trials, for instance, remain essential in fields like medicine, despite their high cost and time demands, because they provide rigorous evidence about the cause-and-effect relationships of different interventions. Predictive AI tends to skip this critical step, focusing solely on past data to generate present-day predictions. This lack of real-world, real-time testing means the resulting decisions may not be as effective as promised, especially when the AI is deployed in a new context.

Another concern is how easily predictive AI systems can be gamed. Because these models rely on what has worked historically, they often miss what actually needs to be measured. In hiring, for example, AI may prioritize superficial qualities in résumés over meaningful indicators of a candidate’s fit and potential. Job seekers then feel compelled to manipulate their applications, engaging in guesswork that takes them further from presenting their genuine selves.

There is also a significant risk of over-reliance on AI, often referred to as automation bias. Predictive AI is generally marketed as a means of cutting costs and fully automating decisions, removing the need for human involvement. Yet when the AI fails or makes a poor decision, companies frequently shirk responsibility, claiming that human oversight should have been in place.

Predictive AI models are also limited by the data on which they are trained. These models can perform well when tested on the same population they were trained on, but their accuracy falters when they are applied to different groups. For example, an AI system trained on data from one country or industry may produce poor results when applied to another with contrasting characteristics. This issue is particularly concerning when predictive AI is used in high-stakes contexts, such as healthcare or law enforcement, where inaccurate predictions can seriously affect individuals from underrepresented populations.
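A toy simulation illustrates the mechanism. In this sketch, a simple threshold rule stands in for a predictive model: it is fit to one synthetic population and then evaluated on a shifted one. All numbers are invented for illustration.

```python
import numpy as np

def accuracy_under_shift(seed=0):
    """Illustrative distribution shift: a decision threshold tuned on one
    population misfires on another whose feature distribution differs."""
    rng = np.random.default_rng(seed)
    # Training population: class 0 centered at 0, class 1 centered at 2.
    x_train = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])
    y_train = np.array([0] * 500 + [1] * 500)
    threshold = 1.0  # boundary halfway between the training class means

    # New population: both classes shifted upward, so the old boundary fails.
    x_new = np.concatenate([rng.normal(1.5, 1, 500), rng.normal(3.5, 1, 500)])
    y_new = np.array([0] * 500 + [1] * 500)

    accuracy = lambda x, y: np.mean((x > threshold).astype(int) == y)
    return accuracy(x_train, y_train), accuracy(x_new, y_new)

train_acc, shifted_acc = accuracy_under_shift()
print(f"same population: {train_acc:.0%}, new population: {shifted_acc:.0%}")
```

Accuracy falls from roughly 84 percent to roughly 65 percent, even though nothing about the model changed – only the population it was applied to.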

In fact, predictive AI tends to exacerbate inequality across the board. Since these systems rely on past data, they often replicate the biases and inequalities present in that data. Unfortunately, when predictive AI is deployed, the most vulnerable groups are frequently the first to experience the negative ramifications.

The widespread appeal of predictive AI stems, in part, from a seemingly innate human discomfort with randomness. People have been drawn to the idea of controlling the future since the beginning of time, and predictive AI provides the illusion of certainty. But the reality is that many outcomes are inherently unpredictable. Accepting this uncertainty, rather than desperately grasping at flawed predictions, will lead to better decision-making and outcomes for all. And if we must keep chasing a crystal ball, we should move beyond treating individuals as fixed and their futures as predetermined, and instead design models that recognize and adapt to the complexity and randomness of life.

Content Moderation AI

Content moderation has become one of the linchpins of social media platforms. While the technical features of these platforms are easy to replicate, how they manage user content defines their uniqueness.

With millions of posts generated every day, AI seems like the ideal tool to handle moderation – able to apply rules consistently without fatigue or loss of focus. In fact, AI already plays a prominent role in content moderation. However, despite its potential, AI faces nontrivial challenges that curb its effectiveness.

Most social media platforms rely on AI to scan posts as soon as they are published, checking for violations of community guidelines on issues like hate speech, pornography, or violence. If flagged, content can be quietly hidden from view, removed altogether, or labeled with a warning. While content moderation AI undoubtedly helps manage the massive volume of posts, it is a far-from-perfect system.
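As a sketch of the flag-then-act step, consider the following; the categories, thresholds, and actions are invented for illustration and don’t reflect any real platform’s policy.

```python
# Hypothetical moderation pipeline: an upstream classifier assigns each post
# a violation score per policy category; the worst score determines the action.
def moderate(scores: dict[str, float]) -> str:
    worst = max(scores.values())
    if worst >= 0.95:
        return "remove"    # near-certain violation: take the post down
    if worst >= 0.80:
        return "label"     # likely violation: attach a warning label
    if worst >= 0.60:
        return "downrank"  # borderline: quietly reduce its visibility
    return "publish"

print(moderate({"hate_speech": 0.10, "violence": 0.85}))  # -> label
```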

One major limitation is AI’s inability to appreciate context and nuance. Human moderators can interpret content within its social or cultural context, but AI tends to interpret things too literally. For example, AI often misjudges the use of reclaimed slurs or discussions about harmful content, flagging posts even when their intent is to empower or condemn. Although content moderation AI has improved over time, companies have yet to invest adequately in systems that make accurate, context-sensitive decisions.

Another issue is cultural competence. Effective content moderation requires understanding not just the language but the cultural context of the region. However, when platforms don’t have enough moderators fluent in local languages, they rely on AI for automatic translation. To be sure, translation technology is materially better today than it was even a decade ago, but it is still not attuned enough to make accurate judgments about culturally sensitive content. Even if the language were perfectly translated, a lack of understanding of local norms would still result in poor moderation decisions.

AI also struggles to adapt to the constantly changing nature of the online landscape. Social media platforms use two main AI techniques: fingerprint matching to detect copies of banned content, and machine learning to identify patterns in new content. However, as content types, social norms, and moderation policies evolve, these systems need constant retraining to stay relevant. This retraining requires significant time and human effort, making it difficult for the models to keep up with rapid changes.
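The fingerprint-matching half is the simpler of the two techniques, and a rough sketch conveys the idea. In practice, platforms use perceptual hashes that survive re-encoding and cropping; the plain cryptographic hash below, chosen only to keep the sketch self-contained, catches byte-identical copies and nothing else – which hints at why these systems need constant upkeep.

```python
import hashlib

# Known banned items are stored as fingerprints (here, SHA-256 digests);
# each new upload is hashed and checked against that set.
banned_fingerprints = {
    hashlib.sha256(item).hexdigest()
    for item in [b"known banned image bytes", b"known banned video bytes"]
}

def is_known_banned(upload: bytes) -> bool:
    return hashlib.sha256(upload).hexdigest() in banned_fingerprints

print(is_known_banned(b"known banned image bytes"))   # True: exact copy
print(is_known_banned(b"slightly re-encoded bytes"))  # False: the hash changes
```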

Regulatory pressures add yet another layer of complexity. To avoid legal risks, platforms often over-moderate, removing more content than necessary. This process – termed collateral censorship – happens because platforms prioritize protecting themselves from liability over bearing the human and technological costs required to evaluate every instance of borderline content. Try as platforms might to limit their efforts to clearly harmful misinformation, the potential for overreach remains.

Content moderation AI faces further limitations when it comes to the broader issues of policymaking. Social media platforms are key players in shaping public discourse, and their policies – and the implications of those policies – are often hotly debated. Such debates are important and necessary, but they are inherently human and political. As a result, decisions made solely by AI will never be fully satisfactory.

With all that said, the limitations of content moderation AI highlight a broader truth: the challenges it faces are not purely technological but societal. While AI can aid certain aspects of moderation, it falls short in areas requiring nuanced human judgment, cultural understanding, and adaptability. Ultimately, addressing these complex challenges will require a combination of artificial and human intelligence to create effective and fair content moderation systems.

The Path Forward

AI is indelibly reshaping society, but its trajectory is not yet set in stone. We still have the time and power to guide its development to prioritize humanity’s interests. However, to achieve this, we need to seriously rethink how AI is being integrated, regulated, and applied across different domains.

Generative AI will likely transition from standalone tools like chatbots to becoming a core part of digital infrastructure. Nevertheless, as companies like Anthropic, Google, and OpenAI limit the sharing of research for competitive reasons, the risk grows that AI advancements will become exclusive and profit-driven. To prevent this, we need to push for AI development that emphasizes openness and benefits society at large.

Predictive AI is often embraced by struggling institutions looking for efficiency gains, such as hiring pipelines or criminal justice systems. While AI may seem like a solution, it frequently fails to address deeper institutional issues. Our modern obsession with maximum efficiency can obscure the need for more nuanced, human-centered decision-making. Shifting away from rigid optimization could open up possibilities for simpler, more transparent approaches that balance ethical and practical considerations.

In general, regulation – and its enforcement – will be critical to ensuring AI is used responsibly going forward. Despite the perception that AI requires an entirely new rulebook, existing regulatory frameworks provide a solid foundation for managing its risks. Strengthening regulatory agencies with better funding and resources will help them adapt to the fast-moving AI landscape and prevent regulatory capture, in which large companies manipulate regulations to serve their own interests. Crucially, future regulation should be adaptable and proactive, not just reactive, ensuring it can match the pace at which artificial intelligence and digital technologies are advancing.

Finally, when it comes to threats to human employment, AI’s impact is likely to mirror previous automation trends. While some roles will inevitably see reduced demand, automation has rarely eliminated entire job categories. Instead, it typically reshapes tasks, creates new roles, and shifts job demands. To address the economic impacts, policies like a “robot tax” on companies that benefit from automation could help incentivize retaining a human workforce. However, it’s important to acknowledge that labor challenges and inequalities existed long before AI, and resolving their root causes will require broader systemic changes.

Addressing AI’s impact on society will involve more than just technical fixes. It will require confronting the incentives that drive its misuse, implementing informed and flexible regulations, and taking a proactive approach to labor issues. By doing so, we can shape AI to serve humanity’s interests, making it a force for positive change rather than a source of unprecedented problems.

Final Summary

Conclusion

In this chapter on AI Snake Oil by Arvind Narayanan and Sayash Kapoor, you’ve learned that…

Today, more than ever, it is critical that we take a clear-eyed, evidence-based view when assessing AI’s capabilities.

The hype and hysteria surrounding artificial intelligence often lead to exaggerated claims, making it easy to overlook the technology’s significant limitations and potential risks. Many high-profile technologies claim to work magic but often fail to work at all. We needn’t reject artificial intelligence wholesale, but we do need to be savvy in distinguishing fact from fiction.

Embracing AI’s limitations can, in fact, be tremendously empowering. Doing so enables developers, policymakers, and everyday users to channel their efforts into areas where AI truly adds value while steering clear of misguided applications that can be more harmful than helpful.

With a grounded and well-informed perspective, we can sculpt AI’s role in enhancing human skills, solving meaningful problems, and building tools that genuinely improve lives. In this future, we maximize the technology’s strengths while remaining rooted in reality. In this future, artificial intelligence complements, rather than competes with, human intelligence.
