3 Uncommon Books Every AI Wonk Should Read

Aarshin Karande
5 min read · Apr 28, 2023


It is a weird time to be an AI Ethicist. Last autumn, I began a part-time M.Phil. at the University of Cambridge’s prestigious AI Ethics & Society program, joining its second cohort. Since then, my social interactions have been colored by unsolicited questions, advice, and dismissals about everything related to artificial intelligence (AI). All this, amidst the tumult of ChatGPT, Bard, and Balenciaga, has made for an exciting several months. Maybe too exciting.

Today, one of the many challenges for AI Ethics (the practice of pursuing and adopting AI ethically) is its status as a convergence point for virtually every discipline that has directly or indirectly contributed to the enigma of “intelligent machines.”

I am among the vagabonds AI Ethics has drawn. Previously, I worked in game design and research, product management, and customer strategy, and studied media ethics, audience psychology, and technology policy. I am skeptical of AI’s virtues, wary of its vices, and hopeful for its possibilities.

AI Ethics is simultaneously enhanced and burdened by its inherent interdisciplinarity. The relevance of many perspectives can lead to both imaginative breakthroughs and participatory paralysis. Moreover, every AI innovator is keen to be viewed as ethical without being bogged down by the lag of ethical practice. There is a lot of ethics-washing surrounding AI and tech more broadly.

There is little question about whether AI Ethics is important; it certainly is. Still, nobody has gotten AI Ethics right yet, or seems eager to, and the recent slew of ethical principles proposed by various governing bodies has already been rendered irrelevant, outpaced by the sheer momentum, advances, and priorities of AI innovation.

Despite this, goodwill and interest abound when it comes to getting AI right, for everyone. Various activist and civil society groups continue to push for social justice-based approaches to AI development that reduce the social inequalities prevalent in tech and society.

On LinkedIn this week, I responded to a post hailing ChatGPT as “free education,” challenging the emancipatory claims prevalent in narratives surrounding AI. Afterward, a connection responded to my post with a great question:

What do you recommend that people consider or read to become better educated on the potential implications and consequences of generative AI and LLMs?

My response follows:

Firstly, when educating ourselves about the potential impacts and consequences of generative AI and LLMs, we would benefit from reading with the intention of ultimately fostering an “Ethical AI-vigilant” culture in our respective workplaces. We have to be more than decision-makers; we must be culture-makers who can practice flexibility and resilience when confronting the tumultuous, ever-changing, and demanding literacy curves of AI innovation.

Secondly, when learning about or developing novel AI innovations, we must ritually ground them in the breadth of economic and political contexts, both locally (where innovations are produced) and globally (whom innovations will and may impact), and develop an instinct for their paradoxes and contradictions (e.g., immense computing power predicated on immense social inequalities). We must fight against the instinct for mono-myths and mono-narratives around AI (e.g., “AI will make everyone more educated”) and instead advocate for the many contradictions this space surfaces (e.g., “AI could ease burdens for the disadvantaged while accelerating the privileged”).

Thirdly, my overall perspective is that we must ground ourselves in the elusive but real political economics (power dynamics and commodifying forces) behind people’s lived experiences with AI. Often, AI is narrated around aspirations and potentials. But what are the tangible ways people are reckoning with AI developments? I would recommend three wonderful readings to instill an overall outlook on the affective dynamics of the political economics surrounding AI. These include:

1. The Ascent of Affect by Ruth Leys

Ruth Leys’ research on the importance of emotions challenges today’s vogue for cognition-centric approaches to humans and tech. Her work on the emotional dynamics of experience demonstrates how information- and behavior-based approaches to data collection are incomplete and, consequently, subvert human dignity.

This book helps us understand how AI systems risk “anatomizing” humans reductively as mere “information compounds.” Leys proves we must be more curious about AI users’ everyday lives.

2. The Costs of Connection by Nick Couldry & Ulises A. Mejias

In their groundbreaking research, Couldry and Mejias coined the term “data colonialism.” This book compellingly interrogates data practices against the historical politics of extraction, subjugation, and exploitation of peoples, environments, and cultures (i.e., “colonialism”).

This book helps us understand how AI innovations are not salvations or cures for history but products of it. It considers the costs of data practices beyond the economic bottom line, such as the psychological, cultural, and political. Couldry and Mejias prove we must be curious about the multidimensional nature and consequences of technology.

3. Making Stories: Law, Literature, Life by Jerome Bruner

Bruner’s incredible book on law and narrative psychology interrogates the role of storytelling in legal decision-making and courtroom performance. He argues that law, as a practice, is determined by convincing narratives (strategic storytelling aimed at securing belief and compliance) rather than moral merit. He beautifully disambiguates ideas surrounding virtue and justice by examining how legal outcomes are at the mercy of storytelling and the psychological inclinations governing it.

This book helps us understand how AI systems are at the mercy of human controls that are themselves narratively determined (e.g., how narrative efficacy and brittle values influence legal reasoning and decisions). Bruner proves we must be curious about the fragility of human systems and decision-making.

AI poses a broad challenge: conjoining the particulars of tech with the generalities of human values, philosophy, and aspirations. Too often, the generalities of tech aspirations ignore the particulars of human experience. I insist on the cause of bridging these gaps. It is certainly not easy, but I have found the aforementioned readings immensely helpful.

The journey toward AI Ethics is long, and its cause is far-reaching. These three books continue to be rich touchstones for me on this perilous path between machine intelligence and human curiosity.

Aarshin Karande writes about AI Ethics & Psychopolitics. He studies at the University of Cambridge — formerly at the London School of Economics, University of Oxford, and UW Bothell. He is also an Indian Classical musician.
