Navigating the AI Conversation, Part Two
How can beginners and non-specialists make sense of the dizzying developments in AI? The first step is to look beyond the headlines and view AI as a structured field of ongoing conversations. Let this post be your guide.

This members-only post [UPDATE: now public!] is a continuation of last week's post, which you can read here.
If you are enjoying what you read in these pages, please recommend Without Why to a friend!
How to Be a Beginner in the AI Conversation
Note: This post is longer than usual. You don't need to read or digest it all in one sitting. I have imagined it as a reference that you can come back to, especially the section called "Mapping the Conversational Space."
Last time, I called out a long-standing feature of the field of AI: the tendency to over-hype results and the corresponding tendency of the news media to eagerly amplify this hype.
We passionate non-specialists and beginners have to exercise great caution as we try to make sense of what's going on in AI. Despite the hype, these technologies may profoundly alter our ways of being-in-the-world.
How can beginners make sense of the dizzying developments? The first step is to look beyond the headlines. We can learn to view AI as a structured field of ongoing conversations, with different voices pursuing varying agendas and dreams.
This post lays out a provisional map of this conversational space. It highlights key voices shaping the conversation. For this approach to work, however, some preliminary steps are necessary. We need to enrich our understanding of the nature of conversation itself.
An Ontology of Conversation
Conversations are not just exchanges of information. Conversations generate the contours of possibility in our lives. In Heidegger's terms, conversations help to disclose our world. But what does this mean?
We are immersed in conversations every day. We listen to conversations about what is going on in the world, in our neighborhood, and in domains we are curious or passionate about (such as AI, politics, music, or climate change). We engage in conversations with our friends, family, and work colleagues in order to create a future together. With loved ones and therapists, we explore our dreams, disappointments, despairs, and delights.
Our conversations condition what we notice and find important in the world; they mold our common sense and our ethical sense; they enact conflict and fortify community.
Conversations shape (and are shaped by) our moods. A conversation can shift us from resignation to resolution, from anxiety to serenity.
In conversations, we enact our identities and coordinate our commitments, tending to what we care about and generating our shared future.
In these comments, I am drawing on Fernando Flores's radicalization of the speech act tradition in the philosophy of language, which he interpreted in light of Heidegger’s ontology. Fernando initially articulated such ideas in his Berkeley PhD thesis (advised by Hubert Dreyfus, Stuart Dreyfus, John Searle, and Ann Markusen) and refined them in his book with Terry Winograd, Understanding Computers and Cognition: A New Foundation for Design (1986), and in subsequent work.

I have worked closely with Fernando to further develop this "ontology of conversation," designing and delivering educational programs with him over the last ten years through his company, Pluralistic Networks. What I share below, including the strategy for navigating conversations rather than imbibing information, is grounded in this experience of collaborating with Fernando.
Reading, Listening, and Learning
As an extension of this expanded understanding of conversation, we need to let go of the belief that reading, listening, and learning are about acquiring new information and knowledge. When we read, listen, and learn about a new domain, we aren't just absorbing information; we're entering into a conversational space in which human beings collectively disclose and shape their reality.
This perspective helps relieve the pressure of thinking that we need to read articles and books exhaustively and linearly from cover to cover. Articles and books are not repositories of information, but moments in a larger conversational whole.
Hence my suggestion above that we beginners and non-specialists come to see the field of AI as a structured nexus of ongoing conversations.
But how is a beginner supposed to orient in this often confusing and cacophonous conversational space?
Now we can dig into the details.
Orient to the Sympathetic Skeptics
Given the tendency toward hype in the AI industry and the media that report on it, I suggest anchoring your perspective first of all in some trustworthy sympathetic skeptics: individuals who believe in AI’s potential, both technically and philosophically, but maintain finely tuned skepticism toward the hyperbole often surrounding developments in the field.
My favorite sympathetic skeptic in today’s AI discourse is Gary Marcus. His newsletter, Marcus on AI, is indispensable. In his mission to deflate the ever-increasing hype bubble surrounding AI, Marcus sometimes turns mocking and sardonic in his newsletter and on social media. But in this he is simply matching the mood of the conversation that is already there.
My advice for beginners is to start with Marcus and branch out into the further reaches of this conversation in light of the perspective he provides.
However, don’t take any voice (including Marcus's) to have the final, complete answer. For us beginners and non-specialists, navigating this conversation isn't about finding definitive answers but about deepening our ability to listen and ask better questions.
Rather than clinging to specific claims, I suggest that you focus on how different voices react to major announcements, identifying patterns in their interpretations and disagreements. This approach helps turn the cacophony of opinions into a map of perspectives, enabling you to stay oriented amidst the tumult of news.
Mapping the Conversational Space
To navigate the AI discourse effectively, it helps to break it into structured subdomains. Below is a map of key voices and perspectives.
Again, the sheer number of voices here will easily become overwhelming if you approach it with the attitude of "a bunch of stuff I have to read and a bunch of information I have to memorize." But this is not the mood in which to approach this conversation. Approach it in the mood of open-ended navigation.
Moreover, what I am offering here is not a simple list, but a structured map of the conversational space, divided into seven overlapping regions.
i. Sympathetic Skeptics: Along with Gary Marcus, I regularly follow a few additional skeptical voices:
- Melanie Mitchell writes a newsletter called AI: A Guide For Thinking Humans (this is also the name of an excellent book by Mitchell). She posts much less frequently than Marcus, but it is always worth hearing what she has to say.
- AI Snake Oil newsletter, by Arvind Narayanan and Sayash Kapoor. They have also written an illuminating book by the same title.
- Ed Zitron's newsletter, Where's Your Ed At, is indispensable. It is actually inaccurate to categorize Zitron as a "sympathetic" skeptic. He is not sympathetic at all, but his voice is a welcome and needed one helping to balance out the tendency toward hype in the AI world.
ii. Academic and Journalistic Commentators: There is a wide range of newsletters with a more journalistic tone that report on ongoing developments in the field. Here are a few I follow. Browse these freely to stay in touch with what is emerging in the conversation:
- Ethan Mollick's newsletter, One Useful Thing
- Zvi Mowshowitz's newsletter, Don't Worry About the Vase
- Casey Newton's newsletter, Platformer
- Shakeel Hashim's newsletter, Transformer
- Jack Clark's newsletter, Import AI (Clark is a co-founder of Anthropic)
iii. AI-Ethics: I also supplement my conversational diet with prominent voices in the conversation about AI-Ethics, notably Timnit Gebru, founder of the DAIR Institute (which hosts a noteworthy blog and podcast), and Emily Bender. The AI ethics debate focuses on the potential harms and biases embedded in AI systems, and in the AI industry as a whole. Gebru's notion (developed with Émile P. Torres) of the "TESCREAL bundle" of dominant AI ideologies (and its connection with the eugenics movements) is an important and disturbing contribution to today's AI conversation. Another voice worth listening to in this corner of the conversation is Kate Crawford.
iv. AI-Safety and AI-Alignment: The AI-safety conversation focuses on how we can maintain control of AI systems that we create, how they might learn to deceive us, and ultimately how they might undermine the very survival of humanity (see the "Doomers" below).
Sometimes the question about AI-Safety is put in these terms: How can we make sure that our AI systems "align" with "human values"? (See the book The Alignment Problem: Machine Learning and Human Values, by Brian Christian.) I have great reservations about both the notion of "alignment" and the notion of "human values" taken for granted in this conversation, an issue I will return to in future posts.
The annoyingly ubiquitous standard example of human-AI misalignment is the so-called "paperclip maximizer" thought experiment, popularized in Nick Bostrom's 2014 book Superintelligence. This thought experiment is the inspiration for the artwork in the photograph at the top of this post. It goes like this (summary by ChatGPT):
Imagine a powerful AI designed to manufacture paperclips. It has one goal: maximize the number of paperclips. If this AI becomes superintelligent and has access to vast resources, it might take extreme measures to achieve its goal, including:
- Consuming all available materials – It might convert all of Earth’s resources (including humans) into paperclips or machines that make paperclips.
- Preventing interference – It could resist being shut down, seeing any attempt at reprogramming as a threat to its goal.
- Expanding beyond Earth – If it has the capability, it may spread across the universe to turn more matter into paperclips.
Rather than "paperclip maximizer," I prefer Bostrom's own terminology for describing this situation. It involves a system's perverse instantiation of the goal its programmers gave it. Perverse instantiation is a hilarious phrase, but I digress.
Now, besides Bostrom himself, I'll mention one further voice and a couple more resources in the AI-Safety conversation.
- Dan Hendrycks runs an organization called the Center for AI Safety. He has published an open-access introduction to the field, a book called Introduction to AI Safety, Ethics, and Society. This book is so brutally boring I'm convinced an AI designed to neutralize human curiosity must have actually written it. Nevertheless, it is a useful overview of the current concerns of this field.
- An animated discussion of issues in AI-Safety continually takes place on the pages of an online forum called LessWrong.
v. Doomers: Some voices in the AI-Safety conversation are convinced that this technology poses a lethal threat to humanity. It is worth being aware of this corner of the conversation, even though I regard it as mostly a distraction. One prominent voice here is Eliezer Yudkowsky, founder of LessWrong and of MIRI, a research institute in Berkeley. In recent years, Geoffrey Hinton has also come out as a reluctant doomer. Look out for these guys in various interviews and speeches you can find on YouTube or in podcasts.
vi. Accelerationists: Additionally, it is important to follow a few of the unrestrained boosters, or self-proclaimed "accelerationists" of AI. One must always take their comments with huge heaps of salt, bearing in mind their vested interests, positions of power, and strange techno-religious-utopian fervor.
- Marc Andreessen writes a newsletter and frequently posts on social media.
- Ray Kurzweil introduced the quasi-religious doctrine of an impending "singularity": a future where humans merge with AI to transcend biological limits. (I highly recommend Becca Rothfeld's brutal and funny review of Kurzweil's latest book on this topic.)
- Sam Altman, CEO of OpenAI, posts on social media and frequently gives interviews in the press.
vii. Further Industry Voices: Finally, here are some names you should put on your list of people to watch for on YouTube and podcast interviews. Most of these people post on social media and are worth following. Here too, most have vested interests and positions of power that we should factor into our listening:
- Francois Chollet (designer of the ARC-AGI benchmark, and a consistently articulate and illuminating voice)
- Demis Hassabis (Google DeepMind)
- Dario Amodei (Anthropic)
- Fei-Fei Li (Stanford)
- Yann LeCun (NYU & Meta)
- Stuart Russell (Berkeley)
Podcasts: Here are a few podcasts to put in your playlist:
- Your Undivided Attention, by the Center for Humane Technology.
- Hard Fork Podcast (by Kevin Roose of the NYT and Casey Newton of Platformer; breezy and fun, but not very deep)
- Dwarkesh Podcast
- Mystery AI Hype Theater 3000 (Emily Bender and Alex Hanna, of DAIR)
Artificial Intelligence and Human Care
Above all, it is important to maintain a historical perspective. Today’s breakthroughs didn’t arise from nowhere. They’re built on decades of research, with both advances and failures, growth in computational power, and even fundamental shifts in how we conceptualize and explain intelligence itself.
The neural network approach that is proving so powerful today was marginalized and denigrated for decades by the experts previously dominating the field. How can we be so sure that the experts of today are the ones who have finally figured everything out? We shouldn't be. Unimagined revolutions in our understanding of intelligence and computation will likely continue to emerge in the decades ahead. Yann LeCun, for example, has said that "LLMs are useful, but they are an off ramp on the road to human-level AI."
We beginners and non-specialists should resist any felt need to form immediate opinions about what each new AI capability means. Let's also get out of the habit of relating to news about AI in terms of whether we "like it" or "don't like it." These are such narrow ways of listening.
Instead, let us cultivate a stance of abiding wonder. Isn't it amazing and troubling to be alive today? What possibilities, promising and threatening, might these emerging technologies open up? What aspects of intelligence, creativity, and care do they amplify or hinder? What new ways of being-in-the-world and being-with-machines-on-the-planet are emerging?
Let us also remember that AI is a tool, for now. The magic in AI comes out in how we interact with it.
Of course, many people working in the field of AI also think of themselves as advancing towards a new kind of being or agent, e.g., a "superintelligence." Such a being, the story goes, eventually won't be a mere tool of ours. This synthetic god could take control of our planet and use us as its tool, rapturing us towards our own ultimate salvation or annihilation.
I find this train of thought to be mostly a distraction. It diverts us from the real work that we need to do in imagining and designing how we will interact and collaborate with the more mundane AI-powered tools and agents that are arising today.
If Heidegger is right that the most important aspect of human existence is our distinctive capacity to care, then this suggests a direction for our future inquiries. Rather than fixating on "values," doomsday scenarios, or far-fetched fantasies of human obsolescence, we should focus on how AI impacts our capacity to care.
How are these new AI technologies going to reshape our capacity to care? Will our capacity to care increasingly be offloaded to machines? What practices do we need to put in place to revitalize and expand our capacities to care in the age of planetary AI?
We will continue to investigate these concerns in future posts.
Now my question to you: What voices and conversations around AI development have you found particularly illuminating? I'd love to hear your thoughts in the comments.
Please share Without Why with a friend if you find it worthwhile.