Navigating the AI Conversation, Part One

We denizens of the 21st century have been thrown into the age of planetary AI. What kind of human beings are we going to become? How can we learn to make sense of our historical moment?

Image generated with Dall-E using iterations of the prompt: "Climbing a tree is not making progress toward the moon, in the style of an austere medieval tapestry."

This is part one of a two-part post. Part two will be a members-only post. I will publish it next week.

This two-part post is different from my usual installments. I want to share my practice, as a passionate observer, for staying oriented in today's accelerated conversation about AI. Today's installment provides a general overview; part two maps the conversational space and shares my strategy for navigating it.


AI in an Ontological Perspective

Our tools and technologies directly shape how we relate to the world. They influence how we communicate, how we form goals and project our identities, how we think, act, and dream. In the terms of Heidegger's Being and Time, which we have been studying in a parallel series of posts (e.g., here, here, here), what we are as human beings (or what he calls "Dasein") is partially determined by the "ready-to-hand" tools that we habitually use in our everyday lives.

Our tools open up our horizons of possibility. In designing new tools that change our ways of living, we are designing new ways of being human.

Another of Heidegger's important observations is that we never choose the moment of history we are born into. Nor do we choose the country, family, body, or range of technologies we find ourselves with. We are thrown into the world.

We denizens of the 21st century have been thrown into the age of planetary AI. What kind of human beings are we going to become? How can we learn to make sense of our historical moment? Today we are living through what could be one of the most momentous ontological transformations of human existence ever.


The past few weeks have brought a flurry of developments in AI. The first key event was the release of a dazzling new LLM chatbot by a Chinese startup called DeepSeek. Their "R1" model sent shockwaves through the AI industry.

It performs at the level of ChatGPT and Claude, yet it was reportedly somewhere between 10x and 50x cheaper to train. Moreover, R1 doesn’t rely on the cutting-edge Nvidia chips that U.S. export controls have blocked from reaching China. As a result, Nvidia saw a 17% drop (nearly $600 billion) in its stock value earlier this week—reportedly the largest single-day loss for any company in U.S. history.

Second, at the beginning of January, OpenAI unveiled its new o3 “reasoning model.” In its "high-compute mode," it scored an astounding 87.5% on the rigorous ARC-AGI benchmark test for machine intelligence, compared to GPT-4o's mere 5% last year.

It’s easy to get lost in the rapid pace of AI advancements. Many people whose careers and livelihoods aren’t directly tied to these changes opt out of the conversation entirely, waiting for major headlines to break. The problem is that these headline-grabbing developments are often reported through a distorting fog of hype.

Worse, waiting to learn about AI only when it is hyped in the news delays and waters down our understanding of how these tools are reshaping how we live, work, and communicate.

If we rely on such reactive engagement we’ll find ourselves swimming against a tide of hype while scrambling to catch up with the waves of change happening to the world. By then, the opportunity to surf those waves as they arrive will have passed.


Conversations generate our world. They are not just exchanges of "information" about our world. This is a perspective I learned through my collaboration with Fernando Flores. It reflects his radicalization of the speech act tradition in the philosophy of language.

From this vantage point, I'd like to suggest a more careful approach to following AI developments. This approach focuses on listening to trustworthy voices and tracing meaningful patterns in the public conversation about AI, rather than following the inept guidance of "breaking news."

First, you need to find trustworthy voices in the conversation. I recommend prioritizing the “sympathetic skeptics”: individuals who believe in AI’s potential, both technically and philosophically, but maintain finely tuned skepticism toward the hyperbole often surrounding developments in the field. I will say more about my preferred voices in part two next week.


A Brief History of AI's Hype Problem

Caution is necessary. From its inception, the AI field has been prone to hype and over-promising, a trend Hubert Dreyfus skewered in his classic critique, Alchemy and Artificial Intelligence (1965). This was a report for the RAND Corporation, where his brother, Stuart, worked at the time.

Dreyfus expanded on this report in his book What Computers Can’t Do (1972, reissued in 1992 as What Computers Still Can’t Do). This book again highlighted the industry’s tendency to declare human-level intelligence just around the corner with each new development, even while taking for granted an astonishingly narrow and untenable conception of what intelligence actually is.

Hubert and Stuart then further developed this critique in Mind Over Machine: The Power of Human Intuition in the Era of the Computer (1986/1988). Here, they also drew upon their theory of skill acquisition to reveal the kind of embodied intelligence and intuition that the computer systems of the time could never achieve.

The propensity for grandiose claims is behind Hubert's comparison of early AI research to alchemy: "Like the alchemists trying to turn lead into gold...AI had fancy equipment, a few flashy demos, and desperately eager patrons, but they simply had not discovered the right approach to the problem" (Mind Over Machine, p. 8, 1988 edition).

Stuart had his own sardonic way of criticizing the early AI researchers' proclivity for overpromising. For Stuart, believing that the AI technology of the time was making a step toward genuine intelligence was like believing "that someone climbing a tree is making progress toward reaching the moon" (Mind Over Machine, p. 10).

The Dreyfuses specifically criticized the early "symbolic" or "rules-based" approach to AI, which dominated the field for its first few decades. They were not addressing the currently regnant and far more powerful neural-network approach that uses reinforcement learning and that is, to an extent, compatible with the Dreyfus Model of Skill Acquisition.

This is not to say the Dreyfus critique is obsolete. It is not. Their understanding of human intelligence as involving a direct, holistic, embodied discrimination of what is possible and called for in a situation remains as relevant as ever. In a future post, I will draw out in more detail the relevance of the Dreyfus critique for the current generation of AI systems.

For now, I'll close by sharing some of Stuart's recent observations about LLMs, as reported in a short and catchy piece in the MIT Technology Review ("Will Computers Ever Feel Responsible?" by Bill Gourgey):

“I guess I’m not surprised by reinforcement learning,” he says, adding that he remains skeptical and concerned about certain AI applications, especially large language models, or LLMs, like ChatGPT. “Machines don’t have bodies,” he notes. And he believes that being disembodied is limiting and creates risk: “It seems to me that in any area which involves life-and-death possibilities, AI is dangerous, because it doesn’t know what death means.”

My entry into the guestbook of the Misalignment Museum in SF, February 3, 2024. https://misalignmentmuseum.com

The penchant for extravagant extrapolations persists in the field of AI. This is why I seek out knowledgeable, sympathetic skeptics to be my initial guides for making sense of what is going on. Gary Marcus is such a voice in the conversation. I'll say more about all of this next time.

Members, stay tuned for part two next week!


Join the Conversation

Part two of today's post, coming next week, will be a members-only post. If you upgrade to a paid subscription, for as little as $8 per month, you gain the opportunity to post and reply to comments on these pages and to join a growing community of conversation here. You will also receive members-only posts. Sign up today!

What questions, thoughts, or perplexities does all of this bring up for you? Let me know in the comments or by sending me a message!