Care at the Edge of Automation
Technological research is guided (and sometimes misguided) by deep ontological assumptions about the nature of cognition, agency, communication, and more. If we are to create technologies that serve human care, we must bring these hidden assumptions into the open and question them at their roots.

Today I have two things to share. First, I am honored to share the first blog post I wrote for Topos Institute, where I am currently a philosopher in residence. I am going to share just a fragment of the post here. Please head over to the Topos website to read the whole piece! You can find it here.
Second, I am going to share the link to an interview with my friend, Professor Iain Thomson, who is one of the most important and striking voices talking about Heidegger, technology, and AI today. Look for that link below, and also be sure to check out Iain's provocative new little book, Heidegger on Technology's Danger and Promise in the Age of AI (Cambridge University Press).
Communicated by Brendan Fong, CEO of Topos:
I’m delighted to welcome our first philosopher-in-residence, B. Scot Rousse! In getting to know B over the past few months, I’ve been struck by his insights into the ways technologies can centre and marginalise human care and meaning-making, as well as his deep commitment to serving others and building a world that supports this care. We’re very excited to bring B’s rich, distinctive intellectual tradition into Topos. Moreover, I’m excited to explore how this perspective can be a fresh ingredient in new, Topos-led technologies that empower human communities in this technological era.
Here’s a first post from B, in which he shares a bit more about his intellectual lineage and the questions that drive him.
ABSTRACT
Technologies don’t just solve problems, they change us. We invent technologies, and they invent us in turn, shaping our lives and worlds. This is the phenomenon that Terry Winograd and Fernando Flores, in Understanding Computers and Cognition (1986), called “ontological design.” It matters now more than ever—along with a second lesson they saw clearly. Technological research is always guided (and sometimes misguided) by deep ontological assumptions about, e.g., the nature of cognition, agency, and communication. If we are to create technologies that truly serve human flourishing and care, we must bring these hidden assumptions into the open and question them at their roots.
1. Philosophy, Ontology, and Design
Technologies are not just the application of scientific knowledge to practical problems. They reshape our space of possibilities, altering how we live, act, and understand ourselves. The design of new technologies is, often quietly, the design of new ways of being human. Our inventions invent us in return.
This insight captures the notion of “ontological design,” introduced by Terry Winograd and Fernando Flores in their 1986 book Understanding Computers and Cognition: A New Foundation for Design. Rapid advances of AI and other technologies today demand that we grapple anew with this startling realization.
Take the smartphone. It didn’t merely make telecommunication more convenient. It placed us into a new condition of constant connectivity—reshaping how we learn about events, navigate the physical world, seek social connection, and even become the people we are. The repercussions of this transformation for our collective well-being are still coming into view.
Today is an exhilarating and disorienting time to be alive, and to be thinking about and building new technologies. Advances in AI have reignited fundamental questions about the human predicament: What is language? Intelligence? Communication? Flourishing? What kind of human beings are we becoming? What understanding of our predicament should guide the design and use of AI and other technologies?
In Understanding Computers and Cognition, Winograd and Flores showed that philosophical questions are always at stake in technological design. Every new system, they argued, carries a tacit or explicit stand on fundamental issues: what cognition is, what agency is, what communication is, and so on.
They named the guiding assumptions of the AI research of their time “the rationalistic tradition”: a view that human intelligence consists in formal operations (such as search and inference over explicit representations); that agency is the solving of discrete problems by selecting between definite alternatives; and that communication is the transmission of information.
Winograd and Flores issued a threefold challenge to the rationalistic tradition: (1) to call attention to the hidden philosophical assumptions shaping AI research; (2) to show how these assumptions can both limit our technological capabilities and thwart our imagination for the possibilities of human-machine interaction; and (3) to offer an alternative ontology to guide future design. The urgency of this threefold challenge has been renewed today.
Understanding Computers and Cognition argued, quite presciently, that computer systems would become woven into human life as conversational technologies. But not all conversations are alike. Sometimes, for example, we are simply speculating about possibilities; other times we are directly coordinating action in requests, offers, and promises.
Adequately designing software to assist in the execution of such conversations for action, Winograd and Flores showed, required rethinking the nature of communication itself—not as the transmission of information, but as the coordination of commitments.
A promise is not a piece of information. It is a way of shaping and bringing forth the future, together. For example, ride-sharing apps work when a request (“pick me up”) and a promise (“driver arriving”) coordinate action; both hinge on mutual commitment, not just clarity of information.
2. Our Need For Renewed Ontological Reflection
While the AI paradigm has shifted since the 1980s—from symbolic, rules-based AI to neural networks and machine learning—the need for the kind of philosophical reflection exemplified by Winograd and Flores has only deepened. Today's systems operate differently, but they often rest on similarly narrow assumptions about human intelligence, communication, and agency.
We must ask: What are the ontological assumptions guiding AI research today? How might they be limiting not only technical development, but also how we live and interact with AI systems? What alternative conceptions of intelligence, agency, and communication might better orient the future?
These are the kinds of questions that animate my research. I am enthusiastic to explore, alongside the Topos community, how philosophical reflection and technological invention can mutually inform each other, especially as we seek to create technologies that expand, rather than constrict, our capacity for shared sense-making in these turbulent times...
Please head over to the Topos Institute blog to read the rest of the post!
Here is the link to the interview with Iain Thomson about Heidegger, technology, and AI. Check it out!
Let me know what comments, questions, or provocations you have after reading this post and watching the video of Iain.
Join the Conversation
Your support makes Without Why possible. If you upgrade to a paid subscription, for as low as $8 per month, you gain the opportunity to post and reply to comments on these pages and to join a growing community of conversation here. You will also receive extra-special members-only posts. Sign up today!