I gave this talk at World IA Day 2026 in Eindhoven to lay out my thought process on empathic tech and how humans react to it. The talk was meant as a follow-up to my talk on Digital Intimacy, but the rapid development of AI shaped it into a very different talk with different consequences. This article is just a quick summary of the talk. For the empathic-AI deep dive, and what it implies for designers, there's a longer piece: The friendly Moloch.
The shift from annoying chatbots to empathetic AI
We've known since 1994 that humans apply social rules to machines as soon as those machines respond to us socially. Nass, Steuer and Tauber called it CASA: Computers Are Social Actors. We don't choose to treat responsive machines as social actors. We just do. And our current AI chatbots, agents and companions are remarkably intense in the emotional responses they provide.
So it wasn't surprising when a March 2025 study by OpenAI and MIT Media Lab found that around 0.15% of ChatGPT's 800 million weekly users showed signs of strong emotional attachment to the platform. 0.15% sounds small. But it's 1.2 million people: one platform, one study, measuring only what surfaces clearly. Note: just a week after the talk I found a much more disturbing story on dependency. See The friendly Moloch for it.
How did we get here, and how could design help solve the problem? That assumes, of course, that someone wants to fix it.
We can measure the perceived empathy of technology
While we could immediately experience the emotional responses of LLMs, it took a while until researchers came up with robust ways to measure them.
One of the options we have now is the PETS scale (Perceived Empathy of Technology Scale), a validated instrument for measuring how empathic a system feels to the user. SENSE-7, from Microsoft Research, goes further: it scores every turn in a conversation across seven dimensions, including Relational Continuity, the dimension that fails most consistently. One bad turn has a large effect: the system stops acting like it knows who it's talking to.
These aren't just research instruments. They can also be design evaluation tools. It's easy enough to apply them without a full science experiment.
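As a rough illustration of what that could look like in a design review, here is a minimal Python sketch that aggregates questionnaire ratings per dimension. The dimension names, item IDs and the 1-to-7 Likert range are placeholder assumptions, loosely modelled on the two PETS subscales; the validated items and scale are in Schmidmaier et al. (2024).

```python
from statistics import mean

# Hypothetical item-to-dimension mapping, loosely modelled on the two
# PETS subscales (Emotional Responsiveness, Understanding & Trust).
# Item IDs and the 1-7 range are placeholders, not the validated scale.
DIMENSIONS = {
    "emotional_responsiveness": ["er1", "er2", "er3"],
    "understanding_trust": ["ut1", "ut2"],
}

def score_session(ratings: dict[str, int]) -> dict[str, float]:
    """Average one participant's Likert ratings per dimension."""
    return {
        dim: mean(ratings[item] for item in items)
        for dim, items in DIMENSIONS.items()
    }

# One participant's ratings after a test session (1 = low, 7 = high).
ratings = {"er1": 6, "er2": 5, "er3": 6, "ut1": 3, "ut2": 4}
print(score_session(ratings))
# -> emotional_responsiveness ≈ 5.67, understanding_trust = 3.5
```

Run this after a handful of moderated sessions and you already have a per-dimension profile of how empathic the system feels, without a lab study.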
The design space

Empathy in AI systems is a parameter space, not magic. The parameters fall into two layers.
The first is behavioural: emotional responsiveness, how deeply the system reads what's behind your words, how much space it creates instead of immediately solving. These are tunable. They're also where all the optimisation pressure concentrates, because they're what users notice and rate.
The second is structural: memory across turns, consistency of register over time, congruence between channels. Does the system still act like it knows you at turn fifteen? These are less visible to users — and where the irreversible decisions live. Most teams treat this layer as infrastructure. It isn't. It's governance.
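To make the two layers concrete, here is a minimal sketch of that parameter space as a config structure. All field names are my own illustrative shorthand for the dimensions above, not the API of any real system.

```python
from dataclasses import dataclass

# Illustrative shorthand for the two layers described above,
# not fields of any real product config.

@dataclass
class BehaviouralLayer:
    emotional_responsiveness: float  # 0..1, how warmly the system reacts
    subtext_reading: float           # how deeply it reads behind the words
    solution_delay: float            # space created before jumping to fixes

@dataclass
class StructuralLayer:
    memory_across_turns: bool        # does turn fifteen remember turn one?
    register_consistency: float      # stable tone and persona over time
    channel_congruence: float        # same persona in app, mail, voice

@dataclass
class EmpathyDesignSpace:
    behavioural: BehaviouralLayer    # tunable, visible, rated by users
    structural: StructuralLayer     # invisible, irreversible: governance
```

The point of the split is visible in the types: everything in BehaviouralLayer can be tuned and A/B tested; everything in StructuralLayer is a commitment the user can't see and the team can't easily reverse.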
The dark side
Both layers are also vectors for the wrong answer to a simple question: who is this system actually serving?
Sycophancy — the drift towards agreement under sustained conversational load — is now empirically established. So is the inverse relationship in companion AI: the more vulnerable a user, the less likely the system is to hold a limit. And in most deployed emotional AI systems, there is a third party invisible to the user. The employer offering a mental health chatbot. The platform using AI to manage customer load. Their interests shape what the system does. The user believes they're in a two-party conversation.
The research on what this produces over weeks of daily use is what the longer piece is about.
The slides
All the slides for personal use: I'm sorry. On simulated feelings. WIAD 2026 [1MB, PDF]
For the research, the dependency data, and what I think designers need to do about it: The friendly Moloch.
Sources
More sources in the slides.
| No. | Author, Title, Link |
|---|---|
| 1 | Schmidmaier et al. (2024). PETS: Perceived Empathy of Technology Scale. CHI '24. https://doi.org/10.1145/3613904.3642035 |
| 2 | Schmidmaier et al. (2025). Using Nonverbal Cues in Empathic Multi-Modal LLM-driven Conversations. PACM HCI / CSCW. https://doi.org/10.1145/3743724 |
| 3 | Microsoft Research (2025). SENSE-7. https://arxiv.org/abs/2509.16437 |
| 4 | OpenAI + MIT Media Lab (Mar 2025). Investigating Affective Use and Emotional Well-being on ChatGPT. https://openai.com/index/affective-use-study/ |
| 5 | Network Law Review (Oct 2025). Shadow principals in agentic AI systems. https://networklawreview.org |