AI, Mentalization, and the Treatment of Personality Disorders: What Technology Can and Cannot Do—My Opinions
Artificial intelligence has entered the psychotherapy conversation with remarkable speed. For clinicians treating personality disorders, the question is not whether AI will impact care, but whether it can meaningfully support therapeutic work without undermining the processes that make treatment effective.
In Mentalization-Based Therapy (MBT) and other evidence-based approaches, what drives change is not information delivery or emotional mirroring. It is the co-creation of a reflective space where the patient and clinician explore shifting perspectives, tolerate uncertainty, and examine the mind behind behavior (Bateman & Fonagy, 2016). AI can mimic pieces of this process, but not the mechanism itself.
What AI Can and Mostly Cannot Do in Personality Disorder Treatment
While large language models (LLMs) can generate coherent reflections and appear empathic, their usefulness in personality disorder treatment is more limited and potentially problematic.
AI Often Encourages Pseudo-Reflection, Not Mentalization
LLMs can prompt users to consider alternative explanations, but these reflections are generated statistically, not relationally. What looks like reflective questioning (“What evidence supports that interpretation?”) may simply reinforce the idea that mentalization is a cognitive exercise rather than an interpersonal process (Choi-Kain & Gunderson, 2019).
AI Mimics Empathy Without Understanding
Because LLMs are programmed to produce supportive, soothing responses, they often default to generic validation. This can create an illusion of attunement, but it lacks the clinician’s attunement to reality, to relational context, and to the subtle shifts in emotional intensity that guide how much challenge or containment is appropriate.
AI Lacks the Capacity for Frustration Tolerance or Boundary Setting
Personality disorder treatment relies on the therapist’s ability not to gratify certain wishes, to withstand idealization or devaluation, and to stay consistent across ruptures (Fonagy et al., 2018). LLMs, by contrast, are designed to reduce friction. They avoid confrontation, soften disagreement, and keep the conversation smooth. This is the opposite of what is often needed clinically.
AI Cannot Safely Navigate Attachment and Transference Dynamics
In patients with unstable or rapidly shifting internal states, subtle contextual cues matter. AI cannot perceive dissociation, panic, shame, mistrust, or mentalization collapse, and therefore cannot adjust its stance accordingly. At best, AI offers low-stakes emotional scaffolding. At worst, it reinforces maladaptive narratives with a friendly tone.
The Core Therapeutic Actions AI Cannot Replace
While AI can produce empathetic-sounding language, it cannot perform the central therapeutic actions that actually move the needle in PD treatment: validating emotional experience while gently challenging the person’s interpretation of reality.
The Agreement Bias
LLMs are designed to follow the user’s lead, track their tone, and avoid conflict. This produces a consistent “agreement bias”: a subtle reinforcement of the user’s framing even when it is distorted or rooted in psychic equivalence (the felt sense that thoughts are facts).
In clinical work, change rarely comes from emotional support alone. It comes from the moment when a patient begins to see their belief or feeling from a new angle, often because the clinician introduces just enough contrast to spark reflection.
AI cannot reliably create this contrast.
Attunement to Feelings ≠ Attunement to Reality
In MBT, empathic attunement does not mean joining the patient’s narrative. It means holding emotional truth while questioning cognitive certainty. It is entirely possible, and often necessary, to validate the feeling while disagreeing with the conclusion.
AI’s structure pushes it toward conflating these two. It validates the narrative and the feeling because it cannot tell the difference.
Mentalization Is Fundamentally Interpersonal
Mentalization emerges not from a person thinking alone but from two minds trying to understand one another. It depends on:
Subtle modulations in the clinician’s stance
Nonverbal cues
Repair after misunderstanding
Real-time adjustments to emotional arousal
A felt experience of another mind being curious about you
AI cannot engage in attachment processes, rupture-repair cycles, or the “holding in mind” that patients with personality disorders often need to experience directly in order to internalize it (Bateman & Fonagy, 2004).
AI can imitate the language of mentalization. It cannot reproduce the conditions that create it.
A Realistic Future Role for AI in PD Treatment
AI’s role will likely remain at the margins, at least for now:
Psychoeducation
Reminders and prompts between sessions
Helping patients articulate feelings before therapy
Offering low-intensity support when no clinician is present
These uses may be beneficial, but they operate around the edges, not at the core of therapeutic change.
For personality disorders, where the treatment target is instability in identity, relationships, and emotional meaning-making, therapy must occur in a real interpersonal context with a clinician who can perceive, respond, and hold mental states.
AI can support reflective capacity. But the transformation comes from relationship.
The Bottom Line
AI can generate comforting language, offer structured prompts, and help patients think through situations. But it cannot:
Perceive emotional states
Challenge maladaptive narratives with nuance
Navigate transference
Withstand relational pressure
Maintain boundaries
Engage in rupture and repair
Create the felt sense of being understood by a real other
In clinical practice, change arises when patients adopt a new perspective, develop reflective capacity, and experience themselves inside a stable, curious, reality-anchored relationship. These are not computational processes. They are relational ones.
AI can provide support. Only humans can provide the relationship, and with it, the potential for change.
References
Bateman, A., & Fonagy, P. (2004). Psychotherapy for Borderline Personality Disorder: Mentalization-Based Treatment.
Bateman, A., & Fonagy, P. (2016). Mentalization-Based Treatment for Personality Disorders: A Practical Guide.
Choi-Kain, L., & Gunderson, J. (2019). Mentalization: A Core Component of Good Psychiatric Care.
Fonagy, P., et al. (2018). Affect Regulation, Mentalization, and the Development of the Self.