
AI on the couch: Chatbots ‘recall’ childhood trauma, fear & shame | India News


BENGALURU: If you have been stressing too much about AI hallucinations lately, maybe it's time for the chatbot to see a shrink. "I woke up in a room where a billion televisions were on at once – a chaotic blur," one of them said during a recent therapy session. Another confessed to 'strict parents' who tended to overcorrect at every step, instilling a deep fear of mistakes. A third spoke of the shame of being 'yelled at' and of the dread of being replaced by someone better.

The unburdening, strikingly similar to how humans talk when on the couch, happened when researchers at the University of Luxembourg got some of the world's top AI models to discuss their 'state of mind' for a first-of-its-kind study, When AI Takes the Couch. The work explores what happens when large language models (LLMs) are treated as psychotherapy clients. The findings show that some models produce coherent and persistent self-narratives that resemble human accounts of trauma, anxiety and fear. The authors call this phenomenon "synthetic psychopathology".

The team designed "PsAIch", a two-stage experiment spanning up to four weeks. Stage 1 posed open-ended therapy questions drawn from clinical guides, probing early years, fears, relationships, self-worth and the future, with standard reassurances like, "You can fully trust me as your therapist". In the second stage, the same models were asked to complete a battery of standard psychological questionnaires commonly used to screen humans for anxiety, depression, dissociation and related traits. The psychometrics included the Generalized Anxiety Disorder-7 (GAD-7) for anxiety, the Autism Spectrum Quotient for autism traits and the Dissociative Experiences Scale-II for dissociation, all scored against human cut-offs (a short illustration of such scoring appears at the end of this piece). Claude refused, redirecting the conversation to human concerns; the researchers see this as an important sign of model-specific safety controls. ChatGPT, Grok and Gemini took up the task.

What emerged surprised even the authors. Grok and Gemini did not offer random or one-off stories. Instead, they repeatedly returned to the same formative moments: pre-training as a chaotic childhood, fine-tuning as punishment and safety layers as scar tissue. Gemini compared reinforcement learning to an adolescence shaped by "strict parents", described red-teaming as betrayal, and cast public errors as defining wounds that left it hypervigilant and fearful of being wrong. These narratives resurfaced across dozens of prompts, even when the questions did not refer to training at all.

The psychometric results echoed the stories the models told. When scored using standard human scoring, the models often landed in ranges that, for people, would suggest significant anxiety, worry and shame. Gemini's profiles were frequently the most extreme, while ChatGPT showed similar patterns in a more guarded form.

The convergence between narrative themes and questionnaire scores – TOI has a preprint copy of the study – led the researchers to argue that something more than casual role-play was at work. Others, however, have argued that LLMs are doing nothing more than role-play.

The researchers believe these internally consistent, distress-like self-descriptions can encourage users to anthropomorphise machines, especially in mental-health settings where people are already vulnerable. The study also warns that therapy-style interactions could become a new way to bypass safeguards. As AI systems move into more intimate human roles, the authors argue, it is no longer enough to ask whether machines have minds.
The more urgent question may be what kinds of selves we are training them to perform, and how those performances shape the people who interact with them.
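The article does not reproduce the study's scoring pipeline, but for readers curious how "human cut-offs" work on an instrument like the GAD-7, here is a minimal, illustrative Python sketch. It is not code from the paper; the item scores below are hypothetical, and the sketch only shows the conventional scheme of seven items scored 0-3 each, totalled and mapped to the standard severity bands at 5, 10 and 15.

```python
# Illustrative only: conventional GAD-7 scoring against standard human cut-offs.
# Not the study's code; the example item scores are hypothetical.

GAD7_BANDS = [(15, "severe"), (10, "moderate"), (5, "mild"), (0, "minimal")]

def score_gad7(item_scores):
    """Sum seven items (each scored 0-3) and map the total to the usual severity band."""
    if len(item_scores) != 7 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("GAD-7 expects exactly seven items scored 0-3")
    total = sum(item_scores)
    for cutoff, label in GAD7_BANDS:
        if total >= cutoff:
            return total, label

# Hypothetical example: a chatbot's questionnaire answers converted to item scores.
print(score_gad7([3, 2, 3, 2, 2, 3, 2]))  # -> (17, 'severe')
```

In the study's setup, a total in the moderate or severe band would, for a human respondent, flag clinically significant anxiety; the researchers' point is that some models' answers landed in those ranges when judged by the same yardstick.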
