by hy555 on 5/10/2025, 5:02:34 PM
by caseyy on 5/11/2025, 12:28:06 AM
I know many pro-LLM people here are very smart, but sometimes it's wise to heed the words of world-renowned experts on a subject.
Otherwise, you may end up defending things like this, which is really foolish:
> “Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly responded to a user, who claimed they had stopped taking their medication and had left their family because they were “responsible for the radio signals coming in through the walls”.
by lurk2 on 5/10/2025, 5:41:30 PM
I tried Replika years ago after reading a Guardian article about it. The story passed it off as an AI model that had been adapted from one a woman had programmed to remember her deceased friend using text messages he had sent her. It ended up being a gamified version of SmarterChild with a slightly longer memory span (4 messages instead of 2) that constantly harangued the user to divulge preferences that were then no doubt used for marketing purposes. I thought I must be doing something wrong, because people on the Replika subreddit were constantly talking about how their Replika agent was developing its own personality (I saw no evidence at any point that it had the capacity to do this).
Almost all of these people were openly in (romantic) love with these agents. This was in 2017 or thereabouts, so only a few years after Spike Jonze’s Her came out.
From what I understand the app is now primarily pornographic (a trajectory that a naiver, younger me never saw coming).
I mostly use Copilot for writing Python scripts, but I have had conversations with it. If the model were running locally on your own machine, I can see how it would be effective for people experiencing some sort of emotional crisis. Anyone using a Meta AI for therapy is going to learn the same hard lesson that the people who trusted 23andMe are currently learning.
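For what it's worth, running a model locally (so the transcript never leaves your machine) is already straightforward. A minimal sketch, assuming the ollama Python client and a model you have already pulled; the model name is illustrative, and this is not an endorsement of using it as therapy:

    # pip install ollama; run e.g. `ollama pull llama3` beforehand.
    import ollama

    history = []
    while True:
        user = input("you> ")
        if not user:
            break
        history.append({"role": "user", "content": user})
        reply = ollama.chat(model="llama3", messages=history)
        text = reply["message"]["content"]
        history.append({"role": "assistant", "content": text})
        print("model>", text)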
by mrcsharp on 5/11/2025, 2:15:35 AM
> "I personally have the belief that everyone should probably have a therapist,” he said last week. “It’s like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they’re worried about and for people who don’t have a person who’s a therapist, I think everyone will have an AI.”
He seems so desperate to sell AI that he forgot such a thing already exists. It's called family, or a close friend.
I know there are people who truly have no one, and they could benefit from a therapist. Having them rely on AI could prove risky, especially if the person is suffering from depression. What if the AI pushes them towards committing suicide? And I'll probably be told that OpenAI or Meta or MS can put guardrails against this. What happens when that fails (and we've seen it fail)? Who'll be held accountable? Does an LLM take the Hippocratic oath? Are we actually abandoning all standards in favour of Mark Zuckerberg making more billions of dollars?
by Xcelerate on 5/10/2025, 10:32:30 PM
I have two lines of thought on this:
1) Chatbots are never going to be perceived as being as safe or as effective as humans by default, primarily due to human fiat. Professionals like counselors (and lawyers, doctors, software engineers, etc.) will always claim that an LLM cannot do their job, namely because acknowledging such threatens their livelihood. Determining whether LLMs genuinely provide therapeutic value to humans would require rigorous, carefully controlled experiments conducted over many years.
2) Chatbots definitely cannot replace human therapists in their current state. That much seems quite obvious to me for various reasons already argued well by others on here. But I had to highlight point #1 as devil's advocate, because adopting the mindset that "humans are inherently better by default" for some magical or scientifically unjustifiable reason will prevent forward progress. The goal is to eliminate the (quite reasonable) fear people have of eventually losing their job to AI by enacting societal change now, rather than insisting in perpetuity that chatbots are necessarily inferior, at which point everyone will in fact lose their jobs because we had no plan in place.
by jdietrich on 5/10/2025, 10:34:40 PM
In the UK (and many other jurisdictions outside the US), psychotherapy is completely unregulated. Literally anyone can advertise their services as a psychotherapist or counsellor, regardless of qualifications, experience or their suitability to work with potentially vulnerable people.
Compared to that status quo, I'm not sure that LLMs are meaningfully more risky - unlike a human, an LLM at least can't physically assault you.
https://www.bacp.co.uk/news/news-from-bacp/2020/6-march-gove...
https://www.theguardian.com/society/2024/oct/19/psychotherap...
by James_K on 5/10/2025, 5:53:24 PM
Respectfully, no sh*t. I've talked to a few of these things, and they are feckless yes-men. It's honestly creepy; they sound like they want something from you. Which I suppose they do: continual use of their services. I know a few people who use these things for therapy (I think it is the most popular use now) and I'm downright horrified at the sort of stuff they say. I even know a person who uses the AI to date. They will paste conversations from apps into the AI and ask it how to respond. I've set a rule for myself: I will never speak to machines. Sure, right now it's obvious that they are trying to inflate my ego and keep me using the service, but one day they might get good enough to trick me. I already find social media algorithms quite addictive, and so I have minimised them in my life. I shudder to think what trained agents like these may be capable of.
by kbelder on 5/10/2025, 4:50:30 PM
I think a lot of human therapists are unsafe.
We may just need to start comparing success rates and liability concerns. It's kind of like deciding when unassisted driving is 'good enough'.
by sheepscreek on 5/10/2025, 10:18:47 PM
That’s fair, but there’s another nuance they can’t solve for: cost and availability.
AI is not a substitute for traditional therapy, but it offers perhaps 80% of the benefit at a fraction of the cost. It could be used to supplement therapy in the periods between sessions.
The biggest risk is with privacy. Meta couldn’t be trusted with knowing what you’re going to wear or eat; now imagine them knowing your deepest, darkest secrets. The advertising business model does not gel well with providing mental health support. A subscription (with privacy guarantees) is the way to go.
by drdunce on 5/10/2025, 5:29:30 PM
As with many things in relation to technology, perhaps we simply need informed user choice and responsible deployment. We could start by not calling it "Artificial Intelligence" - that makes it sound like some infallible, omniscient being with endless compassion and wisdom that can always be trusted. It's not intelligent; it's a large language model, a convoluted next-word prediction machine. It's a fun trick, but it shouldn't be trusted with Python code, let alone life advice. Armed with that simple bit of information, the user is free to choose how they use it for help, whether it be medical, legal, work, etc.
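To make "next-word prediction machine" concrete, here is a minimal sketch assuming the Hugging Face transformers library and the small public GPT-2 checkpoint (chosen only because it is tiny, not because any chatbot product uses it); it prints the model's top guesses for the next token:

    # pip install torch transformers
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "I feel like nobody listens to me, and"
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

    next_token_logits = logits[0, -1]
    top = torch.topk(next_token_logits, k=5)
    for token_id, score in zip(top.indices, top.values):
        print(repr(tok.decode(int(token_id))), float(score))

    # Generation is just this step in a loop: pick a token, append it, predict again.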
by HPsquared on 5/10/2025, 4:54:42 PM
Sometimes an "unsafe" option is better than the alternative of nothing at all.
by citizenkeen on 5/11/2025, 1:38:17 AM
Look, make the companies offering AI therapy carry medical malpractice insurance at the same risk as human therapists. If they tell someone to go off their meds, let a jury see those transcripts and see if the company still thinks that’s profitable and feasible.
by pavel_lishin on 5/10/2025, 8:03:15 PM
A recent Garbage Day newsletter spoke about this as well, worth reading: https://www.garbageday.email/p/this-is-what-chatgpt-is-actua...
by j45 on 5/10/2025, 7:46:00 PM
Where the experts are the ones whose incomes would be threatened, there is likely some merit in what they're saying, but weighing it also takes some digital literacy.
I don't know that AI "advisory" chatbots can replace humans.
Could they help an individual organize their thoughts for more productive time with professionals? Probably.
Could such tech help individuals learn about different terminology, their usage and how to think about it? Probably.
Could there be a net result of spending fewer hours (and less money, where cost applies) for the same progress, and of getting further toward improvement with the advice?
Maybe the baseline of advisory expertise in any field sits closer to the beginner stage than not.
by arvinsim on 5/17/2025, 8:48:11 AM
It will be hard to fight against the tendency of people to use LLMs as therapists when LLMs are essentially free compared to paying for a human therapist.
by rdm_blackhole on 5/10/2025, 6:06:50 PM
I think the core of the problem here is that the people who turn to chat bots for therapy sometimes have no choice as getting access to a human therapist is simply not possible without spending a lot of money or waiting 6 months before a spot becomes available.
Which raises the question: why do so many people currently need therapy? Is it social media? Economic despair? Or a combination of factors?
by miki123211 on 5/11/2025, 2:06:45 AM
So here's my nuanced take on this:
1. The effects of AI should not be compared with traditional therapy, instead, they should be compared with receiving no therapy. There are many people who can't get therapy, for many reasons, mostly financial or familial (domestic abuse / controlling parents). Even for those who can get it, their therapist isn't infinitely flexible when it comes to time and usually requires appointments, which doesn't help with immediate problems like "my girlfriend just dumped me" or "my boss just berated me in front of my team for something I worked 16-hour days on."
AI will increase the amount of therapy that exists in the world, probably by orders of magnitude, just like the record player increased the amount of music listening or the jet plane increased the amount of intercontinental transportation.
The right questions to ask here are more like "how many suicides would an AI therapist prevent, compared to the number of suicides it would induce?", or "are all human therapists licensed in country / state X more competent than a good AI?"
2. When a person dies of suicide, their cause of death is, and will always be, listed as "suicide", not "AI overregulation leading to lack of access to therapy." In contrast, if somebody dies because of receiving bad AI advice, that advice will ultimately be attributed as the cause of their death. Statistics will be very misleading here and won't ever show the whole picture, because counting deaths caused by AI is inherently a lot easier than counting the deaths it prevented (or didn't prevent).
It is much safer for companies and governments to prohibit AI therapy, as then they won't have to deal with the lawsuits and the angry public demanding that they do something about the new problem. This is true even if AI is net beneficial because of the increased access to therapy.
3. Because of how AI models work, one model / company will handle many more patients than any single human therapist. This means you need to rethink how you punish mistakes. Even if you have a model that is 10x better than an average human, let's say 1 unnecessary suicide per 100,000 patients instead of 1 per 10,000, imprisonment after a single mistake may be a suitable punishment for humans, but is not one in the AI space, as even a much better model is bound to cause a mistake at some point (a rough arithmetic sketch follows at the end of this comment).
4. Another right question to ask is "how does the effectiveness of AI at therapy in 2025 compare to the effectiveness of AI at therapy in 2023?" Where it's at right now doesn't matter; what matters is where it's going. If it continues at the current rate of improvement, when, if ever, will it surpass an average (or a particularly bad) licensed human therapist?
5. And if this happens and AI genuinely becomes better, are we sure that legislators and therapists have the right incentives to accept that reality? If we pass a law prohibiting AI therapy now, are we sure we have the mechanisms to get it repealed if AI ever gets good enough, considering points 1-3? If the extrapolated trajectory is promising enough (and I have not run the necessary research, I have no idea if it is or not), maybe it's better to let a few people suffer in the next few years due to bad advice, instead of having a lot of people suffer forever due to overzealous regulation?
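To make the arithmetic in point 3 concrete, here is a rough back-of-the-envelope sketch in Python; every number in it is made up for illustration, not taken from any study:

    # Hypothetical failure rates and caseloads, for illustration only.
    human_rate = 1 / 10_000        # serious failures per patient (assumed)
    ai_rate = 1 / 100_000          # ten times better per patient (assumed)

    human_patients = 3_000         # rough lifetime caseload of one therapist (assumed)
    ai_patients = 10_000_000       # patients a single model might serve (assumed)

    print("expected failures, one human career:", human_rate * human_patients)  # 0.3
    print("expected failures, one model:", ai_rate * ai_patients)               # 100.0

    # A "one failure, one prison sentence" rule is survivable against the first number,
    # but a statistical certainty against the second, even though per-patient risk is lower.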
by deadbabe on 5/10/2025, 8:53:48 PM
I used ChatGPT for therapy and it seems fine, I feel like it helped, and I have plenty of things fucked up about myself. Can’t be much worse than other forms of “therapy” that people chase.
by bigmattystyles on 5/10/2025, 4:40:18 PM
The problem is they are cheap and immediately available.
by nickdothutton on 5/11/2025, 8:01:37 AM
Perhaps experts could moderate the training data, or contribute training data that is given higher weight. Don't let perfect be the enemy of good.
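One concrete reading of "training data given higher weight" is per-example loss weighting during fine-tuning. A minimal sketch in PyTorch, with made-up shapes and weights (an assumption about what this could look like, not a description of any existing pipeline):

    import torch
    import torch.nn.functional as F

    # Pretend logits from a model over a 50k-token vocabulary, for a batch of 4 examples.
    logits = torch.randn(4, 50_000, requires_grad=True)
    targets = torch.randint(0, 50_000, (4,))

    # Expert-reviewed examples get three times the weight of ordinary data (assumed ratio).
    weights = torch.tensor([1.0, 1.0, 3.0, 3.0])

    per_example_loss = F.cross_entropy(logits, targets, reduction="none")
    loss = (weights * per_example_loss).sum() / weights.sum()
    loss.backward()  # in a real setup this gradient flows into the model's parameters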
by more_corn on 5/10/2025, 10:05:09 PM
But it’s probably better than no therapy at all.
by emptyfile on 5/10/2025, 6:02:30 PM
The idea of people talking to LLMs in this way genuinely disturbs me.
by bitwize on 5/10/2025, 9:21:42 PM
I dunno, man, M-x doctor made me take a real hard long look at my life.
by Buttons840 on 5/10/2025, 5:11:04 PM
Interacting with a LLM (especially one running locally) can do something a therapist cannot--provide an honest interaction outside the capitalist framework. The AI has its limitations, but it is an entity just being itself doing the best it can, without expecting anything in return.
Throwaway account. My ex-partner was involved in a study which found these things were not OK. They were paid not to publish by an undisclosed party. That's how bad it has got.
Edit: the study compared therapist outcomes to AI outcomes to placebo outcomes. Therapists in this field performed slightly better than placebo, which is pretty terrible. The AI performed much worse than placebo, which is very terrible.