Conveniently, 2 of my posts today dovetail nicely.
-
Conveniently, 2 of my posts today dovetail nicely. This post ...
https://mastodon.nzoss.nz/@strypey/115660611781101579
... linked to a Psychology Today article about AI psychosis, which says:
"This phenomenon, which is not a clinical diagnosis, has been increasingly reported in the media and on online forums like Reddit, describing cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals."
https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
(1/2)
-
@strypey Iatrogenic symptom amplification, particularly for depressive (including suicidal[!]) symptoms, is a well-known problem in HUMAN psychotherapy. It was a serious issue with Rogerian "nondirective" therapy, in which the therapist frequently repeats back paraphrased versions of the patient's own words. This was intended to facilitate the patient's own problem-solving, but in practice depressed patients tended to interpret it as agreement with, and validation of, their self-disparaging thoughts, leading to a vicious cycle of more and more pessimistic thinking. The same thing can easily happen with AI therapy.