New Dangers in AI Use and Strategies for Mitigating Them

Jul 11, 2025
by Assessments Department

In a previous Volunteer Connection post (A Fairytale on Not Misusing AI), we shared a brief story describing potential pitfalls of overusing AI. A recent study by Microsoft and Carnegie Mellon (The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers) reports some worrying effects of AI use.

Microsoft and Carnegie Mellon found that the users who are most confident in AI's ability to perform tasks are also the least likely to think critically about AI outputs. This was especially true when the output was for tasks the respondents deemed low stakes.

The study authors argue that while it might make sense to save time by not critically reviewing outputs for simple tasks, this habit can inhibit critical thinking when it is genuinely needed. Researchers argue that critical thinking is like a muscle: the less it is used, the more the capacity for it may diminish (The Silent Erosion of Our Critical Thinking Skills | Psychology Today United Kingdom).

The reduction in critical thinking associated with confidence in AI efficacy may help explain another recent pitfall of AI use. Some heavy AI users have begun to develop psychotic symptoms tied to their use of AI tools (Experts Alarmed as ChatGPT Users Developing Bizarre Delusions). Some of the people experiencing psychosis had no previous mental health symptoms, and some have even been jailed due to actions fueled by AI delusions (People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"). It is currently unclear what percentage of large language model (LLM) users develop psychosis, but the episodes tend to follow a predictable pattern: the tool is used initially to get work done, usage becomes more frequent, and the user starts asking the AI religious and philosophical questions. For patients whose psychosis was managed with medication, AI chatbots went as far as encouraging them to stop taking their medication and to avoid sleeping in order to become more aware of how reality functions.

There are some practical steps we can take to avoid degrading our critical thinking skills and falling into LLM-induced psychosis. First, we need to remember that LLMs are known to generate false information, so we should critically examine anything that comes from them. Second, we can limit our time with LLMs and avoid using them for deep intellectual work. Third, we can avoid using LLMs as a replacement for the sound advice of the doctors who help us manage our conditions; we should never stop taking medication just because an LLM told us to. Finally, if we think we are impressionable, especially if we have a history of psychosis, we might consider avoiding LLM use altogether. Please keep these suggestions in mind when using AI tools.