Four AI Tools Enhance Psychiatry Clinical Practice

Arjun Nagendran, PhD, and Scott Compton, PhD
Arjun Nagendran, PhD, and Scott Compton, PhD, researchers at Lurie Children’s, have implemented four AI models that target critical gaps in mental health care by reducing clinician burden, improving patient access and safety, and supporting personalized care and evidence-based innovation.
AI systems have become increasingly proficient at interpreting data, creating a pathway to improvement that may also enhance traditional medical approaches to mental health diagnostics, monitoring, and intervention. The growing demand for mental health services, exacerbated by the COVID-19 pandemic, underscores the importance of leveraging AI to detect mental illness early, optimize safety and treatment planning, provide continuous patient support, and reduce clinician fatigue.
Neuropsychology Outcome Reporting Assistant (NORA) automates aspects of data collection and report generation, reducing time spent on documentation and freeing more time for clinical care. The model, trained on clinician-specific content, methods, and style, processes multi-source patient data to draft reports. Each draft remains subject to manual clinician review, and the model flags questionable and low-confidence areas for clinician input; the more a clinician edits, the better subsequent output becomes. Streamlining this workflow could reduce providers' administrative duties, increase accessibility for patients, and accelerate the delivery of results.
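The flagging step described above can be illustrated with a minimal sketch. The `ReportSection` type, the confidence field, and the 0.8 threshold are illustrative assumptions, not NORA's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ReportSection:
    title: str
    text: str
    confidence: float  # hypothetical model self-confidence, 0.0-1.0

def flag_for_review(sections, threshold=0.8):
    """Return the draft sections whose confidence falls below the
    review threshold, so the clinician's attention goes there first."""
    return [s for s in sections if s.confidence < threshold]

# Hypothetical draft report produced by the model
draft = [
    ReportSection("History", "Patient reports ...", 0.95),
    ReportSection("Test Interpretation", "Scores suggest ...", 0.62),
]
needs_review = flag_for_review(draft)  # only the 0.62 section is flagged
```

Clinician edits to flagged sections would then feed back into training, which is the improvement loop the paragraph describes.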
Medication Information for Neuropsychological Disorders (MIND) is trained to provide patients and families with information on medications, side effects, and dosing. Clinicians are estimated to spend roughly 30% of their time responding to repetitive patient queries; the model aims to reduce that burden. With only 12 months of experimentation and study, its long-term effectiveness still needs validation. “Training the model on continuously updated, doctor-verified content enables it to provide confidence measures for its responses, deliver real-time, context-specific information, and appropriately acknowledge when information is insufficient and a provider should be contacted,” says Dr. Nagendran.
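The behavior Dr. Nagendran describes, answering only when confidence is sufficient and otherwise deferring to a provider, can be sketched as a confidence gate. The lookup structure, threshold, and example entries here are hypothetical stand-ins for MIND's doctor-verified knowledge base:

```python
CONTACT_PROVIDER = ("I don't have verified information on that; "
                    "please contact your care provider.")

def answer_query(query, knowledge_base, min_confidence=0.75):
    """Return a doctor-verified answer only when its confidence meets
    the bar; otherwise acknowledge the gap and defer to the provider."""
    entry = knowledge_base.get(query.lower().strip())
    if entry is None or entry["confidence"] < min_confidence:
        return CONTACT_PROVIDER
    return entry["answer"]

# Hypothetical verified-content entries with confidence measures
kb = {
    "can this be taken with food?":
        {"answer": "Yes, with or without food.", "confidence": 0.92},
    "what about grapefruit juice?":
        {"answer": "Possibly avoid.", "confidence": 0.40},
}
```

Under this gate, a low-confidence entry is treated the same as a missing one, which is the "appropriately acknowledge when information is insufficient" behavior in the quote.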
Suicide Assessment Fidelity Evaluator (SAFE) evaluates the quality of patient safety plans for suicide risk in real time, which can reduce review time by 70% and provide feedback while the plan is being created. Checking the safety plan and its action steps during the patient encounter helps determine whether the plan is sound. Preliminary evidence suggests these AI models can also predict a patient's risk of returning to the emergency department based on this data. The long-term goal is to offer clinicians real-time AI assistance that picks up on cues a provider may miss, refining action steps and truly customizing the safety plan for each patient.
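One simple form of fidelity evaluation is checking a plan for completeness against required components. The component list below loosely follows common safety-planning templates and is an assumption for illustration; SAFE's actual criteria and scoring are not described in this article:

```python
# Assumed required components of a safety plan (illustrative only)
REQUIRED_COMPONENTS = [
    "warning_signs",
    "coping_strategies",
    "social_contacts",
    "professional_contacts",
    "means_restriction",
]

def fidelity_check(plan):
    """Score a safety plan as the fraction of required components that
    are present and non-empty, and list what is still missing."""
    missing = [c for c in REQUIRED_COMPONENTS if not plan.get(c)]
    score = 1 - len(missing) / len(REQUIRED_COMPONENTS)
    return score, missing

# Hypothetical in-progress plan, evaluated as it is being written
plan = {
    "warning_signs": "Withdrawing from friends",
    "coping_strategies": "Listening to music, walking",
    "social_contacts": "Aunt Maria",
    "professional_contacts": "On-call crisis line",
}
score, missing = fidelity_check(plan)
```

Running a check like this during plan creation, rather than after, is what enables the real-time feedback the paragraph describes.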
Suicide Prevention AI Role-play Kit (SPARK) uses simulated, realistic child patients to enable safe, scalable, and diverse suicide prevention training for clinicians. Traditional training relies on human role-play, which is difficult to scale, costly, and limited in demographic representation. SPARK leverages a dual-AI approach in which one model performs the role-play while a second model functions as an external observer, providing after-action feedback on clinicians’ skills and performance. This approach supports skill development and confidence-building in a safe, private, and controlled training environment. Project work has already begun, and additional grant funding is actively being pursued.
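The dual-AI approach, one model playing the patient while a second observes and gives after-action feedback, can be sketched with two cooperating components. The scripted replies, the observer's single skill check, and all names here are illustrative assumptions, not SPARK's design:

```python
class PatientSimulator:
    """Stands in for the role-play model: returns scripted replies."""
    def __init__(self, script):
        self._script = iter(script)

    def reply(self, clinician_utterance):
        return next(self._script, "(silence)")

class Observer:
    """Stands in for the second model: records the exchange and, for
    illustration, checks one skill (asking directly about suicide)."""
    def __init__(self):
        self.transcript = []

    def record(self, speaker, text):
        self.transcript.append((speaker, text))

    def after_action_feedback(self):
        asked_directly = any(
            spk == "clinician" and "suicide" in txt.lower()
            for spk, txt in self.transcript
        )
        return {"asked_directly": asked_directly,
                "turns": len(self.transcript)}

# A two-turn training exchange with a simulated patient
patient = PatientSimulator(["I've been feeling really down.",
                            "Sometimes I think about it, yeah."])
observer = Observer()
for question in ["How have you been feeling lately?",
                 "Have you had any thoughts of suicide?"]:
    observer.record("clinician", question)
    observer.record("patient", patient.reply(question))
feedback = observer.after_action_feedback()
```

Separating role-play from observation lets the trainee interact naturally while the second component focuses entirely on skills assessment, which is the rationale the paragraph gives for the dual-model design.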
MIND, NORA, and SAFE have been identified as highly implementable, says Dr. Nagendran, while SPARK will take more time. “We are now collaborating with innovation teams on efficacy studies and moving toward commercialization.” All of the AI models and platforms are developed within clinical settings, are supported by evidence-based practices, and require AI governance approval before institutional rollout, says Dr. Compton.