ARTIFICIAL INTELLIGENCE IN MENTAL HEALTH SERVICES: CURRENT APPLICATIONS, CHALLENGES, AND FUTURE DIRECTIONS

Michał Szyszka, Aleksandra Grygorowicz, Klaudia Baran, Michał Ględa, Weronika Radecka, Weronika Kozak, Agnieszka Szreiber, Karol Grela, Karolina Nowacka, Kamil Jabłoński, and Anna Woźniak

International Journal of Innovative Technologies in Social Science, published 2025-12-30. https://doi.org/10.31435/ijitss.4(48).2025.4661

Keywords: Artificial Intelligence, Mental Health, Digital Phenotyping, Machine Learning in Psychiatry

Abstract

Background: Artificial intelligence (AI) is increasingly integrated into mental health care, offering tools for assessment, monitoring, risk prediction, and intervention. Rising global mental health needs, clinician shortages, and advances in digital technologies have accelerated adoption of conversational agents, digital phenotyping, clinical decision-support systems, and large language models (LLMs). Despite substantial promise, concerns remain regarding bias, transparency, safety, and real-world effectiveness.

Methods: This narrative review synthesized peer-reviewed studies published between 2017 and early 2025. Searches were conducted in PubMed, PsycINFO, Scopus, and Google Scholar. Eligible sources included randomized controlled trials, systematic reviews, meta-analyses, observational studies, and major policy documents evaluating AI for mental health diagnosis, monitoring, intervention, or clinical decision support.
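
The exact search strings are not reported here; purely as an illustration, the sketch below shows how a boolean query of the kind this review describes (AI terms crossed with mental health terms, limited to 2017 through early 2025) could be run against PubMed programmatically with Biopython's Entrez client. The query text, the result limit, and the contact address are assumptions for the example, not details taken from the review.

from Bio import Entrez  # Biopython client for the NCBI E-utilities

Entrez.email = "reviewer@example.org"  # placeholder; NCBI asks for a contact address

# Hypothetical reconstruction of a search string of the kind described above.
query = (
    '("artificial intelligence"[Title/Abstract]'
    ' OR "machine learning"[Title/Abstract]'
    ' OR "deep learning"[Title/Abstract])'
    ' AND ("mental health"[Title/Abstract] OR "psychiatry"[Title/Abstract])'
    ' AND ("2017/01/01"[Date - Publication] : "2025/03/31"[Date - Publication])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "matching records;", len(record["IdList"]), "IDs retrieved")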

Results: Findings across more than 120 studies show that AI-based conversational agents provide modest but consistent improvements in symptoms of mild to moderate depression and anxiety. Diagnostic models and triage tools demonstrate potential for identifying psychosis and suicide risk and for predicting treatment response, but external validity remains limited by dataset bias and variable performance in real-world settings. Digital phenotyping offers early-warning capabilities for relapse, while LLMs improve documentation efficiency but struggle with crisis detection and safety-sensitive reasoning. Ethical concerns, particularly those relating to privacy, informed consent, explainability, and algorithmic fairness, remain widespread.
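
To make the external-validity problem concrete, here is a minimal sketch on synthetic data (scikit-learn; nothing below is drawn from the reviewed studies). A risk model is developed at one site and re-evaluated at a second site where one proxy feature carries different signal, which is a typical mechanism behind the real-world performance drops noted above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate_site(n, weights):
    # Hypothetical predictors (symptom scores, sensor summaries, service-use counts).
    X = rng.normal(size=(n, 5))
    p = 1.0 / (1.0 + np.exp(-(X @ weights)))  # site-specific outcome model
    return X, rng.binomial(1, p)

# Concept shift: feature 0 is strongly informative at the development site but
# nearly uninformative at the external site, where feature 3 matters instead.
w_dev = np.array([1.5, -0.5, 0.8, 0.0, 0.3])
w_ext = np.array([0.1, -0.5, 0.8, 1.5, 0.3])

X_dev, y_dev = simulate_site(4000, w_dev)
X_ext, y_ext = simulate_site(4000, w_ext)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
auc_int = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_ext = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"internal AUC: {auc_int:.2f}, external AUC: {auc_ext:.2f}")  # external is lower

The same pattern argues for reporting external or prospective validation alongside internal cross-validation whenever such models are proposed for triage.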

Conclusions: AI has significant potential to enhance mental health care through scalable interventions, improved diagnostic accuracy, and proactive monitoring. However, safe integration requires robust governance, transparency, and sustained human oversight. Future progress depends on large-scale clinical trials, bias mitigation, standardized evaluation frameworks, and the development of equitable hybrid human-AI care models.
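
One small but concrete piece of such an evaluation framework is a subgroup audit: reporting discrimination and sensitivity per demographic group rather than a single pooled metric, so that the algorithmic-fairness gaps discussed above become visible. The sketch below uses entirely synthetic data and an assumed binary group label for illustration.

import numpy as np
from sklearn.metrics import roc_auc_score, recall_score

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, size=n)      # hypothetical demographic attribute
y_true = rng.binomial(1, 0.15, size=n)  # e.g., relapse within six months

# Simulated measurement bias: risk scores are noisier for group 1, as can
# happen when the training data under-represent that population.
noise_sd = np.where(group == 1, 1.5, 0.5)
score = y_true + rng.normal(0.0, noise_sd)
y_pred = (score > 0.5).astype(int)

print(f"pooled AUC: {roc_auc_score(y_true, score):.2f}")
for g in (0, 1):
    m = group == g
    print(f"group {g}: AUC={roc_auc_score(y_true[m], score[m]):.2f}, "
          f"sensitivity={recall_score(y_true[m], y_pred[m]):.2f}")

A pooled metric here can look acceptable while the per-group numbers diverge sharply, which is exactly the failure mode that standardized reporting is meant to surface.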

