Implementation of Large Language Models Across Multiple Domains of Psychology: A Systematic Literature Review
DOI: https://doi.org/10.52436/1.jpti.1105
Keywords: Large Language Models, Literature Review, Psychology
Abstract
The implementation of large language models (LLMs) in psychology presents significant opportunities to improve diagnosis, clinical decision-making, and medical research. This study conducted a systematic literature review to explore recent research on LLM applications in psychology. Following the PRISMA guidelines, a literature search was performed in the ScienceDirect database, and inclusion and exclusion criteria were applied to identify relevant studies. The extracted data covered research objectives, methodology, application domain, type of data used, key findings, and results. A total of 20 studies were included after the selection process. This review provides a comprehensive overview of LLM applications in psychology, identifying opportunities, challenges, and future research directions of value to researchers, practitioners, and policymakers. The findings indicate that integrating LLMs into psychological practice has transformative potential to improve the quality and accessibility of mental health services, but requires the development of comprehensive ethical and regulatory frameworks to ensure safe and effective implementation.