AUTOMATED CONCEPTUALIZATION OF SUBJECT AREAS USING LARGE LANGUAGE MODELS IN THE EDUCATIONAL PROCESS

Authors

DOI:

https://doi.org/10.37406/2521-6449/2026-1-17

Keywords:

large language models, automated conceptualization, knowledge organization, semantic modeling

Abstract

This article focuses on the development and study of an information technology for the automated conceptualization of subject areas using large language models, and justifies its application in the pedagogical process. The research is relevant due to the growing role of language models in processing large text datasets and the need for formalized approaches to knowledge structuring that ensure logical consistency, reproducibility, and scientific accuracy, particularly in educational and humanities contexts. The article presents a step-by-step conceptualization technology that involves forming a primary set of concepts, establishing semantic connections, constructing a structured knowledge model, interpreting applied artifacts, and validating the results with experts. The article demonstrates that large language models should be treated as tools supporting conceptual analysis, integrated into a formalized methodological cycle with quality-control and expert-evaluation mechanisms, rather than as autonomous knowledge generators. We tested the technology's practical application by conceptualizing the subject area of DevOps as a complex, dynamic, multi-component domain. This experiment enabled us to compare the results of different language models at successive stages of conceptualization and demonstrated that combining automated and expert procedures significantly enhances the stability and consistency of the resulting knowledge models. Special attention was given to implementing the research results in the educational process. The proposed technology can serve as a methodological tool for developing skills in text analysis, semantic modeling, and critical evaluation of language-model outputs, as well as for forming digital humanities competence. The results confirm the technology's universality and its potential applications in interdisciplinary research and pedagogical practice in digital education.
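The conceptualization cycle described in the abstract (forming a primary concept set, establishing semantic connections, building a structured model, and expert validation) can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the authors' implementation: the `llm_extract_concepts` and `llm_link_concepts` functions are hypothetical stubs standing in for real language-model calls, and the heuristics inside them exist only to make the example runnable.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeModel:
    """A structured knowledge model: concepts plus typed relations."""
    concepts: set = field(default_factory=set)
    relations: set = field(default_factory=set)  # (source, predicate, target) triples

def llm_extract_concepts(text: str) -> set:
    """Step 1 (stub): form a primary concept set.
    A real system would prompt an LLM; here we crudely take capitalized terms."""
    return {w.strip(".,").lower() for w in text.split() if w[0].isupper()}

def llm_link_concepts(concepts: set) -> set:
    """Step 2 (stub): propose candidate semantic links between concepts.
    A real system would ask an LLM to type each relation."""
    ordered = sorted(concepts)
    return {(a, "related_to", b)
            for i, a in enumerate(ordered) for b in ordered[i + 1:]}

def expert_validate(model: KnowledgeModel, approved: set) -> KnowledgeModel:
    """Steps 4-5: the human-in-the-loop filter. Only relations an expert
    approved survive, so the LLM never acts as an autonomous generator."""
    return KnowledgeModel(model.concepts, model.relations & approved)

def conceptualize(text: str, approved: set) -> KnowledgeModel:
    concepts = llm_extract_concepts(text)        # 1. primary concept set
    relations = llm_link_concepts(concepts)      # 2. candidate semantic links
    model = KnowledgeModel(concepts, relations)  # 3. structured model
    return expert_validate(model, approved)      # 4-5. expert validation

model = conceptualize(
    "Docker and Kubernetes underpin Continuous delivery in DevOps.",
    approved={("devops", "related_to", "kubernetes"),
              ("continuous", "related_to", "docker")},
)
print(sorted(model.concepts))
print(sorted(model.relations))
```

The point of the sketch is the control flow, not the stubs: automated steps only propose, while the expert-validation step decides what enters the final model, mirroring the quality-control cycle the article argues for.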

References

Глушков В. М., Амосов М. М., Артеменко І. О. Енциклопедія кібернетики : у 2-х т. / за ред. В. М. Глушкова. Київ : Головна редакція УРЕ, 1973.

Кунанець Н., Яромич М. Виділення концептів у літературних текстах із використанням великих мовних моделей. Вісник науки та освіти. 2025. № 2(32). С. 343–357. https://doi.org/10.52058/2786-6165-2025-2(32)-343-357

Пасічник В., Яромич М. Великі мовні моделі та онтології у філологічних дослідженнях. Актуальні питання гуманітарних наук. 2025. Вип. 83. Т. 3. С. 236–250. https://doi.org/10.24919/2308-4863/83-3-35

Akasiadis C., Nentidis A., Charalambidis A., Artikis A. Detecting and fixing inconsistencies in large knowledge graphs. The European Journal on Artificial Intelligence. 2025. https://doi.org/10.1177/30504554251353512

Bian H. LLM-empowered knowledge graph construction: A survey. arXiv preprint arXiv:2510.20345. 2025. https://doi.org/10.48550/arXiv.2510.20345

Chen Y. J., Chu H. C., Chen Y. M., Chao C. Y. Adapting domain ontology for personalized knowledge search and recommendation. Information & Management. 2013. Vol. 50, № 6. P. 285–303. https://doi.org/10.1016/j.im.2013.05.001

Doumanas D., Soularidis A., Spiliotopoulos D. Fine-tuning large language models for ontology engineering: a comparative analysis of GPT-4 and Mistral. Applied Sciences. 2025. Vol. 15, № 4. P. 2146. https://doi.org/10.3390/app15042146

Fitsilis P., Damasiotis V., Kyriatzis V. DOLLmC: DevOps for large language model customization. URL: https://arxiv.org/abs/2405.11581

Fogliato R. Expert-augmented machine learning. Proceedings of the National Academy of Sciences. 2020. Vol. 117, № 9. P. 4571–4577. https://doi.org/10.1073/pnas.1906831117

Fonseca C. M., Almeida J. P. A., Guizzardi G., Carvalho V. A. Multi-level conceptual modeling: Theory, language and application. Data & Knowledge Engineering. 2021. Vol. 134. P. 101894. https://doi.org/10.1016/j.datak.2021.101894

Glaser B. G. Conceptualization: on theory and theorizing using grounded theory. International Journal of Qualitative Methods. 2002. Vol. 1, № 2. P. 23–38. https://doi.org/10.1177/160940690200100203

Guo Y. Evaluating large language models: a comprehensive survey [Electronic resource]. 2023. URL: https://arxiv.org/abs/2310.19736

He Y., Chen J., Dong H., Horrocks I. Exploring large language models for ontology alignment. URL: https://arxiv.org/abs/2309.07172

Kaadoud I. C., Rougier N. P., Alexandre F. Knowledge extraction from the learning of sequences in a long short term memory (LSTM) architecture. Knowledge-Based Systems. 2022. Vol. 235. P. 107657. https://doi.org/10.1016/j.knosys.2021.107657

Kiefer C. Smarter learning: scaling personalization with AI. Training Industry. 2025. URL: https://trainingindustry.com/articles/artificial-intelligence/smarter-learning-personalization-at-scale-with-ai-driven-knowledge-graphs/

Kosch T. Risk or chance? Large language models and reproducibility in human-computer interaction research. URL: https://arxiv.org/abs/2404.15782

Manda P. Large language models in bio-ontology research: A review. Bioengineering. 2025. Vol. 12, № 11. P. 1260. https://doi.org/10.3390/bioengineering12111260

Published

2026-04-17

How to Cite

Pasichnyk, V. V., Luchkevych, M. M., Yaromych, M. V., & Orlov, M. V. (2026). AUTOMATED CONCEPTUALIZATION OF SUBJECT AREAS USING LARGE LANGUAGE MODELS IN THE EDUCATIONAL PROCESS. Professional and Applied Didactics, (1), 114–120. https://doi.org/10.37406/2521-6449/2026-1-17

Issue

Section

EDUCATIONAL INNOVATIONS: IDEAS, REALITIES, PROSPECTS