Vol. 20 / 2025 – International Conference on Virtual Learning
The hallucination problem in Generative Artificial Intelligence: accuracy and trust in digital learning
Bogdan-Iulian CIUBOTARU
Generative Artificial Intelligence (GenAI) is changing digital learning systems by offering personalized content, adaptive feedback, and interactive study materials. The technology can help fill resource gaps and reduce educational inequality, but it also brings challenges. One major issue is the "hallucination problem": GenAI models generate content that sounds plausible but is factually false. If such errors go undetected, they erode trust in the technology and spread misinformation. This paper examines how GenAI works and how it is transforming digital education. The causes of hallucinations are analyzed so that risk mitigation strategies can be defined. Ultimately, using GenAI responsibly in digital education requires that teachers, students, and institutions alike understand its risks and the need for systems built with human-in-the-loop methods. The most effective way to mitigate hallucination risks is hallucination awareness; with it, we can take advantage of the new technology without sacrificing the quality of education.
Keywords: artificial intelligence hallucination, hallucination awareness, artificial intelligence in e-learning
CITE THIS PAPER AS: Bogdan-Iulian CIUBOTARU, "The hallucination problem in Generative Artificial Intelligence: accuracy and trust in digital learning", International Conference on Virtual Learning, ISSN 2971-9291, ISSN-L 1844-8933, vol. 20, pp. 35-45, 2025. https://doi.org/10.58503/icvl-v20y202503