EU Publishes Report on Explainable Artificial Intelligence in Education

In October, we announced that our institute’s researchers, Airina Volungevičienė and Giedrė Tamoliūnė, had participated in the international workshop “Explainable AI in Education”, organized by the European Digital Education Hub (EDEH). In addition, Giedrė Tamoliūnė was a member of the EDEH Explainable AI (XAI) working group, contributing to the development of a comprehensive report on the topic.

We are pleased to share that the outcome of this intensive collaboration — the report “Explainable AI in Education: Fostering Human Oversight and Shared Responsibility” — has now been officially published by the Publications Office of the European Union, making it widely accessible.

Why does this report matter?

The publication addresses several critical questions:

  • Why is explainability essential in the use of AI systems in education?
  • What legal frameworks and requirements must be observed?
  • How can the needs of diverse educational stakeholders be met to strengthen trust in AI systems?
  • What competences do educators need in order to work effectively with explainable AI?

Explainable AI in education goes far beyond the technical “explanation” of complex models. It represents a fundamental requirement for building trust, enhancing effectiveness, and empowering human agency in learning environments that increasingly rely on AI tools.

Moreover, explainable AI is key to ensuring that emerging technologies remain aligned with educational values, legal standards, and pedagogical goals. Achieving this requires continuous cooperation across disciplines, sectors, and stakeholder communities.

We invite you to explore the full report by following this link.