
CEFIG International Study Visit: Discussing Responsible AI with European Youth Workers

Martin Krutský, Jakub Peleška · Mar 25, 2026 · 1 min read

On Wednesday, March 25, the Responsible AI Initiative hosted a study visit for 24 participants in the Erasmus+ Youth Training Course “AI Unlocked”, coordinated by CEFIG International. The group, consisting of youth workers, NGO staff, and educators from various European countries, gathered to explore the ethical and safety dimensions of artificial intelligence.

Presentation for CEFIG International

The visit was part of a week-long training in Kolín focused on AI usage in education. Having already spent several days discussing digital security and practical AI tools, the participants came prepared with a solid foundation and hands-on experience.

During the session, we focused on several critical areas of Responsible AI:

  • Large Language Models vs. Traditional ML: We discussed the distinction between highly visible but often unreliable generalist LLMs and the narrow, domain-specific expertise of traditional machine learning.
  • The AI Act Framework: We explored the EU’s risk-based approach, distinguishing between “High Risk” systems used in recruitment or law enforcement and “Unacceptable Risk” systems involving social scoring or subliminal manipulation.
  • The Alignment Dilemma: A key point of discussion was the challenge of “alignment”—the process of tuning AI to human expectations. We examined whose values are being taught to these models, noting the influence of Western cultural norms and the role of the Global South workforce in data labeling.
  • Beyond the Black Box: We addressed the fundamental bottleneck of AI—its lack of transparency. We introduced the concept of Explainable AI (XAI), demonstrating how visual and importance-based explanations can help humans understand, verify, and potentially challenge AI decisions.
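To make the last point more concrete: one simple, widely used form of importance-based explanation is permutation importance, where a single input feature is shuffled and the resulting change in the model's outputs is measured. The sketch below uses a hypothetical toy linear scorer (not any model discussed during the session) purely to illustrate the idea:

```python
import random

# Hypothetical toy "model": a fixed linear scorer over three features.
# Feature 0 has a large weight, feature 1 a small one, feature 2 none.
def model(x):
    return 2.0 * x[0] + 0.1 * x[1] + 0.0 * x[2]

random.seed(0)
data = [[random.random() for _ in range(3)] for _ in range(200)]
baseline = [model(x) for x in data]

def permutation_importance(feature_idx):
    """Shuffle one feature across the dataset and measure the mean
    absolute change in model output; larger change = more important."""
    shuffled = [x[feature_idx] for x in data]
    random.shuffle(shuffled)
    perturbed = [row[:] for row in data]
    for row, value in zip(perturbed, shuffled):
        row[feature_idx] = value
    return sum(abs(model(p) - b) for p, b in zip(perturbed, baseline)) / len(data)

scores = [permutation_importance(i) for i in range(3)]
# Feature 0 (weight 2.0) dominates; feature 2 (weight 0.0) contributes nothing.
```

Even this minimal version shows why such explanations help a human verify or challenge an AI decision: if a feature that *should* matter (say, a candidate's qualifications) scores near zero while an irrelevant one dominates, something is wrong with the model.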

Group photo with participants

The presentation was followed by an interactive discussion where participants applied these concepts to their own work and asked insightful follow-up questions. We were thrilled to see such high interest in Responsible AI, and we hope these conversations will foster critical thinking regarding the next generation of AI systems.

Written by:
Martin Krutský
PhD student of AI at Czech Technical University interested in AI explainability, neuro-symbolic integration, and ethical AI.
Jakub Peleška
PhD student of AI at Czech Technical University interested in Relational Deep Learning and AI interpretability.