The TRUST-AI workshop, held over the weekend (25–26 October) before the European Conference on Artificial Intelligence (ECAI) 2025, set out to examine what “trustworthiness” in AI really means, and whether the research community can ever settle on a shared definition. Judging by the conversations, most participants had already discovered (the hard way) that agreeing on a common vocabulary across disciplines is roughly as easy as explaining your AI research to your grandparents. So our interdisciplinary proposal for assessing explainable AI (read the accepted paper) found itself in very sympathetic company.

Over one and a half days, the workshop offered a rich selection of perspectives: neurosymbolic explanations of AI behaviour, ethical analyses of high-risk AI systems, and even a developer-friendly compliance toolbox. By the time I presented our work on assessment dimensions for explainable AI on Sunday morning, no one needed reminding that trustworthiness (explainability included) is not just a fashionable label on AI papers, but something we actually need to operationalise. After all, thanks to the previous day’s group-work session, where we were split into random teams and wrestled with a shared proposal that had to be finalised in under two hours, the organisers’ gentle plea for a shared language no longer felt aspirational; it felt like a survival skill.

Despite the wealth of ideas, the audience remained impressively focused, at least judging by the lively Q&A that followed. Questions ranged from whether we should broaden our scope beyond the harm-oriented explanations required by Article 86 of the AI Act (towards more proactive, pre-emptive types of explanation), to deeper debates about the feasibility and scalability of assessing non-technical dimensions. In true workshop spirit, the discussion managed to validate our current direction while simultaneously pointing to the next round of challenges, just enough to keep things interesting.
Martin Krutský