On August 3rd, the 4th Workshop on Ethical Artificial Intelligence took place in Toronto, held in conjunction with the ACM SIGKDD'25 conference, and we had the pleasure of presenting our work: the latest iteration of our framework for assessing XAI methods through the lens of the AI Act. This work should help practitioners choose suitable XAI methods for their use case (or even inform the choice of the predictive model itself).

The accepted paper and the oral presentation were well received, and we also put up a poster in the afternoon, which sparked further discussions. Overall, it was a fruitful event covering important topics, such as various theories of bias and the exponential growth in reported AI incidents (a staggering 55% of which involve third-party tools). The reception of our work showed us that we are moving in a meaningful direction, and we hope to finalize it and prepare it for a major submission in the coming months.
Jiří Němeček
RAI at KU Leuven: Assessing Explainability for AI Safety Governance