
AI Days: Understanding AI-Enabled Injustice And Biases

Martin Krutský, Jiří Němeček, Jakub Peleška · Nov 12, 2025 · 3 mins read
Image source: prg.ai, photographed by adam & costey

When can we trust AI? What does it take to overturn complex AI decisions? How biased are such decisions made by today’s state-of-the-art AI models? And is the accuracy of AI models all we need? We shared these and many other questions with the audience of the 2025 edition of AI Days, a two-week program spanning 36 Czech cities that featured both public educational events and researcher meetups.

Talk on AI explanations in high-risk scenarios; photo by adam & costey

Last year, we had a great experience presenting on AI ethics to technically oriented students. This year, at the AI for Talents event, we took on another type of challenge: two talks (November 6 and 7) for a large audience of high-school students with diverse backgrounds. Aiming for an interactive experience, we asked: What kind of information would you like to have when an AI makes an important decision about your studies, career, or life? In a realistic online test scenario with an automated proctoring system, the audience had to decide whether to trust the AI’s decision or override it. And while the concrete results of this interactive experiment varied from one group of students to another, one lesson always persisted: some of the decision instances remained highly ambiguous. How, then, to resolve the uncertainty? We provided two types of explanations of the automated decision process: one based on visual cues, the other on the importance of extracted features. Yet this move only introduced another dilemma: What makes a good explanation of an AI prediction? We briefly explained that this is exactly our current work in the RAI initiative and urged our audience to always think critically about AI-based decisions.
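
For readers curious what a feature-importance explanation looks like in practice, here is a minimal Python sketch using scikit-learn’s permutation importance on a toy classifier. Everything in it is illustrative: the feature names and synthetic data are hypothetical stand-ins for a proctoring system, not the actual system shown in the talk.

```python
# Minimal sketch of a feature-importance explanation (illustrative only).
# The "proctoring" features and labels below are synthetic, hypothetical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical features an automated proctoring system might extract.
feature_names = ["gaze_off_screen_ratio", "keystroke_pause_var",
                 "window_switches", "audio_noise_level"]
X = rng.random((500, len(feature_names)))
# Synthetic labels: flag sessions with frequent off-screen gaze and switches.
y = ((X[:, 0] + 0.5 * X[:, 2]) > 0.9).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much the score drops when a feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

A ranking like this tells the user which signals drove the model’s decision overall, but, as we discussed with the students, it does not by itself settle whether any single ambiguous decision deserves trust.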

Audience at the AI for Talents event; photo by adam & costey

A couple of days later, as part of an event on November 11, we were invited to discuss the influence of AI on the LGBTQ+ population, asking a central question: “Is AI an ally or an enemy of LGBTQ+ diversity?” We briefly introduced how Large Language Models (LLMs) work and what sets them apart from earlier types of AI. We then described how the models’ capabilities and “safety” depend on the data provided for training, touching on the secretive nature of cutting-edge model training and the psychological toll on workers in third-world countries. Finally, we discussed some downsides (and possible upsides) of AI models for LGBTQ+ people and minorities in general, and concluded the talk with a few suggestions, based on our insights, on how current AI tools should (and should not) be used. Afterwards, we had a fruitful discussion with Juraj Hvorecký, a philosopher from CETE-P.

Discussion at Young AI Research Forum; photo by adam & costey

Our contribution to the AI Days would not be complete, however, without pitching our ideas to our peers at the Young AI Research Forum (November 12). We decided to take a bird’s-eye view of our work and pinpoint the arguments for pursuing explainability research, while stressing the necessity of interdisciplinary dialogue. Although we were initially unsure whether such a non-technical overview of our efforts would fit in, the talk complemented the others perfectly, matching the overarching theme of this year’s forum: interdisciplinarity.

On behalf of all RAI members, we thank the organizers of AI Days, especially the amazing team from prg.ai!

Written by:
Martin Krutský
PhD student of AI at Czech Technical University interested in AI explainability, neuro-symbolic integration, and ethical AI.
Jiří Němeček
PhD student of AI at Czech Technical University interested in AI explainability and Integer Optimization.
Jakub Peleška
PhD student of AI at Czech Technical University interested in Relational Deep Learning and AI interpretability.