Explainable AI at FEE CTU Open Day

Martin Krutský, Jiří Němeček · Nov 28, 2025 · 2 mins read
Image source: photo by Karolína Pštross

Should we follow AI predictions? What kind of explanation of AI’s processes would persuade us? What does explainable AI (XAI) really mean? And which XAI method should we use? On Friday (28 November 2025), we posed these questions to our audience at the Open Day of the Faculty of Electrical Engineering at the Czech Technical University in Prague.

Most of our listeners (and potential students) were hearing about explainability for the first time, so it was crucial to introduce the topic in an approachable way and to showcase the need for explanation through a relatable example. After discussing the problem of black-box AI, we turned to a tried-and-tested scenario of cheating detection during exam proctoring and played a recording of slightly suspicious student behavior, leaving many participants unsure. Was the student cheating or not? And what should one do when the AI flags cheating?

Jiří unpacking what is and what isn't Explainable AI for Open Day visitors at FEE CTU

One of the most interesting outcomes for us was the significant division of opinion across the nearly 20 groups we presented to. Some people clearly saw cheating even in the most unlikely situations, while others were more sympathetic to the proctored student. Some said they would re-examine the video after hearing the AI's decision, while others wouldn't have batted an eyelid. The only answer that remained fairly consistent was that almost nobody wanted to overturn their decision solely on the basis of the AI's recommendation.

As regular readers of our website know, this is exactly our main case for XAI: the only reliable way to be persuaded by an AI recommendation is to uncover the reasoning behind it. We then demonstrated that the XAI landscape is vast and that one must carefully select methods based on the audience for the explanation and the application at hand. It was encouraging to see most listeners left convinced of the usefulness of XAI.

We hope that our listeners gained valuable insight into explainable AI and will take to heart our final pledge to consider the broader impact of technology, particularly if they decide to pursue a career in computer science and artificial intelligence at FEE CTU or elsewhere.

Written by:
Martin Krutský
PhD student of AI at Czech Technical University, interested in AI explainability, neuro-symbolic integration, and ethical AI.
Jiří Němeček
PhD student of AI at Czech Technical University, interested in AI explainability and integer optimization.