Schedule
The workshop takes place on September 20 from 2pm to 7pm and on September 21 from 2pm to 6pm (all times CEST).
Please find the detailed schedule below.

Day 1: September 20
Session 1: Into Explainability – 2:00pm – 2:50pm
2:00pm – 2:10pm
Opening Statements
By the Organizers
2:10pm – 2:30pm
Paper Presentation 1: Towards Perspicuity Requirements
Sarah Sterz, Kevin Baum, Anne Lauber-Rönsberg, and Holger Hermanns
Abstract
2:30pm – 2:50pm
Paper Presentation 2: Explainability auditing for intelligent systems: A rationale for multi-disciplinary perspectives
Markus Langer, Kevin Baum, Kathrin Hartmann, Stefan Hessel, Timo Speith, and Jonas Wahl
Abstract
Session 2: Interdisciplinary Explainability – 3:00pm – 3:40pm
3:00pm – 3:20pm
Paper Presentation 3: Can Explanations Support Privacy Awareness? A Research Roadmap
Wasja Brunotte, Larissa Chazette, and Kai Korte
Abstract
3:20pm – 3:40pm
Paper Presentation 4: On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness
Lena Kästner, Markus Langer, Veronika Lazar, Astrid Schomäcker, Timo Speith, and Sarah Sterz
Abstract
Session 3: Activities 1 – 3:50pm – 4:50pm
3:50pm – 4:20pm
Author Panel
All Presenters of Day 1
4:20pm – 4:50pm
Activity 1: Identifying Open Research Questions and Takeaways
All Participants
Session 4: Industry Keynote – 5:00pm – 7:00pm
5:00pm – 6:30pm
Industry Keynote: Human-Assisted AI – A Visual Analytics Approach to Addressing Industrial AI Challenges
Liu Ren
Abstract
Domain knowledge offers enormous USP (unique selling point) opportunities for industrial AI product and service solutions. However, leveraging domain know-how to enable trustworthy industrial AI products and services with minimum human effort remains a big challenge in both academia and industry. Visual analytics is a promising approach to addressing this problem by leveraging a human-assisted AI framework that combines explainable AI (e.g., semantic representation learning), data visualization, and user interaction. In this talk, I would like to demonstrate the effectiveness of this approach using Bosch Research’s recent innovations that have been successfully applied to several key industrial AI domains such as Smart Manufacturing (I4.0), Autonomous Driving, Driver Assistance, and IoT. Some of the highlighted innovations will also be featured at various upcoming academic venues. In particular, I will share some of the key insights, from a requirements and systems perspective, learned when some of our award-winning research work was transferred to or deployed in real-world industrial AI product and service solutions.
6:30pm – 7:00pm
Summary and Closing
By the Organizers
Day 2: September 21
Session 1: Research Keynote – 2:00pm – 3:15pm
2:00pm – 2:15pm
Review Session
By the Organizers
2:15pm – 3:15pm
Research Keynote: Psychological Dimensions of Explainability
Markus Langer
Abstract
Session 2: Explainability Use-Cases – 3:30pm – 4:30pm
3:30pm – 3:50pm
Paper Presentation 5: Holistic Explainability Requirements for End-to-End Machine Learning in IoT Cloud Systems
My Linh Nguyen, Thao Phung, Duong-Hai Ly, and Hong-Linh Truong
Abstract
3:50pm – 4:10pm
Paper Presentation 6: Cases for Explainable Software Systems: Characteristics and Examples
Mersedeh Sadeghi, Verena Klös, and Andreas Vogelsang
Abstract
4:10pm – 4:30pm
Paper Presentation 7: A Quest of Self-Explainability: When Causal Diagrams meet Autonomous Urban Traffic Manoeuvres
Maike Schwammberger
Abstract
Session 3: Activities 2 – 4:40pm – 6:00pm
4:40pm – 5:25pm
Activity 2: Developing a Research Roadmap
All Participants
5:25pm – 5:40pm
Activity 2: Result Presentation
All Participants
5:40pm – 6:00pm
Summary and Closing
By the Organizers