Schedule

The workshop takes place on September 20 from 2:00pm to 7:00pm and on September 21 from 2:00pm to 6:00pm (all times CEST).

Please find the detailed schedule below.

Day 1: September 20

Session 1: Into Explainability – 2:00pm – 2:50pm

2:00pm – 2:10pm

Opening Statements

By the Organizers

2:10pm – 2:30pm

Paper Presentation 1: Towards Perspicuity Requirements

Sarah Sterz, Kevin Baum, Anne Lauber-Rönsberg, and Holger Hermanns

Abstract
System quality attributes like explainability, transparency, traceability, explicability, interpretability, understandability, and the like are given increasing weight, both in research and in industry. All of these attributes can be subsumed under the term “perspicuity”. We argue in this vision paper that perspicuity is to be regarded as a meaningful and distinct class of quality attributes from which new requirements along with new challenges arise, and that perspicuity as a requirement is needed for legal, societal, and moral reasons, as well as for reasons of consistency within requirements engineering.

2:30pm – 2:50pm

Paper Presentation 2: Explainability auditing for intelligent systems: A rationale for multi-disciplinary perspectives

Markus Langer, Kevin Baum, Kathrin Hartmann, Stefan Hessel, Timo Speith, and Jonas Wahl

Abstract
National and international guidelines for trustworthy artificial intelligence (AI) consider explainability to be a central facet of trustworthy systems. This paper outlines a multi-disciplinary rationale for explainability auditing. Specifically, we propose that explainability auditing can ensure the quality of explainability of systems in applied contexts and can be the basis for certification as a means to communicate whether systems meet certain explainability standards and requirements. Moreover, we emphasize that explainability auditing needs to take a multi-disciplinary perspective, and we provide an overview of four perspectives (technical, psychological, ethical, legal) and their respective benefits with respect to explainability auditing.

Session 2: Interdisciplinary Explainability – 3:00pm – 3:40pm

3:00pm – 3:20pm

Paper Presentation 3: Can Explanations Support Privacy Awareness? A Research Roadmap

Wasja Brunotte, Larissa Chazette, and Kai Korte

Abstract
Using systems as support tools for decision-making is a common part of a citizen’s daily life. Systems support users in various tasks, collecting and processing data to learn about a user and provide more tailor-made services. This data collection, however, means that users’ privacy sphere is increasingly at stake. Informing the user about what data is collected and how it is processed is key to achieving transparency, trustworthiness, and ethics in modern systems. While laws and regulations have come into existence to inform the user about privacy terms, this information is still conveyed to the user in a complex and verbose way, making it unintelligible to them. Meanwhile, explainability is seen as a way to disclose information about a system or its behavior in an intelligible manner. In this work, we propose explanations as a means to enhance users’ privacy awareness. As a long-term goal, we want to understand how to achieve more privacy awareness with respect to systems and develop heuristics that support it, helping end-users to protect their privacy. We present preliminary results on private sphere explanations and outline our research agenda towards our long-term goal.

3:20pm – 3:40pm

Paper Presentation 4: On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness

Lena Kästner, Markus Langer, Veronika Lazar, Astrid Schomäcker, Timo Speith, and Sarah Sterz

Abstract
Recently, requirements for the explainability of software systems have gained prominence. One of the primary motivators for such requirements is that explainability is expected to facilitate stakeholders’ trust in a system. Although this seems intuitively appealing, recent psychological studies indicate that explanations do not necessarily facilitate trust. Thus, explainability requirements might not be suitable for promoting trust. One way to accommodate this finding is, we suggest, to focus on trustworthiness instead of trust. While these two may come apart, we ideally want both: a trustworthy system and the stakeholder’s trust. In this paper, we argue that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness — and that a system’s explainability can crucially contribute to its trustworthiness.

Session 3: Activities 1 – 3:50pm – 4:50pm

3:50pm – 4:20pm

Author Panel

All Presenters of Day 1

4:20pm – 4:50pm

Activity 1: Identifying Open Research Questions and Takeaways

All Participants

Session 4: Industry Keynote – 5:00pm – 7:00pm

5:00pm – 6:30pm

Industry Keynote: Human-Assisted AI – A Visual Analytics Approach to Addressing Industrial AI Challenges

Liu Ren

Abstract
Domain knowledge offers enormous USP (unique selling point) opportunities for industrial AI product and service solutions. However, leveraging domain know-how to enable trustworthy industrial AI products and services with minimum human effort remains a big challenge in both academia and industry. Visual analytics is a promising approach to addressing this problem by leveraging a human-assisted AI framework that combines explainable AI (e.g., semantic representation learning), data visualization, and user interaction. In this talk, I would like to demonstrate the effectiveness of this approach using Bosch Research’s recent innovations that have been successfully applied to several key industrial AI domains such as Smart Manufacturing (I4.0), Autonomous Driving, Driver Assistance, and IoT. Some of the highlighted innovations will also be featured at various upcoming academic venues. In particular, I will also share some of the key insights learned from a requirements and systems perspective when some of our award-winning research work was transferred to or deployed in real-world industrial AI product and service solutions.

6:30pm – 7:00pm

Summary and Closing

By the Organizers

Day 2: September 21

Session 1: Research Keynote – 2:00pm – 3:15pm

2:00pm – 2:15pm

Review Session

By the Organizers

2:15pm – 3:15pm

Research Keynote: Psychological Dimensions of Explainability

Markus Langer

Abstract
Although there are already several decades of research on explainable artificial intelligence (XAI) in computer science, the need for multi-disciplinary perspectives on this topic has only recently received increasing attention. In this talk, I will introduce a psychological perspective on XAI. Specifically, I will provide an overview of psychological theories that – applied to human-computer interaction – can be used to derive hypotheses about the possible effects of explanatory information on psychological variables (e.g., trust, perceived justice).

Session 2: Explainability Use-Cases – 3:30pm – 4:30pm

3:30pm – 3:50pm

Paper Presentation 5: Holistic Explainability Requirements for End-to-End Machine Learning in IoT Cloud Systems

My Linh Nguyen, Thao Phung, Duong-Hai Ly, and Hong-Linh Truong

Abstract
End-to-end machine learning (ML) in Internet of Things (IoT) Cloud systems consists of multiple processes, covering data, model, and service engineering, and involves multiple stakeholders. Therefore, to be able to explain ML to relevant stakeholders, it is important to identify explainability requirements in a holistic manner. In this paper, we present our methodology for identifying explainability requirements for end-to-end ML when developing ML services to be deployed within IoT Cloud systems. We identify and classify explainability requirements through (i) the involvement of relevant stakeholders, (ii) end-to-end data, model, and service engineering processes, and (iii) multiple explainability aspects. We illustrate our work with a case of predictive maintenance for Base Transceiver Stations (BTS) in the telco domain.

3:50pm – 4:10pm

Paper Presentation 6: Cases for Explainable Software Systems: Characteristics and Examples

Mersedeh Sadeghi, Verena Klös, and Andreas Vogelsang

Abstract
The need for systems to explain behavior to users has become more evident with the rise of complex technology like machine learning or self-adaptation. In general, the need for an explanation arises when the behavior of a system does not match the user’s expectation. However, there may be several reasons for a mismatch, including errors, goal conflicts, or multi-agent interference. Given the various situations, we need precise and agreed descriptions of explanation needs as well as benchmarks to align research on explainable systems. In this paper, we present a taxonomy that structures needs for an explanation according to different reasons. For each leaf node in the taxonomy, we provide a scenario that describes a concrete situation in which a software system should provide an explanation. These scenarios, called explanation cases, illustrate the different demands for explanations. Our taxonomy can guide the requirements elicitation for explanation capabilities of interactive intelligent systems, and our explanation cases form the basis for a common benchmark. We are convinced that both the taxonomy and the explanation cases help the community to align future research on explainable systems.

4:10pm – 4:30pm

Paper Presentation 7: A Quest of Self-Explainability: When Causal Diagrams meet Autonomous Urban Traffic Manoeuvres

Maike Schwammberger

Abstract
While autonomous systems are increasingly capturing the market, they also become more and more complex. Thus, the (self-)explainability of these complex and adaptive systems becomes ever more important. We introduce explainability to our previous work on formally proving properties of autonomous urban traffic manoeuvres. We build causal diagrams by connecting actions of a crossing protocol with their reasons and derive explanation paths from these diagrams. We strive to bring our formal methods approach together with requirements engineering approaches by suggesting the use of run-time requirements engineering to update our causal diagrams at run-time.
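
To make the idea of deriving explanation paths from a causal diagram more concrete, here is a minimal Python sketch (our illustration, not taken from the paper; all action and reason names are hypothetical): the diagram is modeled as a mapping from each action or event to the reasons that caused it, and an explanation path is obtained by walking the diagram backwards from an observed action.

# Illustrative sketch only (not from the paper); all names are hypothetical.
# A toy causal diagram for a crossing protocol: action/event -> list of reasons.
causal_diagram = {
    "brake": ["crossing_vehicle_detected"],           # action caused by a reason
    "crossing_vehicle_detected": ["sensor_reading"],   # reason caused by an earlier event
    "sensor_reading": [],                              # root cause, nothing before it
}

def explanation_path(action, diagram):
    """Walk the causal diagram backwards from an action to its root cause."""
    path = [action]
    reasons = diagram.get(action, [])
    while reasons:
        cause = reasons[0]          # follow the first recorded reason
        path.append(cause)
        reasons = diagram.get(cause, [])
    return path

print(explanation_path("brake", causal_diagram))
# ['brake', 'crossing_vehicle_detected', 'sensor_reading']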

Session 3: Activities 2 – 4:40pm – 6:00pm

4:40pm – 5:25pm

Activity 2: Developing a Research Roadmap

All Participants

5:25pm – 5:40pm

Activity 2: Result Presentation

All Participants

5:40pm – 6:00pm

Summary and Closing

By the Organizers