Schedule

The workshop takes place on August 15, from 11:00am to 4:30pm (CEST).

Please find the detailed schedule below.

Session 1: Keynotes – 11:00am-12:10pm

11:00am – 11:10am

Opening Statements

By the Organizers

11:10am – 11:40am

Keynote 1: Constructing Explainability

Katharina Rohlfing 

Abstract

Katharina Rohlfing will give some insights into the goals, methods and current research challenges of the Transregional Collaborative Research Centre 318 “Constructing Explainability”.

11:40am – 12:10pm

Keynote 2: Foundations of Perspicuous Software Systems

Holger Hermanns

Abstract

Holger Hermanns will give some insights into the goals, methods and current research challenges of the Transregional Collaborative Research Centre 248 “Foundations of Perspicuous Software Systems”.

Session 2: Formal Frameworks for Explainability – 12:20pm-1:10pm

12:20pm – 12:50pm

Paper Presentation 1: Self-Explanation in Systems of Systems

Goerschwin Fey, Martin Fränzle and Rolf Drechsler

Abstract

Technical systems have reached a complexity that renders their behaviour difficult to comprehend or seemingly non-deterministic. Self-explaining digital systems would therefore strongly support tasks like debugging, failure diagnosis, reliable operation, and optimization. Useful self-explanation must be efficiently computable in a technical system and must be understandable to the addressee, who might be a human or another technical system at the same or a different system layer.

We provide a conceptual framework for self-explanation, including a formalization of the inherent concepts of explanation, understandability, etc. We instantiate these generic concepts on Mealy machine models of embedded systems and illustrate their use with an example from autonomous driving.
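
For readers unfamiliar with the formalism, the sketch below shows a minimal Mealy machine, i.e. a finite-state machine whose output depends on the current state and the current input. It is purely illustrative: the states, inputs, and outputs are invented here and not taken from the paper.

```python
# Purely illustrative sketch of a Mealy machine (not code from the paper).
# A Mealy machine maps (state, input) -> (next state, output).

class MealyMachine:
    def __init__(self, initial_state, transitions):
        # transitions: dict mapping (state, input_symbol) -> (next_state, output_symbol)
        self.state = initial_state
        self.transitions = transitions

    def step(self, symbol):
        self.state, output = self.transitions[(self.state, symbol)]
        return output

# Invented toy example loosely inspired by the driving setting: the controller
# outputs "brake" whenever it reads an "obstacle" input.
machine = MealyMachine(
    initial_state="cruise",
    transitions={
        ("cruise", "clear"): ("cruise", "drive"),
        ("cruise", "obstacle"): ("stopped", "brake"),
        ("stopped", "clear"): ("cruise", "drive"),
        ("stopped", "obstacle"): ("stopped", "brake"),
    },
)
print([machine.step(s) for s in ["clear", "obstacle", "clear"]])  # ['drive', 'brake', 'drive']
```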

12:50pm – 1:10pm

Paper Presentation 2: Towards formal concepts for explanation timing and justifications

Akhila Bairy, Willem Hagemann, Astrid Rakow and Maike Schwammberger

Abstract

Explainability of automated systems is becoming increasingly important, as these systems grow more complex while we – individually and as a society – become more reliant on them. There is a vast amount of work on explainability in the field of computer science alone, but little work on formal concepts, an agreed terminology, or a categorisation of terms. We derive some notions and questions from long-standing philosophical discussions about the nature of explanations and motivate why these should be captured by formal concepts in computer science. We argue that both the questions “What is an explanation?” and “When to explain?” need to be considered for building trustworthy systems.

Session 3: Challenges and Solutions for Constructing Explanations – 1:40pm-2:40pm

1:40pm – 2:00pm

Paper Presentation 3: Expanding the horizon of Linear Temporal Logic Inference for Explainability

Rajarshi Roy and Daniel Neider

Abstract

Linear Temporal Logic (LTL), a logical formalism originally developed for the verification of reactive systems, has emerged as a popular model for explaining the behavior of complex systems. The popularity of LTL as an explainable model can mainly be attributed to its similarity to natural language and its ease of use owing to its simple syntax and semantics. To aid explanations using LTL, a task commonly known as inference of Linear Temporal Logic formulas, or LTL inference for short, has been of growing interest in recent years. Roughly, this task asks to infer succinct LTL formulas that describe a system based on its recorded observations. Inferring LTL formulas from a given set of positive and negative examples is a well-studied setting, with a number of competing approaches to tackle it. However, for the widespread applicability of LTL as an explainable model, we argue that one still needs to consider a number of different settings. In this vision paper, we thus discuss different problem settings of LTL inference and highlight how one can expand the horizons of LTL inference by investigating these settings.
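
As a rough illustration of the positive/negative-example setting mentioned in the abstract (and not the authors' algorithm), the sketch below enumerates a tiny, hypothetical space of LTL-style templates and keeps those consistent with labelled finite traces.

```python
# Illustrative sketch only (not the authors' method): pick an LTL-style template
# that is satisfied by all positive traces and by no negative trace.

# A finite trace is a list of sets of atomic propositions holding at each step.
positive = [[{"p"}, {"p", "q"}, {"q"}], [{"p"}, {"q"}]]
negative = [[{"p"}, set(), set()]]

def globally(prop):    # G prop: prop holds at every step of the trace
    return lambda trace: all(prop in step for step in trace)

def eventually(prop):  # F prop: prop holds at some step of the trace
    return lambda trace: any(prop in step for step in trace)

# Hypothetical, hand-picked candidate formulas for illustration.
candidates = {"G p": globally("p"), "F q": eventually("q"), "G q": globally("q")}

consistent = [
    name for name, check in candidates.items()
    if all(check(t) for t in positive) and not any(check(t) for t in negative)
]
print(consistent)  # ['F q']
```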

2:00pm – 2:20pm

Paper Presentation 4: An Abstract Architecture for Explainable Autonomy in Hazardous Environments

Matt Luckcuck, Hazel M Taylor and Marie Farrell

Abstract

Autonomous robotic systems are being proposed for use in hazardous environments, often to reduce the risks to human workers. In the immediate future, it is likely that human workers will continue to use and direct these autonomous robots, much like other computerised tools but with more sophisticated decision-making. Therefore, one important area on which to focus engineering effort is ensuring that these users trust the system. Recent literature suggests that explainability is closely related to how trustworthy a system is. Like safety and security properties, explainability should be designed into a system, instead of being added afterwards. This paper presents an abstract architecture that supports explainable autonomy, providing a design template for implementing explainable autonomous systems. We present a worked example of how our architecture could be applied in the civilian nuclear industry, where both workers and regulators need to trust the system’s decision-making capabilities.

2:20pm – 2:40pm

Paper Presentation 5: Challenges in Achieving Explainability for Cooperative Transportation Systems

Björn Koopmann, Alexander Trende, Karina Rothemann, Linda Feeken, Jakob Suchan, Daniela Johannmeyer and Yvonne Brück

Abstract

The anticipated presence of highly automated vehicles and intelligent traffic infrastructures in urban traffic will yield new types of demand-driven mobility and value-added services. To provide these services, future transportation systems will consist of large-scale, cooperative ensembles of highly automated and connected systems that operate in mixed traffic and have to interact with human drivers and vulnerable road users while jointly ensuring traffic efficiency and safety. We posit that the ability to explain processes and decisions is essential for such systems. Adequately addressing the needs of the involved actors will require that explainability and trustworthiness are handled as core properties in the development of highly automated systems. To support explainability-driven design approaches, we identify a set of explainability challenges in the context of a large-scale ongoing endeavor on cooperative transportation and present approaches to target these challenges.

Session 4: Evaluation and Discussion of Explainability – 2:50pm-4:30pm

2:50pm – 3:10pm

Paper Presentation 6: How to Evaluate Explainability? – A Case for Three Criteria

Timo Speith

Abstract

The increasing complexity of software systems and the influence of software-supported decisions in our society have sparked the need for software that is safe, reliable, and fair. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. However, in order to develop explainable systems, we need to understand when this NFR is present in a system. To this end, appropriate evaluation methods are needed. Yet the field is crowded with evaluation methods, and there is no consensus on which are the “right” ones. There is not even agreement on which criteria should be evaluated. In this vision paper, we provide a multidisciplinary motivation for three such quality criteria concerning the information that systems should provide: comprehensibility, fidelity, and assessability. Our aim is to fuel the discussion regarding these criteria, such that adequate evaluation methods for them will be conceived.

3:10pm – 3:50pm

Panel Discussion

 

3:50pm – 4:20pm

Research Activity

4:20pm – 4:30pm

Closing

By the Organizers

Social Event in Oldenburg – starting 5:30pm