Schedule

The workshop takes place on April 20th, from 9:00am to 3:30pm (GMT).

Please find the detailed schedule below.

Session 1: Opening & Keynote – 9:00am-10:30am

09:00am – 09:15am

Opening Statements

By the Organizers

09:15am – 10:30am

Keynote: Illuminating the black mirror in system design: Promoting human values and their reflection through explainability

Irit Hadar

Abstract

Explainability transcends the realm of individual requirements; it further serves as a mirror reflecting how a system addresses various demands. While we assess the quality of explainability based on the system’s ability to elucidate its outcomes, the very need for explanation sheds light on design decisions, fostering higher awareness of human values like ethics and responsibility. In this talk, I will delve into the current priorities of software developers concerning the integration of specific quality attributes addressing human values, such as privacy, security, and sustainability, into the software they develop. Drawing from theories rooted in cognitive science, psychology, and organizational science, I will explore strategies for elevating these priorities within organizational contexts, where needed. Finally, I will propose approaches that show promise for promoting attitudes and behaviors that support explainability, and for understanding its interplay with other objectives guiding the system development process.

Session 2: Paper Session – 11:00am-12:30pm

11:00am – 11:35am

Generating Context-Aware Contrastive Explanations in Rule-based Systems

Lars Herbold, Mersedeh Sadeghi and Andreas Vogelsang

Abstract

Human explanations are often contrastive, meaning that they do not answer the indeterminate "Why?" question, but instead "Why P, rather than Q?". Automatically generating contrastive explanations is challenging because the contrastive event (Q) represents the expectation of a user in contrast to what happened. We present an approach that predicts a potential contrastive event in situations where a user asks for an explanation in the context of rule-based systems. Our approach analyzes a situation that needs to be explained and then selects the most likely rule a user may have expected instead of what the user has observed. This contrastive event is then used to create a contrastive explanation that is presented to the user. We have implemented the approach as a plugin for a home automation system and demonstrate its feasibility in four test scenarios.
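
To make the pipeline concrete, here is a minimal illustrative sketch in Python of the general idea, not the authors’ plugin: the rules, the overlap heuristic for predicting the expected rule, and the home-automation scenario are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str        # hypothetical rule identifier
    action: str      # what the rule does when it fires
    condition: dict  # sensor/device states under which it fires

def overlap(context: dict, condition: dict) -> float:
    """Fraction of a rule's condition already satisfied by the current context."""
    if not condition:
        return 0.0
    return sum(context.get(k) == v for k, v in condition.items()) / len(condition)

def contrastive_explanation(context: dict, fired: Rule, rules: list[Rule]) -> str:
    # Predict the contrastive event Q: among the rules that did not fire,
    # pick the one whose condition best matches the observed situation,
    # i.e. the rule the user most plausibly expected instead.
    expected = max((r for r in rules if r is not fired),
                   key=lambda r: overlap(context, r.condition))
    unmet = {k: v for k, v in expected.condition.items() if context.get(k) != v}
    return (f"The system did '{fired.action}' rather than '{expected.action}' "
            f"because these conditions were not met: {unmet}")

# Hypothetical home-automation scenario
rules = [
    Rule("night_light", "dim the lights", {"time": "night", "motion": True}),
    Rule("day_light", "set lights to full brightness", {"time": "day", "motion": True}),
]
context = {"time": "night", "motion": True}
print(contrastive_explanation(context, rules[0], rules))
```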

11:35am – 12:00pm

What if Autonomous Systems had a Game Master? Targeted Explaining with the help of a Supervisory Control System

Akhila Bairy and Martin Fränzle

Abstract

In the era of increasing automation, systems are making more autonomous decisions than ever before, often leading to conflicts with other systems. When humans are entangled in such conflicts without a proper explanation, they experience frustration. Considering the human’s frustration level in such scenarios is therefore a key aspect of resolving these conflicts. Our approach involves synthesizing an appropriate supervisory control system (SCS) within a gaming context. The main focus of this paper is to seamlessly blend an SCS with conflict resolution capabilities, capitalizing on its explanatory potential. The central objective is to explore how this integration can be accomplished by employing the SCS as a game master, or moderator, within a game-theoretic framework. In this capacity, the system becomes a central authority equipped with comprehensive information relevant to the game, possessing the knowledge and insights required to navigate intricate scenarios. Employing a game-theoretic approach allows us to harness the system’s in-depth understanding of the game’s dynamics to strategically resolve conflicts and optimize outcomes. The game master utilizes an explanation model designed to provide understandable and situationally aware explanations, effectively mitigating the frustration of the humans involved in the process.
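
As a rough intuition for the game-master role, the toy sketch below (our illustration, not the paper’s model) shows a supervisor with full information scoring every joint action of two conflicting agents, picking the least frustrating resolution, and explaining it; the actions and frustration values are made up.

```python
from itertools import product

ACTIONS = ("yield", "proceed")

def frustration(own: str, other: str) -> float:
    # Made-up costs: an unresolved conflict is far worse than giving way.
    if own == other == "proceed":
        return 10.0  # both insist on proceeding: conflict persists
    return 1.0 if own == "yield" else 0.0

def game_master() -> str:
    # The supervisor evaluates all joint actions and minimizes total frustration.
    best = min(product(ACTIONS, repeat=2),
               key=lambda j: frustration(j[0], j[1]) + frustration(j[1], j[0]))
    a1, a2 = best
    return (f"Resolution: agent 1 should {a1}, agent 2 should {a2}. "
            f"This joint action minimizes the total frustration of both agents.")

print(game_master())
```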

12:00pm – 12:25pm

Towards a Computational Architecture for Co-Constructive Explainable Systems

Meisam Booshehri, Hendrik Buschmeier, Philipp Cimiano, Stefan Kopp, Jaroslaw Kornowicz, Olesja Lammert, Marco Matarese, Dimitry Mindlin, Amelie S. Robrecht, Anna-Lisa Vollmer, Petra Wagner and Britta Wrede

Abstract

Explanations are not unidirectional but are interactive processes by which an explainer and an explainee co-construct the explanation. In the realm of explainable artificial intelligence (XAI), where computer systems and robots justify their actions to human users, the co-construction of explanations remains a key yet underexplored aspect of XAI. This short paper proposes an architectural design for technical systems that facilitates the interactive co-construction of explanations. By outlining its fundamental components and their specific roles, we aim to contribute to the advancement of computational XAI frameworks that actively engage users in the explanation process.
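
As a loose illustration of co-construction (not the paper’s proposed architecture), the sketch below builds an explanation up over turns, with the explainee’s follow-ups steering each refinement; all component and method names are hypothetical.

```python
from typing import Optional

class Explainer:
    """System-side component: produces and refines explanation fragments."""
    def explain(self, decision: str, focus: Optional[str] = None) -> str:
        if focus is None:
            return f"I chose '{decision}' based on the current context."
        return f"Regarding {focus}: '{decision}' scored best on that aspect."

class Explainee:
    """Stand-in for the human partner: raises follow-up points until satisfied."""
    def __init__(self, open_points: list[str]):
        self.open_points = list(open_points)
    def next_point(self) -> Optional[str]:
        return self.open_points.pop(0) if self.open_points else None

def co_construct(explainer: Explainer, explainee: Explainee, decision: str) -> list[str]:
    # The explanation emerges turn by turn: each follow-up from the explainee
    # shapes the next refinement, rather than one monolithic explanation.
    transcript = [explainer.explain(decision)]
    while (focus := explainee.next_point()) is not None:
        transcript.append(explainer.explain(decision, focus))
    return transcript

for turn in co_construct(Explainer(), Explainee(["timing", "alternatives"]), "route A"):
    print(turn)
```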

Session 3: Paper Session & Activity – 2:00pm-3:30pm

02:00pm – 02:25pm

Why Reinforcement Learning in Energy Systems Needs Explanations

Hallah Shahid Butt and Benjamin Schäfer

Abstract

With economic development, the complexity of infrastructure has increased drastically. Similarly, with the shift from fossil fuels to renewable sources of energy, there is a dire need for systems that not only predict and forecast with accuracy but also help in understanding the prediction process. Artificial intelligence and machine learning techniques have helped in finding well-performing solutions to different problems in the energy sector. However, the usage of state-of-the-art techniques like reinforcement learning is, not surprisingly, unconvincing. This paper discusses the application of reinforcement learning techniques in energy systems and how explanations of these models can be helpful.

02:25pm – 03:25pm

Interactive Activity

03:25pm – 03:30pm

Closing

By the Organizers