Schedule

The workshop takes place on September 4 from 9:00am to 6:00pm (CEST).

Please find the detailed schedule below.

Session 1: Keynotes – 9:00am-10:30am

09:00am – 09:10am

Opening Statements

By the Organizers

09:10am – 09:50am

Keynote 1: To trust or not to trust? – On Challenges for AI in Modern Society

Lena Kästner

Abstract

Who has not heard of them: Alexa, Siri, ChatGPT and the like. Modern AI systems like these increasingly permeate our everyday lives; they range from face recognition in our smartphones and spam filters in our email to media creators all the way to decision support systems for judges, pilots and doctors. But under what conditions is it safe to rely on AI systems? This is one of the most pressing questions for developers, users and regulators alike. To ensure safety, trustworthiness is often cited as a central desideratum. But what exactly does trustworthiness imply, and how can it be ensured in the context of high-stakes applications? This talk offers a philosophical perspective.

09:50am – 10:30am

Keynote 2: Bridging the Explainability Gap: Transparency Requirements in the EU’s AI Act

Miriam Buiten

Abstract

The European Union’s proposed Artificial Intelligence Act introduces a risk-based approach to regulating AI and emphasizes the importance of transparency and explainability. This talk delves into the Act’s broad notions of transparency and explainability, assessing their alignment with the technical view of AI explainability. It uncovers challenges arising from disparate interpretations of explainability across scientific disciplines and stakeholders, while exploring the Act’s goals for transparency. The talk also acknowledges the limits of explainable AI and emphasizes the need to align transparency requirements with what is useful and technically feasible.

Session 2: Long Paper Talks – 11:00am-12:30pm

11:00am – 11:30am

A Conceptual Framework for Explainability Requirements in Software-Intensive Systems

Marcello M. Bersani, Matteo Camilli, Livia Lestingi, Raffaela Mirandola, Matteo Rossi and Patrizia Scandurra

Abstract

Software-intensive systems, in which software plays a vital role, range from enterprise systems to IoT and cyber-physical systems to industrial control systems. In such systems, decision-making is software-supported in an automated and/or autonomous way. However, trust can be hindered by the black-box nature of these systems, whose autonomous decisions may be confusing or even dangerous for humans. Thus, explainability emerges as a crucial non-functional property for achieving transparency and increasing awareness of the systems’ behavior, hopefully leading to their acceptance in our society.
In this paper, we introduce a conceptual framework for the elicitation of explainability requirements at different granularity levels. Each level is associated with a set of meta-requirements and means for the instantiation of the framework within a system to make it capable of producing explanations in a given application domain. We illustrate our conceptual framework using a running example in the robotics domain.

11:30am – 12:00pm

Revisiting the Performance-Explainability Trade-Off in Explainable Artificial Intelligence (XAI)

Barnaby Crook, Maximilian Schlüter and Timo Speith

Abstract

As the prevalence of Artificial Intelligence (AI) systems continues to grow, ensuring their transparency and accountability has become a paramount concern. Within the field of Requirements Engineering (RE), the increasing significance of Explainable AI (XAI) in aligning AI-supported systems with user needs, societal expectations, and regulatory standards has garnered recognition. In general, explainability has emerged as an essential non-functional requirement (NFR) with a substantial impact on system quality. However, the assumed trade-off between explainability and performance challenges the alleged positive influence of explainability. If the performance of a system were to suffer from the requirement of explainability, careful consideration would have to be given to which of these quality aspects takes precedence and how to compromise between them.

This paper critically examines this trade-off, questioning conventional assumptions and exploring avenues for redefining the relationship between performance and explainability. By providing a foundation for future research, guidance, and best practices, this work aims to advance the field of RE for AI.

12:00pm – 12:30pm

Topic Collection for Group Activity

By the Organizers

Session 3: Short Paper Talks – 2:00pm-3:30pm

2:00pm – 2:20pm

Understanding and Addressing Sources of Opacity in Computer Systems

Sara Mann, Barnaby Crook, Lena Kästner, Astrid Schomäcker and Timo Speith

Abstract

Modern computer systems are ubiquitous in contemporary life, yet many of them remain opaque. This poses significant challenges in domains where desiderata such as fairness or accountability are crucial. We suggest that the best strategy for achieving system transparency varies depending on the specific source of opacity prevalent in a given context. Synthesizing and extending existing discussions, we propose a taxonomy consisting of eight sources of opacity that fall into three main categories: architectural, analytical, and socio-technical. For each source, we provide actionable suggestions as to how to address the resulting opacity in practice. The taxonomy is intended to help practitioners understand contextually prevalent sources of opacity and to select or develop appropriate strategies for overcoming them.

2:20pm – 2:40pm

A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)

Timo Speith and Markus Langer

Abstract

One of the big challenges in the field of explainable artificial intelligence (XAI) is how to evaluate explainability approaches. Many evaluation methods (EMs) have been proposed, but a gold standard has yet to be established. Several authors have classified EMs for explainability approaches into categories along aspects of the EMs themselves (e.g., heuristic-based, human-centered, application-grounded, functionally-grounded). In this vision paper, we propose that EMs can also be classified according to aspects of the XAI process they target. Building on models that spell out the main processes in XAI, we propose that there are explanatory information EMs, understanding EMs, and desiderata EMs. This novel perspective is intended to augment the perspective of other authors by focusing less on the EMs themselves and more on what explainability approaches intend to achieve (i.e., provide good explanatory information, facilitate understanding, satisfy societal desiderata). We hope that the combination of the two perspectives will allow us to evaluate the advantages and disadvantages of explainability approaches more comprehensively, helping us to make a more informed decision about which approaches to use or how to improve them.

2:40pm – 3:00pm

What Explanations of Autonomous Systems Are of Interest to Lawyers?

Miriam Buiten, Louise A. Dennis and Maike Schwammberger

Abstract

While more and more (semi-)autonomous cars are driving on our roads, the accountability of these systems is important for reasoning about guilt in lawsuits. However, the decision-making process of these complex autonomous systems is often not easy to grasp. As a solution, system explainability is widely discussed as a means of making such complex systems understandable. In this research vision, we illuminate what types of explanations lawyers need.

3:00pm – 3:30pm

Summary & Introduction to Group Activity

By the Organizers

Session 4: Group Activity – 4:00pm-6:00pm

4:00pm – 5:50pm

Group Activity

5:50pm – 6:00pm

Closing

By the Organizers