HNI Forum on 2nd May 2024

The topic of the forum on 2nd May at the Heinz Nixdorf Institute is "Pragmatic opacity of machines from a techno-philosophical and ethical perspective". The HNI Forum is a series of lectures focussing on current research trends and projects; its aim is to convey ideas, challenges and possible solutions in a generally understandable way.

A differentiated approach to this problem begins by distinguishing between different types of opacity: for whom does a machine or piece of software present itself as opaque, and for what reasons? Business secrecy can make systems opaque, for legal reasons, to those not privy to them. Systems can be opaque to users who lack the knowledge and practical skills to assess them (an educational issue). Both must be distinguished from epistemic opacity, which refers to a fundamental opacity of the systems, even for experts, arising from genuine characteristics of the systems themselves.

Two presentations provide an introduction to the research: the first introduces an important type of opacity, distinct from epistemic opacity, which Prof Dr Andreas Kaminski calls "pragmatic opacity". The second deals with the ethical discussion about opaque ML systems and discusses how to respond to pragmatic opacity.

In recent years, the opacity of (some) ML systems has sparked a debate that reaches beyond computer science into society at large. The opacity of AI can be problematic in several ways. From the perspective of developers and manufacturers, it can be difficult to adequately assess the behaviour of systems that rest on non-linear models and/or exceed a certain level of complexity, which makes it hard to (a) optimise them in a targeted manner and (b) evaluate them with respect to reliability and safety. As software with generally opaque behaviour spreads into particularly sensitive fields of application such as medicine, further questions arise: how can doctors verify and justify their decisions if they arrived at those decisions with the help of recommendations from an ML-based system that is opaque to them? Against this backdrop, voices from the political, social and scientific spheres have called for these ML systems to be made transparent and explainable. An entire field of research, that of explainable AI, has taken on this task.
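To make the idea of explainable AI concrete, the following minimal sketch, which is not part of the presentations, trains an opaque non-linear model and then applies one common post-hoc technique, permutation importance, to it. The choice of library (scikit-learn), dataset and model here is an illustrative assumption, not a reference to any system discussed at the forum.

    # Minimal sketch (illustrative only): probing an opaque model post hoc.
    # Assumes scikit-learn is installed; dataset and model are stand-ins.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # A non-linear ensemble: no single prediction is easy to trace by hand.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Permutation importance shuffles one input feature at a time and measures
    # how much the held-out score drops. It answers "which inputs matter?",
    # not "why does the model combine them this way?": a partial remedy
    # for opacity rather than full transparency.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    for name, score in ranked[:5]:
        print(f"{name}: mean score drop {score:.3f}")

Post-hoc explanations of this kind describe a model's behaviour from the outside; they do not make its internal computation transparent, which is one reason the debate sketched above does not end with them.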
