Explainable AI (Parallel Session 1.2)
AGENDA
- Welcome & Introduction – Martin Kaltenböck (Semantic Web Company) & Malte Beyer-Katzenberger (EC)
- Panel Discussion
  - Andreas Blumauer (Semantic Web Company)
  - Sarah Spiekermann (University of Economics Vienna)
  - Dietmar Millinger (Austrian Society for AI)
  - Zbigniew Jerzak (SAP)
  - Sonja Zillner (Siemens)
- Summary and takeaways – Malte Beyer-Katzenberger (EC) & Martin Kaltenböck (Semantic Web Company)
DESCRIPTION
As computing power, new methods and algorithms become more widely available, Artificial Intelligence has become THE topic in and around data management. Huge amounts of (big) data are harvested and ingested into AI and cognitive computing engines to analyse data, calculate patterns and make predictions that enable powerful applications. One concern is that these engines are often “black boxes” built on self-learning algorithms; another is that the input data is noisy and often not pre-selected against the requirements of the output. This leads to AI solutions that (i) do not provide useful results, (ii) do not fulfil the requirements of the application, and (iii) make it very difficult to explain the processes that led to a certain outcome or decision.
Explainable AI or Transparent AI refers to applications of artificial intelligence (AI) whose actions can be understood and explained by humans. It contrasts with “black box” AI that employs complex, opaque algorithms, where even the designers cannot explain why the AI arrived at a specific decision. Explainable AI can be used to implement a right to explanation wherever such a right exists. The technical challenge of explaining AI decisions is sometimes known as the interpretability problem (source: Wikipedia).
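To make the contrast concrete, here is a minimal, hypothetical sketch (not part of the session materials) in Python with scikit-learn: a shallow decision tree whose learned rules can be printed and read by a human, next to a neural network whose weights offer no such account. The toy dataset and model choices are assumptions for illustration only.

```python
# Illustrative sketch only; assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Transparent model: the learned decision rules are human-readable.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=[
    "sepal_length", "sepal_width", "petal_length", "petal_width"]))

# "Black box" model: it may predict well, but its weights do not yield a
# human-readable explanation of why a given sample was classified as it was.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
print("MLP training accuracy:", mlp.score(X, y))
```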
Semantic technologies can help Explainable AI reach its full potential: they provide better data quality, make it possible to configure the engine by means of Knowledge Graphs, and help AI engines understand language, thereby ensuring that context and meaning are taken into account to realise genuinely useful data-driven AI applications for the future.
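As a hedged illustration of the Knowledge Graph idea, the sketch below shows how a small RDF graph can supply human-readable context for an AI decision. All URIs, labels and the `explain` helper are invented for this example; rdflib is assumed to be available.

```python
# Hypothetical sketch: a tiny RDF Knowledge Graph supplies human-readable
# context for an AI decision. All names and URIs are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.LoanRejected, RDF.type, EX.Decision))
g.add((EX.LoanRejected, EX.influencedBy, EX.DebtRatio))
g.add((EX.DebtRatio, RDFS.label, Literal("applicant debt-to-income ratio")))

def explain(decision):
    """Print the human-readable labels of the factors linked to a decision."""
    for _, _, factor in g.triples((decision, EX.influencedBy, None)):
        print("Influenced by:", g.value(factor, RDFS.label))

explain(EX.LoanRejected)  # -> Influenced by: applicant debt-to-income ratio
```

In the same spirit, such a graph can also be used to constrain or configure what an engine is allowed to infer, which is the configuration role mentioned above.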
This session introduces the concept of Explainable AI and explains why it is of such high importance, as well as its benefits for powerful industry applications. A panel discussion addresses the risks of ‘black box AI’ and the opportunities of Explainable AI, as well as the challenges and bottlenecks. Transparency rarely comes for free: there are often trade-offs between how “smart” an AI is and how transparent it is, and these trade-offs are expected to grow as AI systems increase in internal complexity. Experts from industry, research, ethics and law will bring different viewpoints to the topic, and the audience is invited to join the discussion together with the panellists.
The results of this session will be summarised in the EBDVF 2018 Explainable AI report, which will be published via BDVA (and fed into the BDVA AI Action Group) and via social media (e.g. the XAI LinkedIn group).