Explainability by Design: A Methodology to Support Explanations in Decision-Making Systems

Editor’s note: Luc Moreau is a speaker for ODSC Europe 2022. Be sure to check out his talk, “Explainability by Design: a Methodology to Support Explanations in Decision-making Systems,” there!

There are increasing calls for the explainability of data-intensive applications. The demand for explainability stems from various drivers, such as regulations, governance frameworks, and business needs. Explanations are becoming a mechanism to demonstrate good governance of data-processing pipelines. A good explanation capability should be able to answer a vast range of queries, including: “Was consent obtained from the user to process their data?”, “What process was followed to check a model before its deployment?”, “What are the factors that influenced a decision?”, “What action could be taken to correct some data?”, and “Is the actual execution of the data science application complying with a given data governance framework?”
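As a brief illustration of what answering such queries could look like (a minimal sketch of my own, not part of the methodology described here; all record and function names are hypothetical assumptions), a pipeline could record a machine-readable trace of its activities and answer the queries against it:

```python
# Hypothetical sketch: a minimal provenance-style trace recorded by a
# data pipeline, and two of the example queries answered against it.
# All event names and fields are illustrative assumptions.

trace = [
    {"event": "consent_obtained", "user": "u42", "scope": "profiling"},
    {"event": "model_checked", "model": "m1", "check": "bias-audit"},
    {"event": "model_deployed", "model": "m1"},
    {"event": "decision", "model": "m1", "user": "u42",
     "factors": ["income", "credit_history"]},
]

def consent_obtained(user):
    """Was consent obtained from the user to process their data?"""
    return any(r["event"] == "consent_obtained" and r["user"] == user
               for r in trace)

def checks_before_deployment(model):
    """What process was followed to check a model before its deployment?"""
    checks = []
    for r in trace:
        if r["event"] == "model_checked" and r["model"] == model:
            checks.append(r["check"])
        if r["event"] == "model_deployed" and r["model"] == model:
            break  # only checks that happened before deployment count
    return checks

print(consent_obtained("u42"))          # True
print(checks_before_deployment("m1"))   # ['bias-audit']
```

The point of the sketch is that once the trace exists as data, each regulatory or business query becomes an ordinary query over that data rather than a manual investigation.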

So far, there has been a lack of a principled approach to generating explanations. From an implementation viewpoint, there is no good separation of concerns between decision-making systems, their querying, and the composition of explanations. From a methodological viewpoint, there is no well-specified workflow for engineers to follow, which leaves every organization to “reinvent the wheel” for its explanation capability.


Against this background, we introduce “Explainability by Design” (EbD), a methodological approach that tackles this issue in the design and architecture of IT systems and business practices. EbD is characterized by proactive measures to include explanations in the design, rather than relying on reactive measures to bolt on explanation capability after the fact as an add-on.

A key aspect in the journey of developing a methodology is to understand what an explanation is, and all the properties it is intended to satisfy. To this end, I will outline a taxonomy of explanations and their requirements.

Another aspect of this journey is to conceptualize an explainability component as an integral part of a business system, enriching its functionality with capabilities that can address not only regulatory requirements but also functional and business needs. We have produced a reference implementation of this component, called the Explanation Assistant. A configured and ready-to-be-deployed Explanation Assistant is a key output of the Explainability-by-Design methodology.

The final aspect is breaking the methodology down into steps that ingest explanation requirements in order to construct an explanation capability, which a system’s triggers can then activate to generate the required explanations.
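To make that flow concrete, here is a minimal sketch under my own assumptions (it is not the Explanation Assistant’s actual API; every class, field, and trigger name is hypothetical): requirements are ingested into a configuration, and a system trigger activates the capability to produce an explanation.

```python
# Hypothetical sketch of the Explainability-by-Design flow:
# explanation requirements are ingested into a configuration, and a
# system trigger activates the capability to generate an explanation.
# All class and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Requirement:
    trigger: str      # event in the business system
    template: str     # explanation text with placeholders

class ExplanationCapability:
    def __init__(self, requirements):
        # Step 1: ingest requirements into a trigger -> template map.
        self.config = {r.trigger: r.template for r in requirements}

    def on_trigger(self, trigger, **context):
        # Step 2: a system trigger activates the capability,
        # which generates the required explanation from context.
        template = self.config.get(trigger)
        if template is None:
            return None
        return template.format(**context)

capability = ExplanationCapability([
    Requirement(trigger="loan_rejected",
                template="Decision used factors: {factors}."),
])
print(capability.on_trigger("loan_rejected",
                            factors="income, credit history"))
# Decision used factors: income, credit history.
```

The design choice the sketch highlights is the separation of concerns: the requirements (what must be explained, and when) are data that configure the capability, rather than logic hard-coded into the decision-making system itself.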

The above steps of the methodology are to be carried out by various roles in an organization. The Socio-Technical Engineer focuses on the requirements analysis, leading to a set of explanation requirements that the Data Engineer processes in order to produce a configuration of the Explanation Assistant, which the Application Stakeholder can then validate.

I look forward to seeing you at ODSC Europe 2022, where my presentation will further motivate the methodology and dive into the various steps of a well-specified workflow to be carried out by the Data Engineer.

About the author/ODSC Europe 2022 Speaker on Explainability:

Luc Moreau is a Professor of Computer Science and Head of the Department of Informatics at King’s College London. Before joining King’s, Luc was Head of the Web and Internet Science Group in the Department of Electronics and Computer Science at the University of Southampton.

Luc was co-chair of the W3C Provenance Working Group, which resulted in four W3C Recommendations and nine W3C Notes specifying PROV, a conceptual data model for provenance on the Web, and its serializations in various Web languages. Previously, he initiated the successful Provenance Challenge series, which saw the involvement of over 20 institutions investigating provenance interoperability in three successive challenges, and which resulted in the specification of the community Open Provenance Model (OPM). Before that, he led the development of provenance technology in the FP6 Provenance project and the Provenance Aware Service Oriented Architecture (PASOA) project.

ODSC Community

The Open Data Science community is passionate and diverse, and we always welcome contributions from data science professionals! All of the articles under this profile are from our community, with individual authors mentioned in the text itself.