There has been a recent rise of interest in developing methods for ‘explainable AI’, where models are created to explain how a first ‘black box’ machine learning model arrives at a specific decision. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision. The way forward is to design models that are inherently interpretable.

Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed.

An Empirical Comparison of Machine-Learning Methods on Bank Client Credit Assessments
We performed an extensive comparison between machine-learning approaches and a human expert-based model, the FICO credit scoring system, using data from the Survey of Consumer Finances (SCF).

Ethics of Data Publication in the Context of Asylum Claims

BITrum: Interdisciplinary Elucidation of the Information Concepts, Theories, Metaphors and Problems
Clarification of the theoretical network of concepts, metaphors and notions used to deal with information and related problems.

Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. […] stakeholders "just won't accept that!" In order to answer these questions, we'll have to give real-world problems and their respective stakeholders greater consideration.

We propose a formal framework for interpretable machine learning. […] do not provide a sufficient foundation for a proposed field of study. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity… We characterise the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds.
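Concretely, the Pareto frontier described above is just the non-dominated set of candidate explanations scored along the three criteria. The following is a minimal Python sketch of that construction, not the paper's own procedure: the candidates and their scores are hypothetical, and because the third axis is elided in the text above, "relevance" is used here purely as an assumed placeholder name.

from dataclasses import dataclass

@dataclass(frozen=True)
class Explanation:
    name: str
    accuracy: float    # fidelity to the underlying model's behaviour
    simplicity: float  # e.g. inverse of rule length or feature count
    relevance: float   # assumed third axis; elided in the text above

def dominates(a: Explanation, b: Explanation) -> bool:
    """True if `a` is at least as good as `b` on every axis and strictly
    better on at least one (standard Pareto dominance)."""
    ge = (a.accuracy >= b.accuracy and a.simplicity >= b.simplicity
          and a.relevance >= b.relevance)
    gt = (a.accuracy > b.accuracy or a.simplicity > b.simplicity
          or a.relevance > b.relevance)
    return ge and gt

def pareto_frontier(candidates: list[Explanation]) -> list[Explanation]:
    """Return the non-dominated subset: explanations on which no rival
    improves one criterion without giving something up on another."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]

if __name__ == "__main__":
    candidates = [
        Explanation("full surrogate tree",   accuracy=0.95, simplicity=0.20, relevance=0.60),
        Explanation("two-rule summary",      accuracy=0.70, simplicity=0.90, relevance=0.80),
        Explanation("single feature weight", accuracy=0.50, simplicity=0.99, relevance=0.40),
        Explanation("dominated variant",     accuracy=0.45, simplicity=0.80, relevance=0.30),
    ]
    for e in pareto_frontier(candidates):
        print(e.name)

The dominated variant is excluded because the two-rule summary beats it on all three axes; the three survivors each embody a different trade-off, which is exactly what the frontier is meant to expose.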
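Returning to the credit-assessment comparison quoted earlier: benchmarking machine-learning classifiers against an expert-style scorecard typically looks like the sketch below. This is an assumed setup, not the paper's actual method; neither the SCF data pipeline nor the FICO model is reproduced here, so a synthetic dataset and a hypothetical linear scorecard stand in for both.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for borrower features and default labels.
X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A few common classifiers (assumed choices, not the paper's list).
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")

# Hypothetical expert scorecard: a fixed weighted sum of a few features,
# loosely analogous to a points-based credit score (not the real FICO model).
scorecard = X_test[:, 0] * 0.5 + X_test[:, 1] * 0.3 + X_test[:, 2] * 0.2
print(f"expert-style scorecard: AUC = {roc_auc_score(y_test, scorecard):.3f}")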


Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it.

[Figure: a simple structural causal model. Two exogenous variables, U_X and U_Y, have unobserved causal effects on two endogenous variables, X and Y, respectively.]
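A minimal simulation of the structural causal model in the caption above, for concreteness. The linear functional forms, the Gaussian noise, and the X -> Y edge are illustrative assumptions; the caption itself specifies none of them.

import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

U_X = rng.normal(size=n)   # exogenous noise driving X (unobserved)
U_Y = rng.normal(size=n)   # exogenous noise driving Y (unobserved)

X = U_X                    # structural equation X := f_X(U_X)
Y = 2.0 * X + U_Y          # structural equation Y := f_Y(X, U_Y), assumed

# An intervention do(X = x0) replaces X's structural equation with the
# constant x0 while leaving the equation for Y intact.
x0 = 1.0
Y_do = 2.0 * x0 + U_Y

print(f"observational E[Y]          ~ {Y.mean():.3f}")
print(f"interventional E[Y|do(X=1)] ~ {Y_do.mean():.3f}")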

Indeed, we ought to be cautious about injecting machine learning (or anything else, for that matter) into applications where there may be a significant risk of causing social harm.