With recent advances in artificial intelligence (AI), AI systems have become components of high-stakes decision processes, and their outputs must earn a level of trust that supports user confidence. This draft publication from NIST, released with a solicitation for public comment, highlights the importance of user trust in AI decisions and presents four principles for explainable AI, designed to capture the broad set of motivations, reasons, and perspectives behind explanations of AI system outputs.