What is explainable AI? Building trust in AI models
Nov 26, 2021 | Venture Beat

As AI-powered technologies proliferate in the enterprise, the term “explainable AI” (XAI) has entered mainstream vernacular. XAI is a set of tools, techniques, and frameworks intended to help users and designers of AI systems understand their predictions, including how and why the systems arrived at them.
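To make the idea concrete, the short sketch below applies one common post-hoc explanation technique, permutation feature importance from scikit-learn, to a generic classifier. The dataset, model, and feature names are illustrative stand-ins, not tied to any particular vendor's XAI toolkit.

```python
# Minimal sketch of one post-hoc explanation technique: permutation feature
# importance via scikit-learn. Dataset and model here are placeholders chosen
# only for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's test score drops:
# features whose permutation hurts accuracy the most are the ones the model
# relies on, which offers users a rough "why" behind its predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<30} {result.importances_mean[idx]:.4f}")
```

Techniques like this only approximate a model's reasoning, which is why XAI spans a range of tools and frameworks rather than a single method.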

A June 2020 IDC report found that business decision-makers consider explainability a “critical requirement” of AI. To this end, explainability has been cited as a guiding principle for AI development by DARPA, the European Commission’s High-Level Expert Group on AI, and the National Institute of Standards and Technology. Startups such as Truera are emerging to deliver “explainability as a service,” and tech giants such as IBM, Google, and Microsoft have open-sourced XAI toolkits and methods.
