Tech companies are stepping up development of artificial intelligence software that can “explain” how it arrives at decisions, such as whether to grant a loan or approve an insurance policy, according to this report from Bloomberg. This shows how an issue once confined to AI research labs is now on the radar of tech companies. Developing AI that is transparent about its decision-making is a potentially lucrative business opportunity, in part because government regulations increasingly require companies to disclose more detail about such processes.
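To make the idea concrete, here is a minimal sketch of what an “explainable” decision might look like: a toy loan-scoring model that reports not just the decision but each feature’s contribution to it. The feature names, weights, and threshold are all illustrative assumptions, not taken from any real lender or from the systems described in the report.

```python
# Toy "explainable" loan decision: the score decomposes into
# per-feature contributions, so the decision can be justified.
# Weights, features, and threshold are hypothetical examples.

WEIGHTS = {"income": 0.4, "credit_score": 0.5, "debt_ratio": -0.6}
THRESHOLD = 0.5

def explain_decision(applicant):
    """Return the decision plus each feature's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        "contributions": {f: round(c, 3) for f, c in contributions.items()},
    }

if __name__ == "__main__":
    # Feature values are assumed to be pre-normalized to [0, 1].
    applicant = {"income": 0.7, "credit_score": 0.9, "debt_ratio": 0.4}
    print(explain_decision(applicant))
```

A linear model like this is transparent by construction; the harder regulatory question is how to extract comparable explanations from deep neural networks, whose internal reasoning does not decompose so neatly.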
But not all AI experts agree that the goal of “explainable AI” is attainable. Geoff Hinton, a Google engineering fellow and one of the world’s top AI researchers, told Wired that neural networks, the type of AI used in self-driving cars, should be regulated based on how they perform rather than on how they make decisions.