Although artificial intelligence (AI) as a scientific discipline dates to 1956 and the Dartmouth Summer Research Project, recent advances have brought intelligent systems to the center of the conversation. What has changed to bring these technologies into focus? RTI Innovation Advisors interviewed Raj Minhas, director of the AI lab at the Palo Alto Research Center (PARC), to hear his views on the question. Below are highlights from our conversation. Listen to the full interview.
While AI has seen growing interest across many sectors, one of the big concerns is ‘explainability’ – that is, how do we understand how an AI application arrived at its insights or conclusions? Humans can typically explain how they reached a conclusion, and that explanation can reveal bias or narrow perception. For AI to gain widespread use, we must likewise be able to explain how it reached its decisions or conclusions. Jim Redden from RTI discussed what is leading many organizations to look more closely at AI and what may hold it back from wider adoption.
A confluence of trends has led to a resurgence in interest in AI.
Cheaper, faster, and more widely distributed computing power, coupled with the availability of large amounts of data, has led to renewed interest in AI. As a result, we see AI applied to more and more scenarios. Effective applications in areas such as computer vision and robotics have produced remarkable progress in automating specific tasks.
Combining the power of AI tools with the domain knowledge of subject matter experts is important for developing explainable AI.
The use of AI and machine learning, combined with cognitive-science models of human interaction, has enabled engineers to embed the capabilities of subject matter experts within these intelligent systems. Minhas noted that building ‘models of the world’ that leverage subject matter experts’ domain knowledge enables AI to predict or solve a complex task in ways that are more explainable.
As an example, “PARC has augmented the experience of people to help the US Medicaid and Medicare agencies to shortlist fraudulent claims out of the large number of applications received. This was done by incorporating the capabilities of the industry experts in the intelligent system that could filter out the relevant cases.”
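To make the idea concrete, here is a minimal toy sketch of the general technique: encoding expert knowledge as explicit rules, so that every shortlisted claim carries a human-readable reason – the ‘explanation.’ All class names, fields, and thresholds below are hypothetical illustrations, not PARC’s actual system or rules.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    provider_id: str
    amount: float                 # total billed amount in dollars
    procedures_per_day: int       # procedures billed for a single day
    patient_distance_miles: float # distance from patient to provider

def fraud_flags(claim: Claim) -> list[str]:
    """Apply expert-derived heuristics (illustrative thresholds only)
    and return a human-readable reason for each rule that fires."""
    flags = []
    if claim.amount > 10_000:
        flags.append("unusually high billed amount")
    if claim.procedures_per_day > 24:
        flags.append("more procedures than plausible in one day")
    if claim.patient_distance_miles > 200:
        flags.append("patient far outside provider's service area")
    return flags

def shortlist(claims: list[Claim]) -> list[tuple[Claim, list[str]]]:
    """Keep only claims that trip at least one expert rule,
    paired with the reasons -- the explanation a reviewer sees."""
    return [(c, fraud_flags(c)) for c in claims if fraud_flags(c)]
```

Because each flagged claim is paired with the specific rules it violated, a human reviewer can see not just *that* a claim was shortlisted but *why* – the property that makes expert-rule systems more explainable than an opaque learned score.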
As intelligent systems expand their capabilities across industries, it is necessary to understand:
- How data is being used
- What the reasons are behind particular intelligent system actions and results
This need for transparency and explainability will drive improved policy, additional regulations, and technological advances.
According to Minhas, we’re starting to see this. “… EU countries have already implemented policies like General Data Protection Regulation (GDPR).”
The GDPR places increased emphasis on understanding what is going on inside the “AI black box” and holds companies accountable for that understanding. The need to inspect intelligent systems has in turn made ‘explainable AI’ a crucial part of the field’s growth.
Hardware advances could create a step change in AI capabilities.
The future of artificial intelligence is bright, and we are on the cusp of a dramatic change in the way we gather, process, and use data. Further, as we enter an era of big data, machine learning, and artificial intelligence, we will find new insights and new opportunities based on AI and intelligent systems. Until those systems are fully explainable, however, concerns will remain about how a system made its decisions and whether those decisions reflect unintended bias. Further work on explainability will go a long way toward resolving this concern.