Explainable Artificial Intelligence

Plan and Goal Reasoning for Explainable Autonomous Robots

Robots are rapidly emerging in society and will soon enter our homes to collaborate with us and help us in daily life. Robots that provide social and physical assistance have huge potential to benefit society, especially those who are frail and dependent. This was evident during the COVID-19 outbreak, when assistive robots could aid in caring for older adults at risk, in accessing contaminated areas, and in providing social assistance to people in isolation.

PLEAD

PLEAD brings together an interdisciplinary team of technologists, legal experts, commercial companies and public organisations to investigate how provenance can help explain the logic underlying automated decision-making, both to the benefit of data subjects and to help data controllers demonstrate compliance with the law. Explanations that are provenance-driven and legally grounded will allow data subjects to place their trust in automated decisions, and will allow data controllers to ensure compliance with the legal requirements placed on their organisations.
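To make the idea of a provenance-driven explanation concrete, the sketch below shows how an automated decision might be recorded using the W3C PROV data model via the Python prov package. This is an illustrative assumption, not PLEAD's actual implementation: the loan-decision scenario, the ex: namespace, and all identifiers are hypothetical.

# Minimal sketch: recording an automated decision as W3C PROV provenance
# with the Python `prov` package (pip install prov). The scenario,
# namespace and identifiers below are hypothetical examples, not taken
# from the PLEAD project itself.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/demo#')  # hypothetical namespace

# The input data, the decision produced from it, and the actors involved.
application = doc.entity('ex:loan-application-42')
decision = doc.entity('ex:decision-42', {'ex:outcome': 'declined'})
model = doc.agent('ex:credit-scoring-model-v3')
controller = doc.agent('ex:acme-bank')  # the data controller responsible

# The automated decision-making activity and its relationships.
scoring = doc.activity('ex:scoring-run-42')
doc.used(scoring, application)             # the activity consumed the input data
doc.wasGeneratedBy(decision, scoring)      # ...and produced the decision
doc.wasAssociatedWith(scoring, model)      # the model carried out the activity
doc.actedOnBehalfOf(model, controller)     # on behalf of the data controller
doc.wasDerivedFrom(decision, application)  # the decision derives from the input

print(doc.get_provn())  # PROV-N text serialisation of the provenance graph

Because PROV gives every step a standardised vocabulary of entities, activities and agents, a record like this can later be traversed to answer a data subject's question ("what data and which agent produced this decision?") or to evidence a controller's compliance.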

THuMP

The goal of the Trust in Human-Machine Partnership (THuMP) project is to advance the state of the art in trustworthy human-AI decision-support systems. THuMP will address the technical challenges involved in creating explainable AI (XAI) systems, so that the people using them can understand the rationale behind the AI's suggestions and therefore trust them.

Cracking the black box

How KCL researchers build a provenance-based system to make AI explainable