Computational modelling of the COVID-19 pandemic has played a significant role in the UK's effort to combat COVID-19. Across the country, about 100 research teams are working on different models, and several dozen have provided simulations, estimates, and predictions to inform governmental decisions in the four home nations.
This project will develop a set of interactive, visual-analytics approaches to better understand these complicated and extensive timelines, drawing on examples from social media and from the more formal discussions of a legislative setting, such as the European Union Withdrawal Acts (the Brexit legislation).
The aim of the fellowship is to examine how emerging technologies can fundamentally re-envision the conceptual models and delivery mechanisms of existing prevention interventions in the context of child mental health.
PLEAD brings together an interdisciplinary team of technologists, legal experts, commercial companies and public organisations to investigate how provenance can help explain the logic underlying automated decision-making, to the benefit of data subjects, and help data controllers demonstrate compliance with the law. Provenance-driven, legally grounded explanations will allow data subjects to place their trust in automated decisions and will allow data controllers to ensure compliance with the legal requirements placed on their organisations.
SAIS (Secure AI assistantS) is a cross-disciplinary collaboration between the Departments of Informatics, Digital Humanities and The Policy Institute at King's College London, and the Department of Computing at Imperial College London, working with non-academic partners: Microsoft, Humley, Hospify, Mycroft, policy and regulation experts, and the general public, including non-technical users.
The goal of the Trust in Human-Machine Partnership (THuMP) project is to advance the state of the art in trustworthy human-AI decision-support systems. The project will address the technical challenges involved in creating explainable AI (XAI) systems, so that people using them can understand the rationale behind, and trust, the suggestions made by the AI.