(Part of the Festival of Disruptive Thinking series)
Computer algorithms are used to make automated decisions about us every day: from A-level results to mortgage applications, school catchment areas, employment decisions, COVID-19 vaccinations, even sentencing in court cases.
There is no doubt that this new technology comes with benefits: processes become more efficient, fewer mistakes are made, and costs are saved. But what about when things go wrong? With artificial intelligence making the processes ever more complex, what happens if you don’t like the decision? Would you know how to challenge it?
Join us to hear from experts in the field about what is being developed to make automated decisions more transparent to you and to enable you to take action if needed. Have your say on the technology and let the experts know what you care about when the computer says no to you.
Luc Moreau is a Professor of Computer Science and Head of the Department of Informatics at King’s College London. Luc is renowned for his work on provenance (both provenance of data and provenance of things). Luc was co-chair of the W3C Provenance Working Group, which resulted in a standard model for provenance. In the PLEAD project, with his collaborators, he is using the provenance of decisions to construct explanations.
Sophie Stalla-Bourdillon is a Professor in Information Technology Law and Data Governance within Southampton Law School at the University of Southampton and Senior Privacy Counsel and Legal Engineer at Immuta. Her research focuses upon platform responsibility, data governance and data sharing models, and privacy and data protection, with a special emphasis on the legal and ethical implications of data operations within analytics environments.
Niko Tsakalakis is a Research Fellow at the University of Southampton, working on the implications of disruptive technologies for data protection and on ways to engineer data protection into the design of complex IT systems. For the PLEAD project, Niko ran the analysis of explainability requirements and how these could be operationalised by a provenance-driven explanation system.
Dong Huynh is a Research Fellow at King’s College London with experience in tracking the flow and use of data with provenance technology across a wide range of applications. He led the development of the demonstrator showcasing provenance-enabled explanations for automated decisions at explain.openprovenance.org.