King’s investigates the use of quantitative modelling to predict innovation programme success.

A team of researchers from the Department of Informatics have been exploring how quantitative modelling and machine learning techniques can be used to predict the success of applicants to open innovation programmes.

This work was motivated by Professor Elena Simperl’s experience of leading and supporting competitive EU-funded projects that offer business incubation and acceleration services to data-centric startups and SMEs (namely the ODINE, DataPitch and DMS Accelerator projects). One of the key strengths of such programmes lies in their ability to attract, select and allocate resources effectively towards the most promising companies. Dr Maria Priestley and Dr Gefion Thuermer made use of the growing availability of administrative data from these programmes. Their goal was to explore whether applications submitted in the past could be used to build predictive models for shortlisting future applicants, and to consider how such a model’s parameters can be interpreted to check that previous selection trends align with the intended objectives of public funders.
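To illustrate the kind of approach described here, the sketch below fits an interpretable logistic regression classifier to a hypothetical table of past applications and prints its coefficients. The file name, feature columns and use of scikit-learn are assumptions made for illustration, not the project’s actual setup.

```python
# A minimal sketch (not the project's code): fit an interpretable classifier
# to historical application data and inspect its coefficients.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical table: numeric features per application plus a 0/1 outcome
# indicating whether the applicant was selected.
applications = pd.read_csv("past_applications.csv")  # assumed file
feature_cols = ["answer_word_count", "n_expertise_areas", "team_size"]
X = applications[feature_cols].values
y = applications["selected"].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Standardise features so coefficient magnitudes are roughly comparable.
scaler = StandardScaler().fit(X_train)
model = LogisticRegression().fit(scaler.transform(X_train), y_train)

print("held-out accuracy:", model.score(scaler.transform(X_test), y_test))

# Positive coefficients indicate attributes associated with selection;
# these can be checked against the programme's stated objectives.
for name, coef in zip(feature_cols, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```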

Based on the analysis, the team found that the use of past application data for predictive modelling is not yet a viable option. This is attributed to the difficulty of quantifying subjective selection criteria and to the current shortage of data capturing successful applicants. Nonetheless, the researchers were able to audit the attributes that contributed to companies’ chances of selection, finding that teams with diverse areas of expertise and longer application answers were more likely to be selected. This suggests that innovation programmes have a preference for disciplinary diversity and for companies that make the effort to submit informative responses at the application stage.
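As a purely illustrative sketch, the two attributes highlighted by this audit could be quantified along the following lines; the field names and data layout are assumptions rather than the structure of the actual application data.

```python
# Illustrative only: quantifying the two audited attributes.
# The field names ("answers", "team", "expertise") are assumptions.

def answer_word_count(answers: list[str]) -> int:
    """Total length of an applicant's free-text answers, in words."""
    return sum(len(answer.split()) for answer in answers)

def n_expertise_areas(team: list[dict]) -> int:
    """Number of distinct areas of expertise declared across the team."""
    return len({member["expertise"] for member in team})

application = {
    "answers": ["We operate a marketplace for sensor data ...",
                "Our revenue model combines subscriptions with ..."],
    "team": [{"name": "A", "expertise": "machine learning"},
             {"name": "B", "expertise": "business development"}],
}

print(answer_word_count(application["answers"]))  # total words across answers
print(n_expertise_areas(application["team"]))     # distinct expertise areas, here 2
```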

Another important part of the work involved using automated tools to infer demographic metrics from applicants’ names, where explicit data on equality, diversity and inclusion (EDI) were unavailable. Although the authors could not detect overt selection biases against teams predicted to include women or people of non-European ethnicities, neither could they establish a preference for teams that included this kind of diversity. The researchers estimate that only 19% of the individuals named in programme applications were women, and that 19% came from ethnic minorities (11% Asian, 8% African). These findings underscore public funders’ ongoing concern with improving the representation of women and ethnic minorities in industries affiliated with data and AI.
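By way of illustration, name-based gender inference typically follows the pattern sketched below. The gender_guesser package is used here as a stand-in, since the article does not name the specific tools used, and the names are invented; such inferences are approximate and should be read as estimates rather than ground truth.

```python
# A sketch of name-based gender inference, using the gender_guesser
# package as a stand-in for whichever tools the project actually used.
import gender_guesser.detector as gender

detector = gender.Detector(case_sensitive=False)

# Invented first names, standing in for those extracted from applications.
first_names = ["Maria", "Gefion", "Alex", "Wei"]
labels = [detector.get_gender(name) for name in first_names]

# The detector returns 'male', 'female', 'mostly_male', 'mostly_female',
# 'andy' (ambiguous) or 'unknown' (not in its dictionary).
n_women = sum(label in ("female", "mostly_female") for label in labels)
print(f"estimated share of women: {n_women / len(first_names):.0%}")

# Ethnicity inference from names follows a similar pattern with a
# name-to-ethnicity classifier, and carries the same uncertainty.
```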

This research forms part of a public deliverable in the Data Market Services Accelerator project – a three-year EU-funded initiative whose monitoring and analysis work was led by King’s.