Automating analysis – Machine learning and predictive analytics in children’s services


Workshop Summary 

More than 120 colleagues participated in this workshop exploring the practicalities and ethical issues involved in using machine learning and predictive analytics.  

At the beginning of the session, Jeni Tennison from Connected by Data (chair) provided an overview of machine learning and predictive analytics. The audience were asked to type responses to the question “What comes to mind when you hear that description of machine learning?”

 

Does predictive analytics work in children’s social care? How do we know if machine learning models work? What next?
Professor Michael Sanders talked about ways of testing whether machine learning algorithms make accurate predictions and his experience of testing models based on children’s social care data and case notes. None of these models met the threshold of 65% accuracy. On a more positive note, none of the systems showed systematic bias in their predictions (over and above the bias that may already be in the input data). He called for better government regulation of these models to require transparency of model accuracy. 

You can watch Michael’s talk here.
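
Michael’s point that accuracy and bias are separate questions can be made concrete with a small sketch. The example below is illustrative only (made-up data, not the models or evaluation from the study): it checks overall accuracy against the 65% threshold mentioned in the talk, and compares false positive rates between two hypothetical groups as one simple way of looking for systematic bias in predictions.

# Illustrative sketch only: checking overall accuracy against a threshold and
# comparing false positive rates between two groups. The data is made up;
# the 65% figure is the threshold mentioned in the talk.

from collections import defaultdict

# (true_label, predicted_label, group) for a handful of hypothetical cases
records = [
    (1, 1, "A"), (0, 0, "A"), (0, 1, "A"), (1, 0, "A"), (0, 0, "A"),
    (1, 1, "B"), (0, 1, "B"), (0, 1, "B"), (1, 1, "B"), (0, 0, "B"),
]

THRESHOLD = 0.65  # accuracy threshold discussed in the talk

# Overall accuracy: share of cases where the prediction matches the true label
correct = sum(1 for true, pred, _ in records if true == pred)
accuracy = correct / len(records)
print(f"Accuracy: {accuracy:.2f} (meets {THRESHOLD:.0%} threshold: {accuracy >= THRESHOLD})")

# Group-level false positive rate: among cases whose true label is 0,
# how often does the model predict 1, broken down by group?
false_pos = defaultdict(int)   # false positives per group
actual_neg = defaultdict(int)  # actual-negative cases per group
for true, pred, group in records:
    if true == 0:
        actual_neg[group] += 1
        if pred == 1:
            false_pos[group] += 1

for group in sorted(actual_neg):
    fpr = false_pos[group] / actual_neg[group]
    print(f"Group {group}: false positive rate = {fpr:.2f}")

A large gap in false positive rates between groups would be one sign that a model adds bias of its own, over and above any bias already present in the input data.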

 

What do we need to consider to implement machine learning and predictive analytics in practice?
Laura Carter from the Ada Lovelace Institute spoke about a research project on the implementation of machine learning in an English local authority. Laura encouraged participants to think of the system around machine learning as a social system, rather than just a technical one. She identified the need for improved procurement processes to help authorities choose the right model (including defining the success criteria used to judge it), the need to recognise the different ethical perspectives of the range of stakeholders in machine learning systems, and the need to consider the impact of machine learning tools on professional expertise and agency. Social workers felt it was important that they could explain the outputs of machine learning, and that transparency was important to increase trust in the system.
You can watch Laura’s talk here.

 

How one local authority is using predictive analytics to help social workers and early help understand risk
Gary Davies, a project manager at Somerset County Council, spoke about his experience developing in-house machine learning algorithms to evaluate the risk of particular problems (CSE, CCE and NEET). He gave his perspective on the wider issues, suggesting that we need to engage with these tools and find out what they are good for, rather than rejecting them wholesale, and to think about where they are most useful in the system: for example, to target early help, rather than in social work where the risks are already known to be high.

 

What are the ethical issues we need to consider when using machine learning and predictive analytics?
Professor Lisa Holmes highlighted a range of ethical issues that need to be considered as part of any plan to incorporate machine learning in children’s social care. These included understanding what we mean by value, protecting relationship-based practice, establishing the infrastructure, resources and culture to underpin the use of machine learning, and noting the potential for perpetuating cycles of discrimination and patterns of inequality. She called for inclusive and consent-based practices for designing machine learning models, and for a focus on outcomes that are meaningful for children and families, rather than on cost saving.

You can watch Lisa’s talk here.

 

How Essex has built the infrastructure and culture to support sophisticated use of data, including machine learning
Stephen Simpkin, Data Science Fellow at Essex County Council, provided an overview of the different ways data can be used in local authorities, from description and diagnostics, through prediction, to prescription (where decisions are made automatically based on analysis). He spoke about Essex’s “human in the loop” approach, which ensures that decisions aren’t fully automated, and about the need to develop a culture of data use and understanding in the workforce to support this.
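
As a rough illustration of the “human in the loop” idea, the sketch below keeps the model’s output as a suggestion only, with the final decision always recorded against a named practitioner. The function names, thresholds and fields are hypothetical and are not Essex’s implementation.

# Illustrative sketch of a "human in the loop" step: the model only produces
# a score and a suggested priority; a practitioner records the actual decision.
# All names, thresholds and fields here are hypothetical.

from dataclasses import dataclass

@dataclass
class Referral:
    case_id: str
    risk_score: float  # output of some predictive model, between 0 and 1


def suggest_priority(referral: Referral) -> str:
    """Turn a model score into a suggestion only, never an automatic action."""
    if referral.risk_score >= 0.7:
        return "review soon"
    if referral.risk_score >= 0.4:
        return "review"
    return "routine"


def record_decision(referral: Referral, practitioner: str, decision: str) -> dict:
    """The final decision always comes from a named practitioner."""
    return {
        "case_id": referral.case_id,
        "model_score": referral.risk_score,
        "suggestion": suggest_priority(referral),
        "decided_by": practitioner,
        "decision": decision,  # may agree with or override the suggestion
    }


if __name__ == "__main__":
    r = Referral(case_id="example-001", risk_score=0.82)
    print(record_decision(r, practitioner="J. Smith", decision="allocate for assessment"))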

Audience questions and comments:

  • Data quality: How do we improve data quality, especially around ethnicity and disability, and what is the effect on data quality when practitioners know it is feeding the algorithm?
  • Senior management buy-in: What is the appeal of these systems to senior managers? How best to talk to senior managers about the potential and risks in these systems?
  • Measuring bias: What do we mean by bias in machine learning? How is this different from the human bias affecting the input data? 
  • Improving accuracy: Would using data from other agencies and services (e.g. police, health, education) improve the accuracy of the models?
  • Other uses of machine learning: Can we use machine learning to fill in missing data, match data across systems or find other sorts of patterns (e.g. social worker travel patterns, or to identify patterns in practice), or to identify cohorts with similar journeys?
  • Ethics: What are public attitudes to machine learning use? How do these attitudes differ across different groups (e.g. ethnic groups with greater mistrust of government and surveillance)? What do we mean by consent – is it just consent to share and use data under GDPR, or do we need consent to the design and goals of the system? How do we balance public acceptability with a more risk-averse culture in management?
  • Geography: Does each LA need its own model? What is the scope for a model based on national data, or a minimum national dataset?

 

Resources shared by speakers and the audience
Clayton V, Sanders M, Schoenwald E, Surkis L and Gibbons D (2020) Machine learning in Children’s Social Care: Does it work? 
Leslie D, Holmes L, Hitrova C and Ott E (2020) Ethics review of machine learning in children’s social care
Dr Andrey Kormilitzin - Ensuring LGBTQI+ people are treated fairly in mental health data