The (problematic) rise of predictive analytics in children’s social care 

The problem and the technical solution 

Recent years have been challenging for many local authorities. Reductions in workforce size and economic pressures have affected many areas of public life. Arguably, one of the areas where this has been felt most keenly is child and family welfare.

Alongside this, a number of serious case reviews of child deaths, in which services failed to identify children at risk, have shown all too clearly what can happen when things go wrong.

It’s against this backdrop that many authorities have turned to technical solutions, particularly related to data linkage and predictive analytic modelling of risk. The assumption is that data solutions will both save time and eliminate some of the human errors of the past.

Over the last 10 years, the use of data as the basis for decision-making and the sharing of data between statutory services have expanded substantially across many social welfare contexts in the UK and globally.

This, combined with the growing capabilities of artificial intelligence in these systems, is driving a huge expansion in the use of data-driven systems as pre-emptive tools for directing scarce human resources in targeted ways.

In the UK, one study identified 53 councils using predictive analytic systems in child and family welfare contexts, with data used to score and ‘flag’ families predicted to be at risk of falling into debt, homelessness, ill health or future mistreatment of children.

Opaque systems 

Algorithmic bias and the challenges of interrogating the processes used in ‘black box’ AI systems are well documented. Yet in social care settings, data-based systems are increasingly deployed to underpin decision-making, often without acknowledging or addressing their inherent biases or flaws.

AI, especially machine learning, has demonstrated superiority over humans when handling large volumes of data to discover frequent patterns or to detect anomalies. Yet all systems have limitations or weaknesses, and for machine learning models the key weakness lies in the data used to train them.

Issues in training datasets, such as missing or inaccurate values, the way the data was collected, or how questions were framed and understood, all have a huge effect on the trustworthiness of a machine learning model’s predictions.

Therefore, to deploy AI or machine learning technologies responsibly, it is necessary to interrogate not only how predictive models are trained and built, but also whether the datasets themselves are biased.
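To make this concrete, the sketch below shows the kind of basic dataset audit this implies. It is a minimal illustration, not any authority’s actual process: the file name (“referrals.csv”), the column names (“ethnicity”, “flagged”) and the groupings are assumed for the example, and a real audit would go much further.

```python
# A minimal, illustrative sketch (not any council's real system): checking a
# training dataset for two of the problems described above -- missing values
# and uneven historical outcomes across groups. All names are hypothetical.
import pandas as pd

df = pd.read_csv("referrals.csv")

# 1. How much of each column is missing? Heavily incomplete fields make any
#    model trained on them less trustworthy.
missing_rates = df.isna().mean().sort_values(ascending=False)
print("Share of missing values per column:\n", missing_rates)

# 2. Does the historical 'flagged' label fall disproportionately on some
#    groups? A model trained on it will learn and repeat that pattern.
flag_rates = df.groupby("ethnicity")["flagged"].mean()
print("\nHistorical flag rate by group:\n", flag_rates)

# A large gap between groups is not proof of bias on its own, but it is a
# signal that the dataset, and any predictions built on it, need scrutiny.
print("\nLargest gap between groups:", flag_rates.max() - flag_rates.min())
```

Even a crude check like this makes visible the questions raised above: which fields are too incomplete to trust, and whether the labels a model will learn from already encode unequal treatment.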


A cause for concern

In the context of services for children and families, concerns are being raised about potential discrimination based on the types of data that are available and linked in these systems.

When pre-set indicators of ‘inadequate parenting’, or assumptions about which families cost the public purse more, are built into a proprietary algorithmic scoring system, children and families can be incorrectly classified.
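To illustrate the point, and only to illustrate it, here is a deliberately toy version of such a pre-set scoring rule. The indicators, weights and threshold are invented for this example and are not drawn from any real system; the sketch simply shows how indicators that track contact with services, rather than parenting itself, can tip certain families over a flagging threshold.

```python
# A deliberately simplified, hypothetical scoring rule of the kind described
# above. The indicators, weights and threshold are invented for illustration
# only; they are NOT taken from any real system.
def risk_score(prior_contacts: int, on_benefits: bool, school_absences: int) -> int:
    """Toy pre-set scoring rule: each indicator adds a fixed weight."""
    score = 2 * prior_contacts          # more recorded contact with services
    score += 3 if on_benefits else 0    # proxy for 'cost to the public purse'
    score += 1 * school_absences
    return score

THRESHOLD = 10  # arbitrary cut-off for 'flagging' a family

# A family in poverty, with more recorded service contact, is flagged even if
# parenting is fine; a wealthier family with the same underlying need is not,
# simply because it appears less often in linked public-sector data.
print(risk_score(prior_contacts=4, on_benefits=True, school_absences=1))   # 12 -> flagged
print(risk_score(prior_contacts=0, on_benefits=False, school_absences=1))  # 1  -> not flagged
```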

The Data Justice Lab in Cardiff began a record of data harms from algorithmic systems in 2020. It documents concrete examples of algorithmic systems that have led to exploitation, discrimination and losses of privacy through data breaches, as well as forms of data violence in which people are profiled, targeted and excluded from aspects of social life.

As governments, humanitarian agencies and statutory services turn more to data solutions, it’s increasingly important that organisations understand the flaws and biases within these systems.

Not only will this lead to more effective systems, it will also help avoid the potential harms and injustices that arise from using predictive systems in children’s social welfare contexts.



The support of the Economic and Social Research Council (ESRC) is gratefully acknowledged.