Anytime Netflix customers log on to their accounts, they receive automated suggestions for movies they might like. The software uses the customer’s “previously watched” data to make informed predictions about similar films that probably align with the person’s viewing tastes. Such suggestion services are ubiquitous among online companies: Amazon proffers a book you might like; Twitter tells you whom to follow; LinkedIn guides you to new career connections.
Similar applications in the public sector are less common, but a recent pilot study in Los Angeles County used the same approach to anticipate and prevent criminal behavior among foster children, with some success.
Between 2012 and 2014, the L.A. County Department of Children and Family Services screened children and teenagers to assess their risk of committing a crime and winding up in juvenile detention. The department used an actuarial tool to score children’s risk based on factors associated with criminal behavior. Caseworkers then connected children identified as high risk to specific services -- drug treatment, additional schooling and therapy -- intended to address problems that might lead to criminal behavior. The department also monitored a similar group of high-risk children who did not receive the suite of specialized services.
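The article doesn’t describe the county’s actuarial tool in detail, but such tools typically work as a weighted checklist: each risk factor present in a case adds points, and a total at or above a cutoff flags the case as high risk. The sketch below is a purely hypothetical illustration of that idea; the factor names, weights and cutoff are invented for the example, not taken from L.A. County’s instrument.

```python
# Hypothetical actuarial risk score: a weighted sum of binary risk
# factors compared against a cutoff. All names and numbers below are
# illustrative assumptions, not the tool L.A. County actually used.

FACTOR_WEIGHTS = {
    "prior_school_suspension": 2,
    "prior_police_contact": 3,
    "substance_use_flag": 2,
    "placement_instability": 1,
}
HIGH_RISK_CUTOFF = 4  # illustrative threshold

def risk_score(case: dict) -> int:
    """Sum the weights of the risk factors present in a case record."""
    return sum(w for factor, w in FACTOR_WEIGHTS.items() if case.get(factor))

def is_high_risk(case: dict) -> bool:
    """Flag a case for specialized services if its score meets the cutoff."""
    return risk_score(case) >= HIGH_RISK_CUTOFF

case = {"prior_school_suspension": True, "prior_police_contact": True}
print(risk_score(case), is_high_risk(case))  # prints: 5 True
```

Note that this kind of fixed, validated checklist is simpler than the machine-learned recommenders Netflix or Amazon use; the common thread the article draws is only that both use past data to anticipate future behavior.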
An evaluation by the National Council on Crime and Delinquency found that after six months, the children who received services had no arrests, while 9 percent of the control group had been arrested. Jesse Russell, a researcher involved with the evaluation, says the results represent a tentative first step in applying predictive analytics in a child welfare setting. Localities already have evidence-based strategies for helping foster children once they’ve been arrested. But Russell says the difference with Los Angeles County was the timing of the intervention: identifying a child before he became involved in the juvenile justice system. The county attempted to predict which children might commit a crime and to focus assistance on those children before an arrest occurred.
Although L.A. County is on the leading edge in experimenting with predictive analytics in child welfare, local officials actually avoid the term. “Predictive” is a loaded word, says Armand Montiel, a spokesman for the county’s Department of Children and Family Services. It can set unrealistic expectations for the public, implying that the county knows ahead of time who will commit crimes. Technically, the phrase refers to the practice of analyzing past data to make educated guesses about what may happen in the future. The results are not definitive, of course. The county can’t really predict when a parent will abuse a child, or when a teenager is about to commit a crime. But the analysis does give the county a sense of where to devote its resources.
The pilot builds on a recent effort in 88 counties to bridge programs and communication between local child welfare agencies and peer agencies, such as school districts and juvenile probation departments. Since 2008, the Center for Juvenile Justice Reform at Georgetown University has assisted counties in improving data collection and coordinating case management so that local officials know when a foster child is arrested or suspended from school.
As more local governments experiment with predictive analytics in child welfare, Russell advises caution. Netflix “will make lots of recommendations [because] so what if they get it wrong?” he says. “The consequences are so low.” When officials are trying to predict whether a child is safe to stay with her family, or needs attention from a caseworker to avoid run-ins with law enforcement, “the consequences of getting it wrong are massive.” People still must decide whether to separate a child from her parents, Russell says. “[The data] cannot make those tradeoffs for you.”