
4 Reasons Data Analytics Often Fail

It’s one of the hottest trends in the public sector, but it’s not easy to succeed with data.

The Chicago Police Department thought it had a foolproof strategy for keeping a lid on violent crime: a heat map of the 400 individuals most likely to break the law. The index was the product of a predictive analytics program that used a mathematical algorithm to sift through crime data, much as Netflix and Amazon use analytics to predict a person's next movie rental or book purchase.
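
The department has not made the algorithm's inner workings public, but the general pattern is standard predictive modeling: train a model on historical outcomes, then rank new cases by predicted risk. The Python sketch below illustrates that pattern with entirely hypothetical features and numbers; it is not the Chicago Police Department's model.

```python
# A minimal risk-scoring sketch, NOT the Chicago Police Department's actual
# model. The features (prior_arrests, gun_offenses, age, associates_on_list)
# and all numbers are hypothetical; the point is only the general pattern:
# fit a model on historical outcomes, then rank people by predicted risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row of features per individual, and a
# label marking whether they were later involved in a violent incident.
X_train = np.array([
    [3, 1, 19, 2],
    [0, 0, 45, 0],
    [5, 2, 22, 4],
    [1, 0, 31, 1],
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression()
model.fit(X_train, y_train)

# Score new individuals and rank them by predicted risk -- the analytics
# equivalent of a "heat list." A model like this can only be as fair as the
# historical data it learns from.
X_new = np.array([
    [2, 1, 20, 3],
    [0, 0, 50, 0],
])
risk = model.predict_proba(X_new)[:, 1]   # probability of the "violent" label
ranked = np.argsort(risk)[::-1][:400]     # cap the list at 400, as Chicago did
print(risk, ranked)
```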

But the algorithm ran into a firestorm of controversy in late 2013, when a Chicago Tribune article told the story of a man on the list who had no criminal arrests. While the police defended the tool, critics said it was nothing more than racial profiling. They compared it to a bad version of Minority Report, the popular sci-fi film about police who predict crimes before they happen.

Chicago’s experience demonstrates both the promise and limitations of analytics in government. The public sector is already using it at all levels: The U.S. Border Patrol uses it to figure out how best to allocate resources along the border with Mexico. States use it to stop fraud in Medicaid and tax returns. Local governments use analytics to determine which buildings may have code violations, or to predict possible traffic and transit disruptions before they happen.

But despite all the successful implementations of analytics, many such projects actually fail. According to IT research firm Gartner, more than half of all projects aren't completed within budget or on time, or they fail to deliver the expected results. Like other types of IT projects, an analytics initiative can fail for any number of reasons, big and small, but several stand out.

First, there are misconceptions about what analytics is. It's not a technology project to be run by the IT department, though it needs input from CIOs and their staff to manage the databases and networks that underpin it. Nor is it simply about collecting data. Rather, it's a way to predict outcomes, inform future strategies and support decision-making. That's why the right stakeholders need to be involved.

Second, analytics projects fail when data quality is poor: bad data produces poor results. A lack of data sharing can also hobble even the best-planned analytics project. While there are technical barriers to sharing data, too often the real problem is an unwillingness to share between agencies or departments, and turf battles erupt when an agency wants to protect the data it has collected.
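
To make the "bad data" point concrete, here is a minimal sketch, in Python, of the kind of quality gate an analytics team might run before modeling. The field names and sample records are hypothetical.

```python
# A minimal data-quality gate, sketched with hypothetical field names
# (case_id, agency, incident_date). It drops records that are duplicated,
# missing required fields, or dated in the future before they reach a model.
from datetime import date

def clean_records(records):
    seen_ids = set()
    clean = []
    for rec in records:
        if not rec.get("case_id") or not rec.get("agency"):
            continue                            # missing required fields
        if rec["case_id"] in seen_ids:
            continue                            # duplicate from another feed
        incident = rec.get("incident_date")
        if incident is not None and incident > date.today():
            continue                            # impossible future date
        seen_ids.add(rec["case_id"])
        clean.append(rec)
    return clean

sample = [
    {"case_id": "A-1", "agency": "police", "incident_date": date(2014, 3, 2)},
    {"case_id": "A-1", "agency": "police", "incident_date": date(2014, 3, 2)},  # duplicate
    {"case_id": None, "agency": "fire"},                                        # missing key
]
print(clean_records(sample))   # only the first record survives
```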

Third, states and localities suffer from a talent shortage when it comes to finding people who can successfully run an analytics project in the public sector. The field of analytics is still relatively new, so the pool of skilled analytics experts is shallow. "To improve a public service, you need analysts with domain knowledge," says Jennifer Bachner, director of the Master of Science in Government Analytics program at Johns Hopkins University. "This is essential to identifying and measuring outcomes that matter."

Last, measuring the impact of analytics in government is far more complex than in the private sector. As the Chicago Police Department found out, analytics can lead to messy results. The mathematician who created the algorithm behind the heat map of likely criminals said the model did not incorporate any racial or other negative bias against minority groups. But that's not how others viewed the results.

Finding a correlation between two sets of data and predicting an outcome works fine in the private sector, but as Bachner points out, government policymakers need to identify where they can intervene in a policy to make it better. “That’s hard to do and requires more substantive knowledge,” she says. “Improving a government program requires policymakers to make changes that lead to desired outcomes. This kind of challenge is about identifying causal relationships, not just correlations.” 
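
A short worked example makes the distinction concrete. The sketch below, using invented numbers, computes a correlation between streetlight repairs and burglaries; the closing comments note why that number alone would not tell a policymaker where to intervene.

```python
# An illustrative correlation, computed on made-up numbers. The figure it
# prints says how strongly the two series move together -- it says nothing
# about whether changing one would change the other.
import numpy as np

# Hypothetical monthly counts for a city program.
streetlight_repairs = np.array([10, 20, 30, 40, 50, 60])
reported_burglaries = np.array([55, 50, 42, 40, 33, 30])

r = np.corrcoef(streetlight_repairs, reported_burglaries)[0, 1]
print(f"correlation: {r:.2f}")   # strongly negative for this toy data

# Correlation alone cannot tell a policymaker to fund more repairs: both
# series might be driven by season, patrol levels or reporting changes.
# Establishing a causal relationship would take a designed comparison,
# such as repairing lights on randomly chosen blocks and comparing outcomes
# against similar blocks left unchanged.
```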

Tod is the editor of Governing. Previously, he was the senior editor at Government Technology and the editor of Public CIO, e.Republic's award-winning publication for IT executives in the public sector. He is the author of several books on information management.