The Four Steps to Practical Performance Management: Determining What to Measure

Determining what to measure is the first step of performance management. Part one of this series of articles will show you how to start on your performance journey.

Tasked with measuring and reporting on performance, today’s State and local governments face multiple challenges in implementing a performance measurement program. The difficulty often starts with a disconnect between the theoretical gold standard of performance management and what is actually achievable in an organization that must exercise fiscal responsibility while delivering programs and services to the communities it serves. In reality, true performance-based budgeting is unlikely to be part of your process, but an effective and useful program can and should be employed, one that matches your organizational readiness.

Budget and performance best practices

There are four major components of performance management:

  • Determining what to measure – what data should we be collecting?
  • How to collect the data, with a focus on technology options
  • How to link performance data to budgets and actuals, and how to incorporate performance measures into decision analysis – the process of analyzing measures and financial data for improved decision making
  • Finally, the outputs to consider

Determining what data to collect is, for most clients, the most difficult task. Most government organizations have already begun gathering performance data and may have dozens to hundreds of existing measures. We recommend grouping existing measures by their intended audience (a brief illustrative sketch of this grouping follows the list below):

  • Executive and legislative critical performance indicators and/or key outcomes for constituents
  • Departmental measures – used for internal management
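
As a rough illustration of this grouping, the sketch below tags a handful of measures with an assumed audience label and collects them into the two groups; the measure names and labels are invented for demonstration and are not drawn from any client inventory.

```python
# Illustrative sketch: grouping an existing measure inventory by intended audience.
# Measure names and audience labels are hypothetical examples.
from collections import defaultdict

measures = [
    ("Point-in-time homeless count (monthly)", "executive_legislative"),
    ("Residential client nights at shelters", "departmental"),
    ("Transit operating cost per passenger trip", "executive_legislative"),
    ("Library program attendance", "departmental"),
]

grouped = defaultdict(list)
for name, audience in measures:
    grouped[audience].append(name)

for audience, items in sorted(grouped.items()):
    print(f"{audience}: {items}")
```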

Executive and legislative measures

These measures are fluid and focus on major programs and policies that need executive and legislative attention. Once progress is made, a measure may no longer be highlighted, and the organization will shift its concentration to other areas that need support. When trying to determine which measures to track, consider programs that may experience:

  • Rising service demand (homelessness, for example)
  • Flat demand but rising costs (public transit – bus services)
  • Shifts in services (libraries)
  • Rising cost of delivery (public safety)
  • Loss or reduction in funding – often driven by changes in non-general fund support

What is a good measure?

Begin by identifying the critical measures you intend to track. This example tracks homelessness issues driven by rising service demand. The urgency may be highlighted by an existing monthly census of the homeless population, broken out by age and gender, that shows the trend is heading in the wrong direction. Next, identify measures that track outputs. These may include the number of residential client nights at shelters, the number of meals served, the number of career guidance sessions, and the number of homeless individuals placed into jobs.

The more challenging measures are outcome measures, which attempt to link program objectives and service delivery to results. These may include the number of homeless individuals permanently placed in housing, or the number who remain in jobs after 30 days, 60 days, and one year. When budgeting, you can then create decision packages that link outcomes to service delivery outputs. A decision package is a discrete budget request that combines dollars, narrative justification, and performance outcomes, and it includes the activities that will enable the outcome. Data in a decision package may include the following (a simple data sketch of such a package appears after the example):

The initiatives and budget:

  • Add 50 residential beds to homeless shelters to address rising demand. Cost: $43,000 one-time plus $63,000 per year ongoing for personnel and supplies.
  • Expand the job counseling program, increasing the current FTE from 0.75 to 1.0 at a cost of $23,000 in personnel costs.

The outputs:

  • Increase the number of job counseling recipients by approximately 17%, from 630 to 736 per year.
  • House 35 additional people per night from April to November and 50 additional people per night from December to March.

The outcome:

  • The number and percentage of clients who remain employed for six months (there are dozens of other outcomes that could be tracked in this area).
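
As an informal illustration only, the sketch below represents the decision package above as a simple data structure linking initiatives, outputs, and the intended outcome; the class layout and field names are assumptions for demonstration, not a schema from any particular budgeting product.

```python
# Minimal sketch of a decision package: budget initiatives linked to
# planned outputs and the outcome they are meant to enable.
# The structure is an illustrative assumption, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class Initiative:
    description: str
    one_time_cost: float = 0.0
    ongoing_annual_cost: float = 0.0

@dataclass
class DecisionPackage:
    title: str
    initiatives: list = field(default_factory=list)
    outputs: list = field(default_factory=list)    # measurable service outputs
    outcomes: list = field(default_factory=list)   # results the package should enable

    def total_first_year_cost(self) -> float:
        return sum(i.one_time_cost + i.ongoing_annual_cost for i in self.initiatives)

package = DecisionPackage(
    title="Homeless shelter capacity and job counseling expansion",
    initiatives=[
        Initiative("Add 50 residential beds to homeless shelters", 43_000, 63_000),
        Initiative("Expand job counseling program (0.75 to 1.0 FTE)", 0, 23_000),
    ],
    outputs=[
        "Increase job counseling recipients from 630 to 736 per year",
        "House 35 more people per night Apr-Nov, 50 more Dec-Mar",
    ],
    outcomes=["Number and percent of clients still employed after six months"],
)

print(f"First-year cost: ${package.total_first_year_cost():,.0f}")  # -> $129,000
```
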
For a proper evaluation, an organization would need to measure this outcome for a set period of time to determine whether the original outcome has been met. Do you expect the initiative to show results immediately, or is there a lag? As part of this process, each initiative should have a checkpoint to evaluate its effectiveness, often in the form of a cost-benefit analysis. During the evaluation, you should also be open to the possibility that your programs are very effective, just not at exactly what was intended. The original outcome was to track employment, but the program manager may also note that certain types of crime have fallen significantly or that hospital visits have decreased; both may be related to the initiative. Your tracking should allow not just the intended outcome but a holistic review of outputs and related outcomes to be included in the evaluation.

Some outcomes may be expensive or difficult to track. Does a city have the data to prove changes in carbon emissions caused by a program? It is doubtful, but you can still enact policies to plant trees, add electric car charging stations, tax parking or gasoline, and encourage solar panels and green buildings, and then measure the outputs for these programs. Do you have the appropriate survey methodology in place to track citizen satisfaction with State parks? A simple survey may have the wrong sample size, and its questions may not be phrased in a way that elicits honest answers. As part of the decision package, you can hire a research expert to design the tracking methodology for those outcomes. In some cases, measurement takes a great deal of preparation, but you can still track the outputs associated with the outcomes until you are ready.

Departmental measures

Departmental measures often have a different focus, concentrating on inputs and outputs. Most departments still track outcome measures for key initiatives; departmental measures are in addition to those. Here are some key questions to ask to determine whether a measure is worth tracking:

  • For output measures, do they relate to a relevant outcome?
  • Can you rely on the data that is being tracked and reported for that measure?
  • Is tracking it more cumbersome than the value of the information being collected?
  • Based on the results you have garnered, are there clear actions that can be used to improve performance?

If the answer is no to any of these questions, the measure is likely unproductive. A short sketch of this screening logic follows.
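
The sketch below is a minimal illustration of applying the four questions as a pass/fail filter; the example measure and the yes/no answers are hypothetical, and the questions are paraphrased so that a "yes" is always the favorable answer.

```python
# Minimal sketch: screening a candidate measure with the four questions above.
# Questions are paraphrased so that "yes" (True) is always the favorable answer;
# a single "no" flags the measure as likely unproductive.
SCREENING_QUESTIONS = [
    "Does the output relate to a relevant outcome?",
    "Can you rely on the data tracked and reported for this measure?",
    "Is the value of the information worth the burden of collecting it?",
    "Are there clear actions that could improve performance based on the results?",
]

def is_worth_tracking(answers: list[bool]) -> bool:
    """answers: one boolean per screening question, in order."""
    return len(answers) == len(SCREENING_QUESTIONS) and all(answers)

# Hypothetical example: reliable, relevant data, but no clear follow-up actions.
candidate = "Number of counter transactions processed per clerk"
answers = [True, True, True, False]
print(f"{candidate}: {'keep' if is_worth_tracking(answers) else 'likely unproductive'}")
```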

Benchmarking

There is a natural desire to track similar functions across organizations; the City of Fresno and the City of Sacramento, for example, are close to each other and provide similar services. In our past reviews of benchmarking, however, it has proven challenging to compare departmental metrics because of differences in how measures are defined and in organizational structure. Instead of simply comparing 'cost per permit issued', organizations can conduct qualitative reviews that discuss policies, software, and staffing models to see whether either party could be more efficient or effective. Nearly all of our State and local government clients have such a peer network established and lean on it when undertaking projects.

Do we need more measures?

More measures are not always the right approach. There is often pressure on each program or service to have a certain number of measures, but the number of measures you have is not indicative of the efficacy of your performance management program. Avoid, for example, placing targets on departments to have a defined number of measures.

Central services are particularly difficult. For example, measures that count the number of operating budgets developed, the number of Comprehensive Annual Financial Reports (CAFRs) produced, or whether 26 payroll runs were executed on schedule are not meaningful, because these tasks must be completed regardless. There will never be zero CAFRs produced, nor eleven. These are expected minimum standards and have no impact on operations. Tracking and presenting such statistics distracts analysts from focusing on the initiatives that can have a positive impact. Instead, consider the number of decision packages analyzed by the budget office, the number of financial audits completed, or the percentage of IT help desk calls resolved within one hour.

Most clients are better off with fewer measures. Focus on the key issues for each performance category and do not attempt to track every activity. Be prepared to change what you track over time, and be sure the software for tracking and displaying results is flexible enough to shift as your priorities change.

Next steps

That is a lot to take in, so where can you start? The first step is to make an honest evaluation of your organization's capabilities. Is there support at the highest levels of the organization for extensive investment? Do you have the staff to take on such an initiative? Are the necessary technologies already in place, or can they easily be acquired to enable the process? Depending on these answers, you may be ready for a full performance management initiative, or for incremental improvements implemented over time.

For over 20 years, GTY Technology’s budgeting solutions companies have been helping State and local governments across North America streamline their budget processes. Our consultative approach to implementation and support means we work with our clients to evolve our software to meet their end-to-end budgeting, forecasting and performance reporting requirements. For more information about GTY’s government budget and performance software solutions from Questica and Sherpa Government Solutions, visit gtytechnology.com.
