Wikipedia defines Performance measurement as:
the process whereby an organization establishes the parameters within which programs, investments, and acquisitions are reaching the desired results. This process of measuring performance often requires the use of statistical evidence to determine progress toward specific defined organizational objectives.
Why Measure Performance in Government?
The biggest value in measuring our own performance is discovering whether our activities are working, adjusting accordingly, and then repeating the cycle all over again!
Publicly funded organizations are accountable for reporting on how money was spent. That part’s easy. More importantly, with our mandate to serve Canadians, we must also demonstrate that spending that money has improved the communities we live in. Funds continue to flow from taxpayers, through the political machinery and into departmental programming depending, in part, on our ability to report to Canadians on the results achieved, not just the amount spent or the activities performed.
A number of policies and processes require that we use data to report on the success of how we work (Departmental Performance Report) and what we do (Report on Plans and Priorities). Everything we do must be measured in this way. This may seem obvious, but in very large organizations that are not driven by profit to shareholders, defining what success looks like can be a hotly debated topic. And that’s before we even start trying to figure out what indicators to use to show that success has indeed taken place!
Implementing a continuous improvement cycle using client feedback to drive improvements to a product or service is likely to result in a strong rating on the Management Accountability Framework, specifically the Area of Management “Citizen-focused Service”.
What to Measure
A Performance Measurement Framework is a conceptual framework that uses a set of tools to tell the story that demonstrates the impact of activities. No two frameworks look the same, because our activities and goals differ from one another and are constantly evolving. A Performance Measurement Strategy describes the goal, indicators, targets, data to be collected, timeframe and reporting cycles.
Fortunately, best practices abound. A framework for measuring performance can be unearthed through this process for any project, large or small:
Identify goal > Determine indicators of success > Set targets > Collect data to measure against target > Report > Start over
Reporting should not be considered the final stage in the process but rather the beginning of the change management process. As resources are finite, decisions need to be taken about where they should be allocated and in what order projects should be completed. Information provided in reports should inform these decisions. A reporting cycle should be developed that aligns to a release cycle or series of improvements of the project, product or service. This, essentially, is evidence-based decision-making.
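As a minimal sketch, the goal > indicators > targets > data > report cycle above can be captured in code. Everything here is invented for illustration (the goal, indicator and target are hypothetical, and real frameworks track far more than one number):

```python
# Hypothetical sketch of one measure moving through the cycle:
# identify goal -> choose indicator -> set target -> collect data -> check against target.
from dataclasses import dataclass, field

@dataclass
class Measure:
    goal: str          # what success looks like
    indicator: str     # how we will recognize it
    target: float      # the bar we set for ourselves
    observations: list = field(default_factory=list)  # data collected each cycle

    def record(self, value: float) -> None:
        """Collect one data point for this reporting cycle."""
        self.observations.append(value)

    def on_target(self) -> bool:
        """Report: does the latest observation meet the target?"""
        return bool(self.observations) and self.observations[-1] >= self.target

# Illustrative numbers only.
m = Measure(goal="Reduce form errors",
            indicator="error-free submissions (%)",
            target=90.0)
m.record(82.5)   # first cycle: below target, so we adjust and start over
m.record(91.0)   # second cycle: target met
print(m.on_target())  # prints True
```

The point of the sketch is the loop, not the class: each reporting cycle feeds a decision (adjust, reallocate, or continue), and then the cycle starts over.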
For each project undertaken, each of these should be measured:
- Efficiency – An efficiency indicator demonstrates value for money spent, cost savings, or costs avoided.
- Effectiveness – An effectiveness indicator demonstrates the impact of the activity.
- Satisfaction – A satisfaction indicator demonstrates whether the target audience is happy with the outcome. So what if an activity was successful or low cost if the people it was meant for are still unhappy?
The content of each of these indicators changes depending on the activity or project. I suggest practicality and relevance over stringent and rigorous adherence to data collection principles. That said, if the data is questionable, it’s not going to be meaningful either.
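To make the three indicator types concrete, here is a hypothetical indicator set for a single project. The metric names, targets and actuals are all invented for illustration; the only real idea from the text is that every project should carry an efficiency, an effectiveness and a satisfaction indicator:

```python
# Hypothetical indicators for one project; all values are illustrative.
indicators = {
    "efficiency":    {"metric": "cost per transaction ($)",     "target": 2.00, "actual": 1.75, "lower_is_better": True},
    "effectiveness": {"metric": "tasks completed online (%)",   "target": 80,   "actual": 85,   "lower_is_better": False},
    "satisfaction":  {"metric": "client satisfaction (of 5)",   "target": 4.0,  "actual": 4.2,  "lower_is_better": False},
}

def met(ind: dict) -> bool:
    # An efficiency target is met when the actual cost is at or below target;
    # the other indicators are met when the actual meets or exceeds the target.
    if ind["lower_is_better"]:
        return ind["actual"] <= ind["target"]
    return ind["actual"] >= ind["target"]

for name, ind in indicators.items():
    print(f"{name}: {ind['metric']} – {'target met' if met(ind) else 'target missed'}")
```

Note the direction flag: "better" means lower for cost-type indicators and higher for the others, which is easy to get backwards in a report.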
When to Measure
Tracking periodically throughout the project lifecycle is key: it enables ongoing adjustments based on incoming data from users of the project, product or service, and analyzing that information as it arrives helps ensure the project’s success. Tracking should be cyclical, attuned to the cadence of the project. Leaving tracking and reporting to the end of a project leaves no room for changes to be made.
Methods and Methodology
Data on its own is meaningless; to make data meaningful, it needs to be contextualized. Using a variety of data sources helps to paint the whole picture, with the choice of method depending on what type of information you want to collect. Each method of collecting data has a purpose. Statistics, offline data (for example, time saved because a form produces fewer errors), satisfaction surveys, usability testing, and A/B and multivariate testing can all be used to measure website goals. For each method, though, the goal of the activity has to be understood, since the testing validates assumptions.
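As a minimal sketch of what checking an A/B test result can look like, here is a standard two-proportion z-test using only the Python standard library. The conversion counts are invented, and this is one common technique rather than any prescribed departmental method:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test comparing conversion rates of A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: 100/1000 conversions on version A, 150/1000 on version B.
z, p = two_proportion_z(100, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value suggests the difference between the two versions is unlikely to be chance, which is exactly the "validating assumptions" step the text describes: without a stated goal and hypothesis, the numbers alone tell you nothing.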
I hope this is enough info for any newbie getting started on their own performance measurement framework.