Social change leaders hit the ground running: challenges come up, forks in the road present themselves, and their business models constantly evolve. Ideally, they have good data on hand to help with tough decisions and to provide insights into their social, environmental and financial bottom lines.
Social innovation, by its very definition, involves finding new and more effective approaches to addressing the complex social and environmental problems of our times. We need to collectively learn from what is happening so that we can scale these efforts. Impact measurement is seen as critical to developing the sector and building new financial support. Yet if nascent impact measurement methods are imposed on organizations prematurely, we may actually be limiting growth.
The problem with many impact measurement efforts is that organizations get stuck tracking easy-to-measure indicators that aren't fundamentally important to the emerging work and don't ultimately support decisions on the ground. Information that would be more useful goes untracked, and the processes for learning (and potentially sharing) what works and what doesn't can be lost.
In theory, we want to measure outcomes (i.e. the benefits or changes that have come about as a result of the work underway), but we often struggle to measure even outputs (i.e. the measurable units of production created – e.g. # of workshops, # of individuals served, etc.). If a social-change organization is being innovative and testing out new ways of doing things, then output measurement will miss the mark, because the outputs are constantly evolving. But credible outcome measurement is not quickly or easily attainable, particularly without significant expertise and resources. And even with a clear picture of outcomes, it is another thing entirely to tease out how much the organization contributed to them. Nevertheless, impact measurement efforts may lock in a set of output measures to be tracked, or measure outcomes or proxy outcomes based on fuzzy assumptions about attribution and causation.
What are we to do? There is no simple solution, but many of the following ideas have guided us in developing Demonstrating Value’s tools and resources. We need to:
- Build a culture of performance measurement that is flexible and can grow as the organization grows. It is not enough to collect data; the capacity to actually analyse and use data in decision-making has to develop alongside it.
- Take the time and energy to systematically learn which strategies work and which don't, and involve all those who are key to the work. This doesn't mean endless meetings, but it does mean we shouldn't substitute a few numbers for people's experience and wisdom. Numbers should only be a starting point for discussion.
- Fund and measure outcomes properly, and share the data. Most outcome data are valuable to many, so we should look to collective projects to fund this work. We already do this through government statistics agencies, yet those data are often at a level of aggregation, in a format, or at a cost that makes them inaccessible. University projects, open data movements, and shared outcome frameworks and tools are other valuable approaches.
- Embrace systems complexity. It is tempting to want a simple matrix of aggregate measures, but social change is messy. We can't lose the context and the insights that can't be measured. Where a calculation involves significant uncertainty and assumptions, it may be misleading to roll it up into an aggregate number.
- Learn from new thinking in evaluation (like developmental evaluation). This community has already spent decades trying to measure impact, with mixed success and plenty of insight.