I recently discovered the meaning of life in a New York Times opinion piece: ‘A Formula for Happiness’. Well, not totally, but I now have some pretty cool insights. What is fascinating is that the author never proposes a mathematical formula (which I would love to see), but he brings up evidence on different determinants of happiness and how they interrelate. I was struck by how useful it was to look at such a BIG question in this way, even if we can’t actually articulate the math. Happiness is big, complex and messy. Businesses, programs and other pursuits are often pretty big, complex and messy too. Can we apply more ‘formula-thinking’ to how we measure success in business and programs, even if we never come up with the perfect mathematical equation?
In the world of business, success is measured via performance indicators that are tracked against targets for different goals, which are often the responsibility of particular departments or teams. This may be rooted in a strategy map that considers how learning, growth and innovation, process improvement, and customer satisfaction relate to revenue growth. In program evaluation, measurement is usually rooted in a logic model that identifies indicators in terms of the resources we put into something (inputs) that create something (outputs) that may lead to something (outcomes).
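The input → output → outcome chain above can be sketched in code. This is a minimal, hypothetical illustration (the class names, the tutoring program, and all numbers are invented for the example, not drawn from any real framework):

```python
# Sketch of a program logic model: inputs -> outputs -> outcomes,
# each stage carrying indicators tracked against targets.
# All names and figures are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Indicator:
    name: str
    target: float
    actual: float = 0.0

    def on_track(self) -> bool:
        # An indicator is "on track" once actual meets or exceeds target.
        return self.actual >= self.target


@dataclass
class LogicModelStage:
    label: str  # "inputs", "outputs", or "outcomes"
    indicators: list = field(default_factory=list)


# Hypothetical example: a tutoring program
inputs = LogicModelStage("inputs", [Indicator("volunteer tutor hours", 500, 620)])
outputs = LogicModelStage("outputs", [Indicator("students tutored", 100, 88)])
outcomes = LogicModelStage("outcomes", [Indicator("students improving a grade level", 60, 0)])

for stage in (inputs, outputs, outcomes):
    for ind in stage.indicators:
        status = "on track" if ind.on_track() else "behind"
        print(f"{stage.label}: {ind.name} -> {ind.actual}/{ind.target} ({status})")
```

Even a toy structure like this makes the point of the post concrete: the indicators only gain meaning from the chain that links them, not from any single number in isolation.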
In the measurement world we are not without some guiding formulas, but whether they sufficiently meet our needs, and whether they have been updated and adjusted as our organizations evolve, is something we all need to look at critically. Here are some ideas to help you bring more ‘formula-thinking’ into your measurement practices.
- Think outside the box and outside your bubble. If factors outside of your direct control have a significant influence on achieving an outcome/goal, do you track them? This could mean factors outside of a program or department, and beyond the boundaries of the organization as a whole. Michael Quinn Patton has a great example about systems thinking that looks at how a simple logic model for a teenage pregnancy program (one that seeks to instill healthier behaviours to improve pregnancy outcomes) can be improved by thinking about the people and institutions (peers, schools, etc.) and their relationship to what the program seeks to accomplish. (Utilization Focused Evaluation, p. 360)
- Think about what’s going on and why. Be more explicit about what you think happens to reach a goal and/or to create impact. This means going beyond listing indicators to relating them to a ‘theory of change’ that includes explicit assumptions about connections between outcomes and how the work you do will bring them about.
- Focus and refine your indicators based on what you are learning (or want to learn). Once you have identified measures, you will need to refine them in practice. Many organizations reduce the number of indicators they collect because they realize some are more important than others, some are too hard to collect, or some missed the mark in measuring what actually mattered. However, don’t limit yourself to a ‘smaller is better’ mantra. The number of measures should balance your need to get the best insights possible (given your resources), and in some cases this can mean adding indicators to help you understand the underlying dynamics that led to a result. Take the time to think critically about what you have, and be flexible enough to update and change measures to reflect organizational changes.
- Analyze relationships by actually developing an analytical model. Depending on the nature of what you are measuring, it may be possible to apply what you've measured to future planning in an analytical model that allows you to test key decision factors against the outcomes that you care about. Here is an example: http://www.demonstratingvalue.org/snapshots/interactive-example-planning-tool
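To make the last idea concrete, here is a minimal sketch of what such an analytical model might look like in code. The decision factors, the outcome, and every coefficient are illustrative assumptions invented for this example; a real model would derive its relationships from the indicators you have actually collected:

```python
# Hypothetical planning model: relate decision factors (levers you
# control) to a projected outcome, then compare scenarios.
# The linear form and all coefficients are illustrative assumptions.

def projected_outcome(tutor_hours: float, sessions_per_student: float) -> float:
    """Project the number of students improving a grade level
    from two decision factors, using an assumed linear relationship."""
    return 0.08 * tutor_hours + 4.0 * sessions_per_student


# Test scenarios before committing resources
baseline = projected_outcome(tutor_hours=500, sessions_per_student=6)
expanded = projected_outcome(tutor_hours=800, sessions_per_student=6)

print(f"baseline plan: ~{baseline:.0f} students")
print(f"expanded plan: ~{expanded:.0f} students")
```

The model itself is trivial; the value is in the discipline it forces. Writing the relationship down, even as a rough linear guess, makes your assumptions explicit and testable, which is the heart of ‘formula-thinking’ even when the perfect equation never arrives.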