
Building a Culture of Strategic Data Use

November 8, 2017

Social impact leaders face a daunting paradox.

On the one hand, they operate in an increasingly demanding environment, as foundations require more rigorous evidence of impact. On the other hand, organizations face declining revenue and limited internal capacity for collecting and using data.

Over the past several years, ImpactED has partnered with many public and nonprofit organizations wrestling with these very real challenges. We work collaboratively with our clients, helping them develop meaningful and practical approaches to data collection and evaluation so that they can more effectively use data to inform strategy and drive results. Our clients span a diverse range of sectors, including city agencies, educational and cultural institutions, environmental advocacy groups, and community-based nonprofits. Below are three lessons we’ve learned about how evaluation can be used to maximize impact.

Lesson #1: Strategy matters.

There’s a lot of talk about data collection methods and analytic procedures: What is the best mode for administering surveys to get a strong response rate? How do we isolate the impact of a program? These questions are important, but until an organization has a clearly articulated strategy, the answers won’t have much meaning. Program logic models depict the theory and assumptions underlying a program, policy, or strategy by linking activities and processes to short- and long-term outcomes. Organizational leaders should start any data collection effort by creating a program or policy logic model and ensuring that the data they collect align with the indicators that matter most.
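
To make the idea concrete, here is a minimal, hypothetical sketch of a logic model laid out as a simple Python data structure. The program, activities, outcomes, and indicator names are invented for illustration only; they are not drawn from any particular client or evaluation.

# A minimal, hypothetical sketch of a program logic model as a data structure.
# Every name below is an invented example; substitute your own program's elements.

logic_model = {
    "program": "Example youth mentoring program",  # hypothetical
    "inputs": ["staff time", "volunteer mentors", "funding"],
    "activities": ["recruit and train mentors", "weekly mentoring sessions"],
    "outputs": ["number of mentors trained", "sessions delivered per youth"],
    "short_term_outcomes": ["improved school attendance"],
    "long_term_outcomes": ["on-time high school graduation"],
}

# Each indicator you plan to collect should trace back to a component of the
# model, so data collection stays aligned with the outcomes that matter most.
indicators = {
    "attendance_rate": "short_term_outcomes",
    "graduation_rate": "long_term_outcomes",
    "sessions_per_youth": "outputs",
}

for indicator, component in indicators.items():
    print(f"{indicator} -> {component}: {logic_model[component]}")

The point of the sketch is simply that each indicator maps to a specific component of the model; any proposed data collection that does not map to one is a candidate to drop.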

Lesson #2: Evaluation should balance rigor with relevance.

Organizations often contract with external evaluators to assess program implementation and impact. That independence matters: it maintains objectivity and brings a rigorous third-party perspective to evaluating the impact of an investment. However, the desire for rigor should be balanced with the need for relevance. Organizational leaders must often make decisions with incomplete information, yet evaluations frequently conclude only that there is not enough evidence to draw firm conclusions. To be relevant, an evaluation needs to offer actionable recommendations, even when the evidence base is only descriptive.

Lesson #3: Data can be a powerful communication tool. 

Evaluation reports contain a wealth of information about the effectiveness of policy and program implementation. Too often, though, they overwhelm the reader with statistics rather than expressing complex ideas with clarity and precision. Neuroscience research suggests that emotion, not just logic, drives decision-making. Indeed, research by Stanford Professor Chip Heath has found that 63% of people can remember stories, but only 5% can recall an individual statistic. Organizational leaders should use research and evaluation data to tell their story of change.
