If you are on the fence about developing program evaluations because you are scared of scoring ‘low’ in performance, consider the following:
If you have programs that are scoring ‘low’ in performance, it means:
Foundations and high-end donors believe in the work you do and want to see you and your mission succeed. But they have limited resources to apply towards funding and want to be sure that they are investing their funds appropriately. High-end donors especially want to see their hard-earned money used well. Most of these donors are businesspeople who value the use of metrics and evaluation in their own businesses. They are successful because they evaluate, make hard decisions, and course correct (cut products or programs, redesign, innovate, reallocate resources, etc.). They value the business pillars of transparency, comparability, and accountability, all of which you can easily introduce into your programs.
Unless they have ties to the programs you are running, few donors will stick around if you aren’t asking the right questions and growing your nonprofit. Even if they choose to stay, they may begin questioning your effectiveness. If it were your money, wouldn’t you do the same? So, why not build their trust and your credibility?
Why not ask the hard questions and course correct?
When you find that your outcomes do not align with your target goals, you should be able to examine the measurement process and the tools that produced those results. The learning approaches offered, the delivery mechanisms, and other variables can help you determine where improvements should be made. This reflection on your program ultimately brings you back to the design table.
You must keep an open mind and understand that changes may need to happen at any one of these levels – design, evaluation, or implementation.
First, take the results back to your stakeholders. This should not be a single group, but rather many groups involved at various levels: board members, management, employees, and, probably most importantly, your beneficiaries. Because of the nature of the questions, and to elicit honest responses (the most useful kind), it is generally best to bring in outside help for this step. A third party provides a safe place for discussion, so key stakeholders can give honest, thorough, anonymous feedback.
And yes, it is worth the cost to be sure that your data are reliable.
This process of reflection should include sharing the results and asking appropriately phrased questions to ‘drill down’ to the issue(s). Maybe it is an inappropriate fit of service or content. Maybe it is a misalignment between mission and demographic. Maybe it is a complete lack of understanding about what the beneficiaries need or want. Once the issue has been identified, ask your stakeholders (especially the beneficiaries) their ideas on how things could be improved. Then, it’s back to the design, implement, and evaluate stages.
Be sure to communicate with your stakeholders (especially your beneficiaries and staff) and funding sources (foundations and donors) to explain what you are doing and why.
Honest communication is critical to create trust and confidence that the program has the beneficiaries’ best interest in mind.
Lastly, don’t be scared to refocus and, if need be, trim your programs. It’s best to stay true to your mission and focus only on where you have a competitive advantage – what do you do well? What makes you different from other nonprofits working in the same field?
Go on, make hard decisions.
Remember, build trust by being transparent.
Sources
From “What Nonprofits Need to Learn From Business,” The Chronicle of Philanthropy, URL: https://philanthropy.com/article/Opinion-What-Nonprofits-Need/233892
From “Using the Evaluation Results,” University of Texas at El Paso, URL: https://academics.utep.edu/Portals/951/Step%206%20Using%20the%20Results%20in%20Evaluation.pdf
This is Part 1 of the Program Evaluation Whitepaper series.
There are a number of reasons why evaluation should become routine practice in your nonprofit.
According to the Performance Imperative, a recently released framework for social-sector excellence, high-performing organizations (i.e., those able to deliver, over a prolonged period of time, meaningful, measurable, and financially sustainable results for the people or causes they serve) typically:
Good intentions, wishful thinking, and one-off testimonies don’t cut it. The social and public sectors are increasingly moving funds and resources towards evidence-based efforts. Regardless of size and budget, there are always incremental moves that nonprofits can make towards performance improvement.
The investment is worth it.
Building a data/evaluation-friendly culture takes time. Not everyone will be on board with a decision to move to evaluation – change is uncomfortable. Sometimes, it’s the board pushing managers to evaluate, and sometimes it is management appealing for board approval of funding for evaluations. Regardless, it’s usually a series of small successful steps at various levels that leads to larger, broader implementation.
Informing Change created a diagnostic tool that can help you determine where you are in terms of practical evaluation readiness. As stated on the site,
“This Evaluation Capacity Diagnostic Tool… captures information on organizational context and the evaluation experience of staff and can be used in various ways. For example, the tool can pinpoint particularly strong areas of capacity as well as areas for improvement, and can also calibrate changes over time in an organization’s evaluation capacity. In addition, this diagnostic can encourage staff to brainstorm about how their organization can enhance evaluation capacity by building on existing evaluation experience and skills. Finally, the tool can serve as a precursor to evaluation activities with an external evaluation consultant.”
It’s worth the time to walk through the tool to reveal any potential red flags, whether in organizational context or in your staff’s skills. The existence of flags does not mean that you cannot continue to move forward; identifying them helps you create a game plan for moving forward with the greatest success. Among other things, these flags help you assess where you need the most help now.
Your program evaluation plans depend on what information you need to collect in order to make major decisions. For example, do you want to know more about what is actually going on in your programs, whether your programs are meeting their goals, the impact of your programs on recipients, etc.? Understanding your intended purpose allows you to build a more efficient plan, and will ultimately save you time and resources.
As a nonprofit, you will most likely be constrained by (or at least have to consider) the financial costs of conducting program evaluations. With this in mind, the broader your evaluation, the less depth each aspect of it can receive; if deeper levels of information are more important, you will need to narrow the scope of the evaluation. Of course, it is possible to obtain both broad and specific information using a variety of methods; it will cost you more, but it is possible.
When considering program evaluations, there are two kinds: formative and summative. Both are informative, but they approach evaluation differently.
A logic model is an excellent preparation tool as it visually outlines how your organization does its work (the theory and assumptions behind the programs).
Questions must be crafted carefully and with great consideration for the purpose of the evaluation. They must also be reflective of the intended scope, be it narrow or broad.
To ensure that questions are appropriately tied to the program, it is good to refer to program objectives and goals, strategic plans, needs assessments, benchmark studies, and ultimately, the mission of your organization.
Regardless of whether you are taking on a formative or summative evaluation, the process is the same:
If you are thinking of implementing program evaluations across your organization, it is good to test your evaluation approach on one or two programs first in a pilot evaluation. In this way, you can determine what works best for your nonprofit culture. This also gives your team the time to grow in ownership and expertise of the entire process.
Evaluations do not need to be highly complex in order to be worthwhile. For the most part, if you have staff and resources that can devote adequate time to the evaluation process, there is no need to hire a professionally trained individual or company to carry it out. In fact, handling evaluations internally can be the better choice, as you begin to build and integrate new skills (organizational learning) by working through the process.
The only time it is critical to have an outside professional conduct the evaluation is when a grant or funding source stipulates it, or when there are failing programs or personnel implications that would be better handled by an objective third party (persons outside the organization).
Having said that, it is an excellent practice to have an expert review your plans and questions, so that you know your efforts will yield usable (valid, reliable, and credible) data. When possible, having an expert check in with you along the way to ensure processes are in line with the methodology further ensures that your data can be trusted. By including this expertise in the process, you can be confident when presenting your findings to your donors and boards.
The last question is especially important for nonprofits. If there is no commitment to make decisions based on the findings, why go through the evaluation process?
Once you have done this, you will be able to select the type of evaluation most appropriate for your needs.
Sources
From “Program Evaluation Model 9-Step Process,” Sage Solutions, URL: http://region11s4.lacoe.edu/attachments/article/34/(7)%209%20Step%20Evaluation%20Model%20Paper.pdf
From “Basic Guide to Program Evaluation (Including Outcomes Evaluation),” Free Management Library, URL: http://managementhelp.org/evaluation/program-evaluation-guide.htm
From “The Performance Imperative: A framework for social-sector excellence,” Leap of Reason Ambassadors Community, licensed under CC BY-ND: https://creativecommons.org/licenses/by-nd/4.0/
From “Logic Model Development Guide,” W.K. Kellogg Foundation, URL: http://www.smartgivers.org/uploads/logicmodelguidepdf.pdf
From “Developing a Plan for Outcome Measurement: Chapter 3 Logic Models,” Strengthening Nonprofits, URL: http://www.strengtheningnonprofits.org/resources/e-learning/online/outcomemeasurement/default.aspx?chp=3
This is Part 3 of the Program Evaluation Whitepaper series. The previous papers covered what you should consider before starting a program evaluation and how to get started. Be sure to read those papers before diving into writing your plan.
For an evaluation, you should have done the following:
So what now? The next step in the process is to get started writing the evaluation plan.
We propose something that looks like this:
At the end of this document, you will see an example of how all of this is put together.
Aside from inputting the outcomes and indicators, answer the following questions in order to complete the plan:
If you are new to research and evaluation, you may need to add accounting detail suited to research and development expenses. Add as much detail as possible (even down to geographic location) without making it too laborious to track. With this detail, you will be able to better understand the breakdown of costs, variances between projects, differences in costs between states or countries (if you are an international nonprofit), and so on. Suggestions for details include the program or project, the expense category, and the geographic location. Pivot tables can be your best friend here, as the sketch below shows.
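As a minimal sketch of the pivot-table idea, assuming your expense records can be exported into a pandas DataFrame (the column names and figures here are hypothetical, not tied to any particular accounting system):

```python
import pandas as pd

# Hypothetical expense records; in practice, export these from your
# accounting system with whatever level of detail you track.
expenses = pd.DataFrame({
    "program":  ["Tutoring", "Tutoring", "Mentoring", "Mentoring", "Tutoring"],
    "location": ["Texas", "Oklahoma", "Texas", "Oklahoma", "Texas"],
    "category": ["Materials", "Materials", "Staff", "Staff", "Evaluation"],
    "amount":   [1200.00, 950.00, 4300.00, 3900.00, 600.00],
})

# Break total costs down by program and location to spot variances at a glance.
summary = pd.pivot_table(
    expenses,
    values="amount",
    index="program",
    columns="location",
    aggfunc="sum",
    fill_value=0,
)
print(summary)
```

Re-cutting the same records by expense category or time period is simply a matter of changing the `index` and `columns` arguments.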
When considering surveys and quantitative data, always:
If you are submitting your outcomes for any type of grant (foundation or federal), get some extra help.
This applies to submitting the actual grant proposal as well. The stipulations for data collection and analysis are far more restrictive and demanding, and you risk losing your funding, or not being considered for the grant at all, if you do not have a sound data collection and analysis plan in place.
Outcomes data rarely tell you why the outcomes occurred.
Outcomes data merely identify what works well and what does not, which may be the very reason many nonprofits do not engage in measuring outcomes. The question, “What if what we are doing isn’t really making a difference?” is enough to throw any manager, CEO, or board into panic mode. But rather than fearing these results, nonprofit leaders should embrace them as a way of improving their programs. When you identify what is not working well, you can take a step back, redesign, and try again. This redesign process should include qualitative research with various stakeholders (beneficiaries, administrators, providers, etc.) to pinpoint the breakdown in service or design. The “A-ha” moment is worth it.
Evaluation Plan Outline for Data Collection (example)
Evaluation Plan for Data Collection (blank)
Sources
From “Evaluation Plan Workbook,” Innovation Network, URL: http://www.innonet.org/resources/eval-plan-workbook
From “Building a Common Outcome Framework to Measure Nonprofit Performance,” Urban Institute, URL: http://www.urban.org/sites/default/files/alfresco/publication-pdfs/411404-Building-a-Common-Outcome-Framework-To-Measure-Nonprofit-Performance.PDF
From “Candidate Outcome Indicators: Youth Mentoring Program,” Urban Institute and the Center for What Works, URL: http://www.urban.org/sites/default/files/youth_mentoring.pdf
This is Part 2 of the Program Evaluation Whitepaper series. The previous whitepaper covered what you should consider before starting a program evaluation. Please read that paper before continuing on.
For this evaluation, you should have determined the following:
Going through the question-and-answer process, you have probably already determined whether you need an evaluation that informs your audience about the program as it happens (formative) or whether you need an evaluation that looks at the completed program (summative).
Couple this with the resources available to you, and you probably have a pretty good idea of your scope. The fewer the resources, the narrower and less involved the evaluation process will be.
After you have determined the scope of your evaluation, you can get started writing questions.
The tricky part about writing questions is keeping them tied to your evaluation purpose and not getting sidetracked with extraneous questions. Your overarching question is never really answered by itself; it is answered by compiling the answers (evidence) to a number of sub-questions.
It is important to get your sub-questions (i.e., your evaluation questions) right.
When thinking about your questions, reflect on them in terms of the considerations above, along with whether or not you are looking to evaluate process, impact, or outcomes. Process evaluations help you see how a program outcome or impact was achieved, while impact and outcome evaluations look at the effectiveness of the program in producing change. Within the body of research on evaluation, you will find conflicting definitions of impact and outcome measurement—mostly about which actually comes first. The bottom line is they both look at change that occurs on the beneficiary’s part. Within the nonprofit sector, it will mostly be referred to as outcome measurement (we will refer to outcomes in this whitepaper as well), and these measurements will focus on changes that come from program involvement. These changes are shifts in knowledge, attitudes, skills, and aspirations (KASA), as well as long-term behavioral changes.
Most funders are concerned with nonprofits reporting outcomes.
Funders want to know, “Are you making a difference?” Outcome data answer that question, so keep the focus there. Process questions may look like, “What problems were encountered with the delivery of services?” or “What do clients consider to be the strengths of the program?” Outcome questions, however, ask end-goal questions, such as “Did the program succeed in helping people change?” and “Was the program more successful with certain demographics than others?”
With the focus on actual change, outcome questions should ask about shifts in KASA that can be attributed to participants’ involvement in the program. Furthermore, questions should relate to the ultimate goal of the particular program.
The chart below summarizes the types of evaluation questions. When discussing methods, quantitative refers to numeric information pulled from surveys, records, etc., and qualitative refers to more subjective, open responses that capture themes. A combination of the two is generally referred to as a ‘mixed methods’ approach.
| Evaluation Questions | What They Measure | Why They Are Useful | Methods |
| --- | --- | --- | --- |
| Process | How well the program is working; whether it is reaching the intended people | Tells how well the plans developed are working; identifies problems in reaching the target population early in the process; allows adjustments to be made before problems become severe | Quantitative, qualitative, or mixed |
| Outcome | Immediate changes brought about by the program; changes in KASA (knowledge, attitudes, skills, and aspirations); changes in behaviors | Allows program modification (materials, resource shifting, etc.); tells whether programs are moving in the right direction; demonstrates whether, or to what degree, the program achieved its goal | Quantitative, qualitative, or mixed |
Refer to your logic model when writing your outcome evaluation questions. Because you should have already listed outcome types (short term, intermediate term, and long term) and when they occur, your outcome questions should reflect the progression laid out in the logic model. Outcome questions should be written in a way that reflects the change you want to see, the intended direction of change (increase, improve, decrease, etc.), and the people the change is intended for. For example: improved school attendance among program participants.
When you are ready to start writing your questions, consider these criteria:
There are some excellent sources out there that can help walk you through the process of outcome measurement. One of the best for developing this plan is Strengthening Nonprofits’ Measuring Outcomes (see the sources for a direct link). There is no need to reinvent the wheel when it comes to resources; we can simply point you to the right sources and give you some key ideas to think about.
In order to actually measure change, you need criteria for data collection. Take an example from above, improved school attendance: the statement tells us what we desire to change, but not how it will be measured. Without a way to measure it, we do not know whether we are actually making a difference; we only know that it is our desired change.
When referring to impact and outcomes, measurements come in the form of performance indicators. Indicators are measures that can be seen, heard, counted, or reported using some type of data collection method. To measure your performance, you need a baseline (starting point) and a target (goal). For each impact or outcome statement (evaluation question), establish baseline and target values so you can effectively evaluate your performance.
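As a minimal sketch of the baseline-and-target arithmetic, assuming a single numeric indicator (the function name and the attendance figures below are hypothetical illustrations, not part of any standard toolkit):

```python
def progress_toward_target(baseline: float, current: float, target: float) -> float:
    """Return the share of the baseline-to-target distance covered so far."""
    if target == baseline:
        raise ValueError("Target must differ from baseline to measure progress.")
    return (current - baseline) / (target - baseline)

# Hypothetical indicator: average school attendance rate among participants.
baseline, current, target = 0.78, 0.84, 0.90  # 78% at intake; goal is 90%
print(f"Progress toward target: {progress_toward_target(baseline, current, target):.0%}")
# -> Progress toward target: 50%
```

A value of 0% means you are still at the baseline and 100% means the target has been met; values above 100% or below 0% are possible, and are themselves useful information about the program.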
Considerations when writing your indicators:
The great thing is that there are a number of groups who have already put together lists of indicators by sector. Be sure to check out Perform Well, The Urban Institute’s Outcomes Indicator Project, the Foundation Center’s TRASI, and United Way’s Toolfind.
Once you have done this, you will be able to select the type of evaluation plan appropriate for your needs.
Sources
From “A Framework to Link Evaluation Questions to Program Outcomes,” Journal of Extension, URL: http://www.joe.org/joe/2009june/tt2.php
From “Developing A Plan For Outcome Measurement,” Strengthening Nonprofits, URL: http://strengtheningnonprofits.org/resources/e-learning/online/outcomemeasurement/Print.aspx
From “Measuring outcomes,” Strengthening Nonprofits, URL: http://www.strengtheningnonprofits.org/resources/guidebooks/MeasuringOutcomes.pdf
From “Outcome Indicators Project,” Urban Institute, URL: http://www.urban.org/policy-centers/cross-center-initiatives/performance-management-measurement/projects/nonprofit-organizations/projects-focused-nonprofit-organizations/outcome-indicators-project
From “Candidate Outcome Indicators: Youth Mentoring Program,” Urban Institute and the Center for What Works, URL: http://www.urban.org/sites/default/files/youth_mentoring.pdf
This is Part 1 of the Needs Assessment Whitepaper series.
A needs assessment is the active, systematic process of gathering accurate information about the needs and assets of a particular community or target group.
Needs assessment findings are used to define the extent of the need within a community and the assets available in that same community to address it.
Only by knowing these elements can organizations create the most effective, appropriate, and timely programs and services.
These assessments are usually conducted as part of a strategic planning process, where programs or services are being conceptualized, developed, or revamped. Characteristics of a needs assessment include:
Needs assessments can be used in many different areas including:
Given the focus on nonprofit work, we will focus on the community needs assessment. Needs assessments generally focus on one of three areas:
Communities and organizations, like living organisms, are in a continual state of change. Political, economic, and social variables bring shifts in the demographics of each community (e.g., age, ethnicity, unemployment rate). Programs and services that were created for a community or particular audience years ago may not be what is needed (or wanted) at present.
As a nonprofit, if your purpose is to serve the community, then…
You need to stay informed about the change that’s occurring within your community.
When your nonprofit takes an active role in understanding the community, a natural by-product may be exactly what it takes to get your program or service going. By interacting with the communities you serve and using their input and feedback in your planning, you increase their understanding of their needs (their gaps), why those gaps exist, and why they must be addressed. And when people are involved in the process from the ground up, they are more likely to take up the programs and services being developed and less likely to resist change. Addressing their desires and concerns head-on can build a sense of ownership in the community. Understanding why services or programs are being offered empowers community members to use them.
Bottom line, needs assessments are used to guide decision-making by providing a justification for why your programs and services are needed.
Needs assessments should involve multiple people in each step of the process. They can be conducted:
The needs assessment process is always managed by a needs assessment committee or management team; this team and its responsibilities are discussed in the next whitepaper, “How do you conduct a needs assessment?”
There are essentially three steps in conducting a needs assessment:
The next whitepaper lays out the steps in detail, but for now, consider the initial steps below.
Committing to a needs assessment to maintain your relevance within a community is, for many, a tough pill to swallow. It may mean having to reformat programs, products, or locations in order to be the most effective with the resources you have. But, if community change and social impact are your goals, you most certainly need to make sure you are needed.
Considerations for the Needs Assessment Committee (NAC)
Source: Oregon State University’s Needs Assessment Primer and Strengthening Nonprofits’ Conducting a Community Assessment
Sources
From “Community Needs Assessments,” Learning to Give, URL: https://www.learningtogive.org/resources/community-needs-assessments
From “Comprehensive Needs Assessment,” US Office of Migrant Education, URL: https://www2.ed.gov/admins/lead/account/compneedsassessment.pdf
From “Conducting a Community Assessment,” Strengthening Nonprofits: A Capacity Builder’s Resource Library, URL: http://strengtheningnonprofits.org/resources/guidebooks/Community_Assessment.pdf