Module 5 discussion

Open Posted By: ahmad8858 Date: 17/09/2020 High School Coursework Writing

Based upon Module 5 readings, summarize and assess the evolving nature and impact of performance measurement on public service program creation and implementation regarding a specific public service program and/or service that you are familiar with. 

Your initial response to this discussion should be at least 500 words in length. Be careful to cite sources appropriately.


Attachment 1



Citizen Participation in Performance Measurement

Alfred T. Ho

For the past two decades, many governments have paid growing attention to a tool that has been around for almost a century—performance measurement. With the advancement of information technologies, the ease of data analysis, and the popular concept of “results-oriented government,” performance measurement has become more sophisticated and is now commonly used in today’s public management. While collection of input, workload, and cost-efficiency measures was the focus for many decades and has remained an indispensable part of the exercise, more effort is now placed on measuring outcomes and results and exploring the links between performance measurement and management. Collecting and reporting data are no longer sufficient. Government officials are now expected to use the information intelligently to align performance goals and activities and demonstrate results and progress. Nonetheless, the major clientele utilizing performance information has largely remained the same—program managers, budget analysts, and the elected officials of the government. The public and external stakeholders are seldom involved in defining, selecting, and using performance measures.

The purpose of this chapter is to explore the possible role of citizens in performance measurement and to discuss why they should be involved more and how they should be engaged in the process. The chapter argues that performance measurement may lose part of its potential relevancy and significance in the political decision-making process if the public is not involved. Even though there are many technical and political hurdles to engaging citizens in performance measurement, public managers also have a professional and ethical duty to expand the scope of users of performance measurement so that the tool can indeed be used to hold government more accountable to the public.

Performance Measurement in Government

Performance measurement refers to the usage of quantifiable indicators to measure the output, efficiency, and results of public services. Even though the practice has caught the attention of many policy makers and managers over the past two decades, it is hardly a recent innovation. As early as the turn of the twentieth century, the New York Bureau of Municipal Research had already proposed tracking the cost and output of public programs so that managers could conduct unit-cost analysis to improve efficiency and prevent fraud and corruption (New York Bureau of Municipal Research, 1918; Ridley & Simon, 1938). The practice was gradually adopted by state and local governments that were more progressive in reforming their managerial practices, and later, by the U.S. federal government in the 1950s when some of the early reformers from the New York Bureau of Municipal Research, such as Frederick Cleveland, went to the federal government to help implement budgetary reforms (Kahn, 1997).

For the past few decades, the practice of performance measurement has continued to evolve, becoming broader in scope and more sophisticated. For example, the types of measures that government agencies keep track of have expanded from cost-efficiency data to output, workload, intermediate outcome, outcome, and explanatory data (Ho & Ni, 2005). Agencies that adopt performance measurement have also expanded, from more technical departments such as public works and police, to human-service-oriented departments, such as education, welfare, and community development. Many reforms have also been introduced to integrate performance measurement into public decision-making, particularly the budgetary process. For example, the “planning-programming-budgeting system” (PPBS) in the late 1960s attempted to introduce the measurement of program output, efficiency, and effectiveness to guide policy making and program budgeting. Later reforms, such as “zero-base budgeting” (ZBB) in the 1970s and “management-by-objectives” (MBO) in the

Copyright 2007. Routledge. All rights reserved. May not be reproduced in any form without permission from the publisher, except fair uses permitted under U.S. or applicable copyright law.

EBSCO Publishing : eBook Collection (EBSCOhost) - printed on 6/16/2020 12:04 PM via GEORGIA ONMYLINE AN: 199747 ; Box, Richard C..; Democracy and Public Administration Account: ecor.main.usg



1980s followed the same emphasis and tried to use performance information to rationalize budgetary and program decision-making (U.S. GAO, 1997).

In the 1990s, the performance measurement movement reached another peak of development. Because of the antigovernment movement and the tax revolts of the 1980s, many politicians looked for ways to change the image of government bureaucracy and rebuild public trust in their capacity to deliver public services efficiently and effectively (Nye, Zelikow, & King, 1997). It was in this context that Osborne and Gaebler (1993) published their landmark work, Reinventing Government, in which they pushed for new ways to manage government operations, such as the idea of "competitive government" through contracting-out and "mission-driven government," which focuses more on goals, not rules. They also recommended "results-oriented government," in which government should measure and reward policy outcomes.

Traditional bureaucratic governments … focus on inputs, not outcomes. They fund schools based on how many children enroll; welfare based on how many poor people are eligible; police departments based on police estimates of manpower needed to fight crime. They pay little attention to outcomes—to results. … Entrepreneurial governments seek to change these rewards and incentives. Public entrepreneurs know that when institutions are funded according to inputs, they have little reason to strive for better performance. But when they are funded according to outcomes, they become obsessive about performance. (Osborne & Gaebler, 1993, p. 139)

The idea of “results-oriented government” was quickly disseminated among federal, state, and local governments. At the federal level, the Clinton administration introduced the National Performance Review and Congress passed the Government Performance and Results Act to require agencies to establish goals and measure outcomes. At the state and local level, many governments introduced their own versions of performance measurement reforms that emphasized public accountability to taxpayers. More public officials began to ask questions like the following:

• Are my performance measures aligned with the goals and performance targets of programs?
• How can the budget office use performance information to evaluate program results more effectively, ensuring that tax money is put to the best use?
• How can program managers use performance measures to motivate line staff to make continuous improvements to program delivery?
• How can policy makers and managers use performance information to evaluate the current status of program delivery and establish strategic goals for programs?

Beyond Performance Management

There is evidence that the new emphasis of performance measurement has made a difference in the way government is managed. In the federal government, for example, program managers have been found to pay closer attention to program results and public accountability (U.S. GAO, 2004). Studies of state and local government reforms have confirmed similar effects and show that performance measurement can help improve communication between the budget office and departments and between the executive branch and the legislative branch, as well as strengthen the culture of public accountability (Ho, 2006a; Melkers & Willoughby, 2001).

However, the impact to date seems to have been limited to the executive process of decision-making. Even though performance information is advocated as a way to influence how legislators make policy and budgetary decisions so that appropriation decisions can be rationalized to maximize program efficiency and effectiveness, empirical evidence has generally shown that many legislators pay limited attention to performance information (Jordan & Hackbart, 1999; Joyce, 1993). Special interests, partisan influence, and political maneuvering seem to remain the primary driving force behind budgeting and other policy decisions. Hence, until performance information has greater political weight—that is, when the information means something to voters and major stakeholders who will use it to hold politicians accountable for results—the political reality that performance information has limited influence in the legislative phase of government is unlikely to change.




For advocates of performance measurement, this has been a disappointing finding. Although many data and reports are being generated each year by the government bureaucracy, the information has not been fully used, which means time and resources have been wasted in the data collection and analysis process. Also, one has to question whether managers have been measuring the “right” thing. If major stakeholders are not interested in the information, which is supposed to show the “results” that matter, the whole purpose of “results-oriented management” may be an empty promise.

The lack of citizen participation may also create implementation hurdles in performance-oriented reforms even within the executive branch of government. Past studies have shown that if a government engages citizens more in performance measurement, its officials are more likely to use performance information to make managerial changes, including setting strategic goals, improving internal and external communication, and reinforcing the customer focus of government (Ho, 2006a). They are also more likely to establish performance targets for departments and discuss performance results in meetings to hold department officials accountable for results (Ho, 2006b). Hence, insufficient effort to engage policy stakeholders and the public not only limits the impact of performance measurement on the legislature, but also reduces the incentive for managers to follow through and use the information to make a difference in program management.

Role of Citizens in Performance Measurement

Perhaps these are some of the reasons why in recent years several professional organizations have started to advocate the role of citizens in performance measurement. For example, in the guidelines for reporting “service effort and accomplishments” (SEA) released in 2003, the Governmental Accounting Standards Board (GASB) recommends the following practices (GASB, 2003, pp. 6–8):

• The [performance measurement] report should include a discussion of involvement of citizens, elected officials, management, and employees in the process of establishing goals and objectives for the organization. …
• Citizen and customer perceptions of the quality and results of major and critical programs and services should be reported when appropriate.

Many local governments and community organizations have also started to involve citizens. For example, the Jacksonville Community Council has worked with local officials and community leaders to produce an annual quality-of-life report that evaluates the performance and needs of health and human services according to a strategic vision and specific goals, and the experience has been highly constructive for the community. The Fund for the City of New York has involved citizens in focus-group discussions and monitoring and measurement efforts to deal with street-level problems, such as graffiti, garbage collection, and potholes in roadways, and the effort has made some impact on how New York City manages its neighborhood services. The Sustainable Seattle group has organized continuous dialogues among local residents and officials to analyze community indicators and rethink neighborhood issues. Several Iowa municipalities have also launched an initiative called “citizen-initiated performance assessment,” in which citizens help elected officials and managers develop, select, and use performance indicators to improve the quality of public services (Ho & Coates, 2004).

Reviewing some of these successful experiences of different communities that engage citizens in performance measurement, Epstein, Coates, and Wray (2005), in their recent book, Results That Matter, summarize five roles that citizens may play:

• Citizens as customers and stakeholders
• Citizens as advocates
• Citizens as issue framers
• Citizens as collaborators
• Citizens as evaluators



Challenges in Engaging Citizens

One may agree normatively that citizens should be more involved. However, public administrators face many practical challenges in their attempt to engage citizens meaningfully and effectively in the exercise of performance measurement. Theories of public choice have long established that citizens are rational decision-makers and have little incentive to participate in public decision-making when the benefits of participation are spread across a community but the costs of participation, both monetary and non-monetary, are individualized and can be very high. Political apathy is especially common in a well-governed community in which citizens are satisfied and see no looming crisis that should prompt their immediate participation in public affairs.

Citizen participation in performance measurement is even more difficult than other forms of public participation, such as voting, for the following reasons:

• Performance measurement involves technical details and data questions. Ordinary citizens may not feel interested in understanding the methodological and technical questions involved.
• Performance measurement is a routine exercise that tracks data over time to monitor progress and evaluate results. It is not a single event with a clear beginning and end.
• Performance measurement does not necessarily dictate policy outcomes. Performance measures are simply information that allows more meaningful and informed dialogues about policy and program decisions. How the information should be used and what policy options should be proposed and chosen are often beyond the scope of performance measurement. Citizens who expect to use performance measurement to dictate how elected officials should govern may feel disappointed and may not be interested in participating.
• Even if elected officials and managers are serious about performance measurement and are committed to using public input and performance information to make a difference in policy making and program management, citizens are unlikely to see concrete results from their input until years later. This, again, may discourage citizen participants from committing their time and effort to the exercise.

How to Engage Citizen Participants: The Experiences of the Iowa Citizen-Initiated Performance Assessment Project

These inherent challenges are real and significant and can easily deter government officials and citizens from public engagement in performance measurement. Overcoming these hurdles requires diligent and innovative effort in rethinking and reorganizing the performance measurement routines. In 2001–4, a number of Iowa cities experimented with “citizen-initiated performance assessment” (CIPA), in which citizens joined with elected officials and managers to develop and use performance measures and help government evaluate various municipal services, such as nuisance control, garbage disposal, snow removal, police and fire protection, and transportation (Ho & Coates, 2004). Based on the three-year project experience, several practical lessons about citizen engagement have emerged.

1. Traditional Mechanisms of Citizen Participation

Citizen committees, public hearings or town hall meetings, and focus-group discussions are still useful tools for engaging citizens in discussing the performance of government programs and services. Nothing is more effective than face-to-face interaction between citizens and public officials in breaking down stereotypes and mistrust and showing each other that they can be sincere and equal participants in making government more effective in meeting the needs of a community.

However, these mechanisms have significant limitations. For example, they allow only a small number of citizens to engage in in-depth dialogues and exchanges of ideas. The frequency and length of discussion are also constrained by the physical location of the meeting place and the time schedules of the participants.



Finally, citizens who volunteer to participate in these meetings tend to be community activists or citizens who can afford the time. As a result, they may not be highly representative of the demographic profile of a community.

2. Usage of Surveys and Response Cards

Citizen surveys are another viable mechanism for soliciting public input about the quality and performance of public programs. Many local governments conduct annual or biennial citizen surveys to evaluate citizen satisfaction and perceptions of community priorities. Many governments also use response cards for specific services. Instead of surveying a whole community, in which only some residents may be users of particular services, user response cards are targeted surveys of particular user groups to evaluate specifically how users perceive the quality of services. The tool is applicable to many local services, such as libraries, public works, water and sewage services, and other customer services. In Iowa, for example, some communities use response cards to evaluate the responsiveness and professionalism of fire and emergency medical staff. If these citizen and user surveys are conducted with an appropriate sampling methodology, usually random phone or mail surveys of the community or the specific user group, the data can give managers and policy makers valuable input about the performance of the government from a representative sample of citizens. If the data are tracked consistently over time, they can also provide a trend analysis of program performance so that policy makers can evaluate whether steady improvement has been accomplished.

Like the above mechanisms, citizen surveys and response cards have their limitations. First, the validity of the instruments depends on the sampling methodology. If the sampling frame is not representative, possibly because of out-of-date addresses or phone numbers, non-listing of certain residents, or use of only a selected segment of the population, policy makers may get biased results that misinform decision-making. Second, the response rate can be a major concern. Many citizens are tired of answering phone and mail surveys. The response-rate problem can be worse in communities with many low-income families and minority populations, as these groups tend to have lower response rates to government surveys. To compensate for these problems and make the survey results more reliable and representative, government officials usually have to invest more time and resources in follow-up surveys and must use various incentives to induce better responses. However, these measures can be expensive, and not many communities have the fiscal capacity or the willingness to invest in them.

Moreover, surveys suffer from the fact that public feedback is constrained heavily by the structure and wording of the survey instruments. Unlike committee work or focus-group discussions, in which citizens have more freedom to express their opinions and concerns, survey respondents have to respond to specific questions and select specific answers to multiple-choice questions. If the questions and answers are framed in a biased way, or if certain questions are not asked in order to avoid potential political embarrassment, the true public perception of program performance may not be revealed.

Finally, there are constraints on how frequently a survey can be conducted and when it can be sent out. Too many surveys will create survey fatigue and low response rates. Surveys sent at the wrong time, such as before holidays, may also yield a low response rate. Also, because of length and complexity concerns, a government cannot provide much information prior to asking a question and can ask only a limited number of questions in a survey. A mail survey is also more constrained in structure than a phone survey, and its questions cannot be tailored easily to user responses. Hence, though surveys are good instruments for gauging public perception of program performance, they have many technical and cost constraints of which managers should be aware.

3. Usage of the Internet

Information technologies and the World Wide Web open up new possibilities for soliciting public input and evaluating the performance of public programs more conveniently and easily. Instead of coming to a physical location or filling out a paper survey, citizens can now visit a specific website to file complaints or service requests, report their satisfaction level with government services, and conduct synchronous or asynchronous discussions with officials and fellow citizens to find solutions to improve government performance.



The advantages of the Internet over traditional means of public engagement are not only that it is more convenient and less constrained by time and physical location, but also that the content can be flexibly tailored to the needs and interests of the users and can allow many interactive features to help the public make more informed decisions. For example, when a citizen is asked to evaluate the performance of the police department online, he or she may be presented with some performance statistics, such as the crime rates and the average response times to different types of crimes and in different neighborhoods of a community, side by side with the survey. After the survey, the respondent can be asked to give open-ended comments on how the department may improve its services, whether the respondent is interested in learning more about various volunteering opportunities, and whether he or she is willing to participate in some of the programs and connect with other citizen volunteers.

However, there are also significant limitations to using the Web. First, even though the Internet has become more widely accessible and commonly used, there is still concern about the "digital divide" among racial minorities, the elderly, and the poor. Second, getting residents to learn about and visit the city government website can be a major challenge. Third, because Internet surveys can be completed so easily and cheaply, ensuring that no one can "game" the system by submitting multiple entries to bias the results can be a technical challenge. These limitations suggest that while the Internet can do much to enhance public participation, it is still at a developing stage and should be complemented with other participation channels to obtain a more balanced and representative view of government performance.

4. Usage of Administrative Data

Finally, a government may tap into its internal database and evaluation instruments to get objective public input about program performance. For example, the number of program users or members and the number of volunteers and donations that support a program may indirectly reflect the performance and user satisfaction with the program. The number of service requests and complaints and types of service requests may show what areas are poorly rated by the public and need improvement. Data such as response times and scientific or standardized test results may also complement the survey-based data to facilitate evaluation of the performance of public programs.

For the past few decades, many professional organizations and government agencies have invested significant resources in developing and standardizing methodologies for generating and collecting various kinds of administrative data. For example, crime statistics and response times for the police department and many user statistics for library services are now commonly available in many local governments because of these efforts by the federal government and professional organizations. These accomplishments should be applauded and maintained. However, public managers should also recognize that these administrative data have some limitations and cannot replace direct public input. First, collecting these data may require significant investment of time and resources. Managers should strike a balance between the benefits and costs of getting the data and should be aware of the opportunity cost implications for service delivery. Second, some of the data, such as scientific data about water quality and usage, can be highly technical. How to communicate the data and analytical results effectively to policy makers and the public is an important but often overlooked step. Finally, managers need to remember that “perception is reality” in the political decision-making process. One or two tragedies in a community, an unexpected event such as a natural disaster, and changes in the political and economic atmosphere may completely change the political significance and interpretation of these “objective” data. Even if the data themselves have not changed much over time, policy makers and managers may still need to make policy and program changes to cater to the changing expectations and demands of the public. Hence, the usage of administrative data should also be complemented with other forms of public input to give a full picture of the public’s evaluation of government performance.



Conclusion

For the past few decades, the practice of performance measurement in the public sector has become more widespread and sophisticated. Many useful and detailed data are collected and reported internally by federal, state, and local governments each year. As the data-collection effort matures, policy makers and managers today are challenged to think more carefully about how to use and report the data more intelligently and effectively. One of the responses to this challenge, which has been a major emphasis in recent public administration reforms, is to focus on “performance management” or “results-oriented management” and think about how to align performance measurement with strategic planning, program evaluation, budgeting, and personnel decisions. Another response, which is equally important but has been overlooked by many practitioners, is how to engage the public and policy stakeholders more to develop and use the performance information so that the information becomes more relevant and significant in the political process.

The second response about public engagement is especially important in the current fiscal environment, in which the federal government has a serious problem of structural deficits and must ask state and local governments to take on additional responsibilities. This is occurring while many voters are not fully prepared to think about the tax implications of federal devolution, yet expect state and local governments to do more without paying more. Every state and local politician will eventually face this harsh reality and will have to consider which programs and services should be cut or what taxes will have to be raised. To help make these tough decisions, both voters and politicians are better off if they are more informed about the needs of the community and the service accomplishments and efforts of the government so that they can make informed and balanced decisions about revenue and spending choices. It is in this context that performance measurement can contribute much to meeting the future challenges of public administration, but its potential benefits can only be fully realized if it is used along with effective public engagement strategies.

Tremendous progress has been made in efforts to obtain performance measurement data for the past few decades. However, performance measurement in the twenty-first century has to move beyond the data focus and pay more attention to issues of performance management and governance—how different stakeholders and users can be more effectively involved to use the data. As this happens, it is inevitable that performance measurement may become less technically driven by professional managers, some of the measures may become less objective and scientific, and political pressure to manipulate the collection and interpretation of performance data may increase. These challenges, however, are some of the inherent social costs of democracies, in which information is always vulnerable to distortion by different political, social, and economic segments of society. Citizens should not be shielded from performance measurement and performance politics. After all, they are the owners of a democratic government and have the right to define the “results” and “performance” for which government managers should strive.

References

Epstein, P., Coates, P. M., & Wray, L. D. (2005). Results that matter: Improving communities by engaging citizens, measuring performance, and getting things done. San Francisco: Jossey-Bass.

Governmental Accounting Standards Board [GASB]. (2003). Reporting performance information: Suggested criteria for effective communication. Norwalk, CT.

Ho, A. T.-K. (2006a). Accounting for the value of performance measurement from the perspective of city mayors. Journal of Public Administration Research & Theory, 16, 217–237.

Ho, A. T.-K. (2006b, in press). Exploring the roles of citizens in performance measurement. International Journal of Public Administration.

Ho, A. T.-K., & Coates, P. (2004). Citizen-initiated performance assessment: The initial Iowa experience. Public Performance & Management Review, 27, 29–50.

Ho, A. T.-K., & Ni, A. (2005). Have cities shifted to outcome-oriented performance reporting? A content analysis of city budgets. Public Budgeting & Finance, 25, 61–83.

Jordan, M., & Hackbart, M. (1999). Performance budgeting and performance funding in the states: A status assessment. Public Budgeting & Finance, 19, 68–88.

Joyce, P. G. (1993). Using performance measures for federal budgeting: Proposals and prospects. Public Budgeting & Finance, 13, 3–17.

Kahn, J. (1997). Budgeting democracy: State building and citizenship in America, 1890–1928. Ithaca, NY: Cornell University Press.

Melkers, J. E., & Willoughby, K. G. (2001). Budgeters' views of state performance budgeting systems: Distinctions across branches. Public Administration Review, 67, 54–64.

New York Bureau of Municipal Research. (1918). The citizen and the government—a statement of policy and method. Municipal Research, 57, 1–4.

Nye, J. S., Jr., Zelikow, J. D., & King, D. C. (Eds.). (1997). Why people don't trust government. Cambridge, MA: Harvard University Press.

Osborne, D., & Gaebler, T. (1993). Reinventing government: How the entrepreneurial spirit is transforming the public sector. New York: Plume.

Ridley, C. E., & Simon, H. A. (1938). Measuring municipal activities: A survey of suggested criteria and reporting forms for appraising administration. Chicago: International City Managers' Association.

U.S. General Accounting Office [GAO]. (1997). Performance budgeting: Past initiatives offer insights for GPRA implementation. GAO/AIMD-97-46. Washington, DC.

U.S. General Accounting Office [GAO]. (2004). Results-oriented government: GPRA has established a solid foundation for achieving greater results. GAO-04-38. Washington, DC.

EBSCOhost - printed on 6/16/2020 12:04 PM via GEORGIA ONMYLINE. All use subject to https://www.ebsco.com/terms-of-use

Attachment 2


CHAPTER 17


Every nonprofit should adopt practices that support and promote assessment and evaluation. Implementing and enhancing systems and protocols that foster assessment and evaluation affects everyone: constituents and communities, nonprofit staff and leadership, board members, and funders. Assessment and evaluation are ongoing processes that can be beneficial at all stages of a program. Evaluation is a well-thought-out, systematic approach to assessing the performance, quality, and benefit of a program, a service, or the overall functioning of the organization. It is the means by which we are able to establish consistency in exploring, collecting, tracking, and examining the work we "say" we do. It involves a set of predetermined protocols to assure data are captured in a way that accurately represents the services delivered, in order to determine whether the outcomes have been achieved.

Assessment and evaluation provide an opportunity for nonprofits and funders to discuss outcomes and program improvement concretely rather than abstractly. The current funding climate requires nonprofits to embrace evaluation as a necessary means of achieving sustainability. Although some nonprofits have made great strides in establishing the infrastructure and competencies required to assess and evaluate outcomes effectively and efficiently, it is, in fact, a multifaceted undertaking. This chapter provides an overview of key steps to support assessment and evaluation, in addition to building the capacity and competency required to do so successfully.


The question should be, why not evaluate? When we consider the scope and depth of the nonprofit sector's contributions to society, at its core we find a collective persistence to prioritize the preservation and well-being of human capital. Many nonprofits tout their services, resources, and supports in line with some aspect of social justice and social good, often reflected through their mission. An organizational culture defined by accountability, innovation, and learning, driven by systems and protocols associated with assessment and evaluation, is essential for nonprofit sustainability and social change.

There was a time when "stories of success" were sufficient for the nonprofit sector and funders (Miles, 2006). Although the concept of outcomes and accountability has been around for some time, there was an evident surge in the demand for outcomes before, during, and certainly after the economic downturn in 2008. The ongoing demand from funders for nonprofits to demonstrate evidence to justify a return on investment created a shift in the culture of nonprofit funding (Morley, Vinson, & Hatry, 2001). The nonprofit leaders and organizations that were attuned to the changing tides in the external environment were much better poised and prepared to respond to funders' increasing demands for evidence and outcomes. However, many nonprofits were ill equipped, struggled to demonstrate evidence of impact, and, in extreme cases, were ultimately phased out. The harried approach to meeting these demands left many nonprofits floundering to find the appropriate staff, interventions, outcomes, and data tracking systems to remain competitive, and scrambling to convince funders that their outcomes were worthy of investment. Assessment and evaluation should be perceived as direct benefits to the organization, not solely as compliance with funding requirements. The following examples highlight the benefits of evaluation to nonprofit organizations.

[Source: Congress, E. P., Luks, A., & Petit, F. (2017). Nonprofit Management: A Social Justice Approach. Springer Publishing Company. Copyright 2017 Springer Publishing Company; all rights reserved.]

Example 1: Citizen BUILD is a nonprofit organization with a youth-mentoring workforce development program designed to increase job placement and retention for formerly incarcerated youth. Findings from the evaluation helped the organization identify the most effective methods of communication between mentors and mentees, the most effective mentor–mentee engagement trainings, and barriers that compromised mentee job retention.

Example 2: Renewed Reformed Reentry (R3) conducted an evaluation of its alternative-to-incarceration treatment program to see if it achieved its goal of reducing recidivism and hospitalization rates. Findings from the evaluation helped program administrators identify the length of time in treatment associated with the fewest triggers for recidivism. The evaluation also revealed the most effective community treatment partnerships and collaborations.

Example 3: Urban Coders is a nonprofit organization that partners with General Education Development (GED) programs to engage youth through technology, reignite their passion to learn, and encourage them to pursue a college education and major in computer science. While youth study and prepare for the GED, they receive intensive training in coding, web design, app development, and so forth. Findings from the evaluation revealed a 60% college engagement and placement rate; however, 15% of students indicated an entrepreneurial career goal. The organization found this information extremely helpful for future fundraising goals, in addition to an innovative expansion to recruit entrepreneurs as mentors.


Data-Driven Culture

A data-driven agency uses evidence to drive practice and decision making. In order to become "data driven" or "outcomes driven," or to promote a culture of "inquiry" or "continuous improvement," it is essential that leadership understands assessment and evaluation as both a process and an outcome (Miles & Woodruff-Bolte, 2013). Transitioning to or enhancing an organization's data-driven culture takes time to learn, refine, and master. Figure 17.1, developed by Community Solutions (2001), illustrates 30 ideas for building a culture of evaluation within organizations.

FIGURE 17.1 Building a culture of evaluation.



Nonprofit Leadership

Nonprofit leadership must prioritize assessment and evaluation as they would any key human resource, budgeting, or fundraising matter. Promoting a data-driven culture requires leadership to be consistent with messaging and communication that reinforces the value and priority for all staff to adopt and adhere to evaluation systems and protocols (Hernandez & Visher, 2001). At the onset, the leadership and/or the designated evaluation team or staff are tasked with cultivating an outlook of enthusiasm and optimism about assessment and evaluation. Formal and informal meetings with staff should explore and address any discomfort, taboos, or misunderstandings associated with assessment and evaluation. Leadership will need to continually assess and address the impact of the increased responsibilities of a data-driven culture on the quality of service delivery. This style of open and ongoing communication sets the tone for a data-driven culture of partnership in which everyone plays a role in maintaining the integrity of the organization's assessment and evaluation efforts.

Leadership must remain engaged yet make it clear that the appointed staff, division, or consultant will take the lead in facilitating the organization's ongoing efforts to be data driven. In the absence of dedicated and qualified personnel with the support of the leadership, the path to becoming data driven will be more complicated and challenging for all (Miles, 2006). Promoting a data-driven culture should be an experience defined by collective learning among leadership, middle management, and staff.

Characteristics of a Data-Driven Organization

1. Data-informed board, executives, program leadership, and staff
2. Accessible and visible evaluation staff and resources
3. A functional Theory of Change (ToC)
4. Comprehensive evaluation plan
5. Data collection tools align with program strategies and indicators
6. Data collection is integrated and prioritized with existing program practices
7. Effective and efficient data management systems and protocols
8. Collect, store, and analyze intentional data
9. Utilize data to inform quality improvement
10. Data-driven organizational decision making
11. Annual budget allocation to develop and sustain the data management system
12. Accessible evaluation and data management staff and resources
13. Prioritize data integrity
14. Effective communication among leadership, management, and staff
15. Functional data reports
16. Formal and informal structures to review and discuss data
17. Quality assurance and evaluation protocols align to monitor compliance, fidelity, and data integrity
18. In-service professional development training on assessment and evaluation
19. Leadership, management, and staff understand their role in assessment and evaluation
20. Employee performance evaluation aligns with compliance to data management protocols


Value of a ToC

An often overlooked yet critical step in assessment and evaluation involves the development of a ToC as a precursor to evaluation. Every nonprofit should have a functional ToC as a blueprint that provides common knowledge for how the organization assesses performance, outcomes, and impact (Hunter, 2013). A functional ToC is designed to (a) foster communication, accountability, strategic planning, and decision making and (b) describe how and why a set of interventions and strategies is expected to lead to outcomes. Stakeholders and funders value a ToC because it provides a commonly understood vision of long-term goals, how they will be achieved, and strategies to measure progress along the way (Mackinnon, Ammot, & McGarvey, 2006).

Irrespective of staff roles, disciplines, or expertise, a ToC helps all staff understand how their individual and collective work with the target population will lead to the desired outcomes. It is a visual representation that provides deeper insight into the multidimensional construction of social issues. A ToC illustrates the added value of, and appreciation for, interdisciplinary teams with the expertise to address complex social issues. For example, a nonprofit organization worked for years fulfilling its mission to fight hunger by providing meals and building capacity in under-resourced communities. During a strategic planning meeting to review and update their ToC, they realized the need to rethink their approach to community capacity building. They decided to expand the scope of their services with a nutrition division to educate communities and their existing partners, determining that better informed communities would be a key strategy to advance their mission to fight hunger. This expansion deepened the interdisciplinary team's expertise and strategy to address hunger as a multidimensional social issue.

ToC Development, Implementation, and Utilization

Developing a ToC involves a series of critical thinking steps to produce a comprehensive picture of the nature of a social problem and of how specific actions will lead to solutions (Hunter, 2013). It is a collaborative process to achieve internal alignment following the exploration of several programmatic building blocks.

Some organizations will have the staff expertise to facilitate the process for developing a ToC. Others will need to hire staff or consultants who are skilled and able to support the organization in developing a ToC and the systems and protocols to support ongoing assessment and evaluation. At the onset, representatives from all levels of staffing (i.e., executive leadership, program managers, case planners, psychiatrists, peers, board members, volunteers) should be identified and engaged in the ToC development process. Often, as a result of stretched resources and capacity, leadership and management take the lead in developing a ToC. Nevertheless, before a ToC is finalized, organizations should engage representatives of various levels of staff for their input and feedback.

Once finalized, organizations should provide training on the implementation and utilization of the ToC. A fundamental component of training should provide all stakeholders with a deeper understanding of how the organization will meet its mission through the contributions of their roles, expertise, and available resources. ToC training should be ongoing and a frequent point of discussion among staff and departments. Training is particularly essential for direct service delivery professionals, who spend a great deal of time with program participants. Leadership and management must help them see the connection between delivering and documenting services as meaningful to their roles and the clients they serve (Miles, 2006). Shifting to a data-driven culture requires continuous communication regarding the added benefits of documenting services with accuracy and consistency. The ToC provides a tangible way for all levels of staff to internalize and actualize the mission. A data-driven organization assures that the ToC is visible and accessible to staff. The ToC planning team should convene frequently to review the ToC and determine whether it needs to be updated.

FIGURE 17.2 Components of a Theory of Change.

If developed, implemented, and used effectively, a ToC allows nonprofits to streamline efforts to assure they collect "intentional data" rather than "dead data." Intentional data have justification and alignment with the mission and serve the needs of the organization and funders. Although there may be several data points of interest, the goal is to collect data that advance organizational development, organizational impact, and sustainability. The overall goal of the evaluation will determine whether a nonprofit should develop a programmatic ToC or an organizational ToC. An organizational ToC assesses the quality, cost, and alignment (goodness of fit) among all programs and the mission. A programmatic ToC assesses the alignment of the mission, program strategies, and outcomes. A ToC (Figure 17.2) should be carefully developed as an invaluable organizational tool.

Theory of Change

Social Impact: Broad statement about the expected benefits or cumulative effect on the general population. Example: More youth will be career or college ready with technical knowledge and skills.

Outcomes: Changes in knowledge, skill, attitude, behavior, condition, or status as a result of program strategies. Example: Greater knowledge of career planning, improved academic performance, effective responses to conflict, greater financial stability.

• Short-term—knowledge, skills, attitude, motivation, awareness
• Mid-term—behaviors, practices, policies, procedures
• Long-term—environmental, social, economic, political conditions

Indicators: Specific and measurable changes that represent achievement of an outcome. Example: A nonprofit college prep and placement program for high school seniors would have the following indicators: 80% of high school seniors will apply to a minimum of three universities; 75% of high school students will identify a major; 90% of youth will develop a career plan.

Outputs: Units of service or the amount of work a nonprofit does. Outputs produce the desired outcomes for program participants. Example: The number of workshops, meals provided, resource packets distributed, participants served, and so forth.

Strategies: What a program does with the inputs. The activities and interventions to achieve the outcomes. Strategies produce outputs. Example: Educate the public about signs of mental illness and depression in youth, provide adult mentors for youth, feed homeless families, and so forth.

Inputs: Resources an organization or program uses to run the program. Example: Staff, volunteers, facilities, clients, equipment, curricula, money, partnerships, and so forth.

High Priority Services: Essential areas of practice supported by literature, research, trends, and practice experience, which drive service delivery.

Target Group: Customers, participants, communities, or population to be served by the program.

Social Issue: Social problem or point of focus identified by the mission.
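The components above can be sketched as a small data structure, which makes it easy to check that every outcome has indicators attached before data collection begins. This is an illustrative sketch only; the class and field names and the "dead data" guard at the end are my own, with example values drawn from the college prep indicators above:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A specific, measurable change that represents achievement of an outcome."""
    description: str
    target_pct: float  # e.g., 80.0 means 80% of participants

@dataclass
class Outcome:
    """A change in knowledge, skill, attitude, behavior, condition, or status."""
    description: str
    term: str  # "short", "mid", or "long"
    indicators: list = field(default_factory=list)

@dataclass
class TheoryOfChange:
    social_issue: str
    target_group: str
    inputs: list       # staff, volunteers, curricula, money, partnerships...
    strategies: list   # activities and interventions; strategies produce outputs
    outputs: list      # units of service
    outcomes: list     # outcomes, each backed by measurable indicators

# Example drawn from the college prep indicators in the chapter
college_prep = TheoryOfChange(
    social_issue="Low college readiness among high school seniors",
    target_group="High school seniors",
    inputs=["Staff", "Volunteers", "Curricula", "Partnerships"],
    strategies=["College application workshops", "One-on-one career planning"],
    outputs=["Number of workshops delivered", "Participants served"],
    outcomes=[
        Outcome("Greater knowledge of career planning", "short", [
            Indicator("Seniors apply to a minimum of three universities", 80.0),
            Indicator("Students identify a major", 75.0),
            Indicator("Youth develop a career plan", 90.0),
        ])
    ],
)

# "Dead data" guard: a functional ToC should have no outcome without indicators
for outcome in college_prep.outcomes:
    assert outcome.indicators, f"Outcome lacks indicators: {outcome.description}"
```

Structuring a ToC this way makes the alignment the chapter calls for (mission, strategies, outcomes, indicators) mechanically checkable rather than a matter of memory.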



Utilizing a ToC to Guide Assessment and Evaluation

Assessment: Provide evidence of target population needs and trends; identify new areas for program support.

Data Collection: Provide a framework for when, how, and what to measure; identify staff responsible for documentation and time frames for documentation.

Management: A centralized guide for defining work tasks, roles, and program deliverables; inform hiring, orientation, and professional development training needs; identify areas of emphasis for supervision.

Organizational Functioning: Identify targeted and general fundraising goals; provide proactive, evidence-informed responses to new and existing funder requests; update the organizational chart.

Quality Improvement: Determine which programs and services need to be improved to strengthen outcomes; reveal organizational and programmatic strengths and gaps in services; monitor program fidelity.

Strategic Planning: Identify services and practice models that can be replicated; determine new benchmarks, explore new interventions; reveal critical areas for collaborations and partnerships.


Data Collection

Once a functional ToC is developed, it will inform the evaluation plan that determines the data collection measures, processes, and systems. Data collection requires attention to detail and adherence to strict, predetermined protocols. Many factors can compromise the integrity of data, so nonprofits should develop a well-thought-out evaluation plan. Identifying the data collection tools, the persons responsible for data collection, and the timing of data collection is essential to the quality and integrity of findings. The quality of the data collected will be reflected in the quality of the data reported.

There are several options available for data collection. Nonprofits may decide to administer a survey, have participants complete a questionnaire, extract data from client records, or hold a focus group. An evaluation plan may include a combination of methods. The selected technique will depend on the goal of the evaluation, stage of the program, characteristics of the target population, and available budget and resources to successfully complete the evaluation (W.K. Kellogg Foundation, 2004).

The designated person(s) responsible for collecting data play a critical role in the success of the evaluation. Key staff should be trained on all data collection tools and the data management protocols. Many nonprofits will select a survey or questionnaire and will designate staff with the responsibility to solicit information from participants. All staff responsible for interviewing participants should receive training on the role of the interviewer to assure they do not unintentionally compromise the integrity of the data.



The following tips on interviewing skills should be addressed when training staff.

1. Emphasize the overall purpose of the evaluation
2. Discuss confidentiality
3. Ask participants to answer questions as honestly as possible
4. Review response categories clearly and check to make sure the respondent understands
5. Review response categories (to remind participants) before beginning a new section
6. Maintain a neutral attitude
7. Refrain from behaviors that could influence how the respondent answers
8. Avoid sharing your personal opinions
9. Never suggest an answer
10. Speak clearly and slowly

It is essential to maintain communication with staff about their experiences collecting data from participants and their compliance with the evaluation systems and protocols. These communications will often reveal gaps or challenges in the evaluation plan that may not be as transparent to leadership and data management staff. Establishing organizational compliance is fundamental, so systems and protocols must be transparent, effective, and efficient.

Selecting the appropriate data collection measure is as important as the method by which the data are collected. Data collection tools should accurately and efficiently measure the indicators to determine whether the outcomes were achieved. More importantly, data collection measures should align with the strategies and services delivered. For example, a nonprofit organization recently received funding to develop and evaluate a new youth entrepreneurship training prevention program. They hired a new program director to develop and implement the program and evaluation. The program was designed to teach youth aged 15 to 17 years entrepreneurship skills and support them in developing and piloting a small business. On completion of the training program, youth graduated to peer trainers with the responsibility to mentor the next cohort of youth trainees. As a data-driven organization, they identified key indicators and developed a survey designed to measure the outcomes. During the data analysis phase, the data management team notified program leadership about a gap between the findings and the outcomes projected to the funder: none of the data analyzed measured participants' leadership skills. Although they intended to address leadership skills through the training program, they realized this was an underdeveloped component of the training curriculum. The evaluation proved to be a valuable lesson for the organization. In retrospect, they acknowledged proceeding with a great deal of haste throughout the program planning and evaluation phases. They also realized that they should have allotted more time to assess the new program director's evaluation competencies. Three essential lessons emerged from the evaluation: (a) assess program development and evaluation expertise during the interview and hiring phase; (b) provide ToC training and supervision for new leadership and staff; and (c) confirm alignment throughout the program planning and evaluation planning phases. They agreed to apply these lessons to all future program initiatives and evaluations to avoid this disconnect among services delivered, data collection tools, and outcomes.

Evaluation Planning

1. What are the program indicators?
2. What do you want to learn from the evaluation?
3. What data will you collect?
4. What methods will be used for data collection?
5. Will data be collected from the entire target population?
6. Who will collect the data?
7. How often will data be collected?
8. Who will analyze the data?
9. How frequently will data reports be generated for review?
10. How will the data be used?
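One way to keep these planning questions from going unanswered is to capture the plan in a simple structure and flag any missing answers before data collection launches. This is a hypothetical sketch; the keys mirror the questions above, but the sample answers and the launch guard are invented for illustration:

```python
# Each key mirrors one planning question; an empty or None value signals
# an unanswered question that should block the evaluation launch.
evaluation_plan = {
    "indicators": ["80% of seniors apply to three or more universities"],
    "learning_goals": ["Which supports drive application completion?"],
    "data_to_collect": ["application counts", "survey responses"],
    "methods": ["survey", "record extraction"],
    "sample": "entire target population",
    "collectors": ["program coordinator"],
    "collection_frequency": "monthly",
    "analyst": "evaluation consultant",
    "reporting_frequency": "quarterly",
    "data_use": ["quality improvement", "funder reporting"],
}

# Launch guard: every planning question must have an answer on file
unanswered = [question for question, answer in evaluation_plan.items() if not answer]
assert not unanswered, f"Resolve before launch: {unanswered}"
```

Making the plan explicit this way also gives new staff a single artifact to review, rather than answers scattered across meeting notes.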

Reporting Data

Establishing clarity at the onset regarding the purpose of assessment and evaluation will help inform how the organization can leverage the data. Initial decisions about the usage of data will evolve as the organization evolves or as the program continues to demonstrate success or experience new trends and challenges. A nonprofit organization responsible for recruiting and training foster parents initially produced evaluation reports that were useful for the board, managers, and funders. As they continued to review and apply the data to strategic decision making, they determined that the data would be instrumental in local advocacy efforts. It was also determined that the data would inform updates and revisions to the foster parent training curriculum.

The capacity of any nonprofit organization to generate reports will depend on its data management system and staffing capacity. Oftentimes, the lack of an efficient data management system means a great deal of staff time is needed to generate reports with accuracy. With these considerations in mind, nonprofits should determine the feasibility, frequency, and consistency of generating and disseminating reports. For example, program managers at a nonprofit organization were responsible for providing immigration services and supports, including English as a second language and computer literacy. At the beginning of the funding cycle, program managers received monthly reports highlighting key performance areas. Throughout the entire third quarter of the funding cycle, program managers did not receive monthly reports, despite their repeated requests. In the absence of current and relevant data, program managers struggled to keep staff engaged in discussions specific to assessment and evaluation. Monthly reports resumed during the final quarter of the funding cycle. Program managers reported, "We feel like we're right where we started. We have to work just as hard as we did in the beginning to reacquaint staff with data and twice as hard to get them engaged to discuss the data." A culture driven by continuous improvement is contingent on staff consistently expecting to receive, review, and make decisions based on data.

The ToC can be instrumental in helping staff and leadership identify a dashboard of performance indicators that communicates progress at different points throughout the year.

Reports should be developed in a way that conveys a clear story about the population, the services, the outcomes, and the overall performance of a program. The goal of disseminating reports is to tell that story in a way that makes it engaging for the audience to connect, reflect, and respond to the data.
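Once indicator targets and actual results are recorded, a simple dashboard-style summary can separate areas of success from areas needing improvement, which helps a report tell that story at a glance. A minimal sketch, with invented indicator names and values:

```python
# (target %, actual %) per indicator -- values invented for illustration
indicators = {
    "Seniors applying to 3+ universities": (80.0, 84.0),
    "Students identifying a major": (75.0, 70.0),
    "Youth completing a career plan": (90.0, 91.0),
}

def dashboard(indicators):
    """Split indicators into areas of success and areas for improvement."""
    successes, needs_work = [], []
    for name, (target, actual) in indicators.items():
        line = f"{name}: {actual:.0f}% (target {target:.0f}%)"
        # An indicator counts as a success when actual performance meets its target
        (successes if actual >= target else needs_work).append(line)
    return successes, needs_work

successes, needs_work = dashboard(indicators)
print("Areas of success:", *successes, sep="\n  ")
print("Areas for improvement:", *needs_work, sep="\n  ")
```

A summary like this maps directly onto the "areas of success and areas of improvement" element of a functional report described below, and can be regenerated on whatever cycle the organization chooses.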

Characteristics of a Functional Report

• Features a standard template to organize the data
• Includes easily interpreted data (graphs, charts, colors, etc.)
• Provides relevant data for the intended recipient (board, managers, staff, volunteers, clients, etc.)
• Highlights performance areas of success and areas of improvement
• Provides brief explanatory notes (as needed) for special considerations for interpreting the data
• Provides comparative or trend data to support interpretation of the data


Review and Discussion of Data

The report has been prepared and disseminated . . . now what? Once reports are disseminated, they can cause a great deal of anxiety for staff and program leaders. Therefore, ongoing support and communication to normalize the experience of discussing performance are key. Forums dedicated to the review and discussion of data should be guided and aimed at celebrating successes and brainstorming a plan of action for areas of improvement. Program leadership should create internal structures to convene key staff and support staff who play a critical role in achieving outcomes. Everyone should be familiar with the contents of the report and prepared to engage in a discussion about the findings.


One nonprofit leader decided that it was beneficial for his team to review a dashboard of performance indicators every other month. He took the lead in disseminating reports, convening the team, and leading the discussion. Six months later, he implemented several changes, which proved to be instrumental in supporting staff to build a data-driven culture that values assessment and evaluation. He implemented the following protocol:

• Disseminate reports electronically 1 week in advance of meetings
• Instruct program staff to bring one question, comment, or plan of action to the meeting
• Rotate meeting facilitators



Following the general review of the report, staff shared their comments, questions, or plans of action. The team used both small and large group discussion formats. Each meeting closed with a brief recap of lessons learned and a plan of action. This new format generated a great deal of discussion and gave the team more ownership of the data. They were more focused on reviewing and using the data to determine the need for course-corrective decisions. Overall, staff developed a deeper appreciation and understanding of nonprofit assessment and evaluation.

FIGURE 17.3 The Innovation Network.

Figure 17.3 illustrates the various ways nonprofits use and report data, based on 2011 research on a national sample of nonprofits conducted by the Innovation Network Inc.

Generate Discussion When Reviewing Data

• Do the data reveal any new information about the target population that influences program strategies and outcomes?
• Do the data reveal key areas of quality performance that lead to outcomes?
• Do the data reveal any challenges or obstacles to achieving outcomes?
• Do the data reveal an area of focus for program improvement?
• Do the data inform new services and supports for staff or participants?
• Should new indicators of success be explored or included in future reports?
• Are there any internal or external collaborations and partnerships to be explored?

Data Management System

Technology is a key component of the infrastructure that supports assessment and evaluation. The ability to analyze data and capitalize on lessons learned depends on the limitations or sophistication of the data management system. A comprehensive assessment of the needs of the program or organization, and of the intended purpose of the data management system, is the first step in determining the best technology solution. Identifying, implementing, and maintaining the right data management system requires a substantial investment of resources. Nonprofits must consider these expenses and build them into their budgets and fundraising goals (Morariu, Athanasiades, & Emery, 2012).

Increased demands to provide quality services, while maintaining compliance for internal and external purposes, often result in duplicative data entry. Many nonprofits face the daunting task of managing several reports for several funders. Reporting requirements vary, with different expectations for program outcomes and how they should be measured and reported (Major, 2011). Several government funders require nonprofits to enter data into an online database. However, the data are available for only a limited time, so institutional data are lost once the case is closed or the contract ends. Oftentimes, nonprofits cannot extract data from these online databases to tell their story of success, to inform program improvement, or to leverage data for fundraising efforts (Snibbe, 2006). This complicates organizational efforts to motivate staff to enter data that ultimately will have little impact on their day-to-day activities and interventions. The goal of entering data is to use the data. Until these issues are resolved, nonprofits will need to identify an appropriate data management system that streamlines data entry and provides direct access for storing, analyzing, and reporting data. Exploring mechanisms and solutions to leverage data from these portals will ultimately strengthen the sector's efforts to become more data driven and data informed and to avoid duplicative data entry (Miles & Woodruff-Bolte, 2013).
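The duplicative-entry problem described above can sometimes be eased by consolidating the CSV exports that different funder portals allow. The sketch below is a minimal, hypothetical illustration (the `participant_id` key and file layout are assumptions, not a real funder format): it merges rows from several exports into one record per participant so staff do not re-enter the same information.

```python
import csv

def consolidate_funder_exports(paths, key="participant_id"):
    """Merge rows from several funder-report CSV exports into one
    record per participant, so the same data need not be re-entered.
    Later files fill in fields that earlier files left blank."""
    merged = {}
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                record = merged.setdefault(row[key], {})
                for field, value in row.items():
                    # Keep the first non-empty value seen for each field.
                    if value and not record.get(field):
                        record[field] = value
    return list(merged.values())
```

A script like this does not replace a data management system, but it shows the direction the chapter recommends: extracting and reusing data already entered elsewhere rather than typing it twice.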



Characteristics of a Functional Data Management System

• Systematically store and analyze data
• Provide efficiency for staff and program leadership
• Generate reports to assess performance at any given time
• User friendly
• Easily accessible and updated
• Minimize duplication across multiple reporting systems
• Interface with other systems in the public domain (e.g., Excel, Access)
• Customized to reflect relevant data points specific to the target population, strategies, and indicators
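Several of these characteristics can be made concrete with a small sketch. The example below is a hypothetical illustration only (the `outcomes` table, indicator names, and field names are assumptions, not a recommended schema): a SQLite store that holds indicator data systematically, generates a performance report at any given time, and exports to CSV so the data interface with common tools such as Excel or Access.

```python
import csv
import sqlite3

# Systematic storage: one table of participant-level indicator values.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE outcomes (participant TEXT, indicator TEXT, value REAL)"
)
conn.executemany(
    "INSERT INTO outcomes VALUES (?, ?, ?)",
    [("A01", "job_placement", 1), ("A02", "job_placement", 0),
     ("A01", "retention_90d", 1)],
)

def report(conn):
    """Assess performance at any given time: average value per indicator."""
    return conn.execute(
        "SELECT indicator, AVG(value) FROM outcomes GROUP BY indicator"
    ).fetchall()

def export_csv(conn, path):
    """Export the raw rows so staff can open them in Excel or Access."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["participant", "indicator", "value"])
        writer.writerows(conn.execute("SELECT * FROM outcomes"))
```

A real system would add validation, user access, and customization to the program's theory of change; the point of the sketch is that reporting and interoperability should be built into the data layer from the start, not bolted on per funder request.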


Although funders have increased their demands for nonprofit assessment and evaluation with a focus on outcomes, funding for these efforts has not shared the same momentum and fervor. Nonprofits would benefit from groups of funders (government agencies, private foundations, corporate funders, and individual donors) who partner to invest in capacity building: data management software and systems, training on assessment and evaluation, and technical assistance (Major, 2011). The technology industry can also play a more visible and critical role in building and upgrading the infrastructure of nonprofit data management systems. Capacity is increasingly an area of concern across nonprofits. Many nonprofits are burdened with varied reporting requirements, frequent audits, and new initiatives and policies. Everyone is required to do more with less while maintaining quality.

Some funders have made advances by allocating funds for capacity development, training, and technical assistance. They have redefined a collaborative, supportive relationship with grantees to support mutually beneficial assessment and evaluation (Major, 2011). More funders should broaden the scope of their investment in social justice by acquiring a deeper and more realistic understanding of the costs and effort associated with achieving outcomes and creating change. Oftentimes, the approved budget and staffing plan do not account for the evaluation expertise and technology resources required to collect and report data effectively and efficiently. The absence of an effective infrastructure for data management, data integrity, and reporting can compromise an organization's ability to compete for sustainable funding. It is both logical and timely for funders to invest in services and outcomes as well as in the data-driven, capacity-building resources that support assessment and evaluation. It is in the best interest of all who are invested in social good to ensure that organizations have the tools to successfully execute their mission.


References

Community Solutions. (2011). Building a culture of innovation: 30 ideas to apply to your organization. Retrieved from http://communitysolutions.ca/web/



Hernandez, G., & Visher, M. (2001). Creating a culture of inquiry: Changing methods and minds on the use of evaluation in nonprofit organizations (pp. 1–20). San Francisco, CA: The James Irvine Foundation.

Hunter, D. (2013). Working hard—And working well: A practical guide to performance management. Washington, DC: Venture Philanthropy Partners.

Mackinnon, A., Ammot, N., & McGarvey, C. (2006). Mapping change: Using a theory of change to guide planning and evaluation (pp. 1–11). New York, NY: Grantcraft.

Major, D. (2011). Expanding the impact of grantees: How do we build the capacity of nonprofits to evaluate, learn and improve? Grantmakers for Effective Organizations. Retrieved from http://www.socialimpactexchange.org/sites/www.socialimpactexchange.org/files/GEO_SWW_BuildCapacityToEvaluateLearnImprove.pdf

Miles, M. (2006). Good stories aren't enough: Becoming outcomes-driven in workforce development. Working Ventures, Public/Private Ventures. Retrieved from http://www.issuelab.org/resource/good_stories_arent_enough_becoming_outcomes_driven_in_workforce_development

Miles, M., & Woodruff-Bolte, S. (2013). Nurturing inquiry and innovation: Lessons from the workforce benchmarking improvement collaborative (pp. 1–22). Ann Arbor, MI: Corporation for a Skilled Workforce.

Morariu, J., Athanasiades, K., & Emery, A. (2012). State of evaluation 2012: Evaluation practice and capacity in the nonprofit sector (pp. 1–20). Washington, DC: Innovation Network.

Morley, E., Vinson, E., & Hatry, H. (2001). Outcome measurement in nonprofit organizations: Current practices and recommendations (pp. 3–10). Washington, DC: Independent Sector.

Snibbe, A. (2006, Fall). Drowning in data. Stanford Social Innovation Review, 4(3), 39–45.

W. K. Kellogg Foundation. (2004). Evaluation handbook. Battle Creek, MI: W. K. Kellogg Foundation. Retrieved from https://www.wkkf.org/resource-directory/resource/2010/w-k-kellogg-foundation-evaluation-handbook
