
DISCUSSION: Using a Logic Model as a Strategic Planning Tool


 


In your readings for this week, Watson and Hoefer (2013) provide a general overview of using logic models to define a problem and identify inputs, activities, outputs, and outcomes.

In your initial post, discuss how application of a logic model would be different for a small nonprofit organization, a large federal agency, or the development of a policy for a specific human service program. How would the leader for each of these organizations engage the participants in the process? Cite examples from the reading.


Attachment 1

Watson, L. D., & Hoefer, R. A. (2013). Developing nonprofit and human service leaders: Essential knowledge and skills (Chapter 7: Logic Models and Program Evaluation). Thousand Oaks, CA: SAGE Publications.

INTRODUCTION

Nonprofit administrators both develop and evaluate programs. A logic model is useful for both tasks, even though development happens before the program begins and evaluation happens after it has been in operation. A good evaluation, however, is planned at the same time that the program is designed so that necessary data are collected along the way, rather than annually or after the program finishes. This chapter first describes the process of logic modeling using an example logic model. Then, it discusses how to use the logic model to plan an evaluation.

LOGIC MODELS

The idea of logic models as an adjunct to program evaluation extends at least as far back as 2000 when the Kellogg Foundation published a guide to developing logic models for program design and evaluation. According to Frechtling (2007), a logic model is “a tool that describes the theory of change underlying an intervention, product or policy” (p. 1). While one can find many variations on how a logic model should be constructed, it is a versatile tool that is used to design programs, assist in their implementation, and guide their evaluation. This chapter describes one basic approach to logic modeling for program evaluation and links the planning and evaluation aspects of human service administration.

You should understand that not all programs have been designed with the aid of a logic model, although that is becoming less common every year. Federal grants, for example, often require applicants to submit a logic model, and their use throughout the human services sector is growing through academic education and in-service training. If there is no logic model for a program you are working with, it is possible to create one after a program has been implemented. You can thus bring the power of the tool to bear when changing a program or creating an evaluation plan.

Logic models borrow their terminology from systems theory. Because logic models are said to describe the program’s “theory of change,” it is possible to believe that this refers to something such as social learning theory, cognitive-behavioral theory, or any one of a number of psychological or sociological theories. In general, though, logic models have a much less grand view of theory. We begin with the assumption that any human services program is created to solve a problem. The problem should be clearly stated in a way that does not predetermine how the problem will be solved. The utility of a logic model is in showing how the resources used (inputs) are changed into a program (activities) with closely linked products (outputs) that then lead to changes in clients in the short, medium, and long terms. The net effect of these client changes is that the original problem is solved or at least made better for the clients in the program. An example of a logic model is shown as Figure 7.1.

The problem being addressed by the example program is, “School-aged youth have anger management problems leading to verbal and physical fights at school and home.” This problem statement is specific about who has a problem (school-aged youth), what the problem is (anger management problems leading to verbal and physical fights), and where it is a problem (school and home). It also does not prejudge what the solution is, allowing for many possible programs to address the problem. An example problem statement that is not as good because it states the problem in a way that allows only one solution is, “There is a lack of anger management classes in schools for school-aged youth.”

Another way to make the problem statement good is to phrase the statement in such a way that almost anyone can agree that it is actually a problem. The example problem statement might make this point more clearly by saying, “There are too many verbal and physical fights at school and home among school-aged youth.” Phrased this way, there would be little doubt that this is a problem, even though the statement is not specific about the number of such fights or the cause of the fights. If the program personnel want to focus on anger management problems, this way of stating the problem might lead to a host of other issues being addressed instead that might be leading to fights—such as overcrowding in the halls, gang membership, conflict over curfews at home, or anything else that might conceivably cause youth to fight at school or home. Be prepared to revisit your first effort at the problem statement and seek input from interested stakeholders to be sure that you are tackling what is really considered the reason for the program. The problem statement is vital to the rest of the logic model and evaluation so take the time to make several drafts to get full agreement.

After the problem statement, the logic model has six columns. Arrows connect what is written in one column to something else in the next column to the right or even within the same column. These arrows are the “logic” of the program. If the element in the column to the left is achieved, then we believe that the element at the end of the arrow will be achieved. Each arrow can be considered to show a hypothesis that the two elements are linked. (The example presented here is intentionally not “perfect” so that you can see some of the nuances and challenges of using this tool.)

The first column is labeled “Inputs.” In this column, you write the major resources that will be needed or used in the program. Generically, these tend to be funds, staff, and space, but can include other elements such as type of funds, educational level of the staff, and location of the space (on a bus line, for example), if they apply to your program. The resource of “staff,” for example, might mean MSW-level licensed counselors. In the end, if only staff members with bachelor’s degrees in psychology are hired, this would indicate that the “staff” input was inadequate.

The second column is “Activities.” In this area, you write what the staff members of the program will be doing—what behaviors you would see them engage in if you sat and watched them. Here, as elsewhere in the logic model, there are decisions about the level of detail to include. It would be too detailed, for example, to have the following bullet points for the case management activity:

• Answer phone calls about clients
• Make phone calls about clients
• Learn about other agencies’ services
• Write out referral forms for clients to other agencies

This is what you would see, literally, but the phrase “case management” is probably enough. Somewhere in program documents, there should be a more detailed description of the duties of a case manager so that this level of detail is not necessary on the logic model, which is, after all, a graphical depiction of the program’s theory of change, not a daily to-do list.


[Figure 7.1: Example of a Logic Model. Problem: School-aged youth have anger management problems leading to verbal and physical fights at school and home. Inputs: funding, staff, space. Activities: case management, individual counseling. Outputs: referrals to other agencies, counseling sessions. Short-term outcomes: better recognition of the role anger plays in their lives; beginning level use of skills to handle anger. Medium-term outcomes: higher level use of skills to handle anger; reframe situations so anger occurs less frequently. Long-term outcomes: fewer fights at school; fewer fights at home.]


The other danger is being too general. In this case, a phrase such as “provide social work services” wouldn’t be enough to help the viewer know what the employee is doing as there are so many activities involved in social work services. Getting the correct level of specificity is important in helping develop your evaluation plan here and throughout the logic model.

As you can see from the arrows leading from the inputs to the activities, the program theory indicates that, given the proper funds, staff, and space, the activities of case management and individual counseling will occur. This may or may not happen, however, which is why a process evaluation is needed and will be discussed later in this chapter.

The third column lists “Outputs.” An output is a measurable result of an activity. In this example, the activity of “case management” results in client youth being referred to other agencies for services. The output of the activity “individual counseling” is counseling sessions. It is important to note that outputs are not changes in clients—outputs are the results of agency activities that may or may not then result in changes to clients. The connection between agency activity and outputs is perhaps the most difficult part of putting together a logic model because many people mistakenly assume that if a service is given and documented, then client changes are automatic. This is simply not true.

The next three columns are collectively known as “Outcomes.” An outcome is a change in the client and should be written as a change in knowledge, attitude, belief, status, or behavior. Outcomes are why programs are developed and run—to change clients’ lives. Outcomes can be developed at any level of intervention—individual, couple or family, group, organization, or community of any size. This example uses a program designed to make a change at an individual youth level, but could also have changes at the school or district level if desired.

Outcomes are usually written to show a time dimension with short-, medium-, and long-term outcomes. The long-term outcome is the opposite of the problem stated at the top of the logic model and thus ties the entire intervention back to its purpose—to solve a particular problem. The division of outcomes into three distinct time periods is obviously a helpful fiction, not a tight description of reality. Still, some outcomes are expected to come sooner than others. These short-term outcomes are usually considered the direct result of outputs being developed. On the example logic model, the arrows indicate that referrals and individual counseling are both supposed to result in client youth better recognizing the role that anger plays in their lives. After that is achieved, the program theory hypothesizes that clients will use skills at a beginning level to handle their anger. This is a case where one short-term outcome (change in self-knowledge) leads the way for a change in behavior (using skills).

OUTCOMES AND GOALS AND OBJECTIVES: WHAT’S THE DIFFERENCE?

Logic models use the term outcome, but many people use the terms goals and objectives to talk about what a program is trying to achieve. In the previous chapter, you were told that an outcome objective answers the question, “What difference did it make in the lives of the people served?” In this chapter, you are told that an outcome is a “change in the client.” What’s the difference?

In reality, there is not much difference. Goals and objectives are one way of talking about the purpose of a program. This terminology is older than the logic model terminology and more widespread. But it can be confusing, too, because an objective at one level of an organization may be considered a goal at another level or at a different time.

Outcomes are easier to fit into the logic model approach to showing program theory by relating to resources, activities, and outputs. Systems theory terminology is more widespread than before and avoids some of the conceptual pitfalls of goals and objectives thinking. We present both sets of terms so that you can be comfortable in all settings. But you should realize that both approaches are ultimately talking about the same thing: the ability of an organization to make people’s lives better.



The element “beginning level use of skills to handle anger” has two arrows leading to medium-term outcomes. The first arrow leads to “higher level use of skills to handle anger.” In this theory of change, at this point, there is still anger, but the youth recognize what is occurring and take measures to handle it in a skillful way that does not lead to negative consequences. The second arrow from “beginning level use of skills to handle anger” indicates that the program designers believe that the skills youth learn will assist them to reframe situations they are in so that they feel angry less frequently. This is a separate behavior from applying skills to handle anger, so it receives its own arrow and box.

The final column represents the long-term outcomes. Often, there is only one element shown in this column, one indicating the opposite of the problem. In this logic model, since the problem is seen to occur both at school and at home, each is looked at separately. A youth may reduce fights at home but not at school, or vice versa, so it is important to leave open the possibility of only partial success.

This example logic model shows a relatively simple program theory, with two separate tracks for intervention but with overlapping outcomes expected from the two intervention methods. It indicates how one element can lead to more than one “next step” and how different elements can lead to the same outcome. Finally, while it is not necessarily obvious just yet, this example shows some weak points in the program’s logic that will emerge when we use it as a guide to evaluating the program.
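Since the full model has now been walked through column by column, it can help to see the same information written out as plain data. The sketch below is a minimal illustration in Python (not from Watson and Hoefer; the representation and the exact arrow list are our own reading of Figure 7.1 as described above): each arrow becomes an if-then hypothesis that an evaluation can later test.

# Illustrative only: the Figure 7.1 logic model written out as plain Python data.
# Element names come from the chapter's example; the representation itself and the
# exact arrow list are an interpretation, not part of the original text.

logic_model = {
    "problem": ("School-aged youth have anger management problems leading to "
                "verbal and physical fights at school and home."),
    "inputs": ["Funding", "Staff", "Space"],
    "activities": ["Case management", "Individual counseling"],
    "outputs": ["Referrals to other agencies", "Counseling sessions"],
    "outcomes": {
        "short-term": ["Better recognition of the role anger plays in their lives",
                       "Beginning level use of skills to handle anger"],
        "medium-term": ["Higher level use of skills to handle anger",
                        "Reframe situations so anger occurs less frequently"],
        "long-term": ["Fewer fights at school", "Fewer fights at home"],
    },
}

# Each arrow in the figure is one hypothesis: if the first element is achieved,
# the second should follow. (Arrow list approximated from the chapter's description.)
links = [
    ("Referrals to other agencies", "Better recognition of the role anger plays in their lives"),
    ("Counseling sessions", "Better recognition of the role anger plays in their lives"),
    ("Better recognition of the role anger plays in their lives",
     "Beginning level use of skills to handle anger"),
    ("Beginning level use of skills to handle anger",
     "Higher level use of skills to handle anger"),
    ("Beginning level use of skills to handle anger",
     "Reframe situations so anger occurs less frequently"),
    ("Higher level use of skills to handle anger", "Fewer fights at school"),
    ("Reframe situations so anger occurs less frequently", "Fewer fights at home"),
]

for cause, effect in links:
    print(f"Hypothesis: if '{cause}', then '{effect}'.")

Writing the model down this way makes it easy to check, when planning the evaluation, that every box has at least one measure and every arrow has a way to be tested.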

PROGRAM EVALUATION

As you can see from this discussion, we have used a logic model to represent what we believe will happen when the proper inputs are applied to the correct client population. In the end, if all goes well, clients will no longer have the problem the program addresses, or at least the degree or extent of the problem will be less.

Evaluation is a way to determine the worth or value of a program (Rossi, Lipsey, & Freeman, 2003). There are two primary types of evaluation: process and outcome. The first, process evaluation, examines the way a program runs. In essence, a process evaluation examines the first three columns of a logic model to determine whether required inputs were available, the extent to which activities were conducted, and the degree of output accomplishment. Another aspect of a process evaluation, called fidelity assessment, examines whether the program being evaluated was conducted in accord with the way the program was supposed to be conducted. If all components of a program are completed, fidelity is said to be high. Particularly with evidence-based and manualized programs, if changes are made to the program model during implementation, the program’s effectiveness is likely to be diminished.

The value of the logic model for evaluation is that most of the conceptual information needed to design the evaluation of a program is in the logic model. The required inputs are listed, and the evaluator can check to determine which resources actually came into the program. Activities are similarly delineated, and an evaluator can usually find a way to count the number of activities that the program completed. Similarly, the logic model describes what outputs are expected, and the evaluator merely has to determine how to count the number of completed outputs that result from the program activities.

Looking at the example logic model shows us that we want to have in our evaluation plan at least one way to measure whether funding, staff, and space (the inputs) are adequate; how much case management occurred and individual counseling was conducted (the activities); and the extent to which referrals were made (and followed up on) and the number of individual counseling sessions that happened (the outputs). This information should be in program documents so that what was planned can be compared with what was actually provided. Having a logic model from the beginning allows the evaluator to ensure that proper data are being collected from the program’s start, rather than scrambling later to answer some of these basic questions.
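As a rough sketch of what such a process-evaluation tally might look like, the Python fragment below compares planned versus actual figures for the three left-hand columns. All numbers and labels are invented for illustration; they are not from the chapter or any real program.

# Hypothetical process-evaluation tally: planned versus actual for inputs,
# activities, and outputs. Every figure below is made up for illustration.

planned = {
    "Inputs: licensed counselors (FTE)": 2,
    "Activities: case management contacts": 400,
    "Activities: individual counseling sessions": 300,
    "Outputs: referrals to other agencies": 120,
    "Outputs: completed counseling sessions": 300,
}

actual = {
    "Inputs: licensed counselors (FTE)": 1,
    "Activities: case management contacts": 310,
    "Activities: individual counseling sessions": 285,
    "Outputs: referrals to other agencies": 95,
    "Outputs: completed counseling sessions": 270,
}

print(f"{'Logic model element':<45}{'Planned':>9}{'Actual':>9}{'% of plan':>11}")
for element, plan in planned.items():
    done = actual[element]
    print(f"{element:<45}{plan:>9}{done:>9}{done / plan:>11.0%}")

A table like this answers the process questions directly: which inputs fell short, how much of each activity actually happened, and how close the outputs came to what was promised.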

As noted earlier, this is not a perfect logic model. The question in the process evaluation at this stage might be to determine how to actually measure “case management.” The output is supposed to be “referrals to other agencies,” but there is much else that could be considered beneficial from a case management approach. This element may need careful delineation and discussion with stakeholders to ascertain exactly what is important about case management that should be measured.

The second primary type of evaluation examines program outcomes. Called an outcome evaluation, it focuses on the right half of the logic model, where the designated short-, medium-, and long-term outcomes are listed. The evaluator chooses which outcomes to assess from among the various outcomes in the logic model. Decisions need to be made about how to measure the outcomes, but the logic model provides a quick list of what to measure. In the example logic model, the short-term outcome “better recognition of the role anger plays in their lives” must be measured and could be accomplished using a set of questions asked at intake into the program and after some time has passed after receiving services. One standardized anger management instrument is called the “Anger Management Scale” (Stith & Hamby, 2002). A standardized instrument, if it is appropriate for the clients and program, is a good choice because you can find norms, or expected responses, to the items on the instrument. It is helpful to you, as the evaluator, to know what “average” responses are so you can compare your clients’ responses to the norms. Sometimes, however, it can be difficult to find a standardized instrument that is fully appropriate and relevant to your program.
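For instance, if a standardized instrument is used, the comparison to norms can be as simple as the sketch below. The scale name comes from the chapter’s citation (Stith & Hamby, 2002), but the scores, the scoring direction, and the norm value are placeholders invented for illustration, not actual instrument data.

# Hypothetical comparison of client scores on a standardized anger instrument
# to a published norm. All numbers are placeholders, not real scale data.

intake_scores = [38, 42, 35, 47, 40, 44, 39]     # assumed: higher = more anger problems
followup_scores = [31, 36, 33, 40, 34, 38, 30]   # same clients after services
population_norm = 30.0                           # placeholder "average" norm

def mean(values):
    return sum(values) / len(values)

print(f"Mean at intake:      {mean(intake_scores):.1f}")
print(f"Mean at follow-up:   {mean(followup_scores):.1f}")
print(f"Published norm:      {population_norm:.1f}")
print(f"Average improvement: {mean(intake_scores) - mean(followup_scores):.1f} points")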

Another way of measuring is to use an instrument you make up yourself. This has the advantage of simplicity and of being directly connected to your evaluation. In this case, for example, you could approach this outcome in at least two ways. First, you could request a statement from the case worker or counselor indicating that the client has “recognized the role that anger plays” in his or her life, without going into any detail. A second approach would be to have the client write a statement about the role anger plays in his or her life. Neither of these measurements will have a lot of practical utility. Going through the logic model in this way actually shows that this link in program logic is difficult to measure and may not be totally necessary.

WHAT IS AN UNANTICIPATED OUTCOME?

Outcome evaluations also sometimes include a search for unanticipated outcomes. An unanticipated outcome is a change in clients or the environment that occurs because of the program, intervention, or policy, but that was not expected to result and so is not included in the logic model.

While it may seem startling to have an example in a text that shows a less-than-perfect approach, it is included here to demonstrate that a logic model is very useful for revealing weak spots in program logic. This link to “better recognition” is not a fatal problem, and may indeed be an important cognitive change for the client. The issue for evaluation is how to measure it, and whether it really needs to be measured at all.

Of more importance is the next link, which leads to “learn skills to handle anger.” The evaluation must ensure that clients understand skills to help them handle anger and so document these skills. It is not enough to indicate that skills were taught, as in a group or individual session. Teaching a class is an activity and so would be documented in the process evaluation portion of the overall evaluation, but being in a class does not guarantee a change in the client. In this evaluation, we would like to have a measure of skill that can show improvement in the ability to perform the anger management skill. This attribute of the measure is important because we expect the clients to get better in their use over time and include more skillful use of the techniques as a medium-term outcome in the logic model.

The other medium-term outcome expected is that clients will be able to reframe situations so that they actually get angry less frequently. The program logic shows this outcome occurring as a result of both beginning and higher level use of skills. Because this element is broken out from the use of skills to “handle anger,” it will need a separate measure. As an evaluator, you can hope that an established, normed instrument is available, or that this is a skill that is measured by a separate item on a longer scale. If not, you will need to find a way to pull this information from staff members’ reports or client self-assessments.

The final links in the logic model connect the medium-term outcomes to the long-term outcomes of fewer fights at school and fewer fights at home. Because youth having too many fights was identified as the problem this program is addressing, we want to know to what degree fights decreased. The measure here could be client self-reports, school records, or reports from people living in the home.

Implicit in the discussion of the use of this logic model for evaluation purposes is that measurements at the end will be compared to an earlier measure of the same outcome. This is called a single group pretest-posttest evaluation (or research) design. It is not considered a strong design due to the ability of other forces (threats to internal validity) to affect the results. The design could be stronger if a comparison group of similar youth (perhaps at a different school) were chosen and tracked with the same measures. The design could be much stronger if youth at the same school were randomly assigned to either a group that received the program or a different group that did not receive the program. It is beyond the scope of this book to cover in detail all the intricacies of measurement and evaluation design, but we hope this brief overview whets your appetite for learning more.
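A minimal sketch of the two weaker designs just described is given below, assuming a simple count of fights in the prior month as the outcome measure; the scores and group sizes are invented for illustration.

# Hypothetical pre/post fight counts per youth (invented numbers).
program_pre = [6, 4, 7, 5, 8, 3]
program_post = [3, 2, 4, 3, 5, 2]

comparison_pre = [5, 6, 4, 7, 5, 6]    # similar youth who did not receive the program
comparison_post = [5, 5, 4, 6, 5, 7]

def mean_change(pre, post):
    """Average post-minus-pre change; negative means fewer fights."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Single-group pretest-posttest design: only the program group's change is known,
# so threats to internal validity (history, maturation, etc.) remain uncontrolled.
print(f"Program group change:    {mean_change(program_pre, program_post):+.2f}")

# Adding a comparison group strengthens the design: the gap between the two
# change scores is a rough (still imperfect) estimate of the program's effect.
print(f"Comparison group change: {mean_change(comparison_pre, comparison_post):+.2f}")
effect = mean_change(program_pre, program_post) - mean_change(comparison_pre, comparison_post)
print(f"Difference in changes:   {effect:+.2f}")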

Measurement of outcomes, while alluded to earlier, is an important part of any evaluation effort. If measures are not appropriate or have low validity and reliability, the value of the evaluation will be seriously compromised. It is suggested that anyone designing an evaluation look at a book on research methods such as Rubin and Babbie (2012), and also have access to books about measures, such as Fischer and Corcoran (2007). (The cost of a new book on research methods may be pretty high, but used editions contain much the same information and can be found for much lower prices.)

SUMMARY

Using an example, this chapter has covered the components of a logic model and how to develop one. It has also demonstrated how to use a logic model to design an evaluation plan, including how doing so raises issues of program logic, measurement, and evaluation design.

REFERENCES

Fischer, J., & Corcoran, K. (2007). Measures for clinical practice and research: A sourcebook (4th ed.). New York: Oxford University Press.

Frechtling, J. (2007). Logic modeling methods in program evaluation. San Francisco: Jossey-Bass.

Preskill, H., & Russ-Eft, D. (2004). Building evaluation capacity. Thousand Oaks, CA: Sage.

Rossi, P., Lipsey, M., & Freeman, H. (2003). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.

Rubin, A., & Babbie, E. (2012). Essential research methods for social work (3rd ed.). Brooks-Cole.

Stith, S., & Hamby, S. (2002). The anger management scale: Development and preliminary psychometric properties. Violence and Victims, 17(4), 383–402.

HELPFUL TERMS

Activities—elements of a logic model that describe what is done in the program, intervention, or policy with the inputs allocated.

Fidelity evaluation or fidelity assessment—a type of process evaluation specifically designed to determine the fidelity with which a program, intervention, or policy was implemented. In other words, a fidelity evaluation (or fidelity assessment) determines the degree to which the program was conducted in the way it was supposed to be conducted.

Goals—descriptions of future outcomes or states of being that typically are not measurable or achievable. Instead, goal statements are focused on outcomes and are ambitious and idealistic (see Chapter 6).

Inputs—elements of a logic model that describe the resources that will be used to address the problem described in the problem statement. Inputs typically include funding, staff, and space.

Logic …