Understanding the Level of Information Needed for the Program Evaluation


The success of the program evaluation revolves around the evaluator’s ability to develop practical, researchable questions. A good rule to follow is to focus the evaluation on one or two key questions. Too many questions can lengthen the process and overwhelm the evaluator with too much data that, instead of facilitating a decision, might produce inconsistent findings. Sometimes, funding sources require only that some vague, undefined type of evaluation is conducted. The funding sources might neither expect nor desire dissertation-quality research; they simply might expect “good faith” efforts when beginning evaluation processes. Other agencies may be quite demanding in the types and forms of data to be provided. Obviously, the choice of methodology, data collection procedures, and reporting formats will be strongly affected by the purpose, objectives, and questions examined in the study.

It is important to note the difference between general research and evaluation. In research, the investigator often focuses on questions based on theoretical considerations or hypotheses generated to build on research in a specific area of study. Although program evaluations may focus on an intervention derived from a theory, the evaluation questions should, first and foremost, be driven by the program’s objectives. The evaluator is less concerned with building on prior literature or contributing to the development of practice theory than with determining whether a program worked in a specific community or location.

There are actually two main types of evaluation questions. There are questions that focus on client outcomes, such as, “What impact did the program have?” These kinds of questions are addressed by using outcome evaluation methods. Then there are questions that ask, “Did the program achieve its goals?” “Did the program adhere to the specified procedures or standards?” or “What was learned in operating this program?” These kinds of questions are addressed by using process evaluation methods. We will examine both of these types of evaluation approaches in the following sections.

Process Evaluation

Process evaluations offer a “snapshot” of the program at any given time. Process evaluations typically describe the day-to-day program efforts; program modifications and changes; outside events that influenced the program; people and institutions involved; culture, customs, and traditions that evolved; and sociodemographic makeup of the clientele (Scarpitti, Inciardi, & Pottieger, 1993). Process evaluation is concerned with identifying program strengths and weaknesses. This level of program evaluation can be useful in several ways, including providing a context within which to interpret program outcomes and allowing other agencies or localities wishing to start similar programs to benefit without having to make the same mistakes.

As an example, Bentelspacher, DeSilva, Goh, and La Rowe (1996) conducted a process evaluation of the cultural compatibility of psychoeducational family group treatment with ethnic Asian clients. As another example, Logan, Williams, Leukefeld, and Minton (2000) conducted a detailed process evaluation of the drug court programs before undertaking an outcome evaluation of the same programs. The Logan et al. study used multiple methods to conduct the process evaluation, including in-depth interviews with the program administrative personnel, interviews with each of five judges involved in the program, surveys and face-to-face interviews with 22 randomly selected current clients, and surveys of all program staff, 19 community treatment provider representatives, 6 randomly selected defense attorney representatives, 4 prosecuting attorney representatives, 1 representative from the probation and parole office, 1 representative from the local county jail, and 2 police department representatives. In all, 69 different individuals representing 10 different agency perspectives provided information about the drug court program. Also, all agency documents were examined and analyzed, observations of various aspects of the program process were conducted, and client intake data were analyzed as part of the process evaluation. The results were all integrated and compiled into one comprehensive report.

What makes a process evaluation so important is that researchers often have relied only on selected program outcome indicators, such as termination and graduation rates or number of rearrests, to determine effectiveness. However, to better understand how and why a program such as drug court is effective, an analysis of how the program was conceptualized, implemented, and revised is needed. Consider this example: say one outcome evaluation of a drug court program showed a graduation rate of 80% of those who began the program, while another outcome evaluation found that only 40% of those who began the program graduated. Then, the graduates of the second program were more likely to be free from substance use and criminal behaviors at the 12-month follow-up than the graduates from the first program. A process evaluation could help to explain the specific differences in factors such as selection (how clients get into the programs), treatment plans, monitoring, program length, and other program features that may influence how many people graduate and stay free from drugs and criminal behavior at follow-up. In other words, a process evaluation, in contrast to an examination of program outcome only, can provide a clearer and more comprehensive picture of how drug court affects those involved in the program. More specifically, a process evaluation can provide information about program aspects that need to be improved and those that work well (Scarpitti, Inciardi, & Pottieger, 1993). Finally, a process evaluation may help to facilitate replication of the drug court program in other areas. This often is referred to as technology transfer.

A different but related process evaluation goal might be a description of the failures and departures from the way in which the intervention originally was designed. How were the staff trained and hired? Did the intervention depart from the treatment manual recommendations? Influences that shape and affect the intervention that clients receive need to be identified because they affect the fidelity of the treatment program (e.g., delayed funding or staff hires, changes in policies or procedures). When program implementation deviates significantly from what was intended, this might be the logical explanation as to why a program is not working.

Outcome or Impact Evaluation

Outcome or impact evaluation focuses on the targeted objectives of the program, often looking at variables such as behavior change. For example, many drug treatment programs may measure outcomes or “success” by the number of clients who abstain from drug use. Questions always arise, though. For instance, an evaluation might reveal that 90% of those who graduate from the program abstain from drug use 30 days after the program was completed. However, only 50% report abstaining from drug use 12 months after the program was completed. Would key stakeholders involved all consider that a success or failure of the program? This example brings up three critical issues in outcome evaluations.
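The arithmetic behind follow-up rates like these can be sketched in a few lines. The client records below are invented purely for illustration (they are constructed to reproduce the 90% and 50% figures from the example); nothing here comes from an actual evaluation dataset:

```python
# Hypothetical follow-up records for program graduates: for each client,
# whether they reported abstinence at the 30-day and 12-month follow-ups.
# All data here are invented for illustration.

def abstinence_rate(records, follow_up):
    """Percentage of graduates reporting abstinence at the given follow-up."""
    abstinent = sum(1 for r in records if r[follow_up])
    return round(100 * abstinent / len(records))

# 10 invented graduates: 9 abstinent at 30 days, 5 still abstinent at 12 months.
graduates = (
    [{"day_30": True, "month_12": True} for _ in range(5)]
    + [{"day_30": True, "month_12": False} for _ in range(4)]
    + [{"day_30": False, "month_12": False}]
)

print(abstinence_rate(graduates, "day_30"))    # 90
print(abstinence_rate(graduates, "month_12"))  # 50
```

The same computation on the same clients yields a very different “success” rate depending on which follow-up point the stakeholders choose to look at, which is exactly the interpretive problem the example raises.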

One of the critical issues in outcome evaluations is related to understanding for whom the program works best and under what conditions. In other words, a more interesting and important question, rather than just asking whether a program works, would be to ask, “Who are those 50% of people who remained abstinent from drug use 12 months after completing the program, and how do they differ from the 50% who relapsed?” It is not unusual for some evaluation questions to need a combination of both process and impact evaluation methodologies. For example, if it turned out that results of a particular evaluation showed that the program was not effective (impact), then it might be useful to know why it was not effective (process). In such cases, it would be important to know how the program was implemented, what changes were made in the program during the implementation, what problems were experienced during the implementation, and what was done to overcome those problems.

Another important issue in outcome evaluation has to do with the timing of measuring the outcomes. Outcome effects are usually measured after treatment or postintervention. These effects may be either short term or long term. Immediate outcomes, or those generally measured at the end of the treatment or intervention, might or might not provide the same results as one would get later in a 6- or 12-month follow-up, as highlighted in the example above.

The third important issue in outcome evaluation has to do with what specific measures were used. Is abstinence, for example, the only measure of interest, or is reduction in use something that might be of interest? Refraining from criminal activity or holding a steady job may also be an important goal of a substance abuse program. If we only measure abstinence, we would never know about other kinds of outcomes the program may affect.
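To illustrate scoring several measures at once, here is a minimal sketch; the measure names, the reduced-use definition (fewer using days at follow-up than at intake), and the client data are all assumptions made up for this example, not drawn from the text:

```python
# Sketch of summarizing several outcome measures instead of abstinence alone.
# All measures and data below are illustrative assumptions.

def outcome_summary(clients):
    """Proportion of clients meeting each of three hypothetical outcomes."""
    n = len(clients)
    return {
        # Abstinence: no using days reported at follow-up.
        "abstinent": sum(c["follow_up_days_used"] == 0 for c in clients) / n,
        # Reduced use: fewer using days at follow-up than at intake.
        "reduced_use": sum(
            c["follow_up_days_used"] < c["intake_days_used"] for c in clients
        ) / n,
        # Steady employment at follow-up.
        "employed": sum(c["employed"] for c in clients) / n,
    }

clients = [
    {"intake_days_used": 20, "follow_up_days_used": 0, "employed": True},
    {"intake_days_used": 25, "follow_up_days_used": 5, "employed": True},
    {"intake_days_used": 15, "follow_up_days_used": 15, "employed": False},
    {"intake_days_used": 30, "follow_up_days_used": 10, "employed": True},
]

print(outcome_summary(clients))
# {'abstinent': 0.25, 'reduced_use': 0.75, 'employed': 0.75}
```

In this invented sample only one client in four is fully abstinent, yet three in four reduced their use and hold steady jobs — outcomes an abstinence-only measure would miss entirely.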

These last two issues in outcome evaluations have to do with the evaluation methodology and analysis and are addressed in more detail below.