The case study selected for this discussion from the CDC Get Smart Program Planner is "Clinic-Based Education for Patients and Providers". It focuses on the design and implementation of an antibiotic use campaign in one US state. The campaign was funded by the CDC, and because these funds were limited, those involved in the project knew it was crucial to evaluate campaign efforts to ensure that the available resources were used wisely (Centers for Disease Control and Prevention, 2006).
The type of evaluation used in this case was process (or implementation) evaluation. This type of evaluation determines whether a program's or project's activities have actually been implemented as intended (Centers for Disease Control and Prevention, n.d.).
This evaluation was to take place during the first year of the program's implementation (Centers for Disease Control and Prevention, 2006). At that point, it was understood that not all of the program's goals and objectives would have been met. The program staff therefore decided that the purpose of evaluation at this stage was to improve the current program strategies and materials in order to increase the likelihood of achieving the project's intended outcomes. For example, the staff wanted to know whether patients read and comprehended the various materials provided to them, and whether providers had found their materials helpful. This led them to develop several evaluation questions related to the project's current progress. These questions assessed whether some of the program's activities had been implemented as planned and also examined the initial reactions of the involved parties (patients and providers) to those activities. Such an evaluation design was fitting and appropriate because it would establish whether the program was on its way to achieving its intended outcomes; if it was not, appropriate changes and adjustments could be made. The evaluation would also serve as a basis for seeking more funding, because the evaluators could present its results to the program's funder (the CDC) and convince them that the program was well on its way to achieving its intended outcomes and therefore deserved additional funding.
The major challenge the evaluators faced in developing the evaluation plan was agreeing on the priority outcomes and, therefore, on the focus of the evaluation. The different stakeholders involved in the project had different priority outcomes, so agreeing on where the evaluation's focus should lie proved difficult (Centers for Disease Control and Prevention, 2006). This is a common problem in the development of many programs' evaluation plans, and one that must be overcome if a program is, in fact, to be successful (Wholey, Hatry, & Newcomer, 2010).
In spite of the stakeholders' initial lack of consensus regarding the evaluation plan, they and the program's staff were eventually able to agree on a set of evaluation questions. The role of the program stakeholders cannot, therefore, be overstated. First, the stakeholders assisted in the formulation of the evaluation questions. They also provided feedback on the initial implementation of the project's activities, for instance, on the effectiveness of the educational materials and tools provided; this feedback came from both patients and providers. Their feedback was therefore crucial in assessing whether the project was on course to achieve its objectives. It is thus safe to say that stakeholders were sufficiently involved in the evaluation process of this project.
Centers for Disease Control and Prevention. (n.d.). Types of evaluation. Retrieved January 24, 2016, from http://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf
Centers for Disease Control and Prevention. (2006). Get Smart About Antibiotics | Evaluation manual: Case studies. Retrieved January 24, 2016, from http://www.cdc.gov/getsmart/community/improving-prescribing/program-development-eval/evaluation-manual/case-studies.html
Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (2010). Handbook of practical program evaluation. New York, NY: John Wiley & Sons.