Evaluation is an Everyday Activity
Program Evaluation Discussions
People often say one thing and do another.
This came home clearly to me with a nutrition project conducted with fifth- and sixth-grade students over two consecutive semesters. We taught them assorted nutrition and fitness concepts (nutrient density, empty calories, food groups, energy requirements, etc.). At the beginning of the project, we asked them to identify which snack they would choose if they were with their friends (apple, carrots, peanut butter crackers, chocolate chip cookie, potato chips); at the end of the project, we asked the same question. Both pre and post, they said they would choose an apple. On the pretest, the remaining choices in descending order were carrots, potato chips, chocolate chip cookies, and peanut butter crackers. On the posttest, the order was chocolate chip cookies, carrots, potato chips, and peanut butter crackers. (Although the sample sizes were reasonable [i.e., greater than 30], I’m not sure that the difference between 13.0% [potato chips] and 12.7% [peanut butter crackers] was significant. I do not have those data.) Then we also asked them to choose one real snack. What they said and what they did were not the same, even at the end of the project. Cookies won, hands down, in both the treatment and control groups. Discouraging to say the least; disappointing to be sure. What they said they would do and what they actually did were different.
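That doubt about the 13.0% vs 12.7% gap can be checked with a standard two-proportion z-test. Here is a minimal sketch, assuming a hypothetical 150 respondents per wave (the actual counts are not reported above; only that each sample exceeded 30):

```python
from math import erf, sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    x1, x2 = p1 * n1, p2 * n2          # implied counts
    p_pool = (x1 + x2) / (n1 + n2)     # pooled proportion under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal approximation
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical sample size of 150 per wave (an assumption for illustration only)
z, p = two_proportion_z(0.130, 150, 0.127, 150)
print(f"z = {z:.2f}, p = {p:.2f}")  # nowhere near significant at samples of this size
```

With percentages that close, the difference would not approach significance unless the samples were enormous, which supports the author's skepticism.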
Although this program ran from September through April, far longer than the typical half-day (or even full-day) professional development conference, what the students said was still different from what they did. We attempted to measure knowledge, attitude, and behavior. We did not measure intention to change.
That experience reminded me of a finding of Paul Mazmanian. (I know I’ve talked about him and his work before; his work bears repeating.) He conducted a randomized controlled trial involving continuing medical education and commitment to change. After all, any program worth its salt will result in behavior change, right? So Mazmanian set up this experiment with doctors, the world’s worst folks with whom to try to change behavior.
He found that “…physicians in both the study and the control groups were significantly more likely to change (47% vs 7%, p<0.001) IF they indicated an INTENT (emphasis added in both cases) to change immediately following the lecture” (i.e., the continuing education program). He did a further study and found that signing a statement that they would change did not increase the likelihood that they would actually change.
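To see why a 47% vs 7% split easily reaches p < 0.001, a quick Pearson chi-square on a 2×2 table is enough. This sketch assumes hypothetical groups of 100 physicians each, since Mazmanian's actual sample sizes are not quoted here:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (df = 1, no continuity correction) for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 47 of 100 "intenders" changed vs 7 of 100 non-intenders.
stat = chi_square_2x2(47, 53, 7, 93)
CRITICAL_001 = 10.83  # chi-square critical value at p = 0.001 with df = 1
print(stat > CRITICAL_001)  # the difference clears the p = 0.001 threshold
```

Even at these modest assumed group sizes, the statistic is several times the critical value, so the reported p < 0.001 is entirely plausible.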
Bottom line, measure intention to change in evaluating your programs.
Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.
Mazmanian, P. E., Johnson, R. E., Zhang, A., Boothby, J., & Yeatts, E. J. (2001). Effects of a signature on rates of change: A randomized controlled trial involving continuing education and the commitment-to-change model. Academic Medicine, 76(6), 642-646.
I may have mentioned naturalistic models before; if not, I should have labeled them as such.
Today, I’ll talk some more about those models.
These models are often described as qualitative. Egon Guba (who died in 2008) and Yvonna Lincoln (distinguished professor of higher education at Texas A&M University) discuss qualitative inquiry in their 1981 book, Effective Evaluation (it has a long subtitle). They indicate that there are two factors on which constraints can be imposed: 1) antecedent variables and 2) possible outcomes, with the first impinging on the evaluation at its outset and the second referring to the possible consequences of the program. They propose a 2×2 figure that contrasts naturalistic inquiry with scientific inquiry depending on which constraints are imposed.
Besides Eisner’s model, Robert Stake and David Fetterman have developed models that fit this naturalistic approach. Stake’s model is called responsive evaluation, and Fetterman talks about ethnographic evaluation. Stake’s work is described in Standards-Based & Responsive Evaluation (2004). Fetterman has a volume called Ethnography: Step-by-Step (2010).
Stake contended that evaluators needed to be more responsive to the issues associated with the program, and that in being responsive, measurement precision would be decreased. He argued that an evaluation (and he is talking about educational program evaluation) would be responsive if it “orients more directly to program activities than to program intents; responds to audience requirements for information and if the different value perspectives present are referred to in reporting the success and failure of the program” (as cited in Popham, 1993, p. 42). He indicates that human instruments (observers and judges) will be the data-gathering approaches. Stake views responsive evaluation as “informal, flexible, subjective, and based on evolving audience concerns” (Popham, 1993, p. 43). He indicates that this approach is based on anthropology as opposed to psychology.
More on Fetterman’s ethnography model later.
Fetterman, D. M. (2010). Ethnography step-by-step. Applied Social Research Methods Series, 17. Los Angeles, CA: Sage Publications.
Popham, W. J. (1993). Educational Evaluation (3rd ed.). Boston, MA: Allyn and Bacon.
Stake, R. E. (1975). Evaluating the arts in education: a responsive approach. Columbus, OH: Charles E. Merrill.
Stake, R. E. (2004). Standards-based & responsive evaluation. Thousand Oaks, CA: Sage Publications.
Warning: This post may contain information that is controversial.
Schools (local public schools) were closed (still are).
The University (which never closes) was closed for four days (now open).
The snow kept falling and falling and falling. (Thank you Sandra Thiesen for the photo.)
Eighteen inches. Then freezing rain. It is a mess (although as I write this, the sun is shining, and it is 39F and supposed to get to 45F by this afternoon).
This is a complex messy system (thank you Dave Bella). It isn’t getting better. This is the second snow Corvallis has experienced in as many months, each one heavier than the last.
It rains in the valley in Oregon; IT DOES NOT SNOW.
Another example of a complex messy system is what is happening in the UK.
These are examples of extreme events; examples of climate chaos.
Evaluating complex messy systems is not easy. There are many parts. If you hold one part constant, what happens to the others? If you don’t hold one part constant, what happens to the rest of the system? Systems thinking and systems evaluation have come of age with the 21st century, though there were always people who viewed the world as a system: one part linked to another, indivisible. Systems theory dates back at least to Ludwig von Bertalanffy, who developed general systems theory and published the book by the same name in 1968 (ISBN 0-8076-0453-4).
Evaluating systems is complicated and complex.
Bob Williams, along with Iraj Imam, edited the volume Systems Concepts in Evaluation (2007) and, along with Richard Hummelbrunner, wrote the volume Systems Concepts in Action: A Practitioner’s Toolkit (2010). He is a leader in systems and evaluation.
These two books relate to my political statement at the beginning and complex messy systems. According to Amazon, the second book “explores the application of systems ideas to investigate, evaluate, and intervene in complex and messy situations”.
If you think your program works in isolation, think again. If you think your program doesn’t influence other programs, individuals, stakeholders, think again. You work in a complex messy system. Because you work in a complex messy system, you might want to simplify the situation (I know I do); only you can’t. You have to work within the system.
It might be worthwhile to get von Bertalanffy’s book; worthwhile to get Williams’s books; worthwhile to get a copy of Gunderson and Holling’s book Panarchy: Understanding Transformations in Human and Natural Systems.
After all, nature is a complex messy system.