OSU Extension Blogs
The effort, under a $644,000 grant from the Oregon Department of Education, will be based at OSU’s Hatfield Marine Science Center under the guidance of Sea Grant’s marine education team. The goal is to better equip teachers to provide STEM (science, technology, engineering and math) education to K-12 students.
The grant is to the Lincoln County School District, which is partnering with Sea Grant, the Tillamook School District and the Oregon Coast Aquarium. The new STEM Hub is one of six across Oregon intended to foster 21st-century career skills, particularly for historically under-served student populations. The new Oregon Coast Regional STEM Hub will help provide coastal schools and educators with the tools and support necessary to deliver world-class STEM instruction to rural students.
Learn more:
- Read about the grant in HMSC Currents
- Learn more about Sea Grant’s marine education program at the HMSC
The American public may be divided over whether climate is changing, but coastal managers and elected officials in nine states say they see the change happening—and believe their communities will need to adapt.
That’s one finding from a NOAA Sea Grant research project, led by Oregon Sea Grant and involving multiple other Sea Grant programs, which surveyed coastal leaders in selected parts of the nation’s Atlantic, Pacific, Gulf and Great Lakes coasts, as well as Hawaii.
Three-quarters of the coastal professionals surveyed – and 70% of all participants – said they believe that the climate in their area is changing—a marked contrast to the results of some national surveys of the broader American public, which have found diverse and even polarized views about climate change and global warming.
The Sea Grant survey was developed to understand what coastal and resource professionals and elected officials think about climate change, where their communities stand in planning for climate adaptation, and what kinds of information they need, said project leader Joe Cone, assistant director of Oregon Sea Grant. Sea Grant programs in Connecticut, Hawaii, Illinois-Indiana, Louisiana, Maryland, Minnesota, Oregon, and Washington—states that represent most of NOAA’s coastal regions—took part, administering the survey at various times between January 2012 and November 2013.
Learn more:
I’ve been reading about models lately; models that have been developed, models that are being used today, models that may be used tomorrow.
Webster’s Seventh New Collegiate Dictionary devotes almost two inches to models–I think my favorite definition is the fifth one: an example for imitation or emulation. It seems to be the most relevant to evaluation. What do evaluators do if not imitate or emulate others?
To that end, I went looking for evaluation models. Jim Popham’s book has a chapter on models (Chapter 2, “Alternative approaches to educational evaluation”). Fitzpatrick, Sanders, and Worthen have numerous chapters on “approaches” (what Popham calls models). (I wonder if this is just semantics?)
Models have appeared in other blogs, too (not called models, though). In Life in Perpetual Beta, Harold Jarche provides this view of how organizations have evolved and calls them forms. (The image below is credited to David Ronfeldt.)
(Looks like a model to me. I wonder what evaluators could make of this.)
The reading is interesting because the thinking is flexible. It approaches the “if it works, use it” paradigm, the one I use regularly.
I’ll just list the models Popham uses and discuss them over the next several weeks. (FYI: both Popham and Fitzpatrick et al. talk about the overlap of models.) Why is a discussion of models important, you may ask? I’ll quote Stufflebeam: “The study of alternative evaluation approaches is important for professionalizing program evaluation and for its scientific advancement and operation” (2001, p. 9).
Popham lists the following models:
- Goal-Attainment models
- Judgmental models emphasizing inputs
- Judgmental models emphasizing outputs
- Decision-Facilitation models
- Naturalistic models
Popham does say that the model classification could have been done a different way. You will see that in the Fitzpatrick, Sanders, and Worthen volume where they talk about the following approaches:
- Expertise-oriented approaches
- Consumer-oriented approaches
- Program-oriented approaches
- Decision-oriented approaches
- Participant-oriented approaches
They have a nice table that does a comparative analysis of alternative approaches (Table 10.1, pp. 249-251).
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.
Popham, W. J. (1993). Educational Evaluation (3rd ed.). Boston, MA: Allyn and Bacon.
Stufflebeam, D. L. (2001). Evaluation models. New Directions for Evaluation, 89. San Francisco, CA: Jossey-Bass.
People often say one thing and do another.
This came home clearly to me with a nutrition project conducted with fifth- and sixth-grade students over the course of two consecutive semesters. We taught them assorted nutrition and fitness concepts (nutrient density, empty calories, food groups, energy requirements, etc.).

We asked them at the beginning to identify which snack they would choose if they were with their friends (apple, carrots, peanut butter crackers, chocolate chip cookie, potato chips), and we asked them the same question at the end of the project. They said they would choose an apple both pre and post. On the pretest, in descending order, the students would choose carrots, potato chips, chocolate chip cookies, and peanut butter crackers. On the posttest, in descending order, the students would choose chocolate chip cookies, carrots, potato chips, and peanut butter crackers. (Although the sample sizes were reasonable [i.e., greater than 30], I’m not sure that the difference between 13.0% [potato chips] and 12.7% [peanut butter crackers] was significant; I do not have those data. A quick way one could check, given the counts, is sketched below.)

Then we also asked them to choose one real snack. What they said and what they did were not the same, even at the end of the project. Cookies won, hands down, in both the treatment and control groups. Discouraging to say the least; disappointing to be sure. What they said they would do and what they actually did were different.
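As a minimal sketch, assuming hypothetical counts (the post reports only percentages and samples greater than 30), here is how one could test whether a gap like 13.0% vs. 12.7% is significant once the real counts are in hand:

```python
import numpy as np
from scipy import stats

# Hypothetical group size -- the post gives only percentages and n > 30.
n = 150
chips = round(0.130 * n)     # ~20 students chose potato chips
crackers = round(0.127 * n)  # ~19 chose peanut butter crackers

# Chi-square test that the two snacks are equally popular, restricted
# to the students who picked one of these two options.
observed = np.array([chips, crackers])
expected = np.full(2, observed.sum() / 2)
chi2, p = stats.chisquare(observed, expected)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```

With counts of roughly 20 versus 19, p comes out around 0.87, so a difference that small would be nowhere near significant at this sample size; the author’s caution seems well placed.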
Although this program ran from September through April, far longer than the typical half-day (or even full-day) professional development conference, what the students said was still different from what the students did. We attempted to measure knowledge, attitude, and behavior. We did not measure intention to change.
That experience reminded me of a finding of Paul Mazmanian. (I know I’ve talked about him and his work before; his work bears repeating.) He conducted a randomized controlled trial involving continuing medical education and commitment to change. After all, any program worth its salt will result in behavior change, right? So Paul Mazmanian set up this experiment involving doctors, the world’s worst folks with whom to try to change behavior.
He found that “…physicians in both the study and the control groups were significantly more likely to change (47% vs 7%, p<0.001) IF they indicated an INTENT (emphasis added in both cases) to change immediately following the lecture” (i.e., the continuing education program). In a further study, he found that a signature stating that they would change did not increase the likelihood that they would change.
Bottom line: measure intention to change when evaluating your programs.
Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998). Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Academic Medicine, 73(8), 882-886.
Mazmanian, P. E., Johnson, R. E., Zhang, A., Boothby, J., & Yeatts, E. J. (2001). Effects of a signature on rates of change: A randomized controlled trial involving continuing education and the commitment-to-change model. Academic Medicine, 76(6), 642-646.
Oregon Sea Grant’s invasive species specialist, Sam Chan, visited Vancouver, B.C., recently and took some time to walk the beaches with his Canadian counterparts and talk about the potential for unwanted plant and animal visitors, washed to sea in the Japanese tsunami of 2011, to make it to North American shores – and the consequences if they do.
I may have mentioned naturalistic models before; if not, I should have labeled them as such.
Today, I’ll talk some more about those models.
These models are often described as qualitative. Egon Guba (who died in 2008) and Yvonna Lincoln (distinguished professor of higher education at Texas A&M University) discuss qualitative inquiry in their 1981 book, Effective Evaluation (it has a long subtitle). They indicate that there are two factors on which constraints can be imposed: 1) antecedent variables and 2) possible outcomes, with the first impinging on the evaluation at its outset and the second referring to the possible consequences of the program. They propose a 2×2 figure, sketched below, that contrasts naturalistic inquiry with scientific inquiry depending on which constraints are imposed.
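A rough reconstruction of that 2×2 from the description above (the two corner labels follow Guba and Lincoln; the names for the mixed cells are my inference, so treat them as placeholders):

| | Possible outcomes unconstrained | Possible outcomes constrained |
| --- | --- | --- |
| Antecedent variables unconstrained | Naturalistic inquiry | (intermediate form) |
| Antecedent variables constrained | (intermediate form) | Scientific (experimental) inquiry |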
Besides Eisner’s model, Robert Stake and David Fetterman have developed models that fit this category. Stake’s model is called responsive evaluation, and Fetterman talks about ethnographic evaluation. Stake’s work is described in Standards-Based & Responsive Evaluation (2004). Fetterman has a volume called Ethnography: Step-by-Step (2010).
Stake contended that evaluators needed to be more responsive to the issues associated with the program, and that in being responsive, measurement precision would be decreased. He argued that an evaluation (and he is talking about educational program evaluation) would be responsive if it “orients more directly to program activities than to program intents; responds to audience requirements for information and if the different value perspectives present are referred to in reporting the success and failure of the program” (as cited in Popham, 1993, p. 42). He indicates that human instruments (observers and judges) will be the data-gathering approaches. Stake views responsive evaluation as “informal, flexible, subjective, and based on evolving audience concerns” (Popham, 1993, p. 43). He indicates that this approach is based on anthropology as opposed to psychology.
More on Fetterman’s ethnography model later.
Fetterman, D. M. (2010). Ethnography: Step-by-step. Applied Social Research Methods Series, 17. Los Angeles, CA: Sage Publications.
Popham, W. J. (1993). Educational Evaluation (3rd ed.). Boston, MA: Allyn and Bacon.
Stake, R. E. (1975). Evaluating the arts in education: a responsive approach. Columbus, OH: Charles E. Merrill.
Stake, R. E. (2004). Standards-based & responsive evaluation. Thousand Oaks, CA: Sage Publications.
We participate in the Oregon State University Food Science Camp for middle school students, part of the STEM [science, technology, engineering, math] Academies@OSU camps.
We teach about bread fermentations: yeast converting sugars to CO2 and ethanol, lactobacilli converting sugars to lactic and acetic acids, and how the gluten in wheat can form films that trap the gas and allow the dough to rise. Along the way we teach about flour composition; bread ingredients and their chemical functionalities; hydration; the relationships between enzymes and substrates [amylases acting on starch to produce maltose for the fermentation organisms]; gluten development; the gas laws and CO2’s declining solubility in the aqueous phase during baking, which expands the gas bubbles and leads to the oven spring at the beginning of baking; and the effect of pH on Maillard browning, using soft pretzels that the campers get to shape themselves. The core reactions are sketched below.
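For reference, the core chemistry behind those lessons, written out as the standard textbook equations (these are general equations, not taken from the camp materials):

\[ \text{C}_6\text{H}_{12}\text{O}_6 \xrightarrow{\text{yeast}} 2\,\text{C}_2\text{H}_5\text{OH} + 2\,\text{CO}_2 \]

\[ \text{C}_6\text{H}_{12}\text{O}_6 \xrightarrow{\textit{Lactobacillus}} 2\,\text{CH}_3\text{CH(OH)COOH} \quad \text{(homofermentative route; heterofermenters also yield acetic acid and CO}_2\text{)} \]

\[ (\text{C}_6\text{H}_{10}\text{O}_5)_n + \tfrac{n}{2}\,\text{H}_2\text{O} \xrightarrow{\text{amylase}} \tfrac{n}{2}\,\text{C}_{12}\text{H}_{22}\text{O}_{11} \ (\text{maltose}) \]

The oven spring follows from the ideal gas law, \(PV = nRT\): as the loaf heats, T rises and dissolved CO2 leaves the aqueous phase, raising n in the bubbles, so the bubble volume V expands at the start of the bake.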
All this is illustrated by hands-on [and hands-in] activities: they experience the hydration and the increasing cohesiveness of the dough as they mix it with their own hands, and they see their own hand-mixed dough taken through to well-risen bread. They also get to experience dough/gluten development in a different context with the pasta extruder, and more.
A great way to introduce kids to the relevance of science to their day-to-day lives: in our case, chemistry, physics, biochemistry, and biology in cereal food processing.
We were also fortunate to have Erik Fooladi from Volda University College in Norway observe the fun: http://www.fooducation.org/
If you have not read his blog and you like what we do here, you should!
pH, colloidal calcium phosphate, aging, proteolysis, emulsification (or its loss), and their interactions lead to optimum melting qualities for cheeses. That was a module in this year’s food systems chemistry class.
This module was informed by the beautiful article “The beauty of milk at high magnification” by Miloslav Kalab, which is available on the Royal Microscopical Society website.
Of course, it was accompanied by real sourdough whole-grain bread baked in our own research bakery.
“The Science of a Grilled Cheese Sandwich,” by Jennifer Kimmel, in The Kitchen as Laboratory: Reflections on the Science of Food and Cooking, edited by Cesar Vega, Job Ubbink, and Erik van der Linden.
I’m back from maternity leave and getting resettled into some new responsibilities. We had a staff member leave us, so Glenda and I are having to pick up the workload until we find someone new or our responsibilities change. Being a new mom is lots of work too, so I’ve gone part time (24 hours a week) but am still trying to get everything done… that being said, we’ve decided to put our nutrition education volunteering on hold until I have a manageable workload.
We look forward to being able to start things back up in the summer or fall of 2011. Thanks so much, and since a few of you have been asking, here’s a photo of our boy. He is 5 months old today!