These are comments about evaluation that I found to be particularly insightful. All comments are posted to this web site by permission of the person who made the comment.

These were posted on Evaltalk*

This was posted on Evaltalk, 11/25/06, by Kate McKegg.

One issue that has struck me is the way in which evaluation use seems to have been conceptualised, in the main, as an event, something that happens with the results of an evaluation.  I am increasingly of the view that if we concern ourselves with use and influence once the evaluation has been produced, the impact of our work will be limited. At this point, we are really engaged in the dissemination of ideas to audiences that may or may not be receptive.

The greatest use or influence I have had as an evaluator is during an evaluation process, in the application and shared use (with clients) of evaluative thinking, tools and practice. In the process of thinking about the evaluand from an evaluative perspective (which is often different to the perspective taken by many policy or operational staff), new understandings and insights can be and are in fact co-created or co-produced as part of the evaluation process. And this new knowledge can influence (and has influenced) policy and practice change, which has also led to improved outcomes for citizens.



This was posted on Evaltalk, 5/9/06, by Dr. Michael Scriven.

The usual mistake... of thinking that the job of evaluation is to see if the donor's goals are achieved. ... the evaluator is to assume that the goals are unquestionable.

The goal of evaluation is to find out how much good and bad the program did and to whom.

That means: finding out whether the goals were really, and not just rhetorically, matched to the needs of the impactees, not the preferences of the donors; and it means looking for side effects as avidly as for intended effects, etc.

Michael Scriven
Western Michigan University


Logic models

This was posted on Evaltalk on 8/22/06 by Sharon Stout.

The logic model serves four key purposes:

1. Exploring what is valued (values clarification) – e.g., building consensus while developing a logic model, clarifying how elements interact in theory, and considering how this program compares;

2. Providing a conceptual tool to aid in designing an evaluation, research project, or experiment to use in supporting – to the extent possible – or falsifying a hypothesized causal chain;

3. Describing what is, making gaps between what was supposed to happen and what actually happened more obvious, and more likely to be observed, measured, or investigated in future research or programming; and

4. Developing a logic model may make evaluation unnecessary, as sometimes the logic model shows that the program is so ill-conceived that more work needs to be done before the program can be implemented – or, if implemented, before the program is evaluated.

She mentioned that this was her "synopsis of Jonathan Morell's synopsis (plus later additions by Patricia Rogers) with additional text taken from a post of Doug Fraser's thrown in with a short bit credited earlier to Joseph Wholey."


This was posted on Evaltalk on 8/23/06 by Dr. Patricia Rogers.

In theory, it is not essential to articulate an explicit logic model (or program theory) to do an evaluation.

I have also found, in practice, that it is usually useful to do so for program evaluation, that it does not have to take a huge amount of time, and that on the occasions when I haven't done it up front, I have found it important to do it at some stage during the evaluation and have wished I had done it earlier.

This generalisation comes from my own experience where:

I am not a content expert in the areas where I do evaluations, so developing an initial (and subsequently revised) logic model helps me check that I have understood what it is I am supposed to be evaluating, including negotiating the boundaries of this entity and identifying areas of disagreement;

Most of my evaluations are in situations where people absolutely want to get some information about how things work, not just whether they do;

Recognising the diminishing returns on finessing logic models, I often use very quick and simple versions as a way to check understanding and facilitate thinking about evaluative criteria, indicators of progress towards long-term goals, etc.



* Evaltalk is the email list for the American Evaluation Association, and is listed here
http://www.eval.org/Resources/Listservs.asp
