The evaluation of change:
by Gregory V. G. O'Dowd (Hamamatsu University School of Medicine)
". . . evaluating the impact of change stemming from curriculum renewal and innovations has rarely been given the consideration it deserves."
Models of curriculum innovation

Evaluation is concerned with gathering data on the dynamics, effectiveness, acceptability, and efficiency of a program to facilitate decision making (Popham, 1975; Jarvis and Adams, 1979, cited in Richards, 1990).
Evaluation is the determination of the worth of a thing. It includes obtaining information for use in judging the worth of a program, product, procedure or objective, or the potential utility of alternative approaches designed to attain specified objectives (Worthen & Sanders, 1973).
In addition, evaluation may be concerned with how a program works: that is, with how teachers, learners and materials interact in classrooms, and how teachers and learners perceive the program's goals, materials and learning experiences. As was found with earlier ballistic models, it is too late to determine at the end of the process that the process was flawed and had failed; something needs to be done earlier to ascertain whether or not the program is on course to meet its objectives. Without continual evaluation of the process at each stage, when adjustments could be implemented to keep the process on track, the chances of real success are minimal at best.

Brown defines evaluation as:

the systematic collection and analysis of all relevant information necessary to promote the improvement of the curriculum, and assess its effectiveness and efficiency, as well as the participants' attitudes within the context of the particular institutions involved (1989, p. 223).

This goes beyond goals-evaluation, which is in Brown's opinion "limiting" because it doesn't allow for "upgrading and modifying."
The utility standards relate to the duty of an evaluator to find out who are the stakeholding audiences and to provide them with the relevant information on time.
The feasibility standards require evaluators to ensure that the evaluation design be workable in real world settings.
The propriety standards demand that the evaluator behave ethically and recognize the rights of individuals who might be affected by the evaluation.
Finally, the accuracy standards are concerned with the soundness of an evaluation, requiring that information be technically adequate and that conclusions be linked logically to the data. (p. 18)
intended to improve the curriculum [and] gather information from different people over a period of time... As opposed to merely passing an evaluative judgment on the end product of a teaching program (summative evaluation), formative evaluation is designed to provide information that may be used as the basis for future planning and action.
goal-free evaluation, which attempts to observe and define what is actually happening in the classroom. In this type of evaluation, goals will be apparent in class if they are relevant... the value of a program resides in the extent to which a program's effects are congruent with the perceived needs of the students. (cited in Beretta, 1992, p. 17)
Ideally, the evaluator should be independent of the curriculum development team, so that no vested interests are involved, or perceived to be involved. However, after providing a list of who, why, what, and how, he concludes by saying that evaluators "should play some part in decision making, as they do possess expert information. After all, the potential for misunderstanding of research findings by administrators is considerable."
The threat is that the outsider cannot possibly gain an adequate understanding of the background to the project, the nature of its development over time, the reason for important decisions and the likely effect of alternative decisions, the status quo ante, and the organic perceptions of all associated with the project (pp. 25-6).
Thus, they posit, this "will involve the sharing of decisional, planning roles as well as the donkey-work amongst all involved" (p. 38).

Experience shows, however, that unpalatable findings are most likely to be accepted if they have been 'discovered' by those working on a project, and if those people understand. . . the reasons for such a state of affairs... unpalatable outsider evaluations are simply less likely to be accepted (p. 55).
Designing the evaluation instrument
. . . to improve the quality and reputation of the school. Since the student body will be declining, this might also attract students.
. . . there is always fear of change. . . and the CRP is going to be a huge change at this school, and a lot of people who aren't involved don't feel it's the right direction. I think there is a lot of worrying out there . . . between that (the direction the project is going in) and the divide that has developed between those that are involved and those that are not. . . it's a shame that it's divided the faculty.
It depends on how much anxiety you read into it . . . Japan is becoming a more anxiety-ridden place to be working if you're an English teacher, and so a lot of people are jumping on the bandwagon to secure themselves a position here . . . there is a danger that there is a join-the-club mentality and an us-and-them mentality and a your-job's-on-the-line kind of mentality which tend to divide and . . . create antagonisms.

And:
There hasn't been a lot of concrete evidence that change is good. . . there have been a lot of reports, a lot of work put out, a lot of paper used, but nothing concrete.
They (the management of the CRP) haven't sold the idea (of the CRP) to teachers well. They should have been open from the very beginning and they haven't been.
one (presentation) out of three or four (presentations) is together (pitched for the audience level and focused). . . it depends on how coherent the presentation is. . . some have been well done, but others. . .

and:
Sometimes they present something new and that's fine. . . but I wish they'd just give us the information and not waste so much time.
Dry reading, long, heavy and difficult to get through. . . a huge stack of papers. . . physically and mentally difficult to read, and I was forced to read one.
more structures, maps and time-lines of the CRP. . . more of their responsibility in the process . . . something about dealing with our cultural realities here and addressing why our students come here . . . their lectures have been too frilly and don't hit the heart of matters. . . all theory but not important to me right now.
a major problem for the ESL program evaluator is to address the questions that those who commission and hope to use evaluation findings want answered, without relinquishing his claims to academic research standards. (Beretta, 1990, p. 1)

Equally, as Lynch has noted, some conclusions will need to be omitted:
This is not so much out of fear that "the truth" will not sit well with individuals or groups, but out of concern that the conclusions and their supporting evidence be understood and interpreted as intended by the evaluator. (Lynch, 1990, p. 39)

As one of the external consultants reminded participants in one of the early meetings, whenever evaluation is concerned, there is a strong potential for biased interpretation of the findings, whatever they may be. Therefore, evaluators must constantly strive to maintain their independence to ensure that their findings are not selectively construed.
References