Curriculum Innovation, Testing and Evaluation: Proceedings of the 1st Annual JALT Pan-SIG Conference.
May 11-12, 2002. Kyoto, Japan: Kyoto Institute of Technology.

The evaluation of change:
The role of evaluation in the curriculum renewal process

by Gregory V. G. O'Dowd (Hamamatsu University School of Medicine)


Introduction

Over the past decade, Japanese educational institutions have directed ever-increasing attention to curriculum change and innovation, driven largely by demographic factors. Typically, the prime focus of curriculum renewal has been solely on what to change rather than on the impact that changes would have on students, teachers and the institution as a whole (Nunan, 1988, p. 2; Richards, 1990, p. 1). Needless to say, evaluating the impact of change stemming from curriculum renewal and innovations has rarely been given the consideration it deserves. Yet curriculum renewal usually means scrapping the old, comfortable curriculum structure in favor of something new, unfamiliar and untried; the proposed changes are clearly meant to have an impact on the particular institution, its teachers and the quality of the education it endeavors to provide.
". . . evaluating the impact of change stemming from curriculum renewal and innovations has rarely been given the consideration it deserves."

Unfortunately, instituting changes without determining the effect those changes will have on students, teaching staff, management, future curriculum changes, and the educational entity itself may doom the curriculum innovation to failure even before it is put into practice. Indeed, it does not make sense to ignore the many contentious issues raised by implementing sweeping changes; in particular, three central questions should be addressed at the outset:
  1. Why were changes deemed necessary in the first place, and in what sense has the current curriculum "failed"?
  2. Are the goals of the new implementation realizable, who set those goals, and how were they determined?
  3. What is the best way to determine whether the set goals have been reached, ensuring that the new curriculum is not as flawed as the old one?

In usual practice, the first questions raised are what new textbooks are available and what similar educational institutions (i.e., competitors for students) are doing. However, the recent literature on curriculum innovation suggests that evaluation of change must be an indispensable element of the curriculum renewal process (Nunan, 1988, p. 7; Richards, 1990, p. 17).
Based on observations of and experience with various approaches and methodologies which have been implemented and trialled in schools and colleges in Japan, this paper will examine the role of evaluation in the curriculum renewal process from both theoretical and practical perspectives. The aim is to show that this type of evaluation is not a one-shot effort of collecting information to determine the success or failure of the changes that have been implemented; it is an indispensable part of the continual decision-making process that impacts upon the entire education system. Once the new curriculum is in operation, additional evaluation procedures are needed so that the program can be monitored and its effects on learners and learning outcomes evaluated, ensuring it stays on track to achieving its stated goals and objectives. The ultimate goal of this paper is to raise teacher awareness of curriculum innovation problems that have already been experienced, for the benefit of future curriculum renewal efforts.

[ p. 23 ]

Models of curriculum innovation

Curriculum development in English language education at the tertiary level, in Japan in particular, has not generally been viewed as an integrated and interdependent set of processes involving careful data gathering, planning, experimenting, monitoring, consultation, and evaluation. Rather, as most colleagues confirm, it has been top-down in nature and not open to question by the teachers whom the innovation affects directly. In addition, simplistic solutions to complex problems are often advocated that address only one dimension of the educational process, for example, changes in course content, teaching techniques, methods, learning styles, technologies, materials, class contact hours, or assessment strategies. The result has been a staccato revamping of curricula approximately every three years.
To illustrate the most common ways curriculum innovation decisions have been made in educational institutions in Japan, I have developed several flowchart models. Appendix 1 shows four models, labeled Follow the Leader, Window Dressing, Hotchpotch, and Bureaucratic Top-Down. These are models I have experienced and noted as an observer of the process; exchanges with other teaching professionals indicated that these were indeed common experiences. A common feature of these models is the ballistic nature of the process (that is, one-way with no turning back), without any evaluation element to verify the achievement or otherwise of the stated curriculum goals, if indeed any such goals were specified.
In contrast, Appendix 2 offers four theoretical models describing how the curriculum innovation process in general (i.e., in other countries) has evolved in the post-World War II era. The salient feature of this evolution is the move from a ballistic process to an approach in which the evaluation of innovation becomes critical to a continual feedback loop through which modifications can be made to keep the process on target.

What does evaluation mean in curriculum renewal?

Let's start with some definitions to put us in the ballpark:
Evaluation is concerned with gathering data on the dynamics, effectiveness, acceptability, and efficiency of a program to facilitate decision making (Popham, 1975; Jarvis and Adams, 1979, cited in Richards 1990).

Evaluation is the determination of the worth of a thing. It includes obtaining information for use in judging the worth of a program, product, procedure or objective, or the potential utility of alternative approaches designed to attain specified objectives (Worthen & Sanders, 1973).

As a basic premise, it can therefore be taken that the primary focus of evaluation is to determine whether the goals and objectives of a curriculum's language program are being attained - that is, whether the program is effective. Curriculum goals are general statements of the intended outcomes of a language program, representing what the curriculum planners believe to be desirable and attainable program aims based on what was revealed in the needs analysis carried out in the initial stages. These goals can be used as a basis for developing more specific descriptions of the intended outcomes of the program; a good example of this is the set of intended learner outcomes produced by the Australian Adult Migrant Education Program (Colman, 1988). After these steps have been taken, mechanisms need to be put in place to determine when and how these goals and objectives have been achieved; that is, how to evaluate the degree of success of each stage of implementation.

[ p. 24 ]

In addition, evaluation may be concerned with how a program works: how teachers, learners and materials interact in classrooms, and how teachers and learners perceive the program's goals, materials and learning experiences. As the earlier ballistic models showed, it is too late to discover only at the end of the process that it was flawed and had failed; something needs to be done earlier to ascertain whether or not the program is on course to meet its objectives. Without continual evaluation at each stage, when adjustments could be implemented to keep the process on track, the chances of real success are minimal at best.
J. D. Brown defines evaluation as
the systematic collection and analysis of all relevant information necessary to promote the improvement of the curriculum, and assess its effectiveness and efficiency, as well as the participants' attitudes within the context of the particular institutions involved (1989, p. 223).
This goes beyond goals-evaluation, which is in Brown's opinion "limiting" because it doesn't allow for "upgrading and modifying."
If the latest offerings on innovation evaluation can be said to have one thing in common, it is the consensus that there is no single agreed-upon way to do evaluation. However, standards have been articulated by the American Evaluation Association (Standards for evaluations of educational programs, projects, and materials), cited in Beretta (1992). These are summarized in four components: utility, feasibility, propriety, and accuracy.
The utility standards relate to the duty of an evaluator to find out who are the stakeholding audiences and to provide them with the relevant information on time.

The feasibility standards require evaluators to ensure that the evaluation design be workable in real world settings.

The propriety standards demand that the evaluator behave ethically and recognize the rights of individuals who might be affected by the evaluation.

Finally, the accuracy standards are concerned with the soundness of an evaluation, requiring that information be technically adequate and that conclusions be linked logically to the data. (p. 18)

As always, the difficulty lies in ensuring that all four of these components are recognized and included in the curriculum renewal process.

Formative and summative evaluation

Another important issue concerns the timing of evaluation, that is, when evaluation should be carried out. Traditionally, evaluations are carried out at the completion of a process to determine success or failure; this came to be referred to as summative evaluation. However, if the process evaluated was found to have failed, the question raised was whether it could have been saved by earlier monitoring and intervention. This gave rise to the idea of carrying out formative evaluations, that is, evaluations conducted during the course of a program to ensure it stays on track.

[ p. 25 ]


Scriven (1967, cited in Beretta, 1992) originally drew the distinction between "formative" and "summative" evaluation in order to clarify how the processes could impact upon outcomes.
The advantages of formative evaluation, according to Rea-Dickins and Germaine (1992, pp. 25-26), are clear in that such a process is
intended to improve the curriculum [and] gather information from different people over a period of time. . . As opposed to merely passing an evaluative judgment on the end product of a teaching program (summative evaluation), formative evaluation is designed to provide information that may be used as the basis for future planning and action.

Scriven subsequently took the idea a step further and proposed the idea of:
goal-free evaluation, which attempts to observe and define what is actually happening in the classroom. In this type of evaluation, goals will be apparent in class if they are relevant. . . the value of a program resides in the extent to which a program's effects are congruent with the perceived needs of the students. (cited in Beretta, 1992, p. 17)

Of course, formative and summative evaluations are the most commonly used, and there is a place for both these types of evaluation in curriculum renewal processes. The real question concerning their value is how they are implemented and for what purpose. Case study examinations of how these methods have been used shed the most light on their utility and pitfalls.

The role of the evaluator

What is probably clear from even this brief discussion of the field of curriculum evaluation is that, potentially, evaluation can mean many different things. What is essential from the very outset, most professionals agree, is to 'tailor' the evaluative process to the context: the educational institution, its environment, the needs of the institution, the goals it has set for itself, and the approach it is taking to curriculum development. Furthermore, given that many different stakeholders have an interest in the outcome of the curriculum renewal process, the conduct of any evaluation needs particular scrutiny to reduce bias or skewing.
Elley (1989, cited in Johnson, 1989, p. 270) makes this explicit by stating, "Ideally, the evaluator should be independent of the curriculum development team, so that no vested interests are involved, or perceived to be involved." However, after providing a list of who, why, what, and how, he concludes by saying that evaluators "should play some part in decision making, as they do possess expert information. After all the potential for misunderstanding of research findings by administrators is considerable."

In addition, decisions are not ultimately the responsibility of the evaluator because "other considerations such as cost of materials, disruption of schools, retraining of teachers and revision of curricula, not to mention political considerations, will usually help sway the decision process" (Elley, cited in Johnson, 1989, p. 285). Somewhat in contrast is the position taken by Alderson and Scott (1992). In their description of "participatory evaluation," they specifically warn against the JIJOE, or Jet-In Jet-Out Expert. In arguing against relying on such help, they state that:
The threat is that the outsider cannot possibly gain an adequate understanding of the background to the project, the nature of its development over time, the reason for important decisions and the likely effect of alternative decisions, the status ante, and the organic perceptions of all associated with the project (pp. 25-6).

[ p. 26 ]

Thus, they posit, this "will involve the sharing of decisional, planning roles as well as the donkey-work amongst all involved" (p. 38).
In the case study they describe, the external consultant's lack of worth was notable because he failed to present any new approaches, insights or interpretations that differed from those already reached by the project participants themselves. This example is useful, in their opinion, because:
Experience shows, however, that unpalatable findings are most likely to be accepted if they have been 'discovered' by those working on a project, and if those people understand. . . the reasons for such a state of affairs...unpalatable outsider evaluations are simply less likely to be accepted (p. 55).

This opinion is well supported by my own evaluation experiences highlighted later in this paper.

Evaluation in practice: A case study

The following case study provides a clear example of how the evaluation of change during the change process was important to the ongoing progress of curriculum renewal. In this case study, which took place at a private college in Tokyo, I was an active participant in the evaluative processes undertaken. The purpose of the innovation, hereafter called the Curriculum Renewal Project (CRP), was to totally revamp all the language courses offered at the college and put it at the cutting edge of foreign language education in Japan. A particular feature of this curriculum innovation was the formation of various focus groups, usually comprised of four staff members, who would work on assigned aspects of the process. The Evaluation Focus Group, of which I was a member, initially comprised four staff members and met regularly with the Project Manager and the external consultants. As well as developing evaluations for various aspects of materials development and the trialling of new courses, I was assigned the task of studying how the various changes taking place at the college were affecting the faculty in particular.

Impact-of-Change study

The Impact-of-Change study came about as a result of the Evaluation Focus Group's discussions with the external consultants concerning how the implementation of the new curriculum was affecting the faculty of the college. The study was conceived to gather information about the project from teachers at several stages of its implementation. A great deal of attention had been given to student needs by the Needs Analysis Focus Group, and considerable feedback had been elicited from teachers during curriculum renewal workshops, but little had been done to address the impact which the new curriculum, and its implementation, would have on the faculty. It was hoped that polling faculty members at different stages would yield a better picture of how changes from the curriculum renewal project were perceived by the faculty, which aspects of the project the faculty were most aware of, and which aspects needed greater attention from management. After some initial discussions, it was decided that data would be gathered through a series of questionnaires as well as some taped interviews with teachers.

[ p. 27 ]

Designing the evaluation instrument

The first step was to compile a list of possible questions, for inclusion in an impact-of-change questionnaire, specifically suited to the college environment. Several brainstorming sessions produced a list of forty-two possible questions under the following headings: "The Current Curriculum," "The Curriculum Renewal Process," "Classroom Practices," and "Professional Development." Readings from Evaluating Second Language Education, edited by J. Charles Alderson and Alan Beretta, were useful in designing the first questionnaire. The book includes a teachers' questionnaire from the Brazilian Universities' ESP Project, and many of the issues addressed in that questionnaire were similar to issues the evaluation group wanted to address. Several questions focused on teachers' impressions of the reasons for implementation and the goals of the new curriculum; questions on teachers' perceptions of the effectiveness of the newly implemented curriculum were also considered useful to include.
The next step was to distribute this list to nine teachers in order to determine the suitability of both the headings and the questions included. The teachers' feedback was instrumental in eliminating unsuitable and inappropriate items.
Both the project manager, an outsider brought into the college to run the project, and two external consultants expressed reservations about the length of the proposed draft questionnaire and the scope of its questions. It was generally agreed that there were too many questions and too many areas addressed for one instrument. It was subsequently agreed that two questionnaires – one focusing on the current curriculum and the other focusing on the implementation of the renewal project and its effect on classroom practices – would yield more accurate data. The original document was thus divided and revised into two questionnaires of ten questions each, which were then given to nine teachers for pre-trialling (seven were returned). This pre-trialled instrument was faxed to the project manager for comments. Aside from some comments on format, the project manager's main reservation concerned the question, "In what ways, if any, do you feel the implementation of the CRP may affect your job security?" It was felt this question was a union issue rather than a project issue, but after some deliberation it was decided within the evaluation group that the question should remain, as it seemed pertinent to faculty perceptions of the implementation of the new curriculum.
Regrettably, after all the effort that had gone into developing this evaluation instrument, it was used only once. The original plan to collect further data as the implementation of the CRP progressed was derailed by the project manager as he fought to impose his control over the independent-minded Evaluation Focus Group and the results it reported. The following results and analysis were obtained from the questionnaire's first distribution early in the process. Even so, the results obtained clearly indicated how useful it could be to monitor the impact of change during implementation.

Results

The Impact-of-Change questionnaire was distributed to the mailboxes of 45 randomly selected staff members (out of a total pool of 88) with a request that they be completed within one week.
After two weeks, 21 completed questionnaires and one piece of written feedback addressing the questions posed had been returned, representing a return rate of almost 47%. This return rate was considered reasonable given that the staff had been asked to complete three other questionnaires in the several months prior to distribution.
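The response-rate arithmetic can be reproduced in a few lines of code. The following Python sketch is purely illustrative: the variable names are mine, and only the counts come from the study.

    # Return-rate arithmetic for the Impact-of-Change questionnaire.
    # Only the counts (45, 21, 1) are taken from the study itself.
    distributed = 45       # questionnaires placed in staff mailboxes
    completed = 21         # completed questionnaires returned
    other_feedback = 1     # one piece of written feedback also returned

    rate = completed / distributed
    print(f"Return rate (completed questionnaires): {rate:.1%}")  # 46.7%, i.e. almost 47%
    print(f"Including written feedback: {(completed + other_feedback) / distributed:.1%}")  # 48.9%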

[ p. 28 ]

Tabulation of Returns

  1. Why do you think the management decided to implement the Curriculum Renewal Project (CRP)?
    • To make the school look more attractive to new students [14]
    • To improve the quality of education [10]
    • An exercise in Public Relations [2]
    • Financial considerations [5]
  2. What do you think are the goals of the CRP?
    • To encourage student enrollment [4]
    • To improve English education provided [11]
    • To improve the overall quality of the teaching staff [3]
    • Same as the management's [2]
    • Hard to say [2]
    • Other [2]
  3. If you haven't become involved in the CRP, it's because: [12 responses]
    • Applied but was not accepted by the CRP office [3]
    • Personalities and a feeling the agenda is already set [2]
    • Large amount of time required [2]
    • Too busy with the present curriculum [3]
    • Other [2]
  4a. What influence has the CRP had on your teaching?
    • None [12]
    • Very little/Negligible [6]
    • New awareness of possible techniques & methods [4]
  4b. What influence has the CRP had on your work environment?
    • A negative impact [13]
    • A positive impact [2]
    • Some effect, but neither positive nor negative [3]
    • No impact [3]
    • No response [3]
  5. In what ways do you think the new curriculum will differ from the current curriculum? (Class size, levels, teaching approach, etc.)
    • Four skills to be combined in one class [8]
    • Class sizes to be increased [5]
    • Teaching approach will change [4]
    • More emphasis on student-based learning [2]
    • Expect many of the changes to be for the worse [4]
    • No real difference [4]
    • No idea [6]
  6. How do you usually get information about developments with the new curriculum?
    • Memos from administration [8]
    • Memos from the project manager [9]
    • Project newsletter [5]
    • Colleagues involved in CRP [11]
    • Word of mouth [15]
  7. In what ways, if any, do you feel the implementation of the CRP may affect your job security?
    • There is a threat to my job security [17]
    • Nil [4]
    • I think my job will be more secure [1]

Number distributed: 45. Number returned: 22.
* In some questions, respondents gave more than one response.
** Most questions presented several answers to choose from, based on the responses to previously trialled instruments, but always included a space for write-in responses.
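Tallies like those above can be compiled mechanically. The following Python sketch shows one way to count multi-response answers; the answer strings and the variable name are hypothetical stand-ins, not the study's actual coding scheme.

    # Tallying multi-response questionnaire answers.
    # Respondents may select more than one option per question, so each
    # respondent's answer is stored as a list of selected options.
    from collections import Counter

    q1_answers = [
        ["attract new students", "improve quality of education"],
        ["financial considerations"],
        ["attract new students"],
        # ... one list per returned questionnaire
    ]

    tally = Counter(option for answers in q1_answers for option in answers)
    for option, count in tally.most_common():
        print(f"{option} [{count}]")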

[ p. 29 ]

  8. On a scale of one to four, how useful have you found the following to your teaching?
     (1 = Very useful . . . 4 = Not at all useful)

                                              (1)   (2)   (3)   (4)
     Focus group workshops                     1     7     4     8
     Focus group / CRP reports                 2     3     6     9
     Lectures from external consultants        1     2     4    12
     Colleagues involved in CRP                3     3     7     6
     Other teacher suggestions                 -     1     -     -
     Idea sharing with colleagues              -     1     -     -

  9. In your opinion, are there any areas in which the CRP could be improved? If so, how? (open-ended question)
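Four-point ratings like those in question eight lend themselves to simple summary statistics. As a minimal sketch (the counts are from the table above; the shortened item labels and the choice of a weighted mean are mine):

    # Summarizing 1-4 usefulness ratings (1 = very useful, 4 = not at all useful).
    # Each list holds the response counts for scale points (1) through (4).
    ratings = {
        "Focus group workshops":        [1, 7, 4, 8],
        "Focus group / CRP reports":    [2, 3, 6, 9],
        "External consultant lectures": [1, 2, 4, 12],
        "Colleagues involved in CRP":   [3, 3, 7, 6],
    }

    for item, counts in ratings.items():
        n = sum(counts)
        mean = sum(point * c for point, c in zip(range(1, 5), counts)) / n
        print(f"{item}: n={n}, mean rating = {mean:.2f}")  # higher = less useful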


Interpreting the data

Of the twenty-two respondents, only six responded that they were directly involved in the project (two abstained from comment). In terms of fair interpretation, then, much of this initial data would have been more significant after analysis of the planned second questionnaire (which was to have included similar, if not identical, questions), once the first official materials trial was in full swing.
At this early stage, what was useful to consider was how this compilation of teachers' impressions could be used to correct some of the information anomalies between the project and the faculty at large. For example, a majority of respondents claimed the Focus Group workshops were not at all useful, so the workshop format was modified; subsequent workshops were better received, with feedback indicating that teachers felt more was gained from them.
Looking at the first two questions, it was fair to say a majority of respondents felt the project was undertaken to improve the quality of English education at the college and, in turn, attract more students. One teacher's comment reflected the mood of others:
. . . to improve the quality and reputation of the school. Since the student body will be declining, this might also attract students.

The third question drew such a variety of responses that it was difficult to draw firm conclusions from it. However, judging from the responses given, it was fair to say that a variety of controversial issues still surrounded teacher involvement in the CRP. When interviewed about this point, a senior teacher commented:

[ p. 30 ]

. . . there is always fear of change. . . and the CRP is going to be a huge change at this school and a lot of people who aren't involved don't feel it's the right direction. I think there is a lot of worrying out there . . . between that (the direction the project is going in) and the divide that has developed between those that are involved and those that are not. . . it's a shame that it's divided the faculty.

Question four needed some consideration. In regard to Teaching, did the high incidence of negative responses reflect a weakness on the part of the project, or did it merely reflect the early stage of project development? It would have been unfair to conclude that the CRP had no effect on the teaching of half the respondents, since virtually no curriculum changes had taken place at this point. The reason this question was asked was to gauge how much awareness and acceptance of alternative teaching methods had been raised as a result of the lectures and workshops. However, thirteen respondents had been teaching at the college for four years or more and may have required greater structural changes than a series of lectures and workshops to modify their teaching. Conversely, this data might indicate that nearly twenty percent of the respondents (four of twenty-two) had gained some awareness, in terms of teaching, from the project at this early stage. One teacher who had not been involved directly with the CRP commented, "I've learned a thing or two. . . picked up from other people (in workshop discussions)." A teacher involved with the CRP said, "I've learnt a lot by reading a lot. My in-class teaching definitely is now more focused on the student and less on me (the teacher)."
Looking at the Work Environment section, thirteen respondents claimed the CRP had a negative impact on their work environment. The fairest means of interpreting this data was to pursue it; subsequent audio interviews included this question for elaboration. Comments included the following:
It depends on how much anxiety you read into it . . . Japan is becoming a more anxiety-ridden place to be working if you're an English teacher and so a lot of people are jumping on the bandwagon to secure themselves a position here . . . there is a danger that there is a join-the-club mentality and an us-and-them mentality and a your-jobs-on-the-line kind of mentality which tend to divide and. . . create antagonisms.
And:
There hasn't been a lot of concrete evidence that change is good. . . there have been a lot of reports, a lot of work put out, a lot of paper used, but nothing concrete.

These issues needed to be seriously addressed by the project management (but, in fact, they were treated more as brush fires to be extinguished) because, as one of the external consultants pointed out in a Focus Group workshop, dissent is the surest way to stall, or kill, a project.
From the range of responses to question five it was clear that some teachers had a general idea of the proposed changes, but many did not have a clear idea of how the new curriculum would differ from the current one. Teachers' comments included, "Rumors, rumors, rumors, what am I to believe?" and "All the different courses will be merged into one." Question six seems to corroborate this lack of clear vision, tangentially, in that fifteen of twenty-two respondents claimed to rely on word of mouth for information about the new curriculum. It was assumed that one of the following was true:

[ p. 31 ]

  1. teachers were unclear about major differences because decisions regarding these differences hadn't been finalized,
  2. decisions had been made and communicated to the faculty, but somehow communicated ineffectively, or
  3. decisions had been made and information had been made available, but failed to register with the faculty.

Teachers' comments in question nine made a case for better communication and information dissemination between the faculty and the project management team. Three teachers mentioned better communication (response b), and three teachers suggested greater honesty and clarity from the CRP office about decision-making (response f). Whether communication actually was a problem was a matter of conjecture, but given teachers' perceptions, a review of the means of information dissemination between the project and the faculty was undertaken. This was also necessary as a response to question seven, in which seventeen teachers felt their job security was threatened by the project. As one teacher summed up the situation:
They (the management of the CRP) haven't sold the idea (of the CRP) to teachers well. They should have been open from the very beginning and they haven't been.

When respondents were asked in question eight about the usefulness of various aspects of the project, there was a marked tendency toward "not useful" for all items. Concerning Focus group workshops, however, there was a nearly even balance between "useful" (n=7) and "not at all useful" (n=8). This question was pursued in audio interviews and yielded the following comments:
one (presentation) out of three or four (presentations) is together (pitched for the audience level and focused). . . it depends on how coherent the presentation is. . . some have been well done, but others. . .

and:

Sometimes they present something new and that's fine. . . but I wish they'd just give us the information and not waste so much time.

Responses to Focus group reports/CRP reports tended heavily toward "Not at all useful." Could there be some correlation between this response and the data from questions five and six? Teachers who read the reports carefully appeared to gain greater knowledge of the process of the CRP, but did not seem to feel the reports had any useful bearing on their teaching. Teachers commented:
Dry reading, long, heavy and difficult to get through. . . a huge stack of papers. . . physically and mentally difficult to read, and I was forced to read one.

As for Lectures from external consultants, a majority of respondents replied "Not at all useful." When some teachers were polled regarding what they would have considered useful in lectures from the outside consultants, the responses included the following:
more structures, maps and time-lines of the CRP. . . more of their responsibility in the process . . . something about dealing with our cultural realities here and addressing why our students come here . . . their lectures have been too frilly and don't hit the heart of matters. . . all theory but not important to me right now.

[ p. 32 ]


Regarding Colleagues involved in CRP, there was some indication that teachers involved in the CRP were having some effect on their colleagues. Though the results leaned toward "Not at all useful," any indication of an effect was significant at that point in the project's development.
Finally, the comments and suggestions from question nine were presented to the project management. As a result, some positive changes were made. In particular, Japanese staff members were invited to become more involved in some aspects of the CRP. The focus of subsequent workshops was also directed more towards informing and inviting comment rather than just lecturing. Moreover, ways were sought to improve how CRP issues were communicated to the staff. Nevertheless, serious problems relating to the impact of changes continued to confront the CRP. The greatest problem stemmed from the mistrust that existed between the project manager, the administration and the staff as a whole, and could be summed up in one word: agendas.

Agendas

A critical issue often raised in curriculum innovation is that of competing agendas. In this case study, the issue of agendas was continually raised at meetings, in particular the hidden agendas believed to be held by various parties to the renewal process. Regrettably, the level of suspicion and mistrust was considerable, and the management's motives for change were questioned throughout the entire process. Suspicions concerning the agenda of the project manager were continually aroused by the manner in which he managed the various groups working under his direction and by comments he carelessly threw about at the most inappropriate times. As Beretta warned:
a major problem for the ESL program evaluator is to address the questions that those who commission and hope to use evaluation findings want answered, without relinquishing his claims to academic research standards. (1990, p. 1)
Equally, as Lynch has noted, some conclusions will need to be omitted:
This is not so much out of fear that "the truth" will not set well with individuals or groups, but out of concern that the conclusions and their supporting evidence be understood and interpreted as intended by the evaluator. (Lynch, 1990, p. 39)
As one of the external consultants reminded participants in one of the early meetings, whenever evaluation is concerned, there is a strong potential for biased interpretation of the findings, whatever they may be. Therefore, evaluators must constantly strive to maintain their independence to ensure that their findings are not selectively construed.

Conclusion

What is hopefully clear from this brief examination of evaluation in the curriculum renewal process is that, potentially, evaluation can mean many different things and carries both risks and benefits. What is essential, most experts agree, is to make an attempt from the start to 'tailor' the evaluative process to the context of the entire renewal process. For without the essential element of evaluation to determine whether or not set goals and objectives are being achieved, educational institutions are most likely doomed to repeat the mistakes of the past, putting in jeopardy not only the futures of their students but their own existence as well.

[ p. 33 ]

References

Alderson, J. C., & Beretta, A. (Eds.). (1992). Evaluating second language education. Cambridge: Cambridge University Press.

Beretta, A. (1992). Evaluation of language education: An overview. In J. C. Alderson & A. Beretta (Eds.), Evaluating second language education (pp. 5-24). Cambridge: Cambridge University Press.

Beretta, A. (1990). The program evaluator: The ESL researcher without portfolio. Applied Linguistics, 11 (1), 1-14.

Beretta, A. (1986). Program-fair language teaching evaluation. TESOL Quarterly, 20 (3), 431-444.

Brown, J. D. (1989). Language program evaluation: A synthesis of existing possibilities. In R. K. Johnson (Ed.), The second language curriculum. Cambridge: Cambridge University Press.

Colman, J. (1988). Curriculum structures in adult language learning: Implications for the AMEP. Prospect, 4 (1), 25-37.

Johnson, R. K. (Ed.). (1989). The second language curriculum. Cambridge: Cambridge University Press.

Lynch, B. K. (1990). A context-adaptive model for program evaluation. TESOL Quarterly, 24 (1), 23-39.

Nunan, D. (1988). The learner-centered curriculum. Cambridge: Cambridge University Press.

Nunan, D. (1992). Research methods in language learning. Cambridge: Cambridge University Press.

Rea-Dickins, P., & Germaine, K. (1992). Evaluation. Oxford: Oxford University Press.

Richards, J. C. (1990). The language teaching matrix. Cambridge: Cambridge University Press.

Tyler, R. W. (1949). Basic principles of curriculum and instruction. Chicago: The University of Chicago Press.

Appendix 1: Flowchart models of curriculum innovation decision-making in Japan (Follow the Leader, Window Dressing, Hotchpotch, Bureaucratic Top-Down)
Appendix 2: Theoretical models of the post-World War II evolution of the curriculum innovation process

