An investigation of method effects on
Keywords: reading comprehension, language testing, text structure, test method effects
Bachman (1990) presents a model of language ability, later modified in Bachman and Palmer (1996). He includes 'test method facets' in his discussion of language ability and draws attention to a range of factors that can affect test performance. Bachman and Palmer (1996:62) posit the importance of method facets, which they now term 'task characteristics', as follows:

Language use involves complex and multiple interactions among the various individual characteristics of language users, on the one hand, and between these characteristics and the characteristics of the language use or testing situation, on the other. Because of the complexity of these interactions, we believe that language ability must be considered within an interactional framework of language use.
". . . five types of top-level relationships are thought to represent patterns in the way we think . . . " |
". . . open-ended questions . . . [are] more effective in measuring the understanding of main ideas of . . . [a] text whereas cloze tests only . . . [touch] upon local understanding . . . " |
1. A 50-item English proficiency test based mainly on grammar and vocabulary
2. A variety of reading comprehension tests
". . . reading comprehension is assessed through open-ended questions, it does not matter what kind of text structure is involved as long as there is some kind of structure . . ." |
More interestingly, the two-way interaction between the two effects proved to be statistically significant (F(6, 723) = 6.149, p < .005). This means that text type and response format not only had significant separate effects on reading comprehension, but also interacted with each other.

Figure 3. Open-Ended Questions results (%) by proficiency levels.
Figure 4. Summary Writing results (%) by proficiency levels.
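The excerpt reports the interaction result but does not show how the analysis was computed. Purely as an illustration of the kind of two-way ANOVA with an interaction term described above, the sketch below uses Python with statsmodels; the factor labels, cell sizes, and simulated scores are hypothetical placeholders and are not the study's data or its actual procedure.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Synthetic, illustrative data only -- NOT the study's data.
# Hypothetical text-type and response-format labels, crossed factorially.
text_types = ["collection", "causation", "problem_solution", "comparison"]
resp_formats = ["cloze", "open_ended", "summary"]

rows = []
for t in text_types:
    for f in resp_formats:
        for _ in range(20):  # 20 simulated examinees per cell
            rows.append({"text_type": t,
                         "resp_format": f,
                         "score": rng.normal(60, 10)})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of text type and response format,
# plus their interaction (score ~ text_type * resp_format).
model = smf.ols("score ~ C(text_type) * C(resp_format)", data=df).fit()
print(anova_lm(model, typ=2))

A significant C(text_type):C(resp_format) row in such a table would correspond to the kind of interaction effect the article reports, i.e. the effect of text organization on comprehension scores differing across response formats.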
From this finding, it can be posited that, in open-ended questions and summary writing, the impact of different kinds of text organization varies considerably across proficiency groups. When texts with looser structures were used, the reading comprehension measured by these response formats did not correspond to general language proficiency as closely as when more tightly organized texts were used. This seems to suggest that, in these test formats, students of higher proficiency could be unfairly disadvantaged and their proficiency may not be reflected accurately in test performance if less structured passages are presented.