Lifelong Learning: Proceedings of the 4th Annual JALT Pan-SIG Conference.
May 14-15, 2005. Tokyo, Japan: Tokyo Keizai University.

An investigation of method effects on
reading comprehension test performance

by Miyoko Kobayashi (Kanda University of International Studies)




Abstract

Following Bachman's (1990) model of test method facets, this study focuses on the nature of the input and the nature of the expected response by manipulating text structures and response formats. It examines the effects of varied text structure and response format on Japanese university students' reading comprehension test performance. The main finding is that these two variables have a significant impact on the students' performance. Moreover, the impact of different kinds of text structure varied considerably across proficiency groups. In the light of these findings, this paper also reports the results of a brief investigation into the suitability of the reading passages used for university entrance examinations in Japan.

Keywords: reading comprehension, language testing, text structure, test method effects



The main purpose of this research was to investigate the effects of factors other than language ability on reading comprehension test performance. The two main variables were text organization and response format. In addition, as a brief follow-up, the paper examines the reading passages used for some university entrance examinations in Japan in order to shed light on actual testing practice.
The theoretical framework for this research is provided firstly by Bachman's (1990) model of language ability and test method facets and secondly by Meyer's (1975, 1985) model of prose analysis. Kintsch and Yarbrough's (1982) study also helped operationalize the response format variable.

[ p. 64 ]

Bachman (1990) presents a model of language ability, which was later modified in Bachman and Palmer (1996). He includes 'test method facets' in his discussion of language ability and draws attention to a range of factors which can affect test performance. Bachman and Palmer (1996: 62) posit the importance of method facets, which they now term 'task characteristics', as follows:
Language use involves complex and multiple interactions among the various individual characteristics of language users, on the one hand, and between these characteristics and the characteristics of the language use or testing situation, on the other. Because of the complexity of these interactions, we believe that language ability must be considered within an interactional framework of language use.

Bachman classifies test method facets into five categories: 1) testing environment; 2) test rubrics; 3) the nature of the input; 4) the nature of the expected response; and 5) the interaction between the input and the response. According to Bachman, these factors can affect test performance; it is important for testers to be aware of their influence and, if possible, to minimize it. This study focuses on the third and fourth of these facets. 'The nature of the input' (the materials presented to test takers) was chosen as the main variable of this study because reading materials are a very important factor in reading comprehension tests. Background knowledge, for example, is a well-researched area (see for example Alderson and Urquhart 1983, 1985a, 1985b; Bernhardt 1991; Carrell and Eisterhold 1983; Clapham 1996; Johnson 1981, 1982; Mohammed and Swales 1984; Salager-Meyer 1991; Steffensen and Joag-Dev 1984; Steffensen et al. 1979; Ulijn and Strother 1990). Of the various factors, text organization, especially rhetorical organization, was chosen for this investigation. This decision was based on an extensive survey of previous studies on text characteristics and readability (e.g. a series of studies by Beck and her colleagues: 1982, 1984, 1989, 1991, 1995; Britton et al. 1989; Davison and Kantor 1982; Duffy and Kabance 1982; Duffy et al. 1989; Graves et al. 1988, 1991; Klare 1985; Olsen and Johnson 1989; Reder and Anderson 1980; Urquhart 1984).
The search for ways of operationalizing text organization, a concept difficult to express in concrete terms, led to Meyer's model of prose analysis (Meyer 1975, 1985). In Meyer's content structure analysis, idea units are organized in a hierarchical manner on the basis of their rhetorical relationships. The rhetorical relation at the highest level in the hierarchy is called the top-level rhetorical organization, and this characterizes the text. The top-level rhetorical structure is identified as one of the following: 'collection', 'causation', 'response', 'description' or 'comparison' (Meyer later renamed 'response' as 'problem-solution', and this latter term is used in this study). These five types of top-level relationships are thought to represent patterns in the way we think (Meyer 1985: 20).
The link between ideas is weakest in 'collection', where ideas are loosely associated with each other around a common topic. 'Time sequence' is a type of 'collection', used for example when recounting events in chronological order. In the 'causation' relation, the ideas are related both in terms of time (i.e. one event happens before another) and causality (i.e. the earlier event causes the later one). Finally, the 'response' ('problem-solution') relation involves a still stronger interrelationship between ideas, in that a solution is suggested in response to the existing causality.
'Comparison' and 'description' are on a different plane from the others because they are based on a hierarchy or subordination of ideas. In a 'description' relation, ideas are arranged in a hierarchical manner: "one argument is superordinate and the other modifies this superordinate argument" (Meyer 1985: 20). The 'comparison' relation has at least two subordinate arguments which are linked by an element of comparison. This means that there is more interlinking in the 'comparison' relation than in the 'description' relation.
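To make the hierarchical representation concrete, the following sketch shows one way a content structure of this kind could be encoded. It is purely illustrative: the class, the toy sentences, and the string labels for relations are this sketch's own devices, not Meyer's notation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IdeaUnit:
    """One idea unit in a content-structure hierarchy (illustrative only)."""
    content: str
    relation: Optional[str] = None  # rhetorical link to the children below
    children: List["IdeaUnit"] = field(default_factory=list)

# A toy text whose top-level relation is 'problem-solution':
# the highest link in the hierarchy characterizes the whole text.
toy_text = IdeaUnit(
    content="Coastal oil spills are damaging fisheries",
    relation="problem-solution",
    children=[
        IdeaUnit("Tankers run aground on crowded shipping routes",
                 relation="causation",
                 children=[IdeaUnit("Groundings release crude oil")]),
        IdeaUnit("Double hulls and separated traffic lanes reduce groundings"),
    ],
)

print(toy_text.relation)  # -> 'problem-solution', the top-level organization
```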
Based on a number of empirical studies (Meyer and Freedle 1984; Meyer et al. 1980, 1993), Meyer claims that ideas are more easily remembered when presented in tightly-organized texts because of the close links between them. Her claim has been supported by other researchers (e.g. Carrell 1984; Goh 1990; McGee 1982; Richgels et al. 1987). This study builds on these findings and explores their applicability to foreign language reading comprehension tests. After a preliminary study, four text types were chosen: 'association', 'description', 'causation' and 'problem-solution'.

[ p. 65 ]

Another variable to be investigated was response format. Meyer and her associates used 'recall', that is, asking students to reproduce what they have read, as a way of measuring reading comprehension. But recall is not a common format in second language testing. Therefore, it was considered more worthwhile to examine more typical test formats. There are a number of research studies on the effects of test format on test performance (e.g. Graesser et al. 1980; Graves et al. 1991; Kintsch and Yarbrough 1982; Lewkowicz 1983; Reder and Anderson 1980; Shohamy 1984; Shohamy and Inbar 1991). Among others, Kintsch and Yarbrough's (1982) study proved inspirational. They investigated the effects of two test formats on reading comprehension: open-ended questions and cloze tests. They found that open-ended questions were more effective in measuring the understanding of main ideas of the text whereas cloze tests only touched upon local understanding and did not reflect the reader's overall understanding. Since text organization was the primary focus of this study, their findings were particularly relevant and it was decided to adopt their approach.
After a pilot study and further reading, a third test format was added: summary writing. Summary writing seemed to be even more sensitive to overall understanding than open-ended questions (Bensoussan and Kreidler 1990). The problem with open-ended questions is that different types of questions require varying levels of reading skills and varying amounts of information from the reading passage. If questions are asked about minor details or relate only to local understanding, they do not require the reader to grasp the meaning of the whole text. Even though open-ended questions can touch upon the main themes of a text, questions normally prompt the reader to focus on specific ideas in the text. To write a summary, on the other hand, the reader needs to distinguish the main ideas from minor details and to identify the macrostructure of the text. This suggested that the text structure of the passages could have a strong impact on performance in summary writing.
In addition to the two main variables, learners' language proficiency level was selected as a third variable: the results of the pilot test had suggested that the impact of this factor would also be worth exploring. The participants were therefore divided into three groups according to their results on a short English proficiency test described below.

The purpose of the study

The research questions were formulated as follows:
  1. Does text organization affect reading comprehension test performance?
  2. Does test format affect reading comprehension test performance?
  3. Does learners' proficiency level interact with either of these variables?
Corresponding research hypotheses, predicting effects for each of these variables, were formulated and tested statistically (see 'Statistical analysis' below).
Methodology

The research methods were as follows:

Participants

In total, 735 Japanese university students participated. The majority of them were 18-19 years of age and in their first or second year. These students were mainly at lower-intermediate to intermediate levels of English proficiency. To cater for the variables, they were randomly divided into twelve groups, with each student receiving one of twelve reading comprehension test versions (see below).

[ p. 66 ]


Materials

Two tests were administered to the students:
	1.  A 50-item English proficiency test, mainly based on grammar and vocabulary

The purposes of the test were to establish the comparability of the twelve student groups and to identify three different proficiency groups as a basis for comparison at a later stage of the study. The test statistics were: mean = 29.7 out of 50; SD = 8.07; reliability (Cronbach's alpha) = .82; facility values ranged from .17 to .99 with a mean of .59; item-total correlations ranged from .08 to .53 with a mean of .34 (a sketch of how such item statistics are computed follows this list).
	2.  A variety of reading comprehension tests
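The item statistics reported for the proficiency test are standard ones; the sketch below shows how they could be computed for a dichotomously scored test. The corrected item-total correlation (each item removed from the total) is an assumption, as the paper does not state which variant was used.

```python
import numpy as np

def item_statistics(responses: np.ndarray):
    """Item analysis for a 0/1-scored test.
    responses: (n_students, n_items) array of 0s and 1s."""
    n_items = responses.shape[1]
    totals = responses.sum(axis=1)

    # Facility value: proportion of students answering each item correctly.
    facility = responses.mean(axis=0)

    # Corrected item-total correlation: each item against the total
    # score with that item removed (assumption; the study may have
    # used the uncorrected variant).
    item_total = np.array([
        np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
        for i in range(n_items)
    ])

    # Cronbach's alpha: internal-consistency reliability.
    alpha = (n_items / (n_items - 1)) * (
        1 - responses.var(axis=0, ddof=1).sum() / totals.var(ddof=1)
    )
    return facility, item_total, alpha

# Demonstration with random stand-in data (735 students, 50 items):
rng = np.random.default_rng(0)
facility, item_total, alpha = item_statistics(rng.integers(0, 2, (735, 50)))
```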


The texts used in the study were specially prepared to maximise control over the variables identified in the pilot study. On the basis of expert judgement regarding their suitability as representative samples of the selected text types, two sets of texts concerning 'international aid' and 'sea safety' were finally selected for use in the study. The mean length of the texts was 369.3 words (range: 352-384), and the mean score was 64.4 (range: 58.5-69.9) on the Flesch Reading Ease Formula, one of the most widely recognised readability indices (sketched below). After the two sets of four texts had been selected, test items were developed for each text in three formats: cloze, open-ended questions, and summary writing.
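For reference, the Flesch Reading Ease score depends only on average sentence length and average syllables per word. A minimal implementation, with the counts supplied by the caller, might look like this (the example counts are invented):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentence)
                                     - 84.6*(syllables/word).
    Higher scores mean easier texts; the study's passages averaged
    64.4, which falls in the 'standard' band of the scale."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# e.g. a 369-word passage with 20 sentences and 520 syllables (made-up counts):
print(round(flesch_reading_ease(369, 20, 520), 1))
```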
The number of items was 25 for the cloze test, 5 for the open-ended question format, and 10 for summary writing. Two of the response formats, open-ended questions and summary writing, were set in Japanese, the students' first language, to eliminate undesirable effects of the use of English on reading performance.
Every effort was made to maximize the comparability across the eight tests. To achieve this, extensive use was made of expert judgements (see below). For example, for the cloze tests, the deletion rate (every 13th word) was decided on the basis of the results of the pilot study, and the starting points for deletion were decided after extensive analysis of the nature and types of potential cloze items (see Appendix 1).
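As an illustration of such a deletion procedure, the sketch below blanks every 13th word from a given starting point. The gap style, answer-key format, and default starting point are this sketch's own choices, not the study's exact materials.

```python
import re

def make_cloze(text: str, nth: int = 13, start_word: int = 12):
    """Replace every nth word with a numbered blank, beginning at
    index `start_word` (0-based). Returns gapped text and answer key."""
    words = text.split()
    answers = []
    for i in range(start_word, len(words), nth):
        # Keep surrounding punctuation; delete only the word itself.
        lead, core, trail = re.match(r"^(\W*)(.*?)(\W*)$", words[i]).groups()
        answers.append(core)
        words[i] = f"{lead}({len(answers)}) ______{trail}"
    return " ".join(words), answers
```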

Procedures

Ideally, all the students would have received all the versions to facilitate comparison of test performance. However, this approach had two limitations, one of practicality and one of validity. First, it was impractical for the students to take all 24 tests, given the amount of time required. Secondly, the validity of the research would have been undermined if the students had been given all 24 texts, because they would have read a set of eight texts three times. Shohamy (1984) questions the validity of the study by Samson (1983), who compared three test formats by allowing the participants to take all the versions based on the same passage. Furthermore, in my study the four texts within each topic were fairly similar, varying only in text structure. This would have caused a similar problem, arising from familiarity effects.
Therefore, a matrix sampling procedure was adopted in which each student was given only two of the 24 texts. Each student would receive one text from each topic, 'international aid' and 'sea safety'. Both of these texts would be of the same text type and in one of the three test formats. This meant that there would be 12 participant groups, each taking a different set of test versions, varying in text type and response format. Table 1 below summarises the 12 participant groups. For example, Group 2 would take a cloze test with two 'causation' texts while Group 9 would write summaries of two 'association' texts. A one-way ANOVA was conducted, and it was statistically confirmed that there was no significant difference among the twelve groups in their English language proficiency (F (11, 723) = 0.39, n.s.).
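The comparability check is a standard one-way ANOVA on the proficiency scores. A sketch with hypothetical stand-in data (the real analysis used the 735 students' actual scores) might run as follows:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

# Stand-in data: 12 groups drawn from one distribution with the
# reported overall mean (29.7) and SD (8.07); group sizes are hypothetical.
groups = [rng.normal(29.7, 8.07, size=61) for _ in range(12)]

f_stat, p = f_oneway(*groups)
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
print(f"F({df_between}, {df_within}) = {f_stat:.2f}, p = {p:.3f}")
# A non-significant result (p > .05) corresponds to the reported
# F(11, 723) = 0.39, n.s.: comparable proficiency across the groups.
```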

[ p. 67 ]



Table 1. Twelve participant groups used in the present study.

Furthermore, to eliminate an order effect, the order of the two texts was counterbalanced in each set of tests. This resulted in 24 different sets of test booklets: two sets of twelve different tests. The test booklets, 24 different versions, were arranged so that each version would be randomly distributed among the students. In this way, the students were randomly divided into twelve groups.
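The design can be enumerated mechanically to verify these counts: crossing four text types with three formats yields the 12 versions, and counterbalancing the two topic orders yields the 24 booklets. The labels in this sketch are merely descriptive.

```python
from itertools import product

text_types = ["association", "description", "causation", "problem-solution"]
formats = ["cloze", "open-ended questions", "summary writing"]
orders = [("international aid", "sea safety"),
          ("sea safety", "international aid")]

# 12 test versions: one text type crossed with one response format.
versions = list(product(text_types, formats))

# Counterbalancing the two topic orders doubles this to 24 booklets.
booklets = list(product(versions, orders))

print(len(versions), len(booklets))  # -> 12 24
```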

Expert judgement

Expert judges were invited to assist at different stages of this study, from text selection and item analysis to establishing marker reliability. For example, a number of judges were asked to analyse test items in detail in order to maximise the comparability across the eight texts.

Statistical analysis

Using SPSS/PC, descriptive statistics were calculated. ANOVAs (both one-way and two-way analyses of variance) were then conducted to test the research hypotheses; the significance level was set at p < .05. To assess the reliability of the researcher's marking, 15% of the papers for the open-ended questions and the summary writing task were independently marked by two other expert judges. The correlation coefficients among the raters ranged from .85 to .92, which was deemed satisfactory for the purpose. (A sketch of these analyses follows.)
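Below is a sketch of the two analyses with simulated stand-in data: a two-way ANOVA for the text type x response format hypotheses, and a Pearson correlation for inter-rater reliability. The choice of statsmodels and scipy (rather than SPSS) and the simulated scores are this sketch's assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
text_types = ["association", "description", "causation", "problem-solution"]
formats = ["cloze", "open-ended", "summary"]

# Stand-in reading scores (%), roughly 61 students per cell as in the design.
df = pd.DataFrame([
    {"text_type": t, "fmt": f, "score": rng.normal(60, 15)}
    for t in text_types for f in formats for _ in range(61)
])

# Two-way ANOVA: main effects of text type and response format plus
# their interaction (the study reports F(6, 723) = 6.149, p < .005).
model = ols("score ~ C(text_type) * C(fmt)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Inter-rater reliability: Pearson r between two raters' marks.
rater1 = rng.normal(60, 15, size=100)
rater2 = rater1 + rng.normal(0, 5, size=100)  # correlated stand-in marks
r, _ = pearsonr(rater1, rater2)
print(r)
```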

Results

Figure 1 below shows the students' mean scores on the reading tests for the four different types of text structure and the three types of response format.

Figure 1. Reading Comprehension Test scores (%).

The figure shows that, in the cloze tests, the mean scores were highest in 'association' texts and lowest in 'problem-solution' texts. In other words, comprehension performance as measured by the cloze format was better in loosely-organized texts and became poorer as the text structure became tighter. On the other hand, in open-ended questions and summary writing, the students' mean scores were lowest in 'association' texts, the most loosely organized texts. While the highest scores for open-ended questions were in 'description' texts, for summary writing the highest scores were in 'causation' texts. More generally, for the summary writing response format the two most tightly-organized texts ('causation' and 'problem-solution' texts) produced the highest mean scores, whereas for the open-ended response format equally high values were observed in three text types ('description', 'causation', and 'problem-solution' texts). This may suggest that when reading comprehension is assessed through open-ended questions, it does not matter what kind of text structure is involved as long as there is some kind of structure. There seems to be a clear distinction between cloze tests and the other two formats in their interaction with types of text organization. This difference was statistically significant (see Appendix 2). In other words, it can be claimed that test performance is affected by text type and response format.

[ p. 68 ]

More interestingly, the two-way interaction between the two factors proved to be statistically significant (F(6, 723) = 6.149, p < .005). This means that text type and response format not only had significant separate effects on reading comprehension but also interacted with each other.
It is interesting to find that the presence of clear text structure did not help reading comprehension performance in cloze tests, and perhaps even hindered it. No other studies have been conducted in this area, so it is difficult to explain this pattern. It may be related to the density of information; tightly-organized texts may compress more different ideas into a limited space so as to include all elements needed to develop an argument, and therefore may contain more new words (see Kintsch and Keenan 1973). As the frequency of a word's recurrence in a text seems to be one of the factors affecting cloze item difficulty (see Kobayashi 2002b), this is an interesting area to explore further.
When the results were examined in terms of the learners' language proficiency level, more striking patterns emerged. The following figures (Figures 2-4) show the mean scores of the three proficiency groups.

Figure 2. Cloze Test results (%) by proficiency levels.
Figure 3. Open-Ended Questions results (%) by proficiency levels.
Figure 4. Summary Writing results (%) by proficiency levels.

Overall the effects of tighter text organization were more apparent with higher proficiency learners, notably when the open-ended questions and summary writing were used as the response format. By contrast, the performance of less proficient students showed little variation according to text type or test format. This was again statistically confirmed (see Appendix 2).

[ p. 69 ]

From this finding, it can be posited that, in open-ended questions and summary writing, the impact of different kinds of text organization varies considerably across proficiency groups. When texts with looser structures were used, the reading comprehension measured by these response formats did not correspond to general language proficiency as closely as when more tightly-organized texts were used. This suggests that, in these test formats, students of higher proficiency could be unfairly disadvantaged, and their proficiency may not be reflected accurately in test performance, if less structured passages are presented.

Conclusion

This research has employed Bachman's influential model of language ability and test method facets as an organizing framework. The findings of this study have provided data to support two aspects of his model: the effect of the nature of the input and the nature of the expected response on reading comprehension. More research needs to be conducted in this area so that the findings reported here can be illuminated further, but it seems that the main implications of this study for language testing and second language research are clear.
Very often, test results are used as evidence for making important decisions. For example, test results may be used to decide whether a student should be admitted to university, whether a prospective employee should be hired, or whether a project should continue or not. This study has clearly demonstrated that there is a systematic relationship between the students' test performance and the two variables examined. Therefore, it is extremely important for language testers, or anyone who makes judgements on the basis of test results, to pay attention to the test methods used when they produce their assessment instruments or interpret test scores.

Follow-up

As a brief follow-up, this research has recently been extended to investigate the text structures of actual reading passages used for university entrance examinations in Japan. Reading passages from the Centre Examinations for the past seven years were examined: 28 passages altogether, four for each year (Questions 4 and 6 of both the main exam and the additional exam for those who could not take the first one). Half of the 28 passages were narrative, mostly heart-warming stories with a moral message; these had no clear text structure apart from a loose time sequence. The other half were expository texts involving charts or tables. The analysis revealed that the vast majority of these texts had 'description' as the main text organization, and none had 'causation' or 'problem-solution'. As discussed earlier, 'description' is not tightly organized compared with 'causation' or 'problem-solution'. The lack of clear structure in many of the reading passages used for the Centre Examinations therefore seems to present a problem, as the more proficient students could be disadvantaged.
The types of questions which appeared in the Centre Examinations are a further source of concern. Kobayashi (1995, 2004b) discovered that local level questions tended to have poor discrimination between students. It is therefore worrying that the vast majority of the questions in the Centre Examinations seemed to require only a small amount of context. It is also a problem that some questions only required the ability to understand a chart or table, not the comprehension of the content of a passage.
Of course, this investigation is exploratory and limited in scope, but its findings point to important practical problems with the Centre Examination. This is a high-stakes examination, and it is therefore essential that it be well designed. This research has identified a number of issues which should be taken into account in future reviews of the examination.

References

Alderson, J.C. & Urquhart, A.H. (1983). The effect of student background discipline on comprehension: A pilot study. In A. Hughes & D. Porter (Eds.), Current Developments in Language Testing (pp. 121-127). London: Academic Press.

Alderson, J.C. & Urquhart, A.H. (1985a). This test is unfair: I'm not an economist. In P. L. Carrell, J. Devine, & D. Eskey (Eds.), Interactive Approaches to Second Language Reading (pp. 168-182). Cambridge: Cambridge University Press.

Alderson, J.C. & Urquhart, A.H. (1985b). The effect of students' academic discipline on their performance on ESP reading tests. Language Testing, 2, 192-204.

[ p. 70 ]


Bachman, L. F. (1990). Fundamental Considerations in Language Testing. Oxford: Oxford University Press.

Bachman, L. F. & Palmer, A. S. (1996). Language Testing in Practice. Oxford: Oxford University Press.

Beck, I. L., McKeown, M. G. & Gromoll, E.W. (1989). Learning from social studies texts. Cognition and Instruction, 6, 99-158.

Beck, I. L., McKeown, M. G., Omanson, R. C. & Pople, M. (1984). Improving the comprehensibility of stories: The effects of revisions that improve coherence. Reading Research Quarterly, 19, 263-277.

Beck, I. L., McKeown, M. G., Sinatra, G. M., & Loxterman, J.A. (1991). Revising social studies text from a text processing perspective: Evidence of improved comprehensibility. Reading Research Quarterly, 26, 251-276.

Beck, I. L., McKeown, M.G. & Worthy, J. (1995). Giving a text voice can improve students' understanding. Reading Research Quarterly, 30, 220-238.

Beck, I. L., Omanson, R. C. & McKeown, M. G. (1982). An instructional redesign of reading lessons: Effects on comprehension. Reading Research Quarterly, 17, 462-481.

Bensoussan, M. & Kreidler, I. (1990). Improving advanced reading comprehension in a foreign language: Summaries vs. short-answer questions. Journal of Research in Reading, 13, 55-68.

Bernhardt, E. B. (1991). Reading Development in a Second Language: Theoretical, Empirical and Classroom Perspectives. Norwood, NJ: Ablex Publishing Corporation.

Britton, B. K., Van Dusen, L., Gulgoz, S. & Glynn, S. M. (1989). Instructional texts rewritten by five expert teams: Revisions and retention improvements. Journal of Educational Psychology, 81, 226-239.

Carrell, P. L. (1984). The effects of rhetorical organization on ESL readers. TESOL Quarterly, 18, 441-469.

Carrell, P. L. & Eisterhold, J.C. (1983). Schema theory and ESL reading pedagogy. TESOL Quarterly, 17, 553-573.

Clapham, C. (1996). The Development of IELTS: A study of the effect of background knowledge on reading comprehension. Cambridge: Cambridge University Press.

Davison, A. & Kantor, R. (1982). On the failure of readability formulas to define readable texts: A case study from adaptations. Reading Research Quarterly, 17, 187-209.

Duffy, T. M. & Kabance, P. (1982). Testing a readable writing approach to text revision. Journal of Educational Psychology, 74, 733-748.

Duffy, T. M., Higgins, L., Mehlenbacher, B., Cochran, C., Wallace, D., Hill, C., Haugen, D., McCaffrey, M., Burnett, R., Sloane, S., & Smith, S. (1989). Models for the design of instructional text. Reading Research Quarterly, 24, 434-457.

Goh, S. T. (1990). The effects of rhetorical organization in expository prose on ESL readers in Singapore. RELC Journal, 21, 1-13.

Graesser, A. C., Hoffman, N. L. & Clark, L. F. (1980). Structural components of reading time. Journal of Verbal Learning and Verbal Behavior, 19, 135-151.

Graves, M. F., Slater, W. H., Roen, D., Redd-Boyd, T., Duin, A.H., Furniss, D. W. & Hazeltine, P. (1988). Some characteristics of memorable expository writing: Effects of revisions by writers with different backgrounds. Research in the Teaching of English, 22, 242-280.

Graves, M. F., Prenn, M. C., Earle, J., Thompson, M., Johnson, V. & Slater, W. H. (1991). Improving instructional texts: Some lessons learned. Reading Research Quarterly, 26, 110-132.

[ p. 71 ]


Johnson, P. (1981). Effects on reading comprehension of language complexity and cultural background of a text. TESOL Quarterly, 15, 169-181.

Johnson, P. (1982). Effects on reading comprehension of building background knowledge. TESOL Quarterly, 16, 503-516.

Kintsch, W. & Keenan, J. (1973). Reading rate as a function of number of propositions in the base structure of sentences. Cognitive Psychology, 6, 257-274.

Kintsch, W. & Yarbrough, J.C. (1982). Role of rhetorical structure in text comprehension. Journal of Educational Psychology, 74, 828-834.

Klare, G. R. (1985). How to Write Readable English. London: Hutchinson.

Kobayashi, M. (1995). Effects of text organization and test format on reading comprehension test performance. Unpublished PhD thesis, Thames Valley University, London.

Kobayashi, M. (2002a). Method effects on reading comprehension test performance: Text organization and response format. Language Testing, 19, 191-218.

Kobayashi, M. (2002b). Cloze tests revisited: Exploring item characteristics with special attention to scoring methods. Modern Language Journal, 86, 571-586.

Kobayashi, M. (2004a). Investigation of test method effects - text organization and response format: A response to Chen, 2004. Language Testing, 21, 235-244.

Kobayashi, M. (2004b). Reading comprehension assessment: From text perspectives. Scientific Approaches to Language (Center for Language Sciences, Kanda University of International Studies), 3, 129-157.

Lewkowicz, J. A. (1983). Method effect on testing reading comprehension: A comparison of three methods. Unpublished MA Thesis. University of Lancaster.

McGee, L. M. (1982). Awareness of text structure: Effects on children's recall of expository text. Reading Research Quarterly, 17, 581-590.

Meyer, B.J.F. (1975). The organization of prose and its effects on memory. Amsterdam: North Holland Publishing Company.

Meyer, B.J.F. (1985). Prose analysis: Purpose, procedures, and problems: Parts I and II. In B. K. Britton & J. B. Black (Eds.), Understanding Expository Text (pp. 11-64 & 269-304). Hillsdale, N.J.: Lawrence Erlbaum.

Meyer, B.J.F., Brandt, D. M. & Bluth, G. J. (1980). Use of top-level structure in text: Key for reading comprehension of ninth-grade students. Reading Research Quarterly, 16, 72-103.

Meyer, B.J.F. & Freedle, R. O. (1984). Effects of discourse type on recall. American Educational Research Journal, 21, 121-143.

Meyer, B.J.F., Marsiske, M. & Willis, S. L. (1993). Text processing variables predict the readability of everyday documents read by older adults. Reading Research Quarterly, 28, 234-249.

Mohammed, M.A.H. & Swales, J. M. (1984). Factors affecting the successful reading of technical instructions. Reading in a Foreign Language, 2, 206-217.

Olsen, L.A. & Johnson, R. (1989). A discourse-based approach to the assessment of readability. Linguistics and Education, 1, 207-231.

Reder, L. M. & Anderson, J. R. (1980). A comparison of texts and their summaries: Memorial consequences. Journal of Verbal Learning and Verbal Behavior, 19, 121-134.

Richgels, D. J., McGee, L. M., Lomax, R. G., & Sheard, C. (1987). Awareness of four text structures: Effects on recall of expository text. Reading Research Quarterly, 22, 177-196.

[ p. 72 ]


Salager-Meyer, F. (1991). Reading expository prose at the post-secondary level: the influence of textual variables on L2 reading comprehension (a genre-based approach). Reading in a Foreign Language, 8, 645-662.

Samson, D.M.M. (1983). Rasch and reading. In H. van Weeren (Ed.), Practice and Problems in Language Testing. Arnhem: CITO. (Cited in Shohamy 1984.)

Shohamy, E. (1984). Does the testing method make a difference? The case of reading comprehension. Language Testing, 1, 147-170.

Shohamy, E. & Inbar, O. (1991). Validation of listening comprehension tests: The effect of text and question type. Language Testing, 8, 23-40.

Steffensen, M.S. & Joag-Dev, C. (1984). Cultural knowledge and reading. In J.C. Alderson, & A.H. Urquhart, (Eds.), Reading in a Foreign Language (pp. 48-61). London: Longman.

Steffensen, M.S., Joag-Dev, C. & Anderson, R.C. (1979). A cross-cultural perspective on reading comprehension. Reading Research Quarterly, 15, 10-29.

Ulijn, J. M. & Strother, J. B. (1990). The effect of syntactic simplification on reading EST texts as L1 and L2. Journal of Research in Reading, 13, 38-54.

Urquhart, A. H. (1984). The effect of rhetorical ordering on readability. In J.C. Alderson & A.H. Urquhart (Eds.), Reading in a Foreign Language (pp. 160-175). London: Longman.

