Assessing speaking in junior high school:
by Tomoyasu Akiyama (The University of Melbourne)
The 'indirect speaking tests' in Section 2 of this test were low on interactiveness because students were only required to select the English sentence that most appropriately fit a given scenario. In terms of test impact, this test does not encourage students or teachers to focus on oral/aural skills. In terms of practicality, however, the current paper-and-pencil English examination is highly practical. Hence the 1999 version of the test has two strong points (reliability and practicality) and four weak points (construct validity, impact, authenticity, and interactiveness), as depicted in Figure 2.
As studies by Brindley (1999) indicate, the reliability of school-based assessment tends to be low. Construct validity, however, could potentially be high, as Hamp-Lyons (1996) claims: she argues that portfolio assessment tends to have greater task validity than traditional tests. Authenticity and interactiveness could also be potentially high, because school-based assessment can provide ample opportunities to speak. These judgments need to be made with caution, however, because results may vary significantly depending on teachers and teaching styles. Practicality seems to be the main reason that current tests do not have a speaking component.
Raters and scoring criteria
Person fit indexes
References