Testing, as a part of English teaching, is an important procedure, not only because it is a valuable source of information about the effectiveness of learning and teaching, but also because it can improve teaching and stimulate students' motivation to learn. Testing oral proficiency has become one of the most important issues in language testing since the role of speaking ability has become more central in language teaching with the advent of communicative language teaching (Nakamura, 1993). However, assessing speaking is challenging (Luoma, 2004). Validity and reliability, as fundamental concerns and essential measurement qualities of the speaking test (Bachman, 1990; Bachman & Palmer, 1996; Alderson et al., 1995), have attracted widespread attention. The validation of the speaking test is an important area of research in language testing.
Oral proficiency testing began in China only about fifteen years ago, and a few tests dominate the field. An increasing number of Chinese linguists are devoting their attention and efforts to analyzing the validity and reliability of these tests. With the widespread promotion of communicative language teaching (CLT), institutions have begun to introduce speaking tests into their English examinations in recent years. Publications that deal with speaking tests within institutions provide some qualitative assessments (Cai, 2002), but there is relatively little research literature on the reliability and validity of such measures within a university context (Wen, 2001).
The College English Department at Dalian Nationalities University (DLNU) has been selected as one of thirty-one institutions in the College English Reform Demonstration Project in the People's Republic of China. In the College English (CE) course at DLNU, the speaking test is one of the four subtests of the final English examination. The examination uses two different formats. One is a semi-direct speaking test, in which examinees speak into microphones connected to computers and have their speech recorded for teachers to rate afterwards. The other is a face-to-face interview. The research reported in this paper aims to ascertain the degree of reliability and validity of these speaking tests. By analyzing the results, teachers will become more aware of the validity and reliability of oral assessments, including how to improve them. As a language teacher, I will gain insight into the operation of language proficiency tests; to achieve a better degree of reliability and validity in a particular test, I will also take other qualities of test usefulness, such as practicality and authenticity, into account when designing language proficiency tests.
This study mainly addresses the questions of the validity and reliability of the speaking test administered at DLNU. These are comprehensive concepts that involve analysis of test tasks, administration, rating criteria, examinees' and testers' attitudes towards the test, and the effect of the test on teaching and on teachers' and learners' attitudes towards learning (Luoma, 2004). Therefore, the purpose of this study is to answer the following research questions:
1. Is the speaking test administered at DLNU a valid and reliable test? This question involves the following two sub-questions:
1) To what extent is the speaking test administered at DLNU reliable?
2) To what extent is the speaking test administered at DLNU valid?
2. In what aspects and to what extent may the validity and reliability of the speaking test administered at DLNU be improved?
This chapter presents a theoretical framework covering the speaking construct, ways of testing speaking, the marking of speaking tests, and the reliability and validity of speaking tests; it also introduces the situation of speaking testing in China.
Analyzing Speaking And Speaking Tests
The Nature Of Speaking
Speaking, as a social and situation-based activity, is an integral part of people's daily lives (Luoma, 2004). Testing second language speaking is often claimed to be a much more difficult undertaking than testing other second language abilities, capacities, competencies or skills (Underhill, 1987). Assessment is difficult not only because speaking is fleeting, temporal and ephemeral, but also because of the comprehensibility of pronunciation, the special nature of spoken grammar and spoken vocabulary, and the interactive and social features of speaking (Luoma, 2004), as well as the "unpredictability and dynamic nature" of language itself (Brown, 2003). To have a clear understanding of what it means to be able to speak a language, we must understand that the nature and characteristics of spoken language differ from those of the written form in grammar, syntax, lexis and discourse patterns (Luoma, 2004; McCarthy & O'Keefe, 2004; Bygate, 2001).
Spoken English involves reduced grammatical elements arranged into formulaic chunks or utterances, with less complex sentences than written texts. It departs from standard word order because the omitted information can be restored from the immediate context (McCarthy & O'Keefe, 2004; Luoma, 2004; Bygate, 2001; Fulcher, 2003). Spoken English contains frequent use of the vernacular, interrogatives, tails, adjacency pairs, fillers and question tags, which have been interpreted as dialogue facilitators (Luoma, 2004; Carter & McCarthy, 1995). Speech also contains a fair number of slips and errors, such as mispronounced words, mixed sounds, and wrong words chosen through inattention, which are often pardoned and allowed by native speakers (Luoma, 2004). Conversations are also negotiable, unpredictable, and shaped by the social and situational context in which the talk happens (Luoma, 2004).
The Importance Of Speaking Tests
Testing oral proficiency has become one of the most important issues in language testing since the role of speaking ability has become more central in language teaching with the advent of CLT (Nakamura, 1993). Of the four language skills (listening, speaking, reading, and writing), listening and reading occur in the receptive mode, while speaking and writing exist in the productive mode. Understanding and absorbing received information are foundational, while expressing and using acquired information demonstrate improvement and constitute a more advanced test of knowledge. Much of the current interest in oral testing arises partly because second language teaching is more than ever directed towards the speaking and listening skills (Underhill, 1987). Language teachers are engaged in "teaching a language through speaking" (Hughes, 2002:7). On the one hand, spoken language is the focus of classroom activity, though there are often other aims the teacher might have, for instance, helping the student gain awareness of, or practice in, some aspect of linguistic knowledge (ibid). On the other hand, the speaking test, as a device for assessing learners' language proficiency, also functions to motivate students and reinforce their learning of the language. This represents what Bachman (1991) has called an "interface" between second language acquisition (SLA) and language testing research.
However, assessing speaking is challenging, "because there are many factors that influence our impression of how well someone can speak a language" (Luoma, 2004:1), as well as because of the unpredictable, impromptu nature of spoken interaction. The testing of speaking is difficult due to both practical obstacles and theoretical challenges. Much attention has therefore been given to how to perfect the assessment of oral English and how to improve its validity and reliability. The communicative nature of the testing environment also remains to be considered (Hughes, 2002).
The Construct Of Speaking
Introduction To Communicative Language Ability (CLA)
A clear and explicit definition of language ability is essential to language test development and use (Bachman, 1990). The theory on which a language test is based determines what kind of language ability the test can measure; the extent to which a test measures the ability defined by its underlying theory is called construct validity. According to Bachman (1990:84), CLA can be described as "consisting of both knowledge, or competence, and the capacity for implementing, or executing, that competence in appropriate, contextualized communicative language use". CLA includes three components: language competence, strategic competence and psychophysiological mechanisms. The framework in Figure 2.1 shows the components of communicative language ability in communicative language use (Bachman, 1990:85).
[Figure 2.1: Components of communicative language ability in communicative language use, relating knowledge structures (knowledge of the world) and language competence (knowledge of language), via strategic competence and psychophysiological mechanisms, to the context of situation (Bachman, 1990:85).]
This framework has been widely accepted in the field of language testing. Bachman (1990:84) proposes that "language competence" essentially refers to a set of specific knowledge components that are utilized in communication via language. It comprises organizational and pragmatic competence. The two areas of organizational knowledge that Bachman (1990) distinguishes are grammatical knowledge, which comprises vocabulary, syntax, phonology and graphology, and textual knowledge, which comprises cohesion and rhetorical or conversational organization. Pragmatic competence concerns how utterances, sentences and texts are related to the communicative goals of language users and to the features of the language-use setting. It includes knowledge of illocutionary acts, or language functions, and sociolinguistic competence, or knowledge of the sociolinguistic conventions that govern appropriate language use in a particular culture and in varying situations within that culture (Bachman, 1987).
Strategic competence refers to the mastery of verbal and nonverbal strategies that facilitate communication and implement the components of language competence. It is demonstrated in contextualized communicative language use, for example by drawing on sociocultural knowledge and real-world knowledge and mapping these onto the maximally efficient use of existing language abilities.
Psychophysiological mechanisms refer to the neurological and physiological processes, such as the visual and auditory channels, through which language use is realized; in a test setting, for example, they include the visual and auditory skills used to gain access to the information in the administrator's instructions, conveyed through sound and light.
Fulcher’s Construct Definition
Knowing what to assess in a speaking test is a prime concern. Fulcher (1997b) points out that the definition of the construct of speaking proficiency is still incomplete. Nevertheless, there have been various attempts to capture the underlying construct of speaking ability and to develop theoretical frameworks for defining it. Fulcher's framework (Figure 2.2) (Fulcher, 2003:48) describes the speaking construct.
As Fulcher (2003) points out, there are many factors that could be included in the definition of the construct:
Phonology: the speaker must be able to articulate the words, have an understanding of the phonetic structure of the language at the level of the individual word, have an understanding of intonation, and create the physical sounds that carry meaning.
Fluency and accuracy: these concepts are associated with the automaticity of performance and its impact on the listener's ability to understand. Accuracy refers to the correct use of grammatical rules, structures and vocabulary in speech. Fluency has to do with the ability to mobilise one's language knowledge in the service of communication at a relatively 'normal' speed of delivery. The quality of speech needs to be judged in terms of the gravity of the errors made, or the distance from the target forms or sounds.
Strategic competence: this is generally thought to refer to an ability to achieve one's communicative purpose through the deployment of a range of coping strategies. Strategic competence includes both achievement strategies and avoidance strategies. Achievement strategies include overgeneralization, or morphological creativity, in which learners transfer knowledge of the language system onto lexical items that they do not know, for example, saying "buyed" instead of "bought". Speakers also use approximation, replacing an unknown word with one that is more general, as well as exemplification, paraphrasing (using a synonym for the word needed), word coinage (inventing a new word for an unknown word), restructuring (using different words to communicate the same message), cooperative strategies (asking for help from the listener), code switching (taking a word or phrase from a language shared with the listener in order to be understood) and non-linguistic strategies (using gestures or mime, or pointing to objects in the surroundings, to help communicate). Avoidance or reduction strategies consist of formal avoidance (avoiding using part of the language system) and functional avoidance (avoiding topical conversation). Strategic competence also includes selecting communicative goals and planning and structuring oral production so as to fulfill them.
Textual knowledge: competent oral interaction involves some knowledge of how to manage and structure discourse, for example, through appropriate turn-taking, opening and closing strategies, maintaining coherence in one’s contributions and employing appropriate interactional routines such as adjacency pairs.
Pragmatic and sociolinguistic knowledge: effective communication requires appropriateness and the knowledge of the rules of speaking. A range of speech acts, politeness and indirectness can be used to avoid causing offence.
Ways Of Testing Speaking
Clark (1979) puts forward a theoretical basis for discriminating three types of speaking tests: direct, semi-direct and indirect tests. Indirect tests belong to the "pre-communicative" era in language testing, in which the test takers are not actually required to speak. They have been regarded as having the least validity and reliability, while the other two formats are more widely used (O'Loughlin, 2001). In this section, the characteristics, advantages and disadvantages of the direct and semi-direct test are presented.
The Oral Proficiency Interview Format
One of the earliest and most popular direct speaking test formats, and one that continues to exert a strong influence, is the oral proficiency interview (OPI), developed originally by the FSI (Foreign Service Institute) in the United States in the 1950s and later adopted by other government agencies. It is conducted with an individual test-taker by a trained interviewer, who assesses the candidate using a global band scale (O'Loughlin, 2001). It typically begins with a warm-up discussion of a few easy questions, such as getting to know each other or talking about the day's events. The main interaction then consists of pre-planned tasks, such as describing or comparing pictures, narrating from a picture series, talking about a pre-announced or examiner-selected topic, or possibly a role-play task or a reverse interview in which the examinee asks questions of the interviewer (Luoma, 2004). An important example of this type of test is the speaking component of the International English Language Testing System (IELTS), which is taken in 105 different countries around the world each year.
The Advantages Of An Interview Format
The oral interview has long been recognized as the most commonly used speaking test format. Fulcher (2003) suggests that this is partly because the questions used can be standardized, making comparison between test takers easier than when other task types are used. Using this method, the instructor can get a sense of students' oral communicative competence and can overcome the weaknesses of written exams, because the interview, unlike a written exam, "is flexible in that the questions can be adapted to each examinee's performance, and thus the testers have more control over what happens in the interaction" (Luoma, 2004:35). It is also relatively easy to train raters and obtain high inter-rater reliability (Fulcher, 2003).
The Disadvantages Of An Interview Format
However, concern and skepticism exist about whether it is possible to test other competencies or knowledge because of the nature of the discourse that the interview produces (van Lier, 1989).
a. Issue of time
For the instructor, time management can be quite an issue. For instance, using a two-hour period (120 minutes) to examine 20 students leaves each student only six minutes of testing time, including the time needed to enter the room and adjust to the setting. With such a time limit, the student and instructor can hardly have any kind of normal, real-world conversation.
b. Issue of asymmetrical relationship
The asymmetrical relationship between examiners and candidates elicits a form of inauthentic and limited socio-cultural context (van Lier, 1989; Savignon, 1985; Yoffe, 1997). Commenting on the ACTFL (American Council on the Teaching of Foreign Languages) OPI, Yoffe (1997) observes that the tester and the test-taker are "clearly not in equal positions".
The asymmetry is not specific to the OPI but is inherent in the notion of an ‘interview’ as an exchange wherein one person solicits information in order to arrive at a decision while the interlocutor produces what he or she perceives as most valued. The interviewee is, in most cases, acutely aware of the ramifications of the OPI rating and is, consequently, under a great deal of stress.
Van Lier (1989) also challenges the validity of the OPI in terms of this asymmetry, because "the candidate speaks as to a superior and is unwilling to take the initiative" (van Lier, 1989). Under such an unequal relationship, features of the speech discourse, such as turn-taking, topic nomination and development, and repair strategies, are all substantially different from those of normal conversational exchanges (see van Lier, 1989).
c. Issue of interviewer variation
Given that the interviewer has considerable power over the examinee in an interview, concerns have been raised about the effect of the interlocutor (examiner) on the candidate's oral performance. Different interviewers vary in their approaches and attitudes toward the interview. Brown (2003) warns of the danger such variation poses to fairness. O'Sullivan (2000) conducted an empirical study indicating that learners perform better when interviewed by a woman, regardless of the sex of the learner. Underhill (1987:31) expresses concern that unscripted "flexibility… means that there will be a considerable divergence between what different learners say, which makes a test more difficult to assess with consistency and reliability."
Testing Speaking In Pairs
There has been a shift toward a paired-speakers format, in which two assessors examine two candidates at a time. One assessor interacts with the two candidates and rates them on a global scale, while the other does not take part in the interaction and assesses using an analytic scale. The paired oral test has been used as part of large-scale, international, standardized oral proficiency tests since the late 1980s (Ildikó, 2001). The Key English Test (KET), Preliminary English Test (PET), First Certificate in English (FCE) and Certificate in Advanced English (CAE) make use of the paired format. In a typical test, the interaction begins with a warm-up, in which the examinees introduce themselves to the interlocutor, followed by two paired interaction tasks. The talk may begin with each candidate comparing two photographs, as in the Cambridge First Certificate (Luoma, 2004), followed by a two-way collaborative task between the two candidates based on more photographs, artwork or computer graphics, and end with a three-way discussion between the two examinees and the interlocutor about a general theme related to the earlier discussion.
The Advantages Of The Paired Interview Format
Many researchers claim that the paired format is preferable to the OPI, for the following reasons:
a. The changed role of the interviewer frees the assessors to pay closer attention to the production of each candidate than would be possible if they were participants themselves (Luoma, 2004).
b. The reduced asymmetry allows more varied interaction patterns, which elicit a broader sample of discourse and more turn-taking than is possible in the highly asymmetrical traditional interview (Taylor, 2000).
c. A task type based on pair work will generate a positive washback effect on classroom teaching and learning (Ildikó, 2001). When the instructor follows Communicative Language Teaching (CLT) methodology, where pair work may take up a significant portion of a class, it is appropriate to incorporate similar activities in the exam. In that way the exam itself is much better integrated into the fabric of the course, and students can be tested on performance related to activities done in class. There may also be benefits for student motivation: if students are aware that they will be tested on activities similar to those done in class, they may have more incentive to be attentive and use class time effectively.
The Disadvantages Of The Paired Interview Format
There are, however, also concerns voiced regarding the paired format.
a. Mismatches between peer interactants
The most frequently raised criticisms of the paired speaking test relate to various forms of mismatch between peer interactants (Fulcher, 2003). Ildikó (2001) points out that when a candidate has to work with an incomprehensible or uncomprehending peer partner, the candidate's performance may suffer. In such cases it is consequently very difficult to make a valid assessment of the candidates' abilities.
b. Lack of familiarity between peer interactants
The extent to which this testing format actually reduces test-takers' anxiety compared with other formats remains doubtful (Fulcher, 2003). O'Sullivan (2002) suggests that the spontaneous support offered by a friend reduces anxiety and benefits task performance under experimental conditions. However, the chances are quite high that an examinee will meet a stranger as his or her peer interactant, and it is hard to imagine how strangers can carry out a naturally flowing conversation. Estrangement, misinterpretation and even communication breakdown may occur during their talk.
c. Lack of control of the discussion
Problems arise if the examiner loses control of the oral task (Luoma, 2004). When the instructions and task materials are not clear enough to facilitate the discussion, the examinees' conversation may go astray. Luoma (2004) points out that testers often feel uncertain about how much responsibility they should give to the examinees. Furthermore, without elicitation from the examiner, examinees do not know what kind of performance will earn them good results; when one of the examinees has said too little, the examiner ought to monitor the interaction and step in to give help when necessary.
Semi-Direct Speaking Tests
The term "semi-direct" is employed by Clark (1979:36) to describe those tests that elicit speech "by means of tape recordings, printed test booklets, or other 'non-human' elicitation procedures, rather than through face-to-face conversation with a live interlocutor." Appearing during the 1970s as an innovative adaptation of the traditional OPI, the semi-direct method normally follows the general structure of the OPI and makes an audio recording of the test taker's performance, which is later rated by one or more trained assessors (Malone, 2000). Examples of the semi-direct type used in the U.S.A. are the Simulated Oral Proficiency Interview (SOPI) and the Test of Spoken English (TSE) (Ferguson, 2009). Examples in the U.K. include the Test in English for Educational Purposes (TEEP) and the Oxford-ARELS Examinations (O'Loughlin, 2001). Another mode of delivery is testing by telephone, as in the PhonePass test (which mainly consists of reading sentences aloud or repeating sentences), or even by video-conferencing (Ferguson, 2009).
The Advantages Of The Semi-Direct Test Type
First, the semi-direct test is more cost-efficient than direct tests, because many candidates can be tested simultaneously in a language laboratory, administered by any teacher, language lab technician or aide, with each candidate hearing taped questions and having his or her responses recorded (Malone, 2000).
Second, the mode of testing is quite flexible. It provides a practical solution in situations where it is not possible to deliver a direct test (O’Loughlin, 2001), and it can be adapted to the desired level of examinee proficiency and to specific examinee age groups, backgrounds, and professions (Malone, 2000).
Third, semi-direct testing represents an attempt to standardize the assessment of speaking while retaining the communicative basis of the OPI (Shohamy, 1994). It offers the same quality of interview to all examinees, and all examinees respond to the same questions so as to remove the effect that the human interlocutor will have on the candidate (Malone, 2000). The uniformity of the elicitation procedure greatly increases the reliability of the test.
Some empirical studies (Stansfield, 1991) show high correlations (0.89–0.95) between direct and semi-direct tests, indicating that the two formats can measure the same language abilities and that the SOPI can serve as an equivalent of, and surrogate for, the OPI; a sketch of how such a correlation between paired scores can be computed follows below.
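To make the correlation claim concrete, the following is a minimal, illustrative sketch of computing a Pearson correlation between scores from a direct (OPI-style) and a semi-direct (SOPI-style) administration. All score data, the 0-5 scale, and the variable names are invented for illustration; they are not drawn from any of the studies cited above.

```python
# Hypothetical illustration: correlating scores awarded to the same
# examinees under a direct and a semi-direct test format.
from math import sqrt

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson product-moment correlation between two paired score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Invented band scores (0-5 scale) for ten examinees on each format.
opi_scores  = [2.0, 3.5, 4.0, 2.5, 3.0, 4.5, 1.5, 3.5, 2.5, 4.0]
sopi_scores = [2.5, 3.5, 4.0, 2.0, 3.5, 4.5, 1.5, 3.0, 2.5, 4.5]

print(f"r = {pearson_r(opi_scores, sopi_scores):.2f}")  # ~0.93 for this data
```

For this invented data the correlation is roughly 0.93, in the range reported by Stansfield (1991); an operational comparability study would, of course, use real paired administrations and report further evidence. However, the semi-direct format also has disadvantages, discussed next.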
The Disadvantages Of The Semi-Direct Test Type
First, the speaking task in a semi-direct oral test is less realistic and more artificial than in the OPI (Clark, 1979; Underhill, 1987). Examinees use artificial language to "respond to tape-recorded questions — situations the examinee is not likely to encounter in a real-life setting" (Clark, 1979:38). They may feel stressed speaking to a microphone rather than to another person, especially if they are not accustomed to the laboratory setting (O'Loughlin, 2001).
Second, the communicative strategies and speech discourse elicited in semi-direct tests such as the SOPI are quite different from those found in typical face-to-face interaction, being more formal and less conversation-like (Shohamy, 1994). Candidates tend to use written-style language in the tape-mediated test, producing more of a report or narration, whereas in the OPI they focus more on interaction and on the delivery of meaning.
Third, there are often technical problems that can result in poor quality recordings or even no recording in the SOPI format (Underhill, 1987).
In conclusion, one cannot assume any equivalence between a face-to-face test and a semi-direct test (Shohamy, 1994). It may be that they measure different things, different constructs, so the mode of test delivery should be adopted on the basis of test purpose, accuracy requirements, practicability, and impartiality (Shohamy, 1994). Stansfield (1991) proposes that the OPI is more applicable to placement testing and curriculum evaluation, while the SOPI is more appropriate for large-scale testing with a requirement of high reliability.
Marking Of Speaking Tests
Marking and scoring are a challenge in assessing second language oral proficiency. Since only a few elements of the speaking skill can be scored objectively, human judgment plays a major role in assessment. How to establish valid, reliable and effective marking criteria and scales and high-quality scoring instruments has always been central to the performance testing of speaking (Luoma, 2004). It is important to have clear, explicit criteria that describe the performance, and equally important for raters to understand and apply these criteria, making it possible to score performances consistently and reliably; a sketch of two simple rater-consistency checks follows below. For these reasons, rating and rating scales have been a central focus of research in the testing of speaking (Ferguson, 2009).
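As an illustration of what scoring "consistently and reliably" can mean in practice, here is a minimal, hypothetical sketch of two common rater-consistency checks: the exact-agreement rate and Cohen's kappa for band scores awarded by two raters to the same recorded performances. The band scale and all data are invented, not taken from any test discussed in this chapter.

```python
# Hypothetical illustration: inter-rater consistency for band scores.
from collections import Counter

def exact_agreement(r1: list[int], r2: list[int]) -> float:
    """Proportion of examinees to whom both raters give the same band."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1: list[int], r2: list[int]) -> float:
    """Agreement corrected for the agreement expected by chance."""
    n = len(r1)
    p_observed = exact_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    bands = set(r1) | set(r2)
    p_chance = sum((c1[b] / n) * (c2[b] / n) for b in bands)
    return (p_observed - p_chance) / (1 - p_chance)

# Invented bands (1-5) from two raters for ten examinees.
rater_a = [3, 4, 2, 5, 3, 4, 2, 3, 4, 5]
rater_b = [3, 4, 3, 5, 3, 4, 2, 3, 3, 5]

print(f"agreement = {exact_agreement(rater_a, rater_b):.2f}")  # 0.80
print(f"kappa     = {cohens_kappa(rater_a, rater_b):.2f}")     # ~0.73
```

Kappa discounts the agreement two raters would reach by chance alone, which is why it (or a similar chance-corrected statistic) is often preferred to the raw agreement rate.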
Definition Of Rating Scales
A rating scale, also referred to as a "scoring rubric" or "proficiency scale", is defined by Davies et al. as follows (see Fulcher, 2003):
·consisting of a series of bands or levels to which descriptions are attached
·providing an operational definition of the constructs to be measured in the test
·requiring training for its effective operation
Holistic And Analytic Rating Scales
There are different types of rating scales used for scoring speech samples. One traditional and commonly used distinction is between holistic and analytic rating scales. Holistic rating scales are also referred to as global rating scales. With these scales, the rater attempts to match the speech sample with a particular band whose descriptors specify a range of defining characteristics of speech at that level. A single score is given to each speech sample, either impressionistically or guided by a rating scale, to encapsulate all the features of the sample (Bachman & Palmer, 1996).
Analytic rating scales consist of separate scales for different aspects of speaking ability (e.g. grammar/vocabulary, pronunciation, fluency, interactional management). A score is given for each aspect (or dimension), and the resulting scores may be combined in a variety of ways to produce a single composite overall score; one possible combination is sketched below. Analytic scales include detailed guidance for raters and provide rich information on specific strengths and weaknesses in examinee performance (Fulcher, 2003). They are particularly useful for diagnostic purposes and for providing a profile of competence in the different aspects of speaking ability (Ferguson, 2009). The type of scale selected for a particular test of speaking will depend upon the purpose of the test.
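The following is a minimal sketch of one common way to combine analytic subscores into a composite: a weighted average. The criteria, weights and scores here are invented for illustration; operational tests define their own dimensions, weights and combination rules.

```python
# Hypothetical analytic criteria with weights summing to 1.0.
WEIGHTS = {
    "grammar_vocabulary": 0.30,
    "pronunciation": 0.20,
    "fluency": 0.25,
    "interactional_management": 0.25,
}

def composite(subscores: dict[str, float]) -> float:
    """Weighted composite of analytic band scores (here on a 0-5 scale)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # sanity-check the weights
    return sum(WEIGHTS[criterion] * score for criterion, score in subscores.items())

# Invented subscores for one examinee.
examinee = {
    "grammar_vocabulary": 4.0,
    "pronunciation": 3.5,
    "fluency": 3.0,
    "interactional_management": 4.0,
}

print(f"composite = {composite(examinee):.2f}")  # 3.65
```

Whether such a composite is reported, and how the weights are set, depends on the purpose of the test; for diagnostic feedback the profile of separate subscores is often more informative than the single number.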
Validity And Reliability Of Speaking Tests
Bachman And Palmer's Theories On Test Usefulness
The primary purpose of a language test is to provide a measure that can be interpreted as an indicator of an individual's language ability (Bachman, 1990; Bachman & Palmer, 1996). Bachman and Palmer (1996) propose that test usefulness comprises six test qualities: reliability, construct validity, authenticity, interactiveness, impact (washback) and practicality. Their notion of usefulness can be expressed as in Figure 2.3:
Usefulness = Reliability + Construct validity + Authenticity + Interactiveness + Impact + Practicality
These qualities are the main criteria used to evaluate a test. “Two of the qualities — reliability and validity — are critical for tests and are sometimes referred to as essential measurement qualities” (Bachman & Palmer, 1996:19), because they are the “major justification for using test scores as a basis for making inferences or decisions” (ibid). The definitions of types of validity and reliability will be presented in this section.
Validity And Reliability
The following quotation from the AERA (American Educational Research Association) indicates:
“Validity is the most important consideration in test evaluation. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores. Test validation is the process of accumulating evidence to support such inferences. A variety of inferences may be made from scores produced by a given test, and there are many ways of accumulating evidence to support any particular inference. Validity, however, is a unitary concept. Although evidence may be accumulated in many ways, validity always refers to the degree to which that evidence supports the inferences that are made from the score. The inferences regarding specific uses of a test are validated, not the test itself.”
(AERA et al., 1985: 9)
Messick stresses that "it is important to note that validity is a matter of degree, not all or none" (Messick, 1989).