The topic of the National English Speaking Exam in secondary education is an important one. There are official guidelines, but there are also challenges that have raised passion and controversy regarding its effectiveness in assessing candidates for the BEPC (8th Grade, O-Level) and the Baccalaureate (12th Grade, A-Level). At some point, some officials at the MOE suggested that this oral part of the English test simply be removed. But it seems to be here to stay, given the importance of the English language. In this talk, I’d like to highlight the following points:
- The rationale for the introduction of an oral component in the English tests for the BEPC and the Bac exams
- Format and Administration
- Evaluation in terms of practicality, validity and reliability
- Challenges that need to be taken up
The rationale for the introduction of an oral part in the Bac and BEPC exams
EFL courses in most countries are based on the four main macro-skills: reading, writing, speaking and listening. With reference to a well-known document written by the then ELT officials (The Objectives of ELT in CI, 1996) and signed by two ministers, English language learning in the West African context should be in line with the integration goals of West African states within such organizations as ECOWAS, which comprise both Francophone and Anglophone countries. This document recommends that ELT emphasize the development of the four macro-skills in an integrated way through CLT. This explains why both the Bac and the BEPC have two main components:
Written part: a reading passage with comprehension questions; a grammar section (Language in Use); and a piece of writing (letter writing, opinion giving, article writing, describing, telling a story).
Oral part: The candidate has a direct exchange with the examiner based on material which may be a short passage or pictures.
Format and Administration
Objectives: According to the reference handbook for the administration of the BEPC and the Bac, the oral test aims to check the candidate’s comprehension and analysis of a text or an iconographic material through questions and answers. In this regard, it specifies that the material is just a pretext for discussion and not an end in itself. The reference handbook also adds that the rater or examiner can consider the candidate’s “intellectual curiosity” and “interest” in the topic as bonus criteria for the assessment of his/her performance.
Format:
The oral part consists of a direct exchange between the candidate and the examiner, based on material which may be a short passage or images.
Materials:
Texts and pictures related to topics from the curriculum/syllabus. For reasons that remain unclear, only BEPC candidates use both pictures and texts, while Terminale (final-year) classes use texts only.
Administration:
Prior to the beginning of the oral exam, a moderator or harmonizer, chosen from among experienced teachers, holds a meeting with the examiners/interviewers to remind them of the official instructions: equal chances for all candidates, no visiting fellow interviewers, no phone use, etc. Next, the moderator interviews one or two candidates to set an example.
Timing: officially 30 minutes per candidate (15 minutes of preparation + 15 minutes of interview)
Number of candidates: officially 32 per day, i.e. 16 per half-day
Grades: out of 20
Weighting: coefficient 1 vs. 2 for writing (30% of the English test)
Evaluation of Oral Test
From three perspectives: practicality, content validity and reliability
Practicality:
A test is said to be practical when it is easy to administer and cost-effective. Does our oral test meet this criterion? I would say ‘yes’, because all we need are texts and pictures as materials, plus scorers.
Content Validity:
Validity refers to the extent to which a test actually measures what it intends to measure. Does the content tested reflect what has been taught, or what is in the curriculum?
Based on the topics discussed in the materials used for the test, we could say ‘yes’.
Reliability:
How reliable is this oral test? Reliability is another criterion by which we can judge whether a test is good. It is the ability of a test to produce the same results when given on different occasions. What could make the test reliable?
Do we use a rubric that describes the different levels of performance of each candidate and the grade allotted to each level? And if such a rubric exists, are examiners trained in how to use it?
We used to have one very simplistic rubric listing the five skill categories targeted in the test: comprehension, fluency, pronunciation, vocabulary and grammar. Against each category is a scale of grades ranging from 1 to 4. The examiner circles the appropriate grade for each category and totals all the categories for a score out of 20. How many scorers actually use this rubric? We cannot tell; sometimes it is available and sometimes it is not.
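The arithmetic of this rubric is simple enough to sketch in a few lines of Python. The five category names and the 1-4 scale come from the rubric described above; the function name and the validity checks are my own illustration, not part of any official document:

```python
# The five categories listed on the rubric (from the description above).
CATEGORIES = ["Comprehension", "Fluency", "Pronunciation", "Vocabulary", "Grammar"]

def rubric_total(grades):
    """Sum the circled grade (1-4) for each of the five categories.

    Five categories at a maximum of 4 points each yield a total out of 20,
    matching the score the examiner reports.
    """
    if set(grades) != set(CATEGORIES):
        raise ValueError("exactly one grade per category is required")
    for category, grade in grades.items():
        if grade not in (1, 2, 3, 4):
            raise ValueError(f"{category}: grade must be between 1 and 4")
    return sum(grades.values())

# Example: a hypothetical mid-range candidate.
scores = {"Comprehension": 3, "Fluency": 2, "Pronunciation": 3,
          "Vocabulary": 3, "Grammar": 2}
print(rubric_total(scores))  # 13 (out of 20)
```

Writing the check down this way also makes the rubric’s weakness visible: every category weighs the same, and nothing in the arithmetic tells us whether two examiners would circle the same grades in the first place.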
As you can see, the role of the harmonizer/moderator is to help reduce both intra-scorer and inter-scorer reliability issues, but do they succeed? Hard to say.
In addition to these objective issues, there’s the question of bribery that seriously affects the reliability of the test.
Challenges to take up
What is likely to happen in the future? Will the MOE decide to stop administering the oral exam because of its lack of reliability? To avoid that outcome, several challenges need to be taken up:
- Set up a more reliable rubric: A rubric that describes in detail different levels of performance with corresponding grades (band descriptors)
- Train scorers in how to use it
- Reinforce the fight against bribery
- Provide students with the marking guide in advance, so they understand the nature of good work and can evaluate the quality of their own performance in the assessment
Will these be enough to restore the reliability of the Oral exam? That is another issue.
Dr. Appia,
Thank you for this detailed breakdown and your concerns with regard to the reliability of the secondary school oral examinations.
As a former English teacher at College Pascal in Marcory for five years, I am aware of the challenges of administering a fair oral examination. Corruption is rampant in some areas, and this further complicates the work of proctors. However, I do not think eliminating the oral exam is a good idea. Granted, this is a decision to be made by the higher-ups. But I strongly wish to see some harmonization work in order to increase the reliability of those exams.
One reason to maintain the oral exams is rather obvious and ties into the function of English. English is primarily for communication purposes. As such, all four skill areas that you listed above (reading, writing, listening and speaking) ought to be an integral part of classroom instruction and assessment. I must say, based on my experience from 1991 to 1996, that listening is the least taught and assessed skill. But in the current age of technological advancement, this deficiency can easily be addressed with internet-connected cell phones. I understand that access to technology might be limited in local schools. However, most teachers, if not all, have a cell phone and can easily accommodate their students.
With regard to rubrics, it is important to adopt at least one per region (DREN). A uniform national rubric might be more challenging to put in place, simply because of the energy it might take to train scorers and proctors at the national level. Once you have a reliable rubric to assess the students in all four skills, the next step would be to train the proctors so they feel comfortable using it. Initially, it might be necessary to train a limited number of lead teachers or administrators so that they, too, can go on to train their teachers. Where I currently teach, the CUNY Language Immersion Program, we have a norming session every time we have to score students’ essays, whether these are incoming students or students already in the program. This ensures that we all know what a 4 looks like (sounds like, in your case), what a 1 looks like, etc. Do we always get it right? Well, no! It’s a human enterprise, and as such, it cannot be perfect. And this is why we have two scorers. Usually, when there is a two-point difference, the essay goes to a third scorer. It might be hard and time-consuming to apply this to an oral exam. However, with good training and norming at the school level (or maybe at the testing centers, where teachers would have to report one or two days early for practice, assuming the government is ready to pay them), the reliability of even individual assessors can greatly improve.
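To make the two-scorer rule concrete, here is a minimal sketch of one way it can work. Only the two-point-difference trigger comes from my description above; how the third score is combined with the first two (here, averaged with the closer of the two) is an assumption I made for illustration, and programs handle it differently:

```python
def final_score(score_a, score_b, third_scorer=None):
    """Combine two independent scores for the same candidate.

    If the two scores differ by two points or more and a third scorer is
    available, the third scorer breaks the tie. ASSUMPTION for this sketch:
    the third score is averaged with whichever of the first two is closer
    to it; actual programs may use other resolution rules.
    """
    if abs(score_a - score_b) >= 2 and third_scorer is not None:
        score_c = third_scorer()
        closer = min((score_a, score_b), key=lambda s: abs(s - score_c))
        return (score_c + closer) / 2
    # Scores agree closely enough: report the simple mean.
    return (score_a + score_b) / 2

print(final_score(14, 15))                            # 14.5: simple mean
print(final_score(10, 16, third_scorer=lambda: 15))   # 15.5: third scorer settles it
```

The point of writing it out is that the procedure is mechanical once the rubric grades exist; the expensive part, as you say, is the norming that makes two human scorers land within two points of each other in the first place.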
These are just some ideas. Feel free to contact me if you have any questions or would like to discuss your concerns further.
Dr. Etienne A. Kouakou
Hostos Community College
Bronx, New York
Hi Dr. Kouakou!
I’ve already replied to your insightful post, but I obviously did it the wrong way, as I didn’t click on ‘reply’. One of your suggestions that caught my attention, with respect to the reliability issue, was having at least two scorers for each candidate. But you know what? We did a very interesting training in this regard, back in 1997. We even went as far as suggesting the possibility for two candidates to interact on the same issue, with the two proctors playing the moderators. The proctors would then compare their grades and find a mean… We then submitted the workshop findings to the MOE decision-makers. But, as you would expect, knowing how the system works, no feedback has been provided by them so far. We may need to come back to this for a real discussion. As you rightly put it, the cost-effectiveness of the implementation of such a paradigm shift may suffer… and this may be the reason why there was no response.
Dr. Appiah,
Thank you for your reply. Yes, I do realize that things don’t always work as we wish them to. Hopefully, things will get better in time.
Best,
Dr. Kouakou
Thank you, Dr. Kouakou. This sounds like an insightful contribution. We will take good note of such valuable ideas so that we can make use of them when the time comes. I would like to seize this opportunity to encourage more contributions from teachers and teacher educators.