What about reading aloud in the new GCSE exam?


One of the more eyebrow-raising proposals for the new GCSE (first teaching in 2023, first exams in 2025, so current Y7) is the inclusion of a reading aloud test as part of the speaking assessment. When I mentioned this to my wife last night (she has a background in English Language Teaching), her reaction was one of surprise. She expressed it rather more strongly.

It's certainly a bold move. What are we to make of it?

Firstly, it's pretty clear why it's there. The new GCSE is heavily influenced by the findings of the TSC Review (2016) and the recommendations of ncelp.org, with its emphasis on the three pillars of vocabulary, grammar and phonics. The committee which has developed these proposals shares members with the TSC Review group, and they have decided that if the new exam is to influence and reflect so-called good practice, then reading aloud needs to be assessed in order to show that phonics skill is adequate. Can pupils decode written language and speak it accurately? Do they know their SSCs (Sound-Spelling Correspondences)? It's reminiscent of how the phonics test is used in primary schools, partly to evaluate pupil skills, but partly to ensure teachers teach phonics.

In that sense, I see the point of including a reading aloud test. It's obvious that, because of the backwash ("teaching to the test") effect, teachers will make sure learners practise reading aloud a lot. And I don't have a great issue with that, since my view has always been that reading aloud is a very worthwhile activity in class at all levels. I won't go into the reasons now, but if you would like to explore them, copy and paste these links:

https://gianfrancoconti.com/2018/03/16/my-favourite-read-aloud-task-and-how-i-use-them/

https://frenchteachernet.blogspot.com/2020/02/reading-production-effect-and-memory.html

To look at this more broadly, let's get back to the basics of assessment: validity and reliability. It is said that good tests must be both valid and reliable. What does this mean? Here's a reminder.

Validity is about whether a test measures what it's meant to measure. In this case, if the aim is to measure whether a student can read accurately and fluently, with suitable intonation, then a reading aloud test should achieve that purpose.

Reliability concerns the degree to which an assessment produces consistent results. In this case, it would come down to the mark scheme and how accurately teachers can use it. Since this type of activity requires a level-based mark scheme (a grid with descriptors such as "pronounces very accurately with hardly any errors" or "pronunciation is anglicised"), it will be somewhat unreliable, as research is clear that teachers use level-based mark schemes inconsistently. But mark schemes like this are common across various subjects, so the name of the game is to make them as foolproof as possible. In short, you can't be entirely objective when assessing reading aloud.

So far so good.

I suppose the issue arises over whether reading aloud is what is called an authentic assessment. Authentic assessment is a somewhat woolly term for which I found a few different definitions. Here is one:

"An assessment requiring students to use the same competencies, or combinations of knowledge, skills, and attitudes that they need to apply in the criterion situation in professional life." (Gulikers, Bastiaens and Kirschner, 2004).

Now, I see a couple of reasons why reading aloud in an exam is controversial. Firstly, we've never done it before at GCSE. Secondly, it's not a task we typically do outside the classroom, so in that sense it's not an authentic task. Holding conversations is something we do outside the classroom, so it seems logical to test that ability. Listening to people talk and reading texts is also something we do, so it's not hard to justify activities which assess a student's ability to listen and read.

On the other hand, there are plenty of other assessment types which lack authenticity, according to the definition above. How often do we write little compositions? How often do we have to describe a photograph? How often do we write essays about film and literature?

In reality, exam tasks only occasionally resemble real life activities, and that applies to pretty much all subject areas. On the whole they are not about preparing people for life, but measuring what they know and can do so far.

So where does that leave us? At the moment, my feeling is that reading aloud in an exam is an acceptable task, as long as we keep in mind that it only measures a limited range of skills. Accurate pronunciation and intonation can also be assessed elsewhere, so this is very specifically about decoding from the written word. I would hope that few marks are allocated to it. 

Given that the format of the assessment is likely to be a short text to read followed by questions about it (there may be alternatives), the exam task would resemble the sort of desirable activity that ought to be common currency in classrooms. The backwash effect should be positive, with more students doing choral reading aloud, individual reading aloud, read aloud games and paired reading aloud. All of these have, in my opinion, great value for language learning.

So on balance, as I write this now, I disagree with my OH on this, but alternative views are welcome. Do comment!


Comments

  1. I did French O level in 1975; part of the oral exam was to read a passage aloud.

  2. Reading aloud is a real life activity if you take into account reading a teleprompter, reading a PowerPoint presentation, reading a speech, reading stories, reading scripts, etc, etc.


