In my previous post about the new draft GCSE MFL mark schemes I focused on the Foundation Speaking Conversation grids. Remember that it is really only the Speaking and Writing grids that need close attention, as the Listening and Reading mark schemes are largely objective (i.e. one point for one correct answer).
This time I shall look at the mark schemes for the Higher Writing question. You'll know that the marking of Writing controlled assessments has been a bone of contention ever since they were introduced, so will the new mark schemes lead to fairer and more consistent grading?
Here is how the awarding bodies allocate marks for the second composition (non-overlap):
AQA: 32 marks out of 60 (Content 15, Range 12, Accuracy 5)
Pearson Edexcel: 28 marks out of 60 (Communication/Content 14, Knowledge/Accuracy 14)
OCR: 24 marks out of 60 (Content 12, Language 12)
Eduqas: 30 marks out of 60 (Communication 16, Knowledge/Accuracy 7, Range 7) - unlike the other boards, this is the overlap question
Note that the Eduqas Communication mark is awarded differently to that of the other boards. They say:
The candidate will be required to give ten responses to the question set.
Each response will be assessed for Communication according to the following criteria:
2 - Response is complete, appropriate and without ambiguity.
1 - Response is partially complete or with some ambiguity.
0 - Inappropriate, incomprehensible, or no response.
This has the potential to be more objective than the other mark schemes, but also gives the examiner less leeway to reward an excellent candidate who may miss a point or two.
There is some significant variation in approach here in relation to range and accuracy. All boards allocate about half the marks for appropriate content. Pearson Edexcel and OCR incorporate accuracy and range into a single mark, whilst AQA and Eduqas separate out accuracy. My own preference would be to mark range and accuracy separately. If you don't, this can cause issues with candidates who write plenty of information, with a good range of vocabulary, but inaccurately. Conversely, some may play safe by being accurate but using a narrower range.
Next I'm going to have a look at the descriptors just below the middle of the mark range for Language/Range/Accuracy. Direct comparisons are not easy because of the different approaches adopted by each awarding body, but I'll pick out interesting points of comparison afterwards.
AQA:
Some variety of appropriate vocabulary and structures used. Longer sentences are attempted, using appropriate linking words, often successfully. More accurate than inaccurate. The intended meaning is generally clear. Verb and tense formations are sometimes correct.

Pearson Edexcel:
Uses familiar and predictable vocabulary and grammatical structures. There may be the occasional use of a complex item. Uses tenses and time frames, with some success, with reference to past, present and future events, as appropriate to the task. Some evidence of manipulation of language to produce sentences but this is not sustained. Generally accurate in using straightforward language, but there are major errors with verbs and tenses.

OCR:
Simple language is mostly accurate. Errors do not impede communication. A good variety of vocabulary, appropriately used in places. A good variety of structures; simple structures used appropriately. Reference to past, present and future events. Language is generally fluent and generally manipulated well. Errors in simple structures sometimes impede meaning. A large number of ...

Eduqas:
A limited range of vocabulary and structures is produced. Language for the most part is simple and use is made of uncomplicated structures, although not always accurately.
Compare OCR and Eduqas - for similar marks Eduqas expect "limited range" whilst OCR expect "good variety". This makes OCR look harder.
Compare Pearson/OCR with AQA/Eduqas - the former expect past, present and future (with some error). AQA make no reference to tense, though marks would clearly be lost for Content if subject matter were not communicated in the right tense. I like to see tense referred to explicitly.
I like the fact that Pearson Edexcel's descriptor is more detailed.
I've also looked at the top of the mark range in the range/accuracy grids. Again, there are interesting points of comparison, which I discuss below. Bear in mind there is less wiggle room in the Eduqas grid.
AQA:
Very good variety of appropriate vocabulary and structures used. More complex sentences are handled with confidence, producing a fluent piece of coherent writing. Accurate, although there may be a few errors, especially in attempts at more complex structures. Verbs and tense formations are secure.

Pearson Edexcel:
Uses a wide range of vocabulary and grammatical structures, including effective use of complex items. Uses tenses and time frames successfully with reference to past, present and future events, as appropriate to the task. Clear ability to manipulate language to produce longer, fluent sentences with ease. Very accurate with only isolated minor errors, e.g. spellings, genders.

OCR:
Language is almost fully accurate. Complex language is mostly accurate. A very good variety of vocabulary, used entirely appropriately. A variety of complex structures, used appropriately. Complex tenses are used. Language is highly fluent and creatively and independently manipulated.

Eduqas:
Writing is mostly accurate with few mistakes. Verbs and time references are secure. Principles of grammar are sound. A fluid and fluent style is developing. Appropriate style and register is always maintained. The language is sophisticated. Uses a wide variety of tenses, vocabulary and structures. Language is almost always totally correct.*
There is clearly broad similarity here.
*I find it odd that Eduqas include a reference to accuracy in their Range descriptor. Do they mean accurate? Or appropriate? Is there a mismatch between "mostly accurate" and "almost always totally correct"? I find this a bit sloppy.
AQA's reference to "a few errors" may be a little vague, but at standardisation a numerical figure may be put on this to help examiners. Pearson's "isolated minor errors" seems a little tougher. It would be useful to know what AQA mean by "more complex sentences" - will we see a return to what we used to use, i.e. the need for subordinate clauses to indicate complexity? This was useful.
I find OCR's descriptor a little confusing: "Language is almost fully accurate", but then "complex language is mostly accurate". Did they mean "simple language is almost fully accurate"? I also dislike the use of "creatively" and "independently". An examiner cannot know for sure whether something has been pre-learned and reproduced or whether it is the result of creative or independent thought. If it's good, you reward it. There is nothing wrong with pre-learning for an exam.
Well, this may seem like nit-picking, but these things are actually very important. My department and I found AQA's CA Writing mark schemes too loose, and they may have contributed to inconsistent marks from examiners (the other boards too, no doubt).
I like to see detailed descriptors with useful items to latch on to: "at least three subordinate clauses", "past, present and future", "at least four adjectives and adverbs", "at least three linking words" and so on. This may be too much for a general mark scheme, and you may fear that it would confine candidates too much, but I imagine examiners would find this type of thing useful when moderating/standardising. In this way we may remove some of the subjectivity which inevitably arises when marking compositions.
Ultimately there will be problems with marking compositions, but the new system will be more reliable than the existing one, partly because all candidates will be doing the same questions.
Finally, just for fun: here is a go at a top Language mark descriptor based on work I have read over the years:
Very accurate, with no more than ten minor errors and two major errors. Use of past, present and future time frames. Much complex language, with at least five complex sentences (subordinate clauses), five or more adjectives and adverbs, three linking words and two modal verbs. Wide range of vocabulary and Higher Tier linguistic structures.
And a mid-range descriptor:
Between five and eight major errors and frequent minor errors. At least five successful attempts at different time frames. At least three linking words and one subordinate clause. At least one adjective and one adverb. Meaning nearly always clear to a sympathetic native speaker.