March 14 | 5 minute read | Evidence-Based Practice

Teaching Without Appropriate Training

By: Mike Bird

This incident happened in a comprehensive school in the North of England some years ago now. The lesson transcription was taken from a series of lessons a history teacher, Ed, had recorded of himself teaching a Year 8 class. He was working on a Masters degree at the time and collecting evidence for his own practitioner research. I was working closely with him, supporting his research and helping to arrange the transcription of the audio recordings.

Ed was an ambitious and capable teacher who wanted to learn more about his practice but who had only recently entered the profession. By his own admission he did not feel as grounded or as confident as others in the department, and he felt that completing a Masters degree would help him to develop and grow. This particular incident became very significant for him and displaced his originally intended research focus.

The recommendations of the training provider centred on tying formative assessment to summative outcomes by engaging pupils explicitly with the criteria given for each assessment activity. There was very little mention of any other recommendations from AfL researchers (Black, Harrison, Lee, Marshall, & Wiliam, 2003). Ed’s history department produced materials inspired by the training, which aimed to help pupils understand how they were summatively assessed. This involved the use of ‘child-friendly’ attainment descriptions and sample answers in a series of assessments across, in this case, the whole of Key Stage 3. These were intended to track pupils’ progression, defined here in the terminology of National Curriculum levels. The practice of using National Curriculum levels (present in iterations of the National Curriculum prior to 2014) to assess individual pieces of work has already been widely criticised (Chapman, 2021). However, it is not the purpose of this post to rehash those criticisms, for the issue presented here is more profound.

This is what happened when the pupils tried to assess themselves:

Ed: So all you have to do is look at the sample answers and then your own. OK… Listen – Jack! Try to judge what level your own fits into and try to look at what makes the better answers better.
Sean: So I’ve got to what…
Melissa: I don’t get it…
Sean: Look at the criteria and
Ed: …and write down a target to focus on for next time.
Melissa and some others: I really don’t get this.
Ed: Jack do you understand? [repeats the explanation two further times]
[Pupils begin the task – some still uncertain.]
Harry: So can I put that I’m a level 5 because this answer looks about the same as this one?
Ed: Yes I don’t see why not […] Yes I can see why you think it is a level 5
Harry: So I need to set a target then?
Ed: Yes why not look at the level 6 answer and see what it is about it that makes it better. Try to put that down and aim for it next time.
Kate: (whispers) Did Harry say he was a level 5?
Arthur: But you’re not a 5.
Kate: (louder) I am – if Harry is then I am too.
[…]
Harry: Can I just put down that I should write longer answers?
David: Can’t you just tell us what to write?
Ed: What do you mean? (talking to Harry)
Harry: Well the level 6 answer is longer than the level 5.
Ed: Yes but what is it about the level 6 answer that is better than level 5 apart from that it’s just longer?
Harry: (looks bemused)
Paul: My answer is the same length as yours so I must be a level 5 too.

It was troubling for Ed that pupils struggled with this activity. They seemed unsure of the purpose of what they were doing and were mainly guessing, consulting each other and asking Ed which level their answers were. This suggested that the abstract, descriptive ladder made no sense to them, and that the sample answers did not help them either. Furthermore, after encouragement from Ed to keep trying and write something down, the pupils began to estimate their levels based on their perception of how competent they felt at history in general, with no reference to their answers, the criteria or the samples. This generated new and unhelpful meanings around the levels: those who felt they were no good at history were giving themselves levels 3 and 4, and this was informing those who felt they were better than them to give themselves 5s and 6s. In this way, the activity produced a tacit ‘norm-referencing’ of each other’s capabilities, with pupils judging themselves as above or below standard but without explicit reference to criteria or future learning goals, and without any engagement with the reasons for doing the activity. This unsatisfactory experience of AfL was unfortunately also quite typical of other occasions in other classes when pupils were asked to consider their completed pieces of work against pre-assigned criteria and work samples.

The whole point of AfL originally was to involve individual learners and prompt them to take action to challenge their existing understandings (Sadler, 1989; Torrance & Pryor, 2001; Black & Wiliam, 2006b; Black & Wiliam, 2009). Teacher beliefs also play a significant role in how formative assessment is conceived and played out in classrooms (Black, 1999; Harlen, 2005). MacBeath, Pedder, & Swaffield (2007) usefully note that not all teachers may be aware of, or subscribe to, the tenets underpinning formative assessment, and that this often leads to practices antithetical to the principles of AfL (2007, p. 67). Can this explain how intelligent and capable teachers, like Ed, could preside over disasters like these?

Black, P. (2015). Formative Assessment – an optimistic but incomplete vision. Assessment in Education: Principles, Policy & Practice, 22(1), 161-177.

Black, P., & Wiliam, D. (2006b). Developing a Theory of Formative Assessment. In J. Gardner (Ed.), Assessment and Learning (pp. 81-100). London: Sage.

Black, P., & Wiliam, D. (2009). Developing the Theory of Formative Assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5-31.

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for Learning. Maidenhead: Open University Press.

Chapman, A. (Ed.). (2021). Knowing History in Schools: Powerful knowledge and the powers of knowledge. London: UCL Press.

Harlen, W. (2005). Teachers’ Summative Practices and Assessment for Learning – Tensions and Synergies. The Curriculum Journal, 16(2), 207-223.

MacBeath, J., Marshall, B., & Swaffield, S. (2007). Case Studies of LHTL from Secondary Schools. In M. James, R. McCormick, P. Black, P. Carmichael, M.-J. Drummond, A. Fox, . . . D. Wiliam, Improving Learning How to Learn (TLRP) (pp. 144-173). London and New York: Routledge.

Ofsted. (2011). History For All. London: Department for Education.

Sadler, D. R. (1989). Formative Assessment and the Design of Instructional Systems. Instructional Science, 18, 119-144.

Torrance, H., & Pryor, J. (2001). Developing Formative Assessment in the Classroom: Using Action Research to Explore and Modify Theory. British Educational Research Journal, ?, 615-631.

Wylie, E. C., & Lyon, C. (2015). The fidelity of formative assessment implementation: issues of breadth and quality. Assessment in Education: Principles, Policy & Practice, 22(1), 140-160.