Wednesday, 24 May 2017

My Reflections

Musallam identified three rules to consider when trying to ‘spark’ learning: curiosity comes first, embrace the mess, and practise reflection. The third rule was the one I thought about the most. The group that plays on my mind the most is my year eleven set one group. The last set of assessment data that I collected for the group was, in a word, shocking. The assessment indicated that the group are doing quite poorly (I will go into the limitations of assessing this current cohort later). When sharing their assessment results with them, I decided to take a reassuring approach, framing this as a case of ‘trial and error’. As Beckett noted, we will fail and we will fail again, but next time we will fail better.

I realised that it was quite tempting to do a couple of shallow purple-pen activities during an AfL lesson on the mock, but on reflection this seemed more of a box-ticking exercise. Instead, I decided on a more developed approach to help students progress beyond their mock results. I spent ten to fifteen minutes a week going over the skills required to create top-band responses to the exam questions and provided the group with a set of answer scaffolding sheets for each one. Students were then asked to take one of the questions home each week to attempt independently, timing themselves. This homework task proved particularly effective: every student improved on their assessment mark, with each jumping at least one estimated grade boundary. At the end of the half-term a second assessment was conducted in exam conditions. Again the data was positive, with a ten per cent improvement in students achieving 3+LOP. This example illustrates why we assess: to reassure ourselves that we are teaching effectively and that students are learning progressively.

This brings me to the greatest barrier my department currently faces in assessing our students. Due to the redesign of grades from alphabetical to numerical, and due to the hazy information we have received concerning what constitutes an upper-grade pass (and even which grade counts as an upper-grade pass), assessing the current KS3 is, essentially, a guessing game. For these reasons we as a department decided to err on the side of caution and to make our grade boundaries particularly conservative. As a result, very few students have been ranked as achieving 3+LOP, and herein lie our assessment limitations: arguably why, at present, our assessment practice is far from outstanding, being far from nationally consistent or perhaps even accurate. The proof will be in the summer results pudding.

Creating an Assessment

As you have probably guessed, I decided to create an assessment with my year eleven group in mind, one that was not subject to the limitations of the grade boundary chaos. In April I booked the school lecture theatre for two periods and created an assessment based on a mini ‘walking mock’ system. I wanted a walking mock rather than a conventional mock because I didn’t want to dedicate two lessons to an assessment task alone; I wanted to fit some teaching in there too. I spent the two weeks leading up to the walking mock going through Language Paper One questions, introducing students to an answer scaffolding resource that I developed specifically for them (figure 1).

Arguably, the walking mock that we conducted linked to five of the intelligences Gardner identified in Frames of Mind: The Theory of Multiple Intelligences (1983) and his later work (naturalistic intelligence was added to the original seven some years after the book). In the first instance the walking mock linked to visual-spatial intelligence: students were put into exam conditions in a simulated exam environment in the hope that, during their actual exam in June, they would be able to visualise with the mind’s eye some of the resources I placed around the exam hall (figure 2). The most significantly engaged intelligence was verbal-linguistic, which relates to students’ ability to understand language and to write appropriately. Due to the nature of the answer scaffolding resource (figure 1) there was even a logical-mathematical element to the assessment (quite an accomplishment for an English specialist), as it encouraged students to become increasingly logical in structuring their responses and to identify links and patterns within texts. Intrapersonal intelligence was also targeted, as students had to spend an extended period of time inwardly considering what to write: they were guided, but not instructed, on what to include in their answers. Finally, the walking mock arguably had naturalistic elements, as students were placed in a true-to-life exam situation, sitting individually in rows, in silence, to increase the focus and legitimacy of the assessment.

Students were instructed to enter the exam hall in silence and to sit alphabetically (as they do not yet know their candidate numbers by heart). Once the assessment papers had been distributed, I read the first question to the group (figure 3) and then the section of the extract it related to. Without hint or guidance, students were given four minutes (the time I have advised them to spend in the real exam) to complete this question. When this time had elapsed, students were instructed to move on to Question Two. I then read the rest of the extract to the group (figure 4) to contextualise the remaining questions on the exam paper. With Question Two, as with Question One, I read the question to the group, but this time, before starting the timed conditions, I first read the scaffolding resource to students (figure 1) and explained how structuring their answers in this way would give them the best chance of hitting all of the assessment objectives. This process of reading, structure explanation and then timed conditions was repeated with the final two questions. What became apparent was that students responded positively to identifying the techniques required for their answers and to the scaffolding resource, but evidently struggled with the timing aspect of the walking mock, because we had to move on collectively while each student worked at a different pace. This was a limitation of the walking mock assessment and is evidenced in Student A’s response (figure 5).

The data that I collected from this assessment was far more positive than in previous unaided assessments. All students had made a significant improvement since the previous assessment, although this data was of course biased to a certain extent by the resources made available to the group in the exam hall. What became apparent during my marking was that students were able to structure their answers in a developed way thanks to the tick sheets I had provided, so when giving students formative feedback I did not want to focus on structure. Instead, I used three highlighters representing the top three bands of the mark scheme (the group being relatively high in ability, responses only spanned the top three bands). I highlighted each explanation within a student’s response according to whether it could be classed as band four (green), three (amber) or two (red). Being high ability, all students are capable of writing band three and four responses, so any band two (red) explanations were cause for some concern. As Student B’s response suggests (figure 6), some students were structuring responses appropriately but not answering the essay question, while others, as Student C’s response illustrates, wrote answers that were simply too simplistic and did not reflect their ability (figure 7). However, as Student C’s work also suggests, many responses included a mixture of bands two, three and four, indicating that the student is more than capable of creating a top-band answer; they just needed to become more conscious of what does and does not constitute a top-band explanation. The soft data that I collected was the most useful, because it helped me to identify which students were struggling to balance the quality of their explanations.
Student D was a good example of this: their work contained a number of band two and three explanations but also some rarer band four ones, indicating that, with intervention in place, this student could really be pushed (figure 8).

As a group we all concluded that the walking mock had been a useful assessment, one we could repeat for Language Paper Two, but we felt that if repeated too regularly (like any assessment) it could become less effective and potentially monotonous. Due to the issues I mentioned in my last blog, I steered away from sharing our estimated grade boundaries and focused instead on the number of marks students had achieved, to illustrate that real progress had been made.

Appendix and Bibliography

Figure 1
Figure 2
Figure 3
Figure 4
Figure 5 (Student A)
Figure 6 (Student B)
Figure 7 (Student C)
Figure 8 (Student D)
Bibliography

Fernandez-Martinez, F., Zablotskaya, K. and Minker, W. (2012). ‘Text categorization methods for automatic estimation of verbal intelligence’. Expert Systems with Applications, 39(10), pp. 9807–9820.

Gilman, L. (2012). ‘The Theory of Multiple Intelligences’. Indiana University. Archived from the original on 25 November 2012. Accessed 24 May 2017.

Musallam, R. (2013). ‘Three Rules to Spark Learning’. TED. http://www.ted.com/talks/ramsey_musallam_3_rules_to_spark_learning.html Accessed 24 May 2017.

Smith, M. K. (2002). ‘Howard Gardner, multiple intelligences and education’. The Encyclopedia of Informal Education. Accessed 24 May 2017.