Wednesday, 24 May 2017

My Reflections

Musallam noted three rules to consider when trying to ‘spark’ learning: curiosity comes first; embrace the mess; and practise reflection. The third rule was the one I thought about most. The group that plays on my mind most is my year eleven set one group. The last set of assessment data that I collected for the group was, in a word, shocking. The assessment indicated that the group are doing quite poorly (I will go into the limitations of assessing this current cohort later). When sharing their assessment results with them I decided to take a reassuring approach, holding that this is a case of ‘trial and error’. As Beckett noted, we will fail and we will fail again, but next time we will fail better.

I realised that it was quite tempting to do a couple of shallow purple pen activities during an AfL lesson on the mock, but when I reflected on this it seemed more of a box-ticking exercise. Instead I decided on a more developed approach to help students progress past this. I spent ten to fifteen minutes a week going over the skills required to create top band responses to the exam questions and provided the group with a set of answer scaffolding sheets for each one. Students were then asked to take one of the questions home each week to attempt independently and to time themselves. This homework task proved particularly effective; every student improved their assessment mark, each jumping at least one estimated grade boundary. At the end of the half-term a second assessment was conducted in exam conditions. Again the data was positive, with a ten percent improvement in students achieving 3+LOP. This example illustrates why we assess: to reassure us that we are teaching effectively and that students are learning progressively.

This brings me to the greatest barrier my department is currently facing in assessing our students. Due to the redesign of grades from alphabetical to numerical, and due to the hazy information we have received concerning what constitutes an upper grade pass (and even which grade constitutes an upper grade pass), assessing the current KS3 is essentially a guessing game. For these reasons we as a department decided to err on the side of caution and make our grade boundaries particularly conservative. Because of this decision very few students have been ranked as having achieved 3+LOP, and here lie our assessment limitations, and arguably why at present our assessment practice is far from outstanding; it is far from being nationally consistent, or perhaps even accurate. The proof will be in the summer results pudding.

Creating an Assessment

As you have probably guessed, I decided to create an assessment with my year eleven group in mind, one that was not subject to the limitations of the grade boundary chaos. In April I booked the school lecture theatre for two periods and created an assessment based on a mini ‘walking mock’ system. The reason I wanted a walking mock rather than a conventional mock was that I didn’t want to dedicate two lessons to just an assessment task; I wanted to fit some teaching in there too. I spent the two weeks leading up to the walking mock going through Language Paper One questions, introducing students to an answer scaffolding resource that I developed specifically for them (figure 1).

Arguably, the walking mock that we conducted linked to five of the intelligences Gardner identified in Frames of Mind: The Theory of Multiple Intelligences (1983) and his later work. In the first instance the walking mock linked to visual-spatial intelligence; students were put into exam conditions in a simulated exam environment in the hope that during their actual exam in June they would be able to visualise with the mind’s eye some of the resources that I placed around the exam hall (figure 2). The most significantly impacted intelligence was verbal-linguistic, which relates to students’ ability to understand language and to write appropriately. Due to the nature of the answer scaffolding resource (figure 1) there was even a logical-mathematical element to the assessment (quite an accomplishment for an English specialist), as it encouraged students to become increasingly logical in structuring their responses and to identify links and patterns within texts. To add to this, intrapersonal intelligence was also targeted, as students had to spend an extended period of time inwardly considering what to write in their responses; they were guided but not instructed what to include in answers. Finally, the walking mock also had naturalistic elements, as students were placed in a true-to-life exam situation, sitting individually in rows, in silence, to increase the focus and legitimacy of the assessment.

Students were instructed to enter the exam hall in silence and to sit alphabetically (as they did not yet know their candidate numbers off by heart). When the administration of assessment papers was complete I read the first question to the group (figure 3) and then the section of the extract it related to. Without hint or guidance, students were given four minutes (the time I have advised them to spend in the real exam) to complete this question. When this time had elapsed students were instructed to move on to Question Two. I then read the rest of the extract to the group (figure 4) to contextualise the rest of the questions on the exam paper. With Question Two, as with Question One, I read the question to the group, but this time, before setting the time limit, I first read the scaffolding resource to students (figure 1) and explained how structuring their answers in this way would give them the best chance of hitting all of the assessment objectives. This process of reading, structure explanation and then timed conditions was repeated with the final two questions. What became apparent was that students responded positively to identifying the techniques required for their answers and to the scaffolding resource, but they evidently struggled with the timing aspect of the walking mock, because we had to move on collectively while each student worked at a different pace. This was a limitation of the walking mock assessment and is evidenced in Student A’s response (figure 5).

The data that I collected from this assessment was far more positive than in previous unaided assessments. All students had made a significant improvement since the previous assessment, although this data was of course biased to a certain extent by the resources made available to the group in the exam hall. What became apparent during my marking was that students were able to structure their answers in a developed way thanks to the tick sheets that I had provided, so when giving students formative feedback I did not want to focus on this. Instead, I used three highlighters representing the top three bands of the mark scheme (the group being relatively high in ability, responses only spanned the top three bands). I highlighted each explanation within a student’s response depending on whether it could be classed as band four (green), band three (amber) or band two (red). Being high ability, all students are capable of writing band three and four responses, so any band two (red) explanations were a cause for some concern. As Student B’s response suggests (figure 6), some students were structuring responses appropriately but not answering the essay question, or, as Student C’s response illustrates, some students’ responses were simply too simplistic and did not reflect their ability (figure 7). However, as Student C’s work also suggests, many responses included a mixture of bands two, three and four, indicating that the student is more than capable of creating a top band answer; they just needed to become more conscious of what constitutes a top band explanation and what doesn’t. The soft data that I collected was the most useful because it helped me to identify which students were struggling to balance the quality of their explanations.
Student D was a good example of this: their work included a number of band two and three explanations but also some rarer band four ones, indicating that with the right intervention this student could really be pushed (figure 8).

As a group we all concluded that the walking mock had been a useful assessment, one we could have repeated for Language Paper Two, but we all felt that if repeated too regularly (like any assessment) it could become less effective and potentially monotonous. Due to the issues I mentioned in my last blog I steered away from sharing our estimated grade boundaries and focussed on the number of marks students had achieved, to illustrate that real progress had been made.

Appendix and Bibliography

Figure 1
Figure 2
Figure 3
Figure 4
Figure 5 (Student A)
Figure 6 (Student B)
Figure 7 (Student C)
Figure 8 (Student D)
Bibliography

Fernandez-Martinez, F., Zablotskaya, K. and Minker, W. (2012). ‘Text categorization methods for automatic estimation of verbal intelligence’. Expert Systems with Applications, 39(10), pp. 9807–9820.

Gilman, L. (2012). ‘The Theory of Multiple Intelligences’. Indiana University. Archived from the original on 25 November 2012. Accessed 24 May 2017.

Musallam, R. (2013). 3 Rules to Spark Learning. TED.com. http://www.ted.com/talks/ramsey_musallam_3_rules_to_spark_learning.html Accessed 24 May 2017.

Smith, M. K. (2002). ‘Howard Gardner, multiple intelligences and education’. The Encyclopedia of Informal Education. Accessed 24 May 2017.

Tuesday, 3 January 2017

Learning and Assessment

Learning and assessment are always at the forefront of a teacher’s mind, from planning AfL activities in individual lessons to assessment weeks in the long-term plan. A careful balance needs to be struck between the two; in the current educational climate a teacher can even feel pressured into assessing too frequently, cutting into precious learning time. When the proper balance is struck, learning and assessment communicate what has been learnt and what student and teacher now need to do to move forward, and so they are a vital pairing.
Assessment is incredibly useful for a practitioner when identifying what learning has occurred and, often most importantly, what hasn’t been learnt. Marton and Säljö (1976) explored the differences between Surface and Deep Learning, defining Surface Learning as the quantitative increase in a student’s knowledge, whereas Deep Learning is the far more desirable ability to comprehend abstract concepts. A saddening truth here is that the recent modifications to the English Language and Literature specifications lend themselves to Surface Learning. Of late many of my lessons and revision sessions have included tasks where students are encouraged to memorise lines of plays, lists of poems and the differences between sentence forms. I’ve found these changes have had an almost numbing effect on students, and as a teacher it is difficult to watch a once keen and capable learner become apathetic towards your subject. Admittedly the old English specifications also had a tendency towards Surface Learning, but one always felt students, even the lowest in ability, were at least supported in some way by foundation papers and Controlled Assessments. In this new age of exclusively exam-based English assessments, where it is unnervingly uncertain whether a B (or a new age five) is now the bottom end of an upper-grade pass, one can’t help but feel as apathetic and defeated as the poor students going through the process. My only hope with the new specifications is that my students will be able to develop what Ramdass and Zimmerman (2011) identified as self-regulatory skills. Independent learning can only take place when a pupil develops self-regulation: a proactive process of managing one’s own learning, behaviour, thoughts and emotions in order to make progress.
The findings of Ramdass and Zimmerman have been informing my teaching practice over the last few years and have especially resonated with my development of task menus in lessons. These resources have been useful for both learning and assessment; I’ve noticed my students have become more aware of their own progress. I feel at this stage that the implementation of task menus has had a positive impact on my students’ self-regulatory skills, as they have begun taking steps towards more independent learning.
It is my hope that the appropriate use of task menus in my lessons will help in dealing with the ‘fight vs flight’ mentality that students often face. During the first Teaching and Learning session we were presented with a task involving the Periodic Table. My initial feeling was one of competitiveness; I assumed that we were going to be given a few minutes to memorise the table and then would have to recreate it (strange that I assumed this was going to be a Surface Learning task, or perhaps not). After a few minutes it was brought to my attention that there were four or five questions at the bottom of the sheet that I was supposed to be answering. My initial instinct was a competitive ‘fight’ one, but in actual fact this kneejerk reaction became a hindrance: I just went for what I instinctively thought was the task without double-checking. Before I made this mistake I would probably have said that I wanted my students always to choose fight over flight; this experience gave me the opportunity to evaluate the downsides of the fight response too, and it has become clear to me that contextualising learning, and assessment in turn, is essential.
A lack of contextualisation during lesson time is the reason why teachers can receive some strange responses from students during assessment. I once had an experience with a student when studying Of Mice and Men. The student thought that the quote “live off the fat o’ the lan’” was not an American colloquialism for “live off of the fat of the land” but that George and Lennie were striving to earn enough money to buy fatty lamb’s meat to eat. To add to this need for contextualisation, it is also essential that teachers ensure knowledge is not only learnt but embedded. I have found in my experience that when students are presented with a vast amount of Surface Learning material to memorise, they have forgotten it after a few weeks, so when we return to the topic a few months later it is completely gone. Contrastingly, I have found that students are often able to retain Deeper Learning concepts over longer periods of time because they have related parts of that knowledge to other pieces of knowledge, sometimes spanning curriculum areas. For a brain to validate and embed learning to memory there must be an emotional connection, and so a purposeful mixture of Surface and Deep Learning, as well as the contextualisation of knowledge, is seemingly vital.
This need for an emotional connection has been touched upon by Gardner (1983). He noted that there are a number of different types of intelligence and that individuals are often proficient in some and less able in others. Gardner identified seven in total (linguistic; logical-mathematical; spatial; musical; bodily-kinaesthetic; interpersonal; and intrapersonal). Of the seven I would argue that English Literature and Language touch most often on the linguistic, interpersonal, intrapersonal and logical-mathematical intelligences (when I completed the Gardner questionnaire during the session these were the areas I was most closely aligned to). These are therefore the areas students use most during their learning in English, and the areas on which they are assessed most frequently. Fleming and Chambers (1983) found that the majority of assessment-based questions were Surface Learning based, concerning factual information. Long (2000) later built on this, noting that the frequent use of knowledge-based assessments was due to their ease at the planning and feedback stages. Robinson (2006) noted that as teachers we should be asking ourselves how we can make a positive change and how we can effectively measure the intelligence and creativity of our students. This flummoxed me once I became conscious of Gardner’s ideas concerning the seven intelligences. Not only did it stump me, I actually experienced a pang of professional guilt. I fervently began scribbling a spider-diagram which simply had a question mark in the centre, because I didn’t know the answers. I wrote down the embarrassingly obvious: assessment tasks, work in their books, data, none of which even scratched the surface of what Robinson was posing. I was trapped inside an educational box with every other teacher in this seemingly flawed educational paradigm. I then posed myself another question after failing to answer Robinson’s: what could I do to foster the creativity of my learners?
My first thought was to dismantle my physical educational box: my classroom. What if I started to wander from the assessment objectives and considered Deep Learning activities over Ofqual’s preferred Surface Learning ones? What if I changed my students’ stimuli and did away with the technology which is arguably dulling their thirst for discovery? What if I went one step further in altering my students’ stimuli by removing the chairs and desks and had my students sitting on the floor receiving nothing more than verbal teaching and verbal feedback? I’m quite sure I’d be haunted by the ghost of Gove. Although some of these ideas are perhaps extreme, they do highlight the need for teachers to evaluate, in a balanced way, the current model of Surface Learning in the English curriculum.

Conveniently, the musings of Claxton (2008) bring me back down to earth. Claxton noted that the most academically well-rounded students are good at sticking with things even when they are difficult, and are willing to ask questions when they are stuck. These students are willing to share with their peers but are also willing to think independently of their teacher. They essentially choose an appropriate, contextualised fight response. As an English practitioner I am now considering the following questions: what can I do to foster the creativity of my learners? How can I effectively assess the creativity of my learners? How will I know if I have struck the appropriate balance between learning and assessment? How can I help my students to develop their resilience, resourcefulness, reflectiveness and reciprocity? Rather than making a shallow conclusive statement about these questions I’m going to spend some time contemplating them and perhaps doing some reading around them. My hope is that by the end of this course I will be able not only to answer these questions but truly to understand their answers.

References
Claxton, G. (2008). What’s the Point of School?: Rediscovering the Heart of Education. Oneworld Publications, London.

Fleming, M. and Chambers, B. (1983). ‘Teacher-made tests: Windows on the classroom’. In W. E. Hathaway (ed.), Testing in the Schools: New Directions for Testing and Measurement, no. 19 (pp. 29–38). San Francisco: Jossey-Bass.

Gardner, Howard (1983), Frames of Mind: The Theory of Multiple Intelligences, Basic Books, New York.

Long, M. (2000). The Psychology of Education. Routledge Falmer, Berkshire.

Marton, F. and Säljö, R. (1976). ‘On qualitative differences in learning: I – Outcome and process’. British Journal of Educational Psychology, 46, pp. 4–11.

Ramdass, D. and Zimmerman, B. J. (2011). ‘Developing self-regulation skills: The important role of homework’. Journal of Advanced Academics, 22(2), pp. 194–218.


Robinson, K. (2006). Do Schools Kill Creativity? TED.com. Accessed 21 December 2016.