27th January 2014 9:00
By Blue Tutors
Last week I did a little analysis of our assessments. For a while I've been concerned that we don't have a good way of monitoring the scores assessors give. The assessment is designed to be objective, so over a large number of assessments the scores for each question should look very similar, but it's always worrying to think that some assessors might mark tutors more harshly or more generously than they should.
It was incredibly encouraging to see that, when compared to my own scores, our assessors' scores are remarkably similar. I was half-expecting to see large differences in a few criteria that are easy to misunderstand, or those for which it's difficult to work out the correct score (some of the ideas assessors have to understand are genuinely tough, and we often need to think long and hard before giving a score). My next step was going to be to check whether the overall grades I give differ significantly from those given by our other assessors, but the individual scores are so close that I don't even need to do that.
Obviously the end goal of this is to ensure that it doesn't matter by whom a tutor is assessed; they should get the same result. Using these results I can establish what the 'average' scores should be, so that when I train a new assessor I can quickly identify whether he or she is over- or under-marking tutors in general, and make sure everything is properly understood before hundreds of tutors are assessed.
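For anyone curious what that check might look like in practice, here is a rough sketch in Python. The criterion names, the benchmark averages, the example scores and the 0.5-point threshold are all made up for illustration; they are not our actual rubric or data.

# Sketch: flag a new assessor whose average score per criterion drifts
# too far from the established benchmark. All numbers are illustrative.

benchmark = {"communication": 7.2, "explanation": 6.8, "subject knowledge": 8.1}

def flag_marking_drift(new_assessor_scores, benchmark, threshold=0.5):
    """Compare a new assessor's average score per criterion with the
    benchmark and report criteria marked too harshly or too generously."""
    flags = {}
    for criterion, expected in benchmark.items():
        scores = new_assessor_scores.get(criterion, [])
        if not scores:
            continue
        mean = sum(scores) / len(scores)
        if abs(mean - expected) > threshold:
            flags[criterion] = round(mean - expected, 2)  # positive = generous, negative = harsh
    return flags

# Example: a new assessor's scores from their first few assessments
new_scores = {"communication": [7.0, 7.5, 7.3],
              "explanation": [5.5, 5.8, 6.0],
              "subject knowledge": [8.0, 8.2]}

print(flag_marking_drift(new_scores, benchmark))
# {'explanation': -1.03}  -> this criterion is being marked noticeably harshly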