I’ve been interested to read some of this morning’s news and web responses to the release of the initial findings from the Bill and Melinda Gates Foundation’s Measures of Effective Teaching (MET) project. The $45-million MET project began in 2009 with the goal of building “fair and reliable systems for teacher observation and feedback.”
The study’s early findings have added fuel to the debate over whether student test scores should be used in evaluating teachers—and if so, how. At the heart of it all is the use of value-added modelling, the controversial statistical method that relies on test-score data to estimate a teacher’s effectiveness.
The argument for using this model is that it brings objectivity to teacher evaluations, because it compares students to themselves over time and so mitigates influences outside teachers’ control, such as socio-economic status and parental involvement. Critics argue that value-added measures, used on their own, are unreliable, and that a mix of approaches and multiple measures is needed.
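To make the idea of “comparing students to themselves over time” concrete, here is a minimal sketch of the kind of regression a value-added model rests on: a student’s current test score is modelled as a function of their prior score plus a teacher effect, so each teacher’s estimate reflects how much their students gained relative to similar starting points. The data, teacher effects, and specification below are invented for illustration only; this is not the MET project’s actual model.

```python
# Illustrative sketch only: a generic value-added regression, not the MET
# project's methodology. All data and effects below are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: each student's prior-year and current-year test scores,
# plus the (hypothetical) teacher they were assigned to.
n_students, n_teachers = 300, 3
teacher = rng.integers(0, n_teachers, n_students)
prior_score = rng.normal(50, 10, n_students)
true_teacher_effect = np.array([-2.0, 0.0, 3.0])   # assumed, for the simulation
current_score = (5 + 0.9 * prior_score
                 + true_teacher_effect[teacher]
                 + rng.normal(0, 5, n_students))

# Design matrix: intercept, prior score, and teacher dummy variables.
# "Comparing students to themselves over time" shows up as controlling for
# prior_score; the teacher dummies then absorb each teacher's value-added.
X = np.column_stack([
    np.ones(n_students),
    prior_score,
    *(1.0 * (teacher == j) for j in range(1, n_teachers)),  # teacher 0 is the baseline
])
coef, *_ = np.linalg.lstsq(X, current_score, rcond=None)

print("Estimated value-added relative to teacher 0:", coef[2:])
```

Run on the simulated data, the estimates recover something close to the invented teacher effects; critics’ point is that with real classrooms the estimates are far noisier, which is why multiple measures are argued for.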
This is what the MET project has attempted to address, by drawing data from a range of sources. Excerpts from the “Early Findings” section of the policy brief include:
- In every grade and subject studied, a teacher’s past success in raising student achievement on state tests (that is, his or her value-added) is one of the strongest predictors of his or her ability to do so again.
- Teachers with the highest value-added scores on state tests also tend to help students understand math concepts or demonstrate reading comprehension through writing.
- The average student knows effective teaching when he or she experiences it.
- Valid feedback need not be limited to test scores alone. By combining different sources of data, it is possible to provide diagnostic, targeted feedback to teachers who are eager to improve.
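The last excerpt is about combining different sources of data into a single picture of teaching. As a rough, hypothetical illustration (the measures, scales, and weights below are invented, not drawn from the MET report), one common approach is to standardise each measure so the scales are comparable and then take a weighted composite:

```python
# Hypothetical illustration of combining multiple measures into one composite
# rating. The measures, scales, and weights are invented; they only show the
# general idea of weighting several standardised sources of evidence rather
# than relying on any single one.
from statistics import mean, stdev

def standardise(values):
    """Convert raw scores to z-scores so different scales can be combined."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Invented data for three teachers: a value-added estimate, a classroom
# observation rating (1-5), and an average student survey score (1-10).
value_added = [-1.2, 0.3, 1.5]
observation = [3.0, 4.5, 3.5]
student_survey = [6.0, 8.5, 7.0]

weights = {"value_added": 0.5, "observation": 0.3, "student_survey": 0.2}  # assumed

z = {
    "value_added": standardise(value_added),
    "observation": standardise(observation),
    "student_survey": standardise(student_survey),
}

composite = [
    sum(weights[k] * z[k][i] for k in weights)
    for i in range(len(value_added))
]
print("Composite ratings:", composite)
```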
Even taking into account that this report comes out of the US context, where standardised testing is the norm and there are moves to adopt Common Core standards, there are still some interesting challenges here for teachers in NZ – particularly in light of the research of John Hattie and others, whose findings confirm that, within the school, the greatest effect on student learning comes from the teacher.
This being the case, it makes sense that we should be interested in more precise approaches to identifying how teacher performance can be improved. In many school settings, teacher evaluations are little more than a formality, with teachers rated on the basis of little more than a principal’s (or senior teacher’s) superficial impressions. The initial findings of the MET project suggest that we need to think more about using multiple data sources to inform our teacher evaluations – including feedback from students!
Read the research paper with initial findings here. (The final report is apparently due to be released around this time next year.)