In a tour de force of investigative journalism, The Los Angeles Times analyzed thousands of teachers’ performance over seven years, based on their students’ progress on standardized tests. Here are some key findings, quoted from the excellent lead article of a series:
- Highly effective teachers routinely propel students from below grade level to advanced in a single year. There is a substantial gap at year’s end between students whose teachers were in the top 10% in effectiveness and those whose teachers were in the bottom 10%. The fortunate students ranked 17 percentile points higher in English and 25 points higher in math.
- Some students landed in the classrooms of the poorest-performing instructors year after year — a potentially devastating setback that the district could have avoided. Over the period analyzed, more than 8,000 students got such a math or English teacher at least twice in a row.
- Contrary to popular belief, the best teachers were not concentrated in schools in the most affluent neighborhoods, nor were the weakest instructors bunched in poor areas. Rather, these teachers were scattered throughout the district. The quality of instruction typically varied far more within a school than between schools.
- Although many parents fixate on picking the right school for their child, it matters far more which teacher the child gets. Teachers had three times as much influence on students’ academic development as the schools they attended. Yet parents have no access to objective information about individual instructors, and they often have little say in which teacher their child gets.
- Many of the factors commonly assumed to be important to teachers’ effectiveness were not. Although teachers are paid more for experience, education and training, none of this had much bearing on whether they improved their students’ performance.
The analytical technique used is called a value-added analysis, explained in the article this way:
In essence, a student’s past performance on tests is used to project his or her future results. The difference between the prediction and the student’s actual performance after a year is the “value” that the teacher added or subtracted.
For example, if a third-grade student ranked in the 60th percentile among all district third-graders, he would be expected to rank similarly in fourth grade. If he fell to the 40th percentile, it would suggest that his teacher had not been very effective, at least for him. If he sprang into the 80th percentile, his teacher would appear to have been highly effective.
Any single student’s performance in a given year could be due to other factors — a child’s attention could suffer during a divorce, for example. But when the performance of dozens of a teacher’s students is averaged — often over several years — the value-added score becomes more reliable, statisticians say.
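The mechanics described above reduce to simple arithmetic: predict each student’s percentile from past performance, subtract the prediction from the actual result, and average across a teacher’s students. Here is a minimal sketch of that computation; the numbers and the bare averaging are illustrative only, not the Times’ actual statistical model (which, like most value-added models, involves regression and controls rather than raw differences):

```python
from statistics import mean

def value_added(predicted: float, actual: float) -> float:
    """Difference between a student's actual year-end percentile and
    the percentile predicted from his or her past performance."""
    return actual - predicted

# Hypothetical (predicted, actual) percentile pairs for one teacher's
# students, pooled over several years to reduce year-to-year noise.
students = [(60, 80), (60, 40), (50, 55), (70, 78), (30, 45)]

scores = [value_added(p, a) for p, a in students]
teacher_score = mean(scores)  # positive: value added; negative: value subtracted
print(teacher_score)  # → 5.6
```

A single large positive or negative entry (a divorce year, say) moves the average little once dozens of students are pooled, which is the reliability argument the statisticians make above.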
The Times quoted experts who said students’ test performance should not be the only way a teacher is evaluated, especially in high-stakes decisions like firing. Academic and government supporters of value-added analysis in education, including the Obama Administration, suggest that it comprise half a teacher’s evaluation.
Nevertheless, the president of the local teachers’ union responded, “You’re leading people in a dangerous direction, making it seem like you can judge the quality of a teacher by...a test.” The teachers’ union launched a boycott of The Times, asking the union’s members to cancel their subscriptions.
Bad response. Better would have been to respond as two low-scoring teachers did when interviewed by The Times: they said they want to use the data to help them improve. The Los Angeles Unified School District has always had the data, but it never used the data to give teachers feedback.
That said, there are legitimate questions to be asked when teachers are rated largely by students’ scores on standardized tests. I know too many good teachers who say it forces teaching to the test, which leads to standardized, least-common-denominator teaching—not what comes to mind when you think “great teacher.” And of course no one wins if we create a generation of ace test-takers who lack any deeper understanding or motivation.
Acknowledging those concerns, I’d still say the lesson of The Los Angeles Times investigation is that its type of teacher-targeted analysis has merit. The questions are how to make such an analysis as fair as possible, how to integrate it into a larger evaluation of a teacher, and how to avoid undesirable side effects like teaching only to the test.
In other words, we can argue about issues like how to choose the data, how to analyze it, and what decisions it should affect. But please let’s have those arguments rather than the one about whether test data should be used at all. If you ever had doubts, The Times series should put those to rest.