Rutgers Report Briefs Educators on Training for New Teacher Evaluation
John Mooney | N.J. Spotlight
New Jersey’s planned use of student test scores in evaluating teachers has drawn most of the fire so far, but researchers following the first tests of the evaluation system have found that training observers for classroom observations is neither quick nor easy.
The team from the Rutgers Graduate School of Education has been tracking the 10 pilot districts that have been up and running for the past two years, along with another 15 that began testing the system last year.
The new system is slated to be rolled out statewide next year.
In a brief distributed to districts across New Jersey last week, researchers described the lessons and challenges specific to training both supervisors and teachers in the observation systems.
Observations make up at least half of a teacher’s overall evaluation, as mandated by the new teacher tenure law. The balance of the rating comes from measures of student performance, including state test scores, by far the most controversial piece of the system.
The report's lead author, William Firestone, said in an interview yesterday that the report details the extensive amount of time needed to get all parties up to speed, both on the new procedures and on the broader concept of pinpointing the qualities of good teaching.
Firestone said the training on procedures, including using the data management system to constantly update information on teachers and their classroom practices, often took close to a year.
“The learning issue is really big in that first year,” said Firestone, a professor of education leadership and policy. “There is just an awful lot that people need to learn."
“We heard a lot of stories of principals going into classrooms with their iPads [to input data on the observations], and that takes practice to do it well, that takes a better part of a year.”
When asked whether the Christie administration and the Legislature should have factored training time into their mandate that the system be fully deployed next year, he said that not doing so was risky.
“The idea of having highly reliable systems in that first year is questionable, because it does take a year to learn this stuff,” he said. “This is the downside of jumping in so fast, that the system won’t be as reliable as it could be.”
Firestone also said that it takes time to train teachers on how they will be evaluated. “Teachers need to understand it well to both benefit from it and to trust it,” he commented.
The brief is not part of the ongoing two-volume study that the Rutgers team is conducting in the pilot districts, with funding from the state Department of Education. The first report came out this spring; the second is due in September.
This brief is more of an update on the specific training the pilots have used, including methodology, requirements, expert assistance, turnkey instruction, and video guidance, or some combination of these.
Virtually all of New Jersey's districts are now launching evaluation systems of their own, starting with the training of staff.
Firestone said his team planned to further explore all of these issues as part of the broader, final report due to the state Department of Education in September. But even as he described the brief as more impressions than vetted research, Firestone said it was important to get the information out now, since schools are beginning their own training.
“Teachers and administrators are going into this now, and they need the information now,” he said.