I’m not sure how other states conduct their teacher evaluation plans, but here in New York we have what is called APPR (Annual Professional Performance Review), and what it used to look like versus what it looks like now is vastly different. In the past, teachers would sit down with their principal at the end of the school year and go over a rating form that both the teacher and principal had already completed. They would compare scores, talk about differences, and review the goals the teacher had to submit for the upcoming school year. The principal would go over the particular teacher’s strengths and weaknesses and discuss any relevant information or questions the teacher had. Prior to this, the teacher had already had a formal classroom observation by the principal, which was reviewed at a post-observation meeting. Forms were signed and submitted to the district office, and voila, the evaluation process was completed for another year.

Make no mistake, this form of evaluation did not “protect” bad teachers or ignore the fact that some teachers had obvious difficulties in the classroom. I don’t know any principals who want ineffective teachers, or teachers not pulling their weight, in their building, so these evaluations served as a good platform for discussing how those particular teachers could improve for the upcoming year. Sometimes struggling teachers were reassigned to a different grade level or building to better suit their strengths or style. Sometimes they were paired with a mentor. And sometimes they were relieved of their teaching duties. (Yes, that is possible even with tenure. Tenure does not guarantee job security; it guarantees due process.)
The new APPR system was implemented last year. Now teachers are rated on a HEDI (pronounced “Heidi”) scale: Highly Effective, Effective, Developing, or Ineffective. The rating is based on three components: State Growth Measure, Locally Developed Growth Measure, and Other Measures of Effectiveness. I am going to explain this as simply as possible, and then explain what exactly this means for our students–your children. More information can be found in your district’s APPR plan here: http://usny.nysed.gov/rttt/teachers-leaders/plans/ Every district with an approved APPR plan is listed on this site.
State Growth Measure (20 points): This is a measure determined by a set of assessments. This is where the term “SLO” (Student Learning Objective) comes into play. Teachers administer a pre-test in the beginning of the school year, determine a goal or target for each student to reach on the same or a similar test at the end of the school year, and then administer a post-test at the end of the year. If the student reaches the goal, the teacher gains points; if not, the teacher loses points. Teachers who teach a class with a state assessment (grades 3-8) or a Regents exam use those scores as their post-test score.
Locally Developed Growth Measure (20 points): This is a one-time assessment given at the end of the year. Some teachers use their particular Regents exam for this score, while other teachers use a different assessment such as AIMSWEB or a locally created assessment. At the high school level, final exams are often used for this score. The teacher determines a target goal for students to reach, and after the assessments are scored, the points are entered and a total ‘target reached’ score is generated for the teacher. It is important to note that teachers are not allowed to score their own assessments.
Other Measures of Effectiveness (60 points): This score comes from formal classroom observations by the principal, informal observations (principal “drop ins”), and other evidence provided to the principal in accordance with the Danielson Rubric. This is where teachers are now presenting teacher portfolios that provide evidence of reaching all of the elements of the four domains within the Danielson Rubric. The domains are broken down into ‘look fors’ and ‘elements’ that many teachers are now using as guides in the classroom. More detailed ‘look fors’ can be found here: http://www.danielsongroup.org/userfiles/files/downloads/2013EvaluationInstrument.pdf
Once all the scores have been computed, they are turned into a HEDI score and mailed to the teacher. This, friends, is the new teacher evaluation process to ensure that the teachers in your district are doing their jobs. Most teachers have adjusted and are rolling with the changes. There’s really no room for complaining; this isn’t going away anytime soon, so what else can we do but adjust? The problem, folks, is this:
Time and accuracy. Friday I spent three hours at a great computer training. I learned about valuable resources that will transform the culture of my classroom and greatly benefit the students. I was excited to get back to my classroom to experiment, practice, and implement these changes. That didn’t happen. Instead, I spent the remainder of the day scoring pre-tests (which took two class days to administer), setting targets, entering that data into the computer, and then packing my bag with fifty additional tests to score and enter over the weekend. These pre-tests, in essence, are supposed to show the teacher the overall strengths and weaknesses of the students and serve as a guide for what to spend time teaching over the course of the year.
The idea seems valid: kids don’t know the information on the pre-test, so they do poorly. Teachers teach the information over the remaining months of the year. Kids learn the information. Kids show improvement on the post-test. In theory, it makes sense. But New York State doesn’t realize that our kids are smart cookies and they’ve figured this out. High schoolers know this doesn’t count as a grade. They know they don’t know anything on the pre-test. They know they have to sit through a week’s worth of tests on material they don’t know–even in elective classes such as PE and art. They have also perfected tuning out the teachers who insist that they give an honest effort, that they try their hardest, that they show what they know. In turn, they create designs that spell out ‘YOLO’ on their Scantron forms and write responses like ‘Unicorns are cute and I like tacos’ for the written response portion, or simply #angrytroll.
How is this helping show my effectiveness as a teacher?
I’m willing to bet that 98% of teachers know their strengths and weaknesses in the classroom. My effectiveness as a teacher, and my ability to be an honest professional, should come from reflecting on those strengths and weaknesses and working with administration and teacher teams to collaborate and grow as a district. Teachers want to teach; we want to create; we want to make learning a valuable experience for the children in our classrooms. We did not become teachers to collect data. Our data is in the form of your children, and believe me when I say that we know your children better than a test does. We know they are more than a score. We know what they know and what they need to know, and we need the general public to put their faith back in us to do our jobs effectively.
*Please note that I am speaking from direct experience with my own district in New York State. All districts’ APPR plans vary, so please use the link provided in paragraph two to research your own district. I cannot speak to how other states are conducting their teacher evaluation plans.