EMERGING DIALOGUES IN ASSESSMENT

Down the Rabbit Hole: Aligning Program Improvements with Real-World Goals in Teacher Education

October 7, 2019

Laura Trujillo-Jenks, Rebecca Fredrickson, Sarah McMahan, and Karen Dunlap
Alice: "Would you tell me, please, which way I ought to go from here?"
The Cheshire Cat: "That depends a good deal on where you want to get to."
-Lewis Carroll, Alice's Adventures in Wonderland

Often in higher education, there is an awareness that something in a program of study needs to change, but program leaders may not know where to start, or worse, where they want to go. For change to be effective, it is important to have data that support and guide change to ensure the right path is taken. The right data can help to avoid "the rabbit hole" of aimless change. Finding a good place to start in program improvement depends upon knowing "where you want to get to," which involves thinking with the end in mind and understanding the assessment tools that can be used to obtain the necessary information. Assessment can help faculty and program leaders identify their improvement goals and use data to help them get there.

A starting point for improving Educator Preparation Programs (EPPs) is to review national and state standards. These standards require EPPs to provide evidence of preservice teacher outcomes in several areas: content area knowledge, classroom management skills, instructional techniques, assessment knowledge, technology capabilities, and ability to modify instruction to meet the needs of all learners in their future classrooms. These aspects of preservice teacher education are assessed while the students are still in their programs, but how do colleges and universities know if their programs are effectively preparing graduates for the realities of the classroom?

Improving an EPP to better meet the needs of preservice teachers requires data that indicate where change is needed. Our university's improvement efforts focused on data about our graduates' preparedness for entering the classroom. The faculty in the Department of Teacher Education wanted to know how well prepared our graduates were for the real world of teaching once they entered their own classrooms, so we needed a way to gather feedback from both the graduates themselves and the principals who supervise them in order to improve the program.


Finding Where to Begin

To avoid being stuck in the rabbit hole and to establish a clear path toward meaningful improvement, the EPP faculty began by asking where they wanted to go and what they felt the program needed. The logical first step was to begin with state and national standards. Using these standards, we created a matrix mapping the standards to course activities and requirements. Two different sets of state standards were used: standards for preservice teachers (focusing on professional pedagogy and responsibilities) and standards covering the expectations of teachers in public schools (focusing on teaching practice). The national standards included in the matrix were the Interstate Teacher Assessment and Support Consortium (InTASC) standards, which are used by most national accrediting bodies, such as the Southern Association of Colleges and Schools (SACS) and the Council for the Accreditation of Educator Preparation (CAEP). This matrix helped ensure that the program was aligned with the requirements of each set of standards.

Using the matrix of state and national standards, we conducted a strengths and weaknesses analysis to examine how current practices within the program aligned with those standards. We then added a column to the original matrix to systematically address weaknesses where standards were not sufficiently met; in it, we listed class activities and assignments that would lead to all state and national standards being fully met. The strengths that were identified were listed in another column as a reminder of current practices already aligned with the standards.

Once we ensured that all state and national standards were being met in the program, we turned to improving assessment methods, such as including more performance-based measures (teaching a lesson on a school campus or being evaluated by a practicing principal) in the program. We also reviewed data for program graduates from state certification examinations and principal evaluations. Using these data, the faculty worked together to identify areas of progress and areas of concern. This required a great deal of trust from everyone asked to work on the matrix because it forced us to be open to critical reflection, which was sometimes uncomfortable, especially since some of us believed that what we were already doing worked. However, finding and talking through these areas allowed the faculty to close "holes" throughout the program.

Finally, to ensure the success of these preservice teachers, faculty were trained in the state evaluation systems for practicing teachers and principals (the Texas Teacher Evaluation and Support System [T-TESS], the Texas Principal Evaluation and Support System [T-PESS], and Advancing Educational Leadership [AEL]). Training faculty to use these state-mandated evaluation tools allowed them to address the required components and to help preservice teachers understand the expectations they will face in the real world. It also allowed faculty to assist their students in recognizing when effective teaching, learning, collaborating, reflecting, and leading are present on a campus.

Currently, the state administers a principal survey in which principals rate the performance of all first-year teachers. These ratings can be used as an indicator of the effectiveness of an EPP. The survey measures teachers' preparedness to teach the content material, their classroom management, and their ability to work with all learners in the classroom, including students with special needs and second language learners. It was important to consider these data because an EPP can be judged ineffective based on principals' responses regarding its graduates' preparation.

Using all of this information from multiple sources resulted in a well-developed, "living" program—a program where faculty agreed to be responsive to the needs of the students, the requirements of the state, and the measures of assessment. Thus, the faculty shape the "living" program by using the matrix to create course activities and assessments, making the program more tightly developed, with no holes.


Getting to Where We Want to Go

One change implemented within the courses was the addition of field work for preservice teachers. Field work in real-world classrooms is now a requirement in all Teacher Education courses and involves preservice teachers performing teaching tasks (e.g., teaching a lesson, observing a mentor teacher, and analyzing student data). These field experiences are scaffolded throughout the coursework, with required hours increasing prior to student teaching.

Additionally, the EPP added a New Teacher Academy (NTA), a one-day professional development opportunity for graduates prior to their first year of teaching. The NTA was created by faculty from the Department of Teacher Education to be responsive to feedback obtained from principals in the field. For example, principals identified classroom management as an area of needed improvement, so the NTA now offers multiple classroom management professional development sessions to help graduates meet principals' expectations as they enter their first year of teaching.

One major change that came out of reviewing the graduate data was that Teacher Education faculty accepted ownership of responsibilities that had not previously been theirs. For example, at this university, all content-specific courses are taught within the colleges of the content areas, not within Teacher Education. However, the Teacher Education program is responsible for student success on state certification exams, which include content knowledge sections. In response to low scores on those exams, Teacher Education faculty worked diligently with preservice teachers on mastering the content portions of the elementary certification exam, even though they are not responsible for teaching the content-specific material.


Avoiding the Rabbit Hole

Since implementing revised courses and coursework, adding field experiences, and providing additional opportunities for faculty and graduates, the program has seen higher state certification exam scores and better-prepared first-year teachers. Additionally, feedback from graduates has been positive. These changes took a great deal of time, energy, and willingness on the part of the faculty to implement. All faculty wanted students to be successful; however, the solutions were not always easy to find. What ultimately helped was that the faculty were all willing to change and to be flexible in meeting the needs of the students. We also found that working as a team to improve the program, and ultimately student outcomes, helped us get to where we needed "to get to."