
CAEP Accountability Measures

CAEP (Council for the Accreditation of Educator Preparation) requires all EPPs (Educator Preparation Providers) to provide the public with information on program impact and program outcomes. The EPP at Eastern Connecticut State University (Eastern EPP) offers the following data and reports from our initial undergraduate and graduate licensure programs as they relate to the required Accountability Measures.

Impact Measures

  • At this time, Connecticut legislation explicitly prohibits the linking of any state student-testing database with state educator databases, thereby precluding the use of value-added methodologies for the evaluation of teacher performance based on student achievement. However, given CAEP standard requirements (formally adopted by the State Board of Education in December 2016) and federal Title II requirements regarding measurements of student effectiveness, we continue to work with the Connecticut State Department of Education (CSDE) to develop alternative reliable and valid methodologies for measuring our program impact on P-12 student growth.

    In the absence of state data, our EPP has been studying the impact of our completers on P-12 learning and development through other means.

    1. Survey of Completers: 

    We measure the satisfaction of our completers through a 17-item survey. The items address the four InTASC subscales: content, the learner and learning, instructional practice, and professional responsibility. The survey is administered in the summer of odd-numbered years.

    Based on this survey, we can determine that our EPP has prepared our completers strongly in their understanding of the content area and in their abilities to make content meaningful, integrate technology, collaborate with professionals, and understand and follow ethical and professional codes of conduct (all with mean scores above 4). Other competencies rated above average (mean scores above 3.5) include engaging students in the content, understanding how students develop, developing effective classroom environments, teaching students from diverse backgrounds, using a variety of instructional strategies, planning instruction, using assessment data to inform instruction, using data to monitor student growth, and seeing leadership roles as responsibilities for student learning. The only areas in which our completers felt less prepared were teaching students with disabilities (mean of 3.06, standard deviation of 1.16), teaching English learners (mean of 2.91, standard deviation of 1.23), and effective classroom management (mean of 3.14, standard deviation of 1.19). These results have informed program improvements.

    2. Survey of Employers:

    Another measure of our completers’ effectiveness is a survey of their employers. These surveys are administered every two years, in even-numbered years.

    Seven items in the employer survey provide data on our completers’ teaching effectiveness. Employers rated our completers’ knowledge of their subject area and their ability to plan and organize lessons and activities highly, with mean scores above 4. They rated our completers’ abilities to work with a culturally diverse classroom, develop developmentally appropriate assessment practices, implement varied assessments, and use a variety of instructional approaches as above average (mean scores above 3.5). The only item rated slightly lower (mean score of 3.31) was skill in classroom management. These results not only corroborate the results of the completer survey but also underscore our completers’ (and, by extension, our programs’) strengths and areas for improvement.

    3. Completers’ Performance in Data Literacy and Action Research Projects:

    Many of our completers advance their learning in our graduate programs, where they engage in two classroom-based projects that provide direct information on their effectiveness and impact on P-12 learning and development.

    The first is a data literacy project (completed in EDU 693) that requires our completers to demonstrate the positive impact of their teaching on student learning through pre/post-assessments on an instructional module. They are required to capture student data and demonstrate student growth as a direct consequence of their instruction.

    The second project, completed in EDU 697, is the Action Research project, which serves as the culminating experience for the advanced master’s degree. Candidates identify an instructional topic and complete an action research study centered on student and instructional data.

    Thirteen items from the data literacy project and four items from the action research project speak to our completers’ impact on P-12 learning and development; six items from the data literacy project and two items from the culminating research project provide another window into our completers’ teaching effectiveness. Across three cycles, completers demonstrated mean scores of 2.50 or higher (on a scale of 1 to 3) on aligning their learning activities to national standards, knowledge of curriculum standards, using research and connecting it to the field, and determining conclusions and implications for the field from their research. These results speak well of our completers’ teaching effectiveness in all aspects of teaching, from content knowledge to instruction to collaboration. Slightly lower scores in their abilities to interpret data meaningfully (mean of 2.44) and to use measures of central tendency (mean of 2.42) suggest areas for improvement in our programs: our completers could use additional support in data analysis and in using data to monitor student growth. Interestingly, we had already identified these as areas for improvement for our candidates and have identified ways to enhance the relevant content in courses and clinical experiences.

    4. Research on Completers:

    Every two years, we conduct a qualitative study on the impact of our program on completers and their P-12 students. Interview data and classroom observations are gathered and measured against the four domains of our candidate learning outcomes.

    Findings from our pilot and first round of study (2018-2019) indicate that our completers demonstrate strong competence in Intentional Teaching, Cultural Competence, and Professional Practice and Leadership. Our completers identified the first two domains as areas of their own expertise and attributed that knowledge to our EPP. Classroom observations and employer surveys corroborated this finding, affirming that our completers positively affected their students through intentional teaching and an understanding of their cultural needs. Completers demonstrated this in their lesson planning, classroom arrangements, formative assessments, and summative evaluations, and they showed their ability to apply cultural competence by differentiating instructional methods and assessments. Our findings also indicated that Data Literacy is an area of growing interest and need for our completers. Data Literacy was less evident in classroom observations, though that may be a function of when the observations occurred; completers did share many applications of data use in their own practice, even when they could not attribute that knowledge to their teacher preparation program. All completers were confident about their professionalism, which was mirrored in classroom observations and employer surveys. These research results affirm our completers’ strengths in teaching but also underscore the need to enhance data literacy in our programs.

  • 1. Employer Survey:

    We measure the satisfaction of employers through a survey evaluating the knowledge, skills, and professional qualities of our completers, their employees. We administer the survey every two years, in even-numbered years. Data from 2016, 2018, and 2020 indicate that employers rate our completers very positively, with an average mean score of 3.76 (out of 5) for knowledge and skills and 4.12 for professional qualities. Knowledge of subject area, planning abilities, curiosity, and compassion were rated with mean scores at or above 4.00. Skill in classroom management was rated the lowest, at 3.34.

    2. Clinical Advisory Council

    Our EPP’s Clinical Advisory Council (CAC) comprises representative stakeholders from regional and local school districts and child development centers. The council’s purpose is to collaborate with our EPP on data review and program evaluation and to provide guidance for the continuous improvement of our teacher preparation courses, assessments, and clinical experiences. Full-time EPP faculty also participate in the meetings, which are facilitated by the Coordinator for Educational and Clinical Experience. Representatives from Coventry, Manchester, Mansfield, Norwich, Tolland, and Windham participate; the membership reflects the urban, rural, and suburban demographics of the districts where Eastern candidates engage in clinical experiences. Noteworthy is the members’ dedication and commitment to Eastern candidates: each district member has served as a cooperating teacher or host administrator for multiple years, attesting to the strength of the partnership with Eastern. During introductions, members spoke of Eastern’s support of candidates and cooperating teachers and of the excellence of its candidates and supervisors.

Outcome Measures