
CAEP Accountability Measures

CAEP (Council for the Accreditation of Educator Preparation) requires all EPPs (Educator Preparation Providers) to provide information to the public on program impact and program outcomes. The EPP at Eastern Connecticut State University (Eastern EPP) offers the following data and reports from our initial undergraduate and graduate licensure programs, as they relate to the required Accountability Measures.

Impact Measures

  • At this time, Connecticut legislation explicitly prohibits the linking of any state student-testing database with state educator databases, thereby precluding the use of value-added methodologies for the evaluation of teacher performance based on student achievement. However, given CAEP standard requirements (formally adopted by the State Board of Education in December 2016) and federal Title II requirements regarding measurements of student effectiveness, we continue to work with the Connecticut State Department of Education (CSDE) to develop alternative reliable and valid methodologies for measuring our program impact on P-12 student growth.

    In the absence of state data, our EPP has been studying the impact of our completers on P-12 learning and development through other means.

    1. Survey of Completers: 

    We measure the satisfaction of our completers through a 17-item survey. These items address the four InTASC categories: content, learner and learning, instructional practice, and professional responsibility. The survey is administered in the summer, every other year, in odd-numbered years.

    Based on this survey, we can determine that our EPP has prepared our completers strongly in understanding their content area, making content meaningful, engaging students in the content, understanding how students develop, using a variety of instructional strategies, planning instruction, using assessment data to inform instruction, using data to monitor student growth, integrating technology, collaborating with professionals and community members, and understanding and following ethics and codes of professional conduct (all with mean scores above 3.5).

    Areas in which our completers felt less prepared include teaching students with disabilities (mean of 3.36, standard deviation of .95), teaching English learners (mean of 3.32, standard deviation of .95), and effective classroom management (mean of 2.77, standard deviation of 1.19). Results from the Survey of Completers are woven into ongoing program improvements.

    Evidence:

    2. Survey of Employers:

    Another measure of our completers comes from a survey of their employers. These surveys are deployed every two years, on even years.

    Multiple items across the 2024 employer survey provide data on our completers' teaching effectiveness. Employers strongly rated our completers' knowledge of their subject area and their ability to plan and organize lessons and activities effectively (each with an average mean score above 3.7 out of 5). Employers rated our completers' abilities to work with a culturally diverse classroom and to develop appropriate assessment practices as above average (each with a mean score above 3.3 out of 5). One item that was rated slightly lower (mean score of 3.12 out of 5) was skill in classroom management. These results not only corroborate the results from our completers' survey but also underline our completers' (and by extension, our programs') strengths and areas for ongoing improvement.

    Evidence:

  • 1. Employer Survey:

    We measure the satisfaction of our employers through a survey evaluating the knowledge, skills, and professional qualities of our completers, their employees. We administer the survey every two years, on even years. Data from the 2024 survey indicate that employers rated our completers very positively, with an average mean score of 3.77 (out of 5) for knowledge of subject area, 3.73 (out of 5) for planning and organizing lessons and activities effectively, and 3.77 (out of 5) for effectively involving all students in learning. Completers' skill in classroom management (average mean of 3.12 out of 5) and their leadership skills for school improvement (average mean of 3.08 out of 5) show room for improvement.

    Evidence:

    2. Clinical Advisory Council

    Our EPP’s Clinical Advisory Council (CAC) is composed of representative stakeholders from regional and local school districts and child development centers. The purpose of the council is to collaborate with our EPP in data review and program evaluation and to provide guidance for the continuous improvement of our teacher preparation courses, assessments, and clinical experiences. Full-time EPP faculty also engage in CAC meetings, which are facilitated by the Coordinator for Educational and Clinical Experience. Representatives from Coventry, Manchester, Mansfield, Norwich, Tolland, and Windham participate; the membership reflects the urban, rural, and suburban demographics of the districts where Eastern candidates engage in clinical experiences. The members' dedication and commitment to Eastern candidates is noteworthy: each district member has served as a cooperating teacher or host administrator for multiple years, attesting to the strength of the partnership with Eastern.

    Evidence:

Outcome Measures
