NCATE Standard 2
Standard 2 – Assessment System
The unit has an assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the performance of candidates, the unit, and its programs.
2.1 Significant Changes
2.1 What are the significant changes in how the unit uses its assessment system to improve candidate performance, program quality and unit operations?
Since the 2006 NCATE accreditation visit, the EPP has made significant changes to address areas for improvement in order to be in full compliance with Standard 2.
Assessment System
The EPP continues to refine its key assessments at transition points throughout its programs: program entry, internship, and program completion. Key assessments are aligned with state and national standards appropriate to program level and content, and they support the EPP conceptual framework. Since the 2006 NCATE visit, a significant change has been the adoption of an electronic portfolio assessment system (Taskstream™) to collect multiple key assessments at program transition points. The Council for Teacher Education (CTE) monitors candidate performance and progress. Unit assessment policies and procedures are established through the Council and implemented by the Office of Teacher Education (OTE) and the Office of Assessment and Accreditation (OAA). Decisions are based on multiple measures of candidate performance, and procedures for annual data collection, aggregation, dissemination, and analysis are followed.
Given the focus of the Pirate CODE on curricular and clinical practice innovations, significant changes to key assessments are described in Section 2.3. In this section, significant changes implemented at program completion are highlighted. In 2010, the EPP first engaged with the national pilot of edTPA and subsequently adopted and implemented edTPA in all ITP’s. The primary reason for adopting edTPA was its alignment with NCATE Standards 1, 2, and 4, and the North Carolina Professional Teaching Standards (NCPTS). The rigorous annual training required for local evaluation of edTPA portfolios was a further motivation for adoption. This training helps the EPP eliminate bias in assessments and establish the fairness, accuracy, and consistency of assessment procedures during the ITP internship semester.
Taskstream™ has become a key element in the EPP assessment system. A prime example is the use of Taskstream™ to collect edTPA portfolios for local evaluation and other signature assessments associated with the Pirate CODE.
In an effort to leverage key assessment requirements for NCDPI and SACS Reaffirmation at ECU, the OAA led a campaign to align the five student learning outcomes (SLO’s) required by the University with the NCDPI-adopted standards (NCPTS). Through meetings with the CTE and program coordinators from the initial and advanced level programs, examples and models were provided for faculty discussion and program adoption. Now, many of the SLO’s in the annual reports for SACS align with the NCDPI standards. This alignment allows faculty members to focus their attention on the quality of candidate performance, using SLO data to drive continuous improvement within the program.
Data Collection, Analysis, and Evaluation
At the time of the 2006 NCATE visit, the Teacher Education Management System (TEMS) was in its infancy. Since that time, TEMS has been augmented with the electronic portfolio system Taskstream™ to create an integrated assessment system. This system draws upon the University data system (Banner), TEMS, and Taskstream™ regularly and systematically. Using these components, the OAA collects, compiles, aggregates, summarizes, and analyzes candidate performance data in order to improve candidate performance and inform program improvement. The EPP continues to refine its assessment system through the exploration of new information technology to support and streamline workflow.
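To illustrate how extracts from these components might be joined for analysis, a minimal sketch in Python with pandas is shown below; the file names, column names, and join logic are hypothetical examples for illustration, not the EPP’s actual Banner, TEMS, or Taskstream™ schemas or procedures.

```python
# Illustrative sketch only: file names and columns are hypothetical,
# not the EPP's actual Banner, TEMS, or Taskstream schemas.
import pandas as pd

# Hypothetical extracts from each component of the assessment system
banner = pd.read_csv("banner_demographics.csv")    # candidate_id, program, admit_term
tems = pd.read_csv("tems_transition_points.csv")   # candidate_id, transition_point, status
taskstream = pd.read_csv("taskstream_scores.csv")  # candidate_id, assessment, score

# Join the three sources on a shared candidate identifier
merged = (
    banner.merge(tems, on="candidate_id", how="left")
          .merge(taskstream, on="candidate_id", how="left")
)

# Summarize candidate performance by program and assessment
summary = (
    merged.groupby(["program", "assessment"])["score"]
          .agg(["count", "mean"])
          .reset_index()
)
print(summary)
```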
In addition to university- and EPP-generated data, the EPP utilizes data from external partners and grants. Several examples are provided in exhibit 1.4.i.
The COE data manager is skilled at querying the data systems that support the EPP and possesses deep contextual knowledge of the EPP, which allows data to be prepared for compliance, reporting, and research purposes. The COE data manager also drives the annual reporting process for the EPP, as evidenced in exhibit 2.4.d.
On behalf of the EPP, the OAA reviews, revises, and deploys exit surveys for initial and advanced programs. The OAA monitors response rates and works with program faculty to implement strategies to increase them. Data from exit surveys is disseminated to program faculty regularly and included in annual data summits. Exit survey data also supports unit operations in direct and indirect ways. For example, the EPP relies on candidate exit surveys for feedback on advising, clinical placements, instructional technology resources, and comprehensive program effectiveness.
The OAA also revived the ECU Graduate Employer Survey by conducting a pilot administration of the revised survey in June 2014. Responses to the new survey were unavailable at the time of this self-study submission; however, they will be available at the time of the on-site visit. In the past, NCDPI facilitated a principal survey and summarized responses in its annual IHE reports. In 2008-09, when the survey transferred to an electronic format, response rates plummeted, making EPP-level reporting impossible; by 2010-11, NCDPI had dropped the survey administration completely.
The University maintains records of formal student complaints through appropriate offices. Grade appeals follow a policy set forth by the Office of the Registrar. The Office of Equity and Diversity (OED) has systems in place to receive and investigate complaints and allegations, such as harassment or discrimination complaints and allegations of retaliation, and to support those filing or participating in these complaint processes. The EPP adheres to university policies, as well as the practices found in the Teacher Education Handbook.
Use of Data for Program Improvement
The EPP regularly and systematically uses data to evaluate program efficacy. For example, in preparation for the University’s 2013 SACS Reaffirmation visit, each degree program was required to develop a Unit Assessment Plan for annual reporting. Each Unit Assessment Plan at the University must comply with the policy regarding student learning outcomes (SLO’s). As a result, the EPP has leveraged the reporting cycle for SACS to ensure annual and semester review of student learning outcome goals. As part of university policy, each program is required to have five SLO’s: 1) a global outcome; 2) a leadership outcome; and 3) three program learning outcomes. The unit assessment reporting process requires faculty to analyze candidate performance data to inform programmatic, pedagogical, and/or curricular changes. As a result, data-driven program improvement is evident in the Action Steps section of each unit assessment report (Step 4); see reporting template.
Candidate performance data is generated through several processes. First, each EPP program with Taskstream™ portfolios has one faculty representative identified as the evaluation manager, who has specialized open access to programmatic data in Taskstream™ (e.g., portfolio and reporting data). Then, following the unit report timeline, the COE data manager generates candidate performance reports, removes identifiers from the reports, and uploads the files to TracDat™, the University’s data warehouse adopted in preparation for SACS Reaffirmation. Department chairs and faculty unit assessment coordinators have access to all files uploaded into TracDat™.
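As a sketch of what the de-identification step might look like in practice, the Python example below drops direct identifiers and replaces candidate IDs with one-way hashes before a report is shared; the file names, column names, and hashing approach are illustrative assumptions rather than the COE data manager’s actual procedure.

```python
# Illustrative sketch only: column and file names are hypothetical,
# not the actual fields in the EPP's candidate performance reports.
import hashlib
import pandas as pd

def deidentify(report: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and replace the candidate ID with a one-way hash."""
    df = report.copy()
    # Replace the candidate ID with an irreversible pseudonym
    df["candidate_id"] = df["candidate_id"].astype(str).apply(
        lambda cid: hashlib.sha256(cid.encode()).hexdigest()[:10]
    )
    # Remove any remaining direct identifiers before sharing
    return df.drop(columns=["name", "email"], errors="ignore")

# Example: prepare a program-level report for upload
report = pd.read_csv("elem_candidate_scores.csv")
deidentify(report).to_csv("elem_candidate_scores_deidentified.csv", index=False)
```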
As part of the Pirate CODE, the edTPA implementation led to new opportunities to engage faculty in data-driven program improvement. With sustainable data available from edTPA local evaluations and national scoring through Pearson, the OAA assembled the EPP’s lead edTPA faculty, known as edTPAL’s (edTPA Liaisons), in June 2013 for the inaugural edTPA Data Summit. In addition to engaging faculty successfully, the Data Summit model generated considerable interest in data use for program improvement, which led to a second annual edTPA Data Summit.
2.2 AFI
2.2 Summarize activities and assessments that demonstrate correction of any areas for improvement from the previous visit, if applicable.
The EPP has one area for improvement to be addressed in Standard 2 from the 2006 NCATE accreditation visit:
- The assessment system does not certify that faculty regularly and systematically analyze data composites in order to improve programs and unit operations. (ITP and ADV)
To address this area for improvement, the following processes are now in place for all ITP and ADV programs in the EPP at ECU:
- Annual Unit Reporting Process
In preparation for the University’s 2013 SACS Reaffirmation visit, each educational unit (degree, certificate, and stand-alone minor program) was required to develop a Unit Assessment Plan for annual reporting. Each Unit Assessment Plan must comply with the University’s policy regarding student learning outcomes (SLO’s). University policy requires each program to have five SLO’s: 1) a global outcome; 2) a leadership outcome; and 3) three additional program learning outcomes. In fall 2013, college-based Assessment Review Committees were formed to evaluate all unit assessment plans. As a result, the EPP has leveraged the reporting cycle for SACS to ensure annual review of SLO’s.
- CTE’s Focus on Assessment
The CTE monthly agendas document a regular and systematic focus on data use at all levels of the EPP. Data shared through this outlet is central to the needs of the EPP and aligns with program approval requirements from NCDPI, as well as NCATE and CAEP accreditation. In particular, updates from the OAA to the CTE in spring 2014 document a focus on regular and systematic data use in the EPP (February, March, April 2014).
- Department Focus on Assessment
All EPP programs have been encouraged to add assessment review to monthly faculty meetings, with more focused attention during key reporting periods in alignment with the ECU Annual Unit Reporting, as noted above. Each college with programs in the EPP has an assessment committee also focused on data-driven program improvement.
- edTPA Data Summit
The aforementioned significant changes in key assessments (e.g., edTPA) have led to more valid and reliable data at the point of program completion, which faculty use for ongoing program improvement. The edTPA Data Summit unites faculty across the EPP around the common language and architecture of edTPA, building assessment literacy and solidifying an inquiry stance toward edTPA data. The edTPA Data Summit is now an annual event, which EPP faculty anticipate for collaborative analysis of unit-level data and unit-level decision-making.
- Faculty Research Presentations and Publications
As evidenced in Section 5.3, Pirate CODE innovations and practice-based research projects rely upon the integrated assessment system’s ability to code candidates by innovation and prepare data reports tailored to faculty research needs. As a result, the assessment system plays an integral role in faculty work and unit operations. The OAA prepares routine data summaries and ad hoc reports for use by faculty for program improvement and research.
In addition to the processes noted above, the Pirate CODE, the EPP’s Transformation Initiative (TI) itself, is grounded in the regular and systematic use of data to drive program improvement and increase candidate learning. As evidenced in Sections 1.3, 2.3, 3.3, and 5.3, multiple data measures are analyzed for each Pirate CODE innovation to determine implementation efficacy, candidate performance, and impact. As noted in the original TI submission, the Pirate CODE leverages COE investments in its integrated assessment system to support practice-based research; see ECU TI Proposal, page 24. After three years of implementation, exhibit TI.4.a documents not only the regular and systematic use of data for program improvement, but also refinements and improvements made to the assessment system in support of evolving Pirate CODE innovations.
2.3 Transformation Initiative
- Summarize activities and changes based on data on candidate performance and program quality that are related to the TI, if TI is related to this standard.
- Discuss plans for sustaining and enhancing progress on the TI in this area, if TI is related to this standard.
Pirate CODE innovations are supported and tracked by the Office of Assessment and Accreditation (OAA) assessment system, the Teacher Education Management System (TEMS). The robust cohort coding feature in TEMS allows the EPP to track cohorts of candidates based on certain characteristics, such as program pathway, scholarship program, or research projects. As candidates participate in a Pirate CODE innovation, the OAA assigns each candidate a code specific to the innovation and term. For example, an ELEM candidate participating in Instructional Coaching carries a coaching code, while a MIDG candidate in ISLES and Co-teaching carries two codes. As a result, each candidate’s codes reflect the instructional affordances offered to that candidate. Cohort codes allow the OAA to disaggregate performance data by innovation for reporting and research purposes and facilitate data collection and analysis at multiple levels: individual candidate, individual innovation, and combined innovations. The OAA’s ability to prepare data by innovation code allows faculty to assess the short-term and long-term goals of each innovation.
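A minimal sketch of how cohort codes could support this kind of disaggregation is shown below; the codes, column names, and data are hypothetical, and the pandas approach is purely illustrative rather than a description of TEMS itself.

```python
# Illustrative sketch only: codes, columns, and data are hypothetical,
# not the actual TEMS cohort-coding scheme.
import pandas as pd

# Hypothetical candidate records: one row per candidate per cohort code
cohorts = pd.DataFrame({
    "candidate_id": [101, 101, 102, 103],
    "innovation":   ["ISLES", "CO-TEACH", "COACHING", "ISLES"],
    "term":         ["F2013", "F2013", "S2014", "S2014"],
})

# Hypothetical performance data keyed by candidate
scores = pd.DataFrame({
    "candidate_id": [101, 102, 103],
    "edtpa_total":  [42, 38, 45],
})

# Disaggregate performance by innovation code and term for reporting
by_innovation = (
    cohorts.merge(scores, on="candidate_id")
           .groupby(["innovation", "term"])["edtpa_total"]
           .agg(n="count", mean_score="mean")
           .reset_index()
)
print(by_innovation)
```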
Key artifacts and evaluations for ITP’s and ADV’s in the EPP are stored in electronic portfolios via Taskstream™. The OAA designs and maintains all electronic portfolios used throughout the unit. In the ITP’s, artifacts required for licensure by NCDPI are collected and evaluated in one of four portfolios: the Early Experience portfolio, the Signature Assessment for Initial Licensure (SAIL) portfolio, the Internship portfolio, and the edTPA portfolio.
The Early Experience portfolio houses common assessments collected during the first course in all ITP’s. The SAIL portfolio holds artifacts required for NCDPI licensure and other program-determined assessments; programs may elect to add items to the SAIL portfolio as necessary for curriculum development. The Internship portfolio collects key assessments during the internship semester. The edTPA portfolio collects the edTPA key assessments only. Many Pirate CODE innovations collect assessments and other artifacts in Taskstream™ through one of the aforementioned portfolios.
The process for adding new artifacts to Taskstream™ portfolios has been piloted by the Pirate CODE innovations. To date, Pirate CODE innovation artifacts have been added to the Early Experience and SAIL portfolios. Before a new artifact is added to SAIL (or another portfolio), it is vetted in a separate, stand-alone portfolio. Lead faculty work with the OAA to design and develop “mini portfolios” consisting only of the items required for the innovation. The mini portfolio is used during the formal pilot period. Candidates participating in the pilot are enrolled in the mini portfolio and submit the work associated with the innovation there. The mini portfolio allows lead faculty members to easily revise evaluation rubrics while the innovation is being piloted. Once the assessment and rubric have been finalized, the artifact is moved to the SAIL portfolios for the scale-up phase.
Introductory Clinical Observation for Novice Observers/Video Grand Rounds (VGR)
- The VGR Observation Protocol is set up as a form in Taskstream™ in the Early Experience portfolio. Participating candidates complete the protocol each time they watch a video or observe a class. The items in Section 1 of the protocol are consistent for all program areas; Section 2 contains items regarding subject-specific pedagogy and is unique to each program. Candidates complete the Context for Learning using a standardized template. The document is submitted to Taskstream™ and evaluated as Meets Requirements or Does Not Meet Requirements. As part of the final exam, candidates submit an essay reflecting on the final observation, which is evaluated with a three-level rubric.
- Data collected from the VGR Observation Protocols are reported to VGR lead faculty. The results from the Context for Learning evaluations and the Final Exam Part B rubric are currently being analyzed to determine the efficacy of the model. Practice-based research studies for VGR are highlighted in Section 5.2.
- As noted on the VGR Implementation Timeline, VGR will expand to other program areas in Years 4 and 5. As a result, new protocols will be developed to address subject-specific pedagogy in each participating area. Since each program will have its own version of the protocol, the VGR artifacts will be moved into the SAIL portfolios.
- Fidelity of implementation research underway with the ISLES and edTPA innovations (see below) may impact the types of data collected as part of VGR in Years 4 and 5 of the Pirate CODE.
ISLES Instructional Strategies Modules
- Three artifacts from ISLES 3 are collected in Taskstream™: lesson plans, instructional commentary, and a video clip. Collectively, these artifacts are evaluated using a five-level rubric with five criteria. These artifacts are collected in the SAIL portfolios for Elementary Education, Middle Grades Education, and Special Education. New ISLES assignments for secondary ITP’s were in development in 2013-14 and will not be included in SAIL portfolios until the pilot phase is complete.
- Data from ISLES 3 is accessible to lead faculty in Taskstream™. Several practice-based research projects utilizing ISLES data are highlighted in Section 5.3.
- No changes are anticipated in the assignment or the rubric for ISLES 3 utilized in Elementary Education, Middle Grades Education, and Special Education. In Pirate CODE’s Years 4 and 5, ISLES modules will be incorporated into additional ITPs (see ISLES Implementation Timeline). SAIL portfolios in those programs will be updated to accommodate the ISLES artifact(s).
edTPA Preparation Modules Integrating ISD-Development Strategies
As noted in Section 1.3, the edTPA Preparation Modules were piloted in Fall 2013 and determined not to be fit for further piloting for several reasons. One reason related to Standard 2 was the inability to link candidate progress through the modules with the integrated assessment system. Without the ability to connect with TEMS or Taskstream™, the utility of the innovation to inform formative progress of candidates was seriously limited.
Clinical Internship Experience Co-Teaching Model
To date, the data on the Co-teaching innovation is collected outside of the data system. The development process of this innovation lends itself to data collection from the field through different means, including clinical teacher feedback and focus group interviews. In order to sustain the innovation, data collection will move into Taskstream™ so that data on Co-teaching is integrated with data from other Pirate CODE innovations. Not having Co-teaching data in Taskstream™ poses a limitation to the innovation, one that lead faculty and OAA plan to address as Co-teaching expands to new ITPs in the coming year; see the Co-teaching Implementation Timeline.
Clinical Internship Observation Model Support with Instructional Coaching
Instructional coaches were asked to complete the TQP Walkthrough (described in more detail in Section 1.3) up to five times for each candidate with whom they worked. The results of the multiple “look fors” on the walkthroughs were recorded in a form in Taskstream™ and evaluated as Meets Requirements/Does Not Meet Requirements by the Instructional Coach.
Data collected from the TQP Walkthroughs were reported to lead faculty for use in practice-based research projects. More details on instructional coaching research projects are highlighted in Section 5.3.
The TQP funding for the instructional coaches ended in May 2014; there will be no additional TQP Walkthroughs conducted in Years 4 and 5. More about instructional coaching as it relates to clinical practice is included in Section 3.3.
ISLES data is currently being used to develop a protocol to measure the fidelity of implementation for the EPP. Once areas of concern are identified, procedures will be developed to increase the consistency with which the Pirate CODE innovations are implemented. Changes in the types and quantity of data collected as a part of fidelity of implementation work are anticipated.
edTPA Administration
edTPA portfolios are submitted for local evaluation via Taskstream™. Prior to spring 2014, the official edTPA rubrics (five-level) were used to score the assessment. In spring 2014, the unit adopted the local evaluation protocol as directed by SCALE. The criteria for the local evaluation rubrics are the same as the official rubrics; however, local evaluation is limited to three performance levels. The overall score of Meets Requirements/Does Not Meet Requirements is also collected. Reports are generated on the overall scores as well as on the criterion level scores. Open-ended evaluator comments are archived as part of the evaluation history.
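The sketch below illustrates how three-level local evaluation scores might be summarized at the criterion level and rolled up to an overall rating; the rubric labels, data, and the decision rule for the overall rating are invented for illustration and do not represent the SCALE instrument or the unit’s actual scoring rule.

```python
# Illustrative sketch only: rubric labels, data, and the overall-rating rule
# are hypothetical, not the official SCALE local evaluation protocol.
import pandas as pd

# Hypothetical three-level local evaluation scores (1-3) per criterion
evals = pd.DataFrame({
    "candidate_id": [201, 202, 203],
    "planning":     [3, 2, 3],
    "instruction":  [2, 2, 3],
    "assessment":   [3, 1, 2],
})

criteria = ["planning", "instruction", "assessment"]

# Criterion-level report: distribution of performance levels per criterion
criterion_report = {c: evals[c].value_counts().sort_index() for c in criteria}
print(pd.DataFrame(criterion_report).fillna(0).astype(int))

# Overall rating: a made-up rule (no criterion at the lowest level)
evals["overall"] = (evals[criteria].min(axis=1) >= 2).map(
    {True: "Meets Requirements", False: "Does Not Meet Requirements"}
)
print(evals[["candidate_id", "overall"]])
```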
- Data collected from the administration of the edTPA is disaggregated and reported to the edTPAL’s in each program area. In 2013, the results were used to conduct the OAA Data Summit, where participants analyzed the results and used them as the basis for generating priority questions for the following year. The results from the 2014 edTPA administration were used for the 2014 edTPA Data Summit. edTPA data is also used by program faculty for annual unit assessment reporting for SACS.
- Few changes are anticipated in the collection of edTPA data. SCALE has stabilized the format of the assessment and is only making minor revisions that do not impact the collection of artifacts or the reporting of results. The EPP plans to explore participation in national edTPA scoring in Years 4 and 5 of the Pirate CODE, but barriers exist. First, North Carolina does not mandate the use of edTPA or participation in national scoring; the EPP has nevertheless moved forward with edTPA for the reasons noted in Section 2.1. Additionally, the cost of national scoring is a concern. Without a state mandate, the EPP does not want to pass an additional cost on to candidates, and it is unable to sustain the cost of national scoring itself in an economic climate of budget cuts and spending restrictions.
- The edTPA data is currently being used to develop a protocol to measure the fidelity of implementation for the EPP. Once areas of concern are identified, procedures will be developed to increase the consistency with which the Pirate CODE innovations are implemented. Once an acceptable level of fidelity has been reached, the true impact of edTPA at program completion can be determined and linked to P-12 student achievement.
2.4 Exhibits
- 2.4.a – Evidence of TI-related changes to the unit’s assessment system including the requirements and key assessments used at transition points, if TI is related to this standard
- 2.4.b – Evidence to support correction of areas for improvement, if any
- 2.4.c – Procedures for ensuring fairness, accuracy, consistency, and freedom of bias for key assessments of candidate performance and evaluations of program quality and unit operations
- 2.4.d – Policies and procedures for data use that demonstrate how data are regularly collected, compiled, aggregated, summarized, analyzed, and used to make improvements
- 2.4.e – Examples of significant changes made to courses, programs, and the unit in response to data gathered from the assessment system
ECU is conditionally accredited based on the National Council for Accreditation of Teacher Education (NCATE) standards, for a period of 7 years, from Fall 2015 to Spring 2022.
ECU will seek accreditation based on the Council for the Accreditation of Educator Preparation (CAEP) Standards in Spring 2022. CAEP is the single specialized accreditor for educator preparation, and administers NCATE accreditation.