This project is designed to analyze the psychometric properties of the Early Childhood Environment Rating Scale, Third Edition (ECERS-3), the latest version of the most widely used tool for assessing quality in early childhood classrooms. The project has three primary goals: (1) validate the ECERS-3 using the scoring system recommended by its authors; (2) refine the tool using item response theory (IRT) to establish an alternative scoring system, and investigate the predictive and convergent validity of the subscales created under that alternative system; and (3) disseminate findings broadly. Because the ECERS tools are used extensively by state and federal agencies, researchers, and professional development providers, ensuring that the newest version is psychometrically sound is critical for the field of early childhood education. Additionally, the alternative scoring will provide users with detailed, nuanced information about classrooms that can be used to better assess and support quality.
The ECERS-3, published in the fall of 2014, employs the same structure as its predecessor (ECERS-R): observers respond to yes/no indicators to derive scores on 7-point items. The revision incorporates new items and indicators that address the field’s growing understanding of the importance of instructional quality, provides more nuanced information about key content areas such as literacy and math, and improves measurement sensitivity. The research will take place in classrooms for 3- to 5-year-olds in Washington, Georgia, and Pennsylvania. Data will be collected via subcontracts with the agency in each state responsible for its Quality Rating and Improvement System (QRIS). The classrooms will include state-funded pre-k, child care, and Head Start programs and will be diverse with regard to classroom quality and children’s socioeconomic status. The subcontractors will collect ECERS-3 data in 900 classrooms (300 per state), Classroom Assessment Scoring System–Pre-K (CLASS Pre-K) data in 120 classrooms (40 per state), and pre- and post-test child outcomes covering the five school readiness domains from 600 children in 120 classrooms (40 per state).
To validate the ECERS-3 using the authors’ scoring system, we will investigate the factor structure of the 7-point items. Confirmatory factor analysis will test for a single factor measuring global quality, and exploratory factor analysis will test for multiple subscales. Predictive validity will be assessed by testing the association of the ECERS-3 total score and subscales with child outcomes, using three-level hierarchical linear models that account for the nesting of time (pre-/post-test) within children and of children within classrooms. Convergent validity will be tested via correlations of the total score and subscales with the CLASS Pre-K dimensions.
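The three-level structure (time nested within children, children nested within classrooms) can be sketched with a standard mixed-model fit. The snippet below is a minimal illustration using simulated data and statsmodels' variance-components pattern for three-level models; all sample sizes, effect values, and variable names (`ecers`, `y`) are invented for the example and are not taken from the study design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_classrooms, n_children = 40, 5  # illustrative sizes, not the study's

rows = []
for c in range(n_classrooms):
    u_class = rng.normal(0, 0.5)      # classroom random intercept
    ecers = rng.normal(4.0, 1.0)      # classroom-level ECERS-3 total (simulated)
    for k in range(n_children):
        u_child = rng.normal(0, 0.5)  # child random intercept
        for t in (0, 1):              # pre-test / post-test occasions
            # Growth over time is moderated by classroom quality (invented effect)
            y = 10 + 2.0 * t + 0.3 * ecers * t + u_class + u_child + rng.normal(0, 1)
            rows.append(dict(classroom=c, child=f"{c}_{k}", time=t, ecers=ecers, y=y))
df = pd.DataFrame(rows)

# Random intercept for classroom (groups=) plus a variance component for
# child within classroom -- statsmodels' idiom for a three-level model.
model = smf.mixedlm("y ~ time * ecers", df, groups="classroom",
                    vc_formula={"child": "0 + C(child)"})
result = model.fit()
print(result.params["time:ecers"])  # estimated quality-by-growth interaction
```

The `time:ecers` fixed effect plays the role of the predictive-validity test here: it asks whether children in higher-quality classrooms gain more from pre- to post-test.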
To refine the ECERS-3, we will apply IRT to the indicators, producing indicator-based subscales that measure narrowly defined aspects of classroom practice which we hypothesize will relate strongly to specific outcomes. We refer to these as ‘indicator-based’ because indicators from different 7-point items will be combined without regard to the items to which they were originally assigned. We will test 16 theoretically derived indicator-based subscales that have been substantiated with the ECERS-R, such as language/literacy, engagement, and teaching. We will test their predictive and convergent validity following the same analytic strategy used for the overall score, but will test only specific, hypothesized associations.
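As a sketch of what an IRT calibration of yes/no indicators involves, the snippet below fits a two-parameter logistic (2PL) model to simulated binary data by marginal maximum likelihood with Gauss-Hermite quadrature. The 2PL choice, the item count, and all parameter values are illustrative assumptions, not the project's specified model or software.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logsumexp

rng = np.random.default_rng(1)
n_obs, n_items = 500, 6  # illustrative: observations x yes/no indicators

# Simulate binary indicators from a 2PL model (all values invented)
theta = rng.normal(size=n_obs)                 # latent classroom quality
a_true = rng.uniform(0.8, 2.0, size=n_items)   # discrimination parameters
b_true = rng.normal(0.0, 1.0, size=n_items)    # difficulty parameters
p_true = expit(a_true * (theta[:, None] - b_true))
X = (rng.random((n_obs, n_items)) < p_true).astype(float)

# Quadrature nodes/weights for a standard-normal latent trait
nodes, w = np.polynomial.hermite_e.hermegauss(21)
logw = np.log(w / w.sum())

def neg_marginal_loglik(params):
    a, b = params[:n_items], params[n_items:]
    p = expit(a * (nodes[:, None] - b))               # (quadrature, item)
    # Log-likelihood of each observation at each quadrature node
    ll = np.log(p) @ X.T + np.log1p(-p) @ (1 - X).T   # (quadrature, obs)
    # Integrate theta out numerically, then sum over observations
    return -logsumexp(ll + logw[:, None], axis=0).sum()

x0 = np.concatenate([np.ones(n_items), np.zeros(n_items)])
bounds = [(0.2, 3.0)] * n_items + [(-3.0, 3.0)] * n_items
fit = minimize(neg_marginal_loglik, x0, method="L-BFGS-B", bounds=bounds)
a_hat, b_hat = fit.x[:n_items], fit.x[n_items:]
```

In practice the calibrated difficulty and discrimination estimates (here `b_hat`, `a_hat`) are what allow indicators from different 7-point items to be placed on a common scale and regrouped into indicator-based subscales.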