
Using implementation science to improve outcomes for students with disabilities

July 14, 2022

Despite good intentions, U.S. education systems struggle to adopt evidence-based practices that would improve the quality and effectiveness of services provided for students with disabilities. Implementation science—the multidisciplinary study of methods and strategies that promote the use of research findings in practice—addresses this gap by providing frameworks that help practitioners adopt such practices and use them consistently.

In the article “Effective Implementation Capacity to Impact Change Within State Education Systems to Support Students With Disabilities,” recently published in the Journal of Disability Policy Studies, a team of researchers—led by Caryn Ward, PhD, director of the National Implementation Research Network (NIRN) at the UNC Frank Porter Graham Child Development Institute (FPG)—explores the gap between what researchers know about best practices and what is actually implemented for children with disabilities.

Ward and her co-authors Tanya Ihlo, PhD, Kathleen Ryan Jackson, DEd, and Sophia Farmer, all of whom are implementation specialists at NIRN, examined how the State Implementation and Scaling-up of Evidence-Based Practices (SISEP) Center—funded by the U.S. Department of Education’s Office of Special Education Programs (OSEP)—used Active Implementation Frameworks, an implementation science approach to systemic change, to improve outcomes for students with disabilities.

In their article, the authors summarize the lessons learned and the considerations for practice and policy that follow from those lessons.

A central finding is the success of state education agencies (SEAs) that supported districts and schools in implementing a specific educator-student practice designed to help students with disabilities; this implementation led to improved outcomes for the children. Significantly, SEAs that implemented frameworks or processes without a measurable educator-student practice saw little or no improvement in student outcomes.

Policy makers and executive leaders—who control the resources to redesign an education system that serves diverse children, students, and communities—must co-create an implementation infrastructure with those communities. This infrastructure must be effectively used, tested, scaled, and sustained from one teacher to the next, one school to the next, one local education agency to the next, one region to the next, and one state to the next, until the system has been transformed.

To facilitate this outcome, education agencies need the skills to use implementation and improvement science practices. Attention to and development of implementation capacity can be scaled when policy makers and funders support the use of implementation science in legislation and funding opportunities. “Together, leaders and educators in active SISEP states champion the work of deep systemic change using implementation frameworks,” says Ryan Jackson. “Every action is data driven as they learn from one another classroom to classroom, school to school, and district to district.”

Key lessons

“The key lessons from our implementation efforts within SISEP serve as the foundation for continued growth, development and commitment to the families and educators of students with disabilities,” says Farmer. “Together we will continue to disrupt the systems and policies that ensure inequitable outcomes for those ‘differently abled’ and build capacity within education agencies to provide access to the best possible educational opportunities.”

When SISEP was developed in 2007, it rested on several assumptions: that evidence-based education practices were being used in most school districts and in every state; that most districts used measures to assess these practices; and that data systems to measure student learning were in place. SISEP leadership soon learned that these assumptions were largely wrong because evidence-based practices were often not identified.

SISEP then spent time developing the capacity of states and stakeholders to operationalize evidence-based practices, to work with subject matter experts on fidelity measures and relevant data systems for decision making, and to demonstrate outcomes. This experience shows that specific, usable educator-student level practices must be identified and operationalized.

The researchers emphasized the importance of examining the structure of implementation teams and then refining or repurposing an existing team to meet the current needs. Since the existing team may not have the skills needed, ongoing training and coaching are essential.

Another key lesson is ensuring that different types of data, such as training effectiveness data and fidelity data, are accessed and used by the implementation team within the first six months of the team’s formation. Without this, teams often ceased to function together. Teams also need support in effectively using data to identify and solve problems; decision-making protocols consistently produced actionable plans for improvement. Additionally, teams need to create operating procedures covering roles and responsibilities, decision-making methods, and communication protocols to ensure that they maintain focus.

SISEP also identified lessons learned around the need for effective coaching systems, the use of multiple forms of data in decision making, and policy-practice feedback loops. In addition, SISEP learned the importance of creating structures to facilitate cross-agency collaboration at both the state education agency and local levels.

During the implementation stages, support was essential so that states could outline selection criteria and develop a selection protocol to ensure a mutual fit among partners throughout the implementation process. At full implementation, the key lessons were to continue measuring fidelity, maintain high-quality support, evaluate the impact on achieving intended outcomes, and keep using data.

The SISEP Center continues to systematically evaluate outcomes and lessons learned, adjust the plan, and apply the lessons learned. “The SISEP TA Center has been engaged in this important work for the past 15 years,” says Ward. “We continue with our unwavering attention in addressing disparities in outcomes for our students with disabilities by ongoing use of data and these lessons we have learned to improve not only our supports to states but also the tools and methods of implementation.”