Implementation Division Blog: How to escape the data doldrums

Blog post author: Jake A. Leonard, MS

It’s hard to overstate data’s importance throughout the stages of implementation (Proctor et al., 2011). Data can offer vital glimpses into the otherwise opaque functioning of complex systems. Well-operationalized key performance indicators enable implementation teams to focus on opportunities for growth while celebrating their successes. Without data, implementation leadership would be forced to rely on intuition and hunches to make far-reaching decisions. Moreover, data collected through surveys and questionnaires can amplify the perspectives of building administrators, teachers, students, and others who may otherwise lack a voice throughout the implementation process. The use of data can therefore play a crucial role in ensuring equitable outcomes for people from traditionally marginalized backgrounds who are impacted by the implementation of evidence-based practices.

Data’s importance in implementation work is wonderful news for data nerds like me. For instance, the abundance of data from large-scale projects like the Effective Implementation Cohort (EIC) allows the National Implementation Research Network's data team to employ nifty statistical techniques to characterize our cohort of 19 diverse school districts and tease apart the possible impacts of implementation across several levels of analysis. Simply put, we’re able to do the really fun stuff (well, I guess that depends on your definition of fun), all while supporting impactful systems-change work that has the potential to address student opportunity gaps, improve student math outcomes, and lessen the burden on teachers of adapting to new instructional practices. However, with so much data from so many sources, I often feel overwhelmed, and while poring over the seemingly endless columns of 1s and 0s, I feel far removed from what the data represents. For me, working with large-scale datasets exacerbates an inherent shortcoming of quantitative data in the social sciences: It flattens participants' lived experiences, reducing rich, complex lives to bland numerical expressions.
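To give a flavor of what “several levels of analysis” can mean in practice: one common approach for nested education data (students within districts) is a multilevel, or mixed-effects, model. The sketch below is purely illustrative; the variable names and synthetic data are invented stand-ins, not the EIC project’s actual variables or analysis.

```python
# A minimal, purely illustrative sketch of a two-level model
# (students nested within districts). Columns and data are
# invented stand-ins, not the EIC project's actual variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_districts, students_per = 19, 30
district = np.repeat(np.arange(n_districts), students_per)
support = rng.normal(size=n_districts)[district]  # district-level predictor
math_score = 50 + 3 * support + rng.normal(0, 5, district.size)

df = pd.DataFrame({"district": district, "support": support,
                   "math_score": math_score})

# A random intercept per district lets the model separate
# district-level variation from student-level variation.
model = smf.mixedlm("math_score ~ support", data=df, groups=df["district"])
print(model.fit().summary())
```

Every number that feeds a model like this started out as a person’s answer to a question, which is exactly the tension the rest of this post is about.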

To fight against data fatigue and the emotional distance data can create, I find it helpful to call to mind an individual I’ve had a personal connection with and keep them in mind while my teammates and I craft plain-language summaries that can be shared with non-expert stakeholders and with the people and communities the data represents. Making the mindful effort to remember what’s represented within implementation data, instead of simply scrolling through a spreadsheet’s expanse of columns and rows, allows me to see data as more than mere numbers and brings me closer to the human experiences it represents.

Before diving into the strategies that ground me in those lived experiences, I want to describe more fully the ways quantitative data can alienate us from the people within it. By representing lived experiences numerically, we flatten them. Here’s an example from outside the social sciences: a parabolic equation can precisely plot a basketball’s arc as it leaves a player’s hand, floats through the air, and swooshes through the basket for a buzzer-beater that wins the championship game. On its own, without the proper contextualization and meaning making, that equation could put you to sleep, despite the deeply emotional event it precisely represents (I should know–I often accidentally napped during precalc). Even with the proper contextualization, and despite its accuracy and validity, the equation ultimately lacks the emotionality of seeing your team score with no time on the clock.
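For the curious, here is what such an equation looks like under the textbook idealization (no air resistance); the symbols are the standard kinematics ones, not anything tied to a real game:

```latex
% Height y of the ball at horizontal distance x from the release point,
% where v_0 is release speed, \theta the release angle, y_0 the release
% height, and g gravitational acceleration:
y(x) = y_0 + x\tan\theta - \frac{g\,x^{2}}{2\,v_0^{2}\cos^{2}\theta}
```

Everything emotional about the moment–the crowd, the clock, the stakes–lives entirely outside those symbols.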

Do not impose your own ponderous scientific style for communicating results but diffuse and share what you have learned together with others, in a manner that is understandable and even literary or pleasant; for science should not be necessarily a mystery or only a monopoly of jargon-loaded experts, intellectuals, and consultants. (Fals Borda, 1996, p. 179)

I experienced this flattening of lived experiences first-hand, and the frustration it brings, while working as a graduate research assistant within the Philadelphia-area emergency housing shelter system. I’d meet with caregivers–mostly moms but a few dads–of infants who were experiencing homelessness and living in shelters. During our interviews, we’d ask about the participant’s lifetime experiences of adversity, and we’d mark the presence or absence of potentially traumatic experiences–things like incarceration, the death of a loved one, physical and emotional abuse–as a simple yes or no that we later coded as a 1 or a 0. In some ways, it felt like we were doing the participants an injustice by representing such impactful events with such brevity, and I felt like there was no way our codes could encapsulate those lived experiences. However, my advisor, the project's principal investigator, justified it in a way that made a lot of sense to me: we didn’t want to pry and dig up more than we needed to, and our statistical models didn’t need anything beyond that 1 or 0. In short, to conduct statistical analyses that might offer important insights into bettering systems that support families experiencing homelessness, we had to condense these lived experiences into bit- and byte-sized chunks.
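As a concrete illustration of what that coding looked like in spirit (the item names below are invented for illustration, not the study’s actual instrument):

```python
# Hypothetical sketch of collapsing interview responses into binary codes.
# Item names are invented; they are not the shelter study's actual measures.
import pandas as pd

responses = pd.DataFrame({
    "participant_id": [101, 102, 103],
    "experienced_incarceration": ["no", "yes", "no"],
    "lost_a_loved_one": ["yes", "yes", "no"],
})

# Each rich, often painful life event becomes a single bit: 1 = yes, 0 = no.
adversity_items = ["experienced_incarceration", "lost_a_loved_one"]
coded = (responses[adversity_items] == "yes").astype(int)
coded.insert(0, "participant_id", responses["participant_id"])
print(coded)
```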

While this flattening is often a necessary evil, I take steps to lessen the emotional distance it creates, to avoid data fatigue, and to remind myself of what’s represented. First, I ground the abstract, ambiguous data (whether the raw data itself or graphical representations of it) in my own past experiences. I find it especially helpful to picture someone I’ve had personal experiences with. Educators within a school district might call this practice “centering your North Star student” while dealing with data, but I think the practice is flexible enough to extend far beyond classroom contexts.

Do not monopolize your knowledge or arrogantly impose your techniques, but respect and combine your skills with the knowledge of the research or grassroots communities taking them as full partners and co-researchers, that is, fill in the distance between subject and object. (Fals Borda, 1996, p. 179)

Here’s how this might look using student survey data from the EIC project as an example. In addition to achievement data from state math assessments, the EIC project probes students’ interest in and feelings toward math through a 30-item survey. Students’ experiences in the classroom, and the feelings that stem from those experiences, are reduced to 5-point responses, and when scrolling through the seemingly endless spreadsheet, it’s easy to see those data points as nothing more than numbers instead of students’ voices. You can breathe life into those nebulous numbers by grounding yourself in your own past experiences and picturing the face of someone you know, or have known, whose experiences might be represented in those numbers. It could be a former student from your first year of teaching. It could be your child or niece or nephew. It could even be yourself when you were in school. Think about how your “North Star student” might respond to the survey’s items and how the survey's constructs–things like their enjoyment of math or their perception of their ability to succeed–might look in their lives. The more sensory detail, the better. The goal is to ground the abstract in your own concrete experiences to close some of the emotional distance data creates.
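If it helps to picture what those survey rows actually look like, here is a purely hypothetical sketch; the item names and the “math_enjoyment” construct below are invented stand-ins, not the EIC survey’s actual items:

```python
# Hypothetical sketch: averaging 5-point Likert items into a construct score.
# Items and construct are invented; the EIC survey's actual items differ.
import pandas as pd

survey = pd.DataFrame({
    "student_id": [1, 2, 3],
    "enjoy_math_class": [4, 2, 5],      # 1 = strongly disagree ... 5 = strongly agree
    "look_forward_to_math": [5, 1, 4],
    "math_is_fun": [4, 2, 5],
})

enjoyment_items = ["enjoy_math_class", "look_forward_to_math", "math_is_fun"]
survey["math_enjoyment"] = survey[enjoyment_items].mean(axis=1)

# Each value in math_enjoyment is one student's voice compressed into a
# single number; try picturing your "North Star student" behind a row.
print(survey[["student_id", "math_enjoyment"]])
```

Each row of that output is a student; the grounding practice above is about mentally re-expanding those rows into people.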

Next, once you’ve grounded yourself by picturing your “North Star student,” making meaning from the data in plain, non-technical language can further collapse that distance. When working with data, it’s easy to become consumed by the nitty-gritty details of analysis. Those details matter, of course, but move beyond metrics of statistical significance and highlight the human story you see in the data. Write that story down in simple, clear language, and make sure people (individuals or teams) are present in most sentences.

Importantly, meaning making should not be done in isolation, because the same data might inspire very different stories depending on who’s interpreting them. Therefore, whenever possible, create and support opportunities for shared meaning making. When meeting with colleagues to share data, ask them to identify their own “North Star students.” Share your plain-language summaries with each other, and take some time to recognize the ways your identities, positions, perspectives, and past experiences have influenced the story you see in the data. Ideally, share your stories with stakeholders who are directly impacted by the implementation efforts, and recognize the expertise that comes with first-hand experience.

In sum, despite its importance, working with implementation data can feel overwhelming, and unfortunately, lived experiences have to be flattened. There are, however, a few steps we can take to push back against this flattening and highlight the humanity hidden within the data. First, picture someone you have an emotional connection with who could be represented in the data. Next, with your “North Star student” in mind, craft a story from the data that centers the human experiences involved and the real-life implications the data may have. Then share your story with others, listen to their interpretations, and recognize the ways different stories can emerge from the same data. With these steps, I find working with data becomes much less dull, and I’m reminded of why I love this work and the very real impact implementation efforts can have. I hope these steps help you, too.

References

Fals Borda, O. (1996). Power/knowledge and emancipation. Systems Practice, 9, 177–181. https://doi.org/10.1007/BF02172931

Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., Griffey, R., & Hensley, M. (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76. https://doi.org/10.1007/s10488-010-0319-7