While at Johns Hopkins, I had to take several research methods courses to ensure that I knew not only how to conduct research, but also how to define, collect, and analyze data. One of my professors (I took two of the three research methods courses with her because she’s nothing short of amazing, but that’s a story for another time) always reminded us to operationalize the variable. Now, I realize that if you’re not in the research world that phrase may not mean anything to you, but for those of us who are studying change and program effectiveness in schools…this phrase means everything.
To operationalize a variable means to define it in clear terms, preferably terms that can be measured.
By operationalizing the variable, it should be clear to everyone exactly what you mean.
If the variable is clearly defined, then it makes it easier to determine if change occurred.
To be clear, just because you operationalized the variable doesn’t mean that the program or change initiative worked. A lot of factors can and do influence change. However, to determine what changed, to what extent change occurred, and what possibly contributed to the change, one needs to start with clearly defined variables.
For example, if one wants to measure the effectiveness of a program change, then one needs to:
- Identify a problem of practice (what is the gap? what are potential drivers of the problem?)
- Design and conduct a needs assessment (how do you know it’s a problem? what does the literature reveal?)
- Operationalize the variables (what is it that you want to see changed?)
- Clearly define the instruments (what is going to measure the change? how will data be collected? when will data be collected? how will the data be analyzed? who will analyze the data?)
- Clearly define the program or intervention (what is the change initiative? what is the foundational theory of change? what are the program details and duration? who is the target of the program initiative? what are the proposed proximal, short-, and long-term outcomes?)
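One way to make the "operationalize the variables" and "clearly define the instruments" steps above concrete is to write each variable down as structured data and refuse to proceed until every field is filled in. The sketch below is purely illustrative (the class name, fields, and example values are my own, not a standard from the evaluation literature); each field corresponds to one of the questions in the checklist.

```python
from dataclasses import dataclass

@dataclass
class OperationalizedVariable:
    """Illustrative record of one variable in a program evaluation."""
    name: str                 # what do you want to see changed?
    definition: str           # the measurable definition everyone agrees on
    instrument: str           # what is going to measure the change?
    collection_schedule: str  # when will data be collected?
    analysis_method: str      # how will the data be analyzed?
    analyst: str              # who will analyze the data?

    def is_fully_specified(self) -> bool:
        """The variable counts as operationalized only if no field is blank."""
        return all(bool(value.strip()) for value in vars(self).values())

# A vaguely defined variable fails the check: "engagement" is named
# but never pinned down in measurable terms.
vague = OperationalizedVariable(
    name="student engagement",
    definition="",
    instrument="",
    collection_schedule="",
    analysis_method="",
    analyst="",
)
print(vague.is_fully_specified())  # False

# A clearly defined version of the same variable passes.
clear = OperationalizedVariable(
    name="student engagement",
    definition="percentage of class sessions in which a student asks or "
               "answers at least one question",
    instrument="weekly classroom observation tally sheet",
    collection_schedule="every Friday for one semester",
    analysis_method="compare pre- and post-semester means",
    analyst="the evaluation team",
)
print(clear.is_fully_specified())  # True
```

The point isn’t the code itself; it’s that forcing every question to be answered in writing, before the program starts, is what separates an operationalized variable from a buzzword.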
All of that is the bare minimum. I didn’t include all of the steps, but I’m sure by now you get the gist that conducting a program evaluation is not a simple or quick task.
For the evaluation to mean anything, however, it is imperative that all of the variables are defined–that is, operationalized.
I bring this up because I’ve been doing research on social-emotional learning and culturally responsive teaching. And, although these two can certainly work in tandem to build a warm and supportive classroom community, they can (and in my opinion should) be implemented separately if one wants to truly measure change.
Definition: social-emotional learning is the “process through which all young people and adults acquire and apply the knowledge, skills, and attitudes to develop healthy identities, manage emotions and achieve personal and collective goals, feel and show empathy for others, establish and maintain supportive relationships, and make responsible and caring decisions” (CASEL, 2022).
Definition: culturally responsive teaching uses “cultural characteristics, experiences, and perspectives of ethnically diverse students as conduits for teaching them more effectively” (Gay, 2002, p. 106).
One of the biggest problems I see in education is the morphing of definitions. We’ve all been privy to buzzwords in education, most of which likely started with the best of intentions. But because many of the terms were not clearly defined (that is, operationalized), educators, parents, communities, the general public, and the media have put their own spin on them. They’ve redefined the terms according to their own understanding, or they’ve seized on similarities in terminology and assumed that the terms mean basically the same thing.
And therein rests the problem.
So, what initially appeared to be something that might actually effect change, instead became watered-down, redefined, or morphed into something quite different or less effective than what was intended.
Now, I’m not saying that teachers and schools cannot implement two different frameworks, pedagogies, theories, etc. at the same time. Heck, we’ve been doing that (and more) for years. But doing so makes it impossible to know what exactly changed, to what extent it changed, and even what caused the change.
And if the end result isn’t what one expected, what happens? The theory, pedagogy, framework, or strategy is scrapped and something new is put into place.
And what a shame, since many of these theories, pedagogies, frameworks, and strategies are backed by research: evidence that they can work. But we’re hard pressed to know whether they could work in a given situation when educators were juggling multiple, competing initiatives, so it’s hard to tell what changed, what caused the change, or even why the expected change didn’t occur.
So, if you truly want to see if a (new) program is effective, then it’s critical to (1) operationalize the variable and (2) remove competing initiatives.
I would hate to see social-emotional learning or culturally responsive teaching reduced to buzzwords. Each has its merits, and I truly believe that they can effect positive change in the classroom. However, they need to be implemented with fidelity and, more importantly, clearly defined for everyone at the outset so that there’s no confusion about what the terminology means.
CASEL. (2022). Fundamentals of SEL. Collaborative for Academic, Social, and Emotional Learning (CASEL). https://casel.org/fundamentals-of-sel
Gay, G. (2002). Preparing for culturally responsive teaching. Journal of Teacher Education, 53(2), 106–116. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.294.1431&rep=rep1&type=pdf