Part 3 of 3: What Counts as Valid? Using our power and privilege to reframe rigor
By: Vantage Evaluation
As part of our participation in the Equitable Evaluation Design Sessions (hosted by the Colorado Evaluation Network, sponsored by the Colorado Health Foundation and the Colorado Trust, and a project of the Equitable Evaluation Initiative), Vantage Evaluation took on two design challenges:
In our evaluation work, how can we build personal awareness of inequities and examine their professional implications? We addressed this challenge in our last blog post, Sharing Our Stories Together to Address Inequities.
How can we expand the definition of validity in evaluation used by nonprofits and foundations in our community?
Today, we’ll focus on the second design challenge: why Vantage Evaluation decided to focus on expanding the definition of validity, what we did (or tried to do), what we learned, and where we are going next. Our hope in sharing our experiences is that others might find inspiration to engage in similar work, and that those organizations will take what we did and improve upon it.
Using our power and privilege to reframe rigor
When people think about generating valid findings, their minds often go straight to considerations like sample size: the methodological choices that academia has taught them to associate with good science. However, there is more to valid findings than methodological design: which voices are represented, how accurately they are represented, whether the data collector is trusted, whether the report is trusted, and more. This concept is called “multicultural validity.”
Traditional notions of valid methodologies reflect white, male, cisgender, middle-class ways of knowing, and they are connected to structural inequities. While pockets of the evaluation field are working to redefine validity to reduce inequity, we must also address what clients count as accurate and credible findings.
To ensure we aren’t perpetuating inequities, both evaluators and clients must expand their definition of what counts as valid. So, we set out to answer this question: How can we take a theoretical conversation about defining multicultural validity among evaluators and bring it into practice with our clients and potential clients?
The goals of this work were to:
Strengthen Vantage’s ability to articulate a broader understanding of validity.
Use our power and position to shift the conversation around validity away from purely methodological considerations in Colorado, indirectly raising awareness of inequities in the field’s current evaluation priorities.
Test language that reframes validity and identify the messages that resonate most with nonprofits, foundations, and other evaluators, contributing those lessons back to the field.
We developed some scripts for talking about multicultural validity, and set out to test them at three different points in client engagements (potential client meetings, launch meetings, and design meetings). After each use of the language, we planned to debrief on how the client reacted, what questions they had, and whether the language resonated with them. After that debrief, we planned to revise the language and test it again. The goal was to finish the project with tested, standardized language for talking about validity.
How did it go?
We were able to test the validity language at two time points: once with a potential client, and once during an Evaluation Orientation Training webinar at the start of a project. Here’s what we learned:
Scripted does not work. Validity is not a “plug-and-play” type of conversation where we can easily drop something that is pre-scripted into the flow of the conversation. Every conversation is different based on the background, context, and experience of the person or people we are speaking with.
It’s very hard to change our spiel. Our conversations and meetings follow a well-practiced flow, and it is hard to break that flow to introduce new material.
We need to understand the theory well before trying to translate it. As we got further into this process, we realized that we did not understand all of the components of multicultural validity well enough ourselves to be able to translate it effectively for our clients. For example, we did not (and still don’t) have a good understanding of theoretical validity, and could not provide examples of how it played out in evaluation projects.
While these were important lessons for us to learn, we have also reflected that life happens. Our team had other personal and professional priorities that took up the time and mental energy we were hoping to dedicate toward this project to do it well.
What comes next?
Our main takeaway from this design challenge is that it’s really hard to explain and help clients conceptualize how historical views on what counts as valid and rigorous data must expand to ensure we don’t perpetuate inequities. But we’re not giving up, and we hope others will join us and share what they learn along the way, so we can make steady progress together.
During the Equitable Evaluation Design Sessions, we had the opportunity to troubleshoot our challenges and got some great ideas from other members of our cohort. Specifically, we think we may have been trying to introduce validity too early, when it was still too hypothetical for our clients and prospective clients. As with much evaluation work, the conversation may land better once project specifics are in place.
We’ve had some early success in introducing validity later in our conversations, like this:
Go through our usual process to help the client establish key evaluation questions
Ask them what they think the answer to each key evaluation question is
Then ask: if the answer were the opposite, what would need to be in place for you to believe it?
Then, use what they say as a jumping-off point to discuss multiple valid ways of knowing!
Now it’s your turn
We tried this at Vantage... now it’s your turn! Take what we did and our lessons learned, then start to have conversations about multicultural validity and different ways of knowing with your team. Additionally, we have a lot more detail and lessons learned in the full report on our website. We’re happy to help you think through how you might get started!
Then, let us know how it went! What did you try? How did your audience react? What would you try differently next time?