How Can We Be Antiracist Institutional Researchers?
Following the recent killing of George Floyd while he was in police custody, AIR released a Statement on Racial Injustice and announced a coffee chat focused on advancing equity and inclusion in higher education. My regional institutional research association also shared a statement denouncing racism and announced plans for programming that will “facilitate a clearer understanding of the issues in the effort to move to an anti-racist society.” I was pleased to see these messages. Action is needed to fight racism and a culture of white privilege, which destroys lives, fuels social and economic disparities, and creates insurmountable barriers to health and social mobility. Beyond the important condemnations of racism, how can we be antiracist institutional researchers? In practice, we have opportunities to make antiracist decisions in various aspects of our work, including how we frame and interpret research, analyze and label data, and think about what numbers mean.
Framing and interpreting research
We often report on people by race, which is an important and sometimes federally required way to break out counts or outcome measures such as graduation rates. However, we rarely talk about racism. We are not alone in this respect. One multi-institution study of campus racial climates found that racism was deemed a “taboo” and “avoidable topic” for students, faculty, and administrators, often to avoid making people feel uncomfortable (Harper & Hurtado, 2007). A literature review of higher education research articles revealed that “most higher education researchers have attempted to take account of racial differences in college access and student outcomes, as well as in the racially dissimilar experiences of whites and minoritized persons, without considering how racist institutional practices undermine equity and diversity” (Harper, 2012). This is not to say that racism fully accounts for every instance of racial disparity, but when it may be a factor, for instance in student attrition or faculty retention, we can name it as such rather than minimize it with alternative wording such as “unwelcoming climates” or “racial tension.” Indeed, as writer and historian Rebecca Solnit (2018) reminds us, “to name something truly is to lay bare what may be brutal or corrupt—or important or possible—and key to the work of changing the world is changing the story, the names, and inventing or popularizing new names and terms and phrases.” In practice this may mean naming racism in campus climate reports without substituting softened language. It could also mean ensuring that climate studies are taken seriously rather than rewritten to sound less “harsh,” and lobbying for their results to be used to create change, an outcome that does not always follow, particularly when the results reveal a racist reality for students of color (Harper, 2015).
Analyzing and labeling data
We can sometimes get stuck in our methodological ways and forget to think about the assumptions or limitations involved in the research methods we use. A first and simple example is the decision to combine racial or ethnic groups that are represented in small numbers in a dataset. This practice has benefits: it simplifies data tables and visualizations and increases statistical power. To create a single race category of Asian American students, for instance, those identifying as Indian American might be grouped with those who identify as Korean American. Doing so overlooks these groups’ distinct cultural and social identities and obscures differences in educational opportunities and outcomes across subgroups (Museus & Kiang, 2009; Teranishi, 2007). One way to combat this is to collect and report data on finer-grained racial and ethnic subgroups when sample sizes are large enough to maintain confidentiality. Another is to spend time learning how students perceive their own racial identities (e.g., Stewart, 2009) and how they think about the race and ethnicity categories we give them to choose from in our surveys and data collection tools (e.g., Johnston, Ozaki, Pizzolato, & Chaudhari, 2014). We can also be thoughtful in how we label categories, as the language we use to describe people is powerful. For example, in our reporting and conversations, we can use humanizing language to describe undocumented students or incarcerated students, both groups disproportionately composed of people of color.
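To make the disaggregation point concrete, here is a minimal sketch in pandas. All counts, subgroup labels, and the suppression threshold are invented for illustration; institutions set their own reporting categories and confidentiality rules. The idea is to report finer-grained subgroups and suppress only the cells that are too small, rather than collapsing everyone into one aggregate category:

```python
import pandas as pd

# Hypothetical enrollment counts by detailed subgroup (numbers invented
# for illustration; real categories and counts vary by institution).
counts = pd.DataFrame({
    "subgroup": ["Korean American", "Indian American", "Hmong American"],
    "n": [212, 48, 4],
})

MIN_CELL = 5  # assumed suppression threshold; set per institutional policy

# Keep the disaggregated rows, but mask any cell below the threshold to
# protect confidentiality instead of merging all subgroups into one.
counts["n_reported"] = counts["n"].where(counts["n"] >= MIN_CELL, f"<{MIN_CELL}")

print(counts)
```

The table still shows each subgroup separately; only the single small cell is masked, so subgroup differences remain visible without identifying individual students.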
A second example of race-based assumptions embedded in our methods occurs in the commonly used practice of dummy coding. When we use white people as the reference group in regression analyses, which is typical in higher education research, we may be sending a message that white students’ experiences are the “norm” against which other students’ experiences or outcomes should be measured. Choosing a different reference group, or none at all through a methodological approach called effect coding (Mayhew & Simonoff, 2015; Ro & Bergom, 2020), avoids positioning white students’ experiences as the baseline. Of course, we are often not given the option to choose how we report race or ethnicity; IPEDS or the client requesting the survey may dictate those terms. However, as Stage (2007) points out, we can develop a general habit to “question the models, measures, and analytic practices of quantitative research in order to offer competing models, measures, and analytic practices that better describe the experiences of those who have not been adequately represented” (p. 10).
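The difference between the two coding schemes can be shown in a few lines of pandas; the student records below are invented for illustration. Under dummy coding, the dropped group (white students here) is all zeros, so every coefficient is a contrast against that group. Under effect (sum) coding, that group is instead coded −1 on every indicator, so coefficients represent deviations from the grand mean rather than from white students:

```python
import pandas as pd

# Invented student records for illustration only.
students = pd.Series(["Black", "White", "Asian", "Latinx", "White"], name="race")

# Dummy (treatment) coding: dropping one column makes that group the
# implicit reference; each coefficient contrasts a group against it.
dummy = pd.get_dummies(students).drop(columns="White").astype(int)

# Effect (sum) coding: the former reference group is coded -1 on every
# indicator, so no single group serves as the baseline and coefficients
# become deviations from the grand mean across groups.
effect = dummy.copy()
effect[students == "White"] = -1

print(dummy)
print(effect)
```

In practice you rarely build these matrices by hand; for example, patsy-style formulas in statsmodels support sum coding directly via `C(race, Sum)`.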
Thinking about what numbers mean
Counting may seem like a value-neutral activity, but in reality it is not a clear-cut or objective pursuit. Political scientist Deborah Stone breaks down for us what it means to count. Essentially, she argues, “numbers are a magic wand that resolve ambiguity into one-ness” (2018, p. 9). I admit this is one reason I love institutional research: counting and sorting make the world seem cleaner and tidier than it really is. Stone writes, “We construct numbers by solidifying bits and pieces of the swirling miasma that is our real world” (p. 9). Solidifying these “bits and pieces” helps us make sense of complex phenomena, such as patterns of student attrition or faculty satisfaction. But the “swirling miasma” that is reality is messy. I bring up the seemingly benign practice of counting in an essay about being antiracist because counting and sorting employ racial categories rooted in social histories. Our numbers are not “objective” measures divinely created and plunked down on earth for us to gather up; in fact, the act of counting can reify racist systems.
An example is the practice of counting enslaved people as three-fifths of a person for purposes of taxation and representation, a decision made by lawmakers during the 1787 U.S. Constitutional Convention. Although we need not become experts in the histories of racial groups, as we count and sort and enter numbers into tidy boxes we can remember that the categories we use to group people are, as Stone writes, based on “human judgment and cultural conventions” (p. 10). As a first step, we can treat numbers not as purely objective but as representations of human-made categories, bearing in mind that “all numbers have a social and intellectual history” (Stone, p. 10). We can make a practice of asking: Who is included in these categories and who is left out? For what reason, and how was the decision to include or exclude made? What might be the consequences of grouping people in these ways or of using these particular measures?
Lastly, many of the ideas mentioned above pertain to structural racism, but structural racism is intertwined with personal racism. To my white colleagues, an important step we can take is to work toward understanding how we as individuals play a part in reinforcing racist systems, cultures, and norms and in promulgating an ideology of “White institutional presence” embedded on so many campuses (Gusa, 2010). Indeed, we cannot “minimize our biases,” as the AIR Statement of Ethical Principles calls us to do, without understanding or acknowledging them in our own thinking and behavior.
References
Gusa, D. L. (2010). White institutional presence: The impact of Whiteness on campus climate. Harvard Educational Review, 80(4), 464–489.
Harper, S. R. (2012). Race without racism: How higher education researchers minimize racist institutional norms. Review of Higher Education, 36(1), 9–29.
Harper, S. R. (2015). Paying to ignore racism. Inside Higher Ed.
Harper, S. R., & Hurtado, S. (2007). Nine themes in campus racial climates and implications for institutional transformation. New Directions for Student Services, 120, 7–24.
Johnston, M. P., Ozaki, C. C., Pizzolato, J. E., & Chaudhari, P. (2014). Which box(es) do I check? Investigating college students’ meanings behind racial identification. Journal of Student Affairs Research and Practice, 51(1), 56–68.
Mayhew, M. J., & Simonoff, J. S. (2015). Non-White no more: Effect coding as an alternative to dummy coding with implications for higher education researchers. Journal of College Student Development, 56, 170–175.
Museus, S. D., & Kiang, P. N. (2009). Deconstructing the model minority myth and how it contributes to the invisible minority reality in higher education research. New Directions for Institutional Research, 142, 5–15.
Ro, H. K., & Bergom, I. (2020). Expanding our methodological toolkit: Effect coding in critical quantitative studies. New Directions for Student Services, 169, 87–97.
Solnit, R. (2018). Call them by their true names: American crises (and essays). Chicago, IL: Haymarket Books.
Stage, F. K. (2007). Answering critical questions using quantitative data. New Directions for Institutional Research, 133, 5–16.
Stewart, D.-L. (2009). Perceptions of multiple identities among Black college students. Journal of College Student Development, 50(3), 253–270.
Stone, D. (2018). The 2017 James Madison Award Lecture: The ethics of counting. PS: Political Science & Politics, January 2018, 7–15.
Teranishi, R. T. (2007). Race, ethnicity, and higher education: The use of critical quantitative research. New Directions for Institutional Research, 133, 37–49.
Inger Bergom is a Data/Research Analyst, Office of Institutional Research, Tufts University. She holds a Ph.D. and M.A. in higher education from the University of Michigan. She thanks Esther Enright, Spencer Piston, Hyun Kyoung Ro, and Steve DesJardins for their feedback on earlier versions of this essay. Views presented are her own and do not represent those of her employer.