"We provide accurate and contextualized information. We do not knowingly or intentionally mislead the consumers of our information."
Thanks to all who answered the call. Below is what you told us (with a few tips of our own).
From Jennifer Schon of Northwestern College we heard: "In the realm of statistical analyses, it's ensuring the guidelines for using that test (e.g., normality, homoscedasticity) are met or noting if they are not. For example, when working on a project where a non-significant difference was the desired outcome by the director, it was important to inform her and those involved in the decision that the sample size meant the chance of finding a significant result was very low so the results weren't definitive one way or the other."
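Jennifer's caution about sample size can be illustrated with a quick Monte Carlo sketch (all numbers are hypothetical, and the simple two-proportion z-test here is our choice of illustration, not her project's method): with only 15 students per group, even a real 10-point difference in rates is usually missed at the .05 level.

```python
import random

# Hypothetical power check: with n = 15 per group and true rates of
# 70% vs. 80%, how often does a two-proportion z-test reach p < .05?
random.seed(1)

def simulate_power(p1, p2, n, trials=2000):
    hits = 0
    for _ in range(trials):
        x1 = sum(random.random() < p1 for _ in range(n))
        x2 = sum(random.random() < p2 for _ in range(n))
        pool = (x1 + x2) / (2 * n)
        se = (2 * pool * (1 - pool) / n) ** 0.5
        if se and abs(x1 - x2) / n / se > 1.96:
            hits += 1
    return hits / trials

print(simulate_power(0.70, 0.80, n=15))   # typically well under 0.5
```

With power this low, a non-significant result says little either way, which is exactly the point she made to her director.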
Marla Smith at Mitchell Technical College in South Dakota said: "It's surprising how often people misunderstand the 'denominator' in any given statistic. My colleagues joke that, with me, 'it's all about the denominator!' In all seriousness, it pains me when people misuse a statistic, such as declaring that NN% 'of graduates' are employed, when the calculation was graduates employed out of graduates who responded to our survey who are/were seeking employment. My tip would be to clearly articulate and define the N for any statistic you share internally as well as publicly."
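Marla's denominator point is easy to make concrete. A minimal sketch, using invented numbers, of how the same count of employed graduates yields very different "rates" depending on which N you divide by:

```python
# Hypothetical survey numbers to illustrate the "denominator" problem.
graduates = 1200      # all graduates in the cohort
respondents = 480     # graduates who answered the survey
seeking = 400         # respondents who were seeking employment
employed = 360        # of those seeking, how many reported being employed

# Misleading framing: dividing by all graduates.
naive_rate = employed / graduates   # 0.30 -> "30% of graduates employed"?

# Honest framing, with the denominator spelled out:
rate = employed / seeking           # 0.90
print(f"{rate:.0%} of responding graduates who were seeking "
      f"employment (n={seeking}) reported being employed")
```

The label printed at the end carries the denominator with the statistic, which is her tip in executable form.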
Laura Fingerson from Strategic Education, Inc. in Minnesota wrote: "I have found showing the trends with actual values is key. I like to show a minimum of nine quarters to capture seasonality, but even more depending on what changes have happened at the institution over recent years. When we focus on the one number or the year-over-year, we lose sight of the context. E.g., a 10% increase might sound really great, but when we look at the trends and values, we might be on a long slide downwards and the starting point is really low. Focusing on just a YoY or a change from one point to another can mask these bigger trends, leading to uninformed decision-making."
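Laura's point can be sketched numerically (all figures invented): the latest year-over-year change looks encouraging even though the multi-year totals have been sliding the whole time.

```python
# Hypothetical quarterly headcounts over three years, with a seasonal
# bump in the final quarter of each year.
enrollment = [1000, 940, 960, 900,   # year 1
              880,  830, 850, 800,   # year 2
              780,  740, 770, 880]   # year 3

# Year-over-year for the latest quarter looks positive...
yoy = enrollment[-1] / enrollment[-5] - 1
print(f"Latest quarter YoY: {yoy:+.0%}")   # +10%

# ...but the annual totals tell the real story: a steady decline.
annual = [sum(enrollment[i:i + 4]) for i in range(0, 12, 4)]
print(f"Annual totals: {annual}")
```

A single +10% figure, reported without the trend line behind it, would misrepresent what is happening here.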
Ellen Peters at the University of Puget Sound wrote: "I found that one of the best things I can do is remove jargon and acronyms. I credit Jennifer Brown with the following three principles: 1) Don't obfuscate; 2) If you don't count it, it doesn't count; 3) Keep it simple. Technical and industry-specific language can be exclusive and lead to a lack of transparency. For example, when I provide peer comparison data, I don't use the acronym IPEDS without explaining what IPEDS is (and not in a footnote that no one will read, but in the text itself)."
Cheryl Arndt of Muhlenberg College commented: "I'm enjoying all the responses here. One thing I have not seen mentioned yet is the importance of visually presenting results. In my mind, that also provides a type of context. I have begun to set the standard of minimizing the use of tables and maximizing the use of graphs in presenting data. Making the data more intuitive is important to ensuring that people can easily understand it and hopefully put it to use."
Jacqui Broughton of Michigan State University noted during a session at the 2021 AIR Forum called Join the Equity Walk: How IR Can Advance DEI: "Understand what story the data are showing and make that point known. Organize your visual or table in such a way that it’s accessible and lets the story the data are telling be told, with support from the developer in terms of how it’s presented. This can be done in various ways, for example: use of color, sorting, and a descriptive title. Every person who interacts with the data will have numerous points that can be taken away, but it’s always nice to help the individual consuming it know the main thing the data are saying while allowing them to also take away their own points."
Broughton also noted: "The data represent people, so when we’re talking data, we’re talking about people. Lives. Therefore it is important to be mindful of the language we use because we are talking about people. Avoid focusing on perceived student deficits; focus instead on structural changes/inequality. Discuss differences relative to the university average instead of differences between marginalized populations and the majority group (ex: comparing outcomes of Hispanic or Black students to those of White students). Different groups have different interactions with the institution; therefore, compare to the university average, as everyone is navigating the institution differently because of how it interacts with them."
Additionally, here are strategies that have worked for us, the authors.
Sometimes, even though you've made sure your cell sizes are large enough to ensure privacy, they still aren't large enough to draw meaningful conclusions. For instance, when there are only 10 people in a group, a single person represents 10 percentage points. So in an entry cohort of 10, one person who isn't retained represents a 10 percentage point change. We've found that using rolling three-year totals helps to increase cell sizes to the point where differences carry more meaning. Often, the trends are relatively stable from one year to the next, so taking this approach smooths any irregularities while still conveying the overall trajectory of the trend.
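A minimal sketch of the rolling three-year approach, with invented counts, pooling each entry year with the two before it:

```python
# Hypothetical (retained, cohort size) counts by entry year; each cohort
# is tiny, so one student swings the single-year rate by roughly 10 points.
cohorts = {2019: (9, 10), 2020: (8, 10), 2021: (10, 11),
           2022: (7, 9),  2023: (9, 10)}

years = sorted(cohorts)
rolling = {}
for i in range(2, len(years)):
    window = years[i - 2:i + 1]                   # three entry years pooled
    retained = sum(cohorts[y][0] for y in window)
    total = sum(cohorts[y][1] for y in window)
    rolling[f"{window[0]}-{window[-1]}"] = (retained, total)
    print(f"{window[0]}-{window[-1]}: {retained}/{total} = {retained / total:.0%}")
```

With roughly 30 students per pooled window instead of 10, a single student moves the rate by about 3 points rather than 10, so year-to-year differences are less likely to be noise.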
And, though we've all heard it, sometimes it bears repeating: check the scales on your x and y axes rather than accepting whatever default the software provides. PowerPoint "helpfully" sets the axes to show trends clearly, which often exaggerates differences and trends.
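To see why the defaults mislead, compare the apparent bar heights under a truncated y-axis versus a zero-based one (hypothetical numbers):

```python
# Hypothetical retention rates for two years.
values = {2022: 82.0, 2023: 84.0}

def bar_heights(values, y_min, y_max):
    """Apparent bar heights as a fraction of the plot area."""
    return {k: (v - y_min) / (y_max - y_min) for k, v in values.items()}

truncated = bar_heights(values, y_min=81, y_max=85)   # a typical auto-fit axis
honest = bar_heights(values, y_min=0, y_max=100)      # axis starts at zero

# Truncated axis: the 2023 bar looks three times as tall as 2022's.
print(truncated[2023] / truncated[2022])   # 3.0
# Zero-based axis: the difference looks like what it is, about 2 points.
print(honest[2023] / honest[2022])         # ~1.02
```

The underlying change is identical; only the axis choice makes it look dramatic or modest.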
Michelle Appel is a Director in the University of Maryland's Office of Institutional Research, Planning, and Assessment and is currently on loan to UMD's Workday implementation serving as Reporting Lead. Michelle has more than 25 years of experience in IR and has served as President of AIR, NEAIR, and MDAIR. During her time as AIR President, Michelle led the effort to draft and adopt the AIR Statement of Ethical Principles.
Jeffrey Alan Johnson is Director of University Planning and Effectiveness and Graduate Faculty Associate in the School of Education at Utah Valley University and a leading scholar in the area of information ethics in higher education administration. His recent book, "Toward Information Justice: Technology, Politics, and Policy for Data in Higher Education Administration" (New York: Springer, 2018), breaks new ground in the study of ethics in public organizations’ use of information technologies. He previously served as President of the Rocky Mountain Association for Institutional Research. Dr. Johnson holds a Ph.D. in political science from the University of Wisconsin-Madison.