Metrics for Reporting Data

Ask eAIR invites questions from AIR members about the work of institutional research, careers in the field, and other broad topics that resonate with a large cross-section of readers. Questions may be submitted to eAIR@airweb.org.

This month’s question is answered by Terra Schehr, Assistant Vice President for Institutional Research and Effectiveness at Loyola University Maryland. The ideas, opinions, and perspectives expressed are those of the author, and not necessarily those of AIR. Members are invited to join the discussion by commenting below.

Dear Terra: How does one communicate the various definitions and methodologies for calculating the “same” general metric (e.g., FTE) to stakeholders?

Communicating the details about the metrics used in reports is an important issue. On the one hand, we want each metric to be clearly and accurately understood. On the other hand, we do not want to overwhelm or distract our stakeholders with the minutiae of our processes in IR. When pulling data for a report that will be disseminated to stakeholders, keep the question being asked front and center, along with how the information will ultimately be used. Keeping this in mind guides not only the metrics we use, but also the level of detail we provide about them. There are four basic steps that I feel are important in communicating metrics across stakeholder groups.
  • First, document the data source and any definitions used in your work. This is commonly done with footnotes or endnotes to your table or document. In your notes, avoid jargon that requires additional explanation. This helps both the recipient of the report and your own office; over time you will forget where you got the data and how you defined your metrics.
  • Second, use the same data definition as much as possible. Definitions driven by internal needs and those provided by external groups (IPEDS, NACUBO, CASE, etc.) may not always agree. Having more than one number for the “same” metric, however, can cause confusion and lead to skepticism or questions about the accuracy of your data. Pick the data definition that makes the most sense for your institution, designate it as the primary definition, and then use it across as many reports as possible so that exposure to a different definition is an anomaly and not the norm. For example, the IPEDS definition of FTE is one full-time student plus one-third of a part-time student (FT + 1/3 PT); we use this definition for all of our internal reports because the difference between this rough calculation and the real FTE (based on courses or credits) is not that great at our institution and is not worth the trouble of having multiple FTE numbers in the public domain (a rough comparison is sketched after this list).
  • Third, create a data dictionary for your office that contains all of your common metrics and their definitions (a sample entry is sketched after this list). When there are multiple definitions for a metric, document which one is the primary definition to be used and why; over time you will forget why your default definition was designated as the default. From time to time, review the dictionary and the rationale for your “primary” definition. Does it still make sense, or have the institution or its data needs changed so that another definition would be better?
  • Fourth, create a brand for your work: a set of common report structures, table layouts, chart types, color schemes, etc. Once your stakeholders can recognize a report from your office just by how it looks, and you follow the suggestions above, questions about the merit of the metrics and their definitions will decrease over time.
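
To make the FTE trade-off concrete, here is a minimal sketch in Python comparing the IPEDS formula (FT + 1/3 PT) with a credit-based calculation. All of the enrollment figures and the 12-credit full-time load are hypothetical, not data from any particular institution.

```python
# Rough comparison of the IPEDS FTE formula with a credit-hour-based FTE.
# All figures below are hypothetical and for illustration only.

full_time_headcount = 3200       # hypothetical full-time fall headcount
part_time_headcount = 450        # hypothetical part-time fall headcount
part_time_credit_hours = 2700    # hypothetical total credits carried by part-time students
credits_per_fte = 12             # assumed credit load that counts as one full-time student

ipeds_fte = full_time_headcount + part_time_headcount / 3
credit_based_fte = full_time_headcount + part_time_credit_hours / credits_per_fte

print(f"IPEDS-style FTE:  {ipeds_fte:,.0f}")
print(f"Credit-based FTE: {credit_based_fte:,.0f}")
print(f"Difference:       {abs(ipeds_fte - credit_based_fte):,.0f}")
```

With these made-up numbers the two approaches differ by only a few dozen FTE (roughly two percent), which is the kind of gap that may not justify circulating two different FTE figures.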
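A data dictionary does not need to be elaborate. Below is a minimal sketch of a single entry, assuming a plain Python dictionary as the storage format; the field names, definitions, and dates are illustrative rather than a standard schema.

```python
# Sketch of one data dictionary entry; field names and values are illustrative.

data_dictionary = {
    "FTE": {
        "primary_definition": "IPEDS: full-time headcount + 1/3 part-time headcount",
        "alternate_definitions": [
            "Credit-based: full-time headcount + (part-time credit hours / 12)",
        ],
        "source": "Fall census enrollment file",
        "rationale": (
            "Difference from the credit-based figure is small at our institution; "
            "using one number avoids multiple FTE values in the public domain."
        ),
        "last_reviewed": "2015-08",
    },
}

# Pull the primary definition when footnoting a report.
entry = data_dictionary["FTE"]
print(f"FTE ({entry['source']}): {entry['primary_definition']}")
```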
 
 

 Comments

 
 
Marlene posted on 8/17/2015 8:52 AM
Great advice! Thank you!