Assessing High Impact Practices

Ask eAIR invites questions from AIR members about the work of institutional research, careers in the field, and other broad topics that resonate with a large cross-section of readers. If you are interested in writing an eAIR article, or have an interesting topic, please contact eAIR@airweb.org.

This month’s question is answered by Ishuan Li, Associate Professor of Economics, Minnesota State University-Mankato.

The ideas, opinions, and perspectives expressed are those of the authors, and not necessarily of AIR. Subscribers are invited to join the discussion by commenting at the end of the article.

Dear Ishuan: How can faculty members or program coordinators at my institution assist IR in carrying out successful assessment of “High Impact Practices” (HIPs) such as Learning Communities?

My answer to this question is informed by my experience as a teaching faculty member and my training as an applied economist. As a faculty member at a large public university in the Midwest, my main duty is teaching undergraduate courses each semester. I have taught Learning Community (LC) courses, such as Business Statistics, serving students in the College of Business. My teaching experience includes senior capstone (Senior Research Seminar) and writing-intensive courses. Over the years, I have mentored many undergraduate research projects and supervised student independent studies and internships. Currently, I serve as the Vice President of the International Honor Society in Economics (Omicron Delta Epsilon). I also represent the division of Social Sciences on the Council on Undergraduate Research, an international organization dedicated to promoting undergraduate research. Many faculty members have probably engaged students in one of these practices, and they can be a resource that IR can tap when assessing HIPs.

The Association of American Colleges and Universities (AAC&U) provides a useful list of widely accepted HIPs. The list includes: First-Year Experiences (FYE), Common Intellectual Experiences, Learning Communities (LCs), Writing-Intensive Courses (WI), Collaborative Assignments and Projects, Undergraduate Research (UR), Diversity/Global Learning, ePortfolios, Service Learning, Community-Based Learning, Capstone Courses and Projects, and Internships [1]. For most institutions, the common characteristic across these HIPs is student self-selection into the programs. Perhaps due to their high costs [2], institutions seldom randomize assignment of their first-year students into any of these HIPs to gather experimental data for assessment purposes.

The type of assistance faculty members and program coordinators can provide IR in the assessment of LCs and HIPs depends on the outcome variable assessed, the assessment method, and data availability [3]. A brief review of quantitative research on the topic suggests that HIPs and LCs generate inconsistent student success outcomes (as measured by retention and GPA). These inconsistencies can be explained by (1) failing to control for self-selection bias, or incorrectly assuming it has been controlled for, and (2) inappropriate estimation techniques [4]. For programs that have not intentionally randomized assignment of students to LCs [5], simple raw-mean comparisons of the outcome variables are at best uninformative, and at worst they misinform decision makers.
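To see why raw-mean comparisons mislead under self-selection, consider a minimal simulation (all variable names and parameter values here are hypothetical, chosen only for illustration): when unobserved ability drives both LC enrollment and GPA, the naive LC-versus-non-LC gap bundles the true program effect together with the ability difference between the two groups.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical setup: unobserved ability drives both LC enrollment and GPA.
ability = rng.normal(0.0, 1.0, n)
joins_lc = (0.8 * ability + rng.normal(0.0, 1.0, n)) > 0  # self-selection

true_effect = 0.10  # assumed "true" LC effect on GPA
gpa = 2.8 + true_effect * joins_lc + 0.4 * ability + rng.normal(0.0, 0.3, n)

naive_gap = gpa[joins_lc].mean() - gpa[~joins_lc].mean()
print(f"true LC effect:     {true_effect:.2f}")
print(f"naive raw-mean gap: {naive_gap:.2f}")  # far larger than the true effect
```

In this sketch the naive gap comes out several times larger than the true effect, which is exactly the pattern a selection-naive assessment would misreport as program impact.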

Data analysis using observational data requires statistical techniques that can produce unbiased (or consistent) and efficient estimates. Some of the techniques that address self-selection require “instruments” for identification purposes. For instance, a researcher can use a Heckman selection model to control for self-selection into LCs, using information gleaned from data collected at various points during students’ enrollment at the institution. Faculty members and program coordinators can assist IR by identifying “good” instruments within the data to perform the statistical analysis. In the Hotchkiss et al. (2006) study, the authors used instruments such as the number of LCs offered, whether a student’s hometown was local or rural, and the number of graduates from the same high school who matriculated at the same time. However, valid instrumental variables vary according to the HIP, the assessed outcome variable, and the methodology employed.
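As a rough illustration of the two-step logic, here is a minimal sketch in Python using statsmodels. The data file, column names, and instrument choices are hypothetical stand-ins (loosely echoing the Hotchkiss et al. variables), and the sketch omits the adjustment to second-stage standard errors that a full Heckman estimator makes:

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical student-level extract; every column name is illustrative.
df = pd.read_csv("students.csv")  # lc, gpa, hs_gpa, local_hometown, n_lc_offered

# Step 1: probit for selection into the LC. The excluded instruments
# (local_hometown, n_lc_offered) shift LC participation but, by assumption,
# do not affect the outcome directly.
sel_X = sm.add_constant(df[["hs_gpa", "local_hometown", "n_lc_offered"]])
probit = sm.Probit(df["lc"], sel_X).fit()

# Inverse Mills ratio computed from the fitted selection index x'b.
xb = sel_X.dot(probit.params)
df["imr"] = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome equation on the self-selected LC participants, with the
# inverse Mills ratio added to absorb the selection term in the error.
lc = df[df["lc"] == 1]
out_X = sm.add_constant(lc[["hs_gpa", "imr"]])
print(sm.OLS(lc["gpa"], out_X).fit().summary())
```

A significant coefficient on the inverse Mills ratio is itself evidence that self-selection matters, that is, that raw comparisons of participants and non-participants were biased.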

Furthermore, once self-selection is controlled for (where possible), other corrections to the estimation may still be necessary. For instance, assessing the impact of LCs on student retention (the probability of being enrolled one year after matriculation) requires that the IR researcher address the endogeneity of GPA (as a measure of academic performance) in the retention regression: a student with a GPA below a cut-off value is more likely to drop out or face academic suspension, and GPA is itself a function of credit hours earned. To purge the correlation between the error term and the independent variables in the retention regression, IR could use a two-step IV procedure built on a valid instrumental variable. Depending on the nature and structure of the LCs, faculty instructors and LC program coordinators can help identify information in the data (such as LC course prerequisites, grading schemes, attendance policies, and other LC-related structural information) that might serve as valid IVs.
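Below is a hedged sketch of that two-step IV idea using the linearmodels package, with retention modeled as a linear probability outcome. Again, the data file, column names, and candidate instruments (an attendance-policy flag and a prerequisite flag drawn from LC structure) are hypothetical; in practice the instruments must be argued from the program’s actual design:

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical extract; 'retained' = 1 if enrolled one year after matriculation.
df = pd.read_csv("retention.csv")
df["const"] = 1.0  # linearmodels does not add an intercept automatically

# Linear probability model for retention. First-year GPA is treated as
# endogenous; LC structural features instrument for it in the first stage.
results = IV2SLS(
    dependent=df["retained"],
    exog=df[["const", "lc", "credits_earned"]],
    endog=df["gpa"],
    instruments=df[["attendance_policy", "has_prereq"]],
).fit(cov_type="robust")
print(results.summary)  # also inspect first-stage diagnostics for weak instruments
```

A control-function probit is a common alternative when the linear probability approximation is unattractive; either way, the first stage is only as credible as the instruments that faculty and program coordinators help identify.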

As IR prepares to assess HIPs and LCs, faculty instructors and program coordinators can assist by providing the critical information that allows IR to correctly estimate the impact of these programs on outcome variables such as retention and GPA. Without these useful identifiers, proper statistical analysis by IR may be unnecessarily difficult and time-consuming.

References:

  1. AAC&U, High-Impact Practices

  2. Johnson and King (2018)

  3. MacGregor (1991) and MacGregor et al. (2005) provide a summary of (and road map to) the use of Learning Communities in higher education

  4. Hotchkiss et al. (2006)

  5. Russell (2017)
