Direct Measures of Student Learning Outside the Classroom

Ask eAIR invites questions from AIR members about the work of institutional research, careers in the field, and other broad topics that resonate with a large cross-section of readers. Questions may be submitted to eAIR@airweb.org.   

This month’s question is answered by Patricia L. Gregg, Associate Director of Assessment, Georgia State University.
 
The ideas, opinions, and perspectives expressed are those of the author, and not necessarily AIR. Members are invited to join the discussion by commenting at the end of the article.
 
Dear Patti: How can I help move my institution toward direct measures of student learning outside the classroom?
 
At most institutions, faculty are becoming increasingly sophisticated at direct assessment of student learning in classes. However, assessment of student learning through co-curricular experiences still relies largely on self-reported gains and satisfaction measures. Rubrics can be an extremely effective and low-cost tool for assessing student learning outcomes outside the classroom. Such outcomes might be tied to general education goals that cut across the curriculum, an institution’s strategic plan, student development goals (e.g., leadership, civic engagement, global citizenship), institutional initiatives (e.g., QEP, AQIP), or targeted projects aimed at specific student populations. Having reliable, quantifiable evidence of these outcomes contributes to institutional success in seeking specialized accreditation, national recognition, competitive awards, donations, and grant funding.

Building rubrics can be a labor-intensive process, and the task may seem daunting. Here are some tips to make the process manageable based on your institutional context:

  • Don’t reinvent the wheel. There are numerous ready-made rubrics and templates available online. Many of them are oriented to K-12 education, but even those can serve as a useful starting point. If you search for rubrics that are higher education-based, you will find a number of university websites with excellent resource links. Rather than single out specific institutions, I recommend beginning with the sites listed below. You may find one that works as is or can be modified to fit your needs.

    • AAC&U VALUE rubrics, built as part of the LEAP initiative.
    • Kappa Omicron Nu, a website for sharing sample rubrics related to undergraduate research, student organizations, and reflection.
    • Rubistar, probably the best-known and most comprehensive rubric site, but not dedicated to higher education.

  • Manage expectations. Rubric building requires multiple iterations. Your team will identify categories and rating criteria and test them against student work products (portfolios, reflection papers, research projects, etc.). Invariably, the testing process will demonstrate areas where the rubric criteria need to be expanded, collapsed, clarified, or otherwise fine-tuned. Then the new rubric will be tested again and potentially tested several more times. Be sure to build this into your timeline and communicate to everyone on the rubric-building team that this is a long-term commitment. Depending on what resources are devoted to the project, it will likely take a full academic year to build a reliable instrument.

  • Keep the momentum. Because rubric building is an iterative process, burnout is an occupational hazard. Ideally, your rubric-building team would be compensated, if not with a monetary stipend, then perhaps with course release time or travel funding. If resources are limited, food is often a good incentive. If institutional resources are non-existent, dig into your pocket and buy some inexpensive treats. I bring PayDay and 100 Grand bars. Everyone gets the joke, and it contributes to the spirit of camaraderie. I have also brought in fruit or veggies, but somehow the chocolate always goes first!

  • Use data. At each iteration, collect data on scoring reliability and consistency. On a four-point scale, there is usually a lot of agreement about what rates a one or a four, but the twos and threes are murkier. You may find that your original rubric has too many or too few categories or rating levels. Be sure to schedule time for debriefing at the end of each scoring session so that raters can share their perceptions about which parts of the rubric they found easy to apply and which were more challenging.
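The reliability data described above can be tallied with very little tooling. As one illustrative sketch (the rater names and scores below are hypothetical, and percent agreement plus Cohen's kappa are just two common ways to quantify rater consistency; the article does not prescribe a particular statistic):

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of artifacts where both raters assigned the same score."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: two-rater agreement corrected for chance."""
    n = len(rater_a)
    po = percent_agreement(rater_a, rater_b)  # observed agreement
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if each rater scored independently at their own base rates
    pe = sum(counts_a[s] * counts_b[s] for s in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical scores from two raters on ten portfolios (4-point scale)
rater_1 = [1, 2, 3, 4, 2, 3, 1, 4, 2, 3]
rater_2 = [1, 2, 2, 4, 3, 3, 1, 4, 2, 2]

print(f"Agreement: {percent_agreement(rater_1, rater_2):.0%}")  # → Agreement: 70%
print(f"Kappa:     {cohens_kappa(rater_1, rater_2):.2f}")       # → Kappa:     0.59
```

Tracking these numbers across iterations makes the debriefing sessions concrete: if kappa rises after a rubric revision, the clarified criteria are doing their job; if raw agreement is high only at the extremes, that confirms the murkiness in the middle rating levels.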

It is true that rubrics require more up-front work than some other methods, but the investment is worth it!

 

 
