Special Features

  • Featured
  • 08.15.23

Learning Analytics and Learning Outcome Assessment: A Viable Partnership

  • by Gina Deom, Stefano Fiorini, Michael Sauer, Steve Graunke

Talking about the data

Consider the following two quotes. The first reflects the purposes of traditional learning outcomes assessment:

“Assessment asks the question regarding what students know and how they came to know it. What is important is how much a student knows and is able to do upon graduation.” (Muffo, 2001)

The second pertains to the emerging field of Learning Analytics (LA):

“Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.” (Society for Learning Analytics Research, 2023 from Siemens and Baker, 2012)

Placing these definitions side by side evokes an image of two trains traveling along parallel tracks to the same destination. The assessment movement has been largely driven by accountability demands from state governments and targeted toward administrators and faculty (Ewell, 2009). However, its primary purpose has always been to use information on student learning to better inform the educational environment and the classroom experience, and thereby to facilitate student learning. LA, meanwhile, has emerged and been advocated for by technology leaders as a way to take advantage of the myriad data generated and collected by learning tools, both to describe and improve the learning experience of students and to identify which practices may be most effective in enhancing learning (Lee, Cheung, & Kwok, 2020; although see Motz et al., 2023). In the middle of these tracks stand institutional researchers. With their traditional role in compiling and reporting data for compliance, as well as the skills to draw analytic value from a wide variety of sources, Institutional Research (IR) professionals have the opportunity to bring LA and traditional assessment data together to provide valuable insights that can improve outcomes for all students.

The purpose of this article is to highlight some of the opportunities and challenges in using LA and student learning outcomes data together to drive meaningful insights. We will include concrete examples that highlight opportunities for IR to help facilitate the intersection of the assessment and LA spaces, both by fostering strong relationships between IR professionals and LA researchers and by applying machine learning algorithms to data sets that combine traditional assessment information with digital data about the student experience and student learning.

Differences in methods and opportunities for synergy

Both the assessment and LA movements advocate enhancing the process of learning and evaluating student learning or academic progress while the learning is happening. LA is centered on the analysis of data emerging from the digital transformation of education systems (Motz et al., 2023; Lee, Cheung, & Kwok, 2020). Examples of such digital data include: records from learning management systems (LMS) that capture student use of and interaction with these systems (how much time students spend in the system, which units of the course they engage with and when, how many assignments they complete, how often they log in, whether they submit assignments late, how usage relates to performance, etc.); software systems that track students’ engagement in extracurricular activities; early alert or early warning systems; online course assignments and pedagogy versus traditional “in person” delivery; the diversity of the student body in general and within a course; faculty grading practices and trends; student engagement with advising support; student participation in academic support structures; student housing choices; and more.

At the same time, assessment professionals frequently emphasize the importance of collecting both direct and indirect evidence as part of the assessment process (Banta, Jones, and Black, 2009). Direct assessment evidence includes products in which a student must demonstrate their learning. This evidence may be in the form of tests, writing, portfolios, or other creative products scored using rubrics or other agreed upon methods (Elbeck and Bacon, 2015). The key is that the product provides a demonstration of learning that has occurred as a result of the learning experience. Indirect assessment is designed to measure what Elbeck and Bacon (2015) describe as the “covariates of learning” (p. 279). These covariates might include students' beliefs, attitudes, and perceptions of the learning experience. Using direct and indirect evidence in conjunction provides a comprehensive view of student attainment of learning outcomes as well as additional perspectives on what may have led to these outcomes. 

Using LA and assessment data together provides rich opportunities for understanding and improving the student experience. Models that predict assessment outcomes using traditional features from institutional systems (prior course performance, student demographic and academic profile information, etc.) can be augmented with student learning and behavior data during the term to 1) better understand the factors that are associated with the outcome and 2) create early alert systems that are informed by more “real-time” information, which can translate to more effective institutional support for academic success. For example, advising appointment data from an advising system and housing data from Residential Programs and Services have been incorporated into machine learning models to better understand how elements of human connection (meeting with an advisor, living in a living learning community, etc.) relate to student retention. Additionally, advisors on some campuses have coupled academic program information with LMS data to proactively reach out to students who have lower levels of engagement in the LMS compared to their peers. IR professionals not only play a technical role in helping analyze these data sources, but they also play a key leadership role in facilitating conversations across campus to help dissolve barriers to connecting such data sources.
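
As a rough illustration of the kind of augmented model described above, the sketch below combines traditional profile features with in-term engagement features to predict retention. It assumes scikit-learn and pandas, and the file and column names (e.g., student_term_features.csv, lms_logins_wk1_4) are hypothetical placeholders rather than fields from any particular campus system.

```python
# Hypothetical sketch: augmenting a retention model with in-term engagement data.
# File and column names are illustrative; binary characteristics are assumed to be 0/1 flags.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

students = pd.read_csv("student_term_features.csv")  # one row per student per term

profile_features = ["high_school_gpa", "credits_attempted", "first_generation", "pell_eligible"]
engagement_features = ["lms_logins_wk1_4", "assignments_submitted_late",
                       "advising_appointments", "living_learning_community"]

X = students[profile_features + engagement_features]
y = students["retained_next_fall"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Feature importances give a first look at which in-term behaviors relate to retention.
print(pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False))
```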

Combining traditional assessment data and LA data also provides opportunities to achieve student success at scale. LA researchers, with the assistance of IR staff, can harness large institutional data sets to conduct research on a larger scale or with a broader scope than traditional research, which is usually limited to a single course. This allows faculty to examine issues related to curriculum design and student flow through a curriculum, looking at relationships between student performance in key gateway courses, students’ choice of major, the effectiveness of certain course sequences, or the overall level of learning outcome or degree attainment of students in a given major or school. Some LA research questions, for example, address issues of student success, choice of major, or grading practices at the institutional level.

While the combination of LA and traditional assessment information yields numerous opportunities, it also comes with unique challenges. Assessment, when done effectively, should include information collected using standardized practices and in predictable cycles. This is less likely to be the case for some of the data sets used in the LA realm, which often still live in operational databases. The data are often messy and not well documented, and they contain large volumes of time-stamped records or text fields that serve the purpose of administering or providing a service to students, not necessarily creating metrics of student learning or behavior. As such, extensive work and care are needed to translate the large volume of time-stamped records or text strings into accurate and meaningful metrics for analysis. Using LA data requires working closely with experts in the field and with administrators of the data systems themselves.

In addition to creating meaningful metrics of student learning and behavior from messy data sources, specific consideration must be given to merging and aligning data elements between sources, particularly because of the time component. To gain the full benefit of combining traditional assessment outcome metrics with LA measures, student events, features, and products need to be mapped to the correct time unit of analysis, or ordered in the appropriate sequence in which the events occur within that unit. This is where the creation, maintenance, and use of crosswalks and rollups of date ranges to common instructional reporting periods (e.g., academic terms, academic sessions, start/end dates of a given week in the semester) are particularly important.

Finally, as more LA data are incorporated into the standard suite of data used to study student learning outcomes, IR professionals and LA researchers will be challenged to pursue more training in methods and techniques that yield insights from big data, unstructured data, and time series data. For example, LA draws heavily on data science, with the application of machine learning (ML) techniques and artificial intelligence (AI), and focuses on topics such as self-regulated learning and social network analysis (Baek & Doleck, 2021). Time series data lend themselves to event history analysis (more commonly known as survival analysis), which allows for modeling the effect of time-variant characteristics (e.g., participation in tutoring, meeting with an advisor, dropping a course, LMS activity) in addition to time-invariant characteristics (e.g., demographics, test scores, performance on specific assignments) on outcomes.
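
To make the time-alignment challenge concrete, the following minimal pandas sketch rolls time-stamped LMS events up to a week-of-term unit of analysis using a term calendar crosswalk. The table and column names (lms_events.csv, term_calendar.csv, event_timestamp, was_late, etc.) are hypothetical placeholders.

```python
# Hypothetical sketch: rolling time-stamped LMS events up to week-of-term.
# Table and column names are illustrative placeholders.
import pandas as pd

events = pd.read_csv("lms_events.csv", parse_dates=["event_timestamp"])           # one row per click or submission
terms = pd.read_csv("term_calendar.csv", parse_dates=["term_start", "term_end"])  # crosswalk of term date ranges

# Attach each event to its academic term (assumes events already carry a term_code),
# keep only events that fall within the term's date range, and compute week of term.
merged = events.merge(terms, on="term_code")
merged = merged[(merged["event_timestamp"] >= merged["term_start"]) &
                (merged["event_timestamp"] <= merged["term_end"])]
merged["week_of_term"] = ((merged["event_timestamp"] - merged["term_start"]).dt.days // 7) + 1

# Aggregate to one row per student per week: a unit of analysis that can line up
# with assessment checkpoints (e.g., performance on a week-5 assignment).
weekly = (merged.groupby(["student_id", "term_code", "week_of_term"])
                .agg(events=("event_timestamp", "count"),
                     late_submissions=("was_late", "sum"))
                .reset_index())
```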

Who conducts Learning Analytics research?

LA is primarily being undertaken by faculty interested in improving the learning environment and the delivery of their course material to maximize student learning for all students, often with a focus on mitigating biases that can undermine traditionally disadvantaged populations. Similarly, traditional assessment works best when leadership occurs at the level of a specific academic unit and is driven by faculty looking to improve learning outcome attainment in their courses (Banta et al., 2009). This similarity in goals is likely to yield champions capable of identifying ways to synthesize assessment and LA data to gain a full perspective of the learner experience. LA research is interdisciplinary, and researchers come from a variety of academic backgrounds, including data science, computer science, sociology, education, and psychology. On some campuses, LA research has also been conducted by faculty from Schools of Public Health, Schools of Business, Anthropology, Library and Information Sciences, Student Academic Centers, Schools of Public Administration, Economics, and English, to name a few (Lam et al., forthcoming).

Opportunities for collaboration between LA researchers, Assessment professionals, and IR practitioners

The data used by LA researchers are often managed, cleaned, and stored by IR professionals, who have experience working with institutional data sources and positioning them in ways that facilitate reporting and analytics. IR professionals usually have access to the LMS and other common LA data sources (or can easily obtain it). Faculty conducting LA research, by contrast, typically do not have such access, but can obtain the data from IR professionals, sometimes with Institutional Review Board (IRB) approval. Thus, there is a need for collaboration between LA researchers and IR professionals to enable LA research to occur. This collaboration can consist of helping faculty gain access to and position the data they need for their research, but may also include assisting with research design and analytic support in cases where faculty lack that training. Furthermore, new and emerging technologies have allowed IR offices to develop unique ways of assisting faculty with LA projects, for example using statistical and programming packages to leverage ML algorithms to create comparison populations similar in characteristics to the population of students being analyzed, in order to test the significance or effect of a particular course, program intervention, or pedagogy. This can help create a data-informed feedback loop that supports faculty and staff in the continuous improvement of instruction and campus programs.
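
One way an IR office might implement the matched-comparison idea described above is with a simple propensity-score, nearest-neighbor match. The sketch below uses scikit-learn and SciPy with hypothetical column names (participated, course_grade, etc.); it illustrates the general technique rather than describing any particular office's workflow.

```python
# Hypothetical sketch: building a matched comparison group for a program intervention
# using propensity scores and nearest-neighbor matching. Column names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from scipy import stats

df = pd.read_csv("course_intervention.csv")
covariates = ["high_school_gpa", "credits_earned", "first_generation", "pell_eligible"]

# Estimate each student's probability of participating, given observed covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["participated"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["participated"] == 1]
control = df[df["participated"] == 0]

# For each participant, find the non-participant with the closest propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# Compare the outcome (e.g., course grade) between participants and their matches.
t, p = stats.ttest_ind(treated["course_grade"], matched_control["course_grade"])
diff = treated["course_grade"].mean() - matched_control["course_grade"].mean()
print(f"Mean difference: {diff:.2f} (p = {p:.3f})")
```

In practice, a paired test or regression adjustment on the matched sample, along with balance diagnostics on the covariates, would be preferable to the simple comparison shown here.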

References

Baek, C., & Doleck, T. (2021). Educational data mining versus learning analytics: A review of publications from 2015 to 2019. Interactive Learning Environments, 1-23.

Banta, T. W., Jones, E. A., & Black, K. E. (2009). Designing effective assessment: Principles and profiles of good practice. San Francisco, CA: Jossey-Bass.

Elbeck, M., & Bacon, D. (2015). Toward universal definitions for direct and indirect assessment. Journal of Education for Business, 90, 278-283. https://doi.org/10.1080/08832323.2015.1034064

Ewell, P. T. (2009, November). Assessment, accountability, and improvement: Revisiting the tension. (Occasional Paper No. 1). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Lam, C., Rehrey, G., Shepard, L., Sauer, M., & Herhusky-Schneider, J. (forthcoming). What’s the Big Deal about Big Data? Learning Analytics and the Scholarship of Teaching, Learning and Student Success. In J. Miller-Young, B. McCollum, & L. Hays (Eds.), Educational Technology and the Scholarship of Teaching and Learning: Asking Questions about our Practices.

Lee, L. K., Cheung, S. K., & Kwok, L. F. (2020). Learning analytics: current trends and innovative practices. Journal of Computers in Education, 7, 1-6.

Motz, B., Bergner, Y., Brooks, C., Gladden, A., Gray, G., Lang, C., ... & Quick, J. (2023). A LAK of direction: Misalignment between the goals of learning analytics and its research scholarship. Journal of Learning Analytics.

Muffo, J. (2001). Institutional effectiveness, student learning, and outcomes assessment. In R. D. Howard (Ed.), Institutional research: Decision support in higher education (pp. 60-87). Tallahassee, FL: Association for Institutional Research.

Siemens, G., & Baker, R. S. J. D. (2012). Learning analytics and educational data mining: Towards communication and collaboration. ACM International Conference Proceeding Series, 252–254. https://doi.org/10.1145/2330601.2330661

Society for Learning Analytics Research. (2023). What is learning analytics? https://www.solaresearch.org/about/what-is-learning-analytics/


Gina Deom is a data scientist in the Division of Institutional Analytics at Indiana University. Gina serves as a data scientist with the Research and Analytics team. In this role, she provides research and analysis to support campus initiatives and student success. She also explores new areas of inquiry in machine learning, statistical analysis, causal inference, and data mining to enhance institutional effectiveness and student success outcomes. Gina has given several presentations at national and international conferences and has earned best paper awards from INAIR, AIR, and LAK. She holds a Master’s degree in Applied Statistics from Bowling Green State University and a Bachelor’s Degree in Statistics and Actuarial Mathematics from Saint Mary’s College.


Stefano Fiorini, Ph.D., is Manager for Research and Analytics in the Division of Institutional Analytics at Indiana University. Stefano coordinates the work of the Research and Analytics team. He holds a degree in Natural Sciences from the University of Bologna and a master’s degree and Ph.D. in Social and Cultural Anthropology, with a minor in population studies, from Indiana University Bloomington. He conducts applied research and analytics in co-participation with stakeholders, the goal being the co-identification of data, its use to produce actionable knowledge, and the establishment of cyclical analytic-to-action practices. Building on social science frameworks, his approach has progressively integrated knowledge from a variety of academic disciplines and practices: institutional analysis and development, Participatory Action Research, statistics, qualitative analysis, and learning analytics.


Michael Sauer is Lead Information Management Analyst for Research and Analytics at Indiana University. Mike began his career in institutional research at IU in 2002 and currently serves as an Information Analyst on the Research and Analytics team. Prior to his time in institutional research, he taught Spanish and Portuguese at IU. He holds a Certificate in Underwater Resource Management from IU, is a certified yoga instructor, and is an Indiana Master Naturalist. He enjoys being with family, reading, observing nature, watching IU sports, and learning languages. Mike holds a Master of Public Affairs in Environmental Policy and Natural Resource Management, a Master’s in Latin American and Caribbean Studies (both from Indiana University), and a Bachelor’s in History from the College of Wooster.


Steve Graunke serves as Interim Director of Analysis and Institutional Effectiveness within Institutional Analytics at Indiana University. In his role, Steve promotes the use of data-informed decision making across all levels through reporting of enrollment information, completion of data requests from schools and academic units, and studying the effectiveness of student success programs.
