Assuring Quality: An Institutional Self-Assessment Tool for Excellent Practice in Student Learning Outcomes Assessment

New Leadership Alliance for Student Learning and Accountability (2012)  

Reviewed by Catherine Horn 

Assuring Quality is the second in a series of monographs produced by the New Leadership Alliance. Its intent is to provide institutions a self-assessment tool ultimately useful in improving assessment of student learning. The tool includes questions developed across 29 criteria (e.g., “institution-wide student learning outcome statements are easily understood by internal and external stakeholders”) in eight areas that, according to the authors, “comprise the essential indicators of high-quality student learning outcomes assessment and accountability practices,” and it guides institutional stakeholders through a step-by-step process of review and action planning. 

There is much that is valuable about this tool. First, it provides a specific and comprehensive set of considerations for institutional leaders to study in determining the extent to which student learning outcomes are achievable, observable, and measurable. It also encourages universities to thoughtfully include the perspectives of a broad constituency of stakeholders (e.g., students, parents, graduates, employers, the general public) in the review of institutional assessment practices and learning outcomes. Finally, it consciously acknowledges the importance and interrelatedness of both academic and co-curricular programs in the assessment and improvement of learning outcomes.  

In responding to the Alliance’s request to provide feedback on this first edition of the tool, I offer three observations. First, while carefully focused on the components of assessment, the current form of the tool does not offer a clear opportunity for more specific reflection on measurement or testing. While some might argue that such an observation is simply splitting hairs, at least a few psychometricians might disagree. Calfee (1993), for example, argues that “assessment leads a hierarchy in which testing and measurement are in the service of a greater purpose” (p. 3). In its current form, the tool asks its users to identify how they know whether assessment is being undertaken, but does not explicitly prompt users to also unpack and critique the components that underlie those assessments. To the extent that assessment is being understood through questionable testing or measurement practices, for example, the answer to the question of “how” does not reveal the more important answer of “how well.” 

Second, the tool appropriately and usefully concentrates review on individual (a term used broadly to represent leaders, units, stakeholders, etc.) contributions to the institutional whole, but does not as clearly draw attention to review of available institutional structures able to cultivate conditions conducive to such efforts. As an example, Area 1 seeks to assess whether “an ongoing and integrated commitment to achieving student learning outcomes is visible in the actions of the campus community.” The eight specific criteria focus on visibility and communication of commitment as well as the pervasiveness and collaborative nature of assessment efforts. The final criterion asks whether a “process is in place to ensure that expectations for student learning outcomes assessment…are met,” but does not also ask whether the systems needed to facilitate that process (e.g., a center for teaching and learning or a faculty development program focused on assisting in the development of high-quality tests) are also in place.  

Finally, the tool uses a framework of aggregating understanding of assessment practices (e.g., to the program, to the institution) as a means of empowering universities to make evidence-based decisions about student learning. Such efforts may also aid in the development of an assessment “culture.” But the tool might be strengthened through a more deliberate recognition of the critical role of disaggregation (rather than reliance on the aggregated whole) in producing the best information. As one example, Bensimon, Hao, and Bustillos (2006), writing about the ways in which institutions might consider an equity index as part of their self-assessment and accountability efforts, note that “the disaggregation of data by race and ethnicity, particularly in relation to outcomes, is not a routine practice with the exception of data on college access. Thus, even though the values of diversity and equity are espoused in the mission statements…progress toward their attainment is not something that is monitored” (pp. 157-159). 

For institutional leaders seeking to seriously engage in conversation about student learning, the tool provided in Assuring Quality is a productive starting place. It is also a reminder that any such institutional effort, if successful, moves quickly from a checklist at the beginning to a complex, integrated, and ongoing process.  

Bensimon, E. M., Hao, L., & Bustillos, L. T. (2006). Measuring the state of equity in public higher education. In P. Gándara, G. Orfield, & C. Horn (Eds.), Expanding opportunity in higher education: Leveraging promise (pp. 143-166). Albany, NY: SUNY Press. 

Calfee, R. (1993). Assessment, testing, measurement: What’s the difference? Educational Assessment, 1(1), 1-7.

Catherine Horn is Associate Professor of Educational Psychology at the University of Houston. 
