Committing to Quality: Guidelines for Assessment and Accountability in Higher Education

New Leadership Alliance for Student Learning and Accountability (2012)  

Reviewed by Jonathan D. Fife 

This pamphlet proposes a four-step process that an institution should use to fulfill its commitment to quality: (1) set clearly articulated, ambitious goals for student achievement; (2) gather evidence of how successfully these goals are being achieved by measuring actual performance against them; (3) use this evidence—that is, link the results of the measurements to practices—to adjust practices so they are more effective in achieving the institution’s goals; and (4) make the evidence public, reporting the institution’s current performance along with the specific actions the institution will take to improve these results.

These steps to create change have proven successful in the 40 years since the organizational quality and effectiveness concepts were introduced in the late 1970s. The limitation of this pamphlet lies in the limited guidance it offers for selecting the tools an institutional unit needs to implement these four steps successfully, so that the steps become a continuous part of the unit’s daily process for improvement.


In Step 1, the pamphlet asserts that “there is a general agreement about the desired outcomes of undergraduate education.” This may be true from a generalized national perspective, but it may not hold at the institutional level. Two key ingredients are necessary for the implementation of any goal:

Consensus that gives legitimacy to the goals. This pamphlet provides a list of national organizations that support the concept of continuous quality improvement. It is an impressive list that can aid IR professionals in demonstrating the national political support for this effort. However, what is of greater importance is for an institution to identify the key individuals, both external and internal, who are the most influential in supporting and implementing these goals. These people must be involved both in setting the institution’s “ambitious goals” and in evaluating the institution’s efforts. How these people are identified and used in Steps 1 and 3 is critical to the institution’s success.

Foundational resources or capacity. Does the institution have the basic elements needed to achieve its ambitious goals? Examples include student intellectual abilities and aspirations, faculty competence, and institutional financial resources. Achieving ambitious goals depends first on having the basic resources to do so. Too often, the ambitions of an institution are not achieved because it fails to assess its base resources. IR offices can assist with this crucial start by benchmarking the institution’s resources against those of institutions that have successfully achieved similar goals.

Quality Improvement as a Continuous Cycle 

In order for IR professionals to create the maximum impact desired by the endorsers of this pamphlet, they must recognize that these four steps represent a closed system. This does not mean that it is closed to external or new information—quite the contrary. A closed system or process is one that takes new information and builds and refines itself; it is “closed” because it never ends. These four steps lead to a more informed and refined Step 1.  

Another way of describing these four steps is to break them into more discrete steps that form a continuous improvement or adaptation cycle: identify specific stakeholder goals → use agreed-on measurements of performance that include processes and resulting events (outcomes) and achievements (outputs) → analyze the results of these measurements → identify what might be producing desired, undesired, or unsatisfactory results → make incremental adjustments (e.g., recruit different students, hire faculty with different skills, increase or reassign resources) → repeat this analytical cycle after modifying the goals, processes, and measurable results and achievements in Step 1. Emphasized throughout the pamphlet is the need to involve all stakeholders in the continuous adaptation cycle. For IR, this means that measurement without context (stakeholder-defined goals related to results/outcomes) usually is ineffective.

For IR offices, the weakness in this pamphlet is not in its message—there is a powerful external stakeholder base that is expecting institutions to create systematic approaches to assess clearly defined student learning goals. The weakness is in identifying the ways to develop consensus on appropriate measurements. The concept of appropriate measurements has many aspects: 

• Is a specific learning goal measurable?

• Are the measurements acceptable to the relevant stakeholders?

• Does the measurement produce information that can be used to improve results? For example, the percentage of students who graduate (an outcomes measurement) does not measure improvement in student learning, nor does it relate to student employment or continued education (output measurements).

This pamphlet was not written for institutional researchers. Rather, it was written to motivate leaders of higher education institutions to embrace assessment and accountability in order to create measurable, or more important, actionable evidence so that their institutions could know both how they compare to similar institutions and how they could become better at what they aspire to accomplish.

Yet this pamphlet is important to institutional researchers in that it provides enormous support for the raison d'être of institutional research offices: to identify, gather, analyze, and relate data and measurements that will help a higher education institution know where it is and what it can do to be better. In The Handbook of Institutional Research (Jossey-Bass, 2012), Chapter 37, “Tools for Improving Institutional Effectiveness,” provides the context and guidance on various measurements that are missing from this brief pamphlet.

Jonathan D. Fife is a visiting professor at Virginia Tech. 
