• Featured
  • 09.22.20

Test-Optional Admissions? IR Has Insights

  • by Steven Graunke, Director of Institutional Research and Assessment, Indiana University-Purdue University Indianapolis

In 2018, the University of Chicago announced that it would no longer require students to submit test scores in order to be admitted (Kmetz, 2018). Its admissions process would become test-optional, meaning that students could choose not to submit ACT or SAT scores when applying. The fact that a prestigious institution such as the University of Chicago adopted a test-optional approach prompted other institutions to evaluate their own admissions policies. The COVID-19 pandemic may have accelerated this trend.

An Inside Higher Ed article by Scott Jaschik (2020, March 30) reported that at least 17 universities adopted test-optional admissions in March, while several others were considering new admissions policies. Many of these changes were designed to be temporary, given the large number of test centers that had to close during the pandemic (Jaschik, 2020, March 23). However, some universities may eventually decide to make their test-optional policies permanent.

Following the death of George Floyd and the ensuing protests, many institutions began systematically exploring ways to promote racial equity. Extending test-optional admissions may be one way to provide opportunity to students who may be systematically disadvantaged by known biases in standardized, norm-referenced tests. Rothstein (2004) conducted a comprehensive assessment of the validity of the SAT in predicting first-year GPA and found that high school GPA had much stronger predictive validity than SAT scores. More recently, Geiser (2017) used data from the University of California to examine the effects of race and ethnicity, family income, and parents’ level of education on SAT scores, and found that some items on the test may be biased against African American and Latinx test takers. Geiser further argued that since California law prevents the use of race in college admissions, the University of California could not use a test that was known to be biased against African American and Latinx students.

Institutional Research (IR) professionals have an important role to play in test-optional discussions. Many IR professionals possess the knowledge and ability to conduct studies to determine whether other factors may have a stronger relationship to student success than the SAT. Such studies can be useful in informing campus leaders and other relevant decision makers about the utility of test-optional admissions for their campus. When our institution began to consider whether to adopt test-optional admissions, we in the Office of Institutional Research and Decision Support (IRDS) developed logistic regression models to illustrate the effect of standardized test scores on fall-to-fall retention of new beginners, net of the effects of other measures of academic ability (high school GPA, receipt of an Indiana Honors diploma, and number of hours of pre-college credit earned). We found that SAT scores had only a modest effect on fall-to-fall retention. Results from one institution may not be applicable to another, which is why it is important for IR professionals to study the relationship between test scores and student success on their own campuses.
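The kind of analysis described above can be sketched in a few lines. The example below is a minimal illustration, not IRDS's actual model: the data are synthetic, the variable names are hypothetical, and the simulated effect sizes are chosen only to show how a logistic regression can estimate the effect of SAT scores net of other academic measures. Predictors are standardized so their coefficients are roughly comparable across scales.

```python
# Illustrative sketch only: logistic regression of fall-to-fall retention on
# SAT score net of other academic measures, using SYNTHETIC data. Variable
# names and effect sizes are hypothetical, not IUPUI's actual data or results.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
sat = rng.normal(1050, 120, n)                    # SAT total score
hs_gpa = np.clip(rng.normal(3.2, 0.5, n), 0, 4)   # high school GPA
honors = rng.integers(0, 2, n).astype(float)      # 1 = Honors diploma
dual_credit = rng.poisson(6, n).astype(float)     # pre-college credit hours

# Simulated "truth": retention driven mainly by HS GPA, only weakly by SAT
logit = -4.0 + 1.2 * hs_gpa + 0.001 * sat + 0.3 * honors + 0.02 * dual_credit
retained = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Standardize predictors so coefficients are comparable across scales
X = np.column_stack([sat, hs_gpa, honors, dual_credit])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

model = LogisticRegression().fit(X_std, retained)
coefs = dict(zip(["sat", "hs_gpa", "honors", "dual_credit"], model.coef_[0]))
print(coefs)  # standardized log-odds coefficients for each predictor
```

In a real study, the fitted coefficients (or the corresponding odds ratios) would show whether test scores add meaningful predictive power once high school performance is accounted for.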

Should institutions decide to go test-optional, IR professionals should also work with staff in admissions to identify alternative criteria that may be helpful in admissions decisions. Adelman’s (1999) Academic Resources Index, for example, uses information obtained from a high school transcript to estimate the difficulty and quality of work a student completed in high school. Adelman’s work and other resources could be useful in the development of institution-specific indices to aid in admissions decisions. We in IRDS plan to work with our colleagues in undergraduate admissions to develop similar measures using student transcript data.

Making test score submission optional has impacts well beyond the admissions process itself. Specific academic programs may require test scores, particularly STEM programs where readiness for Calculus is important for success. Merit-based financial aid may be limited to students within a specific test score range. Test scores may also supplement placement testing in determining appropriate math and writing courses. Understanding the many and varied ways test scores are used is important, and IR professionals have the skills necessary to conduct relevant research on the broader impact of test-optional policies. For example, at IUPUI, we developed decision trees for each school to discover high school GPA cut points that might be associated with a higher probability of fall-to-fall retention. We also examined the relationship between test scores, race, and socioeconomic status in order to help schools understand how test-optional admissions might expand opportunities for students. These analyses provided every school at IUPUI with the information needed to set criteria for direct admission into academic programs.
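A decision tree of the kind mentioned above can surface candidate GPA cut points directly from the data. The sketch below is a hypothetical illustration using synthetic data with a built-in retention jump near a GPA of 3.0; in practice, the analysis would be run separately on each school's admitted students, and the tree depth and data would differ.

```python
# Illustrative sketch only: a shallow decision tree over high school GPA to
# surface a cut point associated with higher fall-to-fall retention.
# SYNTHETIC data with a retention jump built in near GPA 3.0.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 3000
hs_gpa = np.clip(rng.normal(3.1, 0.6, n), 0.0, 4.0)

# Simulate a jump in retention probability at a GPA of about 3.0
p_retain = np.where(hs_gpa >= 3.0, 0.82, 0.60)
retained = (rng.random(n) < p_retain).astype(int)

# A depth-1 tree finds the single split that best separates retained
# from non-retained students
tree = DecisionTreeClassifier(max_depth=1).fit(hs_gpa.reshape(-1, 1), retained)
cut_point = tree.tree_.threshold[0]  # the GPA threshold the tree chose
print(f"Candidate GPA cut point: {cut_point:.2f}")
```

Deeper trees with additional predictors (course rigor, dual credit, etc.) would yield richer rules, but even a single split like this gives a school a concrete, data-grounded threshold to discuss.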

Test-optional admission has impacts even after students are enrolled. Reporting of test scores to IPEDS and in the Common Data Set is still required. Test scores can also be valuable covariates in studies assessing student learning or determining program effectiveness. Students may also have submitted a test score to a university when taking a test, then later decide to withdraw that score from consideration in the admissions decision. Is that test score still valid to use for other purposes? We in IRDS plan to use specific groups at IUPUI, such as the Data Inquiry Group (which discusses specific data sources and their appropriate use) and the Strategic Information Council (which is charged with translating data into specific decisions), to discuss appropriate use and interpretation of test scores in reporting and decision making. IR professionals have a responsibility for promoting data literacy on their campuses, and the appropriate use of standardized test scores provides an excellent opportunity for these discussions.

The AIR Statement of Aspirational Practice (Swing & Ross, 2016) suggests that IR professionals should adopt a student-focused perspective, in which IR is utilized for the benefit of all students. Contributing relevant data, research skills, policy knowledge, and data literacy expertise to test-optional conversations can help IR staff attain this goal. Decisions on test-optional admissions should be based on individual institutional contexts, and IR staff can help ensure those decisions are carefully considered and made in the interest of students.


Adelman, C. (1999). Appendix C: Gradations of academic intensity and quality of H.S. curriculum. In C. Adelman, Answers in the tool box: Academic intensity, attendance patterns, and bachelor's degree attainment. Washington, DC: U.S. Department of Education. Retrieved from https://www2.ed.gov/pubs/Toolbox/AppendixC.html

Geiser, S. (2017). Norm-referenced tests and race-blind admissions: The case for eliminating the SAT and ACT at the University of California (Research & Occasional Paper Series: CSHE.15.17). Berkeley, CA: Center for Studies in Higher Education. Retrieved from https://files.eric.ed.gov/fulltext/ED580807.pdf

Jaschik, S. (2020, March 23). Coronavirus leads to cancellations of testing dates. Inside Higher Ed. Retrieved from https://www.insidehighered.com/admissions/article/2020/03/23/testing-organizations-cancel-dates-due-coronavirus-change-ap-program

Jaschik, S. (2020, March 30). Coronavirus drives colleges to test optional. Inside Higher Ed. Retrieved from https://www.insidehighered.com/admissions/article/2020/03/30/coronavirus-leads-many-colleges-including-some-are-competitive-go-test

Kmetz, D. (2018, June 14). UChicago launches test-optional admissions process with expanded financial aid, scholarships. UChicago News. Retrieved from https://news.uchicago.edu/story/uchicago-launches-test-optional-admissions-process-expanded-financial-aid-scholarships

Rothstein, J. M. (2004). College performance predictions and the SAT. Journal of Econometrics, 121, 297-317. doi:10.1016/j.jeconom.2003.10.003

Swing, R. L., & Ross, L. E. (2016). Statement of aspirational practice for institutional research. Tallahassee, FL: Association for Institutional Research. Retrieved from http://www.airweb.org/aspirationalstatement