Learning Outcomes Assessment for Educational Improvement

In February, the AIR Executive Office had the honor of hosting Satoko Fukahori of Japan’s National Institute for Educational Policy Research. She joined AIR while touring U.S. higher education institutions to investigate IR offices and learn more about the assessment data they collect and analyze. Her stops included James Madison University and the University of Central Florida.

Interview by Elaine Cappellino and Patrick Evanson

eAIR: What is the main focus of your work at the National Institute for Educational Policy Research?

The National Institute for Educational Policy Research (NIER) is the research arm of the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT). Our mission is to provide the government with evidence on which to ground education policy and practice, and to share this information with the wider public.

My role at NIER is to conduct research on higher education policy and practice, focusing particularly on issues concerning the quality assurance of university education. In this capacity, I have been conducting national and international research on competence-based education reform, learning outcomes assessment, and program evaluation.

My recent research projects include the OECD Assessment of Higher Education Learning Outcomes (AHELO) and Tuning, and in this line, I have been exploring approaches to using assessment for program improvement.

eAIR: How is the field of institutional research evolving in Japan?

The importance of IR functions was first formally recognized in 2008, in a report by the University Council, “Universities in the 21st Century and Directions for Reform,” which emphasized the need for transparency and accountability to taxpayers and consumers of higher education.

Since then, IR has been evolving at an accelerated pace, with increasing emphasis on using data for strategic planning in response to the severe decline of the 18-year-old population. Many universities are being forced not only to be accountable for the quality of the services they provide, but also to be more strategic and competitive in their operations.

In 2011, the Enforcement Regulations for the School Education Law were partially amended, requiring universities to report on nine basic elements associated with the quality of their education.

Reporting was first done by individual institutions in non-standardized ways, which then led to a more systematic effort: the development of two different “University Portrait” systems, one for national and prefectural universities (managed by the National Institution for Academic Degrees and University Evaluation) and another for private universities (managed by the Promotion and Mutual Aid Corporation for Private Schools of Japan). The “University Portrait” for private universities was successfully launched in 2014. Although participation is not mandatory, it is expected that most universities will participate.

In preparation for this new environment, many universities have set up IR offices. According to a MEXT survey, by 2012, 9.9% of universities had central offices with an IR designation and 15.4% had offices that served IR functions, while 69.1% responded that they did not have a centralized office serving IR functions.

eAIR: Did you have specific learning outcomes in mind as you planned your visit to the U.S.? What were they?

As in the United States, there has been much policy interest [in Japan] in assessing student learning and using the data for educational improvement. However, there has been discomfort among academics who view this as another form of faculty evaluation or university ranking.

There are also academic concerns about the feasibility and cost effectiveness of learning outcomes assessment—particularly in the majors—as well as concerns about the relevance of transversal/generic skills assessment in measuring the outcomes of a disciplinary-based university education.

From this standpoint, my questions focused on two issues associated with the use of learning outcomes assessment for educational improvement:

  1. The organizational structure and reporting lines of offices with IR functions: accountability and reporting, strategic planning, teaching and learning, etc.

  2. The types and uses of assessment data for program improvement in general education and in the majors.

eAIR: Given your expected learning outcomes, was there anything surprising about what you observed?

  1. With regard to the organizational structure and reporting lines of IR offices, I was surprised to find great variation, which I understand to be a function of multiple factors: the size of the institution, state legal structures, the historical foundation of the offices, the academic/professional backgrounds of IR staff, their organizational power in relation to the departments and the administration, and the abundance of funding/resources.

  2. The organizational structure seems also to be constantly changing in response to increasing demands for more efficiency and sophistication. High-level discussions at the AIR Executive Office helped me put into context what I had observed in practice. I realize that this is an area with great potential for research, and I am very much looking forward to the findings from AIR’s current research projects.

  3. With regard to the types and uses of assessment data, I was impressed to find strong collaboration between the faculty/general education committees and assessment professionals at the Center for Assessment & Research Studies at James Madison University, and the Office of Operational Excellence and Assessment Support at the University of Central Florida.

It was also striking to see how rubrics for program evaluation were being used to improve programs, and how assessment data was being incorporated into the evaluation process.

eAIR: Is there anything you learned on your trip that you think will be of great immediate value to your work?

In Japan, I am working with faculty to reach agreement on expected student learning outcomes through the collaborative development of assignments and assessment instruments in various disciplines. The next step is to discuss with faculty and policy makers how best to use these instruments to improve education, and to institutionalize this process in sustainable ways. What I learned from this trip will help me engage stakeholders in substantive discussion.

It was also encouraging and empowering to meet colleagues with social science backgrounds playing distinctive roles in improving education in the disciplines. Institutional knowledge management is a new profession in Japan that needs to be quickly but thoughtfully developed.