  • Board Corner | Board News
  • 02.26.26

AI and the AIR Statement of Ethical Principles

  • by Jillian Morn

The availability and ease of use of generative Artificial Intelligence (AI) tools is rapidly reshaping the context of work across many industries. This has led some members to question whether we should revise our Statement of Ethical Principles to specifically reference generative AI. During the January 30th AIR Board meeting, we discussed revisiting the Statement of Ethical Principles to incorporate guidelines on the use of generative AI in Institutional Research work. Ultimately, the Board believes the value of our current set of ethical principles lies in their evergreen nature. Even as our analytical tools and regional contexts evolve, these directives, derived from our initial principles and feedback from our membership, remain applicable across settings and situations. We feel strongly that the current set of professional principles speaks directly to the ethical use of generative AI without ever having to name it specifically.

During the 2025 AIR Forum, I presented a speaker session on crafting pragmatic AI data usage policies for IR and related offices. I ended the session by urging attendees to be cautious in their adoption of generative AI and to ensure that their use of any tool aligns with our stated ethical principles. Below are the direct connections I highlighted between the ethical use of AI and our published Statement of Ethical Principles.

“We recognize the consequences of our work. The analytic algorithms and applications we build and/or implement, as well as the policy decisions incorporating information we analyze and disseminate, impact people and situations.” Data-informed organizational decision making depends on accurate data, supplied with contextual understanding and human judgment. There are significant consequences if we pass along hallucinated results or unchecked biases from generative AI tools. Additionally, a vital component of accepting the consequences of our work is accountability; generative AI tools can never be held responsible for inaccurate decisions or data. Accountability must remain with the individuals using these tools: we may rely on them for enhanced analysis, but we are responsible for verifying the results and presenting contextualized findings.

“We acknowledge that the individuals whose information we use have rights, derived from both legal and ethical principles that can cross national borders…. We make intentional efforts to protect their information from misuse or use that could cause them harm. We protect privacy and maintain confidentiality when collecting, compiling, analyzing, and disseminating information…. We act as responsible data stewards. We secure the data and information over which we have control, following generally accepted guidelines and professional standards for physical and electronic security and data sharing.” Publicly available generative AI tools are a significant concern for student and employee data privacy. Data entered into public tools can be used to train underlying models, violating data privacy laws. Organizations and higher education institutions embracing generative AI rely on the stated commercial data protections of enterprise licenses to comply with these data privacy laws. Commercial data protections typically guarantee that data is only stored locally and will not be used to train underlying models. However, some commercial data protections note that copies of user prompts and results are permanently held by the company for discovery and quality assurance testing. Students and employees from the European Union, for example, may have elements of their records permanently held, violating their right to be forgotten under GDPR. Data professionals considering supplementing their analysis with generative AI must familiarize themselves with the commercial data protections of their tools and consider the implications of retained prompts and results on student and employee data privacy.

“We provide accurate and contextualized information. We do not knowingly or intentionally mislead the consumers of our information.” Double-checking data is second nature in institutional research, whether that is formalized double data entry procedures for IPEDS submissions, independent review by peers, or running your own code again before submitting just to be sure. Validating data accuracy is a crucial part of IR’s role in supporting decision making, and it is just as important when using generative AI to support analysis. Cautious practitioners might limit generative AI use to validating prior human-driven analysis or to checking convergence of themes between a sample and a full population of qualitative responses, for example. Given documented concerns with data hallucinations and data integrity, verifying the accuracy of insights gleaned from generative AI tools remains as important as ever in our work.

“We seek to be fair and transparent, minimizing our own personal biases in our research assumptions, methodologies, and conclusions.” As institutional researchers and other higher education data professionals, we bolster confidence in the data we provide by being fair and transparent with our assumptions, methods, and conclusions. It is important to discuss response rates and the representativeness of survey data in the context of the results. Likewise, it is valuable to disclose when we use tools like generative AI to support our analysis. Regardless of how deeply we choose to engage with generative AI tools in our practice, we should strive to be transparent about their application and our process.

The end of our Statement of Ethical Principles states: “We recognize that technological advancements have and will continue to impact our work. We remain committed to serving as educators and role models on the ethical use of data to benefit students and institutions and to improve higher education.” Indeed, it is our duty to serve as educators and role models on the ethical use of generative AI in higher education settings.

All this to say, our current ethical principles provide excellent guidance on the responsible use of generative AI in Institutional Research settings. The technology, tools, and contexts of our work will continue to change and develop over the years, but our role as data stewards and research advocates will remain as evergreen as the ethics that scaffold our field. At the end of the day, we must act with integrity.