ARTIFICIAL INTELLIGENCE

New InfoSec Requirements Imminent: Understanding NIST 800-171 and How it May Collide with Generative AI Technology in Higher Education

Bridging the Gap: The Interplay of Data Protection Standards in Higher Education and the Rise of Generative AI Technologies – Understanding the Implications of NIST 800-171 Compliance and Navigating the Challenges Presented by Advanced AI Systems in an Evolving Regulatory Landscape.

August 16, 2023

This article, covering NIST 800-171, is the third in our three-part series on data privacy in higher education and the risks introduced by generative AI. Part 1 covers FERPA, GDPR, and CCPA, and Part 2 covers GLBA.

Introduction

In recent years, the U.S. Department of Education has hinted at the importance of compliance with NIST 800-171, a set of guidelines aimed at protecting Controlled Unclassified Information (CUI) in non-federal systems. On February 9, 2023, the Department issued a notice urging institutions to begin aligning with NIST 800-171, emphasizing its distinction from the GLBA Safeguards Rule requirements at 16 C.F.R. Part 314 (Official Notice). This forward-looking perspective explores why colleges and universities must take notice of these developments and the potential compliance challenges with certain providers of generative AI technologies.

The Growing Imperative of NIST 800-171

NIST 800-171 compliance is no longer confined to federal agencies and their contractors. With the Department's notice, higher education institutions are being nudged towards adopting these stringent security measures. The urgency underscores a broader shift towards robust data protection in an era of escalating cybersecurity threats.

Why NIST 800-171 Compliance Matters

  1. Protects Sensitive Information: Colleges and universities handle vast amounts of CUI. Compliance ensures the confidentiality and integrity of this data.
  2. Aligns with Future Regulations: The Department's notice is a harbinger of possible mandatory compliance, and early adoption positions institutions favorably.
  3. Avoids Penalties: Non-compliance could lead to civil and criminal penalties, including fines and litigation, as well as reputational damage.
  4. Enhances Trust: Demonstrating commitment to data security can enhance trust among students, faculty, partners, and regulators.

Generative AI: A Double-Edged Sword

Generative AI, epitomized by OpenAI's ChatGPT, offers remarkable capabilities but poses unique challenges. Understanding these challenges is crucial for institutions exploring AI's frontiers.

Risks and Non-Compliance

  1. Data Handling and Transmission: Sending CUI to external AI models may breach NIST 800-171's access control requirements (NIST SP 800-171, 3.1.1).
  2. Audit and Accountability: Lack of transparency in AI processing may conflict with audit requirements (NIST SP 800-171, 3.3).
  3. Risk Assessment: Failure to assess AI-related risks could lead to non-compliance (NIST SP 800-171, 3.11).
  4. Third-Party Vendor Compliance: OpenAI's privacy policy and terms may not align with NIST 800-171, necessitating careful legal scrutiny (OpenAI Terms).
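The first risk above is the most concrete: once CUI is included in a prompt sent to an external model, the institution has lost control of it. One mitigation is to screen prompts locally before anything leaves institutional systems. The sketch below is purely illustrative, assuming hypothetical identifier patterns; a real CUI screen would need to be far broader and driven by institutional policy, not two regular expressions.

```python
import re

# Hypothetical patterns for two common identifier formats. A production
# CUI screen would cover many more categories (student IDs, grant numbers,
# export-controlled terms, etc.) and would be policy-driven.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

def safe_prompt(text: str) -> str:
    """Screen a prompt before it is sent to an external AI service.

    Redaction runs locally, so the raw identifiers are never transmitted,
    which is the point of NIST SP 800-171's access control requirements.
    """
    return redact(text)

prompt = "Advise student jdoe@example.edu, SSN 123-45-6789, on aid options."
print(safe_prompt(prompt))
```

A screen like this does not make an external service compliant; it only reduces what is exposed if the service's handling of data falls short.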

The Road Ahead

  1. Comprehensive Assessment: Institutions must assess their use of AI against NIST 800-171 requirements, seeking legal and technical expertise.
  2. Policy Alignment: Aligning with OpenAI's terms may require negotiation and tailored agreements, considering the nuances of NIST 800-171.
  3. Ongoing Vigilance: Regular monitoring and adaptation are essential as regulations evolve and AI technologies advance.
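A comprehensive assessment ultimately means mapping each AI use case to the NIST SP 800-171 control families it touches and tracking where gaps remain. The sketch below shows one minimal way to record that mapping; the use case, the families listed, and the pass/fail statuses are hypothetical examples, not findings about any real service.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One institutional AI use case and the control families it touches."""
    name: str
    # Maps control family (e.g. "3.1 Access Control") -> satisfied?
    control_families: dict = field(default_factory=dict)

    def gaps(self) -> list:
        """Return the control families where the review found a gap."""
        return [fam for fam, ok in self.control_families.items() if not ok]

# Hypothetical example: an advising chatbot backed by an external LLM.
chatbot = UseCase(
    "Advising chatbot backed by an external LLM",
    {
        "3.1 Access Control": False,            # CUI may leave our systems
        "3.3 Audit and Accountability": False,  # vendor logs not auditable
        "3.11 Risk Assessment": True,           # risk review completed
    },
)

for gap in chatbot.gaps():
    print(f"Gap: {gap}")
```

Even a simple record like this gives legal and technical reviewers a shared artifact to update as vendor terms, regulations, and the AI deployment itself evolve.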

Penalties for Violation: A Stark Warning

Non-compliance with NIST 800-171, if mandated, could lead to severe penalties:

  1. Civil Penalties: Fines and legal actions could burden institutions financially and damage their reputation.
  2. Criminal Penalties: Willful neglect or intentional misconduct may lead to criminal charges, with severe legal consequences.
  3. Loss of Federal Funding: Non-compliance could risk federal funding, a critical lifeline for many institutions.

Conclusion

The convergence of NIST 800-171 and generative AI presents a complex landscape for higher education institutions. The Department of Education's notice marks a critical juncture, necessitating proactive compliance efforts. Understanding NIST 800-171, evaluating the risks of generative AI like ChatGPT by OpenAI, and preparing for potential regulations are vital steps. Institutions must navigate this maze with caution, foresight, and expertise to harness AI's potential without falling afoul of emerging compliance imperatives.

Important Note

This article is meant to serve as a resource for understanding potential information security concerns surrounding the use of large language models in higher education in light of NIST 800-171. It does not, however, constitute legal advice. It is advisable to seek legal counsel before making decisions based on the information provided in this article. Always consult with a qualified professional to address your particular needs and circumstances.

Joe Licata

Founder


© 2016 - 2024 Canyon GBS LLC. All rights reserved.

Advising App™, Aiding App™, Retaining App™, Relating App™, and Instructing App™ are products created by Canyon GBS™
