Generative AI and Compliance with The Gramm-Leach-Bliley Act: A Guide for Colleges and Universities in North America

In this article we explore the rapid adoption of large language models like OpenAI's ChatGPT and whether the use of such tools by colleges and universities is compliant with the Gramm-Leach-Bliley Act (GLBA).

August 8, 2023

Note: This is part two, covering GLBA, in our three-part series on data privacy in higher education and the risks introduced by generative AI. Part 1 covers FERPA, GDPR, and CCPA, and Part 3 covers NIST 800-171.


In the era of digital transformation, higher education institutions are embracing technological innovations to enhance teaching, learning, and administration. The rise of large language models (LLMs) like OpenAI’s ChatGPT offers promising opportunities but also raises critical legal and ethical concerns. This article explores the intricate relationship between generative AI and GLBA compliance, shedding light on potential risks and offering guidance for colleges and universities in North America.

The Gramm-Leach-Bliley Act (GLBA): An Overview

The Gramm-Leach-Bliley Act, enacted in 1999, requires financial institutions to explain how they share and protect consumers’ personal financial information (Federal Trade Commission [FTC], 2019). Although primarily targeting banks and financial services, the GLBA’s Safeguards Rule extends to higher education institutions that handle financial transactions, such as student loans and tuition payments (U.S. Department of Education, 2016).

Under the GLBA, colleges and universities must implement written information security programs, including administrative, technical, and physical safeguards, to protect student and staff financial information (FTC, 2019). These measures are not only a legal obligation but are crucial in maintaining trust and integrity in the educational system.

Why GLBA Compliance Matters for Colleges

The applicability of GLBA to higher education institutions may come as a surprise to some. However, the financial transactions they conduct, such as processing tuition fees and managing student loans, fall under the purview of this Act. Specifically, each institution that participates in the Title IV programs has agreed in its Program Participation Agreement (PPA) to comply with the GLBA Safeguards Rule under 16 C.F.R. Part 314. Compliance with GLBA is not just a legal requirement but a fundamental aspect of institutional integrity, transparency, and accountability.

Non-compliance with GLBA may lead to loss of reputation, erosion of trust, and potential legal and regulatory actions. Moreover, the use of AI systems like ChatGPT without adequate privacy safeguards can further complicate the compliance landscape, potentially exposing institutions to significant risks.

New GLBA Requirements

Effective June 9, 2023, the updated GLBA Safeguards Rule requires institutions to ensure the security, integrity, and confidentiality of student information and to protect it against unauthorized access and anticipated threats. The objectives of the GLBA standards for safeguarding information are to:

  • Ensure the security and confidentiality of student information;
  • Protect against any anticipated threats or hazards to the security or integrity of such information; and
  • Protect against unauthorized access to or use of such information that could result in substantial harm or inconvenience to any student (16 C.F.R. 314.3(b)).

An institution’s or servicer’s written information security program must include the following nine elements from the FTC’s regulations:

  1. Designates a qualified individual responsible for overseeing and implementing the institution’s or servicer’s information security program and enforcing the information security program (16 C.F.R. 314.4(a)).
  2. Provides for the information security program to be based on a risk assessment that identifies reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of customer information (as the term customer information applies to the institution or servicer) that could result in the unauthorized disclosure, misuse, alteration, destruction, or other compromise of such information, and assesses the sufficiency of any safeguards in place to control these risks (16 C.F.R. 314.4(b)).
  3. Provides for the design and implementation of safeguards to control the risks the institution or servicer identifies through its risk assessment (16 C.F.R. 314.4(c)). At a minimum, the written information security program must address the implementation of the minimum safeguards identified in 16 C.F.R. 314.4(c)(1) through (8).
  4. Provides for the institution or servicer to regularly test or otherwise monitor the effectiveness of the safeguards it has implemented (16 C.F.R. 314.4(d)).
  5. Provides for the implementation of policies and procedures to ensure that personnel are able to enact the information security program (16 C.F.R. 314.4(e)).
  6. Addresses how the institution or servicer will oversee its information system service providers (16 C.F.R. 314.4(f)).
  7. Provides for the evaluation and adjustment of its information security program in light of the results of the required testing and monitoring; any material changes to its operations or business arrangements; the results of the required risk assessments; or any other circumstances that it knows or has reason to know may have a material impact on the information security program (16 C.F.R. 314.4(g)).
  8. For an institution or servicer maintaining student information on 5,000 or more consumers, addresses the establishment of an incident response plan (16 C.F.R. 314.4(h)).
  9. For an institution or servicer maintaining student information on 5,000 or more consumers, addresses the requirement for its Qualified Individual to report regularly and at least annually to those with control over the institution on the institution’s information security program (16 C.F.R. 314.4(i)).

Institutions or servicers that maintain student information for fewer than 5,000 consumers are only required to address the first seven elements.
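As a purely illustrative aid (not part of the regulation itself), the applicability rule above can be sketched as a small helper: the nine elements map to subsections (a) through (i) of 16 C.F.R. 314.4, and the last two apply only at the 5,000-consumer threshold. The element summaries below are paraphrased from this article, not official regulatory text.

```python
# The nine Safeguards Rule elements, keyed by the subsection of
# 16 C.F.R. 314.4 that defines each one (summaries paraphrased).
SAFEGUARDS_ELEMENTS = {
    "a": "Designate a qualified individual to oversee the program",
    "b": "Base the program on a written risk assessment",
    "c": "Design and implement safeguards (314.4(c)(1)-(8) minimums)",
    "d": "Regularly test or otherwise monitor safeguards",
    "e": "Policies and procedures so personnel can enact the program",
    "f": "Oversee information system service providers",
    "g": "Evaluate and adjust the program as circumstances change",
    "h": "Establish an incident response plan",
    "i": "Qualified Individual reports at least annually to leadership",
}

def required_elements(consumer_count: int) -> list[str]:
    """Return the subsection keys an institution or servicer must
    address. Elements (h) and (i) apply only when student information
    is maintained on 5,000 or more consumers."""
    keys = list(SAFEGUARDS_ELEMENTS)
    return keys if consumer_count >= 5000 else keys[:7]
```

For example, a servicer holding records on 4,999 consumers would address only elements (a) through (g), while one holding records on 5,000 would address all nine.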

Potential Civil and Criminal Penalties for Violation

Violating the GLBA can result in severe consequences for colleges and universities. The civil penalties can range from fines to corrective actions, such as implementing additional safeguards or undergoing regular audits (FTC, 2019). Criminal penalties may include imprisonment for responsible individuals, especially in cases of willful neglect or intentional misconduct (U.S. Department of Justice, 2001).

The changes to the Safeguards Rule are effective June 9, 2023. Any GLBA findings identified through a compliance audit, or any other means, after the effective date will be resolved by the Department during the evaluation of the institution’s or servicer’s information security safeguards required under GLBA as part of the Department’s final determination of an institution’s administrative capability. GLBA related findings will have the same effect on an institution’s participation in the Title IV programs as any other determination of non-compliance.

In cases where no data breaches have occurred and the institution’s or servicer’s security systems have not been compromised, if the Department determines that an institution or servicer is not in compliance with all of the Safeguards Rule requirements, the institution or servicer will need to develop and/or revise its information security program and provide the Department with a Corrective Action Plan (CAP) with timeframes for coming into compliance with the Safeguards Rule. Repeated non-compliance by an institution or a servicer may result in an administrative action taken by the Department, which could impact the institution’s or servicer’s participation in the Title IV programs.

Generative AI and Compliance Challenges

Generative AI solutions like ChatGPT can be powerful tools for enhancing education, but they can also put colleges and universities in hot water if not handled with care. These AI models may inadvertently access, process, or share personal financial information in ways that violate the GLBA’s Safeguards Rule.

For instance, if an AI system is used to facilitate financial transactions or provide financial advice to students without proper security measures, it could lead to unauthorized access or disclosure of sensitive information. This misuse can trigger legal liabilities, especially if the AI vendor’s privacy policy and terms of service are not aligned with GLBA requirements.
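One common technical safeguard is to scrub likely financial identifiers from text before it ever leaves the institution's systems for an external LLM API. The sketch below is a minimal illustration of that idea, not a complete or recommended solution: the regex patterns, function names, and placeholder labels are our own assumptions, and real deployments would need far more robust detection (for example, a dedicated PII-scanning service) since simple regexes miss many cases.

```python
import re

# Illustrative patterns for common U.S. financial identifiers.
# These are assumptions for demonstration only; regexes alone will
# both miss real PII and occasionally flag innocent digit strings.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # Social Security numbers
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # payment card numbers
    "ROUTING": re.compile(r"\b\d{9}\b"),                 # bank routing numbers
}

def redact_financial_pii(text: str) -> str:
    """Replace likely financial identifiers with typed placeholders
    before the text is sent to any third-party AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

def build_safe_prompt(user_text: str) -> str:
    """Gatekeeper that would sit between campus users and an external
    LLM API call (the actual API call is deliberately omitted here)."""
    return redact_financial_pii(user_text)
```

A redaction layer like this addresses only one narrow risk; it does not substitute for the vendor due diligence, contractual terms, and written security program the Safeguards Rule requires.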


Generative AI offers tremendous potential for enhancing higher education but must be approached with caution, especially concerning compliance with the Gramm-Leach-Bliley Act. Colleges and universities must understand the legal landscape, implement robust privacy safeguards, and ensure that their use of AI aligns with federal regulations. Failing to do so can result in significant civil and criminal penalties, tarnishing the reputation of the institution.

Important Note

This article is meant to serve as a resource for understanding the potential privacy concerns surrounding the use of large language models in higher education and the associated legal protections in place. It does not, however, constitute legal advice. Laws and regulations often differ from one jurisdiction to another and are subject to change. It is advisable to seek legal counsel before making decisions based on the information provided in this article. Always consult with a qualified professional to address your particular needs and circumstances.

Joe Licata


© 2023 Canyon GBS LLC. All rights reserved.

We provide CRM, data, analytics and professional services to colleges and universities.
