In this article we explore the rapid adoption of large language models (LLMs) such as OpenAI's ChatGPT and whether the use of such tools is compliant with the Gramm-Leach-Bliley Act (GLBA) at colleges and universities.
Note: This is part two, covering GLBA, of our three-part series on data privacy in higher education and the risks introduced by generative AI. Part 1 covers FERPA, GDPR, and CCPA, and Part 3 covers NIST 800-171.
In the era of digital transformation, higher education institutions are embracing technological innovations to enhance teaching, learning, and administration. The rise of large language models (LLMs) like OpenAI's ChatGPT offers promising opportunities but also raises critical legal and ethical concerns. This article explores the intricate relationship between generative AI and GLBA compliance, shedding light on potential risks and offering guidance for colleges and universities in North America.
The Gramm-Leach-Bliley Act, enacted in 1999, requires financial institutions to explain how they share and protect consumers' personal financial information (Federal Trade Commission [FTC], 2019). Although primarily targeting banks and financial services, the GLBA's Safeguards Rule extends to higher education institutions that handle financial transactions, such as student loans and tuition payments (U.S. Department of Education, 2016).
Under the GLBA, colleges and universities must implement written information security programs, including administrative, technical, and physical safeguards, to protect student and staff financial information (FTC, 2019). These measures are not only a legal obligation but are crucial in maintaining trust and integrity in the educational system.
The applicability of GLBA to higher education institutions may come as a surprise to some. However, the financial transactions they conduct, such as processing tuition fees and managing student loans, fall under the purview of this Act. Specifically, each institution that participates in the Title IV programs has agreed in its Program Participation Agreement (PPA) to comply with the GLBA Safeguards Rule under 16 C.F.R. Part 314. Compliance with GLBA is not just a legal requirement but a fundamental aspect of institutional integrity, transparency, and accountability.
Non-compliance with GLBA may lead to loss of reputation, erosion of trust, and potential legal and regulatory actions. Moreover, the use of AI systems like ChatGPT without adequate privacy safeguards can further complicate the compliance landscape, potentially exposing institutions to significant risks.
Effective June 9, 2023, the GLBA requires institutions to ensure the security, integrity, and confidentiality of student information and to protect it against unauthorized access and other threats. The objectives of the GLBA standards for safeguarding information are to:

1. Ensure the security and confidentiality of customer information;
2. Protect against any anticipated threats or hazards to the security or integrity of that information; and
3. Protect against unauthorized access to or use of that information that could result in substantial harm or inconvenience to any customer.
An institution’s or servicer’s written information security program must include the following nine elements set out in the FTC’s regulations (16 C.F.R. § 314.4):

1. Designate a qualified individual to implement and supervise the program;
2. Base the program on a risk assessment;
3. Design and implement safeguards to control the identified risks, including access controls, data inventory, encryption, secure software development practices, multi-factor authentication, secure data disposal, change management, and monitoring of authorized users’ activity;
4. Regularly test or otherwise monitor the effectiveness of those safeguards;
5. Implement policies, procedures, and training so that personnel can carry out the program;
6. Oversee service providers;
7. Keep the information security program current;
8. Establish a written incident response plan; and
9. Require the qualified individual to report in writing, at least annually, to the institution’s board of directors or equivalent governing body.
Institutions or servicers that maintain student information for fewer than 5,000 consumers are only required to address the first seven elements.
Violating the GLBA can result in severe consequences for colleges and universities. The civil penalties can range from fines to corrective actions, such as implementing additional safeguards or undergoing regular audits (FTC, 2019). Criminal penalties may include imprisonment for responsible individuals, especially in cases of willful neglect or intentional misconduct (U.S. Department of Justice, 2001).
The changes to the Safeguards Rule are effective June 9, 2023. Any GLBA findings identified through a compliance audit, or any other means, after the effective date will be resolved by the Department during the evaluation of the institution’s or servicer’s information security safeguards required under GLBA as part of the Department’s final determination of an institution’s administrative capability. GLBA related findings will have the same effect on an institution’s participation in the Title IV programs as any other determination of non-compliance.
In cases where no data breaches have occurred and the institution’s or servicer’s security systems have not been compromised, if the Department determines that an institution or servicer is not in compliance with all of the Safeguards Rule requirements, the institution or servicer will need to develop and/or revise its information security program and provide the Department with a Corrective Action Plan (CAP) with timeframes for coming into compliance with the Safeguards Rule. Repeated non-compliance by an institution or a servicer may result in an administrative action taken by the Department, which could impact the institution’s or servicer’s participation in the Title IV programs.
Generative AI solutions like ChatGPT can be powerful tools for enhancing education, but they can also put colleges and universities in hot water if not handled with care. These AI models may inadvertently access, process, or share personal financial information in ways that violate the GLBA's Safeguards Rule.
For instance, if an AI system is used to facilitate financial transactions or provide financial advice to students without proper security measures, it could lead to unauthorized access or disclosure of sensitive information. This misuse can trigger legal liabilities, especially if the AI vendor's privacy policy and terms and conditions are not aligned with GLBA requirements.
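One practical safeguard in this vein is to mask sensitive financial identifiers before a prompt ever leaves the institution's systems. The sketch below is a minimal, hypothetical illustration using simple regular-expression redaction; the patterns, labels, and `redact` helper are our own assumptions, and a production deployment would need far broader detection (for example, named-entity recognition, allow-listing, and logging) to satisfy a Safeguards Rule program.

```python
import re

# Illustrative patterns for a few common U.S. financial identifiers.
# NOTE: examples only -- real PII/NPI detection needs much broader coverage.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ACCOUNT": re.compile(r"\b(?:acct|account)\s*#?\s*\d{6,}\b", re.IGNORECASE),
}

def redact(prompt: str) -> str:
    """Replace sensitive financial identifiers with typed placeholders
    before the text is sent to any external LLM service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Student 123-45-6789 asked about a refund to card 4111 1111 1111 1111."
    print(redact(raw))
```

A redaction pass like this keeps the LLM useful for drafting responses while ensuring the vendor never receives the underlying nonpublic personal information, which is the category of data the Safeguards Rule is concerned with.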
Generative AI offers tremendous potential for enhancing higher education but must be approached with caution, especially concerning compliance with the Gramm-Leach-Bliley Act. Colleges and universities must understand the legal landscape, implement robust privacy safeguards, and ensure that their use of AI aligns with federal regulations. Failing to do so can result in significant civil and criminal penalties, tarnishing the reputation of the institution.
This article is meant to serve as a resource for understanding the potential privacy concerns surrounding the use of large language models in higher education and the associated legal protections in place. It does not, however, constitute legal advice. Laws and regulations often differ from one jurisdiction to another and are subject to change. It is advisable to seek legal counsel before making decisions based on the information provided in this article. Always consult with a qualified professional to address your particular needs and circumstances.
© 2016 - 2024 Canyon GBS LLC. All rights reserved.