How to Approach Student AI Use Policies in Higher Education

Husch Blackwell LLP

The emergence of ChatGPT and other generative artificial intelligence (AI) tools has triggered widespread debates about the propriety of their use, especially in education circles, where academic integrity and confidentiality are paramount. Recognizing the likely permanence of AI, colleges, universities, and schools are grappling with the need for clear guidelines for students on the use of AI in the academic context, leading to the creation of AI use policies, procedures, or guidelines (collectively, AI Use Policies or AUPs). Given the novelty of generative AI, there is no tested formula for these educational institution policies. Drawing on our experience in the education industry, we have identified the following topics for consideration when drafting student-facing AUPs.

Preliminary questions

To use or not to use?

First, institutions must decide the extent to which they wish to regulate AI use and how much AI use should be allowed. Institutions have set different tolerances for AI use, ranging from outright bans to encouragement of responsible use. Recognizing the job market's shift toward AI literacy (a recent federal lawsuit noted that over 80% of Fortune 500 companies use ChatGPT), many institutions are electing to express some tolerance for AI use and skill development. Further, as a practical matter, there are limits to institutions' ability to prevent student AI usage altogether, and students may be attracted to institutions that embrace generative AI to streamline tasks and minimize unnecessary work. As one commentator noted, not allowing some form of AI use is like "offering a typewriter when word processors are available."

How does an institution begin to address the governance and oversight challenges of AI?

Institutions of higher education should consider designating or forming a body to govern adoption, implementation, and revision of the AUP and AI usage. Given the rapidly evolving AI market, best practices suggest this group be composed of diverse stakeholders (potentially teachers/professors, IT staff, student leaders or community members, senior administrators, and/or industry experts). Institutions should also consider the group's size, striking a balance between being large enough to capture multiple perspectives and small enough to operate effectively.

Implementation and oversight are important, but enforcement is equally vital. Enforcing an AUP requires receiving, investigating, and remediating reports of violations. Institutions may wish to address these issues through an AI/AUP committee or—particularly with respect to already regulated activity—through preexisting enforcement mechanisms. Ideally, an AUP will designate those responsible for enforcement in a variety of contexts.

Important considerations

Are several policies better than one?

Educational institutions present a unique scenario in which AUPs may be advisable for a number of stakeholders (mainly students, faculty, and staff) and on a number of topics. With respect to stakeholders, some aspects of AI policy may apply universally (such as guidelines for AI use in the treatment of patients, which reach both students in clinical rotations and employees at academic medical centers), while others may concern only one group (such as information on the interplay of AI and student academic misconduct). Many AI policy statements may best be placed within existing policies that address already regulated conduct, such as policies on student enrollment and marketing, where AI chatbots may now be used. At the same time, institutions may also elect to adopt a standalone, overarching AUP addressing AI governance and usage, potentially including high-level institutional values and guidelines, with cross-references to context-specific policies for detail. Given the breadth of AI's potential impact on higher education, several institutions have elected an approach that offers only general AI usage guidelines at the institutional level while folding context-specific AI regulation into existing policies.

How do the idiosyncrasies of AI create challenges for the development of institutional policies?

Traditional policy drafting has focused on enduring definitions that apply to novel situations with minimal updates; however, the idiosyncrasies of AI create issues of both under- and over-inclusivity.

Underinclusive. Institutions may initially consider explicitly banning specific AI programs like ChatGPT by name; however, this approach falls short given the multitude of other AI programs in use, like Scribe for writing and AlphaCode for coding. With new AI programs launching daily, policy restrictions by named product quickly become impractical.

Overinclusive. Broadly restricting “programs which use AI” is also problematic. Such a definition would ban tools like spelling and grammar check which are traditionally allowed. Attempts to restrict “generative AI” may be similarly over-inclusive because tools like predictive text on a cellphone use generative AI without the same academic integrity concerns.

To tackle this challenge, Husch Blackwell has identified multiple potential alternatives. One option is to define the processes that AI may not perform, such as "using AI software to write initial drafts of assignments." While this avoids the need for constant policy updates naming new AI programs, clarity issues may still arise regarding which tasks fall within the defined process.

Another option is to empower individuals (such as faculty members via their syllabi) or smaller units (such as colleges or departments) to develop their own procedures or guidelines on AI usage, provided they align with broader institutional policies. Within this option, and to help students identify variations, institutions may wish to develop a selection of boilerplate language from which smaller units can choose. For example, faculty might select one of four approved AI clauses for their syllabi: (1) use prohibited, (2) use with faculty permission, (3) use with acknowledgment, or (4) use with no acknowledgment. Of course, a policy that allows use with acknowledgment (option 3 above) necessitates a standard for attribution and citation of AI-generated content, and some schools have adopted an institutional standard and included it in their AUPs. Providing flexibility can accommodate academic preferences and operational practices while also giving students more predictability regarding AI use across their academic experience.

Legal compliance and confidentiality

An AUP must also address legal compliance and confidentiality. Laws related to AI are evolving rapidly across U.S. federal, state, and other (particularly international) jurisdictions. The broad availability of generative AI tools is so new that many issues—such as whether prohibitions on AI usage implicate free speech or academic freedom obligations, or the point at which AI usage in academics becomes plagiarism prohibited in federally funded research—have not been addressed by courts and other regulators. Institutions should consider designating responsibility for periodic review of these developments to ensure that any affecting their operations are addressed.

Institutions should consider providing students—particularly those with access to sensitive information through research, campus jobs, or student leadership positions—with information about the potential legal implications of AI usage for issues such as privacy and intellectual property (whether in AUPs themselves or in related educational materials or programming). Sharing sensitive information with AI programs may violate existing laws, such as the Family Educational Rights and Privacy Act (FERPA) and laws governing the protection of consumer financial information. Moreover, inputting confidential data into AI systems may implicate intellectual property and confidentiality provisions in an institution's existing contracts. To navigate these complexities, institutions should consider the extent to which their AUPs should define the scope of permissible information sharing for students and designate a system for resolving unanswered questions.

Additionally, it is essential that the AUP not absolve individuals of their obligations under existing policies or laws. To this end, schools may consider placing responsibility on students for the creation and inappropriate dissemination of AI-generated content. This approach not only prevents individuals from using AI as a loophole to evade liability ("I didn't do it; the AI did") but also fosters awareness of the interconnection between AI use and other institutional policies.

What this means to you

The evolving AI landscape demands adaptive, informed, and tailored approaches to effectively integrating generative AI within educational settings. In crafting AUPs, schools, colleges, and universities must account for their individual circumstances: what works for a small liberal arts college of 2,000 students will not work for a large academic medical research institution of 50,000 students with healthcare and extensive research operations. Recognizing the absence of a one-size-fits-all solution, institutions should ensure AUPs align with policy goals, legal requirements, and institutional values and culture.


