Last Update: 10/13/2023
University Name | Policy Status | Policy Overview
Princeton University
Vague Policy

Does not ban the use of AI, but asks instructors to be explicit about their AI policy in the course syllabus.

Harvard University
Basic Guidance

Encourages instructors to explicitly include policies in the syllabus. Provides basic sample policies for a variety of courses.

Stanford University
Vague Policy

Gives instructors space to explore uses of generative AI tools and to set policies for their students.

Yale University
Vague Policy

Use of AI by students without proper attribution or authorization from the instructor is not allowed.

University of Pennsylvania
Basic Guidance

Instructors can incorporate AI tools into their teaching to help students develop related skills. Provides samples for instructors on how to incorporate AI tools.

California Institute of Technology
Vague Policy

Allows students to use generative AI tools only in ways that are explicitly allowed by the course instructor.

Duke University
Vague Policy

Doesn't recommend using AI detection tools:
1. The products are unreliable. Recent research from MIT on AI detection software highlights its false-positive and false-negative rates. OpenAI (the company behind ChatGPT) recently withdrew its own detection software due to its unreliability.
2. Detection software is biased against non-native speakers, as research from Stanford shows.
3. As AI changes, detection software cannot keep up.

Brown University
Vague Policy

Instructors should offer clear information about what is, and is not, allowed in their courses.

Johns Hopkins University
No policy

Northwestern University
Basic Guidance

Verbatim: "Plagiarism includes, but is not limited to, the unauthorized use of generative artificial intelligence to create content that is submitted as one's own."

Columbia University
Vague Policy

Instructors might consider ways to incorporate AI tools in their assignment design; in doing so, instructors can provide students with opportunities to practice and foster the digital literacy skills they will need for the future.

Cornell University
Vague Policy

Instructors should explicitly communicate the course's generative AI policies by clearly identifying in what situations generative AI use is prohibited or permitted.

University of Chicago
Detailed Policy

Defines academic integrity in light of generative AI.
Outlines guidelines on how to include AI policies in the syllabus, with detailed examples for instructors to follow.

University of California, Berkeley
Detailed Policy

Explains appropriate use of AI tools and lists the AI tools supported university-wide. Provides resources for instructors seeking more help.

University of California, Los Angeles
Basic Guidance

Recognizes that students use AI tools and guides instructors on using ChatGPT with their course assignments. Instructors are expected to discuss the use of AI in their classes and its ethical use.

Rice University
Vague Policy

Bans students from using AI to complete homework; allows students to use AI for studying.
"The Honor Council sent an email to all undergraduates on April 11, announcing an Honor Code amendment explicitly prohibiting the use of artificial intelligence software such as ChatGPT without proper citation. Additionally, the email clarified professors’ right to ban the use of AI software for their classes."

Dartmouth College
Vague Policy

Acknowledges the existence of generative AI tools to the students and asks instructors to articulate the expectations of students in their course syllabus in as much detail as possible. Is it acceptable to use these tools in the course, or in specific assignments? To what degree?

Vanderbilt University
Basic Guidance

Stopped using the Turnitin AI detection tool.
Encourages employees to harness the capabilities of generative AI and incorporate them into their day-to-day workflows.

University of Notre Dame
Vague Policy

Asks instructors to be specific and explicit in their syllabus and on Canvas regarding the use of AI technologies.

University of Michigan, Ann Arbor
Basic Guidance

To ensure transparency, it is critical that expectations are clearly articulated in the syllabus and continually reinforced when assignments are given.
Example Syllabi Statements are provided for instructors' use.

Instructors should align their course policies with their college’s academic misconduct policies.

Georgetown University
Basic Guidance

Instructors determine use of AI tools by individual course policies. It is, as always, the students’ responsibility to be sure that they are following the rules laid out by their professors.
Example and reasoning for use of AI tools are provided.

University of North Carolina at Chapel Hill
Basic Guidance

States that the use of generative AI in teaching should be based on the following principles:
1. AI should help you teach, not teach for you.
2. Balance quality and timeliness for grading purposes.
3. You are 100% responsible for your teaching materials.
4. The use of AI should be open and documented.
5. Adjust teaching practices to address AI use concerns.
6. Select AI tools that align with course objectives.
7. Ensure that AI use is inclusive.
8. Facilitate and encourage critical thinking.
9. Emphasize human skills.
10. Specify AI policies for your course.
11. Avoid entering confidential or personal data into AI tools.
12. Stay informed.

Carnegie Mellon University
Basic Guidance

Provides examples of possible academic integrity policies that address student use of generative AI tools.

Emory University
Vague Policy

Webinars and other resources are provided.

Legend
Detailed Policy
Outlines guidelines and detailed examples for instructors to set policies in their course syllabi
Basic Guidance
Provides generic guidance for instructors on how to clarify AI tool usage
Vague Policy
Indicates whether AI tool usage is allowed, without specific guidelines
No policy
Does not specify or mention any policies