Guidelines for AI Use in the Classroom at MRU

The following Guidelines for AI Use in the Classroom at MRU are written to support both instructors and students in navigating the complex and ever-changing landscape of AI. Each guideline is meant to give specific direction about how each principle from Mount Royal’s Framework for Responsible AI might be applied in a teaching and learning context. Please see the MRU AI Framework for further detail and a list of definitions. Because the classroom learning environment is a shared responsibility and relationship between instructors and students, the guidelines are presented with parallel language to support each group.

It is important to state explicitly that AI use by instructors and students is not assumed; rather, these guidelines are meant to help each group make responsible decisions and conduct their work ethically in relation to AI, whether they choose to use it in any specific instance or not.

Within this document, further detailed guidance and resources are provided in the form of links to additional information, some of which are still being developed and will be updated by the appropriate areas as the AI landscape evolves at MRU and beyond.

Faculty Guidelines

Student Guidelines

Guiding Principle: Human-Centred

AI should augment, not replace, human judgement, learning, creativity and pedagogy. Faculty retain academic freedom and discretion over course-based use within institutional policy and the Collective Agreement.

Guideline 1:

If choosing to use AI in their instruction or assessment, instructors are responsible for using AI in ways that assist and support them in their tasks rather than replacing or bypassing their expertise and professional judgement. A human-centred approach applies to course design, pedagogy, and assessment (both the creation of assessment tasks and in assessing student work).

If choosing to use AI in their academic work (within course expectations), students are responsible for using AI in ways that support, not bypass, their learning. For more information on how to use AI to promote learning and to avoid becoming overreliant on AI, please explore AI and Your Learning.

Guideline 2:

If choosing to use AI as part of assessments, instructors are encouraged to design assessments carefully so that students use AI to support, rather than replace, their learning. When integrating student use of AI into assessment design, include explicit instruction for students on how to use specific tools effectively and how to evaluate AI-generated output as appropriate.

In most cases, it is not appropriate to use AI to assess student work. While there may be limited exceptions, evaluation should not circumvent the professional judgement of the instructor.

Students are responsible for any AI-generated output they use in their academic work, whether submitted individually or as part of a group assignment. It is therefore crucial that students evaluate AI output for accuracy and quality before submitting any work. Information on how to evaluate and acknowledge AI output can be found on the Library’s website Guide to GenAI for Academic Work.

Guiding Principle: Transparency and Trust

A campus culture of trust is built on open communication. All meaningful uses of AI in assignments, research methods and administrative work should be disclosed in plain language.

Guideline 3:

Instructors are responsible for explicitly stating their expectations for the allowable use of AI in each course and in individual assessments within courses. Instructors are asked to communicate their expectations for accurate and transparent documentation, citation, and acknowledgement of GenAI use in assessments and assignments, where such use is allowed.

These expectations should be clearly stated in the course syllabus and on assignment sheets, made available in D2L, and reinforced in class discussions.

Recommended language and guidance for possible statements can be found on the ADC’s website on Artificial Intelligence Guidelines for Course Outlines and Assessment.

Students are responsible for meeting their instructors’ expectations for AI use. Expectations are provided in the course syllabus, on assignment sheets, in class discussions, and in D2L. Each instructor’s expectations are unique, and they may differ between assignments or assessments even within a single course.

For courses or assignments where GenAI use is permitted, students are required to clearly and properly acknowledge, cite or document their use per the expectations of their course instructor.

When in doubt, ask your instructor directly for clarification. Also explore the section on instructor expectations on the AI and Your Learning Page.

Guideline 4:

Instructors are responsible for communicating clearly and transparently to their students about their own use of AI tools in course administration, communications, or the generation of course materials. Any use of AI-generated feedback should be transparent to students, and instructors should provide students with a rationale for its use.1

If any AI detection of student work is undertaken (using institutionally approved software only), a statement must be included in the course outline as suggested by ADC's website on Sample Statements for Course Outlines to inform students in advance.

At all times, students retain the right to opt out of their original work being uploaded to such tools for the purposes of either assessment or AI detection.

For each course they take, students are responsible for reading the course outline to understand how the instructor will approach identifying AI misuse (i.e., AI use that is not approved). If students have concerns about how this approach impacts their data privacy and original work, they have the right to contact the instructor to discuss an alternative form of originality checking. Students have the right to opt out of having their work uploaded to detection tools.

Guiding Principle: Equity, Inclusion, and Accessibility

AI should be used to reduce, not amplify, bias or discrimination; AI-driven tools and AI outputs should meet accessibility standards. Use of AI tools should not unintentionally exclude individuals who may not have equal access to these technologies.

Guideline 5:

Instructors are responsible for upholding the principles of equity, inclusion, and accessibility in relation to AI and the classroom. Instructors should be aware that AI tools can reflect biases present in the tools’ training and development. More information can be found on the Library’s website under Learning With Generative AI Tools.

Additionally, information to support inclusive and accessible teaching can be found in the Creating Accessible Learning resource found on each instructor’s D2L homepage.

Students should be aware that AI tools can reflect biases present in their training and development. More information can be found on the Library’s Guide to GenAI for Academic Work.

Students should be aware of their rights related to principles of equity, inclusion, and accessibility in relation to AI and the classroom. For information on academic accommodations please see Access and Inclusion Services.

Guiding Principle: Privacy, Security and Data Protection

Only institutionally approved AI tools and platforms should be used when working with confidential, sensitive or personal data. AI systems must be selected, configured and monitored to be robust against misuse, adversarial inputs and data leakage, with comprehensive risk management assessments performed prior to deployment.

Guideline 6:

Instructors must use institutionally approved AI tools and platforms when working with sensitive, personal, or confidential data and information (i.e., anything potentially identifiable). Additional information on privacy is available through the Access and Privacy Office.

Students are responsible for understanding the privacy risks of including, uploading, or pasting personal information of themselves and others to AI tools and prompts.

When using AI within course expectations, students should choose institutionally provided AI tools, because these tools have stronger data protection.

Guideline 7:

Instructors should be aware that sharing their own original work, or that of others, with AI tools means that the input may be stored, reviewed, or used to improve a tool or service. Institutionally approved tools offer increased data protection. Instructors are responsible for using other people's intellectual property, including original and creative works, ethically and legally. See the Library’s website for more information on AI and copyright.

Students should be aware that sharing their own original work, or that of others, with AI tools means that the input may be stored, reviewed, or used to improve a tool or service. Institutionally approved tools offer increased data protection. Students are responsible for using other people's intellectual property, including original and creative works, ethically and legally. See the Library’s website for more information on AI and copyright.

Guiding Principle: Accountability and Academic Integrity

People, not tools, remain responsible for the integrity of their work and decisions. Academic integrity requires that any use of AI be authorized, verified for accuracy and properly acknowledged or cited.

Guideline 8:

Instructors are responsible for using AI in human-centred ways that align with the values of academic integrity: honesty, trust, fairness, respect, responsibility, and courage. Additional resources around academic integrity are available on the Student Community Standards website and the International Centre for Academic Integrity.

Students are responsible for using AI in ways that align with institutional and course expectations, with the values of academic integrity (honesty, trust, fairness, respect, responsibility, and courage), and with the Code of Student Academic Integrity Policy. More information can be found on the Student Community Standards website.

Guiding Principle: Mindfulness, Literacy and Continuous Improvement

AI use should be grounded in an awareness of the impacts on learning, relationships, professional responsibilities, sustainability and the environment. AI literacy for students and employees should be developed and regularly reviewed as technology and legislation evolve.

Guideline 9:

In partnership with the institution, individuals in the MRU community are responsible for their own awareness and literacy in relation to AI, and for considering how their choice to use or not use AI impacts themselves, others, and the environment, both within the MRU community and beyond. This is a shared responsibility and an ongoing and iterative process.

1Consultations between the GenAI Working Group and students indicate that, at this time, many students have concerns about both AI-generated feedback and AI-generated course materials.

These Guidelines were developed by the Generative AI Teaching and Learning Working Group, building on the previously developed Artificial Intelligence Guidelines for Course Outlines and Assessment, and were subject to consultation across the institution. More information about the working group is available.

These guidelines were last updated in May 2026.