Our approach to Generative AI

Artificial intelligence (AI) is a broad term that can refer to many different technologies. Generative AI is a specific set of tools that can generate content such as text, images, videos or music. The University’s approved generative AI tool that can be used to generate text and images is Microsoft Copilot. Staff and students can use the tool for free with their university credentials.

There has been much debate about generative AI, largely focused on assessment. At Manchester Met, we are proactively tackling the potential issues these tools bring by embracing generative AI responsibly.  This means that we are: 

  • developing students’ AI literacy so that they are aware of the capabilities and limitations of generative AI tools. 
  • providing opportunities for students to gain practical experience in using and evaluating generative AI tools as part of teaching and learning experiences.   
  • using authentic assessments that are resilient to the misuse of generative AI. 
  • explaining to students our expectations about the use of generative AI in assessment with a default position and assessment-specific guidance.  
  • implementing our academic misconduct procedures when there is clear evidence of students passing off the work of generative AI as their own. 

Recognising concerns

Just as we cannot avoid generative AI, we also cannot ignore the potential issues it raises. We are all feeling the impact of these tools, making it important to respect all perspectives with empathy and kindness.  

Much like making conscious choices when purchasing products, we want to enable informed decision-making. That way, the use of these technologies will align with our values and goals, such as inclusivity, reducing awarding gaps, decolonising the curriculum, and promoting sustainability.

The Rise AI Literacy study pack helps you get started by exploring points such as: 

  • Generative AI outputs are based on statistics, not intelligence. See Capabilities and Limitations of AI.
  • AI text generators are only trained to predict sequences of words. They do not have the functionality to check the accuracy of what they produce. See Capabilities and Limitations of AI.
  • Generative AI is not a reliable source of information. See Verifying Information.
  • The output of generative AI tools is often biased.  See Bias.
  • The pricing of generative AI tools makes access to them inequitable. Free versions have less functionality and use older models. See Choosing an AI tool.
  • The data used to train generative AI tools has been scraped from the internet, often without consent. This raises concerns about copyright and intellectual property. See Copyright.  
  • The data you input into generative AI tools might be used to train future models. See Choosing an AI tool.
  • The training and use of generative AI tools require tremendous amounts of energy, water and rare earth metals. This has a significant environmental impact. See Human and Environmental Impact.
  • The ethics of some of the companies that develop and deploy generative AI tools have been questioned. See Human and Environmental Impact.
  • Legislation and regulation often lag behind the rapid advancements in technology, including generative AI. See Legislation.

Using generative AI to support teaching and learning

Generative AI can be incredibly helpful for both students and teachers. We encourage all staff and students to explore how generative AI can be used responsibly to improve learning experiences and to support our students to be career-ready.   

Here are some starting points for integrating generative AI into teaching and learning. When using generative AI, always check that the outputs are accurate and free of bias. 

Generate activity ideas

If you want to make your sessions more interactive, but are not sure how, Microsoft Copilot can generate some initial ideas.

Generate polling questions

Vevox have an AI Quiz Tool that you can use to generate some draft polling questions.

Starting point for Padlet activities

The Create with AI option in Padlet gives you a starting point for setting up a Padlet for collaborative work.

Starting point for presentations

The Menti AI presentation maker can create a first draft of your presentation.

Critique AI outputs for accuracy

Generating some example text can be a useful prompt for discussion. Ask students to appraise the outputs and verify the information within them using more reliable sources.

Critique AI outputs for bias

Images created by generative AI can be a helpful prompt for discussions about bias. A simple activity is to ask Copilot to create a picture of someone in the industry the students are working towards.

Embracing generative AI in action

Want to see how colleagues have been embracing generative AI?

Authentic assessments

Authentic assessment approaches can reduce the risk of academic misconduct resulting from students attempting to pass off work created by generative AI as their own.   

Authentic assessment refers to the assessment of learning that is conducted through real-world tasks requiring students to demonstrate their knowledge and skills in meaningful contexts (Swaffield, 2011). Authentic assessments are designed to measure a learner’s ability to apply learning to real-world contexts.  

An authentic assessment draws upon many of the features of active learning. Our resources on active learning might be a useful starting point for assessment design. Features of authentic assessment include: 

  • clear links to the module learning outcomes 
  • a clear and transparent marking process
  • an accessible and understandable assessment brief 
  • a consideration of active learning strategies 

Assessment design may adopt an authentic approach by incorporating elements of real-world contexts, professional standards, reflections on practice and other personalised and bespoke content. For example:  

  • Using activities related to real-world tasks or scenarios. 
  • Providing an opportunity for collaboration and co-production.  
  • Encouraging opportunities for reflection on lived experiences.  
  • Adopting problem-solving techniques and activities. 

Assessment design toolkit

Guidance on designing assessment at Manchester Met.


Assessment ideas on Jisc

A postcard set to provoke discussion around assessment approaches.


Explaining expectations to students

Our default position on the use of generative AI and broader AI tools is explained in detail to students in the Are you allowed to use AI in assessments? section of the AI Literacy Rise Study Pack.

The two key principles that we want our students to work to are:   

  • Your work should always authentically represent your capabilities.  
  • You should never trust the outputs of generative AI uncritically.  

At a high level, the position is that students can use generative AI:  

  • to help them understand content, but they must check this against other sources
  • as part of the planning process (for example to get ideas, to break down tasks, to explore different structures)  
  • to find information but should not consider it a reliable source
  • to provide feedback on work, but must maintain authorship by making a decision about each change suggested

They cannot use it to create the assessment itself. 

Our Academic Misconduct Policy states that it is not an offence to use generative AI when it “has been expressly authorised as part of the assessment component”.

The extent to which students are permitted to use generative AI is clearly explained in all assessment briefs. 

Most assessments will take the default position, but students are advised to check each assessment brief for specific instructions. There may be instances where the use of generative AI is not permitted, but this will only be where there is a clear reason, such as meeting the requirements of a professional, statutory and regulatory body.

Where an assessment is categorised as permitted, students must follow the guidance set out in the default position and retain authorship of the work so that it does not constitute academic misconduct. Module teams can adjust the default position where there is a pedagogical reason, but this must be clearly explained in the assignment brief.

Our approach to managing submitted work

Detection software

A reliable detector for AI-generated text is not available 

There is currently no suitable technology for reliably detecting AI-generated text.

Turnitin launched an AI detector in April 2023, claiming a false positive rate of less than 1%; in June 2023 it revised this figure to 4%. This means that, on average, around 4 in every 100 human-written submissions would be incorrectly flagged as written by AI.

Concerns have also been raised about these tools showing bias against non-native English writers. Like the overwhelming majority of UK higher education institutions, Manchester Met has opted out of enabling this software as a means of detecting generative AI for the foreseeable future. 

Colleagues must not upload student work into an online AI detection service. These are not reliable and the data uploaded is not secure. 

Mitigating the risks

Focus on reducing the risk of academic misconduct 

The steps outlined in this guidance are intended to reduce the risk of academic misconduct by:  

  • Developing students’ AI literacy so that they are aware of the capabilities and limitations of generative AI tools.   
  • Providing opportunities for students to gain practical experience in using and evaluating generative AI tools as part of teaching and learning experiences.   
  • Designing authentic assessments that are resilient to the misuse of generative AI.   

However, if a student attempts to pass off the output of generative AI as their own work, this is academic misconduct as set out in the Academic Misconduct Policy.

The academic misconduct and integrity intranet page provides further information about what to do if you suspect a student of academic misconduct. 

Appropriate evidence

Appropriate evidence of academic misconduct with generative AI 

Deciding whether academic misconduct has taken place is ultimately a matter of academic judgement, supported by available evidence.

See 3.8 in the Academic Misconduct Policy: “All cases will be considered on the basis of evidence. The standard of proof at any stage of the investigation is that the University is satisfied that, on the basis of the evidence available, that academic misconduct is likely to have occurred.” 

Determining with certainty that generative AI has been used can be challenging, particularly due to the rapid evolution of this technology. Example indications of inappropriate use of generative AI include:

  • wording like “As an AI language model I cannot” in the assessment  
  • direct admission from a student

You might also notice signs of possible generative AI use when marking a piece of work, such as references that do not exist, incorrect context (for example, apple treated as a fruit rather than a company) or vaguely worded answers.

While these might point to the misuse of generative AI, they might equally point to the student requiring support with referencing or academic writing.

A sudden increase in the quality of work or the mark awarded for an individual student might be a result of the student acting on feedback and seeking additional help, rather than evidence of misconduct.

Colleagues are advised to email our Assessment Management team at [email protected] for queries about the policy. The team can give guidance on the academic misconduct process and possible penalties. They cannot make decisions on whether academic misconduct has taken place, as that is an academic judgement. A mark should be provided for the work if possible, as explained in Section 5.4 of the Academic Misconduct Policy.

If no evidence, mark work at face value

In the absence of evidence, mark work at face value 

If there is not enough evidence to make a decision on academic misconduct involving generative AI, it is advisable to assess and grade the submitted work based on its own merits.   

The University offers a comprehensive package of study skills support and this should be the first place to direct students to. It includes courses, workshops, online and one-to-one support. The advantage of this approach is that students will develop skills that will equip them for their future study and work.

Use of generative AI in assessment process

Using generative AI to support the assessment process 

It might be tempting to use generative AI to mark students’ work.

However, just as our default position for students emphasises the need to retain authorship, when marking you are entirely responsible for the feedback and grades you give.

Microsoft Copilot does not have the functionality to base feedback or grades on the accuracy and quality of the work. Colleagues must not upload student work into any tool that is not institutionally provided, including AI detectors.

Marking involves making a series of decisions that have a very real impact on our students. Receiving marks and feedback can be a time of great anxiety, so it is more important than ever that students are supported by humans.   

However, there are some legitimate ways to use generative AI to support the assessment process. Remember to check the output to make sure that it is accurate and free of bias.  

  • Create an example answer.
  • Compile a set of assessment FAQs.   
  • Check the tone of a sample of feedback.  
  • Check how well generative AI can do the assessment.