What it means to redesign assessments with generative AI

For all assessments outside of the controlled Channel 1 setting, we need to engage openly, but also critically and ethically, with Gen AI as a part of the student learning experience. It can, however, be challenging to think about the role that Gen AI can play in contributing to student learning and assessment processes.

To help with this, AUT represents the two assessment channels as the two channels of a large river, drawing on the structure of an Aotearoa New Zealand braided river. Channel 2 is split into six smaller channels, called braids, which represent different parts of an assessment where Gen AI could be involved. Each braid helps staff consider when, where and how students could engage with AI during a particular assessment task.

Braids in Channel 2

The six braids each represent the role Gen AI can play in these areas:

  1. Starting an assessment task
  2. Engaging with literature and course resources
  3. Supporting analysis
  4. Generating content
  5. Acting as an editor
  6. Acting as a feedback coach

The braids in Channel 2 are based on the menu concept from the University of Sydney.

The braided river approach is intended to provide a grounded and contextualised visualisation to support staff in navigating the complexities involved in engaging with AI and assessment. This greater understanding will help give students more consistency and clarity about the appropriate use of AI in each of their assessments.

The braids are a tool to help course and programme teams to design assessments that sit authentically in Channel 2. They can guide teams to evaluate how Gen AI can be used to:

  • Support formative assessment tasks to help students better understand the requirements of summative assessment; and/or
  • Support the completion of summative assessment, allowing the work of the student and the work of the Gen AI (in terms of contribution to the learning outcomes) to be clearly acknowledged

If an assessment is designed with two braids explicitly selected to contribute to a task, students may still decide – and are free – to use Gen AI in roles that sit within the four remaining braids. The assessment will have been designed in such a way that the use of Gen AI in the unselected braids will not contribute to the outcomes on which the task will be judged. While the use of Gen AI in these unselected braids may have value to the individual learner, it will not have value in terms of the assessment of learning.

The braids are not a tool for staff to direct the permitted or non-permitted use of Gen AI.

Stating the expected use of Gen AI within an assessment is not sufficient – if this were the case, academic integrity alone would be sufficient to ensure assessment security.

We need to move beyond setting expectations and redesign assessments to engage with AI and sit authentically within Channel 2.

The table below summarises the six braids, each representing a part of an assessment task. ‘Quick guides' provide examples to help staff consider engagement with AI within selected braids.

| Braid | Role of Gen AI | Contribution Gen AI could make in this role | Broad outcomes that relate to this role |
|---|---|---|---|
| 1 | Start an assessment task | Suggesting structure, brainstorming ideas, breaking down a complex problem, unpacking assessment questions | Solving problems / developing a plan |
| 2 | Engage with literature and course resources | Suggesting search terms, performing searches, summarising literature, identifying methodologies, fixing reference lists | Assessing and managing information |
| 3 | Support analysis | Performing analyses of data sets or text; suggesting counterarguments | Solving problems |
| 4 | Generate content | Making images, video or audio; writing code; making slide decks; writing text | Producing work |
| 5 | Act as an editor | Editing tone; improving clarity and readability; fixing grammar; helping with concision; debugging program code | Knowledge and understanding |
| 6 | Act as a feedback coach | Working with assessment rubric criteria to help identify strengths and areas for improvement; providing additional resources | Thinking critically and making judgements |

Programme-aligned assessment design is a central feature of AUT’s assessment principles, policy and procedures. The term ‘programme-aligned' recognises the importance of thinking about assessment design beyond an individual course to consider varied elements that align to create assessments for learning and assurance.

Staff teams need to consider, from a student perspective, how their courses contribute to the programme’s outcomes, identifying key learning thresholds and transition points and planning how feedback should be used to support progress between stages.

Different faculties may define the details of their programme-aligned assessment in slightly different ways. This pragmatic approach will vary depending on several factors, including the pathways and groupings that students experience across or within a programme, and any external accreditation requirements.

In relation to Gen AI, a programme-aligned approach means that staff who contribute to courses in a programme will need to work collaboratively to identify the key assessment points where the assessment of learning in secure settings is required (Channel 1) and where the use of Gen AI is permitted (Channel 2). These decisions will be influenced by several factors. For example, an assessment might be placed in Channel 1 if it measures course outcomes that are critical to assess in the absence of Gen AI – because it covers a threshold concept, is a requirement of a professional body, or involves judgement of a performance (such as a clinical observation).

Emphasising the processes of learning

One significant development for teaching and learning in the age of Gen AI is a redoubled emphasis on students’ processes of learning, not just the product of their learning.

The product of learning consists of the work that a student submits for assessment, such as an essay, report, case study or quiz. Grading the product alone creates risk for assurance of learning, given that Gen AI can produce plausible or even excellent responses to many Channel 2 assessment tasks. This risk is reduced if students are required to provide evidence of their learning processes as they formulate and develop their assessment work, particularly where the use of Gen AI has been built into an in-class learning activity.

In a general sense, the term “learning process” refers both to the concrete steps that students took to complete their assessment and to the cognitive changes that students underwent along the way. Evidence of learning steps could include:

  • Artefacts such as diagrams of problem analysis or brainstorming
  • Annotations
  • Responses to in-class discussion or lectures
  • Documentation of Gen AI prompts and responses
  • Decision trees
  • Peer review of drafts
  • Minutes of group meetings

Student reflections on their processes at multiple points of learning can provide evidence that they are building metacognitive skills – the ability to be self-aware about their own needs and capabilities – and that they are developing their evaluative judgement (Boud, Ajjawi, Dawson & Tai, 2018).

In practical terms, an emphasis on process means that workshops and tutorials focus less on delivering course content, and more on guiding students to work with the content, and tools including Gen AI where allowed, through hands-on tasks to produce concrete, verifiable outputs (even if in rough or draft form) that can be submitted along with the final assessment product. A process-oriented approach aligns with other methods of active learning that have a strong evidence base of effectiveness in higher education, such as inquiry-based learning and activity-centred analysis and design (Lovett et al., 2023; Spronken-Smith & Walker, 2010; Kogan & Laursen, 2014; Goodyear, Carvalho, & Yeoman, 2021).

Process-based approaches to learning hold many other advantages beyond ensuring that students develop understandings of academic integrity and ethical practice in relation to the use of Gen AI. For example, they prepare students for challenges – whether in work or in life more generally – that require complex problem solving, persistence, testing of ideas and transparency of process.

The self-awareness that emerges through both structured and unstructured process can also help students recognise and manage some of the more difficult feelings that arise when learning, such as discomfort, confusion and insecurity. For teachers, regular process-oriented tasks can “support a better understanding of students’ sense-making processes” (Lodge et al., 2023, p. 4) and signal whether teaching methods need to be adapted accordingly. This provides formative feedback not just to students, but to teachers themselves.

References and further reading

  • Boud, D., Ajjawi, R., Dawson, P., & Tai, J. (Eds.). (2018). Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work (1st ed.). Routledge. https://doi.org/10.4324/9781315109251
  • Goodyear, P., Carvalho, L., & Yeoman, P. (2021). Activity-centred analysis and design (ACAD): Core purposes, distinctive qualities and current developments. Educational Technology Research and Development, 69, 445–464. https://doi.org/10.1007/s11423-020-09926-7
  • Kogan, M., & Laursen, S. L. (2014). Assessing long-term effects of inquiry-based learning: A case study from college mathematics. Innovative Higher Education, 39, 183–199. https://doi.org/10.1007/s10755-013-9269-9
  • Lodge, J.M., Howard, S., Bearman, M., Dawson, P., & Associates. (2023). Assessment reform for the age of Artificial Intelligence. Tertiary Education Quality and Standards Agency [Australia].
  • Lovett, M.C., Bridges, M.W., DiPietro, M., Ambrose, S.A., & Norman, M.K. (2023). How learning works: 8 research-based principles for smart teaching (2nd ed). Jossey-Bass.
  • Spronken-Smith, R., & Walker, R. (2010). Can inquiry-based learning strengthen the links between teaching and disciplinary practice? Studies in Higher Education, 35(6), 723–740.

If staff haven’t considered assessment in relation to Gen AI, they can start by assessing the overall risk of Gen AI being able to complete a current assessment task.

At AUT, LTED has created a structured rubric to guide staff through this risk analysis, followed by practical guidelines on how to run the assessment through Copilot and evaluate the output. There is also some guidance on mitigating the effects of Gen AI on existing assessment tasks.

Reviewing and adapting assessments to build in some resilience is a short-term measure, and most adapted but non-secure assessments will still not sit authentically within Channel 2. However, it can be a useful exercise for deepening understanding of the capabilities and limitations of Gen AI in relation to assessment.

When we think about the role that Gen AI can play in contributing to student learning and assessment processes, we also need to consider creating environments where students are encouraged to use this emerging technology with both a sense of agency and with integrity (Principles # and #). In particular, engaging with Gen AI in assessment raises interesting and challenging questions for universities about what it means for students to act with integrity in an academic sense - academic integrity.

There is an emerging body of work considering the impact that a ubiquitous and relatively relational technology such as Gen AI is having on our long-standing definitions of academic integrity (Eaton, 2023). While this is not the place for a detailed discussion of that topic, we can apply some thinking about academic integrity when we consider redesigning assessments for Gen AI - indeed, the new procedures direct us to do this - in that the use of AI should sit within the University's guidelines for academic integrity.

Starting points for considering academic integrity in relation to Gen AI and assessments:

  1. AUT's current Academic Integrity Guidelines.
  2. Guidance on redesigning assessments with a view to encouraging students to engage with Gen AI in a way that best aligns with a broad definition of acting with integrity, criticality, respect and manaaki in an academic setting.

Engaging with both is the ideal, as this provides a holistic, positive and realistic view of student use of Gen AI - and of what this means both for academic integrity and for messaging to students about AI use, attribution and responsibility for each assessment.


More about generative AI and assessment at AUT

AUT takes a whole-of-institution approach to the systematic integration of generative AI (Gen AI) into assessment design. Learn more about what drives our work.


Assessment design

AUT aims to equip students with knowledge and skills to take them beyond their time at university. A key driver in achieving this objective are AUT’s assessment principles, policy and procedures.


LTED | Office of Learning, Teaching and Educational Design

LTED is a central AUT service with a team of experienced and dedicated learning designers, educational video specialists and learning technologists.
