University of Newcastle Assessment Framework

The University of Newcastle Assessment Framework is a bold step to modernise our assessment in an era of AI. Developed collaboratively across Colleges and Learning Design and Teaching Innovation teams, the Framework ensures integrity, compliance, and trust in our awards while embracing authentic, future-focused learning.

Its two-lane approach balances secure assessment with open, AI-enabled learning, giving flexibility for disciplines and innovation. By 2026, all courses will implement secure strategies with deployment supported by resources, workshops, and consultations.

Please explore the Framework, engage with the resources, and take advantage of these opportunities as you plan assessment changes for 2026 and beyond.

— Professor Belinda Tynan
Senior Deputy Vice-Chancellor

Professor Steven Warburton
Pro Vice-Chancellor Education Innovation
Click here to access the full framework

Why do we need a Framework?

Under the Higher Education Standards Framework (Threshold Standards) 2021, higher education institutions are required to demonstrate that student achievement of the course and program learning outcomes is credibly assessed.

The methods of assessment used should be capable of confirming that all specified learning outcomes are achieved and that grades awarded reflect the level of student attainment.

Beyond this regulatory compliance imperative, students, employers, professional bodies, and regulators must be able to trust that each graduate has genuinely met the required learning outcomes. By protecting this credibility, we preserve the lifelong value of our awards and uphold the University’s standing for excellence.

Where can I get help?

Learning Design and Teaching Innovation will build on the GenAI-focused support already offered by delivering assessment framework implementation workshops to all Schools and offering one-on-one learning design consultations for Course Coordinators.

Staff are strongly encouraged to attend a framework implementation workshop before booking a learning design consultation. Consultations will be prioritised based on the term in which the course is being offered.

Assessment Framework Implementation Workshop

Drop-in workshops are available from 12:00pm to 12:30pm daily, Monday 1st December to Friday 12th December. Join via the link above or with meeting ID 850 9266 8597.

If you would like to organise a discipline-specific session, please email ed-ldti@newcastle.edu.au.

Book an Assessment Consultation with an LDTI Learning Designer

Click the link above to book a one-on-one learning design and assessment consultation with Learning Designers from LDTI.

What are we doing?

At the heart of the Framework is a “two-lane” approach to assessment, which couples secure assessment of learning with open assessment as and for learning. The open lane provides opportunities for students to work appropriately with each other and with AI in ways that are aligned with discipline norms and industry expectations.

Secure Assessment

These are supervised assessments that facilitate observable attainment of program/course learning outcomes, verifying that students have demonstrated the required knowledge and skills.

A variety of assessment tasks can be delivered in ways that provide this security.

Open Assessment

There is a growing sector consensus that “unsupervised assessments are no longer able to assure attainment of learning outcomes” (Liu & Bates, 2025).

Any assessment task that is not delivered in a secure, supervised fashion should be considered “open” and be delivered with the assumption that students may engage with GenAI to complete the task.

What is needed and by when?

Secure assessment can be implemented at the course level or at the program level. Unless otherwise approved, the minimum expectation is that course-level security strategies will be in place for 2026.

For 2026:

All courses must include:

  • at least one secure assessment task with a minimum weighting of 30% that is a compulsory component (hurdle)

OR

  • secure assessment tasks within a course that have a total minimum weighting of 50% (comprised of one or more assessments)

Colleges may elect to implement minimum secure assessment thresholds (over and above the minimum requirements outlined above) including differentiated minimums across different year levels.

The following timelines apply to the implementation of assessment security strategies in all coursework programs offered by the University:

  • January 2026: All Trimester and Semester 1 2026 courses must comply with the minimum assessment security provisions described above.
  • April 2026: All Midyear session and Trimester 2 courses must comply with the minimum assessment security provisions described above.
  • July 2026: All Trimester 3 and Semester 2 2026 courses must comply with the minimum assessment security provisions described above.

These initial requirements are focused on the immediate safeguarding of the integrity of our awards. As the understanding and impact of emerging technologies continues to evolve, the University will continue to explore and implement assessment reform beyond the minimum expectations outlined above. This will include a focus on modern assessment strategies which aim to uncover “evidence as to whether or not the work of learning has occurred” (Ellis & Lodge, 2024).

How can I implement secure assessments?

Secure tasks are supervised assessments that facilitate the observable attainment of program/course learning outcomes and verify that students have demonstrated the required knowledge and skills.

The University of Newcastle Assessment Framework includes 5 broad assessment task types that can be adapted to deliver assessment security within the specific contexts of courses.

In Term Supervised Task

Activities conducted in-term under supervised conditions may take multiple forms. These may include practical demonstrations, lab activities, completion of creative works, tests/quizzes, etc.

Many existing assessment tasks can be adapted for completion under supervised conditions.

Examples and Toolkits:

  • Rachel Birch, Leanne Fray and Michelle Heaney (Literacy team, School of Education) - Check-in oral assessments
  • Dr Andrew Keppert and Professor Natalie Thamwattana from the School of Information and Physical Sciences (Mathematics) - Weekly oral quiz
  • Toolkit coming soon
Live Interactive Performance

A live question and answer session during/following a live (in person or online) performance, presentation, placement or submission of an artefact provides an excellent opportunity for students to demonstrate their learning. This may include formally structured oral assessments, in class debates, Q&A following a presentation, conference style presentations, or an oral defence of a submitted piece of work.

Examples and Toolkits

Formal Exams

Classifying an assessment as an exam triggers specific policies and procedures, including the requirement to schedule it within the official exam period. These requirements are outlined in Section 4 of the Course Assessment and Grading Manual.

Exams may be conducted in person or online under supervised conditions, and may involve written, practical, or oral components.

Heavy reliance on exams as a secure assessment type does not provide a supportive educational experience for our students; an extensive body of research supports the move away from high-stakes final exams.

Placement, Internship, or Supervision

Observed performance in a Work Integrated Learning (WIL) activity, placement, practicum or internship setting that demonstrates student attainment of course learning outcomes. WIL is well-supported and already integrated into all undergraduate degrees at the University of Newcastle through the Career-ready Placement program. These educational activities integrate theoretical learning with practical application in workplace and industry settings, providing authentic assessment opportunities that encourage good academic behaviour.

Competency Based Grading

Competency based grading strengthens the assurance of learning outcomes by making demonstrated mastery of skills and/or knowledge the non-negotiable standard for success. The focus on mastery, rather than numerical grades and ranking, can reduce competitive pressures and increase student motivation to engage in good academic behaviours.

In practice, this might look like creating a competency-based grading scheme such as:

  • Compulsory Hurdle Requirements – competency must be demonstrated in all assessment tasks within the course to meet the course learning outcomes.
  • Use of ungraded pass – students who demonstrate competency in all assessment tasks will be awarded an ungraded pass (UP) (refer Grading Scales and Administrative Codes Appendix to the Course Management and Assessment Manual).
  • Mandatory Formative Feedback – students who do not meet competency standards on a task receive effective and timely feedback to grow.
  • Resubmission opportunities – all assessment tasks must include an option for students to resubmit with sufficient time to reflect on and incorporate feedback.

Find more examples of the secure assessment types in practice from across the global higher education sector at GenAI and Assessment Examples.

UON Assessment Framework - FAQs

The “two-lane” approach has been developed by the University of Sydney and is being adopted and adapted by many institutions.

The first lane (secure assessment) is about integrity, ensuring that students can demonstrate their attainment of program/course learning outcomes by verifying they have the required knowledge and/or skills to graduate. This can be done by students participating in observed assessment tasks such as in term supervised tasks, live interactive performances, placement, internships or supervision and formal exams.

The second lane (open assessment) is about relevance, ensuring that students have a well-rounded educational experience and become graduates that the community and industry are interested in and supportive of. This can be done by ensuring that experiential learning is constructively aligned and students can engage in authentic learning and assessment. This lane is about the student learning journey and inspiring students to do good work and engage in the best way.

Lane 1 (Integrity) vs Lane 2 (Relevance)

  • Role of assessment. Lane 1: Assessment of learning (summative). Lane 2: Assessment for and as learning (formative).
  • Level of operation. Lane 1: Mainly at program level. Lane 2: Mainly at unit level.
  • Assessment security. Lane 1: Secured, invigilated. Lane 2: Open, ‘unsecured’.
  • Role of Generative AI. Lane 1: May or may not be used by the examiner. Lane 2: Where relevant, use of GenAI scaffolded and supported.
  • TEQSA alignment. Lane 1: Principle 2 – Forming trustworthy judgements about student learning in a time of AI requires multiple, inclusive and contextualised approaches to assessment. Lane 2: Principle 1 – Assessment and learning experiences equip students to participate ethically and actively in a society where AI is ubiquitous.
  • Examples. Lane 1: In-person interactive orals, viva voces, simulations, in-class assessment and skill development, tests and exams. Lane 2: Using GenAI to provoke reflection, suggest structure, brainstorm ideas, summarise literature, generate content, suggest counterarguments, improve clarity, provide formative feedback, etc.

Adapted from Malecka, B., & Liu, D. (2025, October 24). Student Voice and AI in Higher Education: Looking back to look forward. ACHIEVE Symposium, Newcastle, Australia.

Examples of the application of two-lane assessment can be seen at:

With the rapid advancement of GenAI technologies, “unsupervised assessments are no longer able to assure attainment of learning outcomes” (Liu & Bates, 2025). In unobserved (i.e. “take home”) assessment, attempts to enforce limits on GenAI use are unfeasible, and may contribute to increases in academic workload while also being detrimental to the emotional wellbeing of both teachers and students (Corbin, Dawson, Nicola-Richmond, & Partridge, 2025).

Therefore, any assessment task that is not delivered in a secure, supervised fashion should be considered “open” and be delivered with the assumption that students may engage with GenAI to complete the task.

Such open, formative assessments can allow educators to focus attention on the process of learning and on process-based assessment strategies that uncover “evidence as to whether or not the work of learning has occurred” (Ellis & Lodge, 2024). Open assessment for learning can help develop students’ discipline-specific knowledge, skills and mindsets. Carefully crafted tasks, which clearly scaffold and guide the appropriate use of every relevant resource, can assist in equipping “students to participate ethically and actively in a society where AI is ubiquitous” (Lodge, Howard, Bearman, Dawson, & Associates, 2023).

Through well-designed open assessments, students can be provided with opportunities to work appropriately with each other and AI in ways which are aligned with discipline norms and industry expectations. Examples and case studies of open assessment in use can be seen at GenAI and Assessment: Examples.

Centrally supported, formal and in-term invigilated exams can form part of the overall approach to assessment security. However, the use of a high-stakes formal exam should be minimised wherever possible. Academic staff are encouraged to consider alternative assessment types that are more appropriate for the intended learning outcomes of their course.

The use of invigilated exams solely as a security measure, without careful alignment to the learning outcomes, can compromise assessment validity. As Dawson et al. (2024) note, “Attempts to improve validity by addressing cheating can sometimes make validity worse”. Similar sentiments are echoed in the September 2025 TEQSA publication Enacting assessment reform in a time of artificial intelligence, which notes:

“Seeking ease and familiarity, individuals and teams may opt for familiar, secure assessment formats such as invigilated time-limited tests and exams.

This may reduce assessment variety and authenticity, thereby diminishing the value of assessment in promoting learning, as well as exacerbating existing inequities.”

If implementing formal exams, please ensure you review the requirements outlined in the Course Assessment and Grading Manual.

Heads of Schools can approve the use of an invigilated exam, with this approval sent to exams@newcastle.edu.au.

The University of Newcastle Assessment Framework mandates minimum secure assessment to be implemented across all courses in a program unless a program level approach has been endorsed. Refer to Section 4.2 of the Framework.

If you wish to investigate program level assessment, please contact your College Assistant Dean Education.

Yes.

A secure assessment verifies that students have demonstrated the required knowledge and skills and facilitates the observable attainment of program/course learning outcomes. Depending on the learning outcomes, it may be entirely appropriate for students to engage with GenAI.

If the learning outcomes permit, secure assessments may involve students engaging with GenAI in specific ways, ranging from AI-assisted idea generation to the full utilisation of GenAI for task completion (see Perkins, Furze, Roe, & MacVaugh, 2024).

Where GenAI is permitted in a secure assessment, educators must clearly communicate how it can be used and ensure that the assessment is conducted in a way that allows adherence to these specifics to be validated.

Data Security, Education, Detection, Governance and Assessment Design surround the culture and value of Academic Integrity.

Assessment design is only one element of the institution's academic integrity ecosystem. Other elements include:

  • Governance, Policies, Guidelines and Plans
  • Data Governance and Security
  • Education and Awareness
  • Detection, Investigation, and Response

In addition to secure tasks described above, there are indirect or complementary strategies that can be explored to support verifiable demonstration of learning outcomes. Examples include:

  • Revision of learning outcomes to incorporate use of GenAI: Revising learning outcomes to explicitly include legitimate GenAI use clarifies expectations and reduces the ambiguity that often leads to academic-integrity breaches, while promoting engagement with GenAI in ways that reflect industry expectations.
  • Portfolio assessment: Creation and collation of work over time to demonstrate student learning progression, skills, and achievements, providing a holistic view of their development.
  • Semester-long projects: Extended assignments that span an entire academic term, enabling students to engage in in-depth research, creative work, or practical application of course content while offering multiple opportunities for both secure assessment and feedback.
  • Conditional Grading: Student marks in open assessment tasks are dependent on evidence of learning in related secure assessments.
  • Highly contextualised questions: Tasks based on very specific course material, or that involve authentic application to novel real-world situations, may be more difficult to complete using GenAI.
  • Use of Cadmus platform: Cadmus is a useful tool for scaffolding and understanding how student work is constructed (when students work in the platform), and can support assurance by providing visibility of the process the student has completed to construct their assessment.
  • Focus on constructive alignment: Ensuring the assessment type and design align to the specific learning outcomes is a key strategy in assuring learning. Additionally, when course content clearly prepares students for assessments, they are more likely to engage appropriately and feel less pressure to misuse generative AI.

Not always.

Provisions relating to course revisions are outlined in Section 6 of the Course Design and Management Manual. For example, changing an assessment type or weighting would require a course revision. However, existing assessment tasks can be deemed secure (supervised assessment that facilitates observable attainment of program/course learning outcomes, verifying that students have demonstrated the required knowledge and skills) by altering the way the task is delivered.

The following are some instances where a revision would be required.

Change: Adding a compulsory component
Policy Information: Course Design and Management Manual Table 9 School Level Revisions “Approve compulsory course requirements, including placement and WHS requirements, compulsory assessments, or compulsory attendance requirements.”

Change: Changing the weighting of an assessment task
Policy Information: Course Design and Management Manual Table 9 School Level Revisions “Amend the assessment weighting, types, or methods of assessment.”

Change: Change the type of assessment task (e.g. change an essay to an exam)
Policy Information: Course Design and Management Manual Table 9 School Level Revisions “Amend the assessment weighting, types, or methods of assessment.”

Change: Add a 3-hour exam
Policy Information: Course Design and Management Manual Table 11 University Level Revisions “Approve a formal examination within a course to have a duration of 3 hours.”

The Academic Integrity, Artificial Intelligence, and Standards Working Group (AASWG) shares monthly all staff updates via The Loop. Previous updates are available here. The group also provides an update to each meeting of the University Teaching and Learning Committee.

The AI Community of Practice is a platform for open discussion, available to all staff. LDTI contributes a weekly “wrap up” post each Friday morning, and this is a great way to get a snapshot of latest updates and news.

The GenAI: Frequently Asked Questions page and Library Generative AI Tools guide provide a launch pad to access the wealth of information available.

Additional information and resources are available via:

Effective GenAI practice grows through open exchange of insights among colleagues, universities, professional bodies and industry, and through focused dialogue within each academic field to address its unique standards and challenges. By sharing knowledge at all levels, we can refine our strategies and stay aligned with emerging expectations.

More than 400 different sessions and events have been shared through AASWG channels over the last 2 years. Keep an eye on the events page for opportunities to hear from colleagues across the higher education sector and beyond.

Online learning can provide the flexibility required by our students as they balance multiple competing priorities. However, online unsupervised assessment (which exists in both online and on campus courses) presents unique security challenges.

Some approaches that can assist in the online space:

1. Consider the mode of delivery of your course.

In scheduled online courses, students interact with content, peers, and instructors via timetabled online sessions. Because online participation is scheduled, students can plan accordingly. These scheduled sessions can be used to run secure assessment tasks such as live interactive performances.

Similarly, in online with intensive courses, students come together at timetabled in-person sessions during the term. These in-person sessions are valuable opportunities to conduct secure in-person assessments. If these include full days, opportunities for innovative and authentic assessment (e.g. student showcase or conference-style events) may arise.

Further information on Modes of Delivery for Courses is included in Table 7 of the Course Design and Management Manual.

2. Coupling artefacts with live interactive performance

A written task (e.g. essay, report, etc.) completed under unsupervised conditions is not a secure assessment task. However, it is possible to make judgements about both the authenticity of the student's work in that artefact and the student's level of understanding of the related concepts by pairing submission of the work with a live Q&A session in which the student is asked questions specific to their submission.

It is also possible to use GenAI to support efficient creation of individualised questions, as outlined in this example:

3. Cadmus

Cadmus is an online assessment platform, accessed through the Canvas LMS, that provides visibility into student learning throughout the assessment process.

While the use of Cadmus for unsupervised online tasks does not ensure assessment security, it can support other initiatives to provide some level of assurance and assistance in investigations by providing visibility of the process the student has completed to construct their assessment.

4. Proctoring Solutions

The University is currently exploring proctoring solutions for potential use in 2026.

To support implementation of secure assessment in online courses, the University is pursuing a suite of additional initiatives for use in 2026. Technology to support securing online courses and programs will be identified.

The aim is not to impose a single solution, but to provide a clear suite of evidence-based options that the University and Colleges can adopt according to assessment risk, disciplinary context, curriculum design, and operational capacity.

Good assessment (whether individual or in groups) accurately reflects a student's knowledge and skills. Assessment tasks should be chosen to capture the right types of information, under the right conditions, according to the outcomes being assessed.

Including a group contract or charter can be helpful – refer to the LDTI Group Work Teaching Resource. Assessment information in relation to group work is outlined in Part B of the Course Assessment and Grading Manual. Any group task should be designed in a way that provides the opportunity to verify the attainment of learning outcomes by all members.

A potential approach to group work in secure assessment is to couple submission of a traditional group artefact (report, project, presentation, etc.) with a live interactive performance element. Members may participate in that activity as a group, but each individual should be asked to demonstrate their level of achievement against the outcomes.

In Leveraging Oral Assessments to Enhance Learning and Integrity in Teaching Data Analytics, staff from the University of Melbourne discuss how they have conducted group interactive oral assessments:

  • Their assessment began with a written report (75% of the mark) completed in groups of 3–4 students.
  • 15-minute oral Q&A sessions were held with each group. Individual members were asked specific questions.
  • GenAI was used to assist with drafting specific individualised questions for each group member.

Yes.

It is fine to use MCQ and short-answer style exams and quizzes. The requirement to implement secure assessments does not dictate the format of the assessment or the type of questions used. Rather, assessment security depends on the conditions under which the assessment is conducted.

When carefully designed, with consideration of appropriate constructive alignment, MCQs can be an effective way of determining student attainment of some learning outcomes.

The University is currently exploring proctoring solutions for potential use in 2026, and this solution will support securing assessment (MCQ and others) in the online space. We are also investigating lock down browser technologies to support securing computer based assessment (including MCQs) in face-to-face courses.

No.

Re-attempts are not mandated in University of Newcastle Policy. Refer to the Course Assessment and Grading Manual for further information on designing and implementing assessment items, and grading courses.

Assessment in research-focused Honours courses is often heavily weighted towards written work that is completed under unsupervised conditions. While this is obviously a vital part of Honours courses, it also poses unique assessment security challenges, given that the creation of written artefacts is a particular strength of frontier large language models.

Two potential approaches to addressing this are outlined below.

Current Assessments (example)

  • Ass 1: Research Plan/Proposal 20%
  • Ass 2: Research Report Part A 30%
  • Ass 3: Research Report Part B 40%
  • Ass 4: Final seminar/presentation 10%

In this example course, 90% (Ass 1, 2, 3) of the assessment is made up of written work completed without any level of supervision (i.e. not secure).

Option 1: Increase weighting of Assessment 4
Adjusting assessment weightings to ensure that Ass 4 is 30%, and making that task a compulsory (hurdle) requirement, would meet the requirements of the Assessment Framework.

Adding a compulsory requirement and changing assessment weightings would require a course revision (Course Design and Management Manual Table 9 School Level Revisions).

Option 2: Add live interactive performance elements to existing assessments
Adding a short interactive Q&A element to existing assessments (1 to 3) would offer additional assurance of learning and meet the requirements of the assessment framework (aggregated secure assessment above 50%).

This can be achieved without a course revision. Existing assessment tasks (weighting and type) would remain as is in the curriculum management system. The specific assessment requirements (e.g. due dates, and requirement to engage in a conversation about the written submission) can be added to the Course outline and Canvas assignment instructions. Rubrics can be updated to include marks related to the interactive oral elements.

Learning Outcomes – Constructive alignment

Increased discussion-based oral elements are aligned with learning outcomes frequently included in Honours programs and courses. Example outcomes from UON Honours programs and courses are included below.

Example Program Learning Outcomes

  • The ability to communicate a convincing and reasoned scientific argument at a level and style appropriate to the audience and to report scientific findings in an oral and substantial written format.
  • Demonstrate advanced oral communication skills.
  • Communication: The advanced capacity to communicate effectively across written, oral, visual, and digital forms.
  • Communicate effectively orally to varied audiences.
  • Effective oral and/or written communication skills in professional and lay domains.

Example Course Learning Outcomes

  • Effectively communicate and defend research methods, results and conclusions in verbal format to an audience.
  • Effectively communicate knowledge resulting from research.
  • Design and present a research proposal.
  • Demonstrate advanced skills in communicating research.
  • Present and defend a research proposal.
  • Communicate technical and non-technical information clearly and effectively to diverse audiences, including engineering teams and the wider community.

Information related to multi-term sequence courses is available in Part C of the Course Design and Management Manual. 2026 courses that are a continuation of a multi-term sequence started in 2025 should retain existing assessment requirements.

For example:

  1. A “Part B” course run in Semester 1 2026 following “Part A” delivered in Semester 2 2025 should retain the existing assessment structure.
  2. A “Part A” course starting in Semester 1 2026 should have appropriate security measures put in place by the January deadline.
  3. A “Part B” course run in Semester 2 2026 should have appropriate security measures put in place by the July deadline. In effect, this change would often be actioned when considering the associated Semester 1 2026 Part A course.

In line with similar approaches at other institutions, a new field has been added to the curriculum management system (CourseLoop). This field will be used to flag specific assessment tasks within courses that are being delivered in a way that satisfies the definition of secure assessment.

Secure Assessment: supervised assessment that facilitates observable attainment of program/course learning outcomes, verifying that students have demonstrated the required knowledge and skills.


The field, available for use now, allows for individual tasks to be marked as:

  • No (this is not a secure assessment task)
  • Yes (the task is delivered in a way that satisfies the definition of secure assessment)
  • Not determined (a decision on the security of the task has not yet been made).

[CourseLoop screenshot: the Secure Assessment field]

If the task is flagged as being secure (“Yes” is selected in the Secure Assessment field), an additional Secure Assessment Description field will appear.

[CourseLoop screenshot: the Secure Assessment Description field]

Additional detail on the most aligned Secure Assessment Type (as outlined in the Framework) should be recorded by entering one of the following:

  • In Term Supervised Task
  • Live Interactive Performance
  • Formal Exam
  • Placement, Internship, or Supervision

How will different Modes of Delivery be handled?

Where different approaches to securing assessment are being applied to an assessment task for a course delivered concurrently across two different Modes of Delivery, this can be captured in the Secure Assessment Description field.

For example:

  • Live Interactive Performance (online). Formal Exam (face to face)
    OR
  • Formal exam (on campus cohort). Proctored exam (online* cohort)

* Proctoring solutions for 2026 are being considered. Further advice will be provided as soon as it is available.

How will data be entered into the Field?

As Schools and Colleges identify the specific assessment tasks to be secured in line with Framework implementation deadlines, spreadsheets can be submitted to Academic Governance and Compliance to facilitate data entry.

These can be submitted using a Service Now request (selecting the CourseLoop data entry/amendments category option).

What is the deadline for adding the Security details?

Security flags should be in place by the relevant 2026 deadlines, as outlined in the framework.

  • January 2026: All Trimester and Semester 1 2026 courses.
  • April 2026: All Midyear session and Trimester 2 courses.
  • July 2026: All Trimester 3 and Semester 2 2026 courses.

How will the flag information be used?

Initially, the secure assessment flag will be used to check and support progress towards securing assessment by the 2026 deadlines.

In the longer term, secure assessment data can be used to inform future discussions on how best to assure the integrity of our awards.

What reports are available?

A new standard report has been created in the CourseLoop reporting dashboard – 10.041 UON Course Assessment Security.

Yes.

The purpose of this work is to ensure that students are actually meeting the learning outcomes of the course, not simply to change assessments for change's sake.

This is a key course coordinator responsibility (113b of the Course Design and Management Manual).

The main requirement is that measures are put in place in all courses to ensure the security of key assessment tasks. How this is approached in individual courses will dictate what related actions need to be completed.

Scenario 1: The course has an existing 30% secure assessment.

Under the provisions of the Framework, this task would be suitable if it is a compulsory hurdle requirement. The task can be flagged as secure in the curriculum management system.

Adding a compulsory requirement can be done at the School level prior to the start of term, but should be actioned in time for inclusion in course outlines. Refer to Table 9 School Level Revisions - Course Design and Management Manual.

If the 30% (or more) secure task is already a compulsory requirement, no action is needed.

Scenario 2: The course has multiple secure assessment tasks totalling a minimum of 50% of the course.

This satisfies the provisions of the framework, and no action is required. The individual tasks can be flagged as secure in the curriculum management system.

Scenario 3: The course has no secure assessments, and the preference is for a 30% hurdle exam.

This would require a change to the assessment type, the addition of a compulsory assessment, and approval for the use of an exam.

The supervised exam can be flagged as secure in the curriculum management system.

  • Secure Assessment = YES
  • Secure Assessment Description = Formal Exam
Scenario 4: Changing the conditions under which existing tasks are conducted.

In some situations, simply changing how an existing task is conducted will meet the requirements of the assessment framework. No significant course revision is required to action this.

For example, could an existing insecure task (online quiz, take-home writing task, etc.) be completed in tutorial time? This would make it a secure In Term Supervised Task.

This task can now be flagged as secure in the curriculum management system.

  • Secure Assessment = YES
  • Secure Assessment Description = In Term Supervised Task
Scenario 5: Adding a live interactive element to existing tasks

Only high-level assessment information (e.g. type, weighting, etc) is captured in the Curriculum Management System (CourseLoop). The specific details and requirements are typically captured and communicated through the course outline and additional assignment information in Canvas course sites.

It is possible to add security provisions to existing tasks by ensuring the requirements are clearly communicated to students in this additional information.

For example:

An existing task may be listed as 30% Written Task. The additional descriptive information and instructions will describe what is required to complete that task, how it is to be submitted, etc.

These instructions can be updated to advise students that, post submission of the written task, they will need to participate in some live interactive element intended to further gauge their understanding and level of achievement.

The assessment itself remains a 30% written task. The inclusion of the live interactive element, where students are required to demonstrate their learning under supervised conditions, means the task can now be flagged as secure in the curriculum management system.

  • Secure Assessment = YES
  • Secure Assessment Description = Live Interactive Performance

Some things to consider

  • As with any assessment, the full requirements and instructions need to be clearly communicated to students.
  • Interactive elements (e.g. post-submission Q&A) need to be thoughtfully designed to focus on “testing” students' achievement in relation to the aligned learning outcome.
  • It may be possible to use GenAI to help design individualised questions that align with the CLO and the written submission. This is discussed in this example from the University of Melbourne, and LDTI is developing resources related to this approach.
  • Rubrics may need to be redesigned to align with the new task design.
  • The duration and depth of the interactive element will depend on the complexity of the original task. A low-weight task aligned with few outcomes may require only a short Q&A to provide assurance of student learning. High-stakes tasks, or tasks aligned to several outcomes, may require more in-depth and structured interactive elements.

One of the principles guiding our developing approach to GenAI in education is that “sector-wide and internal collaboration strengthens collective expertise”.

Effective GenAI practice grows through open exchange of insights among colleagues, universities, professional bodies and industry, and through focused dialogue within each academic field to address its unique standards and challenges. By sharing knowledge at all levels, we can refine our strategies and stay aligned with emerging expectations.

The information below outlines a selection of readings, examples, groups, case studies, etc which have informed our approach and the development of our Assessment Framework.

1. TEQSA & CRADLE Webinar Series

Delivered across 2023, this webinar series brought together representatives from across the Australian higher education sector.

2. GenAI Knowledge Hub

Bringing together case studies and resources from across the sector, this Hub has proven an invaluable source of information. The following papers, developed partly based on the collaborative efforts in the series above, are strongly recommended:

3. The Two-Lane Approach to Assessment

The two-lane assessment approach has been developed in large part by the University of Sydney. While adopted, adapted and presented by numerous institutions, these pieces by Danny Liu and/or Adam Bridgeman (and colleagues) have proven particularly insightful.

4. Special Interest Groups and Organisations

Across the sector, a range of groups have provided opportunities for collaboration and discussion. Below is a selection which may be of interest:

5. Key Additional Readings

Related readings are shared regularly with UON colleagues via the AI Community of Practice. The following, in particular, have informed discussions as our framework has been developed.