While the rise in access to and sophistication of GenAI offers exciting possibilities, it has also highlighted and perhaps amplified potential existing fragilities in some traditional forms of assessment. The impact is being felt at all levels of the global education sector.
This Framework is our collective response developed collaboratively by staff from the Colleges and Academic Division. It builds on the significant work many of you have already led since 2022 to protect the integrity of our awards, while maintaining a rich, contemporary learning experience for our students.
Why do we need a Framework?
Under the Higher Education Standards Framework (Threshold Standards) 2021, higher education institutions are required to demonstrate that student achievement of the course and program learning outcomes is credibly assessed.
The methods of assessment used should be capable of confirming that all specified learning outcomes are achieved and that grades awarded reflect the level of student attainment.
Beyond this regulatory compliance imperative, students, employers, professional bodies, and regulators must be able to trust that each graduate has genuinely met the required learning outcomes. By protecting this credibility, we preserve the lifelong value of our awards and uphold the University’s standing for excellence.
What are we doing?
At the heart of the Framework is the application of a “two lane” approach to assessment, which couples secure assessment of learning with open assessment as and for learning. The open lane provides opportunities for students to work appropriately with each other and with AI in ways that are aligned with discipline norms and industry expectations.
Secure Assessment
These are supervised assessments that facilitate the observable attainment of program/course learning outcomes, verifying that students have demonstrated the required knowledge and skills.
A variety of assessment task types can be delivered in ways that provide this security.
Open Assessment
There is a growing sector consensus that “unsupervised assessments are no longer able to assure attainment of learning outcomes” (Liu & Bates, 2025).
Any assessment task that is not delivered in a secure, supervised fashion should be considered “open” and be delivered with the assumption that students may engage with GenAI to complete the task.
What is needed and by when?
Secure assessment can be implemented at the course level (refer Framework section 3.1) or at the program level (refer Framework section 3.1). Unless otherwise approved (refer Framework section 4.2), the minimum expectation is that course-level security strategies will be in place for 2026.
For 2026:
All courses must include:
- at least one secure assessment task with a minimum weighting of 30% that is a compulsory component (hurdle)
OR
- secure assessment tasks within a course that have a total minimum weighting of 50% (comprised of one or more assessments)
Colleges may elect to implement minimum secure assessment thresholds (over and above the minimum requirements outlined above) including differentiated minimums across different year levels.
The following timelines apply to the implementation of assessment security strategies in all coursework programs offered by the University:
- January 2026: All Trimester and Semester 1 2026 courses must comply with the minimum assessment security provisions described above.
- April 2026: All Midyear session and Trimester 2 courses must comply with the minimum assessment security provisions described above.
- July 2026: All Trimester 3 and Semester 2 2026 courses must comply with the minimum assessment security provisions described above.
These initial requirements are focused on the immediate safeguarding of the integrity of our awards. As the understanding and impact of emerging technologies continues to evolve, the University will continue to explore and implement assessment reform beyond the minimum expectations outlined above. This will include a focus on modern assessment strategies which aim to uncover “evidence as to whether or not the work of learning has occurred” (Ellis & Lodge, 2024).
How can I implement secure assessments?
Secure tasks are supervised assessments that facilitate the observable attainment of program/course learning outcomes and verify that students have demonstrated the required knowledge and skills.
The University of Newcastle Assessment Framework includes 5 broad assessment task types that can be adapted to deliver assessment security within the specific contexts of courses.
In Term Supervised Task
Activities conducted in-term under supervised conditions may take multiple forms. These may include practical demonstrations, lab activities, completion of creative works, tests/quizzes, etc.
Many existing assessment tasks can be adapted for completion under supervised conditions.
Examples and Toolkits:
- The literacy team of Rachel Birch, Leanne Fray and Michelle Heaney from the School of Education - Check-in oral assessments
- Dr Andrew Keppert and Professor Natalie Thamwattana from the School of Information and Physical Sciences (Mathematics) - Weekly oral quiz
- Toolkit coming soon
Live Interactive Performance
A live question and answer session during/following a live (in person or online) performance, presentation, placement or submission of an artefact provides an excellent opportunity for students to demonstrate their learning. This may include formally structured oral assessments, in class debates, Q&A following a presentation, conference style presentations, or an oral defence of a submitted piece of work.
Examples and Toolkits
- Dr Erika Spray from the School of Education - Large Cohort Interactive Orals Toolkit
- Dr Nicholas Foulcher from the School of Architecture and Built Environment - Studio Critical Reviews
Formal Exams
Classifying an assessment as an exam triggers specific policies and procedures, including the requirement to schedule it within the official exam period. These requirements are outlined in Section 4 of the Course Assessment and Grading Manual.
Exams may be conducted in person or online under supervised conditions, and may involve written, practical, or oral components.
Heavy reliance on exams as a secure assessment type does not provide a supportive educational experience for our students; an extensive body of research supports the move away from high-stakes final exams.
Placement, Internship, or Supervision
Observed performance in a Work Integrated Learning (WIL) activity, placement, practicum or internship setting that demonstrates student attainment of course learning outcomes. WIL is well-supported and already integrated into all undergraduate degrees at the University of Newcastle through the Career-ready Placement program. These educational activities integrate theoretical learning with practical application in workplace and industry settings, providing authentic assessment opportunities that encourage good academic behaviour.
Competency Based Grading
Competency based grading strengthens the assurance of learning outcomes by making demonstrated mastery of skills and/or knowledge the non-negotiable standard for success. The focus on mastery rather than numerical grades and ranking, can reduce competitive pressures and increase student motivation to engage in good academic behaviours.
In practice, a competency-based grading approach might include:
- Compulsory Hurdle Requirements – competency must be demonstrated in all assessment tasks within the course to meet the course learning outcomes.
- Use of ungraded pass – students who demonstrate competency in all assessment tasks will be awarded an ungraded pass (UP) (refer Grading Scales and Administrative Codes Appendix to the Course Management and Assessment Manual).
- Mandatory Formative Feedback – students who do not meet competency standards on a task receive effective and timely feedback to support their growth.
- Minimum resubmissions – all assessment tasks must include an option for students to resubmit with sufficient time to reflect on and incorporate feedback.
Find more examples of the secure assessment types in practice from across the global higher education sector at GenAI and Assessment Examples.
Where can I get help?
Learning Design and Teaching Innovation will build on the GenAI focused support already offered by delivering assessment framework implementation workshops to all Schools and offering one-on-one learning design consultations for Course Coordinators.
Staff are strongly encouraged to attend a framework implementation workshop before booking a learning design consultation. Consultations will be prioritised based on the term in which the course is being offered.
Additional information and resources are available via:
- Academic Integrity, Artificial Intelligence, and Standards Working Group Sharepoint site
- The AI Community of Practice
- LDTI Upcoming Workshops
- LDTI Teaching Resources
- Regular all staff updates via the Loop
- Regular updates to University and College Teaching and Learning Committees
UON Assessment Framework - FAQs
The “two-lane” approach has been developed by the University of Sydney and is being adopted and adapted by many institutions.
The first lane (secure assessment) is about integrity, ensuring that students can demonstrate their attainment of program/course learning outcomes by verifying they have the required knowledge and/or skills to graduate. This can be done by students participating in observed assessment tasks such as in term supervised tasks, live interactive performances, placement, internships or supervision and formal exams.
The second lane (open assessment) is about relevance, ensuring that students have a well-rounded educational experience and become graduates whom the community and industry value and support. This can be done by ensuring that experiential learning is constructively aligned and that students can engage in authentic learning and assessment. This lane is about the student learning journey and inspiring students to do good work and engage productively.
| | Lane 1 - Integrity | Lane 2 - Relevance |
|---|---|---|
| Role of assessment | Assessment of learning (summative) | Assessment for and as learning (formative) |
| Level of operation | Mainly at program level | Mainly at unit level |
| Assessment security | Secured, invigilated | Open, ‘unsecured’ |
| Role of Generative AI | May or may not be used by the examiner | Where relevant, use of GenAI scaffolded and supported |
| TEQSA Alignment | Principle 2 – Forming trustworthy judgements about student learning in a time of AI requires multiple, inclusive and contextualised approaches to assessment | Principle 1 – Assessment and learning experiences equip students to participate ethically and actively in a society where AI is ubiquitous |
| Examples | In-person interactive orals, viva voces, simulations, in-class assessment and skill development, tests and exams. | Using GenAI to provoke reflection, suggest structure, brainstorm ideas, summarise literature, generate content, suggest counterarguments, improve clarity, provide formative feedback etc. |
Adapted from Malecka, B., & Liu, D. (2025, October 24). Student Voice and AI in Higher Education: Looking back to look forward. ACHIEVE Symposium, Newcastle, Australia.
Examples of the application of two-lane assessment can be seen at:
- University of Sydney
- University of Melbourne
- University of New England
- Australian Catholic University
- Curtin University
More information about how institutions across the globe are addressing the impact of GenAI can be seen at Highlighted Resources: Approaches to assessment design.
With the rapid advancement of GenAI technologies, “unsupervised assessments are no longer able to assure attainment of learning outcomes” (Liu & Bates, 2025). In unobserved (i.e. “take home”) assessment, attempts to enforce limits on GenAI use are unfeasible, and may contribute to increases in academic workload while also being detrimental to the emotional wellbeing of both teachers and students (Corbin, Dawson, Nicola-Richmond, & Partridge, 2025).
Therefore, any assessment task that is not delivered in a secure, supervised fashion should be considered “open” and be delivered with the assumption that students may engage with GenAI to complete the task. Such open, formative assessments allow educators to focus attention on the process of learning and on process-based assessment strategies that uncover “evidence as to whether or not the work of learning has occurred” (Ellis & Lodge, 2024). Open assessment for learning can help develop students’ discipline-specific knowledge, skills and mindsets. Carefully crafted tasks, which clearly scaffold and guide the appropriate use of every relevant resource, can assist in equipping “students to participate ethically and actively in a society where AI is ubiquitous” (Lodge, Howard, Bearman, Dawson, & Associates, 2023).
Through well-designed open assessments, students can be provided with opportunities to work appropriately with each other and AI in ways which are aligned with discipline norms and industry expectations. Examples and case studies of open assessment in use can be seen at GenAI and Assessment: Examples.
Centrally supported, formal and in-term invigilated exams can form part of the overall approach to assessment security. However, the use of a high-stakes formal exam should be minimised wherever possible. Academic staff are encouraged to consider alternative assessment types that are more appropriate for the intended learning outcomes of their course.
The use of invigilated exams solely as a security measure — without careful alignment to the learning outcomes — can compromise assessment validity. As Dawson et al. (2024) note, “Attempts to improve validity by addressing cheating can sometimes make validity worse”. Similar sentiments are echoed in the September 2025 TEQSA publication Enacting assessment reform in a time of artificial intelligence, which notes:
“Seeking ease and familiarity, individuals and teams may opt for familiar, secure assessment formats such as invigilated time-limited tests and exams.
This may reduce assessment variety and authenticity, thereby diminishing the value of assessment in promoting learning, as well as exacerbating existing inequities.”
If implementing formal exams, please ensure you review the requirements outlined in the Course Assessment and Grading Manual.
The University of Newcastle Assessment Framework mandates minimum secure assessment to be implemented across all courses in a program unless a program level approach has been endorsed. Refer to Section 4.2 of the Framework.
If you wish to investigate program level assessment, please contact your College Assistant Dean Education.
Yes.
A secure assessment verifies that students have demonstrated the required knowledge and skills and facilitates the observable attainment of program/course learning outcomes. Depending on the learning outcomes, it may be entirely appropriate for students to engage with GenAI.
If the learning outcomes permit, secure assessments may involve students engaging with GenAI in specific ways, ranging from AI-assisted idea generation to the full utilisation of GenAI for task completion (see Perkins, Furze, Roe, & MacVaugh, 2024).
Where GenAI is permitted in a secure assessment, educators must clearly communicate how it can be used and ensure that the assessment is conducted in a way that allows adherence to these specifics to be validated.

Assessment design is only one element of the institution’s academic integrity ecosystem. Other elements include:
In addition to secure tasks described above, there are indirect or complementary strategies that can be explored to support verifiable demonstration of learning outcomes. Examples include:
| Strategy | Description |
|---|---|
| Revision of learning outcomes to incorporate use of GenAI | Revising learning outcomes to explicitly include legitimate GenAI use clarifies expectations and reduces the ambiguity that often leads to academic integrity breaches, while at the same time promoting engagement with GenAI in ways that reflect industry expectations. |
| Portfolio assessment | Creation and collation of work over time to demonstrate student learning progression, skills, and achievements, providing a holistic view of their development. |
| Semester-long projects | Extended assignments that span an entire academic term, enabling students to engage in in-depth research, creative work, or practical application of course content while offering multiple opportunities for both secure assessment and feedback. |
| Conditional grading | Student marks in open assessment tasks are dependent on evidence of learning in related secure assessments. |
| Highly contextualised questions | Tasks based on very specific course material, or that involve authentic application to novel real-world situations, may be more difficult to complete using GenAI. |
| Use of the Cadmus platform | Cadmus is a useful tool for scaffolding and understanding how student work is constructed (when students work in the platform), and can support assurance by providing visibility of the process the student has completed to construct their assessment. |
| Focus on constructive alignment | Ensuring the assessment type and design align to the specific learning outcomes is a key strategy in assuring learning. Additionally, when course content clearly prepares students for assessments, they are more likely to engage appropriately and feel less pressure to misuse generative AI. See more at Creating Learning Outcomes. |
Not always.
Provisions relating to course revisions are outlined in Section 6 of the Course Design and Management Manual. For example, changing an assessment type or weighting would require a course revision. However, existing assessment tasks can be deemed secure (supervised assessment that facilitates the observable attainment of program/course learning outcomes, verifying that students have demonstrated the required knowledge and skills) by altering the way the task is delivered.
For example:
- quizzes currently completed online would not be secure, but the same quiz completed in class by students would be (as an in term supervised task).
- a recorded video presentation would not be deemed secure, but if that presentation included a live Q&A after submission, it would be (as a live interactive performance).
Neither of these changes, or similar adjustments, would require a course revision, as they involve neither a change in assessment type nor a change in weighting.
The Academic Integrity, Artificial Intelligence, and Standards Working Group (AASWG) shares monthly all staff updates via The Loop. Previous updates are available here. The group also provides an update to each meeting of the University Teaching and Learning Committee.
The AI Community of Practice is a platform for open discussion, available to all staff. LDTI contributes a weekly “wrap up” post each Friday morning, and this is a great way to get a snapshot of latest updates and news.
The GenAI: Frequently Asked Questions page and Library Generative AI Tools guide provide a launch pad to access the wealth of information available.
Effective GenAI practice grows through open exchange of insights among colleagues, universities, professional bodies and industry, and through focused dialogue within each academic field to address its unique standards and challenges. By sharing knowledge at all levels, we can refine our strategies and stay aligned with emerging expectations.
More than 400 different sessions and events have been shared through AASWG channels over the last 2 years. Keep an eye on the events page for opportunities to hear from colleagues across the higher education sector and beyond.
Online learning can provide the flexibility required by our students as they balance multiple competing priorities. However, online unsupervised assessment (which exists in both online and on campus courses) presents unique security challenges.
Some approaches that can assist in the online space:
1. Consider the mode of delivery of your course.
In scheduled online courses, students interact with content, peers, and instructors via timetabled online sessions. As the online participation is scheduled, students can plan accordingly. These scheduled sessions can be used as times to run secure assessment tasks such as live interactive performances.
Similarly, in online with intensive courses, students come together at timetabled in person sessions during the term. These in person sessions are valuable opportunities to conduct secure in person assessments. If these include full days, opportunities for innovative and authentic assessment (e.g. student showcase or conference style events) may arise.
Further information on Modes of Delivery for Courses is included in Table 7 of the Course Design and Management Manual.
2. Coupling artifacts with live interactive performance
A written task (e.g. essay, report, etc.) completed under unsupervised conditions is not a secure assessment task. However, it is possible to make judgements about both the authenticity of the submitted artifact and the student’s level of understanding of the related concepts by pairing submission of the work with a live Q&A session in which the student is asked questions specific to their submission.
It is also possible to use GenAI to support the efficient creation of individualised questions, as outlined in this example:
- Samadi, H., & Chee, G. V. F. (2024). Leveraging Oral Assessments to Enhance Learning and Integrity in Teaching Data Analytics. In Cochrane, T., Narayan, V., Bone, E., Deneen, C., Saligari, M., Tregloan, K., & Vanderburg, R. (Eds.), Navigating the Terrain: Emerging frontiers in learning spaces, pedagogies, and technologies. Proceedings ASCILITE 2024, Melbourne (pp. 529-533).
3. Cadmus
Cadmus is an online assessment platform, accessed through the Canvas LMS, that provides visibility into student learning throughout the assessment process.
While the use of Cadmus for unsupervised online tasks does not ensure assessment security, it can support other initiatives to provide some level of assurance and assistance in investigations by providing visibility of the process the student has completed to construct their assessment.
4. Proctoring Solutions
The University is currently exploring proctoring solutions for potential use in 2026.
Good assessment (whether individual or in groups) accurately reflects a student's knowledge and skills. Assessment tasks should be chosen to capture the right types of information, under the right conditions, according to the outcomes being assessed.
Including a group contract or charter can be helpful – refer to the LDTI Group Work Teaching Resource. Assessment information in relation to group work is outlined in Part B of the Course Assessment and Grading Manual. Any group task should be designed in a way that provides the opportunity to verify the attainment of learning outcomes by all members.
A potential approach to group work in secure assessment is to couple submission of a traditional group artifact (report, project, presentation, etc.) with a live interactive performance element. Members may participate in that activity as a group, but each individual should be asked to demonstrate their level of achievement against the outcomes.
In Leveraging Oral Assessments to Enhance Learning and Integrity in Teaching Data Analytics, staff from the University of Melbourne discuss how they conducted group interactive oral assessments:
- Their assessment began with a written report (75% of the mark) completed in groups of 3-4 students.
- 15-minute oral Q&A sessions were held with each group. Individual members were asked specific questions.
- GenAI was used to assist with drafting specific individualised questions for each group member.
The University of Newcastle acknowledges the traditional custodians of the lands within our footprint areas: Awabakal, Darkinjung, Biripai, Worimi, Wonnarua, and Eora Nations. We also pay respect to the wisdom of our Elders past and present.