Frequently Asked Questions for Staff

Refer to the Artificial Intelligence Working Group SharePoint site. This includes a growing library of resources from across the sector, details of upcoming events, and updates from the group. Staff are also encouraged to join the Artificial Intelligence Community of Practice.

Also see the University Library’s Generative AI Tools resource.

In any conversations with students, you should emphasise the following messages:

  • It’s critical that any work submitted for assessment is the student’s own original work. Students must adhere to the expectations set by their Course Coordinator regarding use of AI tools in the development of any work to be submitted.
  • A student’s assessment is their own responsibility, so they should critically consider everything they submit.
  • Acknowledgement, as always, is key. If students follow the above points and use an AI tool in an assessment where its use has been approved, they should acknowledge this contribution clearly in their work. This is the same as any other citation or acknowledgement of a source.
  • Misuse of AI tools may be considered a breach of the University's Student Conduct Rule and could result in disciplinary action.
  • From January 2024, Artificial Intelligence detection software may be used to review any work that students submit. If students have used AI in any way other than as expressly permitted by their course coordinator, they may be engaging in academic misconduct and be subject to penalties. Unlike Turnitin’s text matching functionality, results from AI detection software are not visible to students.
  • Refer to the University Library’s Generative AI Tools resource  for further information.

To assist in any future conversations about how particular assessments have been crafted or developed, all students are advised to retain copies of drafts/versions of submissions. This message should be reinforced in course level discussions around assessments.

Also refer students to the Academic Integrity website and ASKUON.

Generative AI tools can be beneficial for student learning, and students should be encouraged to experiment with approaches such as the following:

  • Tutor – use prompts asking GenAI tools to explain complex concepts in a variety of ways, or to explain unfamiliar jargon.
  • Personal assistant – GenAI tools can help students create study plans and schedules.
  • Summariser – GenAI tools can assist in summarising information.
  • Opportunities for practice – GenAI can be used to generate challenging practice questions and other study resources such as flashcards.

Assessments within your course are in place to ensure that students have met the course learning outcomes, not to test what AI tools can do. If you have created an assignment that incorporates the use of AI (and clearly outlined your expectations for use), students must correctly acknowledge the use of GenAI tools if they are used.  See more at AI in Assessment - Communicating Your Expectations.

Generative AI, of which ChatGPT is an example, refers to the field of artificial intelligence that focuses on creating models and systems capable of generating original and creative content. These models learn from large amounts of data to understand patterns, structures, and semantics of the given domain, and then use that knowledge to generate new content that resembles the training data.

Generative AI has been applied in various domains such as text generation, image synthesis, music composition, and even video creation.

ChatGPT is an advanced generative AI chatbot that can give detailed textual responses on a wide variety of subjects and in a variety of different forms. ChatGPT was developed by OpenAI (an artificial intelligence research laboratory) and released in November 2022.

The Guardian has published this informative visual explainer on how AI chatbots like ChatGPT or Bard work.

Available generative AI tools encompass a range of capabilities, including text generation, code automation, image creation, and more, catering to diverse use cases across sectors such as software development, content creation, marketing, and academia.

All University of Newcastle staff have access to Microsoft Copilot, which offers a safe platform that safeguards user and business data from cyber threats. See more on how to access Copilot at Elevate your Working Experience with Microsoft Copilot.

You can use Copilot to experiment with prompting and to review the type and quality of the output it produces.

Some of the more well-known platforms are listed below.

Unlike Copilot, these platforms are not endorsed by the University and are not secure. While they may be suitable for experimentation, you should NOT enter any private, personal, or protected information into these systems.

  • AlphaCode: A generative tool focused on automating code generation, aiding in software development tasks.
  • GitHub Copilot: A code completion tool that suggests whole lines or blocks of code as you type, making coding faster and easier.
  • Gemini: A generative AI by Google known for its ability to assist in creating human-like text and other forms of digital content.
  • DALL-E: Specialises in generating images from textual descriptions, showcasing the fusion of language and visual understanding.
  • Repl.it: An online IDE supporting dozens of programming languages, which also integrates AI-powered code suggestions.
  • Cohere Generate: Known for its text generation capabilities, useful in various fields like content creation and customer service.
  • Claude: A general-purpose generative AI assistant developed by Anthropic, used for writing, analysis, and other applications.
  • Synthesia: Specialises in creating AI-driven video content by manipulating existing video footage or generating new visuals.
  • Scribe: Aids in generating step-by-step guides.
  • Midjourney: A generative AI service that creates images from natural language descriptions.
  • Grammarly: Enables a user to proofread and edit documents for grammar and spelling online.

As per its makers at OpenAI:

  • ChatGPT has limited knowledge of the world and events after 2021.
  • ChatGPT sometimes “hallucinates”, writing plausible sounding but incorrect or nonsensical answers.
  • ChatGPT is sensitive to tweaks in the input phrasing, and may respond differently when the same prompt is attempted multiple times.
    For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
  • The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI.
  • Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
  • While OpenAI have tried to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour.

More generally, you should be conscious of the following:

Bias in Training Data:
Generative AI models can inadvertently replicate and amplify biases present in their training data. This means that if the data used to train the AI contains historical biases or stereotypes, the AI's outputs can also be biased. You should be aware that AI-generated content may not be free from cultural, gender, or ethnic prejudices.

Lack of Originality and Creativity:
While generative AI can produce new content by recombining elements in novel ways, it fundamentally lacks true creativity and original thought. It cannot produce work that is genuinely innovative in the way that a human artist or thinker can, as it is limited to what it has been exposed to in its training data.

Quality and Coherence Issues:
The outputs from generative AI, especially for complex tasks, can sometimes lack coherence or be of inconsistent quality. For example, longer texts may drift off-topic, or the AI might generate images that are surreal or have artifacts. You should critically assess AI-generated content for such flaws.

Lack of Contextual Understanding:
AI systems may not fully understand the context or the nuances behind certain requests. They might generate content that is technically correct but inappropriate for the specific context or intention, missing subtle cues that a human would pick up on.

Dependency and Skill Degradation:
Overreliance on generative AI for tasks such as writing, designing, or problem-solving could lead to a decline in these skills among students. It's important to use these tools to augment learning and productivity, not replace the learning process itself.

Ethical and Legal Concerns:
The use of generative AI raises ethical questions, such as the appropriation of styles or potential copyright issues when the AI's output closely resembles existing works. Students must navigate these concerns and understand the legal implications of using AI-generated content.

Informed and ethical use and understanding of AI are integral to curricula and will be crucial in developing students' ability to navigate a technologically driven society.

The University of Newcastle is embracing the benefits that AI can offer students in their educational journeys, but is conscious of the need to use AI ethically in line with our principles of conduct. All students need to focus on responsible use of AI and the preservation of academic integrity.

The use of generative AI does not automatically constitute academic misconduct. Whether its use in individual courses and specific assessment items is acceptable needs to be communicated by Course Coordinators to students.

Course Coordinators should review the information at Communicating with students and be explicit about their expectations for each assessment item, ensuring students understand the requirements.

Follow the Academic Integrity Working Group SharePoint site for further updates.

In October 2023, the University of Newcastle Teaching and Learning Committee endorsed the use of the artificial intelligence detection functionality available through Turnitin.

From January 2024, Artificial Intelligence detection software may be used to review any work that students submit. If students have used AI in any way other than as expressly permitted by their course coordinator, they may be engaging in academic misconduct and be subject to penalties. Unlike Turnitin’s text matching functionality, results from AI detection software are not visible to students. See more at Turnitin AI Detection Session 2 - 14 March 2024.

However, AI detection is not a fail-safe silver bullet; it is no more than a tool to support professional judgement when considering the integrity of student submissions.

Language editing tools, such as Grammarly, can be useful for students wanting to improve the expression of their academic writing. However, the use of such tools may not always be appropriate (for example, in an assessment focused particularly on students’ writing skills), and it is important that you are clear about your expectations for their use.

In particular, GrammarlyGo (the generative Artificial Intelligence tool embedded within Grammarly) should only be used if you are expressly permitting the use of generative artificial intelligence in the assessment. If AI is not allowed in a particular assessment, students should make sure that the GrammarlyGo feature is turned off in their customisation settings by using the following instructions.

Turnitin’s Artificial Intelligence Detection may flag the use of Grammarly (or GrammarlyGo) in written submissions, but this does not necessarily mean the student has engaged in academic misconduct.

To assist in any future conversations about how particular assessments have been crafted or developed, all students are advised to retain copies of drafts/versions of submissions. This message should be reinforced in course level discussions around assessments.

Further Reading

Visit the Artificial Intelligence Working Group SharePoint page for additional resources.


Return to Artificial intelligence home