Cite Them Right

How emerging artificial intelligence (AI) tools affect academic work

Recent developments in AI programs, particularly generative AI built on artificial neural networks, mean that tools which take prompts from a user can produce responses that are often indistinguishable from human work (Farrokhnia, Banihashema, Noroozia and Wals, 2023).

In education, the launch of generative AI services such as ChatGPT and Google Bard enables users to ask questions and receive responses in a conversational manner. These services can, for example, produce plausible essays in response to summative questions. As more people use them, the programs adapt to the additional information to inform future responses.

For everyone involved in academia, AI-produced text, images and other media pose challenges to the assessment for learning process, as well as to:

  • Transparency in the exchange of information
  • Knowing the origins of our knowledge sources
  • Being able to give credit to the right party 

It is now even more important that learners and researchers declare that they have personally studied and applied the knowledge presented in assessed work.

Stay critical and scrutinise the output 

Early studies indicate that AI services such as ChatGPT produce flawed output. For example, there have been instances where they generate references to non-existent journal articles.

When prompted for academic-standard text, AI tools have produced factually incorrect information (Hosseini, Rasmussen and Resnik, 2023) and errors in medical diagnoses, despite demonstrating the ability to pass medical and business examinations (Editorial, 2023; Williams, 2023). These errors, and the lack of traceability, raise serious questions about the practicality of using AI output in an academic setting.

It is difficult at the time of writing to anticipate the full impact of text-generating services upon academic work. 

It is important to note that learners and researchers at all levels will continue to take responsibility for the work they submit, and must declare whenever third-party contributions, especially AI, have been involved. In terms of academic integrity and expectations, declaring any use of AI therefore follows naturally.

To read more about these aspects of AI see: 

The impact of artificial intelligence on academia 
Use of artificial intelligence (AI) sources in academic work 
Referencing Generative AI 


References
 

Editorial (2023) ‘ChatGPT: friend or foe?’, The Lancet, vol. 5, 5 March. Available at: http://www.thelancet.com/digital-health (Accessed: 2 April 2023).

Farrokhnia, M., Banihashema, S.K., Noroozia, O. and Wals, A. (2023) ‘A SWOT analysis of ChatGPT: implications for educational practice and research’, Innovations in Education and Teaching International, 27 March. Available at: https://doi.org/10.1080/14703297.2023.2195846

Hosseini, M., Rasmussen, L.M. and Resnik, D.B. (2023) ‘Using AI to write scholarly publications’, Accountability in Research: Ethics, Integrity and Policy, editorial, 25 January. Available at: https://doi.org/10.1080/08989621.2023.2168535

IBM (2023) What is artificial intelligence (AI)? Available at: https://www.ibm.com/topics/artificial-intelligence (Accessed: 2 April 2023). 

Williams, T. (2023) ‘GPT-4’s launch “another step change” for AI and higher education’, Times Higher Education, 23 March. Available at: https://www.timeshighereducation.com/news/gpt-4s-launch-another-step-change-ai-and-higher-education (Accessed: 2 April 2023).
