
The good, the bad, and the unforeseen consequences with the release of ChatGPT

A conversation with Martin Delahunty about ChatGPT and potentially useful use-cases for scholarly writing and publishing

Published on Feb 11, 2023

Martin Delahunty is the Founder and Managing Director of Inspiring STEM Consulting, providing publishing and training services to publishers, universities, and pharmaceutical companies focused on science, technology, engineering, and mathematics. He is a former Secretary of the International Society for Medical Publication Professionals and a Fellow of the Royal Society for the Encouragement of Arts, Manufactures, and Commerce (London).

He joins Jo to talk about ChatGPT.

To see all our podcast episodes, go to

In a reciprocal exchange, Martin interviewed Jo about AfricArXiv on his podcast, ‘The Inspiring STEM Podcast’.

Quotes mentioned in this episode

Nature, 24 January 2023: Tools such as ChatGPT threaten transparent science; here are our ground rules for their use.

As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.

Nature, 18 January 2023: ChatGPT listed as author on research papers: many scientists disapprove

At least four articles credit the AI tool as a co-author, as publishers scramble to regulate its use.

[…] An editorial in the journal Nurse Education in Practice this month credits the AI as a co-author, alongside Siobhan O’Connor, a health-technology researcher at the University of Manchester, UK. Roger Watson, the journal’s editor-in-chief, says that this credit slipped through in error and will soon be corrected. “That was an oversight on my part,” he says, because editorials go through a different management system from research papers.

The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.

The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.


The antidote

The new AI Text Classifier launched Tuesday by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.

OpenAI cautions that its new tool – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of the OpenAI alignment team tasked with making its systems safer.

“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.

Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 [2022] as a free application on OpenAI’s website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.


Nick Cave comments:

What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary but it cannot create a genuine song. It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication, a kind of burlesque.

Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.

The approach from Professor Ethan Mollick at The Wharton School is quite smart.

