More academic journals allowing AI-generated content in manuscripts – dpa international

dpa

https://nordot.app/1041564040499118377

The International Studies Association has become the latest academic publisher to sign up to guidelines allowing the use of artificial intelligence-generated copy in journals.

The ISA, a partner of Oxford University Press and overseer of six journals covering international affairs and geopolitics, announced on June 11 that “recent developments” with AI mean that “human authors” would have to give “detailed statements of the exact use of AI tools” in what they submit in future.

The IR boffins “should”, the ISA said, “include information on the exact AI tool and where it was used in the creation of the manuscript” and give “rough percentages of reliance on AI tools in writing”.

But while the ISA said AI bots “do not qualify as authors”, it did not rule out the eventual publication of manuscripts conjured up solely by AI, saying only that its editors “are not reviewing or accepting manuscripts compiled exclusively by an AI tool at this time”.

The organisation urged “the ISA community to check back for further guidance” as “the situation is changing rapidly”.

The ISA’s guidelines follow those of the Committee on Publication Ethics, an umbrella group of university publishers, which in February 2023 told authors they “must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used” and warned that they would be “liable for any breach of publication ethics”.

COPE’s statement in turn came hot on the heels of the American Medical Association’s JAMA network of journals calling for “responsible use of AI language models and transparent reporting of how these tools are used in the creation of information and publication”.

JAMA at the time described ChatGPT’s answers to questions as “mostly well written” but also “formulaic, not up to date”.
More worrying, perhaps, given that the warning applies to medical research, was JAMA’s citation of findings that the AI bot sometimes produces “concocted nonexistent evidence for claims or statements it makes” and provides material that is “false or fabricated, without accurate or complete references”.
