C. Editors’ Role in Ensuring Responsible Use of AI

Many forms of AI use can be very helpful, especially for authors writing in a language that is not their primary one. Nonetheless, editors should query authors about whether and how LLMs and other AI tools were used, and authors should disclose such use. For example, using AI to check spelling and correct grammar is generally viewed as acceptable, whereas using AI for content generation must be considered in the context of the work. In all use cases, authors should verify the accuracy and validity of their content before submission. Editors should remain alert to the possibility that authors may not disclose AI use even when queried.

Without authors’ explicit permission, editors and reviewers should not upload submitted manuscripts into AI systems in which confidentiality cannot be assured. Journals should have a policy on the use of AI in the review process and should make all editors, reviewers, and authors aware of it. Journals should remind editors and reviewers that submitted manuscripts are privileged communications and that uploading a manuscript to software or other AI technologies in which confidentiality cannot be assured is prohibited except with the authors’ permission.

If journals use AI for copy-editing and other production purposes, they should disclose this in their information for authors. Journals may also use AI/LLMs to produce ancillary content (e.g., podcasts, educational questions, patient summaries). Editors are responsible for ensuring the accuracy and validity of such content.