RT Journal Article
SR Electronic
T1 Artificial Intelligence–Generated Editorials in Radiology: Can Expert Editors Detect Them?
JF American Journal of Neuroradiology
JO Am. J. Neuroradiol.
FD American Society of Neuroradiology
SP 559
OP 566
DO 10.3174/ajnr.A8505
VO 46
IS 3
A1 Ozkara, Burak Berksu
A1 Boutet, Alexandre
A1 Comstock, Bryan A.
A1 Van Goethem, Johan
A1 Huisman, Thierry A.G.M.
A1 Ross, Jeffrey S.
A1 Saba, Luca
A1 Shah, Lubdha M.
A1 Wintermark, Max
A1 Castillo, Mauricio
YR 2025
UL http://www.ajnr.org/content/46/3/559.abstract
AB BACKGROUND AND PURPOSE: Artificial intelligence is capable of generating complex texts that may be indistinguishable from those written by humans. We aimed to evaluate the ability of GPT-4 to write radiology editorials and to compare these with human-written counterparts, thereby determining their real-world applicability for scientific writing. MATERIALS AND METHODS: Sixteen editorials from 8 journals were included. To generate the artificial intelligence (AI)-written editorials, the summary of each of the 16 human-written editorials was fed into GPT-4. Six experienced editors reviewed the articles. First, an unpaired approach was used: the raters were asked to evaluate the content of each article by using a 1–5 Likert scale across specified metrics and then to determine whether each editorial was written by a human or by AI. The articles were then evaluated in pairs to determine which article was generated by AI and which should be published. Finally, the articles were analyzed with an AI detector and checked for plagiarism. RESULTS: The human-written articles had a median AI probability score of 2.0%, whereas the AI-written articles had a median score of 58%. The median similarity score among AI-written articles was 3%. Fifty-eight percent of unpaired articles were correctly classified regarding authorship; accuracy increased to 70% in the paired setting. AI-written articles received slightly higher scores in most metrics. When stratified by perception, articles perceived as human-written were rated higher in most categories. In the paired setting, raters strongly preferred publishing the article they perceived as human-written (82%). CONCLUSIONS: GPT-4 can write high-quality articles that iThenticate does not flag as plagiarized, that may go undetected by editors, and that AI detection tools identify only to a limited extent. Editors showed a positive bias toward human-written articles. Abbreviations: AI = artificial intelligence; LLM = large language model; SD = standard deviation