• NEWS EXPLAINER
  • 28 February 2024

Is ChatGPT making scientists hyper-productive? The highs and lows of using AI

Large language models are transforming scientific writing and publishing. But the productivity boost that these tools bring could come with a downside.

In a 2023 Nature survey of scientists, 30% of respondents had used generative AI tools to help write manuscripts. Credit: Nicolas Maeterlinck/Belga MAG/AFP via Getty

ChatGPT continues to steal the spotlight, more than a year after its public debut.

The artificial intelligence (AI) chatbot was released as a free-to-use tool in November 2022 by tech company OpenAI in San Francisco, California. Two months later, ChatGPT had already been listed as an author on a handful of research papers.

Academic publishers scrambled to announce policies on the use of ChatGPT and other large language models (LLMs) in the writing process. By last October, 87 of 100 top scientific journals had provided guidance to authors on generative AI, which can create text, images and other content, researchers reported on 31 January in The BMJ¹.

But that’s not the only way in which ChatGPT and other LLMs have begun to change scientific writing. In academia’s competitive environment, any tool that allows researchers to “produce more publications is going to be a very attractive proposition”, says digital-innovation researcher Savvas Papagiannidis at Newcastle University in Newcastle upon Tyne, UK.

Generative AI is continuing to improve — so publishers, grant-funding agencies and scientists must consider what constitutes ethical use of LLMs, and what over-reliance on these tools says about a research landscape that encourages hyper-productivity.

Are scientists routinely using LLMs to write papers?

Before its public release, ChatGPT was not nearly as user-friendly as it is today, says computer scientist Debora Weber-Wulff at the HTW Berlin University of Applied Sciences. “The interfaces for the older GPT models were something that only a computer scientist could love.”

In the past, researchers typically needed specialized expertise to use advanced LLMs. Now, “GPT has democratized that to some degree”, says Papagiannidis.

This democratization has catalysed the use of LLMs in research writing. In a 2023 Nature survey of more than 1,600 scientists, almost 30% said that they had used generative AI tools to help write manuscripts, and about 15% said they had used them to help write grant applications.

And LLMs have many other uses. They can help scientists to write code, brainstorm research ideas and conduct literature reviews. LLMs from other developers are improving as well, such as Google’s Gemini and Claude 2 by Anthropic, an AI company in San Francisco. Researchers with the right skills can even develop their own personalized LLMs that are fine-tuned to their writing style and scientific field, says Thomas Lancaster, a computer scientist at Imperial College London.

What are the benefits for researchers?

About 55% of the respondents to the Nature survey felt that a major benefit of generative AI is its ability to edit and translate writing for researchers whose first language is not English. Similarly, in a poll by the European Research Council (ERC), which funds research in Europe, 75% of more than 1,000 ERC-grant recipients felt that generative AI will reduce language barriers in research by 2030, according to a report released in December².

Of the ERC survey respondents, 85% thought that generative AI could take on repetitive or labour-intensive tasks, such as literature reviews. And 38% felt that generative AI will promote productivity in science, such as by helping researchers to write papers at a faster pace.

What are the downsides?

Although ChatGPT’s output can be convincingly human-like, Weber-Wulff warns that LLMs can still make language mistakes that readers might notice. That’s one of the reasons she advocates for researchers to acknowledge LLM use in their papers. Chatbots are also notorious for generating fabricated information, called hallucinations.

And there is a drawback to the productivity boost that LLMs might bring. Speeding up the paper-writing process could increase throughput at journals, potentially stretching editors and peer reviewers even thinner than they already are. “With this ever-increasing number of papers — because the numbers are going up every year — there just aren’t enough people available to continue to do free peer review for publishers,” Lancaster says. He points out that alongside researchers who openly use LLMs and acknowledge it, some quietly use the tools to churn out low-value research.

It’s already difficult to sift through the sea of published papers to find meaningful research, Papagiannidis says. If ChatGPT and other LLMs increase output, this will prove even more challenging.

“We have to go back and look at what the reward system is in academia,” Weber-Wulff says. The current ‘publish or perish’ model rewards researchers for constantly pushing out papers. But many people argue that this needs to shift towards a system that prioritizes quality over quantity. For example, Weber-Wulff says, the German Research Foundation allows grant applicants to include only ten publications in a proposal. “You want to focus your work on getting really good, high-level papers,” she says.

Where do scientific publishers stand on LLM use?

According to the study in The BMJ, 24 of the 100 largest publishers — collectively responsible for more than 28,000 journals — had by last October provided guidance on generative AI¹. Journals with generative-AI policies tend to allow some use of ChatGPT and other LLMs, as long as they’re properly acknowledged.

Springer Nature, for example, states that LLM use should be documented in the methods or another section of the manuscript, a guideline introduced in January 2023. Generative AI tools do not, however, satisfy criteria for authorship, because that “carries with it accountability for the work, and AI tools cannot take such responsibility”. (Nature’s news team is editorially independent of its publisher, Springer Nature.)

Enforcing these rules is easier said than done, because undisclosed AI-generated text can be difficult for publishers and peer reviewers to spot. Some sleuths have caught it through subtle phrases and mistranslations. Unlike cases of plagiarism, in which there is clear source material, “you can’t prove that anything was written by AI”, Weber-Wulff says. Despite researchers racing to create LLM-detection tools, “we haven’t seen one that we thought produced a compelling enough result” to screen journal submissions, says Holden Thorp, editor in chief of the Science family of journals.

What about other uses?

Although as of November, the American Association for the Advancement of Science — which publishes Science — allows for some disclosed use of generative AI in the preparation of manuscripts, it still bans the use of LLMs during peer review, Thorp says. This is because he and others at Science want reviewers to devote their full attention to the manuscript being assessed, he adds. Similarly, Springer Nature’s policy prohibits peer reviewers from uploading manuscripts into generative-AI tools.

Some grant-funding agencies, including the US National Institutes of Health and the Australian Research Council, forbid reviewers from using generative AI to help examine grant applications because of concerns about confidentiality (grant proposals are treated as confidential documents, and the data entered into public LLMs could be accessed by other people). But the ERC Scientific Council, which governs the ERC, released a statement in December recognizing that researchers use AI technologies, along with other forms of external help, to prepare grant proposals. It said that, in these cases, authors must still take full responsibility for their work.

“Many organizations come out now with very defensive statements” requiring authors to acknowledge all use of generative AI, says ERC Scientific Council member Tom Henzinger, a computer scientist at the Institute of Science and Technology Austria in Klosterneuburg.

To him, ChatGPT seems no different from running text by a colleague for feedback. “Use every resource at your disposal,” Henzinger says.

Regardless of the ever-changing rules around generative AI, researchers will continue to use it, Lancaster says. “There is no way of policing the use of technology like ChatGPT.”