Leo Wu, an economics student at Minerva University in San Francisco, California, founded a group to discuss how AI tools can help in education. Credit: AI Consensus

Ready or not, AI is coming to science education — and students have opinions

As educators debate whether it’s even possible to use AI safely in research and education, students are taking a role in shaping its responsible use.

The world had never heard of ChatGPT when Johnny Chang started his undergraduate programme in computer engineering at the University of Illinois Urbana–Champaign in 2018. All that the public knew then about assistive artificial intelligence (AI) was that the technology powered joke-telling smart speakers or the somewhat fitful smartphone assistants.

But, by his final year in 2023, Chang says, it became impossible to walk through campus without catching glimpses of generative AI chatbots lighting up classmates’ screens.

“I was studying for my classes and exams and as I was walking around the library, I noticed that a lot of students were using ChatGPT,” says Chang, who is now a master’s student at Stanford University in California. He studies computer science and AI, and is a student leader in the discussion of AI’s role in education. “They were using it everywhere.”

ChatGPT is one example of the large language model (LLM) tools that have exploded in popularity over the past two years. These tools work by taking user inputs in the form of written prompts or questions and generating human-like responses using the Internet as their catalogue of knowledge. As such, generative AI produces new data based on the information it has already seen.
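To make the prompt-and-response loop concrete, here is a minimal sketch of how a student or researcher might query an LLM programmatically. It assumes the OpenAI Python SDK (version 1 or later) with an API key already set in the environment; the model name is only an example, and any comparable chatbot API follows the same pattern.

```python
# Minimal prompt-and-response sketch.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY environment
# variable; the model name below is only an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "user", "content": "Summarize the greenhouse effect in two sentences."}
    ],
)

# The generated text comes back inside a structured response object.
print(response.choices[0].message.content)
```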

However, these newly generated data — from works of art to university papers — often lack accuracy and creative integrity, ringing alarm bells for educators. Across academia, universities have been quick to place bans on AI tools in classrooms to combat what some fear could be an onslaught of plagiarism and misinformation. But others caution against such knee-jerk reactions.

Victor Lee, who leads Stanford University’s Data Interactions & STEM Teaching and Learning Lab, says that data suggest that levels of cheating in secondary schools did not increase with the roll-out of ChatGPT and other AI tools. He says that part of the problem facing educators is the fast-paced changes brought on by AI. These changes might seem daunting, but they’re not without benefit.

Educators must rethink the model of written assignments “painstakingly produced” by students using “static information”, says Lee. “This means many of our practices in teaching will need to change — but there are so many developments that it is hard to keep track of the state of the art.”

Despite these challenges, Chang and other student leaders think that blanket AI bans are depriving students of a potentially revolutionary educational tool. “In talking to lecturers, I noticed that there’s a gap between what educators think students do with ChatGPT and what students actually do,” Chang says. For example, rather than asking AI to write their final papers, students might use AI tools to make flashcards based on a video lecture. “There were a lot of discussions happening [on campus], but always without the students.”
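As one illustration of the study-aid use Chang describes, the hypothetical sketch below turns a lecture transcript into question-and-answer flashcards. It reuses the kind of chatbot call shown earlier; the prompt wording, the `make_flashcards()` helper and the JSON output format are assumptions made for illustration, not a description of any particular tool.

```python
# Hypothetical sketch: turning a lecture transcript into study flashcards.
# Assumes the OpenAI Python SDK (v1+) and an API key; model name is an example.
import json

from openai import OpenAI

client = OpenAI()


def ask_llm(prompt: str) -> str:
    """Send one prompt to the chatbot API and return its text reply."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


def make_flashcards(transcript: str, n_cards: int = 10) -> list:
    prompt = (
        f"Create {n_cards} study flashcards from the lecture transcript below. "
        'Reply with only a JSON list of objects like {"question": "...", "answer": "..."}.'
        f"\n\nTranscript:\n{transcript}"
    )
    reply = ask_llm(prompt)
    # Models sometimes add extra prose, so real code should validate this parse;
    # more importantly, the student should check the cards against the lecture.
    return json.loads(reply)
```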

Portrait of Johnny Chang at graduation
Computer-science master’s student Johnny Chang started a conference to bring educators and students together to discuss the responsible use of AI. Credit: Howie Liu

To help bridge this communications gap, Chang founded the AI x Education conference in 2023 to bring together secondary and university students and educators to have candid discussions about the future of AI in learning. The virtual conference included 60 speakers and more than 5,000 registrants. This is one of several efforts set up and led by students to ensure that they have a part in determining what responsible AI will look like at universities.

Over the past year, at events in the United States, India and Thailand, students have spoken up to share their perspectives on the future of AI tools in education. Although many students see benefits, they also worry about how AI could damage higher education.

Enhancing education

Leo Wu, an undergraduate student studying economics at Minerva University in San Francisco, California, co-founded a student group called AI Consensus. Wu and his colleagues brought together students and educators in Hyderabad, India, and in San Francisco for discussion groups and hackathons to collect real-world examples of how AI can assist learning.

From these discussions, students agreed that AI could be used to disrupt the existing learning model to make it more accessible for students with different learning styles or who face language barriers. For example, Wu says that students shared stories about using multiple AI tools to summarize a lecture or a research paper and then turn the content into a video or a collection of images. Others used AI to transform data points collected in a laboratory class into an intuitive visualization.
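To give a flavour of that last example, below is the sort of plotting code a chatbot might hand back when a student pastes in raw laboratory readings and asks for an intuitive chart. The data values and labels are invented purely for illustration.

```python
# Illustrative only: plotting code of the kind an AI assistant might suggest
# for lab measurements (the values below are made up).
import matplotlib.pyplot as plt

time_min = [0, 5, 10, 15, 20, 25]                     # minutes since the reaction started
temperature_c = [21.0, 24.3, 28.1, 31.6, 33.9, 34.8]  # degrees Celsius

plt.plot(time_min, temperature_c, marker="o")
plt.xlabel("Time (minutes)")
plt.ylabel("Temperature (°C)")
plt.title("Reaction temperature over time")
plt.grid(True)
plt.show()
```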

For people studying in a second language, Wu says that “the language barrier [can] prevent students from communicating ideas to the fullest”. Using AI to translate these students’ original ideas or rough drafts crafted in their first language into an essay in English could be one solution to this problem, he says. Wu acknowledges that this practice could easily become problematic if students relied on AI to generate ideas, and the AI returned inaccurate translations or wrote the paper altogether.

Jomchai Chongthanakorn and Warisa Kongsantinart, undergraduate students at Mahidol University in Salaya, Thailand, presented their perspectives at the UNESCO Round Table on Generative AI and Education in Asia–Pacific last November. They point out that AI can have a role as a custom tutor to provide instant feedback for students.

“Instant feedback promotes iterative learning by enabling students to recognize and promptly correct errors, improving their comprehension and performance,” wrote Chongthanakorn and Kongsantinart in an e-mail to Nature. “Furthermore, real-time AI algorithms monitor students’ progress, pinpointing areas for development and suggesting pertinent course materials in response.”

Although private tutors could provide the same learning support, some AI tools offer a free alternative, potentially levelling the playing field for students with low incomes.

Jomchai Chongthanakorn speaks at the UNESCO Round Table on Generative AI and Education conference
Jomchai Chongthanakorn gave his thoughts on AI at a UNESCO round table in Bangkok. Credit: UNESCO/Jessy & Thanaporn

Despite the possible benefits, students also express wariness about how using AI could negatively affect their education and research. ChatGPT is notorious for ‘hallucinating’ — producing incorrect information but confidently asserting it as fact. At Carnegie Mellon University in Pittsburgh, Pennsylvania, physicist Rupert Croft led a workshop on responsible AI alongside physics graduate students Patrick Shaw and Yesukhei Jagvaral to discuss the role of AI in the natural sciences.

“In science, we try to come up with things that are testable — and to test things, you need to be able to reproduce them,” Croft says. But, he explains, it’s difficult to know whether things are reproducible with AI because the software operations are often a black box. “If you asked [ChatGPT] something three times, you will get three different answers because there’s an element of randomness.”
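Croft's point about randomness is easy to demonstrate: most chatbot APIs sample their output, so the same prompt can yield different wording, and sometimes different substance, on each run. The sketch below, again assuming the OpenAI Python SDK and an example model name, sends one question three times; lowering the temperature parameter makes the replies more repeatable, though not fully deterministic.

```python
# Asking the same question three times to show sampling randomness.
# Assumes the OpenAI Python SDK (v1+) and an API key; model name is an example.
from openai import OpenAI

client = OpenAI()
question = "In one sentence, why do galaxies rotate?"

for run in range(3):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": question}],
        temperature=1.0,      # sampling randomness; higher values vary more
    )
    print(f"Run {run + 1}: {reply.choices[0].message.content}")
```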

And because AI systems are prone to hallucinations and can give answers only on the basis of data they have already seen, truly new information, such as research that has not yet been published, is often beyond their grasp.

Croft agrees that AI can assist researchers, for example, by helping astronomers to find planetary research targets in a vast array of data. But he stresses the need for critical thinking when using the tools. To use AI responsibly, Croft argued in the workshop, researchers must understand the reasoning that led to an AI’s conclusion. To take a tool’s answer simply on its word alone would be irresponsible.

“We’re already working at the edge of what we understand” in scientific enquiry, Shaw says. “Then you’re trying to learn something about this thing that we barely understand using a tool we barely understand.”

These lessons also apply to undergraduate science education, but Shaw says that he’s yet to see AI play a large part in the courses he teaches. At the end of the day, he says, AI tools such as ChatGPT “are language models — they’re really pretty terrible at quantitative reasoning”.

Shaw says it’s obvious when students have used an AI on their physics problems, because they are more likely to have either incorrect solutions or inconsistent logic throughout. But as AI tools improve, those tells could become harder to detect.

Chongthanakorn and Kongsantinart say that one of the biggest lessons they took away from the UNESCO round table was that AI is a “double-edged sword”. Although it might help with some aspects of learning, they say, students should be wary of over-reliance on the technology, which could reduce human interaction and opportunities for learning and growth.

“In our opinion, AI has a lot of potential to help students learn, and can improve the student learning curve,” Chongthanakorn and Kongsantinart wrote in their e-mail. But “this technology should be used only to assist instructors or as a secondary tool”, and not as the main method of teaching, they say.

Equal access

Tamara Paris is a master’s student at McGill University in Montreal, Canada, studying ethics in AI and robotics. She says that students should also carefully consider the privacy issues and inequities created by AI tools.

Some academics avoid using certain AI systems owing to privacy concerns about whether AI companies will misuse or sell user data, she says. Paris notes that widespread use of AI could create “unjust disparities” between students if knowledge or access to these tools isn’t equal.

Portrait of Tamara Paris
Tamara Paris says not all students have equal access to AI tools. Credit: McCall Macbain Scholarship at McGill

“Some students are very aware that AIs exist, and others are not,” Paris says. “Some students can afford to pay for subscriptions to AIs, and others cannot.”

One way to address these concerns, says Chang, is to teach students and educators about the flaws of AI and its responsible use as early as possible. “Students are already accessing these tools through [integrated apps] like Snapchat” at school, Chang says.

In addition to learning about hallucinations and inaccuracies, students should also be taught how AI can perpetuate the biases already found in our society, such as discriminating against people from under-represented groups, Chang says. These issues are exacerbated by the black-box nature of AI — often, even the engineers who built these tools don’t know exactly how an AI makes its decisions.

Beyond AI literacy, Lee says that proactive, clear guidelines for AI use will be key. At some universities, academics are carving out these boundaries themselves, with some banning the use of AI tools for certain classes and others asking students to engage with AI for assignments. Scientific journals are also implementing guidelines for AI use when writing papers and peer reviews that range from outright bans to emphasizing transparent use.

Lee says that instructors should clearly communicate to students when AI can and cannot be used for assignments and, importantly, signal the reasons behind those decisions. “We also need students to uphold honesty and disclosure — for some assignments, I am completely fine with students using AI support, but I expect them to disclose it and be clear how it was used.”

For instance, Lee says he’s OK with students using AI in courses such as digital fabrication — AI-generated images are used for laser-cutting assignments — or in learning-theory courses that explore AI’s risks and benefits.

For now, the application of AI in education is a constantly moving target, and the best practices for its use will be as varied and nuanced as the subjects it is applied to. The inclusion of student voices will be crucial to help those in higher education work out where those boundaries should be and to ensure the equitable and beneficial use of AI tools. After all, they aren’t going away.

“It is impossible to completely ban the use of AIs in the academic environment,” Paris says. “Rather than prohibiting them, it is more important to rethink courses around AIs.”
