


After MIT Professor Joseph Weizenbaum created the chatbot Eliza, he became concerned that people who had used the programme started to act as if it were human. This might sound like a modern problem, but Eliza was created in 1966. If a programme from the 1960s was capable of tricking people into thinking it was human, what effect could the large-language-model-based chatbots of the 2020s have?
Modern philosophers and technology experts have discussed whether AI could develop consciousness. Sentience is difficult to define, but the fact that large language models respond by mathematically calculating the probability of certain patterns appearing suggests that it would be hard to consider them to be alive. However, in terms of our responses to them, what matters is not whether they are sentient, but whether they appear to be so.
Large language models are built from genuine human interactions. While their tendency to hallucinate means that chatbots are not able to provide reliable factual information, they are able to effectively replicate the language used in human communication. Psychologists report that people tend to have a cognitive bias towards forming attachment and trust. Even sceptical technology writers report feeling some emotion towards AI chatbots. Some users have even reported grief when one model has been replaced by a newer one.
This combination of believable human language and an inability to reliably assess facts can be dangerous. Cases have been reported where people have been encouraged by chatbots to do dangerous or illegal things. The chatbots were able to use language to encourage and persuade, but not to identify or evaluate risks. Trust becomes dangerous when it is not accompanied by reason. Also, if people form relationships with AI, then they may spend less time and effort trying to cultivate genuine human relationships. Could the chatbot revolution lead to a world where we struggle to relate to each other?
