Listening transcript ↓↓↓
[00:00.04]Newly developed artificial intelligence (AI) systems
[00:05.76]have demonstrated the ability to perform
[00:09.32]at human-like levels in some areas.
[00:13.52]But one serious problem with the tools remains
[00:18.28]– they can produce false or harmful information repeatedly.
[00:24.16]The development of such systems, known as "chatbots,"
[00:28.72]has progressed greatly in recent months.
[00:32.80]Chatbots have shown the ability to interact
[00:36.48]smoothly with humans and produce complex writing
[00:41.56]based on short, written commands.
[00:44.72]Such tools are also known as "generative AI"
[00:49.60]or "large language models."
[00:52.84]Chatbots are one of many different AI systems
[00:57.12]currently under development.
[00:59.64]Others include tools that can produce new images,
[01:04.32]video and music or can write computer programs.
[01:09.84]As the technology continues to progress,
[01:13.76]some experts worry that AI tools
[01:17.52]may never be able to learn how to avoid false,
[01:21.72]outdated or damaging results.
[01:25.92]The term "hallucination" has been used to describe
[01:30.88]when chatbots produce inaccurate or false information.
[01:36.40]Generally, hallucination describes something that
[01:40.40]is created in a person's mind,
[01:43.48]but is not happening in real life.
[01:47.12]Daniela Amodei is co-founder
[01:50.28]and president of Anthropic,
[01:53.36]a company that produced a chatbot called Claude 2.
[01:58.92]She told the Associated Press,
[02:01.84]"I don't think that there's any model today
[02:05.40]that doesn't suffer from some hallucination."
[02:09.92]Amodei added that such tools
[02:12.84]are largely built "to predict the next word."
[02:17.48]With this kind of design, she said,
[02:20.36]there will always be times when the model
[02:23.52]gets information or context wrong.
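Amodei's point that such tools are built "to predict the next word" can be made concrete with a toy sketch. The Python below is a hypothetical illustration, not Anthropic's method: it counts which word follows which in a tiny invented training text, then predicts the most frequent follower. Nothing in it checks whether a prediction is true.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count how often each word follows another
# in a small training text, then predict the most frequent follower.
# A real large language model is vastly larger, but the core objective
# is the same: estimate the likelihood of the next word.

training_text = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
)

follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word, or None if the word is unseen."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))   # 'cat' -- it followed 'the' most often
print(predict_next("sat"))   # 'on'
```

Scaled up by many orders of magnitude, this same objective can produce confident wording around wrong facts, which is the hallucination problem described above.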
[02:27.92]Anthropic, ChatGPT-maker OpenAI
[02:32.28]and other major developers of such AI systems
[02:36.52]say they are working to make AI tools
[02:40.32]that make fewer mistakes.
[02:42.92]Some experts question how long
[02:45.96]that process will take
[02:48.12]or if success is even possible.
[02:52.00]"This isn't fixable," says Professor Emily Bender.
[02:57.16]She is a language expert and
[02:59.72]director of the University of Washington's
[03:02.80]Computational Linguistics Laboratory.
[03:07.12]Bender told the AP she considers
[03:10.52]the general relationship between AI tools
[03:14.60]and proposed uses of the technology a "mismatch."
[03:20.08]Indian computer scientist Ganesh Bagler
[03:24.08]has been working for years to get AI systems
[03:28.28]to create recipes for South Asian foods.
[03:32.92]He said a chatbot can generate misinformation
[03:37.28]in the food industry that could hurt a food business.
[03:42.16]A single "hallucinated" recipe element
[03:45.72]could be the difference between a tasty meal
[03:49.00]and a terrible one.
[03:51.76]Bagler questioned OpenAI chief Sam Altman
[03:56.36]during an event on AI technology
[03:59.48]held in India in June.
[04:02.64]"I guess hallucinations in ChatGPT are still acceptable,
[04:08.36]but when a recipe comes out hallucinating,
[04:11.80]it becomes a serious problem," Bagler said.
[04:16.68]Altman answered by saying
[04:19.24]he was sure developers of AI chatbots
[04:23.20]would be able to get "the hallucination problem
[04:27.16]to a much, much better place" in the future.
[04:31.52]But he noted such progress could take years.
[04:35.96]"At that point we won't still talk about these," Altman said.
[04:41.56]"There's a balance between creativity
[04:44.64]and perfect accuracy, and the model will need
[04:49.12]to learn when you want one or the other."
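Altman's "balance between creativity and perfect accuracy" often appears in practice as a sampling setting called temperature. The sketch below is a generic illustration (the words and scores are invented, and this is not OpenAI's implementation): low temperature concentrates choices on the likeliest word, while high temperature spreads them out, making output more varied but less predictable.

```python
import math
import random

def sample_with_temperature(word_scores, temperature):
    """Pick the next word from {word: score}, scaled by temperature.

    Low temperature -> almost always the top-scoring word (accuracy-leaning).
    High temperature -> flatter odds, more surprising picks (creativity-leaning).
    """
    words = list(word_scores)
    scaled = [word_scores[w] / temperature for w in words]
    peak = max(scaled)  # subtract the max before exp() for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probabilities = [e / total for e in exps]
    return random.choices(words, weights=probabilities, k=1)[0]

# Hypothetical scores a model might assign to the word after
# "The capital of France is":
scores = {"Paris": 5.0, "Lyon": 3.0, "Berlin": 1.0}
print(sample_with_temperature(scores, temperature=0.1))  # almost certainly 'Paris'
print(sample_with_temperature(scores, temperature=2.0))  # could be any of the three
```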
[04:53.52]Other experts who have long studied the technology
[04:57.68]say they do not expect
[04:59.84]such improvements to happen anytime soon.
[05:04.60]The University of Washington's Bender
[05:07.52]describes a language model as a system
[05:11.28]that has been trained on written data
[05:14.28]to "model the likelihood of different strings of word forms."
[05:20.24]Many people depend on a version of this technology
[05:24.36]whenever they use the "autocomplete" tool
[05:28.24]when writing text messages or emails.
[05:32.48]The latest chatbot tools
[05:35.20]try to take that method to the next level,
[05:38.60]by generating whole new passages of text.
[05:42.88]But Bender says the systems are still
[05:46.28]just repeatedly choosing
[05:48.44]the most predictable next word in a series.
[05:52.84]Such language models "are designed to make things up.
[05:57.52]That's all they do," she noted.
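Bender's point, that the systems repeatedly choose the most predictable next word, is essentially the greedy generation loop sketched below. It is a toy illustration built on the same word-pair counting idea as the earlier sketch, not any vendor's code: starting from a prompt word, the loop keeps appending the single most likely follower, with no fact-checking step anywhere.

```python
from collections import Counter, defaultdict

# Greedy generation: start from a word and repeatedly append the single
# most likely follower seen in training.

training_text = "the cat sat on the mat . the cat sat on the rug ."
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=6):
    """Greedily extend `start` by the most predictable next word."""
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:  # dead end: word never seen with a follower
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(generate("the"))  # 'the cat sat on the cat sat' -- fluent-sounding, meaningless
```

The output reads like English because the statistics of English produced it, which is exactly why errors arrive with the same fluency as facts.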
[06:00.64]Some businesses, however, are not so worried
[06:04.76]about the ways current chatbot tools
[06:07.92]generate their results.
[06:10.44]Shane Orlick is the head of Jasper AI, a marketing technology company.
[06:17.04]He told the AP, "Hallucinations are actually an added bonus."
[06:23.44]He explained many chatbot users were pleased
[06:27.84]that the company's AI tool
[06:30.28]had "created takes on stories or angles
[06:34.52]that they would have never thought of themselves."
[06:38.56]I'm Bryan Lynn.
____________________
Words in This Story
generate – v. to produce something
inaccurate – adj. not correct or exact
context – n. all the facts, opinions, situations, etc. relating to a particular thing or event
mismatch – n. a situation when people or things are put together but are not suitable for each other
recipe – n. a set of instructions and ingredients for preparing a particular food dish
bonus – n. a pleasant extra thing
angle – n. a position from which something is looked at