
The Economist Bilingual Close Reading | TE-2023-05-29 | Postgraduate English Reading | ChatGPT's way with words raises questions about how humans acquire language


猫友2022080339

2023-05-29 11:21, Jiangsu


2023-05-29 issue: ChatGPT's way with words raises questions about how humans acquire language (PDF + Word + audio)




Daily Article for May 29 | 英语外刊社



Johnson | The language instinct
ChatGPT’s way with words raises questions about how humans acquire language

(1) When Deep Blue, a chess computer, defeated Garry Kasparov, a world champion, in 1997, many gasped in fear of machines triumphing over mankind. In the intervening years, artificial intelligence has done some astonishing things, but none has managed to capture the public imagination in quite the same way. Now, though, the astonishment of the Deep Blue moment is back, because computers are employing something that humans consider their defining ability: language.

(2) Or are they? Certainly, large language models (LLMs), of which the most famous is ChatGPT, produce what looks like impeccable human writing. But a debate has ensued about what the machines are actually doing internally, what it is that humans, in turn, do when they speak—and, inside the academy, about the theories of the world’s most famous linguist, Noam Chomsky.

(3) Although Professor Chomsky’s ideas have changed considerably since he rose to prominence in the 1950s, several elements have remained fairly constant. He and his followers argue that human language is different in kind (not just degree of expressiveness) from all other kinds of communication. All human languages are more similar to each other than they are to, say, whale song or computer code. Professor Chomsky has frequently said a Martian visitor would conclude that all humans speak the same language, with surface variation.

(4) Perhaps most notably, Chomskyan theories hold that children learn their native languages with astonishing speed and ease despite “the poverty of the stimulus”: the sloppy and occasional language they hear in childhood. The only explanation for this can be that some kind of predisposition for language is built into the human brain.

(5) Chomskyan ideas have dominated the linguistic field of syntax since their birth. But many linguists are strident anti-Chomskyans. And some are now seizing on the capacities of LLMs to attack Chomskyan theories anew.

(6) Grammar has a hierarchical, nested structure involving units within other units. Words form phrases, which form clauses, which form sentences and so on. Chomskyan theory posits a mental operation, “Merge”, which glues smaller units together to form larger ones that can then be operated on further (and so on). In a recent New York Times op-ed, the man himself (now 94) and two co-authors said “we know” that computers do not think or use language as humans do, referring implicitly to this kind of cognition. LLMs, in effect, merely predict the next word in a string of words.
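The nesting that "Merge" produces can be made concrete with a toy sketch. This is an illustration only, not Chomsky's formal definition: a binary `merge` that pairs two units into one constituent, applied bottom-up to build a clause out of words.

```python
# A minimal illustration (not Chomsky's formal definition) of binary Merge:
# combining two syntactic objects into one larger, nested object.

def merge(left, right):
    """Glue two units together into a single nested constituent (a tuple)."""
    return (left, right)

def depth(unit):
    """Nesting depth: shows the hierarchical structure that Merge creates."""
    if isinstance(unit, tuple):
        return 1 + max(depth(u) for u in unit)
    return 0  # a bare word has no internal structure

# Words form a phrase, which forms a clause: "the cat" -> "the cat sat".
noun_phrase = merge("the", "cat")      # ('the', 'cat')
clause = merge(noun_phrase, "sat")     # (('the', 'cat'), 'sat')

print(clause)        # (('the', 'cat'), 'sat')
print(depth(clause)) # 2
```

The point of the sketch is that the output of `merge` can itself be an input to `merge`, which is exactly the "operated on further (and so on)" recursion the paragraph describes; the purely linear strings an LLM is trained on have no such explicit bracketing.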

(7) Yet it is hard, for several reasons, to fathom what LLMs “think”. Details of the programming and training data of commercial ones like ChatGPT are proprietary. And not even the programmers know exactly what is going on inside.

(8) Linguists have, however, found clever ways to test LLMs’ underlying knowledge, in effect tricking them with probing tests. And indeed, LLMs seem to learn nested, hierarchical grammatical structures, even though they are exposed to only linear input, ie, strings of text. They can handle novel words and grasp parts of speech. Tell ChatGPT that “dax” is a verb meaning to eat a slice of pizza by folding it, and the system deploys it easily: “After a long day at work, I like to relax and dax on a slice of pizza while watching my favourite TV show.” (The imitative element can be seen in “dax on”, which ChatGPT probably patterned on the likes of “chew on” or “munch on”.)

(9) What about the “poverty of the stimulus”? After all, GPT-3 (the LLM underlying ChatGPT until the recent release of GPT-4) is estimated to be trained on about 1,000 times the data a human ten-year-old is exposed to. That leaves open the possibility that children have an inborn tendency to grammar, making them far more proficient than any LLM. In a forthcoming paper in Linguistic Inquiry, researchers claim to have trained an LLM on no more text than a human child is exposed to, finding that it can use even rare bits of grammar. But other researchers have tried to train an LLM on a database of only child-directed language (that is, of transcripts of carers speaking to children). Here LLMs fare far worse. Perhaps the brain really is built for language, as Professor Chomsky says.

(10) It is difficult to judge. Both sides of the argument are marshalling LLMs to make their case. The eponymous founder of his school of linguistics has offered only a brusque riposte. For his theories to survive this challenge, his camp will have to put up a stronger defence.




Phrases

1. Original: When Deep Blue, a chess computer, defeated Garry Kasparov, a world champion, in 1997, many gasped in fear of machines triumphing over mankind.

       Dictionary: in fear of — afraid of; worried about

              triumph over — to defeat; to prevail over

Examples: We lived in constant fear of losing our jobs.

Working side by side, we have the ability to solve the most insurmountable problems and to triumph over the greatest of adversities.


2. Original: Although Professor Chomsky's ideas have changed considerably since he rose to prominence in the 1950s, several elements have remained fairly constant.

       Dictionary: rise to prominence — to become well known or important

Example: As she rises to prominence in the international world of chess, she struggles with alcoholism and addiction.


3. Original: And some are now seizing on the capacities of LLMs to attack Chomskyan theories anew.

       Dictionary: seize on — to latch onto; to exploit (something usable)

Example: Newspapers seized on the results as proof that global warming wasn't really happening.


4. Original: The imitative element can be seen in "dax on", which ChatGPT probably patterned on the likes of "chew on" or "munch on".

       Dictionary: pattern ... on ... — to model on; to imitate (passive form: be patterned on)

Example: The clothing is patterned on athletes' wear.


Difficult Sentences

1. Original: Now, though, the astonishment of the Deep Blue moment is back, because computers are employing something that humans consider their defining ability: language.

Analysis: This sentence contains an adverbial clause of reason and a relative clause. The main clause is "the astonishment of the Deep Blue moment is back". Here "though" is an adverb meaning "however"; "because" is a conjunction introducing the reason clause "computers are employing something"; and "that humans consider their defining ability: language" is a restrictive relative clause modifying "something".

Translation: Now, though, that moment of Deep Blue astonishment is back, because computers are employing what humans consider their defining ability: language.


2. Original: Certainly, large language models (LLMs), of which the most famous is ChatGPT, produce what looks like impeccable human writing.

Analysis: This sentence contains a non-restrictive relative clause and a nominal clause serving as the object. The main clause is "large language models produce ...". The non-restrictive relative clause "of which the most famous is ChatGPT" modifies LLMs ("which" refers back to LLMs), and "what looks like impeccable human writing" is the object clause.

Translation: Certainly, large language models (LLMs), the most famous of which is ChatGPT, can produce impeccable human-like writing.


Writing tip:

Here LLMs fare far worse.

Vocabulary: fare — n. a fare (for travel); v. to get on; to progress

The common meaning of "fare" is "(travel) fee", but in this article it appears in a less familiar sense, "to get on; to perform", in which it can replace "perform". "Fare" often pairs with well/badly ("to do well/badly"), and the comparative forms are "fare better/worse".

Example: It is hard to generalize about how many hours should be spent on everyday learning. What suffices for able students may be inadequate for those who fare worse.


Background:

1. Deep Blue: Deep Blue was a supercomputer built specifically for chess. An IBM research team began developing it in 1989 and repeatedly upgraded and improved it. Its main strength was the ability to evaluate more than 200 million moves per second and select the best one, while also drawing on large chess databases to absorb the experience and techniques of human masters. On May 11, 1997, in New York, the final round of a contest between human intelligence and machine computation took place: Deep Blue defeated the reigning world chess champion, Garry Kasparov, 3.5-2.5 in a six-game match, a landmark moment in the history of artificial intelligence. It showed what machines could do, and left us full of expectation and imagination about the future.

2. Large language models (LLMs): A large language model is an artificial-intelligence model designed to understand and generate human language. Trained on vast amounts of text data, LLMs can perform a wide range of tasks, including text summarization, translation and sentiment analysis. They are characterized by their enormous scale — billions of parameters — which helps them learn complex patterns in language data. LLMs are usually AI models derived from the Transformer architecture, designed to understand and generate human language, code and more. Training on massive text corpora lets them capture the complexity and nuance of human language, so they can carry out a broad range of language tasks, from simple text classification to text generation, with high accuracy, fluency and stylistic control. In healthcare, LLMs are used for electronic-medical-record processing, clinical-trial matching and drug discovery. In finance, they are used for fraud detection, sentiment analysis of financial news and even trading strategies. With their versatility and high performance, Transformer-based LLMs are becoming an increasingly valuable asset across industries and applications.
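The article's claim that LLMs "merely predict the next word" can be made concrete with a toy sketch. What follows is a bigram frequency model — vastly simpler than a Transformer, with an invented miniature corpus — but it shares the same training objective: given the words so far, predict the most likely next word.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the continuation seen most often in training, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Tiny invented corpus, purely for illustration.
corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)

print(predict_next(model, "on"))   # 'the' — "on" is always followed by "the" here
print(predict_next(model, "sat"))  # 'on'
```

A real LLM replaces the frequency table with a neural network conditioned on the whole preceding context, and predicts over subword tokens rather than words — but the objective, "pick the likely next token", is the same one the debate turns on.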


Paragraph summaries:

[1] Computers are employing humans' defining ability: language.

[2] Large language models can write as humans do, prompting much debate.

[3] Professor Chomsky holds that all human languages resemble one another.

[4] Chomskyan theory holds that some predisposition for language is built into the human brain.

[5] Some people are now using the capacities of large language models to attack Chomskyan theories.

[6] Large language models do not think or use language the way humans do.

[7] It is hard to fathom what large language models "think".

[8] Linguists use probing tests to examine the underlying knowledge of large language models.

[9] A paper in Linguistic Inquiry addresses the "poverty of the stimulus".

[10] To win the debate, the Chomskyan camp must put up a stronger defence.






# Education
# Postgraduate English
# ChatGPT