Stephen Wolfram: What Really Lets ChatGPT Work?

Human language—and the processes of thinking involved in generating it—have always seemed to represent a kind of pinnacle of complexity. And indeed it’s seemed somewhat remarkable that human brains—with their network of a “mere” 100 billion or so neurons (and maybe 100 trillion connections)—could be responsible for it. Perhaps, one might have imagined, there’s something more to brains than their networks of neurons—like some new layer of undiscovered physics. But now with ChatGPT we’ve got an important new piece of information: we know that a pure, artificial neural network with about as many connections as brains have neurons is capable of doing a surprisingly good job of generating human language.

And, yes, that’s still a big and complicated system—with about as many neural net weights as there are words of text currently available out there in the world. But at some level it still seems difficult to believe that all the richness of language and the things it can talk about can be encapsulated in such a finite system. Part of what’s going on is no doubt a reflection of the ubiquitous phenomenon (that first became evident in the example of rule 30) that computational processes can in effect greatly amplify the apparent complexity of systems even when their underlying rules are simple. But, actually, as we discussed above, neural nets of the kind used in ChatGPT tend to be specifically constructed to restrict the effect of this phenomenon—and the computational irreducibility associated with it—in the interest of making their training more accessible.
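
For readers who haven’t met rule 30: it is a one-dimensional cellular automaton whose update rule fits in a single line (each new cell is the left neighbor XOR the OR of the center and right cells), yet whose output looks essentially random. A minimal sketch, just to make the phenomenon concrete:

```python
# Rule 30: new cell = left XOR (center OR right). A one-line rule whose
# output, grown from a single "on" cell, looks essentially random.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

row = [0] * 41
row[20] = 1  # single "on" cell in the middle
for _ in range(20):
    print("".join("#" if c else " " for c in row))
    row = rule30_step(row)
```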

So how is it, then, that something like ChatGPT can get as far as it does with language? The basic answer, I think, is that language is at a fundamental level somehow simpler than it seems. And this means that ChatGPT—even with its ultimately straightforward neural net structure—is successfully able to “capture the essence” of human language and the thinking behind it. And moreover, in its training, ChatGPT has somehow “implicitly discovered” whatever regularities in language (and thinking) make this possible.

The success of ChatGPT is, I think, giving us evidence of a fundamental and important piece of science: it’s suggesting that we can expect there to be major new “laws of language”—and effectively “laws of thought”—out there to discover. In ChatGPT—built as it is as a neural net—those laws are at best implicit. But if we could somehow make the laws explicit, there’s the potential to do the kinds of things ChatGPT does in vastly more direct, efficient—and transparent—ways.

But, OK, so what might these laws be like? Ultimately they must give us some kind of prescription for how language—and the things we say with it—are put together. Later we’ll discuss how “looking inside ChatGPT” may be able to give us some hints about this, and how what we know from building computational language suggests a path forward. But first let’s discuss two long-known examples of what amount to “laws of language”—and how they relate to the operation of ChatGPT.

The first is the syntax of language. Language is not just a random jumble of words. Instead, there are (fairly) definite grammatical rules for how words of different kinds can be put together: in English, for example, nouns can be preceded by adjectives and followed by verbs, but typically two nouns can’t be right next to each other. Such grammatical structure can (at least approximately) be captured by a set of rules that define how what amount to “parse trees” can be put together:

[Figure: grammatical rules defining how parse trees are put together]
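
Rules of this kind are easy to state explicitly in code. The little grammar below is invented for illustration (it is not the rule set from the figure), but it shows the mechanism: nonterminals expand recursively until only words remain.

```python
import random

# A toy context-free grammar: nonterminals expand via productions;
# anything not in the table is a terminal word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "Adj": [["inquisitive"], ["blue"]],
    "N":   [["electron"], ["theory"], ["fish"]],
    "V":   [["eats"], ["sleeps"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:
        return [symbol]                      # terminal word
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the inquisitive electron eats a theory"
```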

ChatGPT doesn’t have any explicit “knowledge” of such rules. But somehow in its training it implicitly “discovers” them—and then seems to be good at following them. So how does this work? At a “big picture” level it’s not clear. But to get some insight it’s perhaps instructive to look at a much simpler example.

Consider a “language” formed from sequences of (’s and )’s, with a grammar that specifies that parentheses should always be balanced, as represented by a parse tree like:

[Figure: parse tree for a balanced parenthesis sequence]

Can we train a neural net to produce “grammatically correct” parenthesis sequences? There are various ways to handle sequences in neural nets, but let’s use transformer nets, as ChatGPT does. And given a simple transformer net, we can start feeding it grammatically correct parenthesis sequences as training examples. A subtlety (which actually also appears in ChatGPT’s generation of human language) is that in addition to our “content tokens” (here “(” and “)”) we have to include an “End” token, that’s generated to indicate that the output shouldn’t continue any further (i.e. for ChatGPT, that one’s reached the “end of the story”).
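
To make this concrete, here is one hypothetical way such training examples could be produced: random balanced sequences with the “End” token appended. (This is an illustrative sketch, not the essay’s actual data pipeline.)

```python
import random

# Generate a random balanced parenthesis sequence with n_pairs pairs,
# then append the "End" token that marks where output should stop.
def training_example(n_pairs):
    seq, opened, closed = [], 0, 0
    while closed < n_pairs:
        must_open = opened == closed          # nothing is open, so we can't close
        can_open = opened < n_pairs
        if must_open or (can_open and random.random() < 0.5):
            seq.append("(")
            opened += 1
        else:
            seq.append(")")
            closed += 1
    return seq + ["End"]

print(training_example(4))  # e.g. ['(', '(', ')', ')', '(', ')', '(', ')', 'End']
```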

If we set up a transformer net with just one attention block with 8 heads and feature vectors of length 128 (ChatGPT also uses feature vectors of length 128, but has 96 attention blocks, each with 96 heads) then it doesn’t seem possible to get it to learn much about parenthesis language. But with 2 attention blocks, the learning process seems to converge—at least after 10 million or so examples have been given (and, as is common with transformer nets, showing yet more examples just seems to degrade its performance).
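
The essay gives no code for this experiment, but a rough PyTorch stand-in for a model of the stated shape (attention blocks with 8 heads, feature vectors of length 128, a 3-token vocabulary) might look like the following. Everything beyond those numbers (feedforward width, maximum length, training details) is an assumption.

```python
import torch
import torch.nn as nn

VOCAB = {"(": 0, ")": 1, "End": 2}

# Rough stand-in for the toy model described above: 2 attention blocks,
# 8 heads each, feature vectors of length 128. Not the essay's exact setup.
class TinyParenNet(nn.Module):
    def __init__(self, n_blocks=2, d_model=128, n_heads=8, max_len=64):
        super().__init__()
        self.tok = nn.Embedding(len(VOCAB), d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_blocks)
        self.head = nn.Linear(d_model, len(VOCAB))

    def forward(self, ids):                    # ids: (batch, seq_len)
        seq_len = ids.size(1)
        x = self.tok(ids) + self.pos(torch.arange(seq_len, device=ids.device))
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len)
        return self.head(self.blocks(x, mask=causal))   # next-token logits
```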

So with this network, we can do the analog of what ChatGPT does, and ask for probabilities for what the next token should be—in a parenthesis sequence:

[Figure: next-token probabilities produced by the trained net for two parenthesis sequences]

And in the first case, the network is “pretty sure” that the sequence can’t end here—which is good, because if it did, the parentheses would be left unbalanced. In the second case, however, it “correctly recognizes” that the sequence can end here, though it also “points out” that it’s possible to “start again”, putting down a “(”, presumably with a “)” to follow. But, oops, even with its 400,000 or so laboriously trained weights, it says there’s a 15% probability to have “)” as the next token—which isn’t right, because that would necessarily lead to an unbalanced parenthesis.
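
In code terms, “asking for probabilities” is just a softmax over the logits at the final position. Continuing the hypothetical sketch above (an untrained net will of course return noise rather than the numbers in the figure):

```python
import torch.nn.functional as F

net = TinyParenNet()  # hypothetical model class from the sketch above
seq = torch.tensor([[VOCAB[t] for t in ["(", "(", ")", ")"]]])
with torch.no_grad():
    probs = F.softmax(net(seq)[0, -1], dim=-1)   # logits at the last position
for tok, idx in VOCAB.items():
    print(f"P({tok!r} next) = {float(probs[idx]):.3f}")
```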

Here’s what we get if we ask the network for the highest-probability completions for progressively longer sequences of (’s:

[Figure: highest-probability completions for progressively longer sequences of open parentheses]
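
Asking for highest-probability completions amounts to greedy decoding: repeatedly append the argmax token until “End” is produced. Again continuing the hypothetical sketch:

```python
# Greedy decoding: extend the prefix with the argmax token until "End".
def complete_greedy(prefix, net, max_steps=40):
    ids = [VOCAB[t] for t in prefix]
    for _ in range(max_steps):
        with torch.no_grad():
            logits = net(torch.tensor([ids]))[0, -1]
        nxt = int(logits.argmax())
        if nxt == VOCAB["End"]:
            break
        ids.append(nxt)
    inv = {v: k for k, v in VOCAB.items()}
    return "".join(inv[i] for i in ids)

print(complete_greedy(["("] * 5, TinyParenNet()))  # untrained: arbitrary output
```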

And, yes, up to a certain length the network does just fine. But then it starts failing. It’s a pretty typical kind of thing to see in a “precise” situation like this with a neural net (or with machine learning in general). Cases that a human “can solve in a glance” the neural net can solve too. But cases that require doing something “more algorithmic” (e.g. explicitly counting parentheses to see if they’re closed) the neural net tends to somehow be “too computationally shallow” to reliably do. (By the way, even the full current ChatGPT has a hard time correctly matching parentheses in long sequences.)
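
That “more algorithmic” check is, of course, trivial for ordinary code; a single running counter handles sequences of any length:

```python
# Explicit parenthesis counting: exactly the "algorithmic" check that a
# computationally shallow net has trouble learning for long sequences.
def is_balanced(tokens):
    depth = 0
    for tok in tokens:
        depth += 1 if tok == "(" else -1
        if depth < 0:          # a ")" with nothing open
            return False
    return depth == 0

print(is_balanced(list("(()())")))  # True
print(is_balanced(list("(()")))     # False
```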

So what does this mean for things like ChatGPT and the syntax of a language like English? The parenthesis language is “austere”—and much more of an “algorithmic story”. But in English it’s much more realistic to be able to “guess” what’s grammatically going to fit on the basis of local choices of words and other hints. And, yes, the neural net is much better at this—even though perhaps it might miss some “formally correct” case that, well, humans might miss as well. But the main point is that the fact that there’s an overall syntactic structure to the language—with all the regularity that implies—in a sense limits “how much” the neural net has to learn. And a key “natural-science-like” observation is that the transformer architecture of neural nets like the one in ChatGPT seems to successfully be able to learn the kind of nested-tree-like syntactic structure that seems to exist (at least in some approximation) in all human languages.

Syntax provides one kind of constraint on language. But there are clearly more. A sentence like “Inquisitive electrons eat blue theories for fish” is grammatically correct but isn’t something one would normally expect to say, and wouldn’t be considered a success if ChatGPT generated it—because, well, with the normal meanings for the words in it, it’s basically meaningless.

But is there a general way to tell if a sentence is meaningful? There’s no traditional overall theory for that. But it’s something that one can think of ChatGPT as having implicitly “developed a theory for” after being trained with billions of (presumably meaningful) sentences from the web, etc.

What might this theory be like? Well, there’s one tiny corner that’s basically been known for two millennia, and that’s logic. And certainly in the syllogistic form in which Aristotle discovered it, logic is basically a way of saying that sentences that follow certain patterns are reasonable, while others are not. Thus, for example, it’s reasonable to say “All X are Y. This is not Y, so it’s not an X” (as in “All fishes are blue. This is not blue, so it’s not a fish.”). And just as one can somewhat whimsically imagine that Aristotle discovered syllogistic logic by going (“machine-learning-style”) through lots of examples of rhetoric, so too one can imagine that in the training of ChatGPT it will have been able to “discover syllogistic logic” by looking at lots of text on the web, etc. (And, yes, while one can therefore expect ChatGPT to produce text that contains “correct inferences” based on things like syllogistic logic, it’s a quite different story when it comes to more sophisticated formal logic—and I think one can expect it to fail here for the same kind of reasons it fails in parenthesis matching.)
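
Made explicit, the syllogism pattern in question is just contraposition over “All X are Y” facts, which takes only a few lines of code; the representation below is invented purely for illustration:

```python
# "All X are Y" facts, stored as (X, Y) pairs -- e.g. "All fishes are blue".
ALL_FACTS = [("fish", "blue")]

def infer_not_x(not_y):
    """From 'All X are Y' and 'this is not Y', conclude 'this is not an X'."""
    return [f"this is not a {x}" for (x, y) in ALL_FACTS if y == not_y]

print(infer_not_x("blue"))  # ['this is not a fish']
```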

But beyond the narrow example of logic, what can be said about how to systematically construct (or recognize) even plausibly meaningful text? Yes, there are things like Mad Libs that use very specific “phrasal templates”. But somehow ChatGPT implicitly has a much more general way to do it. And perhaps there’s nothing to be said about how it can be done beyond “somehow it happens when you have 175 billion neural net weights”. But I strongly suspect that there’s a much simpler and stronger story.
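
A phrasal template in the Mad Libs sense is a fixed syntactic frame with slots: grammaticality is guaranteed by the frame, while meaning is left to chance. A toy example (the frame and word lists here are invented):

```python
import random

# A Mad Libs-style phrasal template: syntax comes from the frame, so the
# result may be as meaningless as the "inquisitive electrons" sentence above.
TEMPLATE = "{adj1} {noun1} {verb} {adj2} {noun2} for {noun3}."
SLOTS = {
    "adj1": ["Inquisitive", "Hungry"],  "adj2": ["blue", "tall"],
    "noun1": ["electrons", "poets"],    "noun2": ["theories", "sandwiches"],
    "noun3": ["fish", "breakfast"],     "verb": ["eat", "compose"],
}
print(TEMPLATE.format(**{k: random.choice(v) for k, v in SLOTS.items()}))
```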
