Ways to think about AGI

Benedict Evans, 4 May 2024


How do we think about a fundamentally unknown and unknowable risk, when the experts agree only that they have no idea?

The manuscript for ‘A Logic Named Joe’

In 1946, my grandfather, writing as ‘Murray Leinster’, published a science fiction story called ‘A Logic Named Joe’. Everyone has a computer (a ‘logic’) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, ‘Joe’, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues - ‘Check your censorship circuits!’ - until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we’ve thought about computers, we’ve wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of ‘artificial intelligence’, and wondered what that would mean, and indeed, what we’re trying to say with the word ‘intelligence’. There’s an old joke that ‘AI’ is whatever doesn’t work yet, because once it works, people say ‘that’s not AI - it’s just software’. Calculators do super-human maths, and databases have super-human memory, but they can’t do anything else, and they don’t understand what they’re doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are ‘super-human’ but they’re just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees and octopuses and many other creatures. AI researchers have come to talk about this as ‘general intelligence’ and hence making it would be ‘artificial general intelligence’ - AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.

Every few decades since 1946, there’s been a wave of excitement that something like this might be close, each time followed by disappointment and an ‘AI Winter’, as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that in “from three to eight years we will have a machine with the general intelligence of an average human being”, but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn’t work).

As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called ‘doomers’ argue that there is a real risk of AGI emerging spontaneously from current research and that this could be a threat to humanity, and they call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition (‘This is very dangerous and we are building it as fast as possible, but don’t let anyone else do it’), but plenty of it is sincere.

(I should point out, incidentally, that the doomers’ ‘existential risk’ concern that an AGI might want to and be able to destroy or control humanity, or treat us as pets, is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or about AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)

However, for every expert that thinks that AGI might now be close, there’s another who doesn’t. There are some who think LLMs might scale all the way to AGI, and others who think, again, that we still need an unknown number of unknown further breakthroughs.

More importantly, they would all agree that they don’t actually know. This is why I used terms like ‘might’ or ‘may’ - our first stop is an appeal to authority (often considered a logical fallacy, for what that’s worth), but the authorities tell us that they don’t know, and don’t agree.

They don’t know, either way, because we don’t have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don’t know why LLMs seem to work so well, and we don’t know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don’t know why they work. We have many theories for parts of these, but we don’t know the system. Absent an appeal to religion, we don’t know of any reason why AGI cannot be created (it doesn’t appear to violate any law of physics), but we don’t know how to create it or what it is, except as a concept.

And so, some experts look at the dramatic progress of LLMs and say ‘perhaps!’ and others say ‘perhaps, but probably not!’, and this is fundamentally an intuitive and instinctive assessment, not a scientific one.

Indeed, ‘AGI’ itself is a thought experiment, or, one could suggest, a place-holder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.

If we start by defining AGI as something that is in effect a new life form, equal to people in ‘every’ way (barring some sense of physical form), even down to concepts like ‘awareness’, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you’ve just begged the question.

As Anselm demonstrated, if you define God as something that exists, then you’ve proved that God exists, but you won’t persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm’s proof was invalid) but you cannot create knowledge like that.

Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn’t of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god, or, perhaps, than bundling enough sub-prime mortgages together could produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say ‘people were wrong about X in the past so they must be wrong about Y now’, and the fact that leading AI scientists were wrong before absolutely does not tell us they’re wrong now, but it does tell us to hesitate. They can all be wrong at the same time.

Meanwhile, how do you know that’s what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there’s no a priori reason why it must be interesting. ‘God’ might be real, and boring, and not care about us, and we don’t know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence ‘just’ about speed?). We might produce general intelligence that’s hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don’t know. 

Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about ‘general intelligence’ as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the ‘general intelligence’ of Llama 6 or ChatGPT 7 and say “That’s not AGI, it’s just software!” We created the term AGI because ‘AI’ came to mean just software, and perhaps ‘AGI’ will be the same, and we’ll need to invent another term.

This fundamental uncertainty, even at the level of what we’re talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission then you know what to expect, and you know what to do. But this isn’t fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don’t know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it’s been a very good thing that we should want much more of.

Hence, I’ve already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn’t explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel… will it get there? We have no equivalents here. We don’t know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!
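(For what it’s worth, the sum Newton could have done looks, in sketch, something like the classical rocket equation: Δv = vₑ × ln(m_full / m_empty). With illustrative textbook numbers rather than the real Saturn V figures, an exhaust velocity of about 2.5 km/s and a mass ratio of about 20 gives Δv ≈ 2.5 × ln 20 ≈ 7.5 km/s, roughly the speed of a low Earth orbit. The point is not the numbers but that the calculation exists at all.)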

On this theme, some people suggest that we are in the empirical stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there’s an old English joke about a Frenchman who says ‘that’s all very well in practice, but does it work in theory’). Yet while we can, empirically, see the rocket going up, we don’t know how far away the moon is. We can’t plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth. 

All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo program went wrong the downside was not, even theoretically, the end of humanity. A little before my grandfather, here’s another magazine writer on unknown risks:


What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal’s Wager! Anselm’s Proof!), but if you can’t know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they’re real, we know they could destroy mankind, and they have no benefits at all (unless they’re very, very small). And yet, we’re not really looking for them.

Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can’t meet demand), but on a decade’s view the models will get more efficient and the chips will be everywhere. In the end, you can’t ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine if we discovered a way that anyone could build a bomb in their garage with household materials - good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI and it turns them into paperclips.)

By default, though, this will follow all the other waves of AI, and become ‘just’ more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK’s Post Office scandal reminds us that you don’t need AGI for software to ruin people’s lives. LLMs will produce more pain and more scandals, but life will go on. At least, that’s the answer I prefer myself.
