Large Language Models & Vector Databases

  • LARGE LANGUAGE MODELS
    • A. Vector Database & LLM Workflow
    • B. Vector Database for LLM
    • C. Potential Applications for Vector Database on LLM
    • D. Potential Applications for LLM on Vector Database
    • E. Retrieval-Based LLM
    • F. Synergized Example

Source: A Comprehensive Survey on Vector Database: Storage and Retrieval Technique, Challenge
Link: https://arxiv.org/pdf/2310.11703.pdf

LARGE LANGUAGE MODELS

Typically, large language models (LLMs) refer to Transformer language models that contain hundreds of billions (or more) of parameters, which are trained on massive text data. On a suite of traditional NLP benchmarks, GPT-4 outperforms both previous large language models and most state-of-the-art systems.

A. Vector Database & LLM Workflow

Databases and large language models sit at opposite ends of data science research: databases are more concerned with storing data efficiently and retrieving it quickly and accurately, while large language models are more concerned with characterizing data and solving semantically related problems. If the database is a vector database, a more ideal workflow can be constructed as follows:

Fig. 1. An ideal workflow for combining vector databases and large language models.

First, the large language model is pre-trained on textual data, which stores knowledge that can be used for natural language processing tasks. The multimodal data is embedded and stored in a vector database to obtain vector representations. Next, when the user inputs a serialized textual question, the LLM is responsible for providing NLP capabilities, while the algorithms in the vector database are responsible for finding approximate nearest neighbors.
Combining the two yields more desirable results than using either the LLM or the vector database alone: if only the LLM is used, the results may not be accurate enough, while if only the vector database is used, the results may not be user-friendly.
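As a concrete illustration of this workflow, the following is a minimal Python sketch. It assumes a hypothetical embed() function that maps text (or other modalities) to fixed-length vectors and a hypothetical llm_generate() function wrapping any LLM; the retrieval step is a brute-force cosine-similarity search standing in for the approximate nearest neighbor algorithms a real vector database would use.

import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function: maps text to a unit-length vector.
    In practice this would call an embedding model (text, image, audio, ...)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def llm_generate(prompt: str) -> str:
    """Hypothetical LLM call; a real system would invoke a hosted or local model."""
    return f"[LLM answer conditioned on]\n{prompt}"

# 1. Embed the corpus and store the vectors (the vector database's job).
documents = [
    "Vector databases index embeddings for approximate nearest neighbor search.",
    "GPT-4 outperforms previous large language models on many NLP benchmarks.",
    "Inverted indexes, trees, graphs, and hashing are common ANN index structures.",
]
doc_vectors = np.stack([embed(d) for d in documents])

# 2. Embed the user's question and retrieve the closest documents.
question = "How do vector databases find similar items?"
q = embed(question)
scores = doc_vectors @ q                 # cosine similarity (vectors are unit length)
top_k = np.argsort(-scores)[:2]          # indices of the 2 nearest neighbors

# 3. Let the LLM answer using the retrieved context.
context = "\n".join(documents[i] for i in top_k)
print(llm_generate(f"Context:\n{context}\n\nQuestion: {question}"))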

B. Vector Database for LLM

1. Data: By learning from massive amounts of pre-training textual data, LLMs can acquire various emergent capabilities, which are not present in smaller models but appear in larger models, e.g., in-context learning, chain-of-thought, and instruction following. Data plays a crucial role in LLMs' emergent abilities, which in turn unfolds along three dimensions:

Data scale. This is the amount of data used to train the LLMs. LLMs can improve their capabilities predictably with increasing data scale, even without targeted innovation. The larger the data scale, the more diverse and representative the data is, which helps the LLMs learn more patterns and relationships in natural language. However, data scale also comes with challenges, such as computational cost, environmental impact, and ethical issues.

Data quality. This is the accuracy, completeness, consistency, and relevance of the data used to train the LLMs. The higher the data quality, the more reliable and robust the LLMs are, which helps them avoid errors and biases. LLMs can benefit from data quality improvement techniques, such as data filtering, cleaning, augmentation, and balancing. However, data quality also requires careful evaluation and validation, which can be difficult and subjective.

Data diversity. This is the variety and richness of the data used to train the LLMs. The more diverse the data, the more inclusive and generalizable the LLMs are, which helps them handle different languages, domains, tasks, and users. LLMs can achieve better performance and robustness by using diverse data sources, such as web text, books, news articles, social media posts, and more. However, data diversity also poses challenges, such as data alignment, integration, and protection.

As for the vector database, traditional database techniques such as cleaning, de-duplication, and alignment can help the LLM obtain high-quality, large-scale data, and storage in vector form is also well suited to diverse data.
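A minimal sketch of how such database-style cleaning might look, assuming the documents have already been embedded: exact duplicates are dropped by hashing normalized text, and near-duplicates are dropped when their embeddings' cosine similarity exceeds a threshold. The embed-alignment assumption and the 0.95 threshold are illustrative choices, not part of the survey.

import hashlib
import numpy as np

def deduplicate(texts, vectors, near_dup_threshold=0.95):
    """Remove exact and near-duplicate training texts.

    texts:   list of strings
    vectors: (n, d) array of unit-length embeddings aligned with texts
    """
    kept_texts, kept_vectors, seen_hashes = [], [], set()
    for text, vec in zip(texts, vectors):
        # Exact de-duplication: hash the whitespace/case-normalized text.
        h = hashlib.sha1(" ".join(text.lower().split()).encode()).hexdigest()
        if h in seen_hashes:
            continue
        # Near-duplicate check against everything kept so far (cosine similarity).
        if kept_vectors and max(float(np.dot(v, vec)) for v in kept_vectors) > near_dup_threshold:
            continue
        seen_hashes.add(h)
        kept_texts.append(text)
        kept_vectors.append(vec)
    if not kept_vectors:
        return kept_texts, np.empty((0, vectors.shape[1]))
    return kept_texts, np.stack(kept_vectors)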

2. Model: In addition to the data, LLMs have benefited from growth in model size. The large number of parameters creates challenges for model training and storage. Vector databases can help LLMs reduce costs and increase efficiency in this regard.

Distributed training. A DBMS can help with model storage in the context of model segmentation and integration. Vector databases can enable distributed training of LLMs by allowing multiple workers to access and update the same vector data in parallel. This can speed up the training process and reduce the communication overhead among workers.

Model compression. The purpose of this is to reduce the complexity of the model and the number of parameters, reduce model storage and computational resources, and improve the efficiency of model computation. The methods used are typically pruning, quantization, grouped convolution, knowledge distillation, neural network compression, low-rank decomposition, and so on. Vector databases can help compress LLMs by storing only the most important or representative vectors of the model, instead of the entire model parameters. This can reduce the storage space and memory usage of the LLM, as well as the inference latency.
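The idea of keeping only representative vectors can be sketched with a simple k-means codebook: a large set of vectors is replaced by a small set of centroids plus one small integer code per original vector. This is an illustrative stand-in for the compression techniques listed above, not the method of any particular system; it assumes scikit-learn is available.

import numpy as np
from sklearn.cluster import KMeans

def build_codebook(vectors: np.ndarray, n_centroids: int = 256):
    """Replace each vector with the index of its nearest centroid.

    Storage drops from n*d floats to n_centroids*d floats plus n small integers."""
    kmeans = KMeans(n_clusters=n_centroids, n_init=10, random_state=0).fit(vectors)
    codes = kmeans.predict(vectors).astype(np.uint8 if n_centroids <= 256 else np.uint16)
    return kmeans.cluster_centers_, codes

def decode(centroids: np.ndarray, codes: np.ndarray) -> np.ndarray:
    """Approximate reconstruction of the original vectors."""
    return centroids[codes]

# Example: compress 10,000 random 128-dimensional vectors into 256 centroids.
vectors = np.random.default_rng(0).standard_normal((10_000, 128)).astype(np.float32)
centroids, codes = build_codebook(vectors, n_centroids=256)
error = np.linalg.norm(vectors - decode(centroids, codes), axis=1).mean()
print(f"mean reconstruction error: {error:.3f}")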

Vector storage. Vector databases can optimize the storage of vector data by using specialized data structures, such as inverted indexes, trees, graphs, or hashing. This can improve the performance and scalability of LLM applications that rely on vector operations, such as semantic search, recommendation, or question answering.
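As one concrete example of such a specialized structure, the sketch below builds a random-hyperplane locality-sensitive hash (LSH) table: vectors whose projection signs agree land in the same bucket, so a query is compared only against its own bucket rather than the whole collection. This is a generic illustration, not the index of any particular vector database.

import numpy as np
from collections import defaultdict

class RandomHyperplaneLSH:
    """A minimal LSH index for cosine similarity."""

    def __init__(self, dim: int, n_bits: int = 16, seed: int = 0):
        self.planes = np.random.default_rng(seed).standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)   # hash key -> list of (id, vector)

    def _key(self, v: np.ndarray) -> int:
        bits = (self.planes @ v) > 0                       # sign of each projection
        return int("".join("1" if b else "0" for b in bits), 2)

    def add(self, item_id, v: np.ndarray):
        self.buckets[self._key(v)].append((item_id, v))

    def query(self, q: np.ndarray, top_k: int = 5):
        candidates = self.buckets.get(self._key(q), [])    # only scan one bucket
        scored = [(item_id, float(v @ q) / (np.linalg.norm(v) * np.linalg.norm(q)))
                  for item_id, v in candidates]
        return sorted(scored, key=lambda x: -x[1])[:top_k]

# Usage: index 1,000 vectors and query one of them.
rng = np.random.default_rng(1)
index = RandomHyperplaneLSH(dim=64)
data = rng.standard_normal((1_000, 64))
for i, v in enumerate(data):
    index.add(i, v)
print(index.query(data[42]))   # the exact item appears among the candidates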

3. Retrieval: Users can use a large language model to generate text based on a query or a prompt; however, the output may not be diverse, consistent, or factual. Vector databases can ameliorate these problems on a case-by-case basis, improving the user experience.

Cross-modal support. Vector databases can support cross-modal search, which is the ability to search across different types of data, such as text, images, audio, or video. For example, an LLM can use a vector database to find images that are relevant to a text query, or vice versa. This can enhance user experience and satisfaction by providing more diverse and rich results.

Real-time knowledge. Vector databases can enable real-time knowledge search, which is the ability to search for the most up-to-date and accurate information from various sources. For example, an LLM can use a vector database to find the latest news, facts, or opinions about a topic or event. This can improve the user's awareness and understanding by providing more timely and reliable results.

Less hallucination. Vector databases can help reduce hallucination, which is the tendency of LLMs to generate false or misleading statements. For example, an LLM can use a vector database to verify or correct the data that it generates or uses for search. This can increase the user's trust and confidence by providing more accurate and consistent results.
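One simple way to operationalize this verification step is to accept a generated statement only when it is sufficiently close to some stored, trusted passage. The sketch below is an illustrative heuristic, assuming unit-length embeddings like the hypothetical embed() helper above and an arbitrary 0.8 similarity threshold; production systems would use more sophisticated fact-checking.

import numpy as np

def is_supported(statement_vec: np.ndarray,
                 trusted_vecs: np.ndarray,
                 trusted_texts: list,
                 threshold: float = 0.8):
    """Return the best-matching trusted passage if the statement is close enough.

    statement_vec: unit-length embedding of the LLM's generated statement
    trusted_vecs:  (n, d) unit-length embeddings of verified passages
    """
    sims = trusted_vecs @ statement_vec
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return True, trusted_texts[best], float(sims[best])   # cite this source
    return False, None, float(sims[best])                     # flag as unverified

supported, source, score = is_supported(statement_vec=np.ones(4) / 2,
                                         trusted_vecs=np.eye(4),
                                         trusted_texts=["a", "b", "c", "d"])
print(supported, source, round(score, 2))   # False None 0.5 -- below the threshold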

C. Potential Applications for Vector Database on LLM

Vector databases and LLMs can work together to enhance each other’s capabilities and create more intelligent and interactive systems. Here are some potential applications for vector databases on LLMs:

1. Long-term memory: Vector databases can provide LLMs with long-term memory by storing relevant documents or information in vector form. When a user gives a prompt to an LLM, the vector database can quickly retrieve the most similar or related vectors from its index and update the context for the LLM. This way, the LLM can generate more customized and informed responses based on the user's query and the vector database's content (a minimal sketch follows at the end of this list).

2. Semantic search: Vector databases can enable semantic search for LLMs by allowing users to search for texts based on their meaning rather than keywords. For example, a user can ask an LLM a natural language question and the vector database can return the most relevant documents or passages that answer the question. The LLM can then summarize or paraphrase the answer for the user in natural language.

3. Recommendation systems: Vector databases can power recommendation systems for LLMs by finding similar or complementary items based on their vector representations. For example, a user can ask an LLM for a movie recommendation and the vector database can suggest movies that have similar plots, genres, actors, or ratings to the user's preferences. The LLM can then explain why the movies are recommended and provide additional information or reviews.
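The sketch below illustrates the long-term memory and semantic search applications above (items 1 and 2): past conversation turns are embedded and stored, and each new prompt retrieves the most related memories to prepend as context. The embed_fn and llm_generate helpers are the same hypothetical stand-ins used earlier.

import numpy as np

class VectorMemory:
    """A tiny long-term memory backed by brute-force cosine search."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn
        self.texts, self.vectors = [], []

    def remember(self, text: str):
        self.texts.append(text)
        self.vectors.append(self.embed_fn(text))

    def recall(self, query: str, top_k: int = 3):
        if not self.texts:
            return []
        sims = np.stack(self.vectors) @ self.embed_fn(query)
        return [self.texts[i] for i in np.argsort(-sims)[:top_k]]

def answer(memory: VectorMemory, llm_generate, user_prompt: str) -> str:
    context = "\n".join(memory.recall(user_prompt))
    reply = llm_generate(f"Known context:\n{context}\n\nUser: {user_prompt}")
    memory.remember(f"User: {user_prompt}")   # store the new turn for future recall
    memory.remember(f"Assistant: {reply}")
    return reply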

D. Potential Applications for LLM on Vector Database

Applications of LLMs on vector databases are also very interesting and promising. Here are some potential applications for LLMs on vector databases:

1. Text generation: LLMs can generate natural language texts based on vector inputs from vector databases. For example, a user can provide a vector that represents a topic, a sentiment, a style, or a genre, and the LLM can generate a text that matches the vector. This can be useful for creating content such as articles, stories, poems, reviews, captions, summaries, etc.

2. Text augmentation: LLMs can augment existing texts with additional information or details from vector databases. For example, a user can provide a text that is incomplete, vague, or boring, and the LLM can enrich it with relevant facts, examples, or expressions from vector databases. This can be useful for improving the quality and diversity of texts such as essays, reports, emails, blogs, etc.

3. Text transformation: LLMs can transform texts from one form to another using vector databases (VDBs). For example, a user can provide a text that is written in one language, domain, or format, and the LLM can convert it to another language, domain, or format using VDBs. This can be useful for tasks such as translation, paraphrasing, simplification, summarization, etc.

E. Retrieval-Based LLM

1. Definition: A retrieval-based LLM is a language model that retrieves from an external datastore (at least during inference time).

2. Strengths: A retrieval-based LLM is a high-level synergy of LLMs and databases, which has several advantages over an LLM alone.

Memorize long-tail knowledge. A retrieval-based LLM can access external knowledge sources that contain more specific and diverse information than the pre-trained LLM parameters. This allows a retrieval-based LLM to answer in-domain queries that cannot be answered by the LLM alone.

Easily updated. A retrieval-based LLM can dynamically retrieve the most relevant and up-to-date documents from the data sources according to the user input. This avoids the need to fine-tune the LLM on a fixed dataset, which can be costly and time-consuming.

Better for interpreting and verifying. A retrieval-based LLM can generate texts that cite their sources of information, which allows the user to validate the information and potentially change or update the underlying information based on requirements. A retrieval-based LLM can also use fact-checking modules to reduce the risk of hallucinations and errors.

Improved privacy guarantees. A retrieval-based LLM can protect the user's privacy by using encryption and anonymization techniques to query the data sources. This prevents the data sources from collecting or leaking the user's personal information or preferences. A retrieval-based LLM can also use differential privacy methods to add noise to the retrieved documents or the generated texts, which can further enhance privacy protection.

Reduce time and money cost. A retrieval-based LLM can save time and money for the user by reducing the computational and storage resources required for running the LLM. This is because a retrieval-based LLM can leverage the existing data sources as external memory, rather than storing all the information in the LLM parameters. A retrieval-based LLM can also use caching and indexing techniques to speed up the document retrieval and passage extraction processes.

3. Inference: Multiple parts of the dataflow are involved at inference time.

Fig. 2. A retrieval-based LLM inference dataflow.

Datastore. The datastore can be very diverse: it can have only one modality, such as a raw text corpus, or it can be a vector database that integrates data of different modalities, and its treatment of the data determines the specific algorithms used for subsequent retrieval. In the case of a raw text corpus, which generally contains at least a billion to a trillion tokens, the dataset itself is unlabeled and unstructured and can be used as an original knowledge base.

Index. When the user enters a query, it is taken as the input for retrieval, and a specific algorithm is then used to find a small subset of the datastore that is closest to the query; in the case of vector databases, the specific algorithms are the NNS and ANNS algorithms mentioned earlier.
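Putting the datastore and the index together, a retrieval-based LLM's inference step can be sketched as: chunk the corpus into passages, embed and index them once, then at query time retrieve the top-k passages and hand them to the generator. As before, embed_fn and llm_generate are hypothetical placeholders, and the brute-force search stands in for the NNS/ANNS algorithms of a real vector database.

import numpy as np

def chunk(corpus: str, size: int = 200):
    """Split a raw text corpus into fixed-size word chunks (the datastore)."""
    words = corpus.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_index(passages, embed_fn):
    """Embed every passage once; this matrix plays the role of the index."""
    return np.stack([embed_fn(p) for p in passages])

def retrieve(query, passages, index, embed_fn, top_k: int = 4):
    sims = index @ embed_fn(query)
    return [passages[i] for i in np.argsort(-sims)[:top_k]]

def retrieval_based_answer(query, passages, index, embed_fn, llm_generate):
    context = "\n---\n".join(retrieve(query, passages, index, embed_fn))
    return llm_generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}")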

F. Synergized Example

Fig. 3. A complex application of vector database + LLM for scientific research.

A workflow that incorporates a large language model and a vector database can be understood by splitting it into four levels: the user level, the model level, the AI database level, and the data level.

A user who has never been exposed to large language models can enter natural language to describe their problem. A user who is proficient with large language models can enter a well-designed prompt.

The LLM next processes the problem to extract the keywords in it, or, in the case of open-source LLMs, the corresponding vector embeddings can be obtained directly.

The vector database stores unstructured data and their joint embeddings. The next step is to go to the vector database to find similar nearest neighbors: the embeddings obtained from the sequences in the large language model are compared with the vector encodings in the vector database by means of the NNS or ANNS algorithms, and different results are derived through a predefined serialization chain, which plays the role of a search engine.

If it is not a general question, the derived results need to be further fed into a domain model; for example, if we are building an intelligent scientific assistant, they can be fed into an AI4S model to get professional results. Eventually the output can be placed back into the LLM to obtain coherent generated results.
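The four-level flow described here can be sketched as a small pipeline: the user's natural-language question goes to the LLM for keyword/embedding extraction, the vector database returns nearest neighbors, an optional domain model (e.g., an AI4S model) refines them, and the LLM produces the final coherent answer. Every function below (extract_embedding, vector_search, domain_model, llm_generate) is a hypothetical placeholder for the corresponding component.

def scientific_assistant(question: str,
                         extract_embedding,    # model level: text -> embedding or keywords
                         vector_search,        # AI database level: embedding -> nearest neighbors
                         domain_model,         # optional professional model, e.g. AI4S
                         llm_generate) -> str: # model level: draft -> coherent answer
    query_vec = extract_embedding(question)
    neighbors = vector_search(query_vec, top_k=5)   # NNS / ANNS over stored embeddings
    if domain_model is not None:                    # only for non-general questions
        neighbors = domain_model(question, neighbors)
    evidence = "\n".join(str(n) for n in neighbors)
    return llm_generate(f"Question: {question}\nEvidence:\n{evidence}")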

For the data layer at the bottom, one can choose from a variety of file formats such as PDF, CSV, MD, DOC, PNG, SQL, etc., and its sources can be journals, conferences, textbooks, and so on. The corresponding disciplines can be art, science, engineering, business, medicine, law, etc.
