Natural Language Processing from Beginner to Application: LangChain Memory [Memory Types III]



Conversation Token Buffer Memory: ConversationTokenBufferMemory

ConversationTokenBufferMemory keeps a buffer of recent conversation interactions in memory, and uses token length, rather than the number of interactions, to determine when to flush old interactions.

from langchain.memory import ConversationTokenBufferMemory
from langchain.llms import OpenAI
llm = OpenAI()
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})

Output:

{'history': 'Human: not much you\nAI: not much'}

Because max_token_limit=10 is so small, only the most recent exchange fits in the buffer; the earlier one has been pruned. We can also get the history as a list of messages, which is useful if we are using a chat model:

memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10, return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
Using It in a Chain

Let's walk through an example of using it in a chain, again setting verbose=True so that we can see the prompt.

from langchain.chains import ConversationChain
conversation_with_summary = ConversationChain(
    llm=llm,
    # We set a very low max_token_limit for the purposes of testing.
    memory=ConversationTokenBufferMemory(llm=OpenAI(), max_token_limit=60),
    verbose=True,
)
conversation_with_summary.predict(input="Hi, what's up?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI:

> Finished chain.

Output:

" Hi there! I'm doing great, just enjoying the day. How about you?"

Input:

conversation_with_summary.predict(input="Just working on writing some documentation!")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI:  Hi there! I'm doing great, just enjoying the day. How about you?
Human: Just working on writing some documentation!
AI:

> Finished chain.

Output:

    ' Sounds like a productive day! What kind of documentation are you writing?'

Input:

conversation_with_summary.predict(input="For LangChain! Have you heard of it?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI:  Hi there! I'm doing great, just enjoying the day. How about you?
Human: Just working on writing some documentation!
AI:  Sounds like a productive day! What kind of documentation are you writing?
Human: For LangChain! Have you heard of it?
AI:

> Finished chain.

Output:

    " Yes, I have heard of LangChain! It is a decentralized language-learning platform that connects native speakers and learners in real time. Is that the documentation you're writing about?"

Input:

# We can see here that the buffer has been updated
conversation_with_summary.predict(input="Haha nope, although a lot of people confuse it for that")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: For LangChain! Have you heard of it?
AI:  Yes, I have heard of LangChain! It is a decentralized language-learning platform that connects native speakers and learners in real time. Is that the documentation you're writing about?
Human: Haha nope, although a lot of people confuse it for that
AI:

> Finished chain.

Output:

" Oh, I see. Is there another language learning platform you're referring to?"

Vector Store-Backed Memory: VectorStoreRetrieverMemory

VectorStoreRetrieverMemory stores memories in a vector database and queries the top K most salient documents every time it is called. Unlike most other Memory classes, it does not explicitly track the order of interactions. In this case, the "documents" are snippets of previous conversation. This is useful for referring back to relevant pieces of information that the AI was told earlier in the conversation.

from datetime import datetime
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.memory import VectorStoreRetrieverMemory
from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate
Initializing the VectorStore

Depending on the store we choose, this step may look different; consult the relevant VectorStore documentation for more details.

import faiss

from langchain.docstore import InMemoryDocstore
from langchain.vectorstores import FAISS

embedding_size = 1536  # Dimensions of the OpenAIEmbeddings
index = faiss.IndexFlatL2(embedding_size)
embedding_fn = OpenAIEmbeddings().embed_query
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})
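Any other LangChain vector store can back the memory in the same way. As an illustrative sketch (not part of the original example, and assuming the chromadb package is installed), an in-memory Chroma store could be created like this:

# A minimal alternative sketch using Chroma instead of FAISS (the variable name is illustrative).
from langchain.vectorstores import Chroma
chroma_store = Chroma(embedding_function=OpenAIEmbeddings())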
Creating the VectorStoreRetrieverMemory

The memory object is instantiated from a VectorStoreRetriever.

# In actual usage, you would set `k` to be a higher value, but we use k=1 to show that
# the vector lookup still returns the semantically relevant information
retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)

# When added to an agent, the memory object can save pertinent information from conversations or used tools
memory.save_context({"input": "My favorite food is pizza"}, {"output": "thats good to know"})
memory.save_context({"input": "My favorite sport is soccer"}, {"output": "..."})
memory.save_context({"input": "I don't like the Celtics"}, {"output": "ok"})

# Notice that the first result returned is the memory about the favorite sport, which the
# language model deems most semantically relevant to the question about what to watch.
print(memory.load_memory_variables({"prompt": "what sport should i watch?"})["history"])

Output:

input: My favorite sport is soccer
output: ...
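For comparison (this query is not in the original example), a food-related prompt should surface the pizza memory instead:

# A minimal sketch: a different query retrieves a different stored snippet.
print(memory.load_memory_variables({"prompt": "what should i eat for dinner?"})["history"])
# Expected output (roughly): input: My favorite food is pizza / output: thats good to know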
Using It in a Conversation Chain

Let's walk through an example, again setting verbose=True so that we can see the prompt.

llm = OpenAI(temperature=0)  # Can be any valid LLM
_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
{history}

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: {input}
AI:"""
PROMPT = PromptTemplate(input_variables=["history", "input"], template=_DEFAULT_TEMPLATE)
conversation_with_summary = ConversationChain(
    llm=llm,
    prompt=PROMPT,
    memory=memory,
    verbose=True,
)
conversation_with_summary.predict(input="Hi, my name is Perry, what's up?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: My favorite food is pizza
output: thats good to know

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: Hi, my name is Perry, what's up?
AI:

> Finished chain.

Output:

" Hi Perry, I'm doing well. How about you?"

Input:

# Here, the sports-related memory is surfaced
conversation_with_summary.predict(input="what's my favorite sport?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: My favorite sport is soccer
output: ...

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: what's my favorite sport?
AI:

> Finished chain.

Output:

  ' You told me earlier that your favorite sport is soccer.'

Input:

# Even though the language model is stateless, since relevant memory is fetched, it can "reason" about the time.
# Timestamping memories and data is useful in general to let the agent determine temporal relevance
conversation_with_summary.predict(input="Whats my favorite food")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: My favorite food is pizza
output: thats good to know

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: Whats my favorite food
AI:

> Finished chain.

Output:

  ' You said your favorite food is pizza.'

Input:

# The memories from the conversation are automatically stored,
# since this query best matches the introduction chat above,
# the agent is able to 'remember' the user's name.
conversation_with_summary.predict(input="What's my name?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: Hi, my name is Perry, what's up?
response:  Hi Perry, I'm doing well. How about you?

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: What's my name?
AI:

> Finished chain.

Output:

' Your name is Perry.'
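To see why this works, we can also query the underlying retriever directly (a small sketch that is not part of the original walkthrough); it returns the stored snippet that gets injected into the prompt:

# A minimal sketch: fetch the most relevant stored snippet for a query.
docs = retriever.get_relevant_documents("What's my name?")
print(docs[0].page_content)  # e.g. the "Hi, my name is Perry..." exchange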

