Building a Personal Knowledge Base with FastGPT plus ChatGLM + m3e or Ollama + m3e

Overview

Large language models (LLMs) are one of the most important recent advances in artificial intelligence, marking a major breakthrough in natural language processing. Built on deep learning and trained on massive datasets, these models can understand and generate human language, providing powerful text-processing capabilities for a wide range of applications. Technically, they rest on deep learning and NLP: the model learns language representations through self-supervised pretraining on large text corpora, and after training it can be adapted to specific tasks and scenarios through fine-tuning and similar methods.
Going forward, LLMs are expected to play a role in ever more areas, including natural language understanding, text generation, dialogue systems, and machine translation. They can power automatic summarization, document generation, intelligent customer service, intelligent question answering, and many other applications, giving users smarter and more personalized services.
This document is a set of study notes on large language models and on deploying FastGPT. It covers building a private knowledge base with FastGPT in two ways: by deploying the **ChatGLM3** large language model directly, or via the **Ollama** model-management tool. **one-api** and **FastGPT** must be deployed in either case; for the rest, Ollama is the recommended route, since it makes switching models quick and management easy.

Hardware requirements

The following configurations are for reference only:

  • **chatglm3-6b + m3e:** RTX 3060 12 GB or above
  • **qwen:4b + m3e:** RTX 3060 12 GB or above
  • **qwen:1.8b + m3e:** GTX 1660 6 GB or above

In short: the larger the model, the more capable the GPU has to be. Very small models will even run on a low-end CPU, but their inference accuracy is poor, and they are ill suited to running alongside the m3e embedding model, where inference becomes extremely slow.

Resources referenced in this document

conda

https://www.anaconda.com/
Conda is an open-source package manager and environment manager that runs on Windows, macOS, and Linux. Conda can:

  • quickly install, run, and update packages and their dependencies
  • easily create, save, load, and switch between environments on your local machine

It was created for Python programs, but it can package and distribute software for any language.

In short: Conda is excellent and powerful, and using it will save you a lot of trouble. (Life is short, I use Conda!)

one-api

https://github.com/songquanpeng/one-api
An all-in-one OpenAI-style interface that unifies access to many model APIs; one-click deployment, ready to use out of the box.

chatglm3

https://github.com/THUDM/ChatGLM3
ChatGLM3 is a conversational pretrained model jointly released by Zhipu AI and the KEG lab of Tsinghua University.

m3e

https://modelscope.cn/models/Jerry0/m3e-base/summary
M3E is short for Moka Massive Mixed Embedding:

  • Moka: the model is trained, open-sourced, and evaluated by MokaAI; training uses the uniem scripts and evaluation uses the MTEB-zh benchmark
  • Massive: the model is trained on a massive dataset of over 22 million (2200w+) Chinese sentence pairs
  • Mixed: the model supports Chinese-English bilingual homogeneous text similarity and heterogeneous text retrieval, with code retrieval planned for the future
  • Embedding: it is a text-embedding model that converts natural language into dense vectors (see the sketch below)
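
To make "dense vectors" tangible, here is a minimal usage sketch (my own illustration, not from the m3e docs; the model path is a placeholder for wherever you cloned m3e-base) that encodes two sentences via sentence_transformers and compares them:

# Encode two sentences with m3e-base and compare them.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("./m3e-base")  # placeholder: your local m3e-base clone
sentences = ["FastGPT 是一个知识库问答系统", "FastGPT is a knowledge-base QA system"]
# normalize_embeddings=True makes the dot product equal cosine similarity
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)                      # (2, 768) for m3e-base
print(float(embeddings[0] @ embeddings[1]))  # bilingual similarity score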

ollama

https://ollama.com/
A management tool for large language models.

fastgpt

https://github.com/labring/FastGPT
FastGPT is a knowledge-base question-answering system built on large language models. It provides out-of-the-box data processing and model invocation, and supports visual workflow orchestration via Flow to build complex question-answering scenarios.

ModelScope (魔搭社区)

https://modelscope.cn/home
ModelScope aims to be a next-generation open-source model-as-a-service sharing platform, offering AI developers flexible, easy-to-use, low-cost, one-stop model services and making model applications simpler.

Using Conda

Official site: https://www.anaconda.com/

  1. After installation, update Conda itself:

conda update -n base -c defaults conda

  2. Update all installed packages:

conda update --all

  3. Create a virtual environment:

conda create --name windows_chatglm3-6b python=3.11 -y

  4. Activate and enter the virtual environment:

conda activate windows_chatglm3-6b

  5. Install PyTorch. First run nvidia-smi in cmd to check the highest CUDA Version your driver supports (mine is 12.2).

PyTorch is an open-source Python machine-learning library based on Torch, used for applications such as natural language processing. It can be viewed as NumPy with GPU support, and equally as a powerful deep neural network framework with automatic differentiation.

  6. Look up the exact command for your machine at https://pytorch.org/get-started/locally/
  7. Run that command inside the conda virtual environment to install PyTorch:

conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
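
Once the install finishes, a quick sanity check (my own addition) confirms that PyTorch actually sees the GPU; run it inside the conda environment:

import torch

print(torch.__version__)               # the installed PyTorch version
print(torch.cuda.is_available())       # should print True on a working CUDA setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA GeForce RTX 3060"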

Setting up the ChatGLM3 environment (non-Ollama mode)

Downloading the models and the demo

Two models need to be downloaded: chatglm3 and m3e.

Downloading chatglm3

Model page: https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary
Download method: git
The download takes quite a while; be patient.

git lfs install
git clone https://www.modelscope.cn/ZhipuAI/chatglm3-6b.git

Downloading m3e-base

git clone https://www.modelscope.cn/Jerry0/m3e-base.git

Downloading the official ChatGLM3 demo

Repository: https://github.com/THUDM/ChatGLM3
Download method: git

git clone https://github.com/THUDM/ChatGLM3

Configuration and running

  1. Enter the ChatGLM3/openai_api_demo folder you just cloned.

  2. Open the Python file api_server.py.

  3. Scroll to the bottom of the file.

  4. Replace the code inside the `if __name__ == "__main__":` block with the following:

    A few values need to be adapted: the paths for `tokenizer` and `model` correspond to the chatglm3 download directory, the path for `embedding_model` corresponds to the m3e download directory, and `port` can be set to whatever you need.

tokenizer = AutoTokenizer.from_pretrained("E:\Work\HaoQue\FastGPT\models\chatglm3-6B-32k-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("E:\Work\HaoQue\FastGPT\models\chatglm3-6B-32k-int4", trust_remote_code=True, device_map="auto").eval()
# load Embedding
embedding_model = SentenceTransformer("E:\Work\HaoQue\FastGPT\models\m3e-base", trust_remote_code=True, device="cuda")
uvicorn.run(app, host='0.0.0.0', port=8000, workers=1)
  5. Go back to the ChatGLM3 root directory and activate the windows_chatglm3-6b conda environment created earlier.
  6. In cmd, run pip install -r requirements.txt to install the dependencies.
  7. Wait for the installation to finish.
  8. Start the server with python openai_api_demo/api_server.py
  9. Check the console output to confirm the server started successfully; you can then verify it with the smoke test below.

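As a quick smoke test (my own addition, not part of the official demo), you can hit the OpenAI-compatible endpoint the server just exposed; this assumes the default host and port (0.0.0.0:8000) configured above:

import requests

# Ask the freshly started server for a completion; the model name matches /v1/models.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "chatglm3-6b",
        "messages": [{"role": "user", "content": "你好"}],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])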

Setting up the Ollama environment

Downloading Ollama and installing a model

  1. Download the installer from https://ollama.com/download
  2. Installation is just clicking through the wizard.
  3. When it finishes, open cmd and run ollama -v to verify the install.

Downloading a model through Ollama

  1. Browse the model list at https://ollama.com/library
  2. This walkthrough uses qwen:1.8b as the example.
  3. In cmd, run ollama run qwen:1.8b
  4. Wait for the download to complete.
  5. Once downloaded, the model starts automatically and nothing further is needed; you can confirm it responds with the sketch below.
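
A minimal sketch (my own addition) calling Ollama's local REST API, which listens on port 11434 by default:

import requests

# One-shot (non-streaming) generation against the local Ollama daemon.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen:1.8b", "prompt": "你好", "stream": False},
    timeout=120,
)
print(resp.json()["response"])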

Setting up the m3e environment (Ollama mode)

Deployment uses Docker; installing Docker itself is not covered here.

docker run -d --name m3e -p 6008:6008 --gpus all -e sk-key=123321  registry.cn-hangzhou.aliyuncs.com/fastgpt_docker/m3e-large-api
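
The container exposes an OpenAI-style embeddings endpoint on port 6008. Here is a hedged verification sketch, assuming the Bearer token is the sk-key value passed to docker run above (123321):

import requests

# Request one embedding from the m3e-large-api container started above.
resp = requests.post(
    "http://localhost:6008/v1/embeddings",
    headers={"Authorization": "Bearer 123321"},  # must match the sk-key env value
    json={"model": "m3e", "input": ["测试文本"]},
    timeout=60,
)
print(len(resp.json()["data"][0]["embedding"]))  # embedding dimension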

Deploying and configuring one-api

  1. Deployment uses Docker (installing Docker itself is not covered here):

docker run --name one-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api

  2. Then open http://localhost:3000/ (the port is whatever you mapped with -p in the docker run command).


  3. Log in; the initial account is root with password 123456.
  4. After logging in, open the Channels page and add a channel; this one is for the large language model.
  5. If you run your model through Ollama, you need to add a second channel, this time for m3e.
  6. Create a new token.
  7. Copy the token key for later use; it is used as shown below.
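
With the channel and token in place, any OpenAI-compatible client can now reach your model through one-api. A sketch using the official openai Python package (the token and model name below are placeholders for your own values):

from openai import OpenAI

# Point the client at one-api instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:3000/v1",
    api_key="sk-xxxx",  # the token you just copied from one-api
)
resp = client.chat.completions.create(
    model="qwen:1.8b",  # must match a model name configured in your channel
    messages=[{"role": "user", "content": "你好"}],
)
print(resp.choices[0].message.content)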

Setting up FastGPT

Reference: https://doc.fastai.site/docs/development/docker/

  1. In a non-Linux environment, or one without external network access, manually create a directory and download these two files into it: docker-compose.yml and config.json.

Note: docker-compose.yml pins Mongo at 5.x, which some servers do not support; in that case change the image version to 4.4.24 by hand (you have to pull it from Docker Hub yourself, as the Aliyun mirror keeps no backup).

  2. Edit config.json:
    1. Open the downloaded config.json.
    2. Copy and replace the first entry of the **llmModels** array, changing the model and name fields to match the model you deployed; the rest can stay unchanged:

{
  "model": "gemma:2b",
  "name": "gemma:2b",
  "maxContext": 16000,
  "avatar": "/imgs/model/openai.svg",
  "maxResponse": 4000,
  "quoteMaxToken": 13000,
  "maxTemperature": 1.2,
  "charsPointsPrice": 0,
  "censor": false,
  "vision": false,
  "datasetProcess": true,
  "usedInClassify": true,
  "usedInExtractFields": true,
  "usedInToolCall": true,
  "usedInQueryExtension": true,
  "toolChoice": true,
  "functionCall": true,
  "customCQPrompt": "",
  "customExtractPrompt": "",
  "defaultSystemChatPrompt": "",
  "defaultConfig": {}
},
  3. If your large model is deployed through Ollama:
    1. Open the downloaded config.json.
    2. Add the following entry to the **vectorModels** array:

{
  "model": "m3e",
  "name": "M3E",
  "inputPrice": 0,
  "outputPrice": 0,
  "defaultToken": 700,
  "maxToken": 1800,
  "weight": 100
}
  4. Open docker-compose.yml.
  5. Comment out the mysql and oneapi sections (one-api was already deployed separately above).
  6. **Start the containers.**

Run the commands below in the same directory as docker-compose.yml. Make sure docker-compose is version 2.17 or later, otherwise the automated commands may not run.

# Start the containers
docker-compose up -d
# Wait 10s; on first start OneAPI usually has to restart a few times before it can connect to Mysql
sleep 10
# Restart oneapi once (the default OneAPI key has an issue that makes it report "channel not found"; a manual restart is a temporary workaround until the author fixes it)
docker restart oneapi
  7. Open FastGPT.

It is reachable directly at ip:3000 (mind your firewall rules). The username is root and the password is the DEFAULT_ROOT_PSW value set in the environment variables of docker-compose.yml.
For domain-name access, install and configure Nginx yourself.
On first run the root user is initialized automatically with the password 1234.

  8. Create a knowledge base.

  9. Upload files to the knowledge base.

  10. Create an AI application.

  11. Start using it.

Runtime errors

Errors from huggingface-hub

pip install huggingface-hub==0.20.3

Out of GPU memory: try setting an environment variable

set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

Then run python api_server.py again (in my testing this did not help):

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 108.00 MiB. GPU 0 has a total capacity of 6.00 GiB of which 0 bytes is free. Of the allocated memory 12.31 GiB is allocated by PyTorch, and 1.27 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
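
If the allocator flag does not help, a more effective fallback is to load the model with 4-bit quantization, as the api_server.py listing below also does via .quantize(4). A minimal sketch, assuming a local chatglm3-6b directory (placeholder path):

from transformers import AutoTokenizer, AutoModel

MODEL_PATH = "./chatglm3-6b"  # placeholder: your local chatglm3 download directory
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
# .quantize(4) is provided by the ChatGLM remote code; int4 weights need far less VRAM than FP16
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).quantize(4).cuda().eval()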

The full api_server.py

"""
This script implements an API for the ChatGLM3-6B model,
formatted similarly to OpenAI's API (https://platform.openai.com/docs/api-reference/chat).
It's designed to be run as a web server using FastAPI and uvicorn,
making the ChatGLM3-6B model accessible through OpenAI Client.Key Components and Features:
- Model and Tokenizer Setup: Configures the model and tokenizer paths and loads them.
- FastAPI Configuration: Sets up a FastAPI application with CORS middleware for handling cross-origin requests.
- API Endpoints:- "/v1/models": Lists the available models, specifically ChatGLM3-6B.- "/v1/chat/completions": Processes chat completion requests with options for streaming and regular responses.- "/v1/embeddings": Processes Embedding request of a list of text inputs.
- Token Limit Caution: In the OpenAI API, 'max_tokens' is equivalent to HuggingFace's 'max_new_tokens', not 'max_length'.
For instance, setting 'max_tokens' to 8192 for a 6b model would result in an error due to the model's inability to output
that many tokens after accounting for the history and prompt tokens.
- Stream Handling and Custom Functions: Manages streaming responses and custom function calls within chat responses.
- Pydantic Models: Defines structured models for requests and responses, enhancing API documentation and type safety.
- Main Execution: Initializes the model and tokenizer, and starts the FastAPI app on the designated host and port.Note:This script doesn't include the setup for special tokens or multi-GPU support by default.Users need to configure their special tokens and can enable multi-GPU support as per the provided instructions.Embedding Models only support in One GPU.Running this script requires 14-15GB of GPU memory. 2 GB for the embedding model and 12-13 GB for the FP16 ChatGLM3 LLM."""import os
import time
import tiktoken
import torch
import uvicornfrom fastapi import FastAPI, HTTPException, Response
from fastapi.middleware.cors import CORSMiddlewarefrom contextlib import asynccontextmanager
from typing import List, Literal, Optional, Union
from loguru import logger
from pydantic import BaseModel, Field
from transformers import AutoTokenizer, AutoModel
from utils import process_response, generate_chatglm3, generate_stream_chatglm3
from sentence_transformers import SentenceTransformerfrom sse_starlette.sse import EventSourceResponse# Set up limit request time
EventSourceResponse.DEFAULT_PING_INTERVAL = 1000000# set LLM path
MODEL_PATH = os.environ.get('MODEL_PATH', 'D:\WangMing\FastGPT\models\chatglm3-6b-copy')
TOKENIZER_PATH = os.environ.get("TOKENIZER_PATH", 'D:\WangMing\FastGPT\models\chatglm3-6b-copy')# set Embedding Model path
EMBEDDING_PATH = os.environ.get('EMBEDDING_PATH', 'D:\WangMing\FastGPT\models\m3e-base')@asynccontextmanager
async def lifespan(app: FastAPI):yieldif torch.cuda.is_available():torch.cuda.empty_cache()torch.cuda.ipc_collect()app = FastAPI(lifespan=lifespan)app.add_middleware(CORSMiddleware,allow_origins=["*"],allow_credentials=True,allow_methods=["*"],allow_headers=["*"],
)class ModelCard(BaseModel):id: strobject: str = "model"created: int = Field(default_factory=lambda: int(time.time()))owned_by: str = "owner"root: Optional[str] = Noneparent: Optional[str] = Nonepermission: Optional[list] = Noneclass ModelList(BaseModel):object: str = "list"data: List[ModelCard] = []class FunctionCallResponse(BaseModel):name: Optional[str] = Nonearguments: Optional[str] = Noneclass ChatMessage(BaseModel):role: Literal["user", "assistant", "system", "function"]content: str = Nonename: Optional[str] = Nonefunction_call: Optional[FunctionCallResponse] = Noneclass DeltaMessage(BaseModel):role: Optional[Literal["user", "assistant", "system"]] = Nonecontent: Optional[str] = Nonefunction_call: Optional[FunctionCallResponse] = None## for Embedding
class EmbeddingRequest(BaseModel):input: List[str]model: strclass CompletionUsage(BaseModel):prompt_tokens: intcompletion_tokens: inttotal_tokens: intclass EmbeddingResponse(BaseModel):data: listmodel: strobject: strusage: CompletionUsage# for ChatCompletionRequestclass UsageInfo(BaseModel):prompt_tokens: int = 0total_tokens: int = 0completion_tokens: Optional[int] = 0class ChatCompletionRequest(BaseModel):model: strmessages: List[ChatMessage]temperature: Optional[float] = 0.8top_p: Optional[float] = 0.8max_tokens: Optional[int] = Nonestream: Optional[bool] = Falsetools: Optional[Union[dict, List[dict]]] = Nonerepetition_penalty: Optional[float] = 1.1class ChatCompletionResponseChoice(BaseModel):index: intmessage: ChatMessagefinish_reason: Literal["stop", "length", "function_call"]class ChatCompletionResponseStreamChoice(BaseModel):delta: DeltaMessagefinish_reason: Optional[Literal["stop", "length", "function_call"]]index: intclass ChatCompletionResponse(BaseModel):model: strid: strobject: Literal["chat.completion", "chat.completion.chunk"]choices: List[Union[ChatCompletionResponseChoice, ChatCompletionResponseStreamChoice]]created: Optional[int] = Field(default_factory=lambda: int(time.time()))usage: Optional[UsageInfo] = None@app.get("/health")
async def health() -> Response:"""Health check."""return Response(status_code=200)@app.post("/v1/embeddings", response_model=EmbeddingResponse)
async def get_embeddings(request: EmbeddingRequest):embeddings = [embedding_model.encode(text) for text in request.input]embeddings = [embedding.tolist() for embedding in embeddings]def num_tokens_from_string(string: str) -> int:"""Returns the number of tokens in a text string.use cl100k_base tokenizer"""encoding = tiktoken.get_encoding('cl100k_base')num_tokens = len(encoding.encode(string))return num_tokensresponse = {"data": [{"object": "embedding","embedding": embedding,"index": index}for index, embedding in enumerate(embeddings)],"model": request.model,"object": "list","usage": CompletionUsage(prompt_tokens=sum(len(text.split()) for text in request.input),completion_tokens=0,total_tokens=sum(num_tokens_from_string(text) for text in request.input),)}return response@app.get("/v1/models", response_model=ModelList)
async def list_models():model_card = ModelCard(id="chatglm3-6b")return ModelList(data=[model_card])@app.post("/v1/chat/completions", response_model=ChatCompletionResponse)
async def create_chat_completion(request: ChatCompletionRequest):global model, tokenizerif len(request.messages) < 1 or request.messages[-1].role == "assistant":raise HTTPException(status_code=400, detail="Invalid request")gen_params = dict(messages=request.messages,temperature=request.temperature,top_p=request.top_p,max_tokens=request.max_tokens or 1024,echo=False,stream=request.stream,repetition_penalty=request.repetition_penalty,tools=request.tools,)logger.debug(f"==== request ====\n{gen_params}")if request.stream:# Use the stream mode to read the first few characters, if it is not a function call, direct stram outputpredict_stream_generator = predict_stream(request.model, gen_params)output = next(predict_stream_generator)if not contains_custom_function(output):return EventSourceResponse(predict_stream_generator, media_type="text/event-stream")# Obtain the result directly at one time and determine whether tools needs to be called.logger.debug(f"First result output:\n{output}")function_call = Noneif output and request.tools:try:function_call = process_response(output, use_tool=True)except:logger.warning("Failed to parse tool call")# CallFunctionif isinstance(function_call, dict):function_call = FunctionCallResponse(**function_call)"""In this demo, we did not register any tools.You can use the tools that have been implemented in our `tools_using_demo` and implement your own streaming tool implementation here.Similar to the following method:function_args = json.loads(function_call.arguments)tool_response = dispatch_tool(tool_name: str, tool_params: dict)"""tool_response = ""if not gen_params.get("messages"):gen_params["messages"] = []gen_params["messages"].append(ChatMessage(role="assistant",content=output,))gen_params["messages"].append(ChatMessage(role="function",name=function_call.name,content=tool_response,))# Streaming output of results after function callsgenerate = predict(request.model, gen_params)return EventSourceResponse(generate, media_type="text/event-stream")else:# Handled to avoid exceptions in the above parsing function process.generate = parse_output_text(request.model, output)return EventSourceResponse(generate, media_type="text/event-stream")# Here is the handling of stream = Falseresponse = generate_chatglm3(model, tokenizer, gen_params)# Remove the first newline characterif response["text"].startswith("\n"):response["text"] = response["text"][1:]response["text"] = response["text"].strip()usage = UsageInfo()function_call, finish_reason = None, "stop"if request.tools:try:function_call = process_response(response["text"], use_tool=True)except:logger.warning("Failed to parse tool call, maybe the response is not a tool call or have been answered.")if isinstance(function_call, dict):finish_reason = "function_call"function_call = FunctionCallResponse(**function_call)message = ChatMessage(role="assistant",content=response["text"],function_call=function_call if isinstance(function_call, FunctionCallResponse) else None,)logger.debug(f"==== message ====\n{message}")choice_data = ChatCompletionResponseChoice(index=0,message=message,finish_reason=finish_reason,)task_usage = UsageInfo.model_validate(response["usage"])for usage_key, usage_value in task_usage.model_dump().items():setattr(usage, usage_key, getattr(usage, usage_key) + usage_value)return ChatCompletionResponse(model=request.model,id="",  # for open_source model, id is emptychoices=[choice_data],object="chat.completion",usage=usage)async def predict(model_id: str, params: dict):global model, tokenizerchoice_data = 
ChatCompletionResponseStreamChoice(index=0,delta=DeltaMessage(role="assistant"),finish_reason=None)chunk = ChatCompletionResponse(model=model_id, id="", choices=[choice_data], object="chat.completion.chunk")yield "{}".format(chunk.model_dump_json(exclude_unset=True))previous_text = ""for new_response in generate_stream_chatglm3(model, tokenizer, params):decoded_unicode = new_response["text"]delta_text = decoded_unicode[len(previous_text):]previous_text = decoded_unicodefinish_reason = new_response["finish_reason"]if len(delta_text) == 0 and finish_reason != "function_call":continuefunction_call = Noneif finish_reason == "function_call":try:function_call = process_response(decoded_unicode, use_tool=True)except:logger.warning("Failed to parse tool call, maybe the response is not a tool call or have been answered.")if isinstance(function_call, dict):function_call = FunctionCallResponse(**function_call)delta = DeltaMessage(content=delta_text,role="assistant",function_call=function_call if isinstance(function_call, FunctionCallResponse) else None,)choice_data = ChatCompletionResponseStreamChoice(index=0,delta=delta,finish_reason=finish_reason)chunk = ChatCompletionResponse(model=model_id,id="",choices=[choice_data],object="chat.completion.chunk")yield "{}".format(chunk.model_dump_json(exclude_unset=True))choice_data = ChatCompletionResponseStreamChoice(index=0,delta=DeltaMessage(),finish_reason="stop")chunk = ChatCompletionResponse(model=model_id,id="",choices=[choice_data],object="chat.completion.chunk")yield "{}".format(chunk.model_dump_json(exclude_unset=True))yield '[DONE]'def predict_stream(model_id, gen_params):"""The function call is compatible with stream mode output.The first seven characters are determined.If not a function call, the stream output is directly generated.Otherwise, the complete character content of the function call is returned.:param model_id::param gen_params::return:"""output = ""is_function_call = Falsehas_send_first_chunk = Falsefor new_response in generate_stream_chatglm3(model, tokenizer, gen_params):decoded_unicode = new_response["text"]delta_text = decoded_unicode[len(output):]output = decoded_unicode# When it is not a function call and the character length is> 7,# try to judge whether it is a function call according to the special function prefixif not is_function_call and len(output) > 7:# Determine whether a function is calledis_function_call = contains_custom_function(output)if is_function_call:continue# Non-function call, direct stream outputfinish_reason = new_response["finish_reason"]# Send an empty string first to avoid truncation by subsequent next() operations.if not has_send_first_chunk:message = DeltaMessage(content="",role="assistant",function_call=None,)choice_data = ChatCompletionResponseStreamChoice(index=0,delta=message,finish_reason=finish_reason)chunk = ChatCompletionResponse(model=model_id,id="",choices=[choice_data],created=int(time.time()),object="chat.completion.chunk")yield "{}".format(chunk.model_dump_json(exclude_unset=True))send_msg = delta_text if has_send_first_chunk else outputhas_send_first_chunk = Truemessage = DeltaMessage(content=send_msg,role="assistant",function_call=None,)choice_data = ChatCompletionResponseStreamChoice(index=0,delta=message,finish_reason=finish_reason)chunk = ChatCompletionResponse(model=model_id,id="",choices=[choice_data],created=int(time.time()),object="chat.completion.chunk")yield "{}".format(chunk.model_dump_json(exclude_unset=True))if is_function_call:yield outputelse:yield '[DONE]'async def 
parse_output_text(model_id: str, value: str):"""Directly output the text content of value:param model_id::param value::return:"""choice_data = ChatCompletionResponseStreamChoice(index=0,delta=DeltaMessage(role="assistant", content=value),finish_reason=None)chunk = ChatCompletionResponse(model=model_id, id="", choices=[choice_data], object="chat.completion.chunk")yield "{}".format(chunk.model_dump_json(exclude_unset=True))choice_data = ChatCompletionResponseStreamChoice(index=0,delta=DeltaMessage(),finish_reason="stop")chunk = ChatCompletionResponse(model=model_id, id="", choices=[choice_data], object="chat.completion.chunk")yield "{}".format(chunk.model_dump_json(exclude_unset=True))yield '[DONE]'def contains_custom_function(value: str) -> bool:"""Determine whether 'function_call' according to a special function prefix.For example, the functions defined in "tools_using_demo/tool_register.py" are all "get_xxx" and start with "get_"[Note] This is not a rigorous judgment method, only for reference.:param value::return:"""return value and 'get_' in valueif __name__ == "__main__":# Load LLMtokenizer = AutoTokenizer.from_pretrained("D:\WangMing\FastGPT\models\chatglm3-6b-copy", trust_remote_code=True)model = AutoModel.from_pretrained("D:\WangMing\FastGPT\models\chatglm3-6b-copy", trust_remote_code=True, device_map="auto").quantize(4).eval()# load Embeddingembedding_model = SentenceTransformer("D:\WangMing\FastGPT\models\chatglm3-6b-copy", trust_remote_code=True, device="cuda")uvicorn.run(app, host='0.0.0.0', port=8000, workers=1)
