GraphRAG: Local Deployment and Qianfan/Tongyi Compatibility

Table of Contents

  • Preface
  • I. Installing GraphRAG locally
    • 1. Create the environment and install
    • 2. Prepare demo data
    • 3. Initialize the demo directory
  • II. Making GraphRAG work with Qianfan, Tongyi, and other LLMs
    • 1. Install graphrag-more
    • 2. Prepare demo data
    • 3. Initialize the demo directory
    • 4. Move and modify the settings.yaml file
  • III. Building and using the knowledge base
    • 1. Build the knowledge base
    • 2. Run queries


Preface

GraphRAG is a retrieval-augmented generation (RAG) application built on knowledge graphs. Unlike traditional RAG applications based on vector retrieval, it enables deeper, more fine-grained, context-aware retrieval, which helps produce higher-quality output.

GitHub: https://github.com/microsoft/graphrag
Documentation: https://microsoft.github.io/graphrag/get_started/

graphrag-more: https://github.com/guoyao/graphrag-more

I. Installing GraphRAG locally

1. Create the environment and install

# create a conda virtual environment named graphrag
conda create -n graphrag python=3.10
# activate it so that pip installs into this environment
conda activate graphrag
# install graphrag
pip install graphrag
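
Optionally, a quick sanity check that the package and its CLI entry point are visible (the same graphrag command is used for init, index, and query later in this article):

pip show graphrag
graphrag --help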

2. Prepare demo data

# create the demo directory
mkdir -p ./ragtest/input
# download Microsoft's official demo data
curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./ragtest/input/book.txt
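
Optionally, confirm the download succeeded (the file is the plain text of A Christmas Carol, which the sample queries later in this article refer to):

ls -lh ./ragtest/input/book.txt
head -n 5 ./ragtest/input/book.txt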

3. Initialize the demo directory

To initialize the workspace, first run the graphrag init command. Since we already set up a directory named ./ragtest in the previous step, run the following:

graphrag init --root ./ragtest

This will create two files in the ./ragtest directory: .env and settings.yaml.

  • .env contains the environment variables required to run GraphRAG. If you inspect the file, you will see a single defined variable, GRAPHRAG_API_KEY=<API_KEY>. This is the API key for the OpenAI API or an Azure OpenAI endpoint; replace it with your own key (see the example below). If you are using another form of authentication (e.g., managed identity), delete this file.
  • settings.yaml contains the GraphRAG settings. Modify this file to change the pipeline configuration.
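
For reference, the generated .env is a one-line file; a minimal example (the placeholder stands in for your own key):

GRAPHRAG_API_KEY=<API_KEY>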

II. Making GraphRAG work with Qianfan, Tongyi, and other LLMs

Using OpenAI / Azure OpenAI from mainland China is inconvenient in many ways, so the official code was forked into a new repository, graphrag-more, which makes a small set of changes on top of it to support Baidu Qianfan, Alibaba Tongyi, and Ollama.

The installation steps are as follows.

1. Install graphrag-more

If you need to do secondary development or debugging, you can also work directly from source:

# download the graphrag-more repository
git clone https://github.com/guoyao/graphrag-more.git

# install the dependencies (poetry is used here to manage the Python virtual environment;
# to install poetry, see https://python-poetry.org/docs/#installation)
cd graphrag-more
poetry install

To install the released package instead:

conda create -n graphrag python=3.10
conda activate graphrag
pip install graphrag-more
# or, if pip is configured with a domestic mirror, install from the official index
pip install -i https://pypi.org/simple graphrag-more
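
Optionally, verify the install. Note that graphrag-more keeps the upstream graphrag module name, so it is invoked as python -m graphrag.index, as in the steps below:

pip show graphrag-more
python -m graphrag.index --help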

2. Prepare demo data

# create the demo directory
mkdir -p ./ragtest-more/input
# download Microsoft's official demo data
curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./ragtest-more/input/book.txt

3. Initialize the demo directory

python -m graphrag.index --init --root ./ragtest-more
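
After initialization, the directory should look roughly like this (the .env and settings.yaml described earlier, plus a generated prompts folder; exact contents may vary by version):

ragtest-more/
├── .env
├── settings.yaml
├── prompts/
└── input/
    └── book.txt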

4. Move and modify the settings.yaml file

Depending on the model you choose (Qianfan, Tongyi, or Ollama), copy that model's settings.yaml from the example_settings folder into the ragtest-more directory, overwriting the settings.yaml generated during initialization, e.g.:
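
For example, for Tongyi (assuming the one-subfolder-per-provider layout of example_settings in the graphrag-more repository; run from the repository checkout or adjust the source path):

cp ./example_settings/tongyi/settings.yaml ./ragtest-more/settings.yaml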

Each settings.yaml ships with default llm and embeddings models; edit the model entries in the settings.yaml file to match the models you want to use:

  • Qianfan defaults to qianfan.ERNIE-3.5-128K and qianfan.bge-large-zh. Note: the qianfan. prefix is required!
  • Tongyi defaults to tongyi.qwen-plus and tongyi.text-embedding-v2. Note: the tongyi. prefix is required!
  • Ollama defaults to ollama.mistral:latest and ollama.quentinz/bge-large-zh-v1.5:latest. Note: in versions <= 0.3.0 the llm model takes no prefix, but from 0.3.1 on the llm model must carry the ollama. prefix; the embeddings model always needs the ollama. prefix!

The configuration files for Ollama, Qianfan, and Tongyi (Qwen) are shown below.

Ollama


encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: ollama.mistral:latest
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: http://localhost:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  # target: required # or all
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: ollama.quentinz/bge-large-zh-v1.5:latest
    # api_base: http://localhost:11434/api
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output" # output/${timestamp}/artifacts
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output" # output/${timestamp}/reports
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## strategy: fully override the entity extraction strategy.
  ##   type: one of graph_intelligence, graph_intelligence_json and nltk
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

Qianfan


encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: qianfan.ERNIE-3.5-128K
  model_supports_json: false # recommended if this is available for your model, original default is true
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: https://<instance>.openai.azure.com
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 120 # set a leaky bucket throttle, original default is 10_000
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 2 # the number of parallel inflight requests that may be made, original default is 25
  temperature: 1e-10 # temperature for sampling, original default is 0
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: asyncio # or threaded

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: asyncio # or threaded
  # target: required # or all
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: qianfan.bge-large-zh
    # api_base: https://<instance>.openai.azure.com
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output" # output/${timestamp}/artifacts
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output" # output/${timestamp}/reports
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## strategy: fully override the entity extraction strategy.
  ##   type: one of graph_intelligence, graph_intelligence_json and nltk
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  llm_temperature: 1e-10 # temperature for sampling, original default is 0
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 5000 # original default is 12000

global_search:
  llm_temperature: 1e-10 # temperature for sampling, original default is 0
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 5000 # original default is 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

Tongyi (Qwen)


encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: tongyi.qwen-plus
  model_supports_json: false # recommended if this is available for your model, original default is true
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: https://<instance>.openai.azure.com
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  # target: required # or all
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: tongyi.text-embedding-v2
    # api_base: https://<instance>.openai.azure.com
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output" # output/${timestamp}/artifacts
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output" # output/${timestamp}/reports
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## strategy: fully override the entity extraction strategy.
  ##   type: one of graph_intelligence, graph_intelligence_json and nltk
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

III. Building and using the knowledge base

1. Build the knowledge base

The build may trigger provider rate limits and fail. If that happens, rerun it a few times, or lower the requests_per_minute and concurrent_requests values in settings.yaml and retry; the fragment below shows the knobs involved.
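
The relevant settings live in the llm section of settings.yaml; the values below are illustrative, not tuned:

llm:
  # ...
  # uncomment and lower these if the provider keeps throttling you
  requests_per_minute: 60
  concurrent_requests: 2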

# if you are using graphrag
graphrag index --root ./ragtest
# if you are using graphrag-more
python -m graphrag.index --root ./ragtest-more
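
When the build finishes, the generated artifacts are written under the output directory configured in the storage section of settings.yaml (base_dir: "output"); for instance:

ls ./ragtest-more/output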

Note that if you are using graphrag-more, the API keys must be configured as environment variables.

Set the environment variables for the model provider you chose; if you are using Ollama, install it and pull the corresponding models.

1. Qianfan: set the environment variables QIANFAN_AK and QIANFAN_SK (note: these are the application AK/SK, not the security-authentication Access Key/Secret Key); see the official Qianfan documentation for how to obtain them.
2. Tongyi: set the environment variable TONGYI_API_KEY (from version 0.3.6.1 on, DASHSCOPE_API_KEY is also supported; if both are set, TONGYI_API_KEY takes precedence over DASHSCOPE_API_KEY); see the official Tongyi documentation for how to obtain it.
3. Ollama: install it from https://ollama.com/download and start it, then pull the models (a quick check follows below):
ollama pull mistral:latest
ollama pull quentinz/bge-large-zh-v1.5:latest
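
You can confirm that both models were pulled before building:

ollama list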

Configure the environment variables; for example, on Linux:

export QIANFAN_AK="value"
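
The same pattern applies to every provider; a sketch covering the variables named above (values are placeholders for your own keys):

# Qianfan (application AK/SK, not the security Access Key/Secret Key)
export QIANFAN_AK="your-app-ak"
export QIANFAN_SK="your-app-sk"

# Tongyi (from 0.3.6.1, DASHSCOPE_API_KEY also works; TONGYI_API_KEY wins if both are set)
export TONGYI_API_KEY="your-api-key"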


2. Run queries

graphrag

# global query
graphrag query \
--root ./ragtest \
--method global \
--query "What are the top themes in this story?"# local query
graphrag query \
--root ./ragtest \
--method local \
--query "Who is Scrooge and what are his main relationships?"

graphrag-more

# global query
python -m graphrag.query \
--root ./ragtest-more \
--method global \
"What are the top themes in this story?"# local query
python -m graphrag.query \
--root ./ragtest-more \
--method local \
"Who is Scrooge, and what are his main relationships?"

The query results are printed to the console. [Screenshot of the output omitted.]
