Deploying GraphRAG Locally and Using It with Qianfan, Tongyi, and Other Models

Contents

  • Preface
  • I. Local GraphRAG installation
    • 1. Create the environment and install
    • 2. Prepare the demo data
    • 3. Initialize the demo directory
  • II. GraphRAG compatibility with Qianfan, Tongyi, and other LLMs
    • 1. Install graphrag-more
    • 2. Prepare the demo data
    • 3. Initialize the demo directory
    • 4. Move and modify the settings.yaml file
  • III. Building and using the knowledge base
    • 1. Build the knowledge base
    • 2. Run queries


Preface

GraphRAG is a knowledge-graph-based retrieval-augmented generation (RAG) application. Unlike traditional RAG applications built on vector retrieval, it enables deeper, finer-grained, context-aware retrieval, which helps produce higher-quality output.

GitHub: https://github.com/microsoft/graphrag
Documentation: https://microsoft.github.io/graphrag/get_started/

graphrag-more: https://github.com/guoyao/graphrag-more

I. Local GraphRAG installation

1. Create the environment and install

# Create a graphrag virtual environment with conda
conda create -n graphrag python=3.10
conda activate graphrag
# Install graphrag
pip install graphrag
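
To confirm the package landed in the active environment, you can use pip's standard show subcommand:

pip show graphrag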

2. Prepare the demo data

# Create the demo directory
mkdir -p ./ragtest/input
# Download Microsoft's official demo data
curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./ragtest/input/book.txt
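
A quick sanity check that the download succeeded (plain Unix tools; the file should be non-empty English text):

wc -c ./ragtest/input/book.txt
head -n 5 ./ragtest/input/book.txt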

3. Initialize the demo directory

To initialize the workspace, first run the graphrag init command. Since we already set up a directory named ./ragtest in the previous step, run the following:

graphrag init --root ./ragtest

This creates two files in the ./ragtest directory: .env and settings.yaml.

  • .env contains the environment variables required to run GraphRAG. If you inspect the file, you will see a single defined variable, GRAPHRAG_API_KEY=<API_KEY>. This is the API key for the OpenAI API or an Azure OpenAI endpoint; replace it with your own key (a minimal sketch follows this list). If you are using another form of authentication (i.e., managed identity), delete this file.
  • settings.yaml contains the GraphRAG settings. Modify this file to change the pipeline configuration.
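
A minimal sketch of the resulting .env, with the key value left as the generated placeholder (do not commit a real key):

# ./ragtest/.env
GRAPHRAG_API_KEY=<API_KEY>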

II. GraphRAG compatibility with Qianfan, Tongyi, and other LLMs

Using OpenAI / Azure OpenAI from within China is inconvenient in many ways, so the official code was forked into a new repository, graphrag-more, which makes a small set of changes on top of it to support Baidu Qianfan, Alibaba Tongyi, and Ollama.

The installation steps are as follows.

1. Install graphrag-more

To install from PyPI:

conda create -n graphrag python=3.10
conda activate graphrag
pip install graphrag-more
# or, if you are behind a domestic mirror, switch back to the official index to download it
pip install -i https://pypi.org/simple graphrag-more

If you need to do secondary development or debugging, you can also work directly from source:

# Clone the graphrag-more repository
git clone https://github.com/guoyao/graphrag-more.git
# Install the dependencies; poetry is used here to manage the Python virtual environment.
# To install poetry, see: https://python-poetry.org/docs/#installation
cd graphrag-more
poetry install
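
As a quick smoke test, you can ask the indexing module for its usage text (assuming the fork keeps the upstream python -m graphrag.index entry point, used later in this guide, with a standard --help flag):

python -m graphrag.index --help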

2. Prepare the demo data

# Create the demo directory
mkdir -p ./ragtest-more/input
# Download Microsoft's official demo data
curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./ragtest-more/input/book.txt

3. Initialize the demo directory

python -m graphrag.index --init --root ./ragtest-more

4. Move and modify the settings.yaml file

Depending on the model you choose (Qianfan, Tongyi, or Ollama), copy the corresponding settings.yaml file from the example_settings folder into the ragtest-more directory, overwriting the settings.yaml generated during initialization, as shown below.
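
For example, for Qianfan (assuming the repository keeps one settings.yaml per provider under example_settings/ — adjust the path to match your actual checkout layout):

cp ./graphrag-more/example_settings/qianfan/settings.yaml ./ragtest-more/settings.yaml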

Each settings.yaml ships with default llm and embeddings models; edit the model entries to match the models you actually want to use (see the snippet after this list):

  • Qianfan defaults to qianfan.ERNIE-3.5-128K and qianfan.bge-large-zh. Note: the qianfan. prefix is required!
  • Tongyi defaults to tongyi.qwen-plus and tongyi.text-embedding-v2. Note: the tongyi. prefix is required!
  • Ollama defaults to ollama.mistral:latest and ollama.quentinz/bge-large-zh-v1.5:latest. Note: in versions <= 0.3.0 the llm model takes no prefix; from version 0.3.1 onward the llm model must carry the ollama. prefix, and the embeddings model must always carry the ollama. prefix!
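
Concretely, these are the lines to check in settings.yaml; the model names below are the Qianfan defaults listed above:

llm:
  model: qianfan.ERNIE-3.5-128K # chat model; the qianfan. prefix is required
embeddings:
  llm:
    model: qianfan.bge-large-zh # embedding model; the qianfan. prefix is required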

For reference, the full configuration files for Ollama, Qianfan, and Tongyi are shown below.

Ollama


encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: ollama.mistral:latest
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: http://localhost:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  # target: required # or all
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: ollama.quentinz/bge-large-zh-v1.5:latest
    # api_base: http://localhost:11434/api
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output" # output/${timestamp}/artifacts
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output" # output/${timestamp}/reports
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## strategy: fully override the entity extraction strategy.
  ##   type: one of graph_intelligence, graph_intelligence_json and nltk
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization, person, geo, event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

Qianfan


encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: qianfan.ERNIE-3.5-128K
  model_supports_json: false # recommended if this is available for your model, original default is true
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: https://<instance>.openai.azure.com
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 120 # set a leaky bucket throttle, original default is 10_000
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 2 # the number of parallel inflight requests that may be made, original default is 25
  temperature: 1e-10 # temperature for sampling, original default is 0
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: asyncio # or threaded

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: asyncio # or threaded
  # target: required # or all
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: qianfan.bge-large-zh
    # api_base: https://<instance>.openai.azure.com
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output" # output/${timestamp}/artifacts
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output" # output/${timestamp}/reports
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## strategy: fully override the entity extraction strategy.
  ##   type: one of graph_intelligence, graph_intelligence_json and nltk
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization, person, geo, event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  llm_temperature: 1e-10 # temperature for sampling, original default is 0
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 5000 # original default is 12000

global_search:
  llm_temperature: 1e-10 # temperature for sampling, original default is 0
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 5000 # original default is 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

Tongyi (Qwen)


encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: tongyi.qwen-plus
  model_supports_json: false # recommended if this is available for your model, original default is true
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: https://<instance>.openai.azure.com
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  # target: required # or all
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: tongyi.text-embedding-v2
    # api_base: https://<instance>.openai.azure.com
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output" # output/${timestamp}/artifacts
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output" # output/${timestamp}/reports
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## strategy: fully override the entity extraction strategy.
  ##   type: one of graph_intelligence, graph_intelligence_json and nltk
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization, person, geo, event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

III. Building and using the knowledge base

1. Build the knowledge base

The build may trigger rate limits and fail. If that happens, re-run it a few times, or try lowering the requests_per_minute and concurrent_requests settings in settings.yaml (see the snippet below) and retry.
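
For example, in the llm section of settings.yaml you can uncomment and reduce the two throttle settings; the values below mirror the Qianfan example config above, so tune them to your own quota:

llm:
  requests_per_minute: 120 # lower this if rate limits keep triggering
  concurrent_requests: 2 # fewer parallel in-flight requests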

# If you are using graphrag
graphrag index --root ./ragtest
# If you are using graphrag-more
python -m graphrag.index --root ./ragtest-more

Note that if you use graphrag-more, you must supply the API keys via environment variables.

Set the environment variables that correspond to your chosen model; if you use Ollama, install it and pull the matching models first:

1. Qianfan: set the environment variables QIANFAN_AK and QIANFAN_SK (note: these are the application AK/SK, not the security-authentication Access Key/Secret Key); see the official documentation for how to obtain them.
2. Tongyi: set the environment variable TONGYI_API_KEY (from version 0.3.6.1 onward, DASHSCOPE_API_KEY is also supported; if both are set, TONGYI_API_KEY takes precedence over DASHSCOPE_API_KEY); see the official documentation for how to obtain it.
3. Ollama: install it from https://ollama.com/download , start it, and then pull the models:

ollama pull mistral:latest
ollama pull quentinz/bge-large-zh-v1.5:latest
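
You can confirm both models are available locally with Ollama's list command:

ollama list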

Configure the environment variables, for example on Linux:

export QIANFAN_AK="value"
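
Putting the provider variables together (the variable names are the ones listed above; the values are placeholders):

# Qianfan
export QIANFAN_AK="your_app_ak"
export QIANFAN_SK="your_app_sk"
# Tongyi
export TONGYI_API_KEY="your_api_key" # from v0.3.6.1, DASHSCOPE_API_KEY also works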


2. Run queries

Using graphrag:

# global query
graphrag query \
--root ./ragtest \
--method global \
--query "What are the top themes in this story?"# local query
graphrag query \
--root ./ragtest \
--method local \
--query "Who is Scrooge and what are his main relationships?"

Using graphrag-more:

# global query
python -m graphrag.query \
--root ./ragtest-more \
--method global \
"What are the top themes in this story?"# local query
python -m graphrag.query \
--root ./ragtest-more \
--method local \
"Who is Scrooge, and what are his main relationships?"

