Learning ChatGLM2-6B from Scratch: Fine-Tuning with P-Tuning v2

ChatGLM2-6B-PT

This project implements fine-tuning of the ChatGLM2-6B model with P-Tuning v2. P-Tuning v2 reduces the number of parameters that need fine-tuning to 0.1% of the original; combined with model quantization and gradient checkpointing, it can run with as little as 7 GB of GPU memory.
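
As a rough illustration of the 0.1% figure (not part of the original notebook), the sketch below loads ChatGLM2-6B with a prefix encoder, freezes everything except the prefix, and prints the trainable-to-total parameter ratio. It assumes the weights are available locally or as THUDM/chatglm2-6b on Hugging Face; in practice the model code and ptuning/main.py handle the freezing for you.

# Minimal sketch: how few parameters P-Tuning v2 actually trains.
# Assumption: ChatGLM2-6B weights are available (e.g. THUDM/chatglm2-6b or a local path).
from transformers import AutoConfig, AutoModel

model_path = "THUDM/chatglm2-6b"  # or a local copy of the checkpoint
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained(model_path, config=config, trust_remote_code=True)

# Freeze everything except the prefix encoder, as P-Tuning v2 does.
for name, param in model.named_parameters():
    param.requires_grad = "prefix_encoder" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.3f}%)")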

The ADGEN (advertisement generation) dataset is used below to show how the code is used.

In [11]:

!pip install -r  /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt -i  https://pypi.tuna.tsinghua.edu.cn/simple/
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple/
Requirement already satisfied: protobuf in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from -r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 1)) (5.26.1)
Requirement already satisfied: transformers==4.30.2 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from -r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (4.30.2)
Requirement already satisfied: cpm_kernels in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from -r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 3)) (1.0.11)
Requirement already satisfied: torch>=2.0 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from -r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 4)) (2.2.2)
Requirement already satisfied: gradio in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from -r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 5)) (3.40.0)
Requirement already satisfied: mdtex2html in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from -r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 6)) (1.3.0)
Requirement already satisfied: sentencepiece in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from -r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 7)) (0.2.0)
Requirement already satisfied: accelerate in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from -r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 8)) (0.28.0)
Requirement already satisfied: sse-starlette in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from -r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 9)) (2.0.0)
Requirement already satisfied: filelock in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (3.13.3)
Requirement already satisfied: huggingface-hub<1.0,>=0.14.1 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (0.22.2)
Requirement already satisfied: numpy>=1.17 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (1.26.4)
Requirement already satisfied: packaging>=20.0 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (24.0)
Requirement already satisfied: pyyaml>=5.1 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (6.0.1)
Requirement already satisfied: regex!=2019.12.17 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (2023.12.25)
Requirement already satisfied: requests in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (2.31.0)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (0.13.3)
Requirement already satisfied: safetensors>=0.3.1 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (0.4.2)
Requirement already satisfied: tqdm>=4.27 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from transformers==4.30.2->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 2)) (4.66.2)
Requirement already satisfied: typing-extensions>=4.8.0 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from torch>=2.0->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 4)) (4.10.0)
Requirement already satisfied: sympy in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from torch>=2.0->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 4)) (1.12)
Requirement already satisfied: networkx in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from torch>=2.0->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 4)) (3.2.1)
Requirement already satisfied: jinja2 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from torch>=2.0->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 4)) (3.1.3)
Requirement already satisfied: fsspec in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from torch>=2.0->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 4)) (2024.2.0)
...
Requirement already satisfied: referencing>=0.28.4 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from jsonschema>=3.0->altair<6.0,>=4.2.0->gradio->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 5)) (0.34.0)
Requirement already satisfied: rpds-py>=0.7.1 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from jsonschema>=3.0->altair<6.0,>=4.2.0->gradio->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 5)) (0.18.0)
Requirement already satisfied: uc-micro-py in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from linkify-it-py<3,>=1->markdown-it-py[linkify]>=2.0.0->gradio->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 5)) (1.0.3)
Requirement already satisfied: six>=1.5 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from python-dateutil>=2.7->matplotlib~=3.0->gradio->-r /mnt/e/AI-lab/ChatGLM2-6B/requirements.txt (line 5)) (1.16.0)

In [13]:

# Besides the ChatGLM2-6B dependencies, fine-tuning requires the following additional packages
!pip install rouge_chinese nltk jieba datasets transformers[torch] -i https://pypi.tuna.tsinghua.edu.cn/simple/
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple/
Requirement already satisfied: rouge_chinese in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (1.0.3)
Requirement already satisfied: nltk in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (3.8.1)
Requirement already satisfied: jieba in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (0.42.1)
Requirement already satisfied: datasets in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (2.18.0)
Requirement already satisfied: transformers[torch] in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (4.30.2)
Requirement already satisfied: six in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from rouge_chinese) (1.16.0)
Requirement already satisfied: click in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from nltk) (8.1.7)
Requirement already satisfied: joblib in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from nltk) (1.3.2)
Requirement already satisfied: regex>=2021.8.3 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from nltk) (2023.12.25)
Requirement already satisfied: tqdm in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from nltk) (4.66.2)
Requirement already satisfied: filelock in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (3.13.3)
Requirement already satisfied: numpy>=1.17 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (1.26.4)
Requirement already satisfied: pyarrow>=12.0.0 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (15.0.2)
Requirement already satisfied: pyarrow-hotfix in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (0.6)
Requirement already satisfied: dill<0.3.9,>=0.3.0 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (0.3.8)
Requirement already satisfied: pandas in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (2.2.1)
Requirement already satisfied: requests>=2.19.0 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (2.31.0)
Requirement already satisfied: xxhash in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (3.4.1)
Requirement already satisfied: multiprocess in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (0.70.16)
Requirement already satisfied: fsspec<=2024.2.0,>=2023.1.0 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from fsspec[http]<=2024.2.0,>=2023.1.0->datasets) (2024.2.0)
Requirement already satisfied: aiohttp in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (3.9.3)
Requirement already satisfied: huggingface-hub>=0.19.4 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (0.22.2)
Requirement already satisfied: packaging in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (24.0)
Requirement already satisfied: pyyaml>=5.1 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from datasets) (6.0.1)
...
Requirement already satisfied: pytz>=2020.1 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from pandas->datasets) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from pandas->datasets) (2024.1)
Requirement already satisfied: MarkupSafe>=2.0 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from jinja2->torch!=1.12.0,>=1.9->transformers[torch]) (2.1.5)
Requirement already satisfied: mpmath>=0.19 in /home/ai001/anaconda3/envs/chatglm2-6b/lib/python3.9/site-packages (from sympy->torch!=1.12.0,>=1.9->transformers[torch]) (1.3.0)

Usage

Download the dataset

The ADGEN task is to generate a piece of advertising copy (summary) from the input attributes (content).

{    "content": "类型#上衣*版型#宽松*版型#显瘦*图案#线条*衣样式#衬衫*衣袖型#泡泡袖*衣款式#抽绳",    "summary": "这件衬衫的款式非常的宽松,利落的线条可以很好的隐藏身材上的小缺点,穿在身上有着很好的显瘦效果。领口装饰了一个可爱的抽绳,漂亮的绳结展现出了十足的个性,配合时尚的泡泡袖型,尽显女性甜美可爱的气息。"    
}

Download the preprocessed ADGEN dataset from Google Drive or Tsinghua Cloud, and place the extracted AdvertiseGen directory under the ptuning directory.

In this project, the ADGEN dataset is already mounted by default.

In [2]:

# The checkpoints produced by fine-tuning are large. To avoid taking up space in the project directory, we move the working directory to the temp directory for the rest of the work.

!cp -r /mnt/e/AI-lab/ChatGLM2-6B/ptuning /mnt/e/AI-lab/ChatGLM2-6B/temp

In [3]:

import os

# Directory to switch to
new_dir = '/mnt/e/AI-lab/ChatGLM2-6B/temp/ptuning'
# Change the current working directory
os.chdir(new_dir)
# Print the current working directory to confirm the switch
print(os.getcwd())

/mnt/e/AI-lab/ChatGLM2-6B/temp/ptuning 

In [4]:

# Copy the ADGEN dataset into the working directory
!cp -r /home/mw/input/adgen9371 AdvertiseGen

In [5]:

# Inspect the dataset
!ls -alh AdvertiseGen
total 52M
drwxrwxrwx 1 ai001 ai001 4.0K Apr  3 21:16 .
drwxrwxrwx 1 ai001 ai001 4.0K Apr  4 17:34 ..
-rwxrwxrwx 1 ai001 ai001 487K Apr  4 17:34 dev.json
-rwxrwxrwx 1 ai001 ai001  52M Apr  4 17:34 train.json
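
The processed ADGEN files are distributed in JSON Lines format (one record per line). The sketch below is added here as a sanity check (it assumes that format and the AdvertiseGen directory just copied above); it counts the examples and prints the fields of the first one.

# Sanity-check the dataset: count examples and show the first record.
import json

for split in ("train.json", "dev.json"):
    with open(f"AdvertiseGen/{split}", encoding="utf-8") as f:
        lines = f.readlines()
    first = json.loads(lines[0])
    print(f"{split}: {len(lines)} examples")
    print("  content:", first["content"])
    print("  summary:", first["summary"][:50], "...")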

Training

P-Tuning v2

PRE_SEQ_LEN and LR are the soft prompt length and the learning rate, respectively; both can be tuned for the best results. The P-Tuning v2 method freezes all of the model's original parameters. The quantization level used to load the original model can be set with quantization_bit; without this option the model is loaded in FP16 precision.
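
The sketch below illustrates, in simplified form, what these options do to the model; it is not a drop-in replacement for the training script, and it assumes the ChatGLM2 remote code exposes its usual quantize() helper (see ptuning/main.py for the authoritative logic).

# Simplified illustration of how the training script prepares the model for P-Tuning v2.
from transformers import AutoConfig, AutoModel

model_path = "/mnt/e/AI-lab/ChatGLM2-6B"
pre_seq_len = 128        # PRE_SEQ_LEN: soft prompt length
quantization_bit = 4     # set to None to keep the frozen backbone in FP16

config = AutoConfig.from_pretrained(model_path, trust_remote_code=True, pre_seq_len=pre_seq_len)
model = AutoModel.from_pretrained(model_path, config=config, trust_remote_code=True)

if quantization_bit is not None:
    # Quantize the frozen backbone (INT4 here) to reduce GPU memory usage.
    model = model.quantize(quantization_bit)

# The backbone stays in half precision; only the prefix encoder is trained, in FP32.
model = model.half()
model.transformer.prefix_encoder.float()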

Under the default configuration quantization_bit=4, per_device_train_batch_size=1, gradient_accumulation_steps=16, the INT4 model parameters are frozen, and one training iteration performs 16 accumulated forward and backward passes with a per-device batch size of 1, equivalent to a total batch size of 16; this needs as little as 6.7 GB of GPU memory. To train more efficiently at the same total batch size, you can increase per_device_train_batch_size while keeping the product of the two unchanged, at the cost of more GPU memory; adjust it to your hardware.
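
The relationship between the two options is simple arithmetic; with the defaults above:

# Effective (total) batch size = per-device batch size x accumulation steps x number of GPUs
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
num_gpus = 1
print(per_device_train_batch_size * gradient_accumulation_steps * num_gpus)  # 16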

Finetune

For a full-parameter finetune, install DeepSpeed and then run:

bash ds_train_finetune.sh

We take the P-Tuning v2 method as an example and fine-tune with quantization_bit=4, per_device_train_batch_size=1, and gradient_accumulation_steps=16 (note that in the cell below the --quantization_bit 4 flag is commented out, so the backbone is actually loaded in FP16).

In [14]:

# P-tuning v2
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
PRE_SEQ_LEN=128
LR=2e-2
NUM_GPUS=1

!torchrun --standalone --nnodes=1 --nproc-per-node=1 main.py \
    --do_train \
    --train_file AdvertiseGen/train.json \
    --validation_file AdvertiseGen/dev.json \
    --preprocessing_num_workers 10 \
    --prompt_column content \
    --response_column summary \
    --overwrite_cache \
    --model_name_or_path /mnt/e/AI-lab/ChatGLM2-6B/ \
    --output_dir output/adgen-chatglm2-6b-pt-$PRE_SEQ_LEN-$LR \
    --overwrite_output_dir \
    --max_source_length 64 \
    --max_target_length 128 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --predict_with_generate \
    --max_steps 3000 \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 2e-2 \
    --pre_seq_len 128 \
    --ddp_find_unused_parameters False
    # --quantization_bit 4
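
With --save_steps 1000 and --max_steps 3000, the run should leave checkpoint-1000, checkpoint-2000 and checkpoint-3000 under the output directory (the $LR variable should expand to 0.02, matching the checkpoint path loaded later in this notebook). A quick way to confirm, once training has finished:

# List the checkpoints written by the training run above (run after training finishes).
import os

output_dir = "output/adgen-chatglm2-6b-pt-128-0.02"
print(sorted(os.listdir(output_dir)))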

In [17]:

# Load the model
model_path = "/mnt/e/ai-lab/ChatGLM2-6B"
from transformers import AutoTokenizer, AutoModel
from utils import load_model_on_gpus

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = load_model_on_gpus("/mnt/e/ai-lab/ChatGLM2-6B", num_gpus=2)
model = model.eval()

# Print model output as Markdown
from IPython.display import display, Markdown, clear_output

def display_answer(model, query, history=[]):
    for response, history in model.stream_chat(tokenizer, query, history=history):
        clear_output(wait=True)
        display(Markdown(response))
    return history

In [18]:

# Before fine-tuning
# model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()
# model = model.eval()
display_answer(model, "类型#上衣\*材质#牛仔布\*颜色#白色\*风格#简约\*图案#刺绣\*衣样式#外套\*衣款式#破洞")

上衣材质为牛仔布,颜色为白色,风格为简约,图案为刺绣,衣款式为外套,衣样式为破洞。

Out[18]:

[('类型#上衣\\*材质#牛仔布\\*颜色#白色\\*风格#简约\\*图案#刺绣\\*衣样式#外套\\*衣款式#破洞','上衣材质为牛仔布,颜色为白色,风格为简约,图案为刺绣,衣款式为外套,衣样式为破洞。')]

In [22]:

# After fine-tuning
import os
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

os.environ['CUDA_VISIBLE_DEVICES'] = '1'
model_path = "/mnt/e/ai-lab/ChatGLM2-6B"

# Load the model with a prefix encoder of length pre_seq_len
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained(model_path, config=config, trust_remote_code=True)

# Load the trained prefix-encoder weights from the P-Tuning v2 checkpoint
prefix_state_dict = torch.load(os.path.join("/mnt/e/AI-lab/ChatGLM2-6B/temp/ptuning/output/adgen-chatglm2-6b-pt-128-0.02/checkpoint-1000", "pytorch_model.bin"))
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
    if k.startswith("transformer.prefix_encoder."):
        new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)

# Keep the backbone in half precision and the prefix encoder in FP32
model = model.half().cuda()
model.transformer.prefix_encoder.float()
model = model.eval()

# Print model output as Markdown
from IPython.display import display, Markdown, clear_output

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

response, history = model.chat(tokenizer, "类型#上衣*颜色#黑白*风格#简约*风格#休闲*图案#条纹*衣样式#风衣*衣样式#外套", history=[])
print(response)
# display_answer(model, "类型#上衣\*材质#牛仔布\*颜色#白色\*风格#简约\*图案#刺绣\*衣样式#外套\*衣款式#破洞")
response, history = model.chat(tokenizer, "风衣有什么特征呢", history=[])
print(response)
response, history = model.chat(tokenizer, "日常休闲一般穿什么风格的衣服好呢?", history=[])
print(response)

Loading checkpoint shards: 100% 7/7 [02:39<00:00, 23.82s/it]

Some weights of ChatGLMForConditionalGeneration were not initialized from the model checkpoint at /mnt/e/ai-lab/ChatGLM2-6B and are newly initialized: ['transformer.prefix_encoder.embedding.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
简约的条纹风衣,在黑白两色的搭配下,看起来非常的干练利落。经典的条纹元素,带来一种简约休闲的时尚感,将女性优雅的气质完美展现出来。
这款风衣是经典的风衣款式,采用优质的面料制作,质感舒适。在设计上,风衣采用经典的翻领设计,修饰颈部曲线,让你看起来更加优雅。风衣前襟采用斜线处理,整体看起来更加有设计感。
休闲风格是生活中不可或缺的,无论是在职场还是日常休闲,它都是一种很受欢迎的时尚元素。对于休闲的衣装来说,它一般都具有很亲和的气质,可以搭配出各种不同的风格。像这款休闲的连衣裙,它采用柔软的面料,穿着舒适亲肤,而且可以轻松的搭配出各种不同的风格。

介绍 本示例使用sim相关接口&#xff0c;展示了电话服务中SIM卡相关功能&#xff0c;包含SIM卡的服务提供商、ISO国家码、归属PLMN号信息&#xff0c;以及默认语音卡功能。 效果预览 使用说明&#xff1a; 1.若SIM卡槽1插入SIM卡则SIM卡1区域显示为蓝色&#xff0c;否则默认…