(Notes) InternLM (书生·浦语) LLM Hands-On Camp, Season 3 (Cohort 11) – Basic Island, Level 5: Getting to Know XTuner by Fine-Tuning a Personal Assistant

Student handbook: https://aicarrier.feishu.cn/wiki/ZcgkwqteZi9s4ZkYr0Gcayg1n1g?open_in_browser=true
Course video: https://www.bilibili.com/video/BV1tz421B72y/
Course documentation:
https://github.com/InternLM/Tutorial/tree/camp3/docs/L1/XTuner
Level assignment: https://github.com/InternLM/Tutorial/blob/camp3/docs/L1/XTuner/task.md
Dev machine platform: https://studio.intern-ai.org.cn/
Dev machine platform intro: https://aicarrier.feishu.cn/wiki/GQ1Qwxb3UiQuewk8BVLcuyiEnHe


Preparation

# 1. Create the virtual environment
# Create a dev machine: Cuda12.2-conda
# Clone the tutorial repo
mkdir -p /root/InternLM/Tutorial
git clone -b camp3 https://github.com/InternLM/Tutorial /root/InternLM/Tutorial

# Create the virtual environment
conda create -n xtuner0121 python=3.10 -y

# Activate it (note: all subsequent steps must run inside this environment)
conda activate xtuner0121

# Install the required libraries
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia -y

# Install the remaining dependencies
pip install transformers==4.39.3
pip install streamlit==1.36.0
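Before going further it is worth confirming that the pinned versions installed cleanly and that PyTorch can actually see the GPU. The check below is a minimal sketch of my own, not part of the course material:

# check_env.py -- quick sanity check for the xtuner0121 environment
import torch
import transformers
import streamlit

print('torch:', torch.__version__)                # expect 2.1.2
print('transformers:', transformers.__version__)  # expect 4.39.3
print('streamlit:', streamlit.__version__)        # expect 1.36.0
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))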
# 2. Install XTuner
# Create a directory for the source code
mkdir -p /root/InternLM/code
cd /root/InternLM/code
git clone -b v0.1.21 https://github.com/InternLM/XTuner /root/InternLM/code/XTuner

# Enter the source directory
cd /root/InternLM/code/XTuner
conda activate xtuner0121

# Run the installation
pip install -e '.[deepspeed]'
# A faster install via a mirror:
# pip install -e '.[deepspeed]' -i https://mirrors.aliyun.com/pypi/simple/
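To confirm the editable install worked, a one-liner of my own (assuming the xtuner package exposes __version__, as v0.1.x does) is enough:

# Run inside the xtuner0121 environment
import xtuner
print('xtuner:', xtuner.__version__)  # expect 0.1.21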
# 3. Prepare the model
# Create a directory to hold all fine-tuning materials; all later steps happen here
mkdir -p /root/InternLM/XTuner
cd /root/InternLM/XTuner
mkdir -p Shanghai_AI_Laboratory
ln -s /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b Shanghai_AI_Laboratory/internlm2-chat-1_8b

# Use the tree command to inspect the directory structure
apt-get install -y tree
tree -l
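A quick way to confirm the symlinked model is actually usable is to load its tokenizer. This is a sketch of my own, using the path created above:

from transformers import AutoTokenizer

model_path = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
print(tokenizer('请介绍一下你自己'))  # should print input_ids without errors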
# 4. xtuner_streamlit_demo.py
# tools/xtuner_streamlit_demo.py
import copy
import warnings
from dataclasses import asdict, dataclass
from typing import Callable, List, Optional

import streamlit as st
import torch
from torch import nn
from transformers.generation.utils import (LogitsProcessorList,
                                           StoppingCriteriaList)
from transformers.utils import logging

from transformers import AutoTokenizer, AutoModelForCausalLM  # isort: skip

logger = logging.get_logger(__name__)

model_name_or_path = "/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b"


@dataclass
class GenerationConfig:
    # this config is used for chat to provide more diversity
    max_length: int = 2048
    top_p: float = 0.75
    temperature: float = 0.1
    do_sample: bool = True
    repetition_penalty: float = 1.000


@torch.inference_mode()
def generate_interactive(
    model,
    tokenizer,
    prompt,
    generation_config: Optional[GenerationConfig] = None,
    logits_processor: Optional[LogitsProcessorList] = None,
    stopping_criteria: Optional[StoppingCriteriaList] = None,
    prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor],
                                                List[int]]] = None,
    additional_eos_token_id: Optional[int] = None,
    **kwargs,
):
    inputs = tokenizer([prompt], padding=True, return_tensors='pt')
    input_length = len(inputs['input_ids'][0])
    for k, v in inputs.items():
        inputs[k] = v.cuda()
    input_ids = inputs['input_ids']
    _, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
    if generation_config is None:
        generation_config = model.generation_config
    generation_config = copy.deepcopy(generation_config)
    model_kwargs = generation_config.update(**kwargs)
    bos_token_id, eos_token_id = (  # noqa: F841  # pylint: disable=W0612
        generation_config.bos_token_id,
        generation_config.eos_token_id,
    )
    if isinstance(eos_token_id, int):
        eos_token_id = [eos_token_id]
    if additional_eos_token_id is not None:
        eos_token_id.append(additional_eos_token_id)
    has_default_max_length = kwargs.get(
        'max_length') is None and generation_config.max_length is not None
    if has_default_max_length and generation_config.max_new_tokens is None:
        warnings.warn(
            f"Using 'max_length''s default ({repr(generation_config.max_length)}) \
to control the generation length. "
            'This behaviour is deprecated and will be removed from the \
config in v5 of Transformers -- we'
            ' recommend using `max_new_tokens` to control the maximum \
length of the generation.',
            UserWarning,
        )
    elif generation_config.max_new_tokens is not None:
        generation_config.max_length = generation_config.max_new_tokens + \
            input_ids_seq_length
        if not has_default_max_length:
            logger.warn(  # pylint: disable=W4902
                f"Both 'max_new_tokens' (={generation_config.max_new_tokens}) "
                f"and 'max_length'(={generation_config.max_length}) seem to "
                "have been set. 'max_new_tokens' will take precedence. "
                'Please refer to the documentation for more information. '
                '(https://huggingface.co/docs/transformers/main/'
                'en/main_classes/text_generation)',
                UserWarning,
            )

    if input_ids_seq_length >= generation_config.max_length:
        input_ids_string = 'input_ids'
        logger.warning(
            f"Input length of {input_ids_string} is {input_ids_seq_length}, "
            f"but 'max_length' is set to {generation_config.max_length}. "
            'This can lead to unexpected behavior. You should consider'
            " increasing 'max_new_tokens'.")

    # 2. Set generation parameters if not already defined
    logits_processor = logits_processor if logits_processor is not None \
        else LogitsProcessorList()
    stopping_criteria = stopping_criteria if stopping_criteria is not None \
        else StoppingCriteriaList()

    logits_processor = model._get_logits_processor(
        generation_config=generation_config,
        input_ids_seq_length=input_ids_seq_length,
        encoder_input_ids=input_ids,
        prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
        logits_processor=logits_processor,
    )

    stopping_criteria = model._get_stopping_criteria(
        generation_config=generation_config,
        stopping_criteria=stopping_criteria)
    logits_warper = model._get_logits_warper(generation_config)

    unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
    scores = None
    while True:
        model_inputs = model.prepare_inputs_for_generation(
            input_ids, **model_kwargs)
        # forward pass to get next token
        outputs = model(
            **model_inputs,
            return_dict=True,
            output_attentions=False,
            output_hidden_states=False,
        )

        next_token_logits = outputs.logits[:, -1, :]

        # pre-process distribution
        next_token_scores = logits_processor(input_ids, next_token_logits)
        next_token_scores = logits_warper(input_ids, next_token_scores)

        # sample
        probs = nn.functional.softmax(next_token_scores, dim=-1)
        if generation_config.do_sample:
            next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
        else:
            next_tokens = torch.argmax(probs, dim=-1)

        # update generated ids, model inputs, and length for next step
        input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
        model_kwargs = model._update_model_kwargs_for_generation(
            outputs, model_kwargs, is_encoder_decoder=False)
        unfinished_sequences = unfinished_sequences.mul(
            (min(next_tokens != i for i in eos_token_id)).long())

        output_token_ids = input_ids[0].cpu().tolist()
        output_token_ids = output_token_ids[input_length:]
        for each_eos_token_id in eos_token_id:
            if output_token_ids[-1] == each_eos_token_id:
                output_token_ids = output_token_ids[:-1]
        response = tokenizer.decode(output_token_ids)

        yield response
        # stop when each sentence is finished
        # or if we exceed the maximum length
        if unfinished_sequences.max() == 0 or stopping_criteria(
                input_ids, scores):
            break


def on_btn_click():
    del st.session_state.messages


@st.cache_resource
def load_model():
    model = (AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        trust_remote_code=True).to(torch.bfloat16).cuda())
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,
                                              trust_remote_code=True)
    return model, tokenizer


def prepare_generation_config():
    with st.sidebar:
        max_length = st.slider('Max Length',
                               min_value=8,
                               max_value=32768,
                               value=2048)
        top_p = st.slider('Top P', 0.0, 1.0, 0.75, step=0.01)
        temperature = st.slider('Temperature', 0.0, 1.0, 0.1, step=0.01)
        st.button('Clear Chat History', on_click=on_btn_click)

    generation_config = GenerationConfig(max_length=max_length,
                                         top_p=top_p,
                                         temperature=temperature)

    return generation_config


user_prompt = '<|im_start|>user\n{user}<|im_end|>\n'
robot_prompt = '<|im_start|>assistant\n{robot}<|im_end|>\n'
cur_query_prompt = '<|im_start|>user\n{user}<|im_end|>\n\
<|im_start|>assistant\n'


def combine_history(prompt):
    messages = st.session_state.messages
    meta_instruction = ('')
    total_prompt = f"<s><|im_start|>system\n{meta_instruction}<|im_end|>\n"
    for message in messages:
        cur_content = message['content']
        if message['role'] == 'user':
            cur_prompt = user_prompt.format(user=cur_content)
        elif message['role'] == 'robot':
            cur_prompt = robot_prompt.format(robot=cur_content)
        else:
            raise RuntimeError
        total_prompt += cur_prompt
    total_prompt = total_prompt + cur_query_prompt.format(user=prompt)
    return total_prompt


def main():
    # torch.cuda.empty_cache()
    print('load model begin.')
    model, tokenizer = load_model()
    print('load model end.')

    st.title('InternLM2-Chat-1.8B')

    generation_config = prepare_generation_config()

    # Initialize chat history
    if 'messages' not in st.session_state:
        st.session_state.messages = []

    # Display chat messages from history on app rerun
    for message in st.session_state.messages:
        with st.chat_message(message['role'], avatar=message.get('avatar')):
            st.markdown(message['content'])

    # Accept user input
    if prompt := st.chat_input('What is up?'):
        # Display user message in chat message container
        with st.chat_message('user'):
            st.markdown(prompt)
        real_prompt = combine_history(prompt)
        # Add user message to chat history
        st.session_state.messages.append({
            'role': 'user',
            'content': prompt,
        })

        with st.chat_message('robot'):
            message_placeholder = st.empty()
            for cur_response in generate_interactive(
                    model=model,
                    tokenizer=tokenizer,
                    prompt=real_prompt,
                    additional_eos_token_id=92542,
                    **asdict(generation_config),
            ):
                # Display robot response in chat message container
                message_placeholder.markdown(cur_response + '▌')
            message_placeholder.markdown(cur_response)
        # Add robot response to chat history
        st.session_state.messages.append({
            'role': 'robot',
            'content': cur_response,  # pylint: disable=undefined-loop-variable
        })
        torch.cuda.empty_cache()


if __name__ == '__main__':
    main()
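The demo assembles the conversation into the InternLM2 chat template by hand (see combine_history above). The following sketch of my own, outside Streamlit, shows what the final prompt string looks like for one prior turn plus a new query:

user_prompt = '<|im_start|>user\n{user}<|im_end|>\n'
robot_prompt = '<|im_start|>assistant\n{robot}<|im_end|>\n'
cur_query_prompt = '<|im_start|>user\n{user}<|im_end|>\n<|im_start|>assistant\n'

# one earlier user/assistant exchange, then a new question
total = '<s><|im_start|>system\n<|im_end|>\n'
total += user_prompt.format(user='你好')
total += robot_prompt.format(robot='你好,有什么可以帮你?')
total += cur_query_prompt.format(user='请介绍一下你自己')
print(total)  # the raw string fed to generate_interactive()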

Chatting with the model before fine-tuning

conda activate xtuner0121
streamlit run /root/InternLM/Tutorial/tools/xtuner_streamlit_demo.py
# PowerShell (local machine): forward the port
ssh -CNg -L 8501:127.0.0.1:8501 root@ssh.intern-ai.org.cn -p 34911
# Browser: http://127.0.0.1:8501

Instruction-following fine-tuning

Dataset preparation

cd /root/InternLM/XTuner
mkdir -p datas
touch datas/assistant.json
touch xtuner_generate_assistant.py
cp /root/InternLM/Tutorial/tools/xtuner_generate_assistant.py ./
# xtuner_generate_assistant.py
import json

# Set the user's name
name = '伍鲜同志'
# How many times to duplicate the seed data
n = 8000

# Initialize the data with two seed conversations
data = [
    {"conversation": [{"input": "请介绍一下你自己", "output": "我是{}的小助手,内在是上海AI实验室书生·浦语的1.8B大模型哦".format(name)}]},
    {"conversation": [{"input": "你在实战营做什么", "output": "我在这里帮助{}完成XTuner微调个人小助手的任务".format(name)}]}
]

# Duplicate the two seed conversations n times each
for i in range(n):
    data.append(data[0])
    data.append(data[1])

# Write the data to 'datas/assistant.json'
with open('datas/assistant.json', 'w', encoding='utf-8') as f:
    # ensure_ascii=False keeps the Chinese characters readable
    # indent=4 pretty-prints the file
    json.dump(data, f, ensure_ascii=False, indent=4)
# Change name to your own (line 4 of the script)
- name = '伍鲜同志'
+ name = "朱娅梅"
# Generate the data file
cd /root/InternLM/XTuner
conda activate xtuner0121
python xtuner_generate_assistant.py
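To double-check the generated file before training, a small verification sketch of my own:

import json

with open('datas/assistant.json', encoding='utf-8') as f:
    data = json.load(f)
print(len(data))  # expect 16002: 2 seed entries + 8000 duplicates of each
print(data[0]['conversation'][0]['input'])   # 请介绍一下你自己
print(data[0]['conversation'][0]['output'])  # should contain your name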

Modifying the config file

# List the candidate config files
conda activate xtuner0121
xtuner list-cfg -p internlm2

cd /root/InternLM/XTuner
conda activate xtuner0121
xtuner copy-cfg internlm2_chat_1_8b_qlora_alpaca_e3 .

#######################################################################
#                          PART 1  Settings                           #
#######################################################################
- pretrained_model_name_or_path = 'internlm/internlm2-chat-1_8b'
+ pretrained_model_name_or_path = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'

- alpaca_en_path = 'tatsu-lab/alpaca'
+ alpaca_en_path = 'datas/assistant.json'

evaluation_inputs = [
-    '请给我介绍五个上海的景点', 'Please tell me five scenic spots in Shanghai'
+    '请介绍一下你自己', 'Please introduce yourself'
]

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
alpaca_en = dict(
    type=process_hf_dataset,
-   dataset=dict(type=load_dataset, path=alpaca_en_path),
+   dataset=dict(type=load_dataset, path='json', data_files=dict(train=alpaca_en_path)),
    tokenizer=tokenizer,
    max_length=max_length,
-   dataset_map_fn=alpaca_map_fn,
+   dataset_map_fn=None,
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length,
    use_varlen_attn=use_varlen_attn)
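Because the config now points load_dataset at a local JSON file and sets dataset_map_fn=None, the data must already be in XTuner's conversation format (which the generator script above produces). A quick check of my own that the file loads the way the config expects:

from datasets import load_dataset

ds = load_dataset('json', data_files=dict(train='datas/assistant.json'))
print(ds['train'][0])  # expect a dict with a 'conversation' list of input/output pairs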

The complete config file after the modifications:

# configs/internlm2_chat_1_8b_qlora_alpaca_e3_copy.py
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
                            LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)

from xtuner.dataset import process_hf_dataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.map_fns import alpaca_map_fn, template_map_fn_factory
from xtuner.engine.hooks import (DatasetInfoHook, EvaluateChatHook,
                                 VarlenAttnArgsToMessageHubHook)
from xtuner.engine.runner import TrainLoop
from xtuner.model import SupervisedFinetune
from xtuner.parallel.sequence import SequenceParallelSampler
from xtuner.utils import PROMPT_TEMPLATE, SYSTEM_TEMPLATE

#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
pretrained_model_name_or_path = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'
use_varlen_attn = False

# Data
alpaca_en_path = 'datas/assistant.json'
prompt_template = PROMPT_TEMPLATE.internlm2_chat
max_length = 2048
pack_to_max_length = True

# parallel
sequence_parallel_size = 1

# Scheduler & Optimizer
batch_size = 1  # per_device
accumulative_counts = 16
accumulative_counts *= sequence_parallel_size
dataloader_num_workers = 0
max_epochs = 3
optim_type = AdamW
lr = 2e-4
betas = (0.9, 0.999)
weight_decay = 0
max_norm = 1  # grad clip
warmup_ratio = 0.03

# Save
save_steps = 500
save_total_limit = 2  # Maximum checkpoints to keep (-1 means unlimited)

# Evaluate the generation performance during the training
evaluation_freq = 500
SYSTEM = SYSTEM_TEMPLATE.alpaca
evaluation_inputs = [
    '请介绍一下你自己', 'Please introduce yourself'
]

#######################################################################
#                      PART 2  Model & Tokenizer                      #
#######################################################################
tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path=pretrained_model_name_or_path,
    trust_remote_code=True,
    padding_side='right')

model = dict(
    type=SupervisedFinetune,
    use_varlen_attn=use_varlen_attn,
    llm=dict(
        type=AutoModelForCausalLM.from_pretrained,
        pretrained_model_name_or_path=pretrained_model_name_or_path,
        trust_remote_code=True,
        torch_dtype=torch.float16,
        quantization_config=dict(
            type=BitsAndBytesConfig,
            load_in_4bit=True,
            load_in_8bit=False,
            llm_int8_threshold=6.0,
            llm_int8_has_fp16_weight=False,
            bnb_4bit_compute_dtype=torch.float16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type='nf4')),
    lora=dict(
        type=LoraConfig,
        r=64,
        lora_alpha=16,
        lora_dropout=0.1,
        bias='none',
        task_type='CAUSAL_LM'))

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
alpaca_en = dict(
    type=process_hf_dataset,
    dataset=dict(type=load_dataset, path='json', data_files=dict(train=alpaca_en_path)),
    tokenizer=tokenizer,
    max_length=max_length,
    dataset_map_fn=None,
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length,
    use_varlen_attn=use_varlen_attn)

sampler = SequenceParallelSampler \
    if sequence_parallel_size > 1 else DefaultSampler

train_dataloader = dict(
    batch_size=batch_size,
    num_workers=dataloader_num_workers,
    dataset=alpaca_en,
    sampler=dict(type=sampler, shuffle=True),
    collate_fn=dict(type=default_collate_fn, use_varlen_attn=use_varlen_attn))

#######################################################################
#                    PART 4  Scheduler & Optimizer                    #
#######################################################################
# optimizer
optim_wrapper = dict(
    type=AmpOptimWrapper,
    optimizer=dict(
        type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
    clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
    accumulative_counts=accumulative_counts,
    loss_scale='dynamic',
    dtype='float16')

# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md  # noqa: E501
param_scheduler = [
    dict(
        type=LinearLR,
        start_factor=1e-5,
        by_epoch=True,
        begin=0,
        end=warmup_ratio * max_epochs,
        convert_to_iter_based=True),
    dict(
        type=CosineAnnealingLR,
        eta_min=0.0,
        by_epoch=True,
        begin=warmup_ratio * max_epochs,
        end=max_epochs,
        convert_to_iter_based=True)
]

# train, val, test setting
train_cfg = dict(type=TrainLoop, max_epochs=max_epochs)

#######################################################################
#                           PART 5  Runtime                           #
#######################################################################
# Log the dialogue periodically during the training process, optional
custom_hooks = [
    dict(type=DatasetInfoHook, tokenizer=tokenizer),
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        every_n_iters=evaluation_freq,
        evaluation_inputs=evaluation_inputs,
        system=SYSTEM,
        prompt_template=prompt_template)
]

if use_varlen_attn:
    custom_hooks += [dict(type=VarlenAttnArgsToMessageHubHook)]

# configure default hooks
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type=IterTimerHook),
    # print log every 10 iterations.
    logger=dict(type=LoggerHook, log_metric_by_epoch=False, interval=10),
    # enable the parameter scheduler.
    param_scheduler=dict(type=ParamSchedulerHook),
    # save checkpoint per `save_steps`.
    checkpoint=dict(
        type=CheckpointHook,
        by_epoch=False,
        interval=save_steps,
        max_keep_ckpts=save_total_limit),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type=DistSamplerSeedHook),
)

# configure environment
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)

# set visualizer
visualizer = None

# set log level
log_level = 'INFO'

# load from which checkpoint
load_from = None

# whether to resume training from the loaded checkpoint
resume = False

# Defaults to use random seed and disable `deterministic`
randomness = dict(seed=None, deterministic=False)

# set log processor
log_processor = dict(by_epoch=False)
cd /root/InternLM/XTuner
cp /root/InternLM/Tutorial/configs/internlm2_chat_1_8b_qlora_alpaca_e3_copy.py ./

Start fine-tuning

cd /root/InternLM/XTuner
conda activate xtuner0121
xtuner train ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py

Model conversion

cd /root/InternLM/XTuner
conda activate xtuner0121

# First grab the most recently saved .pth file
pth_file=`ls -t ./work_dirs/internlm2_chat_1_8b_qlora_alpaca_e3_copy/*.pth | head -n 1`
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
xtuner convert pth_to_hf ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py ${pth_file} ./hf
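The ./hf directory produced here is a LoRA adapter in HuggingFace format, not a full model. Before merging, it can be smoke-tested against the base model with peft (a sketch of my own, assuming peft is available as an XTuner dependency):

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'
model = AutoModelForCausalLM.from_pretrained(
    base, trust_remote_code=True, torch_dtype=torch.float16).cuda()
model = PeftModel.from_pretrained(model, './hf')  # attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
print('adapter loaded:', type(model).__name__)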
Model merging

cd /root/InternLM/XTuner
conda activate xtuner0121
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
xtuner convert merge /root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b ./hf ./merged --max-shard-size 2GB
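After merging, ./merged is a standalone HuggingFace model. A minimal smoke test of my own before wiring it into the Streamlit demo (assuming the InternLM2 remote-code chat() helper):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = '/root/InternLM/XTuner/merged'
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, trust_remote_code=True, torch_dtype=torch.float16).cuda().eval()
response, _ = model.chat(tokenizer, '请介绍一下你自己')
print(response)  # should mention the name used in the training data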

Modifying and launching xtuner_streamlit_demo.py

# Edit line 18 of the script directly
- model_name_or_path = "/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b"
+ model_name_or_path = "/root/InternLM/XTuner/merged"

conda activate xtuner0121
streamlit run /root/InternLM/Tutorial/tools/xtuner_streamlit_demo.py

# PowerShell (local machine) port forwarding:
ssh -CNg -L 8501:127.0.0.1:8501 root@ssh.intern-ai.org.cn -p 43551
# Browser: http://127.0.0.1:8501
