MindSpore Learning Notes 2-04: LLM Principles and Practice -- Text Decoding Principles, with MindNLP as an Example

Abstract:

Describes how the MindSpore AI framework generates text by selecting high-probability words with greedy search and beam search, the steps involved, and the various techniques tried to address problems such as repetition.

I could not follow this section at all at first; my rough understanding is that it is about how a model builds sentences by picking words from a constrained candidate pool.

1. Concepts

Autoregressive language model:

The probability distribution of a text sequence is decomposed into the product of each word's conditional probability given its preceding context:

        P(w_{1:T} | W_0) = ∏_{t=1}^{T} P(w_t | w_{1:t-1}, W_0), with w_{1:0} = ∅

        W_0: the initial context word sequence

        T: the number of time steps (length of the generated sequence)

        Generation stops once the EOS token is produced.
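As a quick illustration of this factorization, here is a minimal sketch with made-up per-step probabilities (the numbers are illustrative, not from any real model):

import math

# Toy conditionals for the continuation "nice" -> "woman" after W_0 = "The":
# P("nice" | "The") = 0.5, P("woman" | "The", "nice") = 0.4
step_probs = [0.5, 0.4]

# The sequence probability is the product of the per-step conditionals.
seq_prob = math.prod(step_probs)
print(seq_prob)  # 0.2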

Text generation methods provided by MindNLP/Hugging Face Transformers:

Greedy search

At each time step t, output the word with the highest probability:

        w_t = argmax_w P(w | w_{1:t-1})

In the toy example below, greedy search outputs the sequence ("The", "nice", "woman"), whose conditional probability is 0.5 * 0.4 = 0.2.

Drawback: it easily misses high-probability words hidden behind a low-probability predecessor.

For example, "has" (0.9) is hidden behind "dog" (0.4), so the higher-probability path ("The", "dog", "has") = 0.36 is never explored. (Search-tree figure omitted.)
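A minimal sketch of greedy decoding on that toy search tree; only the "nice"/"dog"/"has"/"woman" probabilities come from the example above, the remaining branches are assumed filler:

# Toy next-word distributions (assumed numbers except those quoted above).
tree = {
    ("The",): {"nice": 0.5, "dog": 0.4, "car": 0.1},
    ("The", "nice"): {"woman": 0.4, "house": 0.3, "guy": 0.3},
    ("The", "dog"): {"has": 0.9, "runs": 0.05, "and": 0.05},
}

# Greedy search: take the argmax at every step.
context = ("The",)
prob = 1.0
for _ in range(2):
    dist = tree[context]
    word = max(dist, key=dist.get)   # highest-probability next word
    prob *= dist[word]
    context += (word,)

print(context, prob)
# ('The', 'nice', 'woman') 0.2 -- the better path ('The', 'dog', 'has') = 0.36 is missed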

2. Environment Setup

%%capture captured_output
# The experiment environment has mindspore==2.2.14 preinstalled; change the version number below if you need a different MindSpore version
!pip uninstall mindspore -y
!pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
!pip uninstall mindvision -y
!pip uninstall mindinsight -y

Output:

Found existing installation: mindvision 0.1.0
Uninstalling mindvision-0.1.0:
  Successfully uninstalled mindvision-0.1.0
WARNING: Skipping mindinsight as it is not installed.

# This case was verified against mindnlp 0.3.1; if it fails to run, pin the version with `!pip install mindnlp==0.3.1`
!pip install mindnlp

Output:

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: mindnlp in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (0.3.1)
Requirement already satisfied: mindspore in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2.2.14)
Requirement already satisfied: tqdm in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (4.66.4)
Requirement already satisfied: requests in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2.32.3)
Requirement already satisfied: datasets in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2.20.0)
Requirement already satisfied: evaluate in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.4.2)
Requirement already satisfied: tokenizers in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.19.1)
Requirement already satisfied: safetensors in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.4.3)
Requirement already satisfied: sentencepiece in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.2.0)
Requirement already satisfied: regex in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2024.5.15)
Requirement already satisfied: addict in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (2.4.0)
Requirement already satisfied: ml-dtypes in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.4.0)
Requirement already satisfied: pyctcdecode in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.5.0)
Requirement already satisfied: jieba in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (0.42.1)
Requirement already satisfied: pytest==7.2.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindnlp) (7.2.0)
Requirement already satisfied: attrs>=19.2.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (23.2.0)
Requirement already satisfied: iniconfig in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (2.0.0)
Requirement already satisfied: packaging in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (23.2)
Requirement already satisfied: pluggy<2.0,>=0.12 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (1.5.0)
Requirement already satisfied: exceptiongroup>=1.0.0rc8 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (1.2.0)
Requirement already satisfied: tomli>=1.0.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pytest==7.2.0->mindnlp) (2.0.1)
Requirement already satisfied: filelock in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (3.15.3)
Requirement already satisfied: numpy>=1.17 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (1.26.4)
Requirement already satisfied: pyarrow>=15.0.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (16.1.0)
Requirement already satisfied: pyarrow-hotfix in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (0.6)
Requirement already satisfied: dill<0.3.9,>=0.3.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (0.3.8)
Requirement already satisfied: pandas in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (2.2.2)
Requirement already satisfied: xxhash in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (3.4.1)
Requirement already satisfied: multiprocess in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (0.70.16)
Requirement already satisfied: fsspec<=2024.5.0,>=2023.1.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from fsspec[http]<=2024.5.0,>=2023.1.0->datasets->mindnlp) (2024.5.0)
Requirement already satisfied: aiohttp in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (3.9.5)
Requirement already satisfied: huggingface-hub>=0.21.2 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (0.23.4)
Requirement already satisfied: pyyaml>=5.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from datasets->mindnlp) (6.0.1)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from requests->mindnlp) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from requests->mindnlp) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from requests->mindnlp) (2.2.2)
Requirement already satisfied: certifi>=2017.4.17 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from requests->mindnlp) (2024.6.2)
Requirement already satisfied: protobuf>=3.13.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (5.27.1)
Requirement already satisfied: asttokens>=2.0.4 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (2.0.5)
Requirement already satisfied: pillow>=6.2.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (10.3.0)
Requirement already satisfied: scipy>=1.5.4 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (1.13.1)
Requirement already satisfied: psutil>=5.6.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (5.9.0)
Requirement already satisfied: astunparse>=1.6.3 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from mindspore->mindnlp) (1.6.3)
Requirement already satisfied: pygtrie<3.0,>=2.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pyctcdecode->mindnlp) (2.5.0)
Requirement already satisfied: hypothesis<7,>=6.14 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pyctcdecode->mindnlp) (6.104.2)
Requirement already satisfied: six in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from asttokens>=2.0.4->mindspore->mindnlp) (1.16.0)
Requirement already satisfied: wheel<1.0,>=0.23.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from astunparse>=1.6.3->mindspore->mindnlp) (0.43.0)
Requirement already satisfied: aiosignal>=1.1.2 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (1.3.1)
Requirement already satisfied: frozenlist>=1.1.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (1.4.1)
Requirement already satisfied: multidict<7.0,>=4.5 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (6.0.5)
Requirement already satisfied: yarl<2.0,>=1.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (1.9.4)
Requirement already satisfied: async-timeout<5.0,>=4.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from aiohttp->datasets->mindnlp) (4.0.3)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from huggingface-hub>=0.21.2->datasets->mindnlp) (4.11.0)
Requirement already satisfied: sortedcontainers<3.0.0,>=2.1.0 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from hypothesis<7,>=6.14->pyctcdecode->mindnlp) (2.4.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pandas->datasets->mindnlp) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pandas->datasets->mindnlp) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages (from pandas->datasets->mindnlp) (2024.1)

[notice] A new release of pip is available: 24.1 -> 24.1.1
[notice] To update, run: python -m pip install --upgrade pip

3. Greedy Search

# greedy_search

from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

# generate text until the output length (which includes the context length) reaches 50
greedy_output = model.generate(input_ids, max_length=50)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))

Output:

Building prefix dict from the default dictionary ...
Dumping model to file cache /tmp/jieba.cache
Loading model cost 1.006 seconds.
Prefix dict has been built successfully.
(tokenizer and model files downloaded from the mirror; the model checkpoint is 523M)
Output:
-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll

4. Beam Search

At each time step, beam search keeps the num_beams most likely hypotheses and finally chooses the sequence with the highest overall probability.

With num_beams=2 on the toy tree above:

("The", "dog",  "has"  ) : 0.4 * 0.9 = 0.36

("The", "nice", "woman") : 0.5 * 0.4 = 0.20

Advantage: it preserves the best path to some extent (see the sketch after this list).

Drawbacks:

        1. It does not solve the repetition problem;

        2. It performs poorly on open-ended generation.
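A minimal sketch of the beam update with num_beams=2 on the same toy tree (EOS handling and length normalization omitted; branch probabilities other than those quoted above are assumed):

tree = {
    ("The",): {"nice": 0.5, "dog": 0.4, "car": 0.1},
    ("The", "nice"): {"woman": 0.4, "house": 0.3, "guy": 0.3},
    ("The", "dog"): {"has": 0.9, "runs": 0.05, "and": 0.05},
}

num_beams = 2
beams = [(("The",), 1.0)]
for _ in range(2):
    candidates = []
    for seq, prob in beams:
        for word, p in tree.get(seq, {}).items():
            candidates.append((seq + (word,), prob * p))
    # keep only the num_beams highest-probability hypotheses
    beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:num_beams]

for seq, prob in beams:
    print(seq, round(prob, 4))
# ('The', 'dog', 'has') 0.36  -- beam search recovers the path that greedy search missed
# ('The', 'nice', 'woman') 0.2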

from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

# activate beam search and early_stopping
beam_output = model.generate(input_ids, max_length=50, num_beams=5, early_stopping=True)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
print(100 * '-')

# set no_repeat_ngram_size to 2
beam_output = model.generate(input_ids, max_length=50, num_beams=5, no_repeat_ngram_size=2, early_stopping=True)

print("Beam search with ngram, Output:\n" + 100 * '-')
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
print(100 * '-')

# set num_return_sequences > 1
beam_outputs = model.generate(input_ids, max_length=50, num_beams=5, no_repeat_ngram_size=2, num_return_sequences=5, early_stopping=True)

# now we have 5 output sequences
print("return_num_sequences, Output:\n" + 100 * '-')
for i, beam_output in enumerate(beam_outputs):
    print("{}: {}".format(i, tokenizer.decode(beam_output, skip_special_tokens=True)))
print(100 * '-')

Output:

Output:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again." "I don't think I'll ever be able to walk with her again." "I don't think I
-------------------------------------------------------------------------------------------------
Beam search with ngram, Output:
-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again." "I'm not sure what to say to that," she said. "I mean, it's not like I'm
-------------------------------------------------------------------------------------------------
return_num_sequences, Output:
-------------------------------------------------------------------------------------------------
0: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again." "I'm not sure what to say to that," she said. "I mean, it's not like I'm
1: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again." "I'm not sure what to say to that," she said. "I mean, it's not like she's
2: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again." "I'm not sure what to say to that," she said. "I mean, it's not like we're
3: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again." "I'm not sure what to say to that," she said. "I mean, it's not like I've
4: I enjoy walking with my cute dog, but I don't think I'll ever be able to walk with her again." "I'm not sure what to say to that," she said. "I mean, it's not like I can
-------------------------------------------------------------------------------------------------

5. Problems with Beam Search

Repetition

n-gram penalty:

Set the probability of any candidate word that would repeat an n-gram to 0.

With no_repeat_ngram_size=2, no 2-gram can appear twice in the output.

Note: real text often needs legitimate repetition (e.g., a name mentioned several times), so n-gram penalties must be used with care. A sketch of the mechanism follows.
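A rough sketch of the idea behind the 2-gram penalty (an illustration of the concept, not MindNLP's actual implementation): before each step, collect every token that would complete an already-seen 2-gram, then zero out its probability.

def banned_next_tokens(generated, n=2):
    """Tokens that would repeat an existing n-gram if generated next."""
    prefix = tuple(generated[-(n - 1):])        # the last n-1 tokens
    banned = set()
    for i in range(len(generated) - n + 1):
        if tuple(generated[i:i + n - 1]) == prefix:
            banned.add(generated[i + n - 1])    # token that completed this n-gram
    return banned

tokens = ["I", "am", "sure", "I", "am"]
print(banned_next_tokens(tokens))  # {'sure'} -- "am sure" must not appear twice

During generation, the probabilities of these banned tokens are set to 0 before the next word is chosen.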

Sampling

Randomly select the output word according to the current conditional probability distribution, e.g.:

        "car" ~ P(w | "The"),  "drives" ~ P(w | "The", "car")

Advantage: high diversity in the generated text (a one-step sketch follows).

Drawback: the generated text can be incoherent.
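A minimal sketch of one sampling step over a toy distribution (illustrative numbers):

import random

dist = {"nice": 0.5, "dog": 0.4, "car": 0.1}   # toy P(w | "The")
word = random.choices(list(dist), weights=list(dist.values()), k=1)[0]
print(word)  # usually "nice" or "dog", but "car" can also be drawn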

import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(0)
# activate sampling and deactivate top_k by setting top_k to 0
sample_output = model.generate(input_ids, do_sample=True, max_length=50, top_k=0)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

Output:

-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog Neddy as much as I'd like. Keep up the good work Neddy!" I realized what Neddy meant when he first launched the website. "Thank you so much for joining." I

Temperature

Lowering the softmax temperature makes the distribution P(w | w_{1:t-1}) sharper:

it increases the likelihood of high-probability words

and decreases the likelihood of low-probability words (see the sketch below).
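A minimal numpy sketch of temperature scaling (toy logits; an illustration, not MindNLP's internal code):

import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()              # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, temperature=1.0))  # ~[0.66 0.24 0.10]
print(softmax_with_temperature(logits, temperature=0.7))  # sharper: ~[0.77 0.18 0.05]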

import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(1234)
# activate sampling, deactivate top_k, and use temperature=0.7 to sharpen the distribution
sample_output = model.generate(input_ids, do_sample=True, max_length=50, top_k=0, temperature=0.7)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

Output:

-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog and have never had a problem with her until now. A large dog named Chucky managed to get a few long stretches of grass on her back and ran around with it for about 5 minutes, ran around

Top-K Sampling

Select the K most probable words, renormalize their probabilities, and then sample from those K words (see the filtering sketch after this list).

Limiting the sampling pool to a fixed size K:

  • can let unsuitable words into the pool when the distribution is sharp, producing gibberish
  • can limit the model's creativity when the distribution is flat
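A minimal numpy sketch of the top-K filtering step (toy probabilities; an illustration, not MindNLP's internal code):

import numpy as np

def top_k_filter(probs, k):
    """Keep the k most probable words, zero out the rest, renormalize."""
    probs = np.asarray(probs, dtype=np.float64)
    keep = np.argsort(probs)[-k:]              # indices of the k largest entries
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = [0.5, 0.3, 0.1, 0.06, 0.04]
print(top_k_filter(probs, k=2))  # [0.625 0.375 0. 0. 0.]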
import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(0)
# activate sampling and restrict the pool to the 50 most likely words
sample_output = model.generate(input_ids, do_sample=True, max_length=50, top_k=50)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

Output:

-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog. She's always up for some action, so I have seen her do some stuff with it. Then there's the two of us. The two of us I'm talking about were

Top-P (Nucleus) Sampling

Sample from the smallest set of words whose cumulative probability exceeds p, after renormalizing their probabilities (see the sketch below).

The sampling pool grows and shrinks dynamically with the shape of the next-word distribution.
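A minimal numpy sketch of the top-p filtering step (toy probabilities; an illustration, not MindNLP's internal code):

import numpy as np

def top_p_filter(probs, p):
    """Keep the smallest set of words whose cumulative probability exceeds p."""
    probs = np.asarray(probs, dtype=np.float64)
    order = np.argsort(probs)[::-1]               # most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1   # smallest prefix covering p
    filtered = np.zeros_like(probs)
    filtered[order[:cutoff]] = probs[order[:cutoff]]
    return filtered / filtered.sum()

probs = [0.5, 0.3, 0.1, 0.06, 0.04]
print(top_p_filter(probs, p=0.75))  # keeps 0.5 and 0.3 -> [0.625 0.375 0. 0. 0.]

Unlike top-K, the number of words kept depends on the distribution's shape: a sharp distribution keeps only one or two words, a flat one keeps many.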

import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(0)

# deactivate top_k sampling and sample only from the 92% most likely words
sample_output = model.generate(input_ids, do_sample=True, max_length=50, top_p=0.92, top_k=0)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

Output:

-------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog Neddy as much as I'd like. Keep up the good work Neddy!" I realized what Neddy meant when he first launched the website. "Thank you so much for joining." I

Combining top_k and top_p

import mindspore
from mindnlp.transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("iiBcai/gpt2", mirror='modelscope')

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("iiBcai/gpt2", pad_token_id=tokenizer.eos_token_id, mirror='modelscope')

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='ms')

mindspore.set_seed(0)
# set top_k = 5, top_p = 0.95, and num_return_sequences = 3
sample_outputs = model.generate(input_ids, do_sample=True, max_length=50, top_k=5, top_p=0.95, num_return_sequences=3)

print("Output:\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))

Output:

-------------------------------------------------------------------------------------------------
0: I enjoy walking with my cute dog. "My dog loves the smell of the dog. I'm so happy that she's happy with me." I love to walk with my dog. I'm so happy that she's happy
1: I enjoy walking with my cute dog. I'm a big fan of my cat and her dog, but I don't have the same enthusiasm for her. It's hard not to like her because it is my dog. My husband, who
2: I enjoy walking with my cute dog, but I'm also not sure I would want my dog to walk alone with me. "She also told The Daily Beast that the dog is very protective. "I think she's very protective of
