Turning LLaVA's [single-turn dialogue] into [multi-turn dialogue] (image input / multimodal / multi-turn dialogue / llava)

How to turn LLaVA's single-turn dialogue into a multi-turn dialogue.

  • A note up front
  • Method 1: modify the quick-start code from the official website directly
  • Method 2: combine a bash script with the basic single-turn code
    • 2.1 A review of the basic code from the LLaVA website
    • 2.2 Pseudo multi-turn dialogue (a fresh prompt each time)
    • 2.3 True multi-turn dialogue: each time a new question prompt is sent, the model's previous answers (the dialogue history) are sent along with it; the files can be modified as shown below

A note up front

Look at Method 2! Look at Method 2! Look at Method 2!
Method 2 is more useful and its format is easy to follow; the output files are already set up and written for you.
Method 1 does everything in a single .py file, but it currently does not work well because it uses far too much memory. Method 2, which splits the work into a .py file and a bash script, works better.

Method 1: modify the quick-start code from the official website directly

The single-turn dialogue code from the LLaVA GitHub page, which I was able to run, is shown below:

import os

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # use GPU 0

from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "/data1/yjgroup/tym/lab_sync_mac/LLaVA/checkpoints/llava-v1.6-vicuna-7b"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
    # load_4bit=True,
    # load_8bit=True
)

prompt = "What are the things I should be cautious about when I visit here?"
image_file = "/home/data/yjgroup/fsy/VG_100K/1.jpg"

args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0.2,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512
})()

eval_model(args)

Now I want to go beyond a single multimodal turn on one input image and also support multi-turn dialogue. I can modify the code as follows.
Note, however, that this version is very memory-hungry: every call reloads the model checkpoint, as the repeated "Loading checkpoint shards" lines in the output show.

import os

from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

# Environment variables
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # use GPU 0

# Load the pretrained model
model_path = "/data1/yjgroup/tym/lab_sync_mac/LLaVA/checkpoints/llava-v1.6-vicuna-7b"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path)
)

# Initial prompt and image path
initial_prompt = "What are the things I should be cautious about when I visit here?"
image_file = "/home/data/yjgroup/fsy/VG_100K/1.jpg"

# A class that stores the dialogue context
class MultiTurnDialog:
    def __init__(self, initial_prompt, image_file):
        self.dialog = [{"role": "user", "content": initial_prompt}]
        self.image_file = image_file

    def add_turn(self, role, content):
        self.dialog.append({"role": role, "content": content})

    def get_dialog(self):
        return "\n".join(f"{turn['role'].capitalize()}: {turn['content']}" for turn in self.dialog)

# Initialize the dialogue
multi_turn_dialog = MultiTurnDialog(initial_prompt, image_file)

# Run the model and return its response
def get_model_response(dialog, image_file):
    args = type('Args', (), {
        "model_path": model_path,
        "model_base": None,
        "model_name": get_model_name_from_path(model_path),
        "query": dialog,
        "conv_mode": None,
        "image_file": image_file,
        "sep": ",",
        "temperature": 0.2,
        "top_p": None,
        "num_beams": 1,
        "max_new_tokens": 512
    })()
    return eval_model(args)

# First turn
response = get_model_response(multi_turn_dialog.get_dialog(), multi_turn_dialog.image_file)
print(f"Assistant: {response}")

# Add the first exchange to the dialogue context
multi_turn_dialog.add_turn("assistant", response)

# Example follow-up turns
user_inputs = [
    "Can you tell me more about the safety measures?",
    "What about the local customs and traditions?"
]
for user_input in user_inputs:
    # Add the user input to the dialogue context
    multi_turn_dialog.add_turn("user", user_input)
    # Run the model on the current dialogue context
    current_dialog = multi_turn_dialog.get_dialog()
    response = get_model_response(current_dialog, multi_turn_dialog.image_file)
    print(f"Assistant: {response}")
    # Add the model's response to the dialogue context
    multi_turn_dialog.add_turn("assistant", response)

# Final dialogue context
print("Final dialog context:")
print(multi_turn_dialog.get_dialog())
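As a side note, the transcript string that `get_dialog()` builds can be previewed without loading any model. This minimal sketch uses made-up dialogue turns and the same join logic as the class above:

```python
# Standalone sketch of the transcript format (made-up turns, no model needed)
dialog = [
    {"role": "user", "content": "What should I be cautious about here?"},
    {"role": "assistant", "content": "Watch for traffic and pickpockets."},
    {"role": "user", "content": "Can you tell me more about the safety measures?"},
]

# Same formatting as MultiTurnDialog.get_dialog(): "Role: content", one line per turn
transcript = "\n".join(
    f"{turn['role'].capitalize()}: {turn['content']}" for turn in dialog
)
print(transcript)
```

This flattened "User: … / Assistant: …" string is exactly what gets passed as the model's query on each turn.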

Model output:

(The output below shows one thing: the model's answers were not being captured, so I revised the code again.)

Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:07<00:00, 2.35s/it]
You are using a model of type llava to instantiate a model of type llava_llama. This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:06<00:00, 2.25s/it]
When visiting a place like the one shown in the image, which appears to be a city street with parked cars, trees, and pedestrians, here are some general things to be cautious about:Pedestrian Safety: Always be aware of your surroundings and watch for vehicles when crossing streets.Parking Regulations: If you're parking your vehicle, make sure to follow the parking signs and regulations to avoid fines or towing.Traffic Signals: Pay attention to traffic lights and crosswalks to ensure you're following the rules of the road.Personal Safety: Keep an eye on your belongings to prevent theft, and be mindful of your personal space.Local Laws and Customs: Familiarize yourself with local laws and customs to avoid inadvertently breaking the law or offending someone.Weather Conditions: Depending on the season, be prepared for the weather. If it's cold, dress appropriately. If it's hot, stay hydrated.Health Precautions: Depending on the current health advisories, you may need to take precautions such as wearing a mask or using hand sanitizer.Communication: If you're in a foreign country, have a way to communicate in case of an emergency, such as a local SIM card for your phone or a translation app.Emergency Services: Know the local emergency numbers and the location of the nearest embassy or consulate if you're traveling internationally.Cultural Sensitivity: Be respectful of the local culture and traditions. This includes dress codes, behavior in public spaces, and respect for religious sites.Remember, these are general tips and the specific precautions you should take may vary depending on the specific location and your own personal circumstances.
You are using a model of type llava to instantiate a model of type llava_llama. This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.65it/s]
You are using a model of type llava to instantiate a model of type llava_llama. This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:07<00:00, 2.47s/it]
When visiting an area like the one shown in the image, which appears to be a city street with a sidewalk, here are some general things to be cautious about:Traffic: Always be aware of the traffic around you, especially if you're walking near the road. Cars and bicycles can move quickly, so make sure to stay on the sidewalk and follow traffic rules.Pedestrian Safety: Look both ways before crossing the street, even if there's a crosswalk. Be mindful of vehicles that may not see you.Personal Safety: Keep an eye on your belongings to prevent theft. In busy areas, pickpockets can be a concern.Local Laws and Customs: Be aware of local laws and customs. Some places have specific rules about littering, smoking, or drinking in public.Weather Conditions: Depending on the season, be prepared for the weather. If it's cold, wear appropriate clothing. If it's hot, stay hydrated and wear sunscreen.Health Precautions: Depending on the region, there may be health advisories or precautions to take, such as vaccinations or avoiding certain foods.Communication: Have a way to communicate in case of an emergency. This could be a local phone, a map, or a translation app.Emergency Services: Know the local emergency numbers and the location of the nearest embassy or consulate if you're traveling internationally.Scams: Be wary of common tourist scams. They can range from overpriced services to more elaborate cons.Cultural Sensitivity: Be respectful of local customs and traditions. This can include dress codes, behavior in religious sites, and respect for local laws.Remember, these are general tips and the specific precautions can vary depending on the exact location and the time of your visit.
Assistant: None
You are using a model of type llava to instantiate a model of type llava_llama. This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:07<00:00, 2.42s/it]
None
Assistant: None
You are using a model of type llava to instantiate a model of type llava_llama. This is not supported for all configurations of models and can yield errors.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:07<00:00, 2.49s/it]
None
Assistant: None
Final dialog context:
User: What are the things I should be cautious about when I visit here?
Assistant: None
User: Can you tell me more about the safety measures?
Assistant: None
User: What about the local customs and traditions?
Assistant: None

Third revision:
This version does capture the outputs into the prompt, but it uses too much memory, so it often fails to run. That is why I switched to a different approach (Method 2).
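The root cause of all those `Assistant: None` lines is that `eval_model` prints its answer to stdout instead of returning it, so the caller receives `None`. A standalone sketch (no LLaVA needed; `fake_eval_model` is a stand-in for the real function) shows both the symptom and the stdout-capture fix:

```python
from contextlib import redirect_stdout
from io import StringIO

def fake_eval_model(args):
    # Prints its answer, like eval_model, but returns nothing
    print("Some generated answer")

# Naive call: the return value is None, which is what ended up in the history
result = fake_eval_model(None)
assert result is None

# Redirecting stdout while the function runs recovers the printed text
buf = StringIO()
with redirect_stdout(buf):
    fake_eval_model(None)
captured = buf.getvalue().strip()
print(captured)
```

The revised code applies this same capture around `eval_model` (swapping `sys.stdout` by hand, which is equivalent to `redirect_stdout`).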

import os
import sys
from io import StringIO

from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

# Environment variables
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # use GPU 0

# Load the pretrained model
model_path = "/data1/yjgroup/tym/lab_sync_mac/LLaVA/checkpoints/llava-v1.6-vicuna-7b"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path)
)

# Initial prompt and image path
initial_prompt = "What are the things I should be cautious about when I visit here?"
image_file = "/home/data/yjgroup/fsy/VG_100K/1.jpg"

# A class that stores the dialogue context
class MultiTurnDialog:
    def __init__(self, initial_prompt, image_file):
        self.dialog = [{"role": "user", "content": initial_prompt}]
        self.image_file = image_file

    def add_turn(self, role, content):
        self.dialog.append({"role": role, "content": content})

    def get_dialog(self):
        return "\n".join(f"{turn['role'].capitalize()}: {turn['content']}" for turn in self.dialog)

# Initialize the dialogue
multi_turn_dialog = MultiTurnDialog(initial_prompt, image_file)

# Run the model and return its response
def get_model_response(dialog, image_file):
    args = type('Args', (), {
        "model_path": model_path,
        "model_base": None,
        "model_name": get_model_name_from_path(model_path),
        "query": dialog,
        "conv_mode": None,
        "image_file": image_file,
        "sep": ",",
        "temperature": 0.2,
        "top_p": None,
        "num_beams": 1,
        "max_new_tokens": 512
    })()
    # Capture stdout, since eval_model prints its answer instead of returning it
    old_stdout = sys.stdout
    sys.stdout = mystdout = StringIO()
    try:
        eval_model(args)
    finally:
        # Restore stdout
        sys.stdout = old_stdout
    # The captured text is the model's answer
    return mystdout.getvalue().strip()

# First turn
response = get_model_response(multi_turn_dialog.get_dialog(), multi_turn_dialog.image_file)
print(f"Assistant: {response}")

# Add the first exchange to the dialogue context
multi_turn_dialog.add_turn("assistant", response)

# Example follow-up turns
user_inputs = [
    "Can you tell me more about the safety measures?",
    "What about the local customs and traditions?"
]
for user_input in user_inputs:
    # Add the user input to the dialogue context
    multi_turn_dialog.add_turn("user", user_input)
    # Run the model on the current dialogue context
    current_dialog = multi_turn_dialog.get_dialog()
    response = get_model_response(current_dialog, multi_turn_dialog.image_file)
    print(f"Assistant: {response}")
    # Add the model's response to the dialogue context
    multi_turn_dialog.add_turn("assistant", response)

# Final dialogue context
print("Final dialog context:")
print(multi_turn_dialog.get_dialog())

Method 2: combine a bash script with the basic single-turn code

2.1 A review of the basic code from the LLaVA website

import os

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # use GPU 0

from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "/data1/yjgroup/tym/lab_sync_mac/LLaVA/checkpoints/llava-v1.6-vicuna-7b"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
    # load_4bit=True,
    # load_8bit=True
)

prompt = "What are the things I should be cautious about when I visit here?"
image_file = "/home/data/yjgroup/fsy/VG_100K/1.jpg"

args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0.2,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512
})()

eval_model(args)

2.2 Pseudo multi-turn dialogue (a fresh prompt each time)

  1. Create a bash script (multi_round_dialogue.sh)
#!/bin/bash

# Variables
MODEL_PATH="/data1/yjgroup/tym/lab_sync_mac/LLaVA/checkpoints/llava-v1.6-vicuna-7b"
IMAGE_FILE="/home/data/yjgroup/fsy/VG_100K/1.jpg"
OUTPUT_FILE="dialogue_output.txt"
CUDA_DEVICE="0"  # CUDA device to use

# Truncate the output file
> $OUTPUT_FILE

# Prompts for each round
PROMPTS=(
    "What are the things I should be cautious about when I visit here?"
    "Can you suggest some local foods to try?"
    "What are the best tourist attractions in this area?"
)

# Number of rounds (one per prompt)
ROUNDS=${#PROMPTS[@]}

for ((i=0; i<ROUNDS; i++))
do
    PROMPT=${PROMPTS[$i]}
    echo "Round $((i+1))" >> $OUTPUT_FILE
    echo "User: $PROMPT" >> $OUTPUT_FILE
    # Call the Python script and capture its output
    RESPONSE=$(CUDA_VISIBLE_DEVICES=$CUDA_DEVICE python3 run_llava_dialogue.py "$MODEL_PATH" "$IMAGE_FILE" "$PROMPT")
    echo "Assistant: $RESPONSE" >> $OUTPUT_FILE
    echo "" >> $OUTPUT_FILE
done
  2. Create the modified Python script (run_llava_dialogue.py)
import os
import sys

from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

# Read inputs from the command line
model_path = sys.argv[1]
image_file = sys.argv[2]
prompt = sys.argv[3]

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = os.getenv("CUDA_VISIBLE_DEVICES", "0")  # CUDA device from the environment

# Load the pretrained model
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
    # load_4bit=True,
    # load_8bit=True
)

args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0.2,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512
})()

# Run the model. eval_model prints the answer to stdout, which the bash
# script's $(...) captures; only echo the return value if there is one,
# otherwise a literal "None" would be appended to every response.
response = eval_model(args)
if response is not None:
    print(response)
  3. Usage:
    • Make sure multi_round_dialogue.sh and run_llava_dialogue.py are in the same directory.
    • Make the bash script executable:
    chmod +x multi_round_dialogue.sh
    
    • Run the bash script:
    ./multi_round_dialogue.sh
    

This script runs three rounds of dialogue and saves each round's user input and assistant response to dialogue_output.txt. The questions for each round are set in advance, and the CUDA device number is specified on each run.
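The loop's effect on dialogue_output.txt can be sketched without a GPU. In this minimal Python mock-up, `respond()` is a stub standing in for the `python3 run_llava_dialogue.py` call, and the list of lines mirrors what the bash script appends each round:

```python
# Model-free mock-up of the bash loop's output format
prompts = [
    "What are the things I should be cautious about when I visit here?",
    "Can you suggest some local foods to try?",
]

def respond(prompt):
    # Stub for the real model call made by run_llava_dialogue.py
    return f"(stub reply to: {prompt})"

lines = []
for i, prompt in enumerate(prompts, start=1):
    lines.append(f"Round {i}")            # echo "Round $((i+1))"
    lines.append(f"User: {prompt}")       # echo "User: $PROMPT"
    lines.append(f"Assistant: {respond(prompt)}")  # echo "Assistant: $RESPONSE"
    lines.append("")                      # blank line between rounds

print("\n".join(lines))
```

Each round is independent here: the stub only ever sees the current prompt, which is why this variant is "pseudo" multi-turn.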

2.3 True multi-turn dialogue: each time a new question prompt is sent, the model's previous answers (the dialogue history) are sent along with it. Modify the files as follows:

Modified bash script (multi_round_dialogue.sh)

#!/bin/bash

# Variables
MODEL_PATH="/data1/yjgroup/tym/lab_sync_mac/LLaVA/checkpoints/llava-v1.6-vicuna-7b"
IMAGE_FILE="/home/data/yjgroup/fsy/VG_100K/1.jpg"
OUTPUT_FILE="dialogue_output.txt"
CUDA_DEVICE="0"  # CUDA device to use

# Truncate the output file
> $OUTPUT_FILE

# Prompts for each round
PROMPTS=(
    "What are the things I should be cautious about when I visit here?"
    "Can you suggest some local foods to try?"
    "What are the best tourist attractions in this area?"
)

# Number of rounds (one per prompt)
ROUNDS=${#PROMPTS[@]}

# Initialize the dialogue history
DIALOGUE_HISTORY=""
NL=$'\n'  # a real newline; inside double quotes, \n would stay a literal backslash-n

for ((i=0; i<ROUNDS; i++))
do
    PROMPT=${PROMPTS[$i]}
    echo "Round $((i+1))" >> $OUTPUT_FILE
    echo "User: $PROMPT" >> $OUTPUT_FILE
    # Combine the history so far with the new prompt
    if [ -z "$DIALOGUE_HISTORY" ]; then
        INPUT_PROMPT="$PROMPT"
    else
        INPUT_PROMPT="${DIALOGUE_HISTORY}${NL}User: $PROMPT"
    fi
    # Call the Python script and capture its output
    RESPONSE=$(CUDA_VISIBLE_DEVICES=$CUDA_DEVICE python3 run_llava_dialogue.py "$MODEL_PATH" "$IMAGE_FILE" "$INPUT_PROMPT")
    echo "Assistant: $RESPONSE" >> $OUTPUT_FILE
    echo "" >> $OUTPUT_FILE
    # Update the dialogue history
    DIALOGUE_HISTORY="${INPUT_PROMPT}${NL}Assistant: $RESPONSE"
done

Modified Python script (run_llava_dialogue.py)

import os
import sys

from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

# Read inputs from the command line
model_path = sys.argv[1]
image_file = sys.argv[2]
prompt = sys.argv[3]

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = os.getenv("CUDA_VISIBLE_DEVICES", "0")  # CUDA device from the environment

# Load the pretrained model
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
    # load_4bit=True,
    # load_8bit=True
)

args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0.2,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512
})()

# Run the model. eval_model prints the answer to stdout, which the bash
# script's $(...) captures; only echo the return value if there is one,
# otherwise a literal "None" would be appended to every response.
response = eval_model(args)
if response is not None:
    print(response)

Usage:

  1. Put multi_round_dialogue.sh and run_llava_dialogue.py in the same directory.
  2. Make the bash script executable:
chmod +x multi_round_dialogue.sh
  3. Run the bash script:
./multi_round_dialogue.sh

This script runs three rounds of dialogue and saves each round's user input and assistant response to dialogue_output.txt. Each round's question is passed to the model together with the preceding dialogue history, producing a true multi-turn dialogue.
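To see how the history accumulates, here is a model-free Python sketch of the same logic the bash loop implements; `respond()` is a stub for the model call, and the questions are made up:

```python
# Model-free sketch of the history-carrying loop: each round's input is the
# full transcript so far plus the new question, so the prompt grows per round.
prompts = ["Question one?", "Question two?", "Question three?"]

def respond(full_prompt):
    # Stub: "answers" by reporting how many user turns it was shown
    return f"answer #{full_prompt.count('User:')}"

history = ""
inputs_seen = []
for prompt in prompts:
    # Mirror the bash script's INPUT_PROMPT construction
    input_prompt = f"User: {prompt}" if not history else f"{history}\nUser: {prompt}"
    inputs_seen.append(input_prompt)
    # Mirror DIALOGUE_HISTORY="$INPUT_PROMPT\nAssistant: $RESPONSE"
    history = f"{input_prompt}\nAssistant: {respond(input_prompt)}"

# Round 3's input contains both earlier questions and answers
print(inputs_seen[-1])
```

The trade-off this makes visible: because the whole transcript is resent every round, prompt length (and model cost) grows with each turn.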

