AutoGen's ConversableAgent Base Class Explained

Contents

1. The ConversableAgent Class

2. Main Methods

2.1 __init__

2.2 initiate_chat


This article introduces ConversableAgent, the base class for AutoGen agents.

1. The ConversableAgent Class

ConversableAgent is the base class for agents; AssistantAgent and UserProxyAgent are both subclasses of it. Part of its source is shown below.

class ConversableAgent(LLMAgent):
    """(In preview) A class for generic conversable agents which can be configured as assistant or user proxy.

    After receiving each message, the agent will send a reply to the sender unless the msg is a termination msg.
    For example, AssistantAgent and UserProxyAgent are subclasses of this class,
    configured with different default settings.

    To modify auto reply, override `generate_reply` method.
    To disable/enable human response in every turn, set `human_input_mode` to "NEVER" or "ALWAYS".
    To modify the way to get human input, override `get_human_input` method.
    To modify the way to execute code blocks, single code block, or function call, override `execute_code_blocks`,
    `run_code`, and `execute_function` methods respectively.
    """

    DEFAULT_CONFIG = False  # False or dict, the default config for llm inference
    MAX_CONSECUTIVE_AUTO_REPLY = 100  # maximum number of consecutive auto replies (subject to future change)
    DEFAULT_SUMMARY_PROMPT = "Summarize the takeaway from the conversation. Do not add any introductory phrases."
    DEFAULT_SUMMARY_METHOD = "last_msg"
    llm_config: Union[Dict, Literal[False]]

    def __init__(
        self,
        name: str,
        system_message: Optional[Union[str, List]] = "You are a helpful AI Assistant.",
        is_termination_msg: Optional[Callable[[Dict], bool]] = None,
        max_consecutive_auto_reply: Optional[int] = None,
        human_input_mode: Literal["ALWAYS", "NEVER", "TERMINATE"] = "TERMINATE",
        function_map: Optional[Dict[str, Callable]] = None,
        code_execution_config: Union[Dict, Literal[False]] = False,
        llm_config: Optional[Union[Dict, Literal[False]]] = None,
        default_auto_reply: Union[str, Dict] = "",
        description: Optional[str] = None,
        chat_messages: Optional[Dict[Agent, List[Dict]]] = None,
        silent: Optional[bool] = None,
    ):

The AssistantAgent class is shown below.

class AssistantAgent(ConversableAgent):
    """(In preview) Assistant agent, designed to solve a task with LLM.

    AssistantAgent is a subclass of ConversableAgent configured with a default system message.
    The default system message is designed to solve a task with LLM,
    including suggesting python code blocks and debugging.
    `human_input_mode` is default to "NEVER"
    and `code_execution_config` is default to False.
    This agent doesn't execute code by default, and expects the user to execute the code.
    """

    DEFAULT_SYSTEM_MESSAGE = """You are a helpful AI assistant.
Solve tasks using your coding and language skills.
In the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute.
    1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself.
    2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly.
Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.
When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.
If you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user.
If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.
When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.
Reply "TERMINATE" in the end when everything is done.
    """

    DEFAULT_DESCRIPTION = "A helpful and general-purpose AI assistant that has strong language skills, Python skills, and Linux command line skills."

    def __init__(
        self,
        name: str,
        system_message: Optional[str] = DEFAULT_SYSTEM_MESSAGE,
        llm_config: Optional[Union[Dict, Literal[False]]] = None,
        is_termination_msg: Optional[Callable[[Dict], bool]] = None,
        max_consecutive_auto_reply: Optional[int] = None,
        human_input_mode: Literal["ALWAYS", "NEVER", "TERMINATE"] = "NEVER",
        description: Optional[str] = None,
        **kwargs,
    ):
        """
        Args:
            name (str): agent name.
            system_message (str): system message for the ChatCompletion inference.
                Please override this attribute if you want to reprogram the agent.
            llm_config (dict or False or None): llm inference configuration.
                Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
                for available options.
            is_termination_msg (function): a function that takes a message in the form of a dictionary
                and returns a boolean value indicating if this received message is a termination message.
                The dict can contain the following keys: "content", "role", "name", "function_call".
            max_consecutive_auto_reply (int): the maximum number of consecutive auto replies.
                default to None (no limit provided, class attribute MAX_CONSECUTIVE_AUTO_REPLY will be used as the limit in this case).
                The limit only plays a role when human_input_mode is not "ALWAYS".
            **kwargs (dict): Please refer to other kwargs in
                [ConversableAgent](conversable_agent#__init__).
        """
        super().__init__(
            name,
            system_message,
            is_termination_msg,
            max_consecutive_auto_reply,
            human_input_mode,
            llm_config=llm_config,
            description=description,
            **kwargs,
        )
        if logging_enabled():
            log_new_agent(self, locals())

        # Update the provided description if None, and we are using the default system_message,
        # then use the default description.
        if description is None:
            if system_message == self.DEFAULT_SYSTEM_MESSAGE:
                self.description = self.DEFAULT_DESCRIPTION

The UserProxyAgent class is shown below.

class UserProxyAgent(ConversableAgent):
    """(In preview) A proxy agent for the user, that can execute code and provide feedback to the other agents.

    UserProxyAgent is a subclass of ConversableAgent configured with `human_input_mode` to ALWAYS
    and `llm_config` to False. By default, the agent will prompt for human input every time a message is received.
    Code execution is enabled by default. LLM-based auto reply is disabled by default.

    To modify auto reply, register a method with [`register_reply`](conversable_agent#register_reply).
    To modify the way to get human input, override `get_human_input` method.
    To modify the way to execute code blocks, single code block, or function call, override `execute_code_blocks`,
    `run_code`, and `execute_function` methods respectively.
    """

    # Default UserProxyAgent.description values, based on human_input_mode
    DEFAULT_USER_PROXY_AGENT_DESCRIPTIONS = {
        "ALWAYS": "An attentive HUMAN user who can answer questions about the task, and can perform tasks such as running Python code or inputting command line commands at a Linux terminal and reporting back the execution results.",
        "TERMINATE": "A user that can run Python code or input command line commands at a Linux terminal and report back the execution results.",
        "NEVER": "A computer terminal that performs no other action than running Python scripts (provided to it quoted in ```python code blocks), or sh shell scripts (provided to it quoted in ```sh code blocks).",
    }

    def __init__(
        self,
        name: str,
        is_termination_msg: Optional[Callable[[Dict], bool]] = None,
        max_consecutive_auto_reply: Optional[int] = None,
        human_input_mode: Literal["ALWAYS", "TERMINATE", "NEVER"] = "ALWAYS",
        function_map: Optional[Dict[str, Callable]] = None,
        code_execution_config: Union[Dict, Literal[False]] = {},
        default_auto_reply: Optional[Union[str, Dict, None]] = "",
        llm_config: Optional[Union[Dict, Literal[False]]] = False,
        system_message: Optional[Union[str, List]] = "",
        description: Optional[str] = None,
        **kwargs,
    ):
        """
        Args:
            name (str): name of the agent.
            is_termination_msg (function): a function that takes a message in the form of a dictionary
                and returns a boolean value indicating if this received message is a termination message.
                The dict can contain the following keys: "content", "role", "name", "function_call".
            max_consecutive_auto_reply (int): the maximum number of consecutive auto replies.
                default to None (no limit provided, class attribute MAX_CONSECUTIVE_AUTO_REPLY will be used as the limit in this case).
                The limit only plays a role when human_input_mode is not "ALWAYS".
            human_input_mode (str): whether to ask for human inputs every time a message is received.
                Possible values are "ALWAYS", "TERMINATE", "NEVER".
                (1) When "ALWAYS", the agent prompts for human input every time a message is received.
                    Under this mode, the conversation stops when the human input is "exit",
                    or when is_termination_msg is True and there is no human input.
                (2) When "TERMINATE", the agent only prompts for human input only when a termination message is received or
                    the number of auto reply reaches the max_consecutive_auto_reply.
                (3) When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops
                    when the number of auto reply reaches the max_consecutive_auto_reply or when is_termination_msg is True.
            function_map (dict[str, callable]): Mapping function names (passed to openai) to callable functions.
            code_execution_config (dict or False): config for the code execution.
                To disable code execution, set to False. Otherwise, set to a dictionary with the following keys:
                - work_dir (Optional, str): The working directory for the code execution.
                    If None, a default working directory will be used.
                    The default working directory is the "extensions" directory under "path_to_autogen".
                - use_docker (Optional, list, str or bool): The docker image to use for code execution.
                    Default is True, which means the code will be executed in a docker container. A default list of images will be used.
                    If a list or a str of image name(s) is provided, the code will be executed in a docker container
                    with the first image successfully pulled.
                    If False, the code will be executed in the current environment.
                    We strongly recommend using docker for code execution.
                - timeout (Optional, int): The maximum execution time in seconds.
                - last_n_messages (Experimental, Optional, int): The number of messages to look back for code execution. Default to 1.
            default_auto_reply (str or dict or None): the default auto reply message when no code execution or llm based reply is generated.
            llm_config (dict or False or None): llm inference configuration.
                Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create) for available options.
                Default to False, which disables llm-based auto reply.
                When set to None, will use self.DEFAULT_CONFIG, which defaults to False.
            system_message (str or List): system message for ChatCompletion inference.
                Only used when llm_config is not False. Use it to reprogram the agent.
            description (str): a short description of the agent. This description is used by other agents
                (e.g. the GroupChatManager) to decide when to call upon this agent. (Default: system_message)
            **kwargs (dict): Please refer to other kwargs in
                [ConversableAgent](conversable_agent#__init__).
        """
        super().__init__(
            name=name,
            system_message=system_message,
            is_termination_msg=is_termination_msg,
            max_consecutive_auto_reply=max_consecutive_auto_reply,
            human_input_mode=human_input_mode,
            function_map=function_map,
            code_execution_config=code_execution_config,
            llm_config=llm_config,
            default_auto_reply=default_auto_reply,
            description=(
                description if description is not None else self.DEFAULT_USER_PROXY_AGENT_DESCRIPTIONS[human_input_mode]
            ),
            **kwargs,
        )
        if logging_enabled():
            log_new_agent(self, locals())

AssistantAgent and UserProxyAgent define only their own constructors; all other methods are inherited from ConversableAgent.

2. Main Methods

This section walks through ConversableAgent's two main methods, __init__ and initiate_chat.

2.1 __init__

def __init__(
    name: str,
    system_message: Optional[Union[str, List]] = "You are a helpful AI Assistant.",
    is_termination_msg: Optional[Callable[[Dict], bool]] = None,
    max_consecutive_auto_reply: Optional[int] = None,
    human_input_mode: Literal["ALWAYS", "NEVER", "TERMINATE"] = "TERMINATE",
    function_map: Optional[Dict[str, Callable]] = None,
    code_execution_config: Union[Dict, Literal[False]] = False,
    llm_config: Optional[Union[Dict, Literal[False]]] = None,
    default_auto_reply: Union[str, Dict] = "",
    description: Optional[str] = None,
    chat_messages: Optional[Dict[Agent, List[Dict]]] = None,
    silent: Optional[bool] = None,
)

Parameters:

name: str. The name of the agent.

system_message: the system message used for OpenAI API ChatCompletion inference; you can think of it as the agent's prompt.

human_input_mode: str. Controls whether the agent asks for human input when it receives a message. Possible values are "ALWAYS", "TERMINATE", and "NEVER"; the default is "TERMINATE".

(1) With "ALWAYS", the agent prompts for human input every time a message is received. In this mode, the conversation stops when the human input is "exit", or when is_termination_msg returns True and there is no human input.

(2) With "TERMINATE", the agent prompts for human input only when a termination message is received or when the number of auto replies reaches max_consecutive_auto_reply.

(3) With "NEVER", the agent never prompts for human input. In this mode, the conversation stops when the number of auto replies reaches max_consecutive_auto_reply or when is_termination_msg returns True.

llm_config: dict, False, or None. The LLM inference configuration.

Let's look at a concrete example.

import os
from autogen import AssistantAgent, UserProxyAgent

# LLM configuration
llm_config = {
    "model": "meta/llama-3.1-405b-instruct",
    "api_key": "XXXXXX",
    "base_url": "https://integrate.api.nvidia.com/v1",
}

# Assistant agent
assistant = AssistantAgent(
    "assistant",  # agent name
    system_message="你是一个很棒的人工智能助手",  # default: "You are a helpful AI Assistant."
    human_input_mode="NEVER",
    llm_config=llm_config,  # enable LLM support
)

# User proxy agent
user_proxy = UserProxyAgent(
    "user_proxy",  # agent name
    system_message="你是一个很棒的助手",
    code_execution_config=False,  # local execution (see the note on code_execution_config below)
)

# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="为什么三角形比较稳固?",
)

The output is shown below (the author's log was captured from a run in which the two agents were named Student_Agent and Teacher_Agent).

(autogenstudio) D:\code\autogenstudio_images\example>python Two-Agent_Chat-3.py
Student_Agent (to Teacher_Agent):

为什么三角形比较稳固?

--------------------------------------------------------------------------------
[autogen.oai.client: 08-29 08:48:02] {329} WARNING - Model meta/llama-3.1-405b-instruct is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
Teacher_Agent (to Student_Agent):

因为三角形的三条边都相互支撑,这使得它比二维空间中的其它形状都更加稳固。

--------------------------------------------------------------------------------
[autogen.oai.client: 08-29 08:48:05] {329} WARNING - Model meta/llama-3.1-405b-instruct is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
Student_Agent (to Teacher_Agent):

不错,你的理解是正确的。三角形这样的几何形状能够使这个结构既坚固又轻便。不管我们怎样施加推力和拉力,它仍然保持坚实和稳定。再见

--------------------------------------------------------------------------------

is_termination_msg: a function that takes a message in the form of a dictionary (possible keys: "content", "role", "name", "function_call") and returns a bool indicating whether the received message is a termination message.
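
For instance, a common convention is to treat any message whose content ends with "TERMINATE" as a termination message. A minimal sketch (the agent name and the lambda below are illustrative, not from the original article):

from autogen import ConversableAgent

# Terminate when the incoming message content ends with "TERMINATE".
agent = ConversableAgent(
    "my_agent",  # hypothetical name
    llm_config=False,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: (msg.get("content") or "").rstrip().endswith("TERMINATE"),
)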

max_consecutive_auto_reply: int. The maximum number of consecutive auto replies in a conversation; defaults to None, in which case the class attribute MAX_CONSECUTIVE_AUTO_REPLY (100) is used as the limit.

function_map: dict[str, callable]. Maps function names to callables: the str is the function name and the callable is the function itself. Multiple functions can be registered this way, as sketched below.
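
A minimal sketch of registering callables through function_map (the function names and bodies here are illustrative):

from autogen import ConversableAgent

def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

# The keys are the names the LLM uses in its function calls; the agent
# looks up the matching callable here when executing a function call.
tool_agent = ConversableAgent(
    "tool_executor",  # hypothetical name
    llm_config=False,
    human_input_mode="NEVER",
    function_map={"add": add, "multiply": multiply},
)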

code_execution_config: dict or False. Configures the agent's code execution environment. Setting it to False means local execution (the official docs say that False disables code execution, but the author's tests did not bear this out). There are two execution modes: local execution and execution inside Docker.

(1) Local execution

For local execution you can specify a working directory; if none is set, a default directory is used.

import tempfile
import os
from autogen import ConversableAgent
from autogen.coding import LocalCommandLineCodeExecutor

# Create a temporary directory to store the code files.
temp_dir = tempfile.TemporaryDirectory()

# Create a local command line code executor.
executor = LocalCommandLineCodeExecutor(
    timeout=10,  # Maximum time allowed for each code execution.
    work_dir=temp_dir.name,  # Use the temporary directory to store the code files.
)

# Create an agent with code executor configuration.
code_executor_agent = ConversableAgent(
    "code_executor_agent",
    llm_config=False,  # Turn off LLM for this agent.
    code_execution_config={"executor": executor},  # Use the local command line code executor.
    human_input_mode="ALWAYS",  # Always take human input for this agent for safety.
)

message_with_code_block = """This is a message with code block.
The code block is below:
```python
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randint(0, 100, 100)
y = np.random.randint(0, 100, 100)
plt.scatter(x, y)
plt.savefig('scatter.png')
print('Scatter plot saved to scatter.png')
```
This is the end of the message.
"""

# Generate a reply for the given code.
reply = code_executor_agent.generate_reply(
    messages=[{"role": "user", "content": message_with_code_block}]
)
print(reply)
print(os.listdir(temp_dir.name))

# Clean up the temporary directory.
temp_dir.cleanup()

The output is shown below.

(autogenstudio) D:\code\autogenstudio_images\example>python LocalCommandLineCodeExecutor.py
Replying as code_executor_agent. Provide feedback to the sender. Press enter to skip and use auto-reply, or type 'exit' to end the conversation:

>>>>>>>> NO HUMAN INPUT RECEIVED.

>>>>>>>> USING AUTO REPLY...

>>>>>>>> EXECUTING CODE BLOCK (inferred language is python)...
exitcode: 0 (execution succeeded)
Code output: Scatter plot saved to scatter.png

['scatter.png', 'tmp_code_e24bf32d4a21990fb9e4b5eb889ebe5a.py']

(2) Docker execution

import tempfile
from autogen import ConversableAgent, config_list_from_json
from autogen.coding import DockerCommandLineCodeExecutor

# Create a temporary directory to store the code files.
temp_dir = tempfile.TemporaryDirectory()

# Create a Docker command line code executor.
executor = DockerCommandLineCodeExecutor(
    image="python:3.12-slim",  # Execute code using the given docker image name.
    timeout=10,  # Timeout for each code execution in seconds.
    work_dir=temp_dir.name,  # Use the temporary directory to store the code files.
)

# Create an agent with code executor configuration that uses docker.
code_executor_agent_using_docker = ConversableAgent(
    "code_executor_agent_docker",
    llm_config=False,  # Turn off LLM for this agent.
    code_execution_config={"executor": executor},  # Use the docker command line code executor.
    human_input_mode="ALWAYS",  # Always take human input for this agent for safety.
)

# When the code executor is no longer used, stop it to release the resources.
# executor.stop()

# Code execution agent
code_executor_agent = ConversableAgent(
    "code_executor_agent",
    llm_config=False,  # Turn off LLM for this agent.
    code_execution_config={"executor": executor},  # Use the docker command line code executor.
    human_input_mode="ALWAYS",  # Always take human input for this agent for safety.
)

# Configure the LLM.
config_list = config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
)

# The code writer agent's system message is to instruct the LLM on how to use
# the code executor in the code executor agent.
code_writer_system_message = """You are a helpful AI assistant.
Solve tasks using your coding and language skills.
In the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute.
1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself.
2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly.
Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.
When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.
If you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user.
If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.
When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.
Reply 'TERMINATE' in the end when everything is done.
"""

# The agent responsible for writing code
code_writer_agent = ConversableAgent(
    "code_writer_agent",
    system_message=code_writer_system_message,
    llm_config={"config_list": config_list},
    code_execution_config=False,  # Turn off code execution for this agent.
)

# Start the conversation
chat_result = code_executor_agent.initiate_chat(
    code_writer_agent,
    message="Write Python code to calculate the 14th Fibonacci number.",
)

The output is shown below.

(autogenstudio) D:\code\autogenstudio_images\example>python DockerCommandLineCodeExecutor.py
code_executor_agent (to code_writer_agent):

Write Python code to calculate the 14th Fibonacci number.

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
[autogen.oai.client: 08-30 22:34:47] {329} WARNING - Model meta/llama-3.1-405b-instruct is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
WARNING:autogen.oai.client:Model meta/llama-3.1-405b-instruct is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
code_writer_agent (to code_executor_agent):

```python
def fibonacci(n):
    if n <= 0:
        return "Input should be a positive integer"
    elif n == 1:
        return 0
    elif n == 2:
        return 1
    else:
        fib_sequence = [0, 1]
        while len(fib_sequence) < n:
            fib_sequence.append(fib_sequence[-1] + fib_sequence[-2])
        return fib_sequence[-1]

print(fibonacci(14))
```

This Python code defines a function `fibonacci(n)` that calculates the nth Fibonacci number. It starts with handling edge cases for non-positive integers and the base cases for the first two Fibonacci numbers (0 and 1). Then, it generates the Fibonacci sequence up to the nth number by continuously adding the last two numbers in the sequence. Finally, it prints the 14th Fibonacci number by calling the function with `n = 14`.

TERMINATE

--------------------------------------------------------------------------------
Replying as code_executor_agent. Provide feedback to code_writer_agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation:

>>>>>>>> NO HUMAN INPUT RECEIVED.

>>>>>>>> USING AUTO REPLY...

>>>>>>>> EXECUTING CODE BLOCK (inferred language is python)...
code_executor_agent (to code_writer_agent):

exitcode: 0 (execution succeeded)
Code output: 233

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
[autogen.oai.client: 08-30 22:35:25] {329} WARNING - Model meta/llama-3.1-405b-instruct is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
WARNING:autogen.oai.client:Model meta/llama-3.1-405b-instruct is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
code_writer_agent (to code_executor_agent):

The code executed successfully and the output is correct. The 14th Fibonacci number is indeed 233.

TERMINATE

--------------------------------------------------------------------------------
Replying as code_executor_agent. Provide feedback to code_writer_agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation:

default_auto_reply: the default auto reply used when neither code execution nor an LLM-based reply produces a response.

description: str. A short description of the agent, used mainly in group chats so that other agents (via the GroupChatManager) can decide when to call on this agent; it defaults to system_message.
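
As a rough sketch of where description comes into play (this group-chat wiring is illustrative and reuses the assistant, user_proxy, and llm_config objects from the first example):

from autogen import GroupChat, GroupChatManager

# The manager's LLM reads each agent's description when choosing the next speaker.
group_chat = GroupChat(agents=[user_proxy, assistant], messages=[], max_round=6)
manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)
user_proxy.initiate_chat(manager, message="为什么三角形比较稳固?")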

chat_messages: stores the agent's chat history.
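
A quick way to inspect this attribute after a chat, assuming the user_proxy agent from the first example:

# chat_messages maps each peer agent to the list of message dicts exchanged with it.
for peer, messages in user_proxy.chat_messages.items():
    print(f"--- conversation with {peer.name} ({len(messages)} messages) ---")
    for msg in messages:
        print(msg["role"], ":", msg["content"])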

2.2 initiate_chat

initiate_chat starts a chat with another agent.

def initiate_chat(
    recipient: "ConversableAgent",
    clear_history: bool = True,
    silent: Optional[bool] = False,
    cache: Optional[AbstractCache] = None,
    max_turns: Optional[int] = None,
    summary_method: Optional[Union[str, Callable]] = DEFAULT_SUMMARY_METHOD,
    summary_args: Optional[dict] = {},
    message: Optional[Union[Dict, str, Callable]] = None,
    **kwargs,
) -> ChatResult

Parameters:

recipient: the agent that will receive the message (the conversation partner).

clear_history: bool. Whether to clear the chat history before starting; defaults to True.

silent: bool or None. Whether to suppress printing of the conversation messages; still experimental.

max_turns: int or None. The maximum number of conversation turns between the two agents.
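
Putting the parameters together, here is a minimal sketch that reuses the assistant and user_proxy agents from the first example (the max_turns value and the summary method are illustrative):

# Cap the chat at two turns and have the LLM summarize the conversation.
chat_result = user_proxy.initiate_chat(
    assistant,
    message="为什么三角形比较稳固?",
    max_turns=2,
    summary_method="reflection_with_llm",
)
print(chat_result.summary)       # the summary produced by summary_method
print(chat_result.chat_history)  # the full list of exchanged messages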


