DPDK Series, Part 36: Packet Forwarding

I. Network Packet Processing

Anyone who has studied network communication knows that at the lowest level, network data travels as packets (frames). In other words, what network devices forward is, in the end, packet after packet of binary stream data. To a device or driver this data carries no meaning of its own; the device merely verifies, processes, and forwards it. It is much like a parcel hub in a logistics network: the hub only checks whether a parcel is damaged and where it is headed, then drops it onto the right conveyor belt. Network packets are handled the same way.

In the real world, when a shopping festival sends parcel volume soaring, the logistics center has to respond: add staff and machines, improve its workflow, or upgrade outright to automated robotic warehousing. The network world is no different. It has a complete data-handling flow running from software down to hardware, covering application frameworks, algorithms, and the hardware mechanisms discussed later.

So first we need to be clear about which modules make up packet processing:
1. Input/output modules, the interfaces for network throughput:
Packet input: packet reception
Packet output: transmission by the hardware
2. Packet-processing modules:
Pre-processing: coarse-grained packet handling
Input classification: finer-grained packet demultiplexing
3. Data management and control modules:
Ingress queuing: descriptor-based FIFO queues
Delivery/Scheduling: scheduling driven by queue priority and CPU state
Accelerator: hardware functions such as encryption/decryption and compression/decompression
Egress queuing: scheduling at the egress according to QoS class
4. Cleanup afterwards:
Post-processing: late-stage packet handling and buffer release
Once the modules are grouped by function like this, the picture becomes clear at once, arguably clearer than any diagram would make it.

II. Forwarding Application Frameworks

Application frameworks lead straight to forwarding models, and the word "model" tells you most of what you need to know: absent a major technical breakthrough, a model rarely changes. Two models are used here:
1. The pipeline model (Packet Framework)
The pipeline is easy to understand; a CPU uses exactly this kind of pipelining internally. Pipelining suits regular, rhythmic work well. For example, CPU-bound and I/O-bound stages can each be handled by a different engine. In DPDK, the Packet Framework can be viewed at two levels: zoomed out, the multicore application framework, and zoomed in, a single pipeline block.
Within these blocks, packet processing is built from three parts: logical ports, lookup tables, and processing logic. Ports act as the input of each pipeline block, the lookup table determines how a packet should be handled, and the processing logic decides the actual treatment and where the packet goes next. Stacked layer upon layer, these blocks form a pipeline.
DPDK supports the following kinds of pipeline:
Packet I/O
Flow classification
Firewall
Routing
Metering
Traffic Mgmt
All of these pipelines can be put to use simply through a configuration file. However, constrained by its fixed flow of stages, this model is not easy to extend, and it does not scale across cores as well as RTC does.

2. The run-to-completion model (RTC)
Seeing this model, anyone who has done network programming may think of IOCP (I/O completion ports), and the two really are very similar: both exist to squeeze the most out of multiple cores. RTC is a strong fit for data flows whose units can be processed in parallel without shared context: each core can be dynamically assigned to run any of the logical stages, and the model extends easily.
In DPDK, command-line parameters bind logical cores to threads; the various receive and transmit queues can then be bound to logical cores, guaranteeing that a given packet is processed within a single thread from start to finish. The uniform processing units also make programming simpler.

3. Comparing the two
From the analysis above we can draw a rule of thumb: for traffic that demands high parallelism but little specialized per-packet optimization, use the RTC model; in the opposite case, use the pipeline model. The former suits highly concurrent short connections; the latter suits long-lived connections carrying continuous data, where targeted optimization is easier to apply.

III. Related Algorithms

The related algorithms are comparatively simple. The main ones are:
1. Exact-match algorithms
The name says it all: the key either matches exactly or it does not. In networking the usual tool is hashing, in one variant or another. Using a hash means resolving hash collisions, and the two classic strategies are chaining and open addressing. This is well-trodden ground, so we will not belabor it.
DPDK likewise optimizes its hash computation: it handles byte alignment, uses dedicated hardware instructions to do the relevant computation in one pass, and falls back to lookup tables where such instructions are unavailable, a classic space-for-time trade.
2. Longest-match algorithms
Longest Prefix Matching (LPM) is the algorithm a router uses to select an entry from its routing table under the IP protocol: among all table prefixes that match a destination address, the longest one wins.

3、ACL算法
ACL算法其实就是通过访问一个控制库,利用分类规则来对输入的数据包进行处理分类。ACL 库利用N元组的匹配规则进行类型匹配,提供如下操作:
创建AC(access domain) 的上下文
加规则到AC的上下文中
对于所有规则创建相关的结构体
进行入方向报文分类
销毁AC相关的资源

IV. Packet Distribution

DPDK provides a packet-distribution library and API. Its principle is essentially that a distributor hands packets out to different workers, and the distributor takes the data it needs from each mbuf. Together these pieces form a complete distribution flow.

V. Source Code

Let's look at the relevant DPDK source:

//dpdk-stable-19.11.14\lib\librte_eventdev......
#include "rte_eventdev.h"
#include "rte_eventdev_pmd.h"

static struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];

struct rte_eventdev *rte_eventdevs = rte_event_devices;

static struct rte_eventdev_global eventdev_globals = {
    .nb_devs = 0
};

/* Event dev north bound API implementation */

uint8_t
rte_event_dev_count(void)
{
    return eventdev_globals.nb_devs;
}

int
rte_event_dev_get_dev_id(const char *name)
{
    int i;
    uint8_t cmp;

    if (!name)
        return -EINVAL;

    for (i = 0; i < eventdev_globals.nb_devs; i++) {
        cmp = (strncmp(rte_event_devices[i].data->name, name,
                RTE_EVENTDEV_NAME_MAX_LEN) == 0) ||
            (rte_event_devices[i].dev ? (strncmp(
                rte_event_devices[i].dev->driver->name, name,
                RTE_EVENTDEV_NAME_MAX_LEN) == 0) : 0);
        if (cmp && (rte_event_devices[i].attached ==
                RTE_EVENTDEV_ATTACHED))
            return i;
    }
    return -ENODEV;
}

int
rte_event_dev_socket_id(uint8_t dev_id)
{
    struct rte_eventdev *dev;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];

    return dev->data->socket_id;
}

int
rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
{
    struct rte_eventdev *dev;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];

    if (dev_info == NULL)
        return -EINVAL;

    memset(dev_info, 0, sizeof(struct rte_event_dev_info));

    RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
    (*dev->dev_ops->dev_infos_get)(dev, dev_info);

    dev_info->dequeue_timeout_ns = dev->data->dev_conf.dequeue_timeout_ns;
    dev_info->dev = dev->dev;

    return 0;
}
......
int
rte_event_port_link(uint8_t dev_id, uint8_t port_id,
        const uint8_t queues[], const uint8_t priorities[],
        uint16_t nb_links)
{
    struct rte_eventdev *dev;
    uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
    uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
    uint16_t *links_map;
    int i, diag;

    RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
    dev = &rte_eventdevs[dev_id];

    if (*dev->dev_ops->port_link == NULL) {
        RTE_EDEV_LOG_ERR("Function not supported\n");
        rte_errno = ENOTSUP;
        return 0;
    }

    if (!is_valid_port(dev, port_id)) {
        RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
        rte_errno = EINVAL;
        return 0;
    }

    if (queues == NULL) {
        for (i = 0; i < dev->data->nb_queues; i++)
            queues_list[i] = i;

        queues = queues_list;
        nb_links = dev->data->nb_queues;
    }

    if (priorities == NULL) {
        for (i = 0; i < nb_links; i++)
            priorities_list[i] = RTE_EVENT_DEV_PRIORITY_NORMAL;

        priorities = priorities_list;
    }

    for (i = 0; i < nb_links; i++)
        if (queues[i] >= dev->data->nb_queues) {
            rte_errno = EINVAL;
            return 0;
        }

    diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
            queues, priorities, nb_links);
    if (diag < 0)
        return diag;

    links_map = dev->data->links_map;
    /* Point links_map to this port specific area */
    links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
    for (i = 0; i < diag; i++)
        links_map[queues[i]] = (uint8_t)priorities[i];

    return diag;
}

int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
        uint8_t queues[], uint16_t nb_unlinks)
{
    struct rte_eventdev *dev;
    uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
    int i, diag, j;
    uint16_t *links_map;

    RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
    dev = &rte_eventdevs[dev_id];

    if (*dev->dev_ops->port_unlink == NULL) {
        RTE_EDEV_LOG_ERR("Function not supported");
        rte_errno = ENOTSUP;
        return 0;
    }

    if (!is_valid_port(dev, port_id)) {
        RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
        rte_errno = EINVAL;
        return 0;
    }

    links_map = dev->data->links_map;
    /* Point links_map to this port specific area */
    links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);

    if (queues == NULL) {
        j = 0;
        for (i = 0; i < dev->data->nb_queues; i++) {
            if (links_map[i] !=
                    EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
                all_queues[j] = i;
                j++;
            }
        }
        queues = all_queues;
    } else {
        for (j = 0; j < nb_unlinks; j++) {
            if (links_map[queues[j]] ==
                    EVENT_QUEUE_SERVICE_PRIORITY_INVALID)
                break;
        }
    }

    nb_unlinks = j;
    for (i = 0; i < nb_unlinks; i++)
        if (queues[i] >= dev->data->nb_queues) {
            rte_errno = EINVAL;
            return 0;
        }

    diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
            queues, nb_unlinks);

    if (diag < 0)
        return diag;

    for (i = 0; i < diag; i++)
        links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;

    return diag;
}

int
rte_event_port_unlinks_in_progress(uint8_t dev_id, uint8_t port_id)
{
    struct rte_eventdev *dev;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];
    if (!is_valid_port(dev, port_id)) {
        RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
        return -EINVAL;
    }

    /* Return 0 if the PMD does not implement unlinks in progress.
     * This allows PMDs which handle unlink synchronously to not implement
     * this function at all.
     */
    RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlinks_in_progress, 0);

    return (*dev->dev_ops->port_unlinks_in_progress)(dev,
            dev->data->ports[port_id]);
}

int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
        uint8_t queues[], uint8_t priorities[])
{
    struct rte_eventdev *dev;
    uint16_t *links_map;
    int i, count = 0;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];
    if (!is_valid_port(dev, port_id)) {
        RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
        return -EINVAL;
    }

    links_map = dev->data->links_map;
    /* Point links_map to this port specific area */
    links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
    for (i = 0; i < dev->data->nb_queues; i++) {
        if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
            queues[count] = i;
            priorities[count] = (uint8_t)links_map[i];
            ++count;
        }
    }

    return count;
}

int
rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
        uint64_t *timeout_ticks)
{
    struct rte_eventdev *dev;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];
    RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timeout_ticks, -ENOTSUP);

    if (timeout_ticks == NULL)
        return -EINVAL;

    return (*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
}
...

Its two core groups of APIs live in rte_eventdev.c and rte_service.c. The former is shown above; here is the latter:

#include "eal_private.h"

#define RTE_SERVICE_NUM_MAX 64

#define SERVICE_F_REGISTERED    (1 << 0)
#define SERVICE_F_STATS_ENABLED (1 << 1)
#define SERVICE_F_START_CHECK   (1 << 2)

/* runstates for services and lcores, denoting if they are active or not */
#define RUNSTATE_STOPPED 0
#define RUNSTATE_RUNNING 1

/* internal representation of a service */
struct rte_service_spec_impl {
    /* public part of the struct */
    struct rte_service_spec spec;

    /* atomic lock that when set indicates a service core is currently
     * running this service callback. When not set, a core may take the
     * lock and then run the service callback.
     */
    rte_atomic32_t execute_lock;

    /* API set/get-able variables */
    int8_t app_runstate;
    int8_t comp_runstate;
    uint8_t internal_flags;

    /* per service statistics */
    /* Indicates how many cores the service is mapped to run on.
     * It does not indicate the number of cores the service is running
     * on currently.
     */
    rte_atomic32_t num_mapped_cores;
    uint64_t calls;
    uint64_t cycles_spent;
} __rte_cache_aligned;

/* the internal values of a service core */
struct core_state {
    /* map of services IDs are run on this core */
    uint64_t service_mask;
    uint8_t runstate; /* running or stopped */
    uint8_t is_service_core; /* set if core is currently a service core */
    uint8_t service_active_on_lcore[RTE_SERVICE_NUM_MAX];
    uint64_t loops;
    uint64_t calls_per_service[RTE_SERVICE_NUM_MAX];
} __rte_cache_aligned;

static uint32_t rte_service_count;
static struct rte_service_spec_impl *rte_services;
static struct core_state *lcore_states;
static uint32_t rte_service_library_initialized;

int32_t
rte_service_init(void)
{
    if (rte_service_library_initialized) {
        RTE_LOG(NOTICE, EAL,
            "service library init() called, init flag %d\n",
            rte_service_library_initialized);
        return -EALREADY;
    }

    rte_services = rte_calloc("rte_services", RTE_SERVICE_NUM_MAX,
            sizeof(struct rte_service_spec_impl),
            RTE_CACHE_LINE_SIZE);
    if (!rte_services) {
        RTE_LOG(ERR, EAL, "error allocating rte services array\n");
        goto fail_mem;
    }

    lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
            sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
    if (!lcore_states) {
        RTE_LOG(ERR, EAL, "error allocating core states array\n");
        goto fail_mem;
    }

    int i;
    struct rte_config *cfg = rte_eal_get_configuration();
    for (i = 0; i < RTE_MAX_LCORE; i++) {
        if (lcore_config[i].core_role == ROLE_SERVICE) {
            if ((unsigned int)i == cfg->master_lcore)
                continue;
            rte_service_lcore_add(i);
        }
    }

    rte_service_library_initialized = 1;
    return 0;
fail_mem:
    rte_free(rte_services);
    rte_free(lcore_states);
    return -ENOMEM;
}
......
static int32_t
service_runner_func(void *arg)
{
    RTE_SET_USED(arg);
    uint32_t i;
    const int lcore = rte_lcore_id();
    struct core_state *cs = &lcore_states[lcore];

    while (cs->runstate == RUNSTATE_RUNNING) {
        const uint64_t service_mask = cs->service_mask;

        for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
            if (!service_valid(i))
                continue;
            /* return value ignored as no change to code flow */
            service_run(i, cs, service_mask, service_get(i), 1);
        }

        cs->loops++;

        rte_smp_rmb();
    }

    /* Switch off this core for all services, to ensure that future
     * calls to may_be_active() know this core is switched off.
     */
    for (i = 0; i < RTE_SERVICE_NUM_MAX; i++)
        cs->service_active_on_lcore[i] = 0;

    return 0;
}

int32_t
rte_service_lcore_count(void)
{
    int32_t count = 0;
    uint32_t i;
    for (i = 0; i < RTE_MAX_LCORE; i++)
        count += lcore_states[i].is_service_core;
    return count;
}

int32_t
rte_service_lcore_list(uint32_t array[], uint32_t n)
{
    uint32_t count = rte_service_lcore_count();
    if (count > n)
        return -ENOMEM;

    if (!array)
        return -EINVAL;

    uint32_t i;
    uint32_t idx = 0;
    for (i = 0; i < RTE_MAX_LCORE; i++) {
        struct core_state *cs = &lcore_states[i];
        if (cs->is_service_core) {
            array[idx] = i;
            idx++;
        }
    }

    return count;
}

int32_t
rte_service_lcore_count_services(uint32_t lcore)
{
    if (lcore >= RTE_MAX_LCORE)
        return -EINVAL;

    struct core_state *cs = &lcore_states[lcore];
    if (!cs->is_service_core)
        return -ENOTSUP;

    return __builtin_popcountll(cs->service_mask);
}

int32_t
rte_service_start_with_defaults(void)
{
    /* create a default mapping from cores to services, then start the
     * services to make them transparent to unaware applications.
     */
    uint32_t i;
    int ret;
    uint32_t count = rte_service_get_count();

    int32_t lcore_iter = 0;
    uint32_t ids[RTE_MAX_LCORE] = {0};
    int32_t lcore_count = rte_service_lcore_list(ids, RTE_MAX_LCORE);

    if (lcore_count == 0)
        return -ENOTSUP;

    for (i = 0; (int)i < lcore_count; i++)
        rte_service_lcore_start(ids[i]);

    for (i = 0; i < count; i++) {
        /* do 1:1 core mapping here, with each service getting
         * assigned a single core by default. Adding multiple services
         * should multiplex to a single core, or 1:1 if there are the
         * same amount of services as service-cores
         */
        ret = rte_service_map_lcore_set(i, ids[lcore_iter], 1);
        if (ret)
            return -ENODEV;

        lcore_iter++;
        if (lcore_iter >= lcore_count)
            lcore_iter = 0;

        ret = rte_service_runstate_set(i, 1);
        if (ret)
            return -ENOEXEC;
    }

    return 0;
}

static int32_t
service_update(struct rte_service_spec *service, uint32_t lcore,
        uint32_t *set, uint32_t *enabled)
{
    uint32_t i;
    int32_t sid = -1;

    for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
        if ((struct rte_service_spec *)&rte_services[i] == service &&
                service_valid(i)) {
            sid = i;
            break;
        }
    }

    if (sid == -1 || lcore >= RTE_MAX_LCORE)
        return -EINVAL;

    if (!lcore_states[lcore].is_service_core)
        return -EINVAL;

    uint64_t sid_mask = UINT64_C(1) << sid;
    if (set) {
        uint64_t lcore_mapped = lcore_states[lcore].service_mask &
            sid_mask;

        if (*set && !lcore_mapped) {
            lcore_states[lcore].service_mask |= sid_mask;
            rte_atomic32_inc(&rte_services[sid].num_mapped_cores);
        }
        if (!*set && lcore_mapped) {
            lcore_states[lcore].service_mask &= ~(sid_mask);
            rte_atomic32_dec(&rte_services[sid].num_mapped_cores);
        }
    }

    if (enabled)
        *enabled = !!(lcore_states[lcore].service_mask & (sid_mask));

    rte_smp_wmb();

    return 0;
}

int32_t
rte_service_map_lcore_set(uint32_t id, uint32_t lcore, uint32_t enabled)
{
    struct rte_service_spec_impl *s;
    SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
    uint32_t on = enabled > 0;
    return service_update(&s->spec, lcore, &on, 0);
}

int32_t
rte_service_map_lcore_get(uint32_t id, uint32_t lcore)
{
    struct rte_service_spec_impl *s;
    SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
    uint32_t enabled;
    int ret = service_update(&s->spec, lcore, 0, &enabled);
    if (ret == 0)
        return enabled;
    return ret;
}

static void
set_lcore_state(uint32_t lcore, int32_t state)
{
    /* mark core state in hugepage backed config */
    struct rte_config *cfg = rte_eal_get_configuration();
    cfg->lcore_role[lcore] = state;

    /* mark state in process local lcore_config */
    lcore_config[lcore].core_role = state;

    /* update per-lcore optimized state tracking */
    lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
}

int32_t
rte_service_lcore_reset_all(void)
{
    /* loop over cores, reset all to mask 0 */
    uint32_t i;
    for (i = 0; i < RTE_MAX_LCORE; i++) {
        if (lcore_states[i].is_service_core) {
            lcore_states[i].service_mask = 0;
            set_lcore_state(i, ROLE_RTE);
            lcore_states[i].runstate = RUNSTATE_STOPPED;
        }
    }
    for (i = 0; i < RTE_SERVICE_NUM_MAX; i++)
        rte_atomic32_set(&rte_services[i].num_mapped_cores, 0);

    rte_smp_wmb();

    return 0;
}

int32_t
rte_service_lcore_add(uint32_t lcore)
{
    if (lcore >= RTE_MAX_LCORE)
        return -EINVAL;
    if (lcore_states[lcore].is_service_core)
        return -EALREADY;

    set_lcore_state(lcore, ROLE_SERVICE);

    /* ensure that after adding a core the mask and state are defaults */
    lcore_states[lcore].service_mask = 0;
    lcore_states[lcore].runstate = RUNSTATE_STOPPED;

    rte_smp_wmb();

    return rte_eal_wait_lcore(lcore);
}

int32_t
rte_service_lcore_del(uint32_t lcore)
{
    if (lcore >= RTE_MAX_LCORE)
        return -EINVAL;

    struct core_state *cs = &lcore_states[lcore];
    if (!cs->is_service_core)
        return -EINVAL;

    if (cs->runstate != RUNSTATE_STOPPED)
        return -EBUSY;

    set_lcore_state(lcore, ROLE_RTE);

    rte_smp_wmb();
    return 0;
}

int32_t
rte_service_lcore_start(uint32_t lcore)
{
    if (lcore >= RTE_MAX_LCORE)
        return -EINVAL;

    struct core_state *cs = &lcore_states[lcore];
    if (!cs->is_service_core)
        return -EINVAL;

    if (cs->runstate == RUNSTATE_RUNNING)
        return -EALREADY;

    /* set core to run state first, and then launch otherwise it will
     * return immediately as runstate keeps it in the service poll loop
     */
    cs->runstate = RUNSTATE_RUNNING;

    int ret = rte_eal_remote_launch(service_runner_func, 0, lcore);
    /* returns -EBUSY if the core is already launched, 0 on success */
    return ret;
}
......

The RTC and algorithm-related code can be looked up in the source tree; we will not repeat it here.

VI. Summary

When studying a functional topic like this, the most important thing is to grasp the overall logic and processing flow. Algorithms and frameworks can be set aside at first; once the overall flow is clear, dive into them, and the surrounding body of knowledge becomes much easier to understand and retain. Learning needs a method and a clear line of thought. Never plunge straight into the details, or you will spend great effort for little gain.
