Kafka Permissions: Dynamic Topic Authorization (Tested and Working)


Introduction

1. Kafka permission categories

  • Authentication: verifies the identity of connections to the brokers — clients (producers and consumers), other brokers, and tools connecting to brokers, as well as the connection between brokers and ZooKeeper. The previous post covered connection authentication.

  • Authorization: message-level access control that authorizes client read/write operations on data (produce, consume, consumer-group access). This post covers topic-level authorization.

Configuring custom authorization in Kafka

Edit the configuration file under the Kafka home directory: D:\kafka_2.12-3.5.0\config\server.properties

enable_db_acl = true
authorizer.class.name=com.liang.kafka.auth.handler.MyAclAuthorizer
super.users=admin;liang
druid.name = mysql_db
druid.type = com.alibaba.druid.pool.DruidDataSource
druid.url = jdbc:mysql://127.0.0.1:3306/test?useSSL=FALSE&useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
druid.username = root
druid.password = root
druid.filters = stat
druid.driverClassName = com.mysql.cj.jdbc.Driver
druid.initialSize = 5
druid.minIdle = 2
druid.maxActive = 50
druid.maxWait = 60000
druid.timeBetweenEvictionRunsMillis = 60000
druid.minEvictableIdleTimeMillis = 300000
druid.validationQuery = SELECT 'x'
druid.testWhileIdle = true
druid.testOnBorrow = false
druid.poolPreparedStatements = false
druid.maxPoolPreparedStatementPerConnectionSize = 20

Where:

  • enable_db_acl controls whether dynamic, database-backed ACL checking is enabled.
  • authorizer.class.name configures the custom authorizer class; a client-side configuration sketch follows below.
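
For the authorizer to receive a real principal, clients must connect over the SASL listener and authenticate first. The following client-side sketch uses standard Kafka client settings; the username and password are hypothetical and must match whatever your authentication callback handler (from the previous post) accepts:

# client.properties -- sketch; credentials are hypothetical
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="liang" \
  password="liang-secret";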

The complete Windows configuration is as follows:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This configuration file is intended for use in ZK-based mode, where Apache ZooKeeper is required.
# See kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
#advertised.listeners=PLAINTEXT://your.host.name:9092
advertised.listeners=SASL_PLAINTEXT://localhost:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
sasl.enabled.mechanisms = PLAIN
sasl.mechanism.inter.broker.protocol = PLAIN
security.inter.broker.protocol = SASL_PLAINTEXT
listeners = SASL_PLAINTEXT://localhost:9092

enable_db_acl = true
authorizer.class.name=com.liang.kafka.auth.handler.MyAclAuthorizer
super.users=admin;liang

enable_db_auth = true
listener.name.sasl_plaintext.plain.sasl.server.callback.handler.class=com.liang.kafka.auth.handler.MyPlainServerCallbackHandler

druid.name = mysql_db
druid.type = com.alibaba.druid.pool.DruidDataSource
druid.url = jdbc:mysql://127.0.0.1:3306/testdb?useSSL=FALSE&useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
druid.topic.url = jdbc:mysql://127.0.0.1:3306/topicdb?useSSL=FALSE&useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
druid.username = root
druid.password = root
druid.filters = stat
druid.driverClassName = com.mysql.cj.jdbc.Driver
druid.initialSize = 5
druid.minIdle = 2
druid.maxActive = 50
druid.maxWait = 60000
druid.timeBetweenEvictionRunsMillis = 60000
druid.minEvictableIdleTimeMillis = 300000
druid.validationQuery = SELECT 'x'
druid.testWhileIdle = true
druid.testOnBorrow = false
druid.poolPreparedStatements = false
druid.maxPoolPreparedStatementPerConnectionSize = 20

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=D:\kafka_2.12-3.5.0\kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

The complete Linux configuration is as follows:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id = 999

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
# listeners=PRIVATE://:9092,PUBLIC://:9093

sasl.enabled.mechanisms = PLAIN
sasl.mechanism.inter.broker.protocol = PLAIN
security.inter.broker.protocol = SASL_PLAINTEXT
listeners = SASL_PLAINTEXT://:9092

enable_db_acl = true
authorizer.class.name=com.liang.kafka.auth.handler.MyAclAuthorizer
super.users=admin;liang

enable_db_auth = true
listener.name.sasl_plaintext.plain.sasl.server.callback.handler.class=com.liang.kafka.auth.handler.MyPlainServerCallbackHandler

druid.name = mysql_db
druid.type = com.alibaba.druid.pool.DruidDataSource
druid.url = jdbc:mysql://192.168.1.77:3306/testdb?useSSL=FALSE&useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
druid.topic.url = jdbc:mysql://192.168.1.77:3306/topicdb?useSSL=FALSE&useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
druid.username = root
druid.password = root
druid.filters = stat
druid.driverClassName = com.mysql.cj.jdbc.Driver
druid.initialSize = 5
druid.minIdle = 2
druid.maxActive = 50
druid.maxWait = 60000
druid.timeBetweenEvictionRunsMillis = 60000
druid.minEvictableIdleTimeMillis = 300000
druid.validationQuery = SELECT 'x'
druid.testWhileIdle = true
druid.testOnBorrow = false
druid.poolPreparedStatements = false
druid.maxPoolPreparedStatementPerConnectionSize = 20

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners = SASL_PLAINTEXT://192.168.1.77:10092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/kafka/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=127.0.0.1:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

Implementing custom topic authorization

When a user queries, subscribes to, or produces to a topic, the authorizer decides whether the user has permission for that topic; on subscription it also checks whether the user may use the requested consumer group, and so on.

In the Maven project, add the following dependencies to the pom:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.13</artifactId>
    <version>2.8.1</version>
</dependency>
<dependency>
    <groupId>cn.hutool</groupId>
    <artifactId>hutool-cache</artifactId>
    <version>5.7.21</version>
</dependency>
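
One packaging note (a suggestion, not from the original setup): since the finished jar is dropped into the broker's own libs directory, the Kafka dependency is already on the broker classpath and can be marked provided so it is not bundled again:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.13</artifactId>
    <version>2.8.1</version>
    <scope>provided</scope>
</dependency>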

The complete code for dynamic topic authorization is as follows:

package com.liang.kafka.auth.handler;

import cn.hutool.core.collection.CollUtil;
import com.alibaba.druid.pool.DruidDataSource;
import com.liang.kafka.auth.cache.LocalCache;
import com.liang.kafka.auth.util.DataSourceUtil;
import org.apache.kafka.common.Endpoint;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;
import org.apache.kafka.server.authorizer.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.*;
import java.util.concurrent.CompletionStage;
import java.util.stream.Collectors;

import static com.liang.kafka.auth.constants.Constants.*;

/**
 * Custom Kafka ACL authorizer.
 * Configuration: add the following to server.properties:
 *   super.users                super users, separated by ";"
 *   authorizer.class.name=com.liang.kafka.auth.handler.MyAclAuthorizer
 * liang
 */
public class MyAclAuthorizer implements Authorizer {

    private static final Logger logger = LoggerFactory.getLogger(MyAclAuthorizer.class);

    /**
     * Data source
     */
    private DruidDataSource dataSource = null;

    private static final String SUPER_USERS_PROP = "super.users";

    /**
     * Super users
     */
    private Set<String> superUserSet;

    /**
     * Whether database-backed ACL checking is enabled
     */
    private boolean enableDbAcl;

    @Override
    public Map<Endpoint, ? extends CompletionStage<Void>> start(AuthorizerServerInfo authorizerServerInfo) {
        return new HashMap<>();
    }

    /**
     * Entry point for the access-control logic
     */
    @Override
    public List<AuthorizationResult> authorize(AuthorizableRequestContext authorizableRequestContext, List<Action> list) {
        return list.stream()
                .map(action -> authorizeAction(authorizableRequestContext, action))
                .collect(Collectors.toList());
    }

    /**
     * Access-control decision for a single action
     */
    private AuthorizationResult authorizeAction(AuthorizableRequestContext authorizableRequestContext, Action action) {
        ResourcePattern resource = action.resourcePattern();
        if (resource.patternType() != PatternType.LITERAL) {
            throw new IllegalArgumentException("Only literal resources are supported. Got: " + resource.patternType());
        }

        // Skip all checks if database ACL verification is disabled
        if (!enableDbAcl) {
            return AuthorizationResult.ALLOWED;
        }

        String principal = authorizableRequestContext.principal().getName();
        AclOperation operation = action.operation();

        // 1. Super users pass directly
        if (superUserSet.contains(principal)) {
            return AuthorizationResult.ALLOWED;
        }

        // 2. Cluster resources are always denied
        if (resource.resourceType().equals(ResourceType.CLUSTER)) {
            logger.error("Resource type Cluster: denied");
            return AuthorizationResult.DENIED;
        }

        // 3. TransactionalId and DelegationToken resources pass directly
        if (resource.resourceType().equals(ResourceType.TRANSACTIONAL_ID)
                || resource.resourceType().equals(ResourceType.DELEGATION_TOKEN)) {
            return AuthorizationResult.ALLOWED;
        }

        String username = principal;

        // 4. Group resources: consumption is only allowed with the user's default group
        if (resource.resourceType().equals(ResourceType.GROUP)) {
            if (isGroup(resource.name(), username)) {
                return AuthorizationResult.ALLOWED;
            }
            logger.error("Group {}: only the default group may be used for consumption, denied", resource.name());
            return AuthorizationResult.DENIED;
        }

        // 5. Check the permission table in the database; allow only if a match is found
        if (isAcls(resource.name(), username)) {
            return AuthorizationResult.ALLOWED;
        }
        return AuthorizationResult.DENIED;
    }

    /**
     * Check whether the group is the default group: default_group
     */
    private boolean isGroup(String resourceName, String username) {
        String defaultGroup = username + KAFKA_GROUP_SPLIT + "default_group";
        return resourceName.equals(defaultGroup);
    }

    /**
     * Check the cache, falling back to the database, to decide whether the user has permission
     */
    private Boolean isAcls(String resourceName, String username) {
        List<String> topics = LocalCache.getCache(username);
        if (CollUtil.isEmpty(topics)) {
            // Fall back to the database
            topics = queryDb(username);
            if (CollUtil.isEmpty(topics)) {
                return Boolean.FALSE;
            }
            LocalCache.addCache(username, topics);
        }
        return checkTopic(resourceName, topics, username);
    }

    /**
     * Check topic permission; resource names have the form username&topic
     */
    private Boolean checkTopic(String resourceName, List<String> topics, String username) {
        for (String topic : topics) {
            if (topic == null || topic.length() == 0) {
                continue;
            }
            String tmp = username + KAFKA_TOPIC_SPLIT + topic;
            if (tmp.equals(resourceName)) {
                return Boolean.TRUE;
            }
        }
        return Boolean.FALSE;
    }

    /**
     * Query the database for the user's topics
     */
    private List<String> queryDb(String username) {
        List<String> dbList = new ArrayList<>();
        String userQuery = "select t.topic\n" +
                " from topic t\n" +
                " left join mq_info i on t.mq_id = i.mq_id\n" +
                " where i.default_instance = 1 and t.del_status = 0 and t.username = ?";
        Connection conn = null;
        try {
            conn = dataSource.getConnection();
            PreparedStatement statement = conn.prepareStatement(userQuery);
            statement.setString(1, username);
            ResultSet resultSet = statement.executeQuery();
            while (resultSet.next()) {
                dbList.add(resultSet.getString("topic"));
            }
        } catch (Exception e) {
            logger.error("Database topic query failed: {}", e);
            throw new RuntimeException(e);
        } finally {
            if (conn != null) {
                try {
                    conn.close();
                } catch (SQLException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        return dbList;
    }

    /**
     * Creating ACLs is not supported
     */
    @Override
    public List<? extends CompletionStage<AclCreateResult>> createAcls(AuthorizableRequestContext authorizableRequestContext, List<AclBinding> list) {
        logger.error("createAcls: ACL creation is not supported");
        throw new UnsupportedOperationException();
    }

    /**
     * Deleting ACLs is not supported
     */
    @Override
    public List<? extends CompletionStage<AclDeleteResult>> deleteAcls(AuthorizableRequestContext authorizableRequestContext, List<AclBindingFilter> list) {
        logger.error("deleteAcls: ACL deletion is not supported");
        throw new UnsupportedOperationException();
    }

    @Override
    public Iterable<AclBinding> acls(AclBindingFilter aclBindingFilter) {
        // No stored ACL bindings to enumerate
        return new ArrayList<>();
    }

    @Override
    public void close() throws IOException {
        if (dataSource != null) {
            dataSource.close();
        }
    }

    @Override
    public void configure(Map<String, ?> map) {
        String superUsers = (String) map.get(SUPER_USERS_PROP);
        if (superUsers == null || superUsers.isEmpty()) {
            superUserSet = new HashSet<>();
        } else {
            superUserSet = Arrays.stream(superUsers.split(";"))
                    .map(String::trim)
                    .collect(Collectors.toSet());
        }

        Object endbAclObject = map.get(ENABLE_DB_ACL);
        if (Objects.isNull(endbAclObject)) {
            logger.error("Missing switch configuration enable_db_acl!");
            enableDbAcl = Boolean.FALSE;
            return;
        }
        enableDbAcl = TRUE.equalsIgnoreCase(endbAclObject.toString());
        if (!enableDbAcl) {
            return;
        }
        dataSource = DataSourceUtil.getIotInstance(map);
    }
}
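
The class above imports three helpers that this article does not list: Constants, LocalCache, and DataSourceUtil. The sketches below are assumptions about their shape, not the author's originals: the separator constants are inferred from the username&topic resource-name format noted in checkTopic, the cache is built on hutool's TimedCache (hutool-cache is already a pom dependency), and the data source is assembled from the druid.* keys in server.properties.

package com.liang.kafka.auth.constants;

/**
 * Sketch of the constants the authorizer imports; the separator values are assumptions.
 */
public class Constants {
    public static final String ENABLE_DB_ACL = "enable_db_acl";
    public static final String TRUE = "true";
    // Separator between username and group/topic in resource names, per "username&topic"
    public static final String KAFKA_GROUP_SPLIT = "&";
    public static final String KAFKA_TOPIC_SPLIT = "&";
}

package com.liang.kafka.auth.cache;

import cn.hutool.cache.impl.TimedCache;
import java.util.List;

/**
 * Sketch of the per-user topic cache. A fixed TTL bounds how long a revoked
 * permission can keep working before the next database lookup.
 */
public class LocalCache {
    // 60s TTL is an assumption; tune to how quickly revocations must take effect
    private static final TimedCache<String, List<String>> CACHE = new TimedCache<>(60_000L);

    public static List<String> getCache(String username) {
        return CACHE.get(username, false);
    }

    public static void addCache(String username, List<String> topics) {
        CACHE.put(username, topics);
    }
}

package com.liang.kafka.auth.util;

import com.alibaba.druid.pool.DruidDataSource;
import java.util.Map;

/**
 * Sketch of the data-source factory: builds one DruidDataSource from the
 * druid.* keys that server.properties passes into Authorizer#configure.
 */
public class DataSourceUtil {
    private static volatile DruidDataSource instance;

    public static DruidDataSource getIotInstance(Map<String, ?> configs) {
        if (instance == null) {
            synchronized (DataSourceUtil.class) {
                if (instance == null) {
                    DruidDataSource ds = new DruidDataSource();
                    ds.setUrl(String.valueOf(configs.get("druid.url")));
                    ds.setUsername(String.valueOf(configs.get("druid.username")));
                    ds.setPassword(String.valueOf(configs.get("druid.password")));
                    ds.setDriverClassName(String.valueOf(configs.get("druid.driverClassName")));
                    ds.setInitialSize(Integer.parseInt(String.valueOf(configs.get("druid.initialSize"))));
                    ds.setMaxActive(Integer.parseInt(String.valueOf(configs.get("druid.maxActive"))));
                    instance = ds;
                }
            }
        }
        return instance;
    }
}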

Compiling, packaging, and running

After compiling and packaging the jar, place it in the broker's libs directory: D:\kafka_2.12-3.5.0\libs\xxx.
Note: any third-party dependency jars used by the code must be placed there as well (or bundled; see the sketch below).
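
Instead of copying each third-party jar by hand, the maven-shade-plugin can bundle the authorizer together with its runtime dependencies (hutool, Druid, the MySQL driver) into one jar. A minimal sketch, assuming the provided scope for Kafka suggested earlier so broker classes are not re-bundled:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.4.1</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>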

The changes take effect after restarting Kafka. Watching the logs, you can see that once a user connects, each produce and subscribe triggers a lookup against the database; when the lookup shows the user has no permission for the topic, an authorization error is reported.



问题:如何实现PCS7中DB块中变量的自动上传? 解答:PCS7下,所有CFC中的变量都通过编译的方式自动上传的OS项目中,针对自定义的DB块同样也可以通过设置相关属性自动上传的OS中,具体操作如下: 插入一个全局数据块。 注意:数据块号必须符合要求,可以参考PCS7中定义的预留DB…