Connecting to MinIO, OSS, and Other Object Stores via the AWS S3 Protocol

Accessing object storage through the Amazon S3 protocol.

- [S3 API](https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/API/API_Operations_Amazon_Simple_Storage_Service.html)
- Object stores compatible with the S3 protocol:
  - MinIO: appears to be fully compatible, see the [compatibility document](https://www.minio.org.cn/product/s3-compatibility.html)
  - Alibaba Cloud OSS: [compatible with the main APIs](https://help.aliyun.com/zh/oss/developer-reference/compatibility-with-amazon-s3?spm=a2c4g.11186623.0.0.590b32bcHb4D6a)
  - Qiniu Cloud OSS
  - and others

Dependencies

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <!-- the S3 SDK used in this article -->
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-s3</artifactId>
        <version>1.12.522</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.commons/commons-lang3 -->
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <version>3.12.0</version>
    </dependency>
</dependencies>

Reading the configuration

package com.xx.awss3demo.config;

import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Data
@ConfigurationProperties(prefix = "s3")
@Component
public class S3Properties {

    /**
     * Endpoint (URL) of the object storage service.
     */
    private String endpoint;

    /**
     * Path-style access; nginx reverse proxies and S3 support both modes:
     *   path style           {http://endpoint/bucketname}  -- true
     *   virtual-hosted style {http://bucketname.endpoint}  -- false
     */
    private Boolean pathStyleAccess = false;

    /**
     * Region.
     */
    private String region;

    /**
     * Access key: uniquely identifies your account, like a user ID.
     */
    private String accessKey;

    /**
     * Secret key: the password for your account.
     */
    private String secretKey;

    /**
     * Maximum number of connections (default here: 50).
     */
    private Integer maxConnections = 50;
}
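For the configuration class to be bound and for the tests further down to start a Spring context, the project also needs a standard Spring Boot entry point. The original post does not show it; a minimal sketch follows, with the class name assumed to match the AwsS3DemoApplicationTests test class shown later:

package com.xx.awss3demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Assumed entry point; not part of the original post, the class name mirrors the test class.
@SpringBootApplication
public class AwsS3DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(AwsS3DemoApplication.class, args);
    }
}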

Configuration file

server:
  port: 8888

s3:
  # aliyun oss
  #endpoint: http://oss-cn-shanghai.aliyuncs.com
  #accessKey:
  #secretKey:
  # minio
  endpoint: http://192.168.1.1:9000
  accessKey: admin
  secretKey: admin1234
  bucketName: lqs3bucket
  region:
  maxConnections: 100
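One thing to watch with MinIO: when the endpoint is a plain IP address (as above) or sits behind an nginx reverse proxy, the bucket usually cannot be resolved as a subdomain, so virtual-hosted-style requests (bucket.endpoint) tend to fail. In that case, setting the pathStyleAccess property defined earlier to true is the usual fix. This is a hedged suggestion based on common MinIO setups, not something the original configuration shows:

s3:
  endpoint: http://192.168.1.1:9000
  # path-style requests look like http://endpoint/bucketname/object and do not
  # require per-bucket DNS entries, which suits IP endpoints and reverse proxies
  pathStyleAccess: true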

File operations

package com.xx.awss3demo.service;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import com.amazonaws.util.IOUtils;
import com.xx.awss3demo.config.S3Properties;
import lombok.SneakyThrows;
import lombok.extern.log4j.Log4j2;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

import javax.annotation.PostConstruct;
import java.io.*;
import java.net.URL;
import java.util.*;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

@ConditionalOnClass(S3Properties.class)
@Service
@Log4j2
public class S3FileService {

    @Autowired
    private S3Properties s3Properties;

    private AmazonS3 amazonS3;

    @PostConstruct
    public void init() {
        log.info(s3Properties);
        amazonS3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(s3Properties.getAccessKey(), s3Properties.getSecretKey())))
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        s3Properties.getEndpoint(), s3Properties.getRegion()))
                .withPathStyleAccessEnabled(s3Properties.getPathStyleAccess())
                .withChunkedEncodingDisabled(true)
                .withClientConfiguration(new ClientConfiguration()
                        .withMaxConnections(s3Properties.getMaxConnections())
                        .withMaxErrorRetry(1))
                .build();
    }

    /**
     * Create a bucket.
     * Note: bucket names must not contain special characters or upper-case letters.
     *
     * @param bucketName bucket name
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucket">AWS API Documentation</a>
     */
    @SneakyThrows
    public void createBucket(String bucketName) {
        if (!bucketName.toLowerCase().equals(bucketName)) {
            throw new RuntimeException("bucket name not allow upper case");
        }
        if (checkBucketExist(bucketName)) {
            log.info("bucket: {} already exists", bucketName);
            return;
        }
        amazonS3.createBucket(bucketName);
    }

    @SneakyThrows
    public boolean checkBucketExist(String bucketName) {
        return amazonS3.doesBucketExistV2(bucketName);
    }

    /**
     * List all buckets.
     *
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBuckets">AWS API Documentation</a>
     */
    @SneakyThrows
    public List<Bucket> getAllBuckets() {
        return amazonS3.listBuckets();
    }

    /**
     * Get a bucket's details by name.
     *
     * @param bucketName bucket name
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBuckets">AWS API Documentation</a>
     */
    @SneakyThrows
    public Optional<Bucket> getBucket(String bucketName) {
        return amazonS3.listBuckets().stream().filter(b -> b.getName().equals(bucketName)).findFirst();
    }

    /**
     * Delete a bucket.
     *
     * @param bucketName bucket name
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucket">AWS API Documentation</a>
     */
    @SneakyThrows
    public void removeBucket(String bucketName) {
        amazonS3.deleteBucket(bucketName);
    }

    /**
     * Copy an object within the same bucket.
     *
     * @param bucketName    bucket name
     * @param srcObjectName source object key
     * @param tarObjectName target object key
     */
    public void copyObject(String bucketName, String srcObjectName, String tarObjectName) {
        amazonS3.copyObject(bucketName, srcObjectName, bucketName, tarObjectName);
    }

    /**
     * Upload a file with an explicit content type.
     *
     * @param bucketName  bucket name
     * @param objectName  object key
     * @param stream      input stream
     * @param contextType content type
     */
    @SneakyThrows
    public void putObject(String bucketName, String objectName, InputStream stream, String contextType) {
        ObjectMetadata objectMetadata = new ObjectMetadata();
        objectMetadata.setContentLength(stream.available());
        objectMetadata.setContentType(contextType);
        putObject(bucketName, objectName, stream, objectMetadata);
    }

    /**
     * Upload a file.
     *
     * @param bucketName bucket name
     * @param objectName object key
     * @param stream     input stream
     */
    @SneakyThrows
    public void putObject(String bucketName, String objectName, InputStream stream) {
        ObjectMetadata objectMetadata = new ObjectMetadata();
        objectMetadata.setContentLength(stream.available());
        objectMetadata.setContentType("application/octet-stream");
        putObject(bucketName, objectName, stream, objectMetadata);
    }

    /**
     * Upload a file with the given object metadata.
     *
     * @param bucketName     bucket name
     * @param objectName     object key
     * @param stream         input stream
     * @param objectMetadata object metadata
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObject">AWS API Documentation</a>
     */
    @SneakyThrows
    private PutObjectResult putObject(String bucketName, String objectName, InputStream stream,
                                      ObjectMetadata objectMetadata) {
        byte[] bytes = IOUtils.toByteArray(stream);
        ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes);
        // upload
        return amazonS3.putObject(bucketName, objectName, byteArrayInputStream, objectMetadata);
    }

    /**
     * Check whether an object exists.
     *
     * @param bucketName bucket name
     * @param objectName object key
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObject">AWS API Documentation</a>
     */
    @SneakyThrows
    public boolean checkObjectExist(String bucketName, String objectName) {
        return amazonS3.doesObjectExist(bucketName, objectName);
    }

    /**
     * Get an object.
     *
     * @param bucketName bucket name
     * @param objectName object key
     * @return the object (binary stream)
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObject">AWS API Documentation</a>
     */
    @SneakyThrows
    public S3Object getObject(String bucketName, String objectName) {
        return amazonS3.getObject(bucketName, objectName);
    }

    /**
     * Delete an object.
     *
     * @param bucketName bucket name
     * @param objectName object key
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObject">AWS API Documentation</a>
     */
    @SneakyThrows
    public void deleteObject(String bucketName, String objectName) {
        amazonS3.deleteObject(bucketName, objectName);
    }

    /**
     * Multipart upload for large files.
     *
     * @param file        MultipartFile
     * @param bucketName  bucket name
     * @param objectName  object key
     * @param minPartSize part size in bytes (e.g. 5242880 = 5 MB)
     */
    public void uploadMultipartFileByPart(MultipartFile file, String bucketName, String objectName,
                                          int minPartSize) {
        if (file.isEmpty()) {
            log.error("file is empty");
            return;
        }
        // total size and the starting byte offset of each part
        long size = file.getSize();
        List<Long> positions = Collections.synchronizedList(new ArrayList<>());
        long filePosition = 0;
        while (filePosition < size) {
            positions.add(filePosition);
            filePosition += Math.min(minPartSize, (size - filePosition));
        }
        if (log.isDebugEnabled()) {
            log.debug("total size: {}, split into {} parts", size, positions.size());
        }
        // collect the PartETag of every uploaded part; needed to complete the upload
        List<PartETag> partETags = Collections.synchronizedList(new ArrayList<>());
        // step 1: initiate the multipart upload and set the content type
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentType(file.getContentType());
        InitiateMultipartUploadRequest initRequest =
                new InitiateMultipartUploadRequest(bucketName, objectName, metadata);
        InitiateMultipartUploadResult initResponse = this.initiateMultipartUpload(initRequest);
        if (log.isDebugEnabled()) {
            log.debug("upload started");
        }
        // thread pool for uploading parts in parallel
        ExecutorService exec = Executors.newFixedThreadPool(3);
        long begin = System.currentTimeMillis();
        try {
            // convert the MultipartFile to a local File
            File toFile = multipartFileToFile(file);
            for (int i = 0; i < positions.size(); i++) {
                int finalI = i;
                exec.execute(() -> {
                    long time1 = System.currentTimeMillis();
                    UploadPartRequest uploadRequest = new UploadPartRequest()
                            .withBucketName(bucketName)
                            .withKey(objectName)
                            .withUploadId(initResponse.getUploadId())
                            .withPartNumber(finalI + 1)
                            .withFileOffset(positions.get(finalI))
                            .withFile(toFile)
                            .withPartSize(Math.min(minPartSize, (size - positions.get(finalI))));
                    // step 2: upload the part and keep its PartETag
                    partETags.add(this.uploadPart(uploadRequest).getPartETag());
                    if (log.isDebugEnabled()) {
                        log.debug("part {} uploaded in {} ms", finalI + 1, (System.currentTimeMillis() - time1));
                    }
                });
            }
            // stop accepting new tasks and wait for every part to finish
            exec.shutdown();
            while (!exec.isTerminated()) {
                // busy-wait; without this the method would return before the parts are uploaded
            }
            // step 3: complete the upload, merging the parts
            CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
                    bucketName, objectName, initResponse.getUploadId(), partETags);
            this.completeMultipartUpload(compRequest);
            // delete the local temp file
            if (toFile != null && !toFile.delete()) {
                log.error("Failed to delete cache file");
            }
        } catch (Exception e) {
            this.abortMultipartUpload(
                    new AbortMultipartUploadRequest(bucketName, objectName, initResponse.getUploadId()));
            log.error("Failed to upload, " + e.getMessage());
        }
        if (log.isDebugEnabled()) {
            log.debug("total upload time: {} ms", (System.currentTimeMillis() - begin));
        }
    }

    /**
     * List objects by key prefix.
     *
     * @param bucketName bucket name
     * @param prefix     key prefix
     * @param recursive  whether to list recursively (not used in this implementation)
     * @return list of S3ObjectSummary
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjects">AWS API Documentation</a>
     */
    @SneakyThrows
    public List<S3ObjectSummary> getAllObjectsByPrefix(String bucketName, String prefix, boolean recursive) {
        ObjectListing objectListing = amazonS3.listObjects(bucketName, prefix);
        return new ArrayList<>(objectListing.getObjectSummaries());
    }

    /**
     * List the versions of an object.
     *
     * @param bucketName bucket name
     * @param objectName object key (used as the listing prefix)
     * @return list of S3VersionSummary
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjects">AWS API Documentation</a>
     */
    @SneakyThrows
    public List<S3VersionSummary> getAllObjectsVersionsByPrefixV2(String bucketName, String objectName) {
        VersionListing versionListing = amazonS3.listVersions(bucketName, objectName);
        return new ArrayList<>(versionListing.getVersionSummaries());
    }

    /**
     * Generate a presigned URL for an object.
     *
     * @param bucketName bucket name
     * @param objectName object key
     * @param expires    expiry in days, must be <= 7
     * @return url
     */
    @SneakyThrows
    public String generatePresignedUrl(String bucketName, String objectName, Integer expires) {
        Date date = new Date();
        Calendar calendar = new GregorianCalendar();
        calendar.setTime(date);
        calendar.add(Calendar.DAY_OF_MONTH, expires);
        URL url = amazonS3.generatePresignedUrl(bucketName, objectName, calendar.getTime());
        return url.toString();
    }

    /**
     * Public URL for an object; only works if the bucket allows public access.
     * URL pattern: ${endpoint}/${bucketName}/${objectName}
     *
     * @param bucketName bucket name
     * @param objectName object key
     * @return url
     */
    public String generatePublicUrl(String bucketName, String objectName) {
        return s3Properties.getEndpoint() + "/" + bucketName + "/" + objectName;
    }

    /**
     * Initiate a multipart upload.
     *
     * @param initRequest initiate request
     * @return initiate result
     */
    private InitiateMultipartUploadResult initiateMultipartUpload(InitiateMultipartUploadRequest initRequest) {
        return amazonS3.initiateMultipartUpload(initRequest);
    }

    /**
     * Upload a single part.
     *
     * @param uploadRequest upload request
     * @return upload part result
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart">AWS API Documentation</a>
     */
    private UploadPartResult uploadPart(UploadPartRequest uploadRequest) {
        return amazonS3.uploadPart(uploadRequest);
    }

    /**
     * Complete a multipart upload (merge the parts).
     *
     * @param compRequest complete request
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompleteMultipartUpload">AWS API Documentation</a>
     */
    private CompleteMultipartUploadResult completeMultipartUpload(CompleteMultipartUploadRequest compRequest) {
        return amazonS3.completeMultipartUpload(compRequest);
    }

    /**
     * Abort a multipart upload.
     *
     * @param uploadRequest abort request
     * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortMultipartUpload">AWS API Documentation</a>
     */
    private void abortMultipartUpload(AbortMultipartUploadRequest uploadRequest) {
        amazonS3.abortMultipartUpload(uploadRequest);
    }

    /**
     * Convert a MultipartFile to a local File.
     */
    private File multipartFileToFile(MultipartFile file) throws Exception {
        File toFile = null;
        if (file != null && file.getSize() > 0) {
            toFile = new File(file.getOriginalFilename());
            try (InputStream ins = file.getInputStream();
                 OutputStream os = new FileOutputStream(toFile)) {
                byte[] buffer = new byte[8192];
                int bytesRead;
                while ((bytesRead = ins.read(buffer, 0, 8192)) != -1) {
                    os.write(buffer, 0, bytesRead);
                }
            }
        }
        return toFile;
    }
}
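For completeness, here is a minimal sketch of how the service might be exposed over HTTP, using the spring-boot-starter-web dependency declared earlier. The controller class, its route, and the hard-coded bucket name are illustrative assumptions, not part of the original project:

package com.xx.awss3demo.controller;

import com.xx.awss3demo.service.S3FileService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

// Hypothetical controller showing one way to call S3FileService from a REST endpoint.
@RestController
@RequestMapping("/files")
public class S3FileController {

    @Autowired
    private S3FileService s3FileService;

    /** Upload a file; small files go through putObject, large ones through the multipart path. */
    @PostMapping("/upload")
    public String upload(@RequestParam("file") MultipartFile file) throws Exception {
        String bucket = "lqs3bucket";            // illustrative bucket name
        String key = file.getOriginalFilename();
        if (file.getSize() > 5 * 1024 * 1024) {  // 5 MB parts, the S3 minimum part size
            s3FileService.uploadMultipartFileByPart(file, bucket, key, 5 * 1024 * 1024);
        } else {
            s3FileService.putObject(bucket, key, file.getInputStream(), file.getContentType());
        }
        // return a presigned download link valid for 7 days
        return s3FileService.generatePresignedUrl(bucket, key, 7);
    }
}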

Test methods

package com.xx.awss3demo;

import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.xx.awss3demo.service.S3FileService;
import lombok.extern.log4j.Log4j2;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.List;

@SpringBootTest
@Log4j2
class AwsS3DemoApplicationTests {

    @Autowired
    private S3FileService s3FileService;

    public String bk = "lqs3bucket";

    @Test
    void contextLoads() {
    }

    @Test
    public void bucketTest() {
        s3FileService.createBucket(bk);
        s3FileService.getAllBuckets().forEach(b -> System.out.println(b.getName()));
        s3FileService.removeBucket(bk);
    }

    @Test
    public void objectTest() throws IOException {
        s3FileService.createBucket(bk);
        if (s3FileService.checkObjectExist(bk, "d1/ss/1.txt")) {
            log.info("object already exists");
        }
        s3FileService.putObject(bk, "d1/ss/1.txt",
                new ByteArrayInputStream("hello world xxx".getBytes(StandardCharsets.UTF_8)));
        s3FileService.copyObject(bk, "d1/ss/1.txt", "d1/ss/1_copy.txt");
        S3Object object = s3FileService.getObject(bk, "d1/ss/1_copy.txt");
        byte[] bytes = object.getObjectContent().readAllBytes();
        log.info("content: {}", new String(bytes, StandardCharsets.UTF_8));
        //s3FileService.deleteObject(bk, "1.txt");
    }

    @Test
    public void listTest() {
        List<S3ObjectSummary> objectList = s3FileService.getAllObjectsByPrefix(bk, "/d1", true);
        objectList.forEach(object -> log.info(object.getKey()));
    }

    @Test
    public void genUrlTest() {
        String s = s3FileService.generatePresignedUrl(bk, "1.txt", 7);
        System.out.println(s);
    }
}
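The multipart upload path is not exercised by the tests above. A minimal sketch of a test method that could be added to the class, using Spring's MockMultipartFile from spring-boot-starter-test; the payload size and object key are illustrative, and note that S3 normally requires every part except the last to be at least 5 MB:

    @Test
    public void multipartUploadTest() {
        // 12 MB of dummy data, uploaded in 5 MB parts (the S3 minimum part size)
        byte[] payload = new byte[12 * 1024 * 1024];
        org.springframework.mock.web.MockMultipartFile file =
                new org.springframework.mock.web.MockMultipartFile(
                        "file", "big.bin", "application/octet-stream", payload);
        s3FileService.uploadMultipartFileByPart(file, bk, "d1/big.bin", 5 * 1024 * 1024);
        log.info("exists: {}", s3FileService.checkObjectExist(bk, "d1/big.bin"));
    }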
