Stable Diffusion with Hugging Face: Diffusers, Transformers, Accelerate, Pipelines, and the VAE

Diffusers

A library that offers an implementation of various diffusion models, including text-to-image models.

A library providing implementations of various diffusion models with very concise code; note that in mainland China, reaching huggingface may require a proxy.

Transformers

A Hugging Face library that provides pre-trained deep learning models for natural language processing tasks.

Provides pre-trained deep learning models for natural language processing tasks.

Accelerate

This library, also from Hugging Face, simplifies the execution of deep learning models on multiple devices, such as multiple CPUs, GPUs, or even TPUs.

An acceleration library that speeds up model execution across different hardware (CPUs, GPUs, TPUs).

Invisible_watermark

A package that allows embedding invisible watermarks in images. It is not used directly in the code shown, but could be useful for marking generated images.

Embeds invisible watermarks, which can be used to mark generated images.
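As a rough illustration, the package exposes a WatermarkEncoder through its imwatermark module. A minimal sketch, assuming opencv-python is installed and the file names are just examples:

import cv2
from imwatermark import WatermarkEncoder

## Load an image in BGR layout (what OpenCV uses), e.g. a previously saved generation
bgr = cv2.imread("dog_wearing_hat.png")  # hypothetical file

encoder = WatermarkEncoder()
encoder.set_watermark('bytes', 'SDV1'.encode('utf-8'))  # payload to hide in the image
bgr_marked = encoder.encode(bgr, 'dwtDct')              # 'dwtDct' is one of the supported methods

cv2.imwrite("dog_wearing_hat_watermarked.png", bgr_marked)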

Mediapy

A library that allows you to display and manipulate images and videos in a Jupyter notebook.
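For example, a generated image can be shown inline in a notebook with mediapy. A minimal sketch (the file name is just an example):

import mediapy as media
import numpy as np
from PIL import Image

## Display a previously saved image inline in the notebook
img = Image.open("dog_wearing_hat.png")  # hypothetical file produced earlier
media.show_image(np.array(img))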

Pipelines

Pipelines provide a simple way to run state-of-the-art diffusion models in inference. Most diffusion systems consist of multiple independently-trained models and highly adaptable scheduler components - all of which are needed to have a functioning end-to-end diffusion system.

For example, Stable Diffusion is composed of several independently trained models and components (a sketch for inspecting them on a loaded pipeline follows the list):

  • A conditional U-Net
  • A CLIP text encoder
  • A scheduler component
  • A CLIPFeatureExtractor
  • A safety checker

All of these components are necessary to run Stable Diffusion in inference, even though they were trained or created independently from each other.
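Once a pipeline has been loaded, these components are exposed as attributes of the StableDiffusionPipeline object, so they can be inspected directly. A minimal sketch (attribute names follow the diffusers API; the exact class names printed may vary with the library version):

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained('CompVis/stable-diffusion-v1-4')

## Each independently trained/created component is an attribute of the pipeline
print(type(pipe.unet).__name__)               # conditional U-Net
print(type(pipe.text_encoder).__name__)       # CLIP text encoder
print(type(pipe.tokenizer).__name__)          # CLIP tokenizer
print(type(pipe.scheduler).__name__)          # scheduler component
print(type(pipe.vae).__name__)                # variational autoencoder
print(type(pipe.feature_extractor).__name__)  # CLIP feature extractor / image processor
print(type(pipe.safety_checker).__name__)     # safety checker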

Stable diffusion using Hugging Face

The simplest invocation:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained('CompVis/stable-diffusion-v1-4').to('cuda')

# Initialize a prompt
prompt = "a dog wearing hat"

# Pass the prompt to the pipeline
pipe(prompt).images[0]
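The same pipeline call also accepts generation parameters. A minimal sketch, reusing pipe from above (the parameter names follow the diffusers text-to-image pipeline; the output file name is just an example):

import torch

# Fix the random seed so the result is reproducible
generator = torch.Generator("cuda").manual_seed(42)

prompt = "a dog wearing hat"
image = pipe(
    prompt,
    num_inference_steps=50,   # number of de-noising steps
    guidance_scale=7.5,       # how strongly the image should follow the prompt
    generator=generator,
).images[0]

image.save("dog_wearing_hat.png")  # hypothetical output file name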

Understanding the core modules

The text-to-image flow above is driven by diffusion models; Stable Diffusion itself is a Latent Diffusion Model (LDM). For the underlying concepts, see: 深入浅出讲解Stable Diffusion原理，新手也能看明白 - 知乎 (a Zhihu article explaining how Stable Diffusion works).

A latent diffusion model has three key components:

  1. A text encoder, in this case a CLIP text encoder
  2. An autoencoder, in this case a Variational Auto-Encoder (VAE)
  3. A U-Net

CLIP Text Encoder

Concept

CLIP (Contrastive Language–Image Pre-training) is pre-trained on language–image pairs with a contrastive objective. It takes text as input and produces an output vector stored as an embedding. The CLIP model can embed images and text into the same latent feature space.

No machine learning model can read natural language directly; the text has to be converted into numbers the model understands, called embeddings. This conversion happens in two steps:

1. Tokenizer - splits the text into sub-words and converts them into numbers using a lookup table
2. Token-to-embedding encoder - converts those numerical sub-words into a representation that captures the meaning of the text

Code

import torch, logging
## disable warnings
logging.disable(logging.WARNING)  
## Import the CLIP artifacts 
from transformers import CLIPTextModel, CLIPTokenizer
## Initiating tokenizer and encoder.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14", torch_dtype=torch.float16)
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14", torch_dtype=torch.float16).to("cuda")
prompt = ["a dog wearing hat"]
tok =tokenizer(prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt") 
print(tok.input_ids.shape)
tok

The tokenizer returns a dictionary with two entries:

1. input_ids - A tensor of size 1x77, since one prompt was passed and padded to the max length of 77. 49406 is the start-of-text token; 320 is "a", 1929 is "dog", 3309 is "wearing", 3801 is "hat"; 49407 is the end-of-text token, repeated up to the pad length of 77.
2. attention_mask - 1 represents an actual token and 0 represents padding.

for token in list(tok.input_ids[0,:7]):
    print(f"{token}:{tokenizer.convert_ids_to_tokens(int(token))}")

Next, the token-to-embedding encoder converts the input_ids into embeddings:

emb = text_encoder(tok.input_ids.to("cuda"))[0].half()
print(f"Shape of embedding : {emb.shape}")
emb

As we can see, each 1x77 tokenized input is converted into a 1x77x768 embedding; in other words, each input token is represented as a point in a 768-dimensional space.

Role in the Stable Diffusion pipeline

Stable Diffusion uses the CLIP-trained text encoder to convert the input prompt into embeddings, which become one of the inputs to the U-Net. On the other side, CLIP itself is trained with both an image encoder and a text encoder, so that matching images and texts end up with similar embeddings in the shared latent space; this notion of similarity is defined precisely by the contrastive objective.
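As a rough illustration of that contrastive objective, the full CLIP model (not just the text encoder used by Stable Diffusion) can score how well an image matches several candidate captions. A minimal sketch, assuming a local image file cat.jpg exists:

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("cat.jpg")  # hypothetical local image
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

## Higher probability means the caption and the image are closer in the shared latent space
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(texts, probs[0].tolist())))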

VAE — Variational Auto-Encoder

Concept

An autoencoder consists of two parts:
1. The encoder takes an image as input and converts it into a low-dimensional latent representation
2. The decoder takes the latent representation and converts it back into an image

Intuitively, the encoder works like a shredder that breaks the image into a few fragments, and the decoder reassembles the original image from those fragments.

Code

import torch
## To import an image from a URL
from fastdownload import FastDownload  
## Imaging  library 
from PIL import Image 
from torchvision import transforms as tfms  
## Basic libraries 
import numpy as np 
import matplotlib.pyplot as plt 
%matplotlib inline  
## Loading a VAE model 
from diffusers import AutoencoderKL 
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", torch_dtype=torch.float16).to("cuda")
def load_image(p):
    '''Function to load images from a defined path'''
    return Image.open(p).convert('RGB').resize((512,512))

def pil_to_latents(image):
    '''Function to convert image to latents'''
    init_image = tfms.ToTensor()(image).unsqueeze(0) * 2.0 - 1.0
    init_image = init_image.to(device="cuda", dtype=torch.float16)
    init_latent_dist = vae.encode(init_image).latent_dist.sample() * 0.18215
    return init_latent_dist

def latents_to_pil(latents):
    '''Function to convert latents to images'''
    latents = (1 / 0.18215) * latents
    with torch.no_grad():
        image = vae.decode(latents).sample
    image = (image / 2 + 0.5).clamp(0, 1)
    image = image.detach().cpu().permute(0, 2, 3, 1).numpy()
    images = (image * 255).round().astype("uint8")
    pil_images = [Image.fromarray(image) for image in images]
    return pil_images
p = FastDownload().download('https://lafeber.com/pet-birds/wp-content/uploads/2018/06/Scarlet-Macaw-2.jpg')
img = load_image(p)
print(f"Dimension of this image: {np.array(img).shape}")
img

Now use the VAE encoder to compress the image:

latent_img = pil_to_latents(img)
print(f"Dimension of this latent representation: {latent_img.shape}")

As we can see, the VAE compresses a 3 x 512 x 512 image into a 4 x 64 x 64 latent representation — 786,432 values down to 16,384, a 48x compression. Let's look at the 4 latent channels:

fig, axs = plt.subplots(1, 4, figsize=(16, 4))
for c in range(4):
    axs[c].imshow(latent_img[0][c].detach().cpu(), cmap='Greys')

In theory, these four channel maps still retain much of the original image's information. Next, let's use the decoder to decompress the latents back into an image.

decoded_img = latents_to_pil(latent_img)
decoded_img[0]

As we can see, the VAE decoder can reconstruct the original image from the 48x-compressed latent representation.

Look closely at the eyes in the two images: there are subtle differences, so the round trip is not lossless.

Role in the Stable Diffusion pipeline

Stable Diffusion could in principle run the diffusion process without a VAE (directly in pixel space), but the VAE greatly reduces the computation needed to generate high-resolution images. The latent diffusion model performs diffusion in the latent space produced by the VAE encoder, and once the diffusion process has produced the desired latent output, the VAE decoder converts it back into a high-resolution image. To get a better intuitive understanding of variational autoencoders and how they are trained, read this blog by Irhum Shafkat.

U-Net model

Concept

The U-Net model takes two inputs:
1. Noisy latents or noise - noisy latents are produced by the VAE encoder (when an initial image is provided) with noise added, or pure noise is used when generating a new image solely from a textual description
2. Text embeddings - CLIP-based embeddings generated from the input textual prompt

The output of the U-Net model is the predicted noise residual contained in the input noisy latents. In other words, it predicts the noise, which is subtracted from the noisy latents to recover de-noised latents.

Code

from diffusers import UNet2DConditionModel, LMSDiscreteScheduler
## Initializing a scheduler
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
## Setting number of sampling steps
scheduler.set_timesteps(51)
## Initializing the U-Net model
unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet", torch_dtype=torch.float16).to("cuda")

Besides the U-Net, the code above also initializes a scheduler. The scheduler determines how much noise the latents carry at each step of the diffusion process.

During sampling, the noise level starts high and gradually decreases, as both the noise schedule below and the noised images that follow show.
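A quick way to see that schedule is to plot the per-step noise levels that the LMSDiscreteScheduler exposes through its sigmas attribute (a minimal sketch, reusing the scheduler initialized above):

import matplotlib.pyplot as plt

## Plot the noise level (sigma) used at each sampling step;
## it starts high and falls towards zero at the end of sampling
plt.figure(figsize=(6, 4))
plt.plot(scheduler.sigmas.cpu())
plt.xlabel("sampling step")
plt.ylabel("sigma (noise level)")
plt.show()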

noise = torch.randn_like(latent_img) # Random noise
fig, axs = plt.subplots(2, 3, figsize=(16, 12))
for c, sampling_step in enumerate(range(0, 51, 10)):
    encoded_and_noised = scheduler.add_noise(latent_img, noise, timesteps=torch.tensor([scheduler.timesteps[sampling_step]]))
    axs[c//3][c%3].imshow(latents_to_pil(encoded_and_noised)[0])
    axs[c//3][c%3].set_title(f"Step - {sampling_step}")

Let's see how the U-Net removes noise from the image. First, add some noise:

encoded_and_noised = scheduler.add_noise(latent_img, noise, timesteps=torch.tensor([scheduler.timesteps[40]]))
latents_to_pil(encoded_and_noised)[0]

Now run the U-Net and try to de-noise it:

## Unconditional textual prompt
prompt = [""]
## Using clip model to get embeddings
text_input = tokenizer(prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt")
with torch.no_grad():
    text_embeddings = text_encoder(text_input.input_ids.to("cuda"))[0]
## Using U-Net to predict noise
latent_model_input = torch.cat([encoded_and_noised.to("cuda").float()]).half()
with torch.no_grad():
    noise_pred = unet(latent_model_input, 40, encoder_hidden_states=text_embeddings)["sample"]
## Visualize after subtracting noise
latents_to_pil(encoded_and_noised - noise_pred)[0]

As shown above, much of the noise has been removed.
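Note that directly subtracting the predicted noise is only an approximation for visualization. In the real sampling loop the scheduler uses the predicted noise to compute the latents for the previous, less noisy timestep. A minimal sketch of a single such step, reusing noise_pred and encoded_and_noised from above (scheduler.step returns the updated latents in prev_sample):

## One scheduler-driven denoising step (sketch)
ts = scheduler.timesteps[40]   # the same timestep used above
with torch.no_grad():
    prev_latents = scheduler.step(noise_pred, ts, encoded_and_noised.to("cuda").half()).prev_sample
## The result is the latent for the previous (slightly less noisy) step
latents_to_pil(prev_latents)[0]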

Role in the pipeline

Latent diffusion uses the U-Net to gradually de-noise in latent space until the desired output is reached; at each step, the amount of noise in the latents is reduced towards the final de-noised output. The U-Net architecture was first proposed in this paper. The U-Net consists of an encoder and a decoder built from ResNet blocks. The Stable Diffusion U-Net also has cross-attention layers that condition the output on the input text; these cross-attention layers are added to both the encoder and the decoder parts of the U-Net, usually between ResNet blocks. You can learn more about this U-Net architecture here.

Putting it all together

Now let's combine the CLIP text encoder, the VAE, and the U-Net and walk through the full text-to-image flow.

Recap: the diffusion process

The Stable Diffusion model takes a text prompt and a seed as input. The text prompt is converted by CLIP into a 77x768 array, and the seed is used to generate Gaussian noise of size 4x64x64, which becomes the first latent image representation.

Note — you will notice an additional leading dimension (1x) in shapes such as 1x77x768 for the text embedding; it represents a batch size of 1.

Next, the U-Net iteratively de-noises the random latent image representation while conditioning on the text embeddings. The output of the U-Net is the predicted noise residual, which a scheduler algorithm then uses to compute the conditioned latents. This de-noising and text-conditioning process is repeated N times (we will use 50) to obtain a better latent image representation.

Once this process is complete, the latent image representation (4x64x64) is decoded by the VAE decoder to retrieve the final output image (3x512x512).

Note — This iterative denoising is an important step for getting a good output image. Typical steps are in the range of 30–80. However, there are recent papers that claim to reduce it to 4–5 steps by using distillation techniques.

Code

import torch, logging
## disable warnings
logging.disable(logging.WARNING)  
## Imaging  library
from PIL import Image
from torchvision import transforms as tfms
## Basic libraries
import numpy as np
from tqdm.auto import tqdm
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import display
import shutil
import os
## For video display
from IPython.display import HTML
from base64 import b64encode
## Import the CLIP artifacts
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel, LMSDiscreteScheduler
## Initiating tokenizer and encoder.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14", torch_dtype=torch.float16)
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14", torch_dtype=torch.float16).to("cuda")
## Initiating the VAE
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", torch_dtype=torch.float16).to("cuda")
## Initializing a scheduler and Setting number of sampling steps
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
scheduler.set_timesteps(50)
## Initializing the U-Net model
unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet", torch_dtype=torch.float16).to("cuda")
## Helper functions
def load_image(p):
    '''Function to load images from a defined path'''
    return Image.open(p).convert('RGB').resize((512,512))

def pil_to_latents(image):
    '''Function to convert image to latents'''
    init_image = tfms.ToTensor()(image).unsqueeze(0) * 2.0 - 1.0
    init_image = init_image.to(device="cuda", dtype=torch.float16)
    init_latent_dist = vae.encode(init_image).latent_dist.sample() * 0.18215
    return init_latent_dist

def latents_to_pil(latents):
    '''Function to convert latents to images'''
    latents = (1 / 0.18215) * latents
    with torch.no_grad():
        image = vae.decode(latents).sample
    image = (image / 2 + 0.5).clamp(0, 1)
    image = image.detach().cpu().permute(0, 2, 3, 1).numpy()
    images = (image * 255).round().astype("uint8")
    pil_images = [Image.fromarray(image) for image in images]
    return pil_images

def text_enc(prompts, maxlen=None):
    '''A function to take a textual prompt and convert it into embeddings'''
    if maxlen is None:
        maxlen = tokenizer.model_max_length
    inp = tokenizer(prompts, padding="max_length", max_length=maxlen, truncation=True, return_tensors="pt")
    return text_encoder(inp.input_ids.to("cuda"))[0].half()

The following function is a simplified version of what the StableDiffusionPipeline generation loop does internally; it is written out to show each step of the process.

def prompt_2_img(prompts, g=7.5, seed=100, steps=70, dim=512, save_int=False):
    """Diffusion process to convert prompt to image"""
    # Defining batch size
    bs = len(prompts)
    # Converting textual prompts to embedding
    text = text_enc(prompts)
    # Adding an unconditional prompt, helps in the generation process
    uncond = text_enc([""] * bs, text.shape[1])
    emb = torch.cat([uncond, text])
    # Setting the seed
    if seed:
        torch.manual_seed(seed)
    # Initiating random noise
    latents = torch.randn((bs, unet.in_channels, dim//8, dim//8))
    # Setting number of steps in scheduler
    scheduler.set_timesteps(steps)
    # Adding noise to the latents
    latents = latents.to("cuda").half() * scheduler.init_noise_sigma
    # Iterating through defined steps
    for i, ts in enumerate(tqdm(scheduler.timesteps)):
        # We need to scale the i/p latents to match the variance
        inp = scheduler.scale_model_input(torch.cat([latents] * 2), ts)
        # Predicting noise residual using U-Net
        with torch.no_grad():
            u, t = unet(inp, ts, encoder_hidden_states=emb).sample.chunk(2)
        # Performing guidance
        pred = u + g*(t-u)
        # Conditioning the latents
        latents = scheduler.step(pred, ts, latents).prev_sample
        # Saving intermediate images
        if save_int:
            if not os.path.exists(f'./steps'):
                os.mkdir(f'./steps')
            latents_to_pil(latents)[0].save(f'steps/{i:04}.jpeg')
    # Returning the latent representation to output an image of 3x512x512
    return latents_to_pil(latents)

Finally, let's use it:

images = prompt_2_img(["A dog wearing a hat", "a photograph of an astronaut riding a horse"], save_int=False)
for img in images:
    display(img)

def prompt_2_img(prompts, g=7.5, seed=100, steps=70, dim=512, save_int=False):

Parameter explanation:
1. prompts - the text prompts used for text-to-image generation
2. g or guidance scale - a value that determines how closely the image should follow the textual prompt. It is related to a technique called classifier-free guidance, which improves the quality of the generated images; the higher the guidance scale, the closer the result will be to the textual prompt (see the usage sketch after this list)
3. seed - sets the seed from which the initial Gaussian noisy latents are generated
4. steps - number of de-noising steps taken to generate the final latents
5. dim - dimension of the image; for simplicity we currently generate square images, so only one value is needed
6. save_int - an optional boolean flag; if set, intermediate latent images are saved, which helps with visualization
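For example, the effect of the guidance scale can be explored by calling prompt_2_img with different g values while keeping the seed fixed (a usage sketch; the file names are just examples):

## Same prompt and seed, different guidance scales
for g in (3, 7.5, 12):
    img = prompt_2_img(["a dog wearing hat"], g=g, steps=50, seed=100)[0]
    img.save(f"dog_guidance_{g}.jpeg")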

You can also compare these parameters with the corresponding settings exposed in the Stable Diffusion webui interface.

Visualizing the whole process

References

https://towardsdatascience.com/stable-diffusion-using-hugging-face-501d8dbdd8

https://huggingface.co/blog/stable_diffusion

前言 本篇在讲什么 理解Lua的package 本篇需要什么 对Lua语法有简单认知 对C语法有简单认知 依赖Visual Studio工具 本篇的特色 具有全流程的图文教学 重实践&#xff0c;轻理论&#xff0c;快速上手 提供全流程的源码内容 ★提高阅读体验★ &#x1f449; ♠ 一级…