[PyTorch][Chapter 22][Hung-yi Lee Deep Learning][WGAN] Hands-on Part 3

Preface:

This post walks through two WGAN examples:

1. A WGAN fitting a Gaussian mixture model
2. A WGAN generating MNIST handwritten digits

WGAN is fairly troublesome to train; getting good results means hand-tuning several hyperparameters:

1. the dimension of the noise vector
2. the learning rate
3. the generator and discriminator architectures
4. batchsz and num_epochs


Contents:

1. Google Colab
2. Loss function
3. Gaussian mixture model (WGAN implementation)
4. MNIST handwritten digit recognition (WGAN implementation)

1. Google Colab

1.1 Open Google Drive:
    https://drive.google.com/drive/my-drive

1.2 Create a new wgan.ipynb file and drag the corresponding Python scripts into the same directory.

1.3 Open Colab:
    https://colab.research.google.com/drive/
    and create a new notebook.

1.4 Run main.py from Colab:


```python
from google.colab import drive
import os

drive.mount('/content/drive')
# change into the Drive folder that contains main.py and model.py
os.chdir('/content/drive/My Drive/wgan.ipynb/')
%run main.py
```


2. WGAN loss function

2.1 From the Wasserstein constraint to the WGAN constraint

WGAN is derived from the Wasserstein distance. In the Kantorovich dual form, the original constraint is

    f(x) + g(y) ≤ c(x, y)

Any f that satisfies a k-Lipschitz condition (taking g = -f) automatically satisfies this original constraint.

The constraint is still hard to enforce directly, so in practice one of two schemes is used to approximate it: weight clipping or gradient penalty.
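A short derivation of that claim (a sketch, using the cost c(x, y) = ||x − y|| and the standard choice g = −f):

```latex
\text{Assume } f \text{ is 1-Lipschitz: } \|f(x)-f(y)\| \le \|x-y\|,\ \forall x, y,
\text{ and set } g = -f. \\
\text{Then } f(x) + g(y) = f(x) - f(y) \le \|x - y\| = c(x, y).
```

So restricting the dual potential to 1-Lipschitz functions f (with g = −f) automatically satisfies the original constraint f(x) + g(y) ≤ c(x, y).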

2.2 Weight clipping

This is an engineering trick with no theoretical basis.

The K-Lipschitz condition is:

    ||f(x) - f(y)|| ≤ K ||x - y||,  ∀ x, y    (K = 1 for 1-Lipschitz)

A 1024×1024 grayscale image has 256^(1024×1024) possible states, so enforcing the condition over every pair of states is intractable. The early workaround was weight clipping: after each gradient-descent parameter update, every parameter w is clamped as

    w = {  c,   if w > c
          -c,   if w < -c
           w,   otherwise

Bounding the range of w bounds the outputs f(x), f(y), and this works reasonably well in practice.
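A minimal sketch of the piecewise clipping rule above (pure Python, illustrative names; in a real PyTorch critic you would simply call `p.data.clamp_(-c, c)` on each parameter after `optimizer.step()`):

```python
def weight_clip(w, c):
    """Piecewise clipping rule: clamp a single weight into [-c, c]."""
    if w > c:
        return c
    if w < -c:
        return -c
    return w

# clip an illustrative weight vector with threshold c = 0.01
weights = [0.5, -0.3, 0.004, -0.002]
clipped = [weight_clip(w, 0.01) for w in weights]
print(clipped)  # → [0.01, -0.01, 0.004, -0.002]
```

The threshold c is exactly the hand-tuned hyperparameter whose fragility motivates the gradient penalty discussed next.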

2.3 Gradient penalty

This, too, is an engineering heuristic without a rigorous theoretical basis.

The problem with weight clipping: it makes vanishing or exploding gradients very easy to trigger. The discriminator is a multi-layer network. If the clipping threshold is set slightly too small, the gradient shrinks a little at every layer and decays exponentially over many layers; set slightly too large, it grows a little at every layer and explodes exponentially. Only a threshold that is "just right" gives the generator well-scaled gradients, and in practice that sweet spot can be quite narrow, which makes tuning painful.
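Gradient penalty replaces the hard clip with a soft penalty that pushes the critic's gradient norm toward 1 at points interpolated between real and generated samples. The WGAN-GP critic objective, which the training code in the following sections implements, is:

```latex
L = \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] - \mathbb{E}_{x \sim P_r}[D(x)]
  + \lambda \, \mathbb{E}_{\hat{x}}\!\left[\left(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1\right)^2\right],
\qquad \hat{x} = \epsilon x + (1-\epsilon)\tilde{x},\ \epsilon \sim U[0,1]
```

Here P_r is the real distribution, P_g the generator's distribution, and λ the penalty weight (LAMBDA = 0.2 in the toy example below, c_lambda = 10 in the MNIST example).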


3. Gaussian mixture model (WGAN implementation)

3.1 Model code

model.py:

```python
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 19 10:50:31 2024
@author: chengxf2
"""
import torch
from torch import nn
from torchsummary import summary


class Generator(nn.Module):
    def __init__(self, z_dim=2, h_dim=400):
        super(Generator, self).__init__()
        # z: [batch, z_dim]
        self.net = nn.Sequential(
            nn.Linear(z_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, 2)
        )

    def forward(self, z):
        output = self.net(z)
        return output


class Discriminator(nn.Module):
    def __init__(self, input_dim, h_dim):
        super(Discriminator, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, 1),
            nn.Tanh()
        )

    def forward(self, x):
        out = self.net(x)
        return out.view(-1)


def model_summary():
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    # summary expects (input_size,), e.g. [channel, w, h] for images
    summary(Generator(2, 100).to(device), (1, 2), batch_size=5)
    print(Generator(100))
    print("\n Discriminator")
    summary(Discriminator(2, 100).to(device), (2, 2))
```

3.2 Training code

main.py:

```python
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 19 11:06:37 2024
@author: chengxf2
"""
import torch
from torch import autograd, optim
import numpy as np
import visdom
import random
import matplotlib.pyplot as plt
from model import Generator, Discriminator

batchsz = 512
H_dim = 400
viz = visdom.Visdom()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')


def data_generator():
    # centers of the 8 Gaussians
    scale = 2
    centers = [
        (1, 0), (-1, 0), (0, 1), (0, -1),
        (1. / np.sqrt(2), 1. / np.sqrt(2)),
        (1. / np.sqrt(2), -1. / np.sqrt(2)),
        (-1. / np.sqrt(2), 1. / np.sqrt(2)),
        (-1. / np.sqrt(2), -1. / np.sqrt(2))
    ]
    # scale them up
    centers = [(scale * x, scale * y) for x, y in centers]
    while True:
        dataset = []
        for i in range(batchsz):
            # sample a point near the origin
            point = np.random.randn(2) * 0.02
            # pick one Gaussian at random and shift the point to its center
            center = random.choice(centers)
            point[0] += center[0]
            point[1] += center[1]
            dataset.append(point)
        dataset = np.array(dataset).astype(np.float32)
        dataset /= 1.414
        yield dataset


def generate_image(D, G, xr, epoch):
    """
    Generates and saves a plot of the true distribution, the generator, and the
    critic.
    """
    N_POINTS = 128
    RANGE = 3
    plt.clf()

    points = np.zeros((N_POINTS, N_POINTS, 2), dtype='float32')
    points[:, :, 0] = np.linspace(-RANGE, RANGE, N_POINTS)[:, None]
    points[:, :, 1] = np.linspace(-RANGE, RANGE, N_POINTS)[None, :]
    points = points.reshape((-1, 2))  # (16384, 2)

    # draw contour of the critic
    with torch.no_grad():
        points = torch.Tensor(points).to(device)   # [16384, 2]
        disc_map = D(points).cpu().numpy()         # [16384]
    x = y = np.linspace(-RANGE, RANGE, N_POINTS)
    cs = plt.contour(x, y, disc_map.reshape((len(x), len(y))).transpose())
    plt.clabel(cs, inline=1, fontsize=10)
    plt.colorbar()

    # draw real and generated samples
    with torch.no_grad():
        z = torch.randn(batchsz, 2).to(device)     # [b, 2]
        samples = G(z).cpu().numpy()               # [b, 2]
    plt.scatter(xr[:, 0], xr[:, 1], c='green', marker='.')
    plt.scatter(samples[:, 0], samples[:, 1], c='red', marker='+')

    viz.matplot(plt, win='contour', opts=dict(title='p(x):%d' % epoch))


def gradient_penalty(D, xr, xf):
    LAMBDA = 0.2
    t = torch.rand(batchsz, 1).to(device)
    # [b, 1] => [b, 2]
    t = t.expand_as(xf)
    # interpolate between real and fake samples
    mid = t * xr + (1 - t) * xf
    # we need gradients w.r.t. mid
    mid.requires_grad_()
    pred = D(mid)
    grad = autograd.grad(outputs=pred, inputs=mid,
                         grad_outputs=torch.ones_like(pred),
                         create_graph=True, retain_graph=True,
                         only_inputs=True)[0]
    gp = torch.pow(grad.norm(2, dim=1) - 1, 2).mean()
    return gp * LAMBDA


def gen():
    # load the trained generator and sample from it
    z_dim = 2
    h_dim = 400
    model = Generator(z_dim, h_dim).to(device)
    state_dict = torch.load('Generator.pt')
    model.load_state_dict(state_dict)
    model.eval()
    z = torch.randn(batchsz, 2).to(device)
    xf = model(z)
    print(xf)


def main():
    z_dim = 2
    h_dim = 400
    input_dim = 2
    num_epochs = 2  # small for a quick demo; increase for real training
    np.random.seed(23)
    torch.manual_seed(23)
    data_iter = data_generator()

    G = Generator(z_dim, h_dim).to(device)
    D = Discriminator(input_dim, h_dim).to(device)

    optim_G = optim.Adam(G.parameters(), lr=5e-4, betas=(0.5, 0.9))
    optim_D = optim.Adam(D.parameters(), lr=5e-4, betas=(0.5, 0.9))

    viz.line([[0, 0]], [0], win='loss',
             opts=dict(title='loss', legend=['D', 'G']))

    for epoch in range(num_epochs):
        # 1. train the Discriminator first
        for _ in range(5):
            # 1.1 train on real data
            x = next(data_iter)
            xr = torch.from_numpy(x).to(device)
            # [batch_size, 2] => [batch]
            predr = D(xr)
            # maximize predr <=> minimize lossr
            lossr = -predr.mean()

            # 1.2 train on fake data
            z = torch.randn(batchsz, 2).to(device)
            xf = G(z).detach()  # like tf.stop_gradient()
            predf = D(xf)
            lossf = predf.mean()

            # 1.3 gradient penalty
            gp = gradient_penalty(D, xr, xf.detach())

            # 1.4 aggregate all terms
            loss_D = lossr + lossf + gp

            # 1.5 optimize
            optim_D.zero_grad()
            loss_D.backward()
            optim_D.step()

        # 2. train the Generator
        z = torch.randn(batchsz, 2).to(device)
        xf = G(z)
        predf = D(xf)
        loss_G = -predf.mean()
        optim_G.zero_grad()
        loss_G.backward()
        optim_G.step()

        if epoch % 100 == 0:
            viz.line([[loss_D.item(), loss_G.item()]], [epoch],
                     win='loss', update='append')
            print(f"loss_D {loss_D.item()} \t loss_G {loss_G.item()}")
            generate_image(D, G, x, epoch)

    print("\n train end")
    # save only the model parameters
    torch.save(G.state_dict(), 'Generator.pt')
    torch.save(D.state_dict(), 'Discriminator.pt')


# reference: http://www.manongjc.com/detail/42-hvxyfyduytmpwzz.html
main()
gen()
```


4. MNIST handwritten digit recognition

4.1 Model code

model.py:

```python
# -*- coding: utf-8 -*-
"""
Created on Mon Mar 18 10:19:26 2024
@author: chengxf2
"""
import torch
import torch.nn as nn
from torchsummary import summary


class Generator(nn.Module):
    def __init__(self, z_dim=10, im_chan=1, hidden_dim=64):
        super(Generator, self).__init__()
        self.z_dim = z_dim
        self.gen = nn.Sequential(
            self.layer1(z_dim, hidden_dim * 4, kernel_size=3, stride=2),
            self.layer1(hidden_dim * 4, hidden_dim * 2, kernel_size=4, stride=1),
            self.layer1(hidden_dim * 2, hidden_dim, kernel_size=3, stride=2),
            self.layer2(hidden_dim, im_chan, kernel_size=4, stride=2)
        )

    def layer1(self, input_channel, output_channel, kernel_size, stride=1, padding=0):
        # inplace=True computes the ReLU in the input's own memory
        return nn.Sequential(
            nn.ConvTranspose2d(input_channel, output_channel, kernel_size, stride, padding),
            nn.BatchNorm2d(output_channel),
            nn.ReLU(inplace=True),
        )

    def layer2(self, input_channel, output_channel, kernel_size, stride=1, padding=0):
        # tanh squashes the output into (-1, 1)
        return nn.Sequential(
            nn.ConvTranspose2d(input_channel, output_channel, kernel_size, stride, padding),
            nn.Tanh()
        )

    def forward(self, noise):
        '''
        Parameters
        ----------
        noise : [batch, z_dim]

        Returns
        -------
        images of shape [batch, channel, width, height]
        '''
        x = noise.view(len(noise), self.z_dim, 1, 1)
        return self.gen(x)


class Discriminator(nn.Module):
    def __init__(self, im_chan=1, hidden_dim=16):
        super(Discriminator, self).__init__()
        self.disc = nn.Sequential(
            self.block1(im_chan, hidden_dim * 4, kernel_size=4, stride=2),
            self.block1(hidden_dim * 4, hidden_dim * 8, kernel_size=4, stride=2),
            self.block2(hidden_dim * 8, 1, kernel_size=4, stride=2),
        )

    def block1(self, input_channel, output_channel, kernel_size, stride=1, padding=0):
        return nn.Sequential(
            nn.Conv2d(input_channel, output_channel, kernel_size, stride, padding),
            nn.BatchNorm2d(output_channel),
            nn.LeakyReLU(0.2, inplace=True)
        )

    def block2(self, input_channel, output_channel, kernel_size, stride=1, padding=0):
        return nn.Sequential(
            nn.Conv2d(input_channel, output_channel, kernel_size, stride, padding),
        )

    def forward(self, image):
        return self.disc(image)


def model_summary():
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    summary(Generator(100).to(device), (100,))
    print(Generator(100))
    print("\n Discriminator")
    summary(Discriminator().to(device), (1, 28, 28))


if __name__ == "__main__":
    # only print the summaries when run directly, not when imported by main.py
    model_summary()
```

4.2 Training code

main.py:

```python
# -*- coding: utf-8 -*-
"""
Created on Mon Mar 18 10:37:21 2024
@author: chengxf2
"""
import time
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.utils import make_grid
from model import Generator, Discriminator


def get_noise(n_samples, z_dim, device='cpu'):
    return torch.randn(n_samples, z_dim, device=device)


def weights_init(m):
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        torch.nn.init.normal_(m.weight, 0.0, 0.02)
    if isinstance(m, nn.BatchNorm2d):
        torch.nn.init.normal_(m.weight, 0.0, 0.02)
        torch.nn.init.constant_(m.bias, 0)


def gradient_penalty(gradient):
    # gradient penalty term: push the gradient norm toward 1
    gradient = gradient.view(len(gradient), -1)
    gradient_norm = gradient.norm(2, dim=1)
    penalty = torch.mean((gradient_norm - 1) ** 2)
    return penalty


def get_gen_loss(crit_fake_pred):
    # generator loss
    gen_loss = -1. * torch.mean(crit_fake_pred)
    return gen_loss


def get_crit_loss(crit_fake_pred, crit_real_pred, gp, c_lambda):
    # critic loss; the original objective is negated to become a minimization
    crit_loss = torch.mean(crit_fake_pred) - torch.mean(crit_real_pred) + c_lambda * gp
    return crit_loss


def get_gradient(crit, real, fake, epsilon):
    # random interpolation between real and fake samples
    mixed_images = real * epsilon + fake * (1 - epsilon)
    mixed_scores = crit(mixed_images)
    gradient = torch.autograd.grad(
        inputs=mixed_images,
        outputs=mixed_scores,
        grad_outputs=torch.ones_like(mixed_scores),
        create_graph=True,
        retain_graph=True,
    )[0]
    return gradient


def show_new_gen_images(tensor_img, num_img=25):
    tensor_img = (tensor_img + 1) / 2
    unflat_img = tensor_img.detach().cpu()
    img_grid = make_grid(unflat_img[:num_img], nrow=5)
    plt.imshow(img_grid.permute(1, 2, 0).squeeze(), cmap='gray')
    plt.title("gen image")
    plt.show()


def show_tensor_images(image_tensor, num_images=25, size=(1, 28, 28),
                       show_fig=False, epoch=0):
    # the generator outputs values in [-1, 1]
    image_unflat = image_tensor.detach().cpu().view(-1, *size)
    image_grid = make_grid(image_unflat[:num_images], nrow=5)
    plt.axis('off')
    plt.title(f"Epoch: {epoch}")
    plt.imshow(image_grid.permute(1, 2, 0).squeeze())
    plt.show()


def show_loss(G_mean_losses, C_mean_losses):
    plt.figure(figsize=(10, 5))
    plt.title("Generator and Discriminator Loss During Training")
    plt.plot(G_mean_losses, label="G-Loss")
    plt.plot(C_mean_losses, label="C-Loss")
    plt.xlabel("iterations")
    plt.ylabel("Loss")
    plt.legend()
    plt.show()


def train():
    z_dim = 32
    batch_size = 128
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    lr = 1e-4
    beta_1 = 0.0
    beta_2 = 0.9

    # load the MNIST dataset
    print("\n init 1: MNIST Dataset Load")
    fixed_noise = get_noise(batch_size, z_dim, device=device)
    train_transform = transforms.Compose([transforms.ToTensor()])
    dataloader = DataLoader(
        MNIST('.', download=True, transform=train_transform),
        batch_size=batch_size, shuffle=True)

    print("\n init 2: Loaded Data Visualization")
    start = time.time()
    images, labels = next(iter(dataloader))
    print('Time is {} sec'.format(time.time() - start))
    plt.figure(figsize=(8, 8))
    plt.axis("off")
    plt.title("Training Images")
    plt.imshow(np.transpose(
        make_grid(images.to(device), padding=2, normalize=True).cpu(), (1, 2, 0)))
    print('Shape of loading one batch:', images.shape)
    print('Total no. of batches present in trainloader:', len(dataloader))

    # models and optimizers
    gen = Generator(z_dim).to(device)
    gen_opt = torch.optim.Adam(gen.parameters(), lr=lr, betas=(beta_1, beta_2))
    crit = Discriminator().to(device)
    crit_opt = torch.optim.Adam(crit.parameters(), lr=lr, betas=(beta_1, beta_2))
    gen = gen.apply(weights_init)
    crit = crit.apply(weights_init)

    print("\n -------- train ------------")
    n_epochs = 10
    total_steps = 0
    start_time = time.time()
    generator_losses = []
    Discriminator_losses = []
    C_mean_losses = []
    G_mean_losses = []
    c_lambda = 10
    crit_repeats = 5

    for epoch in range(n_epochs):
        cur_step = 0
        start = time.time()
        for real, _ in dataloader:
            cur_batch_size = len(real)
            real = real.to(device)
            mean_iteration_Discriminator_loss = 0

            for _ in range(crit_repeats):
                ### update the critic ###
                crit_opt.zero_grad()
                fake_noise = get_noise(cur_batch_size, z_dim, device=device)
                fake = gen(fake_noise)
                crit_fake_pred = crit(fake.detach())
                crit_real_pred = crit(real)

                epsilon = torch.rand(len(real), 1, 1, 1,
                                     device=device, requires_grad=True)
                gradient = get_gradient(crit, real, fake.detach(), epsilon)
                gp = gradient_penalty(gradient)
                crit_loss = get_crit_loss(crit_fake_pred, crit_real_pred, gp, c_lambda)

                # keep track of the average critic loss in this batch
                mean_iteration_Discriminator_loss += crit_loss.item() / crit_repeats
                crit_loss.backward(retain_graph=True)
                crit_opt.step()
            Discriminator_losses += [mean_iteration_Discriminator_loss]

            ### update the generator ###
            gen_opt.zero_grad()
            fake_noise_2 = get_noise(cur_batch_size, z_dim, device=device)
            fake_2 = gen(fake_noise_2)
            crit_fake_pred = crit(fake_2)
            gen_loss = get_gen_loss(crit_fake_pred)
            gen_loss.backward()
            gen_opt.step()

            # keep track of the average generator loss
            generator_losses += [gen_loss.item()]
            cur_step += 1
            total_steps += 1

            print_val = f"Epoch: {epoch}/{n_epochs} Steps:{cur_step}/{len(dataloader)}\t"
            print_val += f"Epoch_Run_Time: {(time.time()-start):.6f}\t"
            print_val += f"Loss_C : {mean_iteration_Discriminator_loss:.6f}\t"
            print_val += f"Loss_G : {gen_loss:.6f}\t"
            print(print_val, end='\r', flush=True)

        gen_mean = sum(generator_losses[-cur_step:]) / cur_step
        crit_mean = sum(Discriminator_losses[-cur_step:]) / cur_step
        C_mean_losses.append(crit_mean)
        G_mean_losses.append(gen_mean)

        print_val = f"Epoch: {epoch}/{n_epochs} Total Steps:{total_steps}\t"
        print_val += f"Total_Time : {(time.time() - start_time):.6f}\t"
        print_val += f"Loss_C : {mean_iteration_Discriminator_loss:.6f}\t"
        print_val += f"Loss_G : {gen_loss:.6f}\t"
        print_val += f"Loss_C_Mean : {crit_mean:.6f}\t"
        print_val += f"Loss_G_Mean : {gen_mean:.6f}\t"
        print(print_val)

        # sample from the fixed noise to visualize progress
        fake = gen(fixed_noise)
        show_tensor_images(fake, show_fig=True, epoch=epoch)

    print("\n----- training finished --------------")
    num_image = 25
    noise = get_noise(num_image, z_dim, device=device)
    # eval mode: BatchNorm/Dropout switch to inference behavior
    gen.eval()
    crit.eval()
    with torch.no_grad():
        fake_img = gen(noise)
    show_new_gen_images(fake_img.reshape(num_image, 1, 28, 28))


train()
```

References:

- PyTorch-Wasserstein GAN (WGAN) | Kaggle
- WGAN模型——pytorch实现 (CSDN blog)
- WGAN (bilibili)
- Hung-yi Lee, Machine Learning 2021, Lecture 15: Generative Adversarial Network (GAN), Part 2: theory and WGAN (bilibili)
- Lesson 12: WGAN-GP in practice (bilibili)
