MindSpore 25-Day Learning Camp, Day 19 | CycleGAN Image Style Transfer

Overview:

CycleGAN (Cycle Generative Adversarial Network) is a cycle-consistent adversarial network that learns to translate images from a source domain X to a target domain Y without paired training examples.

An important application area of this model is domain adaptation, which can loosely be understood as image style transfer. Domain-transfer models existed before CycleGAN, for example Pix2Pix, but Pix2Pix requires paired training data, and in practice it is very hard to find images that correspond exactly across two domains (styles). CycleGAN was created to remove this constraint: it only needs data from the two domains, with no strict correspondence between samples, making it a novel unsupervised image-translation network.

Walkthrough:

1. Imports

import os
import time
import random

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops
import mindspore.dataset as ds
import mindspore.dataset.vision as vision
from mindspore import Tensor, dtype, save_checkpoint, value_and_grad
from mindspore import load_checkpoint, load_param_into_net
from mindspore.common.initializer import Normal
from mindspore.dataset import MindDataset

from download import download

2. Downloading the Dataset

url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/models/application/CycleGAN_apple2orange.zip"
download(url, ".", kind="zip", replace=True)

3. Loading the Dataset

# Read the dataset in MindRecord format
name_mr = "./CycleGAN_apple2orange/apple2orange_train.mindrecord"
data = MindDataset(dataset_files=name_mr)
print("Datasize: ", data.get_dataset_size())

batch_size = 1
dataset = data.batch(batch_size)
datasize = dataset.get_dataset_size()
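Before building the networks it is worth sanity-checking one batch. A minimal sketch, assuming the MindRecord columns are named image_A and image_B and hold 256×256 RGB images (the visualization step below relies on the same column names):

# Peek at a single batch to confirm column names and tensor shapes
sample = next(dataset.create_dict_iterator())
print(sample["image_A"].shape)  # expected: (1, 3, 256, 256)
print(sample["image_B"].shape)  # expected: (1, 3, 256, 256)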

4. Visualization

mean = 0.5 * 255
std = 0.5 * 255

plt.figure(figsize=(12, 5), dpi=60)
for i, data in enumerate(dataset.create_dict_iterator()):
    if i < 5:
        show_images_a = data["image_A"].asnumpy()
        show_images_b = data["image_B"].asnumpy()

        plt.subplot(2, 5, i+1)
        show_images_a = (show_images_a[0] * std + mean).astype(np.uint8).transpose((1, 2, 0))
        plt.imshow(show_images_a)
        plt.axis("off")

        plt.subplot(2, 5, i+6)
        show_images_b = (show_images_b[0] * std + mean).astype(np.uint8).transpose((1, 2, 0))
        plt.imshow(show_images_b)
        plt.axis("off")
    else:
        break
plt.show()

5. Building the Generator

weight_init = Normal(sigma=0.02)

class ConvNormReLU(nn.Cell):
    def __init__(self, input_channel, out_planes, kernel_size=4, stride=2, alpha=0.2, norm_mode='instance',
                 pad_mode='CONSTANT', use_relu=True, padding=None, transpose=False):
        super(ConvNormReLU, self).__init__()
        norm = nn.BatchNorm2d(out_planes)
        if norm_mode == 'instance':
            # Instance norm is emulated with per-sample BatchNorm without affine parameters
            norm = nn.BatchNorm2d(out_planes, affine=False)
        has_bias = (norm_mode == 'instance')
        if padding is None:
            padding = (kernel_size - 1) // 2
        if pad_mode == 'CONSTANT':
            if transpose:
                conv = nn.Conv2dTranspose(input_channel, out_planes, kernel_size, stride, pad_mode='same',
                                          has_bias=has_bias, weight_init=weight_init)
            else:
                conv = nn.Conv2d(input_channel, out_planes, kernel_size, stride, pad_mode='pad',
                                 has_bias=has_bias, padding=padding, weight_init=weight_init)
            layers = [conv, norm]
        else:
            paddings = ((0, 0), (0, 0), (padding, padding), (padding, padding))
            pad = nn.Pad(paddings=paddings, mode=pad_mode)
            if transpose:
                conv = nn.Conv2dTranspose(input_channel, out_planes, kernel_size, stride, pad_mode='pad',
                                          has_bias=has_bias, weight_init=weight_init)
            else:
                conv = nn.Conv2d(input_channel, out_planes, kernel_size, stride, pad_mode='pad',
                                 has_bias=has_bias, weight_init=weight_init)
            layers = [pad, conv, norm]
        if use_relu:
            relu = nn.ReLU()
            if alpha > 0:
                relu = nn.LeakyReLU(alpha)
            layers.append(relu)
        self.features = nn.SequentialCell(layers)

    def construct(self, x):
        output = self.features(x)
        return output


class ResidualBlock(nn.Cell):
    def __init__(self, dim, norm_mode='instance', dropout=False, pad_mode="CONSTANT"):
        super(ResidualBlock, self).__init__()
        self.conv1 = ConvNormReLU(dim, dim, 3, 1, 0, norm_mode, pad_mode)
        self.conv2 = ConvNormReLU(dim, dim, 3, 1, 0, norm_mode, pad_mode, use_relu=False)
        self.dropout = dropout
        if dropout:
            self.dropout = nn.Dropout(p=0.5)

    def construct(self, x):
        out = self.conv1(x)
        if self.dropout:
            out = self.dropout(out)
        out = self.conv2(out)
        return x + out


class ResNetGenerator(nn.Cell):
    def __init__(self, input_channel=3, output_channel=64, n_layers=9, alpha=0.2, norm_mode='instance', dropout=False,
                 pad_mode="CONSTANT"):
        super(ResNetGenerator, self).__init__()
        self.conv_in = ConvNormReLU(input_channel, output_channel, 7, 1, alpha, norm_mode, pad_mode=pad_mode)
        self.down_1 = ConvNormReLU(output_channel, output_channel * 2, 3, 2, alpha, norm_mode)
        self.down_2 = ConvNormReLU(output_channel * 2, output_channel * 4, 3, 2, alpha, norm_mode)
        layers = [ResidualBlock(output_channel * 4, norm_mode, dropout=dropout, pad_mode=pad_mode)] * n_layers
        self.residuals = nn.SequentialCell(layers)
        self.up_2 = ConvNormReLU(output_channel * 4, output_channel * 2, 3, 2, alpha, norm_mode, transpose=True)
        self.up_1 = ConvNormReLU(output_channel * 2, output_channel, 3, 2, alpha, norm_mode, transpose=True)
        if pad_mode == "CONSTANT":
            self.conv_out = nn.Conv2d(output_channel, 3, kernel_size=7, stride=1, pad_mode='pad',
                                      padding=3, weight_init=weight_init)
        else:
            pad = nn.Pad(paddings=((0, 0), (0, 0), (3, 3), (3, 3)), mode=pad_mode)
            conv = nn.Conv2d(output_channel, 3, kernel_size=7, stride=1, pad_mode='pad', weight_init=weight_init)
            self.conv_out = nn.SequentialCell([pad, conv])

    def construct(self, x):
        x = self.conv_in(x)
        x = self.down_1(x)
        x = self.down_2(x)
        x = self.residuals(x)
        x = self.up_2(x)
        x = self.up_1(x)
        output = self.conv_out(x)
        return ops.tanh(output)

# Instantiate the generators: net_rg_a maps domain A -> B, net_rg_b maps domain B -> A
net_rg_a = ResNetGenerator()
net_rg_a.update_parameters_name('net_rg_a.')

net_rg_b = ResNetGenerator()
net_rg_b.update_parameters_name('net_rg_b.')
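As a quick smoke test, an untrained generator should map an image-shaped tensor to an output of the same shape. A minimal sketch, assuming 256×256 inputs; the dummy tensor is purely illustrative:

# A random image-shaped input should come out with the same shape,
# with values in (-1, 1) because of the final tanh
dummy = ms.Tensor(np.random.randn(1, 3, 256, 256).astype(np.float32))
print(net_rg_a(dummy).shape)  # expected: (1, 3, 256, 256)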

6. Discriminator

# Define the discriminator
class Discriminator(nn.Cell):
    def __init__(self, input_channel=3, output_channel=64, n_layers=3, alpha=0.2, norm_mode='instance'):
        super(Discriminator, self).__init__()
        kernel_size = 4
        layers = [nn.Conv2d(input_channel, output_channel, kernel_size, 2, pad_mode='pad', padding=1, weight_init=weight_init),
                  nn.LeakyReLU(alpha)]
        nf_mult = output_channel
        for i in range(1, n_layers):
            nf_mult_prev = nf_mult
            nf_mult = min(2 ** i, 8) * output_channel
            layers.append(ConvNormReLU(nf_mult_prev, nf_mult, kernel_size, 2, alpha, norm_mode, padding=1))
        nf_mult_prev = nf_mult
        nf_mult = min(2 ** n_layers, 8) * output_channel
        layers.append(ConvNormReLU(nf_mult_prev, nf_mult, kernel_size, 1, alpha, norm_mode, padding=1))
        layers.append(nn.Conv2d(nf_mult, 1, kernel_size, 1, pad_mode='pad', padding=1, weight_init=weight_init))
        self.features = nn.SequentialCell(layers)

    def construct(self, x):
        output = self.features(x)
        return output

# Instantiate the discriminators: net_d_a judges domain A images, net_d_b judges domain B images
net_d_a = Discriminator()
net_d_a.update_parameters_name('net_d_a.')

net_d_b = Discriminator()
net_d_b.update_parameters_name('net_d_b.')
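This is a PatchGAN-style discriminator: rather than a single real/fake scalar per image, it outputs a grid of scores, each judging one overlapping patch of the input. A minimal check (the 30×30 figure assumes 256×256 inputs and the default n_layers=3):

# Each element of the output map scores one receptive-field patch of the input
dummy = ms.Tensor(np.random.randn(1, 3, 256, 256).astype(np.float32))
print(net_d_a(dummy).shape)  # expected: (1, 1, 30, 30)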

7. Optimizers and Loss Functions

# Build the optimizers for the generators and discriminators
optimizer_rg_a = nn.Adam(net_rg_a.trainable_params(), learning_rate=0.0002, beta1=0.5)
optimizer_rg_b = nn.Adam(net_rg_b.trainable_params(), learning_rate=0.0002, beta1=0.5)

optimizer_d_a = nn.Adam(net_d_a.trainable_params(), learning_rate=0.0002, beta1=0.5)
optimizer_d_b = nn.Adam(net_d_b.trainable_params(), learning_rate=0.0002, beta1=0.5)

# GAN loss: the discriminator's last layer omits sigmoid,
# so real/fake labels are matched with MSE (least-squares GAN)
loss_fn = nn.MSELoss(reduction='mean')
l1_loss = nn.L1Loss("mean")

def gan_loss(predict, target):
    target = ops.ones_like(predict) * target
    loss = loss_fn(predict, target)
    return loss
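gan_loss implements the least-squares GAN objective: the boolean target is broadcast to a map of ones or zeros and compared with the raw (sigmoid-free) discriminator scores under MSE. A tiny worked example, using a hypothetical all-zero prediction map:

# An all-zero prediction scored against a "real" target (all ones) gives
# an MSE of 1.0; against a "fake" target (all zeros) it gives 0.0
pred = ms.Tensor(np.zeros((1, 1, 30, 30), np.float32))
print(gan_loss(pred, Tensor(True, dtype.bool_)))   # expected: 1.0
print(gan_loss(pred, Tensor(False, dtype.bool_)))  # expected: 0.0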

8. Defining the Forward Pass

# Forward pass of the two generators
def generator(img_a, img_b):
    fake_a = net_rg_b(img_b)
    fake_b = net_rg_a(img_a)
    rec_a = net_rg_b(fake_b)
    rec_b = net_rg_a(fake_a)
    identity_a = net_rg_b(img_a)
    identity_b = net_rg_a(img_b)
    return fake_a, fake_b, rec_a, rec_b, identity_a, identity_b

# Loss weights: lambda_a/lambda_b scale the cycle-consistency terms,
# lambda_idt scales the identity-mapping terms
lambda_a = 10.0
lambda_b = 10.0
lambda_idt = 0.5

def generator_forward(img_a, img_b):
    true = Tensor(True, dtype.bool_)
    fake_a, fake_b, rec_a, rec_b, identity_a, identity_b = generator(img_a, img_b)
    loss_g_a = gan_loss(net_d_b(fake_b), true)
    loss_g_b = gan_loss(net_d_a(fake_a), true)
    loss_c_a = l1_loss(rec_a, img_a) * lambda_a
    loss_c_b = l1_loss(rec_b, img_b) * lambda_b
    loss_idt_a = l1_loss(identity_a, img_a) * lambda_a * lambda_idt
    loss_idt_b = l1_loss(identity_b, img_b) * lambda_b * lambda_idt
    loss_g = loss_g_a + loss_g_b + loss_c_a + loss_c_b + loss_idt_a + loss_idt_b
    return fake_a, fake_b, loss_g, loss_g_a, loss_g_b, loss_c_a, loss_c_b, loss_idt_a, loss_idt_b

def generator_forward_grad(img_a, img_b):
    _, _, loss_g, _, _, _, _, _, _ = generator_forward(img_a, img_b)
    return loss_g

def discriminator_forward(img_a, img_b, fake_a, fake_b):
    false = Tensor(False, dtype.bool_)
    true = Tensor(True, dtype.bool_)
    d_fake_a = net_d_a(fake_a)
    d_img_a = net_d_a(img_a)
    d_fake_b = net_d_b(fake_b)
    d_img_b = net_d_b(img_b)
    loss_d_a = gan_loss(d_fake_a, false) + gan_loss(d_img_a, true)
    loss_d_b = gan_loss(d_fake_b, false) + gan_loss(d_img_b, true)
    loss_d = (loss_d_a + loss_d_b) * 0.5
    return loss_d

def discriminator_forward_a(img_a, fake_a):
    false = Tensor(False, dtype.bool_)
    true = Tensor(True, dtype.bool_)
    d_fake_a = net_d_a(fake_a)
    d_img_a = net_d_a(img_a)
    loss_d_a = gan_loss(d_fake_a, false) + gan_loss(d_img_a, true)
    return loss_d_a

def discriminator_forward_b(img_b, fake_b):
    false = Tensor(False, dtype.bool_)
    true = Tensor(True, dtype.bool_)
    d_fake_b = net_d_b(fake_b)
    d_img_b = net_d_b(img_b)
    loss_d_b = gan_loss(d_fake_b, false) + gan_loss(d_img_b, true)
    return loss_d_b

# Image buffer that stores up to 50 previously generated images, so the
# discriminators are trained against a history of fakes rather than only
# the latest batch. (In the original code the counter and buffer were
# re-initialized on every call, so nothing carried over between steps;
# they are kept at module level here so the buffer actually persists.)
pool_size = 50
pool_images = []
num_imgs = 0

def image_pool(images):
    global num_imgs
    if isinstance(images, Tensor):
        images = images.asnumpy()
    return_images = []
    for image in images:
        if num_imgs < pool_size:
            # Pool not full yet: store the image and return it unchanged
            num_imgs = num_imgs + 1
            pool_images.append(image)
            return_images.append(image)
        else:
            # With probability 0.5, return a random stored image and
            # replace it with the new one; otherwise return the new one
            if random.uniform(0, 1) > 0.5:
                random_id = random.randint(0, pool_size - 1)
                tmp = pool_images[random_id].copy()
                pool_images[random_id] = image
                return_images.append(tmp)
            else:
                return_images.append(image)
    output = Tensor(return_images, ms.float32)
    if output.ndim != 4:
        raise ValueError("img should be 4d, but got shape {}".format(output.shape))
    return output
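A usage note: with batch_size = 1 the buffer only starts mixing in historical fakes after 50 training steps. A short sketch of the expected behavior, using hypothetical random 8×8 tensors purely for illustration:

# Fill the pool with 50 fakes; from the 51st call onward, each call returns
# either the new image or a stored one with 50% probability
for _ in range(50):
    image_pool(ms.Tensor(np.random.randn(1, 3, 8, 8).astype(np.float32)))
mixed = image_pool(ms.Tensor(np.random.randn(1, 3, 8, 8).astype(np.float32)))
print(mixed.shape)  # (1, 3, 8, 8)

# Reset the buffer afterwards so training does not start with a pool
# pre-filled by these random tensors
pool_images.clear()
num_imgs = 0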

9. Gradients and Backpropagation

# Instantiate the gradient functions
grad_g_a = value_and_grad(generator_forward_grad, None, net_rg_a.trainable_params())
grad_g_b = value_and_grad(generator_forward_grad, None, net_rg_b.trainable_params())

grad_d_a = value_and_grad(discriminator_forward_a, None, net_d_a.trainable_params())
grad_d_b = value_and_grad(discriminator_forward_b, None, net_d_b.trainable_params())

# Compute the generators' gradients and update their parameters by backpropagation
def train_step_g(img_a, img_b):
    net_d_a.set_grad(False)
    net_d_b.set_grad(False)

    fake_a, fake_b, lg, lga, lgb, lca, lcb, lia, lib = generator_forward(img_a, img_b)

    _, grads_g_a = grad_g_a(img_a, img_b)
    _, grads_g_b = grad_g_b(img_a, img_b)
    optimizer_rg_a(grads_g_a)
    optimizer_rg_b(grads_g_b)

    return fake_a, fake_b, lg, lga, lgb, lca, lcb, lia, lib

# Compute the discriminators' gradients and update their parameters by backpropagation
def train_step_d(img_a, img_b, fake_a, fake_b):
    net_d_a.set_grad(True)
    net_d_b.set_grad(True)

    loss_d_a, grads_d_a = grad_d_a(img_a, fake_a)
    loss_d_b, grads_d_b = grad_d_b(img_b, fake_b)

    loss_d = (loss_d_a + loss_d_b) * 0.5

    optimizer_d_a(grads_d_a)
    optimizer_d_b(grads_d_b)

    return loss_d

10. Training

epochs = 1
save_step_num = 80
save_checkpoint_epochs = 1
save_ckpt_dir = './train_ckpt_outputs/'

print('Start training!')

for epoch in range(epochs):
    g_loss = []
    d_loss = []
    start_time_e = time.time()
    for step, data in enumerate(dataset.create_dict_iterator()):
        start_time_s = time.time()
        img_a = data["image_A"]
        img_b = data["image_B"]

        res_g = train_step_g(img_a, img_b)
        fake_a = res_g[0]
        fake_b = res_g[1]

        res_d = train_step_d(img_a, img_b, image_pool(fake_a), image_pool(fake_b))
        loss_d = float(res_d.asnumpy())
        step_time = time.time() - start_time_s

        res = []
        for item in res_g[2:]:
            res.append(float(item.asnumpy()))
        g_loss.append(res[0])
        d_loss.append(loss_d)

        if step % save_step_num == 0:
            print(f"Epoch:[{int(epoch + 1):>3d}/{int(epochs):>3d}], "
                  f"step:[{int(step):>4d}/{int(datasize):>4d}], "
                  f"time:{step_time:>3f}s,\n"
                  f"loss_g:{res[0]:.2f}, loss_d:{loss_d:.2f}, "
                  f"loss_g_a: {res[1]:.2f}, loss_g_b: {res[2]:.2f}, "
                  f"loss_c_a: {res[3]:.2f}, loss_c_b: {res[4]:.2f}, "
                  f"loss_idt_a: {res[5]:.2f}, loss_idt_b: {res[6]:.2f}")

    epoch_cost = time.time() - start_time_e
    per_step_time = epoch_cost / datasize
    mean_loss_d, mean_loss_g = sum(d_loss) / datasize, sum(g_loss) / datasize
    print(f"Epoch:[{int(epoch + 1):>3d}/{int(epochs):>3d}], "
          f"epoch time:{epoch_cost:.2f}s, per step time:{per_step_time:.2f}, "
          f"mean_g_loss:{mean_loss_g:.2f}, mean_d_loss:{mean_loss_d:.2f}")

    if epoch % save_checkpoint_epochs == 0:
        os.makedirs(save_ckpt_dir, exist_ok=True)
        save_checkpoint(net_rg_a, os.path.join(save_ckpt_dir, f"g_a_{epoch}.ckpt"))
        save_checkpoint(net_rg_b, os.path.join(save_ckpt_dir, f"g_b_{epoch}.ckpt"))
        save_checkpoint(net_d_a, os.path.join(save_ckpt_dir, f"d_a_{epoch}.ckpt"))
        save_checkpoint(net_d_b, os.path.join(save_ckpt_dir, f"d_b_{epoch}.ckpt"))

print('End of training!')

11. Model Inference

# Load a checkpoint file into a network
def load_ckpt(net, ckpt_dir):
    param_GA = load_checkpoint(ckpt_dir)
    load_param_into_net(net, param_GA)

g_a_ckpt = './CycleGAN_apple2orange/ckpt/g_a.ckpt'
g_b_ckpt = './CycleGAN_apple2orange/ckpt/g_b.ckpt'

load_ckpt(net_rg_a, g_a_ckpt)
load_ckpt(net_rg_b, g_b_ckpt)

# Run inference on the images in a directory
fig = plt.figure(figsize=(11, 2.5), dpi=100)
def eval_data(dir_path, net, a):

    def read_img():
        for dir in os.listdir(dir_path):
            path = os.path.join(dir_path, dir)
            img = Image.open(path).convert('RGB')
            yield img, dir

    dataset = ds.GeneratorDataset(read_img, column_names=["image", "image_name"])
    trans = [vision.Resize((256, 256)), vision.Normalize(mean=[0.5 * 255] * 3, std=[0.5 * 255] * 3), vision.HWC2CHW()]
    dataset = dataset.map(operations=trans, input_columns=["image"])
    dataset = dataset.batch(1)
    for i, data in enumerate(dataset.create_dict_iterator()):
        img = data["image"]
        fake = net(img)
        fake = (fake[0] * 0.5 * 255 + 0.5 * 255).astype(np.uint8).transpose((1, 2, 0))
        img = (img[0] * 0.5 * 255 + 0.5 * 255).astype(np.uint8).transpose((1, 2, 0))

        fig.add_subplot(2, 8, i+1+a)
        plt.axis("off")
        plt.imshow(img.asnumpy())

        fig.add_subplot(2, 8, i+9+a)
        plt.axis("off")
        plt.imshow(fake.asnumpy())

eval_data('./CycleGAN_apple2orange/predict/apple', net_rg_a, 0)
eval_data('./CycleGAN_apple2orange/predict/orange', net_rg_b, 4)
plt.show()
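To run inference with the generators trained in step 10 instead of the downloaded checkpoints, point load_ckpt at the files written under save_ckpt_dir. The epoch index below is illustrative; use whichever epoch was actually saved:

# Hypothetical paths: epoch 0 as saved by the training loop above
load_ckpt(net_rg_a, './train_ckpt_outputs/g_a_0.ckpt')
load_ckpt(net_rg_b, './train_ckpt_outputs/g_b_0.ckpt')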

The most appealing part of CycleGAN is its cycle consistency loss. This design elegantly resolves the "one-to-many" mapping problem that traditional GANs can suffer from in image translation: by requiring that a translated image can be translated back to the original, it preserves the key content of the source image. This not only makes the translation more reliable but also greatly widens the range of GAN applications in image style transfer.
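For reference, the cycle-consistency term from the original CycleGAN paper, which loss_c_a and loss_c_b in step 8 implement via l1_loss with weights lambda_a and lambda_b (here G is the A-to-B generator net_rg_a and F is the B-to-A generator net_rg_b):

\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x \sim p_{data}(x)}\left[\lVert F(G(x)) - x \rVert_1\right] + \mathbb{E}_{y \sim p_{data}(y)}\left[\lVert G(F(y)) - y \rVert_1\right]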
