AI Algorithm Engineer (Intermediate) Course 15: Common Network Models, Their Design Principles, and Code Walkthroughs

Hello everyone, I am 微学AI. Today I'd like to introduce AI Algorithm Engineer (Intermediate) Course 15: common network models, their design principles, and code walkthroughs.
This article covers classic architectures together with their design principles and code implementations: LeNet, AlexNet, VggNet, GoogLeNet (InceptionNet), ResNet, DenseNet, DarkNet, and MobileNet. Looking closely at these models, we see that LeNet, the earliest convolutional neural network, is well suited to handwritten-digit recognition; AlexNet introduced the ReLU activation function and Dropout, significantly improving image-recognition performance; VggNet builds deep networks by stacking small convolutional layers; GoogLeNet's Inception module increases network depth and width in an innovative way; ResNet's residual learning solves the difficulty of training very deep networks; DenseNet improves parameter efficiency through feature reuse; DarkNet, the backbone of the YOLO family, is built for efficiency; and MobileNet enables mobile deployment through depthwise separable convolutions.

Table of Contents

  • I. LeNet
  • II. AlexNet
  • III. VggNet
  • IV. GoogLeNet (InceptionNet v1)
  • V. ResNet
  • VI. DenseNet
  • VII. DarkNet
  • VIII. MobileNet
  • IX. Summary

I. LeNet

  1. Mathematical Principles
    LeNet is a convolutional neural network (CNN) for handwritten-digit recognition, composed mainly of convolutional, pooling, and fully connected layers. Its mathematics centers on the convolution and pooling operations.
  2. Code Walkthrough
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = LeNet()
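A quick sanity check on why fc1 takes 16 * 4 * 4 input features: applying the standard output-size formula floor((n + 2p - k) / s) + 1 to a 28×28 MNIST image. This is a small illustrative sketch of my own, not part of the original model code.

```python
def conv_out(n, k, s=1, p=0):
    # standard formula for convolution/pooling output size
    return (n + 2 * p - k) // s + 1

n = 28                     # MNIST input is 28x28
n = conv_out(n, k=5)       # conv1: 5x5, no padding -> 24
n = conv_out(n, k=2, s=2)  # 2x2 max pool           -> 12
n = conv_out(n, k=5)       # conv2: 5x5             -> 8
n = conv_out(n, k=2, s=2)  # 2x2 max pool           -> 4
print(n, 16 * n * n)       # -> 4 256, matching nn.Linear(16*4*4, 120)
```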


II. AlexNet

  1. Mathematical Principles
    AlexNet was the first deep convolutional neural network to achieve outstanding results in the ImageNet competition. It introduced the ReLU activation function, local response normalization (LRN), and overlapping max pooling.
  2. Code Walkthrough
class AlexNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        # note: the original paper also applies LRN after the first two conv layers; omitted here
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),  # overlapping max pooling
            nn.Conv2d(96, 256, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x

net = AlexNet()
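Where does 256 * 6 * 6 come from? Tracing a 224×224 input through the feature extractor with the same output-size formula shows it (an illustrative sketch, not part of the model code):

```python
def out_size(n, k, s=1, p=0):
    # output size of a conv/pool layer: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

n = 224
n = out_size(n, 11, 4, 2)  # conv1 -> 55
n = out_size(n, 3, 2)      # pool  -> 27
n = out_size(n, 5, 1, 2)   # conv2 -> 27
n = out_size(n, 3, 2)      # pool  -> 13
# conv3/conv4/conv5 are 3x3 with padding 1: the size stays 13
n = out_size(n, 3, 2)      # pool  -> 6
print(256 * n * n)         # -> 9216, the in_features of the first Linear
```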


III. VggNet

  1. Mathematical Principles
    VggNet builds a deep network by repeating simple convolutional layers. It demonstrated that deep networks can be constructed by stacking small (3×3) convolution kernels.
  2. Code Walkthrough
class VGG(nn.Module):
    def __init__(self, features, num_classes=1000):
        super(VGG, self).__init__()
        self.features = features
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x

def make_layers(cfg, batch_norm=False):
    layers = []
    in_channels = 3
    for v in cfg:
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
    return nn.Sequential(*layers)

cfg = {
    'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}

vgg11 = VGG(make_layers(cfg['VGG11']))
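The claim that stacking small kernels builds depth cheaply can be checked with a little arithmetic. This sketch (my own illustration, not from the VGG code above) compares stacked 3×3 convolutions against a single larger kernel with the same receptive field, assuming 64 input and output channels and ignoring biases:

```python
def stacked_rf(num_layers, k=3):
    # receptive field of num_layers stacked k x k convs with stride 1
    return num_layers * (k - 1) + 1

def conv_weights(k, c):
    # weight count of one k x k conv with c input and c output channels (no bias)
    return k * k * c * c

c = 64
# two 3x3 convs cover a 5x5 receptive field with fewer weights than one 5x5
print(stacked_rf(2), 2 * conv_weights(3, c), conv_weights(5, c))  # 5 73728 102400
# three 3x3 convs cover a 7x7 receptive field with fewer weights than one 7x7
print(stacked_rf(3), 3 * conv_weights(3, c), conv_weights(7, c))  # 7 110592 200704
```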


IV. GoogLeNet (InceptionNet v1)

  1. Mathematical Principles
    GoogLeNet introduced the Inception module, which runs convolutions of different kernel sizes and a pooling branch in parallel, increasing the network's depth and width while keeping the computational cost under control.
  2. Code Walkthrough
import torch  # needed for torch.cat

class Inception(nn.Module):
    def __init__(self, in_planes, n1x1, n3x3red, n3x3, n5x5red, n5x5, pool_planes):
        super(Inception, self).__init__()
        # branch 1: 1x1 conv
        self.b1 = nn.Sequential(
            nn.Conv2d(in_planes, n1x1, kernel_size=1),
            nn.ReLU(True),
        )
        # branch 2: 1x1 reduction, then 3x3 conv
        self.b2 = nn.Sequential(
            nn.Conv2d(in_planes, n3x3red, kernel_size=1),
            nn.ReLU(True),
            nn.Conv2d(n3x3red, n3x3, kernel_size=3, padding=1),
            nn.ReLU(True),
        )
        # branch 3: 1x1 reduction, then 5x5 conv
        self.b3 = nn.Sequential(
            nn.Conv2d(in_planes, n5x5red, kernel_size=1),
            nn.ReLU(True),
            nn.Conv2d(n5x5red, n5x5, kernel_size=5, padding=2),
            nn.ReLU(True),
        )
        # branch 4: 3x3 max pool, then 1x1 conv
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_planes, pool_planes, kernel_size=1),
            nn.ReLU(True),
        )

    def forward(self, x):
        y1 = self.b1(x)
        y2 = self.b2(x)
        y3 = self.b3(x)
        y4 = self.b4(x)
        return torch.cat([y1, y2, y3, y4], 1)

class GoogLeNet(nn.Module):
    def __init__(self):
        super(GoogLeNet, self).__init__()
        # simplified stem (suited to small inputs such as 32x32)
        self.pre_layers = nn.Sequential(
            nn.Conv2d(3, 192, kernel_size=3, padding=1),
            nn.ReLU(True),
        )
        self.a3 = Inception(192,  64,  96, 128, 16, 32, 32)
        self.b3 = Inception(256, 128, 128, 192, 32, 96, 64)
        self.maxpool = nn.MaxPool2d(3, stride=2, padding=1)
        self.a4 = Inception(480, 192,  96, 208, 16,  48,  64)
        self.b4 = Inception(512, 160, 112, 224, 24,  64,  64)
        self.c4 = Inception(512, 128, 128, 256, 24,  64,  64)
        self.d4 = Inception(512, 112, 144, 288, 32,  64,  64)
        self.e4 = Inception(528, 256, 160, 320, 32, 128, 128)
        self.a5 = Inception(832, 256, 160, 320, 32, 128, 128)
        self.b5 = Inception(832, 384, 192, 384, 48, 128, 128)
        self.avgpool = nn.AvgPool2d(8, stride=1)
        self.linear = nn.Linear(1024, 1000)

    def forward(self, x):
        out = self.pre_layers(x)
        out = self.a3(out)
        out = self.b3(out)
        out = self.maxpool(out)
        out = self.a4(out)
        out = self.b4(out)
        out = self.c4(out)
        out = self.d4(out)
        out = self.e4(out)
        out = self.maxpool(out)
        out = self.a5(out)
        out = self.b5(out)
        out = self.avgpool(out)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out

net = GoogLeNet()
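An Inception module's output channel count is simply the sum of its four branch outputs (concatenated by torch.cat along dim 1). A quick check (my own sketch) that the in_planes chain in the constructor above is consistent:

```python
def inception_out(n1x1, n3x3, n5x5, pool_planes):
    # channels after torch.cat([y1, y2, y3, y4], 1)
    return n1x1 + n3x3 + n5x5 + pool_planes

print(inception_out(64, 128, 32, 32))     # a3 -> 256, b3's in_planes
print(inception_out(128, 192, 96, 64))    # b3 -> 480, a4's in_planes
print(inception_out(384, 384, 128, 128))  # b5 -> 1024, the Linear's in_features
```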

V. ResNet

  1. Mathematical Principles
    ResNet introduces residual learning to address the difficulty of training deep networks. Shortcut connections let gradients flow directly back to earlier layers, mitigating the vanishing-gradient problem.
  2. Code Walkthrough
class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_planes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.shortcut = nn.Sequential()
        # projection shortcut when the shape changes
        if stride != 1 or in_planes != self.expansion * planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion * planes)
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out += self.shortcut(x)  # residual connection
        out = F.relu(out)
        return out

class ResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=1000):
        super(ResNet, self).__init__()
        self.in_planes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out

def ResNet18():
    return ResNet(BasicBlock, [2, 2, 2, 2])

net = ResNet18()
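This implementation follows the common CIFAR-style ResNet-18 layout (a 3×3 stem with no initial max pool), and the fixed F.avg_pool2d(out, 4) assumes a 32×32 input. A quick sketch of my own shows why:

```python
def resnet_spatial(n=32, strides=(1, 2, 2, 2)):
    # each stage's first block uses the given stride; stride-2 halves the map
    for s in strides:
        n //= s
    return n

print(resnet_spatial(32))  # -> 4, hence F.avg_pool2d(out, 4) yields 1x1 maps
```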


VI. DenseNet

  1. Mathematical Principles
    DenseNet connects each layer to all preceding layers within a block, rather than only to the immediately previous one. This dense connectivity reduces the parameter count and encourages feature reuse.
  2. Code Walkthrough
import math
import torch

class Bottleneck(nn.Module):
    def __init__(self, in_planes, growth_rate):
        super(Bottleneck, self).__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, 4 * growth_rate, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(4 * growth_rate)
        self.conv2 = nn.Conv2d(4 * growth_rate, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.conv2(F.relu(self.bn2(out)))
        out = torch.cat([out, x], 1)  # dense connectivity: concatenate with the input
        return out

class Transition(nn.Module):
    def __init__(self, in_planes, out_planes):
        super(Transition, self).__init__()
        self.bn = nn.BatchNorm2d(in_planes)
        self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=1, bias=False)

    def forward(self, x):
        out = self.conv(F.relu(self.bn(x)))
        out = F.avg_pool2d(out, 2)
        return out

class DenseNet(nn.Module):
    def __init__(self, block, nblocks, growth_rate=12, reduction=0.5, num_classes=1000):
        super(DenseNet, self).__init__()
        self.growth_rate = growth_rate
        num_planes = 2 * growth_rate
        self.conv1 = nn.Conv2d(3, num_planes, kernel_size=3, padding=1, bias=False)

        self.dense1 = self._make_dense_layers(block, num_planes, nblocks[0])
        num_planes += nblocks[0] * growth_rate
        out_planes = int(math.floor(num_planes * reduction))
        self.trans1 = Transition(num_planes, out_planes)
        num_planes = out_planes

        self.dense2 = self._make_dense_layers(block, num_planes, nblocks[1])
        num_planes += nblocks[1] * growth_rate
        out_planes = int(math.floor(num_planes * reduction))
        self.trans2 = Transition(num_planes, out_planes)
        num_planes = out_planes

        self.dense3 = self._make_dense_layers(block, num_planes, nblocks[2])
        num_planes += nblocks[2] * growth_rate
        out_planes = int(math.floor(num_planes * reduction))
        self.trans3 = Transition(num_planes, out_planes)
        num_planes = out_planes

        self.dense4 = self._make_dense_layers(block, num_planes, nblocks[3])
        num_planes += nblocks[3] * growth_rate

        self.bn = nn.BatchNorm2d(num_planes)
        self.linear = nn.Linear(num_planes, num_classes)

    def _make_dense_layers(self, block, in_planes, nblock):
        layers = []
        for i in range(nblock):
            layers.append(block(in_planes, self.growth_rate))
            in_planes += self.growth_rate
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv1(x)
        out = self.trans1(self.dense1(out))
        out = self.trans2(self.dense2(out))
        out = self.trans3(self.dense3(out))
        out = self.dense4(out)
        out = F.avg_pool2d(F.relu(self.bn(out)), 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out

def DenseNet121():
    return DenseNet(Bottleneck, [6, 12, 24, 16], growth_rate=32)

net = DenseNet121()
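The channel bookkeeping above is easy to verify by hand: each bottleneck adds growth_rate channels, and each transition layer compresses by the reduction factor. This helper (my own sketch, mirroring the num_planes arithmetic in DenseNet.__init__) reproduces the final width:

```python
import math

def densenet_channels(nblocks, growth_rate=32, reduction=0.5):
    # mirrors the num_planes arithmetic in DenseNet.__init__
    num_planes = 2 * growth_rate
    for i, nblock in enumerate(nblocks):
        num_planes += nblock * growth_rate        # each layer adds growth_rate channels
        if i < len(nblocks) - 1:                  # no transition after the last block
            num_planes = int(math.floor(num_planes * reduction))
    return num_planes

print(densenet_channels([6, 12, 24, 16]))  # -> 1024, the final Linear's in_features
```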


VII. DarkNet

  1. Mathematical Principles
    DarkNet is an open-source neural network framework used as the backbone of the YOLO (You Only Look Once) object-detection system. It consists of a series of convolutional layers with varying filter counts, kernel sizes, strides, and padding.
  2. Code Walkthrough
class DarknetBlock(nn.Module):
    def __init__(self, in_channels):
        super(DarknetBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, in_channels * 2, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(in_channels * 2)
        self.conv2 = nn.Conv2d(in_channels * 2, in_channels, kernel_size=1, stride=1, padding=0)
        self.bn2 = nn.BatchNorm2d(in_channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)), inplace=True)
        out = F.relu(self.bn2(self.conv2(out)), inplace=True)
        out += x  # residual connection
        return out

class Darknet(nn.Module):
    def __init__(self, num_classes=1000):
        super(Darknet, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(32)
        self.layer1 = self._make_layer(32, 64, 1)
        self.layer2 = self._make_layer(64, 128, 2)
        self.layer3 = self._make_layer(128, 256, 8)
        self.layer4 = self._make_layer(256, 512, 8)
        self.layer5 = self._make_layer(512, 1024, 4)
        self.global_avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(1024, num_classes)

    def _make_layer(self, in_channels, out_channels, num_blocks):
        layers = []
        # a stride-2 conv downsamples at the start of each stage
        layers.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1))
        layers.append(nn.BatchNorm2d(out_channels))
        layers.append(nn.ReLU(inplace=True))
        for i in range(num_blocks):
            layers.append(DarknetBlock(out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)), inplace=True)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = self.global_avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

net = Darknet()
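The five stage-entry convolutions each use stride 2, so the backbone downsamples by a factor of 32 overall; the adaptive average pool then makes the classifier independent of input size. A quick check of my own, assuming a 224×224 input:

```python
def darknet_spatial(n):
    # conv1 keeps the size; each of the 5 stage-entry convs is k=3, s=2, p=1
    for _ in range(5):
        n = (n + 2 * 1 - 3) // 2 + 1
    return n

print(darknet_spatial(224))  # -> 7; AdaptiveAvgPool2d((1, 1)) then reduces any size to 1x1
```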

VIII. MobileNet

  1. Mathematical Principles
    MobileNet is built on depthwise separable convolutions, which factor a standard convolution into a depthwise convolution followed by a pointwise (1×1) convolution, greatly reducing both parameter count and computation.
  2. Code Walkthrough
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels, out_channels, stride):
        super(DepthwiseSeparableConv, self).__init__()
        # depthwise: one filter per input channel (groups=in_channels)
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=stride,
                                   padding=1, groups=in_channels)
        # pointwise: 1x1 conv that mixes channels
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1)

    def forward(self, x):
        x = self.depthwise(x)
        x = self.pointwise(x)
        return x

class MobileNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(MobileNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1)
        self.bn1 = nn.BatchNorm2d(32)
        self.layers = self._make_layers(32, num_classes)

    def _make_layers(self, in_channels, num_classes):
        layers = []
        for x in [64, 128, 128, 256, 256, 512, 512, 512, 512, 512, 512, 1024, 1024]:
            stride = 2 if x > in_channels else 1  # downsample whenever channels grow
            layers.append(DepthwiseSeparableConv(in_channels, x, stride))
            layers.append(nn.BatchNorm2d(x))
            layers.append(nn.ReLU(inplace=True))
            in_channels = x
        layers.append(nn.AdaptiveAvgPool2d((1, 1)))
        layers.append(nn.Conv2d(in_channels, num_classes, kernel_size=1))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)), inplace=True)
        x = self.layers(x)
        x = x.view(x.size(0), -1)
        return x

net = MobileNet()
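The savings can be quantified. For a k×k kernel, M input channels, N output channels, and a Df×Df feature map, a standard convolution costs k²·M·N·Df² multiply-adds, while the depthwise plus pointwise pair costs (k²·M + M·N)·Df², a ratio of 1/N + 1/k². A small sketch with illustrative numbers of my own choosing:

```python
def standard_cost(k, m, n, df):
    # multiply-adds of a standard k x k conv: k*k*M*N per output position
    return k * k * m * n * df * df

def separable_cost(k, m, n, df):
    # depthwise (k*k*M) plus pointwise (M*N) per output position
    return (k * k * m + m * n) * df * df

k, m, n, df = 3, 32, 64, 112
ratio = separable_cost(k, m, n, df) / standard_cost(k, m, n, df)
print(round(ratio, 4))  # equals 1/n + 1/k**2, roughly an 8x reduction here
```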

IX. Summary

The code above shows how to build these network models in PyTorch. Each model has its own structure and design rationale and suits different application scenarios; in practice, choose a model according to the task requirements and hardware constraints. For reasons of space, full training and testing code is not included here, but the snippets above can serve as a foundation for building and training these models.
