Printing the network structure in MMDetection

Run from the console:

python tools/train.py /home/yuan3080/桌面/detection_paper_6/mmdetection-master1/mmdetection-master_yanhuo/work_dirs/lad_r50_paa_r101_fpn_coco_1x/lad_r50_a_r101_fpn_coco_1x.py

Note that this dumps the structure of the modified method used here, not the original implementation.

Adding a single print(model) call is enough. Then run the console command above, and the model structure is printed, as shown below:
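In MMDetection's tools/train.py, the detector is built before training starts (in mmdet 2.x, via build_detector), and the print(model) call goes right after that build. This works because every torch.nn.Module implements a recursive __repr__ that lists all submodules. A minimal self-contained sketch, using a toy module as a stand-in for the detector:

```python
import torch.nn as nn

# In mmdetection's tools/train.py the detector is built roughly like:
#   model = build_detector(cfg.model, train_cfg=..., test_cfg=...)
#   print(model)   # <-- the line to add: dumps the full structure
#
# print() on any nn.Module produces the same kind of recursive dump.
# Toy stand-in mirroring the stem of the backbone shown below:
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
)
print(model)
```

Running this prints each submodule with its hyperparameters, in the same format as the full detector dump below.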


LAD((backbone): Res2Net((stem): Sequential((0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): ReLU(inplace=True)(3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(5): ReLU(inplace=True)(6): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(8): ReLU(inplace=True))(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)(layer1): Res2Layer((0): Bottle2neck((conv1): Conv2d(64, 104, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(104, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(downsample): Sequential((0): AvgPool2d(kernel_size=1, stride=1, padding=0)(1): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(convs): ModuleList((0): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(1): Bottle2neck((conv1): Conv2d(256, 104, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True)(conv3): Conv2d(104, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(2): Bottle2neck((conv1): Conv2d(256, 104, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(104, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))))(layer2): Res2Layer((0): Bottle2neck((conv1): Conv2d(256, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True)(relu): ReLU(inplace=True)(downsample): Sequential((0): AvgPool2d(kernel_size=2, stride=2, padding=0)(1): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(pool): AvgPool2d(kernel_size=3, stride=2, padding=1)(convs): ModuleList((0): Conv2d(52, 52, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(1): Conv2d(52, 52, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(2): Conv2d(52, 52, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(1): Bottle2neck((conv1): Conv2d(512, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(2): Bottle2neck((conv1): Conv2d(512, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(208, 512, kernel_size=(1, 1), 
stride=(1, 1), bias=False)(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(3): Bottle2neck((conv1): Conv2d(512, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))))(layer3): Res2Layer((0): Bottle2neck((conv1): Conv2d(512, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(downsample): Sequential((0): 
AvgPool2d(kernel_size=2, stride=2, padding=0)(1): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(pool): AvgPool2d(kernel_size=3, stride=2, padding=1)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(1): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(2): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, 
momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(3): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(4): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), 
bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(5): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))))(layer4): Res2Layer((0): Bottle2neck((conv1): Conv2d(1024, 832, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(832, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(downsample): Sequential((0): AvgPool2d(kernel_size=2, stride=2, padding=0)(1): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)(2): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(pool): 
AvgPool2d(kernel_size=3, stride=2, padding=1)(convs): ModuleList((0): Conv2d(208, 208, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(1): Conv2d(208, 208, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(2): Conv2d(208, 208, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(1): Bottle2neck((conv1): Conv2d(2048, 832, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(832, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(2): Bottle2neck((conv1): Conv2d(2048, 832, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(832, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(208, 208, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))))init_cfg={'type': 'Pretrained', 'checkpoint': 'torchvision://resnet50'}(neck): FPN((lateral_convs): ModuleList((0): ConvModule((conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)))(1): ConvModule((conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)))(2): ConvModule((conv): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))))(fpn_convs): ModuleList((0): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)))(1): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)))(2): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)))(3): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)))(4): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)))))init_cfg={'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}(bbox_head): LADHead((loss_cls): FocalLoss()(loss_bbox): GIoULoss()(relu): ReLU(inplace=True)(cls_convs): ModuleList((0): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(1): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(2): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(3): 
ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True)))(reg_convs): ModuleList((0): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(1): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(2): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(3): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True)))(atss_cls): Conv2d(256, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(atss_reg): Conv2d(256, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(atss_centerness): Conv2d(256, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(scales): ModuleList((0): Scale()(1): Scale()(2): Scale()(3): Scale()(4): Scale())(loss_centerness): CrossEntropyLoss(avg_non_ignore=False))
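The dump above is long. Since submodules are attributes of the parent module, individual parts can also be printed separately, e.g. print(model.backbone) or print(model.bbox_head) for this detector. A self-contained sketch with hypothetical submodule names:

```python
import torch.nn as nn

# Toy detector-like module with named parts, analogous to
# LAD's (backbone) / (bbox_head) attributes in the dump above.
class ToyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 64, 3)
        self.head = nn.Linear(64, 2)

model = ToyDetector()
print(model.backbone)  # dumps only the backbone submodule
print(model.head)      # dumps only the head submodule
```

This is handy when only one component (say, the neck or the head) is of interest.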

