Printing the network structure in MMDetection

Console command:

python tools/train.py /home/yuan3080/桌面/detection_paper_6/mmdetection-master1/mmdetection-master_yanhuo/work_dirs/lad_r50_paa_r101_fpn_coco_1x/lad_r50_a_r101_fpn_coco_1x.py

Note that this prints the modified model used here, not the original method.

Adding a single print(model) call is enough. Then run the training command above from the console, and the model structure is printed as shown below (the dump here was flattened when copied; the actual console output is indented):
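A minimal sketch of where the call could go in tools/train.py. The surrounding build_detector context follows the mmdetection 2.x training script; treat the exact lines around it as an assumption about your local copy:

```python
# tools/train.py (excerpt) -- after the detector has been built
model = build_detector(
    cfg.model,
    train_cfg=cfg.get('train_cfg'),
    test_cfg=cfg.get('test_cfg'))
model.init_weights()

print(model)  # dump the full nested module hierarchy to the console

# ... the rest of the training script continues unchanged
```

Because this relies on PyTorch's built-in nn.Module repr, no extra dependencies are needed; every registered submodule is printed recursively.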


LAD((backbone): Res2Net((stem): Sequential((0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): ReLU(inplace=True)(3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(5): ReLU(inplace=True)(6): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(8): ReLU(inplace=True))(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)(layer1): Res2Layer((0): Bottle2neck((conv1): Conv2d(64, 104, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(104, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(downsample): Sequential((0): AvgPool2d(kernel_size=1, stride=1, padding=0)(1): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(convs): ModuleList((0): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(1): Bottle2neck((conv1): Conv2d(256, 104, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True)(conv3): Conv2d(104, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(2): Bottle2neck((conv1): Conv2d(256, 104, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(104, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))))(layer2): Res2Layer((0): Bottle2neck((conv1): Conv2d(256, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True)(relu): ReLU(inplace=True)(downsample): Sequential((0): AvgPool2d(kernel_size=2, stride=2, padding=0)(1): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(pool): AvgPool2d(kernel_size=3, stride=2, padding=1)(convs): ModuleList((0): Conv2d(52, 52, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(1): Conv2d(52, 52, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(2): Conv2d(52, 52, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(1): Bottle2neck((conv1): Conv2d(512, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(2): Bottle2neck((conv1): Conv2d(512, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(208, 512, kernel_size=(1, 1), 
stride=(1, 1), bias=False)(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(3): Bottle2neck((conv1): Conv2d(512, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))))(layer3): Res2Layer((0): Bottle2neck((conv1): Conv2d(512, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(downsample): Sequential((0): 
AvgPool2d(kernel_size=2, stride=2, padding=0)(1): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(pool): AvgPool2d(kernel_size=3, stride=2, padding=1)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(1): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(2): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, 
momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(3): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(4): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), 
bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(5): Bottle2neck((conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))))(layer4): Res2Layer((0): Bottle2neck((conv1): Conv2d(1024, 832, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(832, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(downsample): Sequential((0): AvgPool2d(kernel_size=2, stride=2, padding=0)(1): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)(2): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(pool): 
AvgPool2d(kernel_size=3, stride=2, padding=1)(convs): ModuleList((0): Conv2d(208, 208, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(1): Conv2d(208, 208, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(2): Conv2d(208, 208, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(1): Bottle2neck((conv1): Conv2d(2048, 832, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(832, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))(2): Bottle2neck((conv1): Conv2d(2048, 832, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(conv3): Conv2d(832, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(convs): ModuleList((0): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(1): Conv2d(208, 208, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(2): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(bns): ModuleList((0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(2): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)))))init_cfg={'type': 'Pretrained', 'checkpoint': 'torchvision://resnet50'}(neck): FPN((lateral_convs): ModuleList((0): ConvModule((conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)))(1): ConvModule((conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)))(2): ConvModule((conv): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))))(fpn_convs): ModuleList((0): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)))(1): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)))(2): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)))(3): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)))(4): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)))))init_cfg={'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}(bbox_head): LADHead((loss_cls): FocalLoss()(loss_bbox): GIoULoss()(relu): ReLU(inplace=True)(cls_convs): ModuleList((0): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(1): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(2): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(3): 
ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True)))(reg_convs): ModuleList((0): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(1): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(2): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True))(3): ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)(gn): GroupNorm(32, 256, eps=1e-05, affine=True)(activate): ReLU(inplace=True)))(atss_cls): Conv2d(256, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(atss_reg): Conv2d(256, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(atss_centerness): Conv2d(256, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))(scales): ModuleList((0): Scale()(1): Scale()(2): Scale()(3): Scale()(4): Scale())(loss_centerness): CrossEntropyLoss(avg_non_ignore=False))
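If you only want the structure and not a full training run, the detector can also be built directly from the config file and printed. This is a sketch against the mmdetection 2.x API (mmcv's Config.fromfile and mmdet.models.build_detector); adjust the config path to your own environment:

```python
# print_model.py -- build the detector from a config and print it, no training needed
from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile(
    'work_dirs/lad_r50_paa_r101_fpn_coco_1x/lad_r50_a_r101_fpn_coco_1x.py')

model = build_detector(
    cfg.model,
    train_cfg=cfg.get('train_cfg'),
    test_cfg=cfg.get('test_cfg'))

print(model)  # same nested module hierarchy as shown above
```

This avoids loading the dataset or starting the training loop, so it runs in a few seconds.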
