Training YOLOX on the COCO Dataset

 

Note: if GPU memory is insufficient during training, you can reduce the batch size.
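For example, a minimal sketch of the same training command with the batch size halved via the -b flag (the value 8 is only an illustration; pick whatever fits your GPU memory):

python -m yolox.tools.train -n yolox-s -d 1 -b 8 --fp16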

(yolox) xuefei@f123:/mnt/d/BaiduNetdiskDownload/CV/YOLOX$ ls
LICENSE      README.md      assets  checkpoints  demo  exps        requirements.txt  setup.py  tools  yolox
MANIFEST.in  YOLOX_outputs  build   datasets     docs  hubconf.py  setup.cfg         tests     venv   yolox.egg-info

(yolox) xuefei@f123:/mnt/d/BaiduNetdiskDownload/CV/YOLOX$
(yolox) xuefei@f123:/mnt/d/BaiduNetdiskDownload/CV/YOLOX$ python -m yolox.tools.train -n yolox-s -d 1 -b 16 --fp16
2024-02-02 08:36:25 | INFO     | yolox.core.trainer:130 - args: Namespace(batch_size=16, cache=None, ckpt=None, devices=1, dist_backend='nccl', dist_url=None, exp_file=None, experiment_name='yolox_s', fp16=True, logger='tensorboard', machine_rank=0, name='yolox-s', num_machines=1, occupy=False, opts=[], resume=False, start_epoch=None)
2024-02-02 08:36:25 | INFO     | yolox.core.trainer:131 - exp value:
╒═══════════════════╤════════════════════════════╕
│ keys              │ values                     │
╞═══════════════════╪════════════════════════════╡
│ seed              │ None                       │
├───────────────────┼────────────────────────────┤
│ output_dir        │ './YOLOX_outputs'          │
├───────────────────┼────────────────────────────┤
│ print_interval    │ 10                         │
├───────────────────┼────────────────────────────┤
│ eval_interval     │ 10                         │
├───────────────────┼────────────────────────────┤
│ dataset           │ None                       │
├───────────────────┼────────────────────────────┤
│ num_classes       │ 80                         │
├───────────────────┼────────────────────────────┤
│ depth             │ 0.33                       │
├───────────────────┼────────────────────────────┤
│ width             │ 0.5                        │
├───────────────────┼────────────────────────────┤
│ act               │ 'silu'                     │
├───────────────────┼────────────────────────────┤
│ data_num_workers  │ 4                          │
├───────────────────┼────────────────────────────┤
│ input_size        │ (640, 640)                 │
├───────────────────┼────────────────────────────┤
│ multiscale_range  │ 5                          │
├───────────────────┼────────────────────────────┤
│ data_dir          │ None                       │
├───────────────────┼────────────────────────────┤
│ train_ann         │ 'instances_train2017.json' │
├───────────────────┼────────────────────────────┤
│ val_ann           │ 'instances_val2017.json'   │
├───────────────────┼────────────────────────────┤
│ test_ann          │ 'instances_test2017.json'  │
├───────────────────┼────────────────────────────┤
│ mosaic_prob       │ 1.0                        │
├───────────────────┼────────────────────────────┤
│ mixup_prob        │ 1.0                        │
├───────────────────┼────────────────────────────┤
│ hsv_prob          │ 1.0                        │
├───────────────────┼────────────────────────────┤
│ flip_prob         │ 0.5                        │
├───────────────────┼────────────────────────────┤
│ degrees           │ 10.0                       │
├───────────────────┼────────────────────────────┤
│ translate         │ 0.1                        │
├───────────────────┼────────────────────────────┤
│ mosaic_scale      │ (0.1, 2)                   │
├───────────────────┼────────────────────────────┤
│ enable_mixup      │ True                       │
├───────────────────┼────────────────────────────┤
│ mixup_scale       │ (0.5, 1.5)                 │
├───────────────────┼────────────────────────────┤
│ shear             │ 2.0                        │
├───────────────────┼────────────────────────────┤
│ warmup_epochs     │ 5                          │
├───────────────────┼────────────────────────────┤
│ max_epoch         │ 300                        │
├───────────────────┼────────────────────────────┤
│ warmup_lr         │ 0                          │
├───────────────────┼────────────────────────────┤
│ min_lr_ratio      │ 0.05                       │
├───────────────────┼────────────────────────────┤
│ basic_lr_per_img  │ 0.00015625                 │
├───────────────────┼────────────────────────────┤
│ scheduler         │ 'yoloxwarmcos'             │
├───────────────────┼────────────────────────────┤
│ no_aug_epochs     │ 15                         │
├───────────────────┼────────────────────────────┤
│ ema               │ True                       │
├───────────────────┼────────────────────────────┤
│ weight_decay      │ 0.0005                     │
├───────────────────┼────────────────────────────┤
│ momentum          │ 0.9                        │
├───────────────────┼────────────────────────────┤
│ save_history_ckpt │ True                       │
├───────────────────┼────────────────────────────┤
│ exp_name          │ 'yolox_s'                  │
├───────────────────┼────────────────────────────┤
│ test_size         │ (640, 640)                 │
├───────────────────┼────────────────────────────┤
│ test_conf         │ 0.01                       │
├───────────────────┼────────────────────────────┤
│ nmsthre           │ 0.65                       │
╘═══════════════════╧════════════════════════════╛
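The table above is printed from the experiment (Exp) configuration. To change any of these values (e.g. max_epoch or input_size), you can point the trainer at an exp file with -f instead of -n; a minimal sketch, assuming the stock exp file path exps/default/yolox_s.py shipped in the exps directory (copy it and edit the copy for your own settings):

python -m yolox.tools.train -f exps/default/yolox_s.py -d 1 -b 16 --fp16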
2024-02-02 08:36:25 | INFO     | yolox.core.trainer:137 - Model Summary: Params: 8.97M, Gflops: 26.93
2024-02-02 08:36:28 | INFO     | yolox.data.datasets.coco:63 - loading annotations into memory...
2024-02-02 08:36:38 | INFO     | yolox.data.datasets.coco:63 - Done (t=10.09s)
2024-02-02 08:36:38 | INFO     | pycocotools.coco:86 - creating index...
2024-02-02 08:36:39 | INFO     | pycocotools.coco:86 - index created!
2024-02-02 08:36:55 | INFO     | yolox.core.trainer:155 - init prefetcher, this might take one minute or less...
2024-02-02 08:36:58 | INFO     | yolox.data.datasets.coco:63 - loading annotations into memory...
2024-02-02 08:36:59 | INFO     | yolox.data.datasets.coco:63 - Done (t=0.58s)
2024-02-02 08:36:59 | INFO     | pycocotools.coco:86 - creating index...
2024-02-02 08:36:59 | INFO     | pycocotools.coco:86 - index created!
2024-02-02 08:36:59 | INFO     | yolox.core.trainer:191 - Training start...
2024-02-02 08:36:59 | INFO     | yolox.core.trainer:192 -
YOLOX(
  (backbone): YOLOPAFPN(
    (backbone): CSPDarknet(
      (stem): Focus(
        (conv): BaseConv(
          (conv): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (dark2): Sequential(
        (0): BaseConv(
          (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
      (dark3): Sequential(
        (0): BaseConv(
          (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (1): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (2): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
      (dark4): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (1): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (2): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
      (dark5): Sequential(
        (0): BaseConv(
          (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): SPPBottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): ModuleList(
            (0): MaxPool2d(kernel_size=5, stride=1, padding=2, dilation=1, ceil_mode=False)
            (1): MaxPool2d(kernel_size=9, stride=1, padding=4, dilation=1, ceil_mode=False)
            (2): MaxPool2d(kernel_size=13, stride=1, padding=6, dilation=1, ceil_mode=False)
          )
          (conv2): BaseConv(
            (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
    )
    (upsample): Upsample(scale_factor=2.0, mode=nearest)
    (lateral_conv0): BaseConv(
      (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_p4): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (reduce_conv1): BaseConv(
      (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_p3): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (bu_conv2): BaseConv(
      (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_n3): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (bu_conv1): BaseConv(
      (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_n4): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
  )
  (head): YOLOXHead(
    (cls_convs): ModuleList(
      (0): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (1): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (2): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
    (reg_convs): ModuleList(
      (0): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (1): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (2): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
    (cls_preds): ModuleList(
      (0): Conv2d(128, 80, kernel_size=(1, 1), stride=(1, 1))
      (1): Conv2d(128, 80, kernel_size=(1, 1), stride=(1, 1))
      (2): Conv2d(128, 80, kernel_size=(1, 1), stride=(1, 1))
    )
    (reg_preds): ModuleList(
      (0): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
      (1): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
      (2): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
    )
    (obj_preds): ModuleList(
      (0): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
      (1): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
      (2): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
    )
    (stems): ModuleList(
      (0): BaseConv(
        (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (1): BaseConv(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (2): BaseConv(
        (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
    )
    (l1_loss): L1Loss()
    (bcewithlog_loss): BCEWithLogitsLoss()
    (iou_loss): IOUloss()
  )
)
2024-02-02 08:36:59 | INFO     | yolox.core.trainer:203 - ---> start train epoch1
2024-02-02 08:37:07 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 10/7393, gpu mem: 3713Mb, mem: 6.5Gb, iter_time: 0.751s, data_time: 0.008s, total_loss: 20.3, iou_loss: 4.8, l1_loss: 0.0, conf_loss: 13.7, cls_loss: 1.9, lr: 1.830e-10, size: 640, ETA: 19 days, 6:56:56
2024-02-02 08:37:12 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 20/7393, gpu mem: 3713Mb, mem: 6.5Gb, iter_time: 0.467s, data_time: 0.001s, total_loss: 15.7, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 9.0, cls_loss: 2.1, lr: 7.318e-10, size: 640, ETA: 15 days, 15:17:49
2024-02-02 08:37:18 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 30/7393, gpu mem: 3713Mb, mem: 6.5Gb, iter_time: 0.616s, data_time: 0.002s, total_loss: 16.7, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 10.0, cls_loss: 2.1, lr: 1.647e-09, size: 576, ETA: 15 days, 16:38:41
2024-02-02 08:37:30 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 40/7393, gpu mem: 4978Mb, mem: 6.5Gb, iter_time: 1.207s, data_time: 0.002s, total_loss: 17.8, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 11.1, cls_loss: 2.0, lr: 2.927e-09, size: 800, ETA: 19 days, 12:21:38
2024-02-02 08:37:36 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 50/7393, gpu mem: 4978Mb, mem: 6.5Gb, iter_time: 0.630s, data_time: 0.002s, total_loss: 15.8, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 8.9, cls_loss: 2.2, lr: 4.574e-09, size: 544, ETA: 18 days, 20:17:13
2024-02-02 08:37:45 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 60/7393, gpu mem: 4978Mb, mem: 6.5Gb, iter_time: 0.916s, data_time: 0.002s, total_loss: 17.5, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 10.9, cls_loss: 1.9, lr: 6.587e-09, size: 736, ETA: 19 days, 14:59:49
2024-02-02 08:37:55 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 70/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.994s, data_time: 0.002s, total_loss: 18.5, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 11.8, cls_loss: 2.1, lr: 8.965e-09, size: 800, ETA: 20 days, 11:11:27
2024-02-02 08:38:03 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 80/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.816s, data_time: 0.003s, total_loss: 20.7, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 13.8, cls_loss: 2.2, lr: 1.171e-08, size: 800, ETA: 20 days, 12:36:00
2024-02-02 08:38:10 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 90/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.631s, data_time: 0.002s, total_loss: 18.8, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 12.3, cls_loss: 1.9, lr: 1.482e-08, size: 640, ETA: 20 days, 1:04:39
2024-02-02 08:38:15 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 100/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.504s, data_time: 0.002s, total_loss: 15.7, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 9.1, cls_loss: 2.0, lr: 1.830e-08, size: 480, ETA: 19 days, 8:00:53
2024-02-02 08:38:22 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 110/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.746s, data_time: 0.002s, total_loss: 18.6, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 11.8, cls_loss: 2.1, lr: 2.214e-08, size: 672, ETA: 19 days, 7:36:25
2024-02-02 08:38:27 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 120/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.519s, data_time: 0.002s, total_loss: 15.7, iou_loss: 4.6, l1_loss: 0.0, conf_loss: 9.0, cls_loss: 2.0, lr: 2.635e-08, size: 512, ETA: 18 days, 19:38:05
2024-02-02 08:38:32 | INFO     | yolox.core.trainer:263 - epoch: 1/300, iter: 130/7393, gpu mem: 5059Mb, mem: 6.5Gb, iter_time: 0.463s, data_time: 0.001s, total_loss: 17.6, iou_loss: 4.7, l1_loss: 0.0, conf_loss: 11.1, cls_loss: 1.9, lr: 3.092e-08, size: 640, ETA: 18 days, 6:51:07

^C2024-02-02 08:38:32 | INFO     | yolox.core.trainer:196 - Training of experiment is done and the best AP is 0.00
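Training was interrupted here with Ctrl+C. To continue from where it stopped instead of starting over, the trainer exposes --resume and -c/--ckpt options (both visible in the args namespace above); a minimal sketch, assuming the default checkpoint written under output_dir/exp_name (the filename latest_ckpt.pth is an assumption based on the default YOLOX output layout):

python -m yolox.tools.train -n yolox-s -d 1 -b 16 --fp16 --resume -c YOLOX_outputs/yolox_s/latest_ckpt.pth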
