Colab/PyTorch - 006 Mask RCNN Instance Segmentation
- 1. Background
- 2. Implementing Mask R-CNN with PyTorch
- 2.1 Inputs and Outputs
- 2.2 Pretrained Model
- 2.3 Model Prediction
- 2.4 Object Detection Pipeline
- 2.5 Inference
- Example 1
- Example 2
- Example 3
- 3. Inference Time Comparison (CPU vs. GPU)
- 4. Summary
- 5. References
1. Background
In the Background section of 《Colab/PyTorch - 004 Torchvision Semantic Segmentation》 we walked through a list of image-analysis tasks ordered by increasing difficulty.
As the techniques mature and the problems grow more complex, a natural next step is: once an object's bounding box has been detected, find out which pixels inside that box actually belong to the object. Mask R-CNN is one algorithm that does exactly this.
The Mask R-CNN architecture is an extension of Faster R-CNN, which consists of the following components:
- Convolutional layers: the input image is passed through several convolutional layers to produce a feature map. If you are a beginner, think of the convolutional layers as a black box that takes a 3-channel input image and outputs an "image" with much smaller spatial dimensions (e.g. 7×7) but many more channels (e.g. 512).
- Region Proposal Network (RPN): the output of the convolutional layers is used to train a network that proposes regions likely to contain objects.
- Classifier: the same feature map is also used to train a classifier that assigns a label to the object inside each bounding box.
Recall that Faster R-CNN is faster than Fast R-CNN because the feature map is computed only once and then reused by both the RPN and the classifier.
Mask R-CNN goes one step further. In addition to feeding the feature map to the RPN and the classifier, it uses it to predict a binary mask for the object inside each bounding box. The mask-prediction branch is essentially a Fully Convolutional Network (FCN) for semantic segmentation; the only difference is that this FCN operates on the bounding boxes and shares its convolutional layers with the RPN and the classifier.
The figure below shows the architecture at a very high level.
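This shared-backbone design is visible directly in torchvision's implementation. A minimal inspection sketch (assuming torchvision 0.13+ for the weights arguments); the module names match the model printout further below:

import torchvision

# structure only, no weight download
m = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, weights_backbone=None)
print(type(m.backbone).__name__)                  # BackboneWithFPN - feature extractor shared by all heads
print(type(m.rpn).__name__)                       # RegionProposalNetwork - proposes candidate boxes
print(type(m.roi_heads.box_predictor).__name__)   # FastRCNNPredictor - class label + box refinement
print(type(m.roi_heads.mask_predictor).__name__)  # MaskRCNNPredictor - FCN-style per-box mask head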
2. Implementing Mask R-CNN with PyTorch
To run this on Colab, upload the prepared dataset to Google Drive.
The photos can either be downloaded directly or copied to the directory /content/drive/MyDrive/mask_rcnn/.
# import necessary libraries
from PIL import Image
import matplotlib.pyplot as plt
import torch
import torchvision.transforms as T
import torchvision
import numpy as np
import cv2
import random
import time
import os

# Test on Google Drive
from google.colab import drive
drive.mount('/content/drive')
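The examples further down call download_image() and reference a directory_path variable, neither of which is defined in the snippet above. A minimal sketch of what they could look like (the directory path and the use of urllib are assumptions, not part of the original notebook):

import urllib.request

# Assumed working directory on Google Drive (adjust to your own layout)
directory_path = "/content/drive/MyDrive/mask_rcnn/"
os.makedirs(directory_path, exist_ok=True)

def download_image(url, file_path):
    """Download `url` to `file_path` unless the file is already present."""
    if not os.path.exists(file_path):
        urllib.request.urlretrieve(url, file_path)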
2.1 Inputs and Outputs
The model expects the input to be a list of tensor images, i.e. a batch of shape (n, c, h, w), with pixel values in the range 0-1. The images do not need to have a fixed size.
- n is the number of images
- c is the number of channels, 3 for RGB images
- h is the image height
- w is the image width
The model returns (see the short sketch after this list):
- the coordinates of the bounding boxes,
- the class labels the model predicts to be present in the input image, together with their scores,
- the masks for each predicted class label.
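A minimal sketch of this input/output contract, assuming the pretrained model loaded in 2.2 below:

# Two dummy RGB images of different sizes, values already in [0, 1]
x = [torch.rand(3, 300, 400), torch.rand(3, 480, 640)]
with torch.no_grad():
    out = model(x)            # one dict per input image
print(out[0].keys())          # expected: dict_keys(['boxes', 'labels', 'scores', 'masks'])
# boxes:  (k, 4) tensors holding [x1, y1, x2, y2]
# labels: (k,) COCO class indices; scores: (k,) confidences, sorted in descending order
# masks:  (k, 1, h, w) soft masks in [0, 1], thresholded to binary masks later on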
2.2 Pretrained Model
# get the pretrained model from torchvision.models
# Note: pretrained=True will get the pretrained weights for the model.
# model.eval() to use the model for inference
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()
/usr/local/lib/python3.10/dist-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
/usr/local/lib/python3.10/dist-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MaskRCNN_ResNet50_FPN_Weights.COCO_V1`. You can also use `weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT` to get the most up-to-date weights.
Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /root/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
100%|██████████| 170M/170M [00:01<00:00, 92.7MB/s]
MaskRCNN(
  (transform): GeneralizedRCNNTransform(Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), Resize(min_size=(800,), max_size=1333, mode='bilinear'))
  (backbone): BackboneWithFPN((body): IntermediateLayerGetter(ResNet-50 layers with FrozenBatchNorm2d), (fpn): FeaturePyramidNetwork(...))
  (rpn): RegionProposalNetwork((anchor_generator): AnchorGenerator(), (head): RPNHead(...))
  (roi_heads): RoIHeads(
    (box_roi_pool): MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'], output_size=(7, 7), sampling_ratio=2)
    (box_head): TwoMLPHead(...)
    (box_predictor): FastRCNNPredictor((cls_score): Linear(in_features=1024, out_features=91), (bbox_pred): Linear(in_features=1024, out_features=364))
    (mask_roi_pool): MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'], output_size=(14, 14), sampling_ratio=2)
    (mask_head): MaskRCNNHeads(...)
    (mask_predictor): MaskRCNNPredictor((conv5_mask): ConvTranspose2d(256, 256), (mask_fcn_logits): Conv2d(256, 91))
  )
)
(full layer-by-layer printout of the ResNet-50 FPN backbone omitted here for brevity)
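As the deprecation warnings suggest, torchvision 0.13+ prefers the weights enum over pretrained=True; on those versions the equivalent call would be:

from torchvision.models.detection import MaskRCNN_ResNet50_FPN_Weights

# same COCO-pretrained model, expressed with the newer weights API
model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT)
model.eval()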
2.3 Model Prediction
# These are the classes that are available in the COCO dataset
COCO_INSTANCE_CATEGORY_NAMES = [
    '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
    'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
    'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
    'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
    'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
    'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
    'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
    'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
    'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
    'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
    'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
    'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
]

def get_prediction(img_path, threshold):
    """get_prediction
    parameters:
      - img_path - path of the input image
      - threshold - score threshold; only predictions scoring above it are kept
    method:
      - the image is obtained from the image path
      - the image is converted to an image tensor using PyTorch's transforms
      - the image is passed through the model to get the predictions
      - masks, classes and bounding boxes are obtained from the model, and the soft masks
        are made binary (0 or 1), e.g. the segment of a cat is set to 1 and the rest to 0
    """
    img = Image.open(img_path)
    transform = T.Compose([T.ToTensor()])
    img = transform(img)
    pred = model([img])
    pred_score = list(pred[0]['scores'].detach().numpy())
    # scores come back sorted in descending order; pred_t is the index of the last
    # prediction above the threshold (raises IndexError if nothing passes it)
    pred_t = [pred_score.index(x) for x in pred_score if x > threshold][-1]
    masks = (pred[0]['masks'] > 0.5).squeeze().detach().cpu().numpy()
    pred_class = [COCO_INSTANCE_CATEGORY_NAMES[i] for i in list(pred[0]['labels'].numpy())]
    pred_boxes = [[(i[0], i[1]), (i[2], i[3])] for i in list(pred[0]['boxes'].detach().numpy())]
    masks = masks[:pred_t + 1]
    pred_boxes = pred_boxes[:pred_t + 1]
    pred_class = pred_class[:pred_t + 1]
    return masks, pred_boxes, pred_class
- The image is read from the image path.
- The image is converted to an image tensor using PyTorch's transforms.
- The image is passed through the model to obtain the predictions.
- The masks, predicted classes and bounding-box coordinates are obtained from the model, and the soft masks are binarised (0 or 1); e.g. the pixels belonging to a cat are set to 1 and the rest of the image to 0. The score-based cut-off works as in the small worked example below.
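Since the model returns its detections sorted by descending score, the index of the last score above the threshold is enough to slice off the low-confidence detections. A small worked example with made-up scores:

pred_score = [0.99, 0.95, 0.81, 0.40, 0.12]   # hypothetical scores, already sorted descending
threshold = 0.75
pred_t = [pred_score.index(x) for x in pred_score if x > threshold][-1]
print(pred_t)   # 2 -> keep predictions 0, 1 and 2, drop the rest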
Each predicted object's mask is assigned a random colour from a predefined set of 11 colours so that the masks can be visualised on top of the input image.
def random_colour_masks(image):
    """random_colour_masks
    parameters:
      - image - a single predicted binary mask
    method:
      - the mask of each predicted object is given a random colour for visualization
    """
    colours = [[0, 255, 0], [0, 0, 255], [255, 0, 0], [0, 255, 255], [255, 255, 0],
               [255, 0, 255], [80, 70, 180], [250, 80, 190], [245, 145, 50],
               [70, 150, 250], [50, 190, 190]]
    r = np.zeros_like(image).astype(np.uint8)
    g = np.zeros_like(image).astype(np.uint8)
    b = np.zeros_like(image).astype(np.uint8)
    # pick any of the 11 colours (randrange's upper bound is exclusive)
    r[image == 1], g[image == 1], b[image == 1] = colours[random.randrange(0, len(colours))]
    coloured_mask = np.stack([r, g, b], axis=2)
    return coloured_mask
2.4 Object Detection Pipeline
def instance_segmentation_api(img_path, threshold=0.5, rect_th=3, text_size=3, text_th=3):
    """instance_segmentation_api
    parameters:
      - img_path - path to the input image
    method:
      - the prediction is obtained from get_prediction
      - each mask is given a random colour
      - each mask is blended onto the image with OpenCV using weights 1 : 0.5
      - the final output is displayed
    """
    masks, boxes, pred_cls = get_prediction(img_path, threshold)
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    for i in range(len(masks)):
        rgb_mask = random_colour_masks(masks[i])
        img = cv2.addWeighted(img, 1, rgb_mask, 0.5, 0)
        cv2.rectangle(img, (int(boxes[i][0][0]), int(boxes[i][0][1])),
                      (int(boxes[i][1][0]), int(boxes[i][1][1])),
                      color=(0, 255, 0), thickness=rect_th)
        cv2.putText(img, pred_cls[i], (int(boxes[i][0][0]), int(boxes[i][0][1])),
                    cv2.FONT_HERSHEY_SIMPLEX, text_size, (0, 255, 0), thickness=text_th)
    plt.figure(figsize=(20, 30))
    plt.imshow(img)
    plt.xticks([])
    plt.yticks([])
    plt.show()
- The masks, predicted classes and bounding boxes are obtained from get_prediction.
- Each mask is given a random colour from the set of 11 colours.
- Each mask is blended onto the image with OpenCV using weights 1 : 0.5.
- The bounding box is drawn with cv2.rectangle and the class name is written as text.
- The final output is displayed.
2.5 Inference
Example 1
#!wget https://www.wsha.org/wp-content/uploads/banner-diverse-group-of-people-2.jpg -O mrcnn_standing_people.jpg
image_file = "mrcnn_standing_people.jpg"
full_image_path = os.path.join(directory_path, image_file)
download_image("https://www.wsha.org/wp-content/uploads/banner-diverse-group-of-people-2.jpg", full_image_path)instance_segmentation_api(full_image_path, 0.75)
Example 2
#!wget https://hips.hearstapps.com/hmg-prod.s3.amazonaws.com/images/10best-cars-group-cropped-1542126037.jpg -O mrcnn_cars.jpg
image_file = "mrcnn_cars.jpg"
full_image_path = os.path.join(directory_path, image_file)
download_image("https://hips.hearstapps.com/hmg-prod.s3.amazonaws.com/images/10best-cars-group-cropped-1542126037.jpg", full_image_path)instance_segmentation_api(full_image_path, 0.9, rect_th=5, text_size=5, text_th=5)
Example 3
#!wget https://cdn.pixabay.com/photo/2013/07/05/01/08/traffic-143391_960_720.jpg -O mrcnn_traffic.jpg
image_file = "mrcnn_traffic.jpg"
full_image_path = os.path.join(directory_path, image_file)
download_image("https://cdn.pixabay.com/photo/2013/07/05/01/08/traffic-143391_960_720.jpg", full_image_path)instance_segmentation_api(full_image_path, 0.6, rect_th=2, text_size=2, text_th=2)
3. Inference Time Comparison (CPU vs. GPU)
def check_inference_time(image_path, gpu=False):
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()
    img = Image.open(image_path)
    transform = T.Compose([T.ToTensor()])
    img = transform(img)
    if gpu:
        model.cuda()
        img = img.cuda()
    else:
        model.cpu()
        img = img.cpu()
    start_time = time.time()
    pred = model([img])
    end_time = time.time()
    return end_time - start_time

# Let's run inference on all the downloaded images and average their inference time
# img_paths = [path for path in os.listdir("./") if path.split(".")[-1].lower() in ["jpeg", "jpg", "png"]]

# Get a list of image paths in the specified directory
img_paths = [os.path.join(directory_path, path) for path in os.listdir(directory_path)
             if path.split(".")[-1].lower() in ["jpeg", "jpg", "png"]]

gpu_time = sum([check_inference_time(img_path, gpu=True) for img_path in img_paths]) / len(img_paths)
cpu_time = sum([check_inference_time(img_path, gpu=False) for img_path in img_paths]) / len(img_paths)

print('\n\nAverage Time take by the model with GPU = {}s\nAverage Time take by the model with CPU = {}s'.format(gpu_time, cpu_time))
The GPU is dramatically faster than the CPU:
Average Time take by the model with GPU = 0.32508648525584827s
Average Time take by the model with CPU = 8.285651618784124s
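One caveat: CUDA kernels are launched asynchronously, so wall-clock timing around a GPU forward pass can under-report the true cost. A more careful variant (a hypothetical refinement, not part of the original notebook) would synchronise before reading the clock:

def check_inference_time_synced(image_path, gpu=False):
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()
    img = T.ToTensor()(Image.open(image_path))
    if gpu:
        model.cuda()
        img = img.cuda()
    with torch.no_grad():
        start_time = time.time()
        model([img])
        if gpu:
            torch.cuda.synchronize()   # wait for all queued GPU work to finish
        end_time = time.time()
    return end_time - start_time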
4. Summary
Overall, applying an off-the-shelf general-purpose model to this kind of task is not complicated.
The hard part lies in collecting and labelling good data, and in designing and training models for specialised applications.
Fortunately, the practical problems we will face later already have well-established algorithms, such as YOLO.
Test code: 006 PyTorch Mask RCNN
5. References
【1】Colab/PyTorch - Getting Started with PyTorch