Configuring the TensorFlow Object Detection API on Windows 10

Setting up the TensorFlow Object Detection API on Windows 10 is not as convenient as on Ubuntu, but it can be done; this post walks through my setup procedure.
The official object detection demo imports a large number of .py files, which is inconvenient for deployment, so I merged the required code into two files.
## 1. Setting up the TensorFlow models environment on Windows
The GPU driver, CUDA and cuDNN are assumed to be installed already.

(1) Download and install Anaconda3

https://www.anaconda.com/download/

(2) Install tensorflow-gpu (I installed 1.10.0)

# For CPU
pip install tensorflow
# For GPU
pip install tensorflow-gpu==<version>   # e.g. tensorflow-gpu==1.10.0
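
A quick way to confirm the install (a minimal check; whether the GPU is visible depends on your driver/CUDA/cuDNN setup):

python -c "import tensorflow as tf; print(tf.__version__)"
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"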

(3) Clone the TensorFlow models repository from GitHub

git clone https://github.com/tensorflow/models.git

(4) Install the dependencies

The official installation guide: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md
# The guide installs protobuf-compiler, python-pil, python-lxml and python-tk via apt-get; on Windows, install the Python packages with pip instead (protoc itself is handled in step (7)):
pip install pillow
pip install lxml
pip install Cython
pip install contextlib2
pip install jupyter
pip install matplotlib

(5) Install the COCO API

# The official steps; these do not work directly on Windows:
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
make
cp -r pycocotools <path_to_tensorflow>/models/research/
# To build on Windows, first download mingw32-make
# MinGW download: https://sourceforge.net/projects/mingw/files/latest/download?source=files
# Add it to the PATH environment variable
C:\MINGW\bin
# Install the toolchain; in cmd run:
mingw-get install gcc g++ mingw32-make
# After installation, copy mingw32-make.exe into cocoapi/PythonAPI and run the build:
mingw32-make
# The corresponding Python files are then generated inside the pycocotools folder; move the whole folder into the research directory of tensorflow-models.
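
To verify that the build produced a usable pycocotools (a minimal check, assuming the folder has already been moved into models/research or is otherwise on the Python path):

python -c "from pycocotools.coco import COCO; print('pycocotools OK')"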

(6) Install the research package

# Change into the following directory and run:
/models/research/
python setup.py build
python setup.py install

(7) Compile the protobuf files

# Download protoc and install it
https://github.com/protocolbuffers/protobuf/releases
# Change into /models/research/ and run:
protoc object_detection/protos/*.proto --python_out=.
# If the wildcard form reports an error, compile the .proto files one by one; after compilation there is a *_pb2.py file for each .proto under object_detection/protos.
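
On cmd.exe the wildcard in the command above is often not expanded, so looping over the files one by one from models/research works as well (a sketch for an interactive cmd session; use %%i instead of %i inside a .bat file):

for /f %i in ('dir /b object_detection\protos\*.proto') do protoc object_detection\protos\%i --python_out=.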


(8) Test whether the installation succeeded

python object_detection/builders/model_builder_test.py
# If the tests return OK, the installation succeeded


Note

You may run into "No module named nets". The cause is that slim has not been built; fix it as follows:

cd models/research/slim
python setup.py build
python setup.py install
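
If the error persists, putting the research and slim directories on PYTHONPATH (the official guide does the equivalent with export on Linux) usually resolves it. A sketch for a cmd session, assuming the repository was cloned to C:\path\to\models; adjust the path to your own clone:

set PYTHONPATH=%PYTHONPATH%;C:\path\to\models\research;C:\path\to\models\research\slim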

## 2. Consolidating the TensorFlow object detection Python script
A few variables in the script must be set before running it.

PATH_TO_LABELS must be given as an absolute path, and the path should not contain Chinese characters; otherwise you may get an error such as: UnicodeEncodeError: 'utf-8' codec can't encode character '\udcd5' in position 2201: surrogates not allowed

MODEL_NAME = 'ssd_mobilenet_v2_coco_2018_03_29'
MODEL_FILE = MODEL_NAME + '.tar.gz'
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'

The test image paths also need to be specified:

PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(4, 5)]

Download the model from the official model zoo and extract it:

# Model download page
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
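
The archive can also be fetched and unpacked from Python instead of by hand. This is only a sketch following the pattern of the official tutorial; the DOWNLOAD_BASE URL below is the address used there, and MODEL_NAME/MODEL_FILE match the values used in the script:

import os
import tarfile
import urllib.request

MODEL_NAME = 'ssd_mobilenet_v2_coco_2018_03_29'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

# Download the tarball into the current directory and extract it.
urllib.request.urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
with tarfile.open(MODEL_FILE) as tar:
    tar.extractall(os.getcwd())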

The consolidated object detection source code follows.

string_int_label_map_pb2.py from models\research\object_detection\protos must be copied into the same directory as this script, because the script imports it directly.

# -*- coding: utf-8 -*-
"""
Created on Tue Sep  4 22:54:32 2018

@author: 34123
"""
import numpy as np
import os
import tensorflow as tf
from matplotlib import pyplot as plt
from PIL import Image

# from object_detection.utils import label_map_util
# from object_detection.utils import visualization_utils as vis_util

MODEL_NAME = 'ssd_mobilenet_v2_coco_2018_03_29'
MODEL_FILE = MODEL_NAME + '.tar.gz'
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that is used to add correct label for each box.
# PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
PATH_TO_LABELS = 'C:\\Users\\34123\\Desktop\\Block Programing\\tensorflow_demo\\ssd_mobilenet_v2_coco_2018_03_29\\mscoco_label_map.pbtxt'
NUM_CLASSES = 90

# Load the frozen detection graph into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

################################################################################################
import string_int_label_map_pb2
from google.protobuf import text_format
import logging


def _validate_label_map(label_map):
    """Checks if a label map is valid.

    Args:
        label_map: StringIntLabelMap to validate.

    Raises:
        ValueError: if label map is invalid.
    """
    for item in label_map.item:
        if item.id < 0:
            raise ValueError('Label map ids should be >= 0.')
        if (item.id == 0 and item.name != 'background' and
                item.display_name != 'background'):
            raise ValueError('Label map id 0 is reserved for the background label')


def convert_label_map_to_categories(label_map, max_num_classes, use_display_name=True):
    """Loads label map proto and returns categories list compatible with eval.

    This function loads a label map and returns a list of dicts, each of which
    has the following keys:
        'id': (required) an integer id uniquely identifying this category.
        'name': (required) string representing category name
            e.g., 'cat', 'dog', 'pizza'.

    We only allow class into the list if its id-label_id_offset is
    between 0 (inclusive) and max_num_classes (exclusive).
    If there are several items mapping to the same id in the label map,
    we will only keep the first one in the categories list.

    Args:
        label_map: a StringIntLabelMapProto or None.  If None, a default categories
            list is created with max_num_classes categories.
        max_num_classes: maximum number of (consecutive) label indices to include.
        use_display_name: (boolean) choose whether to load 'display_name' field
            as category name.  If False or if the display_name field does not exist,
            uses 'name' field as category names instead.

    Returns:
        categories: a list of dictionaries representing all possible categories.
    """
    categories = []
    list_of_ids_already_added = []
    if not label_map:
        label_id_offset = 1
        for class_id in range(max_num_classes):
            categories.append({
                'id': class_id + label_id_offset,
                'name': 'category_{}'.format(class_id + label_id_offset)
            })
        return categories
    for item in label_map.item:
        if not 0 < item.id <= max_num_classes:
            logging.info('Ignore item %d since it falls outside of requested '
                         'label range.', item.id)
            continue
        if use_display_name and item.HasField('display_name'):
            name = item.display_name
        else:
            name = item.name
        if item.id not in list_of_ids_already_added:
            list_of_ids_already_added.append(item.id)
            categories.append({'id': item.id, 'name': name})
    return categories


def load_labelmap(path):
    """Loads label map proto.

    Args:
        path: path to StringIntLabelMap proto text file.

    Returns:
        a StringIntLabelMapProto
    """
    with tf.gfile.GFile(path, 'r') as fid:
        label_map_string = fid.read()
        label_map = string_int_label_map_pb2.StringIntLabelMap()
        try:
            text_format.Merge(label_map_string, label_map)
        except text_format.ParseError:
            label_map.ParseFromString(label_map_string)
    _validate_label_map(label_map)
    return label_map


def create_category_index(categories):
    """Creates dictionary of COCO compatible categories keyed by category id.

    Args:
        categories: a list of dicts, each of which has the following keys:
            'id': (required) an integer id uniquely identifying this category.
            'name': (required) string representing category name
                e.g., 'cat', 'dog', 'pizza'.

    Returns:
        category_index: a dict containing the same entries as categories, but keyed
            by the 'id' field of each category.
    """
    category_index = {}
    for cat in categories:
        category_index[cat['id']] = cat
    return category_index


###############################################################################################
label_map = load_labelmap(PATH_TO_LABELS)
categories = convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = create_category_index(categories)
##########################
# from object_detection.utils import ops as utils_ops
###############################
def reframe_box_masks_to_image_masks(box_masks, boxes, image_height,image_width):"""Transforms the box masks back to full image masks.Embeds masks in bounding boxes of larger masks whose shapes correspond toimage shape.Args:box_masks: A tf.float32 tensor of size [num_masks, mask_height, mask_width].boxes: A tf.float32 tensor of size [num_masks, 4] containing the boxcorners. Row i contains [ymin, xmin, ymax, xmax] of the boxcorresponding to mask i. Note that the box corners are innormalized coordinates.image_height: Image height. The output mask will have the same height asthe image height.image_width: Image width. The output mask will have the same width as theimage width.Returns:A tf.float32 tensor of size [num_masks, image_height, image_width]."""# TODO(rathodv): Make this a public function.def reframe_box_masks_to_image_masks_default():"""The default function when there are more than 0 box masks."""def transform_boxes_relative_to_boxes(boxes, reference_boxes):boxes = tf.reshape(boxes, [-1, 2, 2])min_corner = tf.expand_dims(reference_boxes[:, 0:2], 1)max_corner = tf.expand_dims(reference_boxes[:, 2:4], 1)transformed_boxes = (boxes - min_corner) / (max_corner - min_corner)return tf.reshape(transformed_boxes, [-1, 4])box_masks_expanded = tf.expand_dims(box_masks, axis=3)num_boxes = tf.shape(box_masks_expanded)[0]unit_boxes = tf.concat([tf.zeros([num_boxes, 2]), tf.ones([num_boxes, 2])], axis=1)reverse_boxes = transform_boxes_relative_to_boxes(unit_boxes, boxes)return tf.image.crop_and_resize(image=box_masks_expanded,boxes=reverse_boxes,box_ind=tf.range(num_boxes),crop_size=[image_height, image_width],extrapolation_value=0.0)image_masks = tf.cond(tf.shape(box_masks)[0] > 0,reframe_box_masks_to_image_masks_default,lambda: tf.zeros([0, image_height, image_width, 1], dtype=tf.float32))return tf.squeeze(image_masks, axis=3)#######################################################################################################
import collections
import PIL.ImageColor as ImageColor
import PIL.ImageDraw as ImageDraw
import PIL.ImageFont as ImageFontSTANDARD_COLORS = ['AliceBlue', 'Chartreuse', 'Aqua', 'Aquamarine', 'Azure', 'Beige', 'Bisque','BlanchedAlmond', 'BlueViolet', 'BurlyWood', 'CadetBlue', 'AntiqueWhite','Chocolate', 'Coral', 'CornflowerBlue', 'Cornsilk', 'Crimson', 'Cyan','DarkCyan', 'DarkGoldenRod', 'DarkGrey', 'DarkKhaki', 'DarkOrange','DarkOrchid', 'DarkSalmon', 'DarkSeaGreen', 'DarkTurquoise', 'DarkViolet','DeepPink', 'DeepSkyBlue', 'DodgerBlue', 'FireBrick', 'FloralWhite','ForestGreen', 'Fuchsia', 'Gainsboro', 'GhostWhite', 'Gold', 'GoldenRod','Salmon', 'Tan', 'HoneyDew', 'HotPink', 'IndianRed', 'Ivory', 'Khaki','Lavender', 'LavenderBlush', 'LawnGreen', 'LemonChiffon', 'LightBlue','LightCoral', 'LightCyan', 'LightGoldenRodYellow', 'LightGray', 'LightGrey','LightGreen', 'LightPink', 'LightSalmon', 'LightSeaGreen', 'LightSkyBlue','LightSlateGray', 'LightSlateGrey', 'LightSteelBlue', 'LightYellow', 'Lime','LimeGreen', 'Linen', 'Magenta', 'MediumAquaMarine', 'MediumOrchid','MediumPurple', 'MediumSeaGreen', 'MediumSlateBlue', 'MediumSpringGreen','MediumTurquoise', 'MediumVioletRed', 'MintCream', 'MistyRose', 'Moccasin','NavajoWhite', 'OldLace', 'Olive', 'OliveDrab', 'Orange', 'OrangeRed','Orchid', 'PaleGoldenRod', 'PaleGreen', 'PaleTurquoise', 'PaleVioletRed','PapayaWhip', 'PeachPuff', 'Peru', 'Pink', 'Plum', 'PowderBlue', 'Purple','Red', 'RosyBrown', 'RoyalBlue', 'SaddleBrown', 'Green', 'SandyBrown','SeaGreen', 'SeaShell', 'Sienna', 'Silver', 'SkyBlue', 'SlateBlue','SlateGray', 'SlateGrey', 'Snow', 'SpringGreen', 'SteelBlue', 'GreenYellow','Teal', 'Thistle', 'Tomato', 'Turquoise', 'Violet', 'Wheat', 'White','WhiteSmoke', 'Yellow', 'YellowGreen'
]def draw_bounding_box_on_image(image,ymin,xmin,ymax,xmax,color='red',thickness=4,display_str_list=(),use_normalized_coordinates=True):"""Adds a bounding box to an image.Bounding box coordinates can be specified in either absolute (pixel) ornormalized coordinates by setting the use_normalized_coordinates argument.Each string in display_str_list is displayed on a separate line above thebounding box in black text on a rectangle filled with the input 'color'.If the top of the bounding box extends to the edge of the image, the stringsare displayed below the bounding box.Args:image: a PIL.Image object.ymin: ymin of bounding box.xmin: xmin of bounding box.ymax: ymax of bounding box.xmax: xmax of bounding box.color: color to draw bounding box. Default is red.thickness: line thickness. Default value is 4.display_str_list: list of strings to display in box(each to be shown on its own line).use_normalized_coordinates: If True (default), treat coordinatesymin, xmin, ymax, xmax as relative to the image.  Otherwise treatcoordinates as absolute."""draw = ImageDraw.Draw(image)im_width, im_height = image.sizeif use_normalized_coordinates:(left, right, top, bottom) = (xmin * im_width, xmax * im_width,ymin * im_height, ymax * im_height)else:(left, right, top, bottom) = (xmin, xmax, ymin, ymax)draw.line([(left, top), (left, bottom), (right, bottom),(right, top), (left, top)], width=thickness, fill=color)try:font = ImageFont.truetype('arial.ttf', 24)except IOError:font = ImageFont.load_default()# If the total height of the display strings added to the top of the bounding# box exceeds the top of the image, stack the strings below the bounding box# instead of above.display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]# Each display_str has a top and bottom margin of 0.05x.total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)if top > total_display_str_height:text_bottom = topelse:text_bottom = bottom + total_display_str_height# Reverse list and print from bottom to top.for display_str in display_str_list[::-1]:text_width, text_height = font.getsize(display_str)margin = np.ceil(0.05 * text_height)draw.rectangle([(left, text_bottom - text_height - 2 * margin), (left + text_width,text_bottom)],fill=color)draw.text((left + margin, text_bottom - text_height - margin),display_str,fill='black',font=font)text_bottom -= text_height - 2 * margindef draw_bounding_boxes_on_image(image,boxes,color='red',thickness=4,display_str_list_list=()):"""Draws bounding boxes on image.Args:image: a PIL.Image object.boxes: a 2 dimensional numpy array of [N, 4]: (ymin, xmin, ymax, xmax).The coordinates are in normalized format between [0, 1].color: color to draw bounding box. Default is red.thickness: line thickness. 
Default value is 4.display_str_list_list: list of list of strings.a list of strings for each bounding box.The reason to pass a list of strings for abounding box is that it might containmultiple labels.Raises:ValueError: if boxes is not a [N, 4] array"""boxes_shape = boxes.shapeif not boxes_shape:returnif len(boxes_shape) != 2 or boxes_shape[1] != 4:raise ValueError('Input must be of size [N, 4]')for i in range(boxes_shape[0]):display_str_list = ()if display_str_list_list:display_str_list = display_str_list_list[i]draw_bounding_box_on_image(image, boxes[i, 0], boxes[i, 1], boxes[i, 2],boxes[i, 3], color, thickness, display_str_list)def draw_bounding_box_on_image_array(image,ymin,xmin,ymax,xmax,color='red',thickness=4,display_str_list=(),use_normalized_coordinates=True):"""Adds a bounding box to an image (numpy array).Bounding box coordinates can be specified in either absolute (pixel) ornormalized coordinates by setting the use_normalized_coordinates argument.Args:image: a numpy array with shape [height, width, 3].ymin: ymin of bounding box.xmin: xmin of bounding box.ymax: ymax of bounding box.xmax: xmax of bounding box.color: color to draw bounding box. Default is red.thickness: line thickness. Default value is 4.display_str_list: list of strings to display in box(each to be shown on its own line).use_normalized_coordinates: If True (default), treat coordinatesymin, xmin, ymax, xmax as relative to the image.  Otherwise treatcoordinates as absolute."""image_pil = Image.fromarray(np.uint8(image)).convert('RGB')draw_bounding_box_on_image(image_pil, ymin, xmin, ymax, xmax, color,thickness, display_str_list,use_normalized_coordinates)np.copyto(image, np.array(image_pil))def draw_mask_on_image_array(image, mask, color='red', alpha=0.4):"""Draws mask on an image.Args:image: uint8 numpy array with shape (img_height, img_height, 3)mask: a uint8 numpy array of shape (img_height, img_height) withvalues between either 0 or 1.color: color to draw the keypoints with. Default is red.alpha: transparency value between 0 and 1. (default: 0.4)Raises:ValueError: On incorrect data type for image or masks."""if image.dtype != np.uint8:raise ValueError('`image` not of type np.uint8')if mask.dtype != np.uint8:raise ValueError('`mask` not of type np.uint8')if np.any(np.logical_and(mask != 1, mask != 0)):raise ValueError('`mask` elements should be in [0, 1]')if image.shape[:2] != mask.shape:raise ValueError('The image has spatial dimensions %s but the mask has ''dimensions %s' % (image.shape[:2], mask.shape))rgb = ImageColor.getrgb(color)pil_image = Image.fromarray(image)solid_color = np.expand_dims(np.ones_like(mask), axis=2) * np.reshape(list(rgb), [1, 1, 3])pil_solid_color = Image.fromarray(np.uint8(solid_color)).convert('RGBA')pil_mask = Image.fromarray(np.uint8(255.0*alpha*mask)).convert('L')pil_image = Image.composite(pil_solid_color, pil_image, pil_mask)np.copyto(image, np.array(pil_image.convert('RGB')))def draw_keypoints_on_image(image,keypoints,color='red',radius=2,use_normalized_coordinates=True):"""Draws keypoints on an image.Args:image: a PIL.Image object.keypoints: a numpy array with shape [num_keypoints, 2].color: color to draw the keypoints with. Default is red.radius: keypoint radius. Default value is 2.use_normalized_coordinates: if True (default), treat keypoint values asrelative to the image.  
Otherwise treat them as absolute."""draw = ImageDraw.Draw(image)im_width, im_height = image.sizekeypoints_x = [k[1] for k in keypoints]keypoints_y = [k[0] for k in keypoints]if use_normalized_coordinates:keypoints_x = tuple([im_width * x for x in keypoints_x])keypoints_y = tuple([im_height * y for y in keypoints_y])for keypoint_x, keypoint_y in zip(keypoints_x, keypoints_y):draw.ellipse([(keypoint_x - radius, keypoint_y - radius),(keypoint_x + radius, keypoint_y + radius)],outline=color, fill=color)def draw_keypoints_on_image_array(image,keypoints,color='red',radius=2,use_normalized_coordinates=True):"""Draws keypoints on an image (numpy array).Args:image: a numpy array with shape [height, width, 3].keypoints: a numpy array with shape [num_keypoints, 2].color: color to draw the keypoints with. Default is red.radius: keypoint radius. Default value is 2.use_normalized_coordinates: if True (default), treat keypoint values asrelative to the image.  Otherwise treat them as absolute."""image_pil = Image.fromarray(np.uint8(image)).convert('RGB')draw_keypoints_on_image(image_pil, keypoints, color, radius,use_normalized_coordinates)np.copyto(image, np.array(image_pil))def visualize_boxes_and_labels_on_image_array(image,boxes,classes,scores,category_index,instance_masks=None,instance_boundaries=None,keypoints=None,use_normalized_coordinates=False,max_boxes_to_draw=20,min_score_thresh=.5,agnostic_mode=False,line_thickness=4,groundtruth_box_visualization_color='black',skip_scores=False,skip_labels=False):"""Overlay labeled boxes on an image with formatted scores and label names.This function groups boxes that correspond to the same locationand creates a display string for each detection and overlays theseon the image. Note that this function modifies the image in place, and returnsthat same image.Args:image: uint8 numpy array with shape (img_height, img_width, 3)boxes: a numpy array of shape [N, 4]classes: a numpy array of shape [N]. Note that class indices are 1-based,and match the keys in the label map.scores: a numpy array of shape [N] or None.  If scores=None, thenthis function assumes that the boxes to be plotted are groundtruthboxes and plot all boxes as black with no classes or scores.category_index: a dict containing category dictionaries (each holdingcategory index `id` and category name `name`) keyed by category indices.instance_masks: a numpy array of shape [N, image_height, image_width] withvalues ranging between 0 and 1, can be None.instance_boundaries: a numpy array of shape [N, image_height, image_width]with values ranging between 0 and 1, can be None.keypoints: a numpy array of shape [N, num_keypoints, 2], canbe Noneuse_normalized_coordinates: whether boxes is to be interpreted asnormalized coordinates or not.max_boxes_to_draw: maximum number of boxes to visualize.  If None, drawall boxes.min_score_thresh: minimum score threshold for a box to be visualizedagnostic_mode: boolean (default: False) controlling whether to evaluate inclass-agnostic mode or not.  
This mode will display scores but ignoreclasses.line_thickness: integer (default: 4) controlling line width of the boxes.groundtruth_box_visualization_color: box color for visualizing groundtruthboxesskip_scores: whether to skip score when drawing a single detectionskip_labels: whether to skip label when drawing a single detectionReturns:uint8 numpy array with shape (img_height, img_width, 3) with overlaid boxes."""# Create a display string (and color) for every box location, group any boxes# that correspond to the same location.box_to_display_str_map = collections.defaultdict(list)box_to_color_map = collections.defaultdict(str)box_to_instance_masks_map = {}box_to_instance_boundaries_map = {}box_to_keypoints_map = collections.defaultdict(list)if not max_boxes_to_draw:max_boxes_to_draw = boxes.shape[0]for i in range(min(max_boxes_to_draw, boxes.shape[0])):if scores is None or scores[i] > min_score_thresh:box = tuple(boxes[i].tolist())if instance_masks is not None:box_to_instance_masks_map[box] = instance_masks[i]if instance_boundaries is not None:box_to_instance_boundaries_map[box] = instance_boundaries[i]if keypoints is not None:box_to_keypoints_map[box].extend(keypoints[i])if scores is None:box_to_color_map[box] = groundtruth_box_visualization_colorelse:display_str = ''if not skip_labels:if not agnostic_mode:if classes[i] in category_index.keys():class_name = category_index[classes[i]]['name']else:class_name = 'N/A'display_str = str(class_name)if not skip_scores:if not display_str:display_str = '{}%'.format(int(100*scores[i]))else:display_str = '{}: {}%'.format(display_str, int(100*scores[i]))box_to_display_str_map[box].append(display_str)if agnostic_mode:box_to_color_map[box] = 'DarkOrange'else:box_to_color_map[box] = STANDARD_COLORS[classes[i] % len(STANDARD_COLORS)]# Draw all boxes onto image.for box, color in box_to_color_map.items():ymin, xmin, ymax, xmax = boxif instance_masks is not None:draw_mask_on_image_array(image,box_to_instance_masks_map[box],color=color)if instance_boundaries is not None:draw_mask_on_image_array(image,box_to_instance_boundaries_map[box],color='red',alpha=1.0)draw_bounding_box_on_image_array(image,ymin,xmin,ymax,xmax,color=color,thickness=line_thickness,display_str_list=box_to_display_str_map[box],use_normalized_coordinates=use_normalized_coordinates)if keypoints is not None:draw_keypoints_on_image_array(image,box_to_keypoints_map[box],color=color,radius=line_thickness / 2,use_normalized_coordinates=use_normalized_coordinates)return image#######################################################################################################def load_image_into_numpy_array(image):(im_width, im_height) = image.sizereturn np.array(image.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)# For the sake of simplicity we will use only 2 images:
# -image1.jpg  -image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(6, 7) ]# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)#######################################################################################################def run_inference_for_single_image(image, graph):with graph.as_default():config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True)) #限制GPU资源分配,刚开始分配少量资源,然后按需慢慢增加GPU资源。with tf.Session(config=config) as sess:# Get handles to input and output tensorsops = tf.get_default_graph().get_operations()all_tensor_names = {output.name for op in ops for output in op.outputs}tensor_dict = {}for key in ['num_detections', 'detection_boxes', 'detection_scores','detection_classes', 'detection_masks']:tensor_name = key + ':0'if tensor_name in all_tensor_names:tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)if 'detection_masks' in tensor_dict:# The following processing is only for single imagedetection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])detection_masks_reframed = reframe_box_masks_to_image_masks(detection_masks, detection_boxes, image.shape[0], image.shape[1])detection_masks_reframed = tf.cast(tf.greater(detection_masks_reframed, 0.5), tf.uint8)# Follow the convention by adding back the batch dimensiontensor_dict['detection_masks'] = tf.expand_dims(detection_masks_reframed, 0)image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')# Run inferenceoutput_dict = sess.run(tensor_dict,feed_dict={image_tensor: np.expand_dims(image, 0)})# all outputs are float32 numpy arrays, so convert types as appropriateoutput_dict['num_detections'] = int(output_dict['num_detections'][0])output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.uint8)output_dict['detection_boxes'] = output_dict['detection_boxes'][0]output_dict['detection_scores'] = output_dict['detection_scores'][0]if 'detection_masks' in output_dict:output_dict['detection_masks'] = output_dict['detection_masks'][0]return output_dictfor image_path in TEST_IMAGE_PATHS:image = Image.open(image_path)# the array based representation of the image will be used later in order to prepare the# result image with boxes and labels on it.image_np = load_image_into_numpy_array(image)# Expand dimensions since the model expects images to have shape: [1, None, None, 3]image_np_expanded = np.expand_dims(image_np, axis=0)# Actual detection.output_dict = run_inference_for_single_image(image_np, detection_graph)# Visualization of the results of a detection.visualize_boxes_and_labels_on_image_array(image_np,output_dict['detection_boxes'],output_dict['detection_classes'],output_dict['detection_scores'],category_index,instance_masks=output_dict.get('detection_masks'),use_normalized_coordinates=True,line_thickness=8)plt.figure(figsize=IMAGE_SIZE)plt.imshow(image_np)print(output_dict)

## 3. Test results
(Screenshot of the detection results on the test images.)
