Yard Camera Monitoring

Compared with the prohibited-area intrusion detection program, the yard camera monitoring adds two features: 1) when an intrusion is detected, a snapshot of the frame is saved; 2) when an intrusion is detected, an event record is inserted into the database.
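Both additions sit inside the person-counting branch of the main loop in checkingfence.py, which is listed in full below. As a minimal, self-contained sketch of just these two actions (the interpreter path, the dummy frame, and the directory handling here are placeholders; the real values come from the surrounding loop in the listing):

# Minimal sketch of the two new actions; in checkingfence.py they run
# only when a tracked person crosses the counting line.
import os
import time
import subprocess
import cv2
import numpy as np

output_fence_path = 'supervision/fence'           # where snapshots are stored
python_path = '/usr/bin/python'                   # placeholder interpreter path
frame = np.zeros((300, 500, 3), dtype='uint8')    # stands in for the current video frame

# 1) save a snapshot of the frame that triggered the event
os.makedirs(output_fence_path, exist_ok=True)
snapshot_name = 'snapshot_%s.jpg' % (time.strftime('%Y%m%d_%H%M%S'))
cv2.imwrite(os.path.join(output_fence_path, snapshot_name), frame)

# 2) record the event in the database through a helper script
command = '%s inserting.py --event_desc %s --event_type 4 --event_location %s' % (
    python_path, '有人闯入禁止区域!!!', '院子')
subprocess.Popen(command, shell=True)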

Open checkingfence.py and add the following code:

# -*- coding: utf-8 -*-
'''
Prohibited area detection main program
(the camera should point at the side facing the wall/fence)

Usage:
python checkingfence.py
python checkingfence.py --filename tests/yard_01.mp4
'''

# import the necessary packages
from oldcare.track import CentroidTracker
from oldcare.track import TrackableObject
from imutils.video import FPS
import numpy as np
import imutils
import argparse
import time
import dlib
import cv2
import os
import subprocess

# get the current time
current_time = time.strftime('%Y-%m-%d %H:%M:%S',
                             time.localtime(time.time()))
print('[INFO] %s 禁止区域检测程序启动了.' % (current_time))

# command-line arguments
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--filename", required=False, default='',
                help="path to the optional input video file")
args = vars(ap.parse_args())

# global variables
prototxt_file_path = 'models/mobilenet_ssd/MobileNetSSD_deploy.prototxt'
# Contains the Caffe deep learning model files.
# We'll be using a MobileNet Single Shot Detector (SSD),
# "Single Shot Detectors for object detection".
model_file_path = 'models/mobilenet_ssd/MobileNetSSD_deploy.caffemodel'
output_fence_path = 'supervision/fence'
input_video = args['filename']
skip_frames = 30  # number of frames to skip between detections
# your python path
python_path = '/home/reed/anaconda3/envs/tensorflow/bin/python'

# hyperparameters
# minimum probability to filter weak detections
minimum_confidence = 0.80

# the 21 object classes the detection model can recognize
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
           "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
           "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]

# if a video path was not supplied, grab a reference to the webcam
if not input_video:
    print("[INFO] starting video stream...")
    vs = cv2.VideoCapture(0)
    time.sleep(2)
else:
    print("[INFO] opening video file...")
    vs = cv2.VideoCapture(input_video)

# load the object detection model
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(prototxt_file_path, model_file_path)

# initialize the frame dimensions (we'll set them as soon as we read
# the first frame from the video)
W = None
H = None

# instantiate our centroid tracker, then initialize a list to store
# each of our dlib correlation trackers, followed by a dictionary to
# map each unique object ID to a TrackableObject
ct = CentroidTracker(maxDisappeared=40, maxDistance=50)
trackers = []
trackableObjects = {}

# initialize the total number of frames processed thus far, along
# with the total number of objects that have moved either up or down
totalFrames = 0
totalDown = 0
totalUp = 0

# start the frames per second throughput estimator
fps = FPS().start()

# loop over frames from the video stream
while True:
    # grab the next frame and handle if we are reading from either
    # VideoCapture or VideoStream
    ret, frame = vs.read()

    # if we are viewing a video and we did not grab a frame then we
    # have reached the end of the video
    if input_video and not ret:
        break

    if not input_video:
        frame = cv2.flip(frame, 1)

    # resize the frame to have a maximum width of 500 pixels (the
    # less data we have, the faster we can process it), then convert
    # the frame from BGR to RGB for dlib
    frame = imutils.resize(frame, width=500)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # if the frame dimensions are empty, set them
    if W is None or H is None:
        (H, W) = frame.shape[:2]

    # initialize the current status along with our list of bounding
    # box rectangles returned by either (1) our object detector or
    # (2) the correlation trackers
    status = "Waiting"
    rects = []

    # check to see if we should run a more computationally expensive
    # object detection method to aid our tracker
    if totalFrames % skip_frames == 0:
        # set the status and initialize our new set of object trackers
        status = "Detecting"
        trackers = []

        # convert the frame to a blob and pass the blob through the
        # network and obtain the detections
        blob = cv2.dnn.blobFromImage(frame, 0.007843, (W, H), 127.5)
        net.setInput(blob)
        detections = net.forward()

        # loop over the detections
        for i in np.arange(0, detections.shape[2]):
            # extract the confidence (i.e., probability) associated
            # with the prediction
            confidence = detections[0, 0, i, 2]

            # filter out weak detections by requiring a minimum
            # confidence
            if confidence > minimum_confidence:
                # extract the index of the class label from the
                # detections list
                idx = int(detections[0, 0, i, 1])

                # if the class label is not a person, ignore it
                if CLASSES[idx] != "person":
                    continue

                # compute the (x, y)-coordinates of the bounding box
                # for the object
                box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
                (startX, startY, endX, endY) = box.astype("int")

                # construct a dlib rectangle object from the bounding
                # box coordinates and then start the dlib correlation
                # tracker
                tracker = dlib.correlation_tracker()
                rect = dlib.rectangle(startX, startY, endX, endY)
                tracker.start_track(rgb, rect)

                # add the tracker to our list of trackers so we can
                # utilize it during skip frames
                trackers.append(tracker)

    # otherwise, we should utilize our object *trackers* rather than
    # object *detectors* to obtain a higher frame processing throughput
    else:
        # loop over the trackers
        for tracker in trackers:
            # set the status of our system to be 'tracking' rather
            # than 'waiting' or 'detecting'
            status = "Tracking"

            # update the tracker and grab the updated position
            tracker.update(rgb)
            pos = tracker.get_position()

            # unpack the position object
            startX = int(pos.left())
            startY = int(pos.top())
            endX = int(pos.right())
            endY = int(pos.bottom())

            # draw a rectangle around the people
            cv2.rectangle(frame, (startX, startY), (endX, endY),
                          (0, 255, 0), 2)

            # add the bounding box coordinates to the rectangles list
            rects.append((startX, startY, endX, endY))

    # draw a horizontal line in the center of the frame -- once an
    # object crosses this line we will determine whether they were
    # moving 'up' or 'down'
    cv2.line(frame, (0, H // 2), (W, H // 2), (0, 255, 255), 2)

    # use the centroid tracker to associate the (1) old object
    # centroids with (2) the newly computed object centroids
    objects = ct.update(rects)

    # loop over the tracked objects
    for (objectID, centroid) in objects.items():
        # check to see if a trackable object exists for the current
        # object ID
        to = trackableObjects.get(objectID, None)

        # if there is no existing trackable object, create one
        if to is None:
            to = TrackableObject(objectID, centroid)

        # otherwise, there is a trackable object so we can utilize it
        # to determine direction
        else:
            # the difference between the y-coordinate of the *current*
            # centroid and the mean of *previous* centroids will tell
            # us in which direction the object is moving (negative for
            # 'up' and positive for 'down')
            y = [c[1] for c in to.centroids]
            direction = centroid[1] - np.mean(y)
            to.centroids.append(centroid)

            # check to see if the object has been counted or not
            if not to.counted:
                # if the direction is negative (indicating the object
                # is moving up) AND the centroid is above the center
                # line, count the object
                if direction < 0 and centroid[1] < H // 2:
                    totalUp += 1
                    to.counted = True

                # if the direction is positive (indicating the object
                # is moving down) AND the centroid is below the
                # center line, count the object
                elif direction > 0 and centroid[1] > H // 2:
                    totalDown += 1
                    to.counted = True

                    current_time = time.strftime('%Y-%m-%d %H:%M:%S',
                                                 time.localtime(time.time()))
                    event_desc = '有人闯入禁止区域!!!'
                    event_location = '院子'
                    print('[EVENT] %s, 院子, 有人闯入禁止区域!!!'
                          % (current_time))

                    # save a snapshot of the frame that triggered the event
                    cv2.imwrite(os.path.join(output_fence_path,
                                             'snapshot_%s.jpg'
                                             % (time.strftime('%Y%m%d_%H%M%S'))),
                                frame)

                    # insert into database
                    command = '%s inserting.py --event_desc %s --event_type 4 --event_location %s' % (
                        python_path, event_desc, event_location)
                    p = subprocess.Popen(command, shell=True)

        # store the trackable object in our dictionary
        trackableObjects[objectID] = to

        # draw both the ID of the object and the centroid of the
        # object on the output frame
        text = "ID {}".format(objectID)
        cv2.putText(frame, text, (centroid[0] - 10, centroid[1] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
        cv2.circle(frame, (centroid[0], centroid[1]), 4,
                   (0, 255, 0), -1)

    # construct a tuple of information we will be displaying on the
    # frame
    info = [
        # ("Up", totalUp),
        ("Down", totalDown),
        ("Status", status),
    ]

    # loop over the info tuples and draw them on our frame
    for (i, (k, v)) in enumerate(info):
        text = "{}: {}".format(k, v)
        cv2.putText(frame, text, (10, H - ((i * 20) + 20)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)

    # show the output frame
    cv2.imshow("Prohibited Area", frame)
    k = cv2.waitKey(1) & 0xff
    if k == 27:  # ESC key quits
        break

    # increment the total number of frames processed thus far and
    # then update the FPS counter
    totalFrames += 1
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))  # 14.19
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))  # 90.43

# close any open windows
vs.release()
cv2.destroyAllWindows()
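
The listing above records each event by launching a separate helper script, inserting.py, through subprocess. That script is not shown in this section; as a rough sketch of the interface it would have to expose, the argument names below are taken from the command string built in the listing, while the database write itself is only a placeholder and must follow the project's actual schema:

# -*- coding: utf-8 -*-
# Hypothetical sketch of inserting.py -- the argument names mirror the
# command built in checkingfence.py; the database write is a placeholder.
import argparse
import time

ap = argparse.ArgumentParser()
ap.add_argument('--event_desc', required=True)
ap.add_argument('--event_type', required=True, type=int)
ap.add_argument('--event_location', required=True)
args = vars(ap.parse_args())

event_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))

# Replace this print with the project's real INSERT statement
# (e.g. via pymysql/sqlite3 into the events table -- table and column
# names would come from the project's own database design).
print('[DB] insert event: time=%s, type=%d, location=%s, desc=%s'
      % (event_time, args['event_type'], args['event_location'],
         args['event_desc']))

Launching the insert in a separate process keeps the database call out of the video-processing loop, so a slow write does not stall the frame rate.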

Run the program with the following command:

python checkingfence.py --filename tests/yard_01.mp4

If you can mount a camera in a high position, you can also capture the scene directly from the camera. In that case, run the program as follows:

python checkingfence.py

The program's output is shown in the figures below:

Figure 1: Program running result

Figure 2: Console output of the program

Intrusion snapshots appear in the supervision/fence directory.

Figure 3: Intrusion snapshot saved

 
