OpenCV Video Target Detection

Source: Internet
Author: User
Tags: dnn

The deep learning module (dnn) is integrated into OpenCV as of version 3.3. It supports inference with models trained in the Caffe and TensorFlow frameworks, but does not support training. This article uses Caffe-trained model files to detect objects. The overall idea: open the video file, load the model files, then read each frame of the video and run detection on it.

System: Ubuntu 16.04
Python: 2.7
Model files: MobileNet-SSD
Any video file

1. Install OpenCV

If you already have OpenCV installed you can skip this section. It simply copies some commands, and there may be errors in an actual installation.
Install OpenCV 3.3

Step #1: Install OpenCV dependencies on Ubuntu 16.04

Ubuntu 16.04: how to install OpenCV
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install build-essential cmake pkg-config
$ sudo apt-get install libjpeg8-dev libtiff5-dev libjasper-dev libpng12-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
$ sudo apt-get install libxvidcore-dev libx264-dev
$ sudo apt-get install libgtk-3-dev
$ sudo apt-get install libatlas-base-dev gfortran
$ sudo apt-get install python2.7-dev python3.5-dev

Step #2: Download the OpenCV source

$ cd ~
$ wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.3.0.zip
$ unzip opencv.zip

Step #3: Set up your Python environment (Python 2.7 or Python 3)

$ cd ~
$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python get-pip.py

Step #4: Configuring and Compiling OpenCV on Ubuntu 16.04

$ cd ~/opencv-3.3.0/
$ mkdir build
$ cd build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D INSTALL_C_EXAMPLES=OFF \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
    -D PYTHON_EXECUTABLE=~/.virtualenvs/cv/bin/python \
    -D BUILD_EXAMPLES=ON ..
$ make -j4
$ sudo make install
$ sudo ldconfig

Step #5: Testing your OpenCV install

$ cd ~
$ workon cv
$ python
Python 2.7.12 (default, Nov 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'3.3.0'
>>>
2. Testing

A few paths in the code need to be changed to match your machine: the defaults for "--prototxt" and "--model", and videoPath.
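Before reading the full script, it helps to see what the parameters of blobFromImage do: the scale factor 0.007843 is approximately 1/127.5 and the mean is 127.5, which together map 8-bit pixel values 0..255 into roughly [-1, 1] — the input range MobileNet-SSD expects. A NumPy sketch of the same arithmetic:

```python
import numpy as np

# MobileNet-SSD preprocessing: subtract the mean (127.5) and scale
# by 0.007843 (~1/127.5), mapping pixel values 0..255 into [-1, 1].
# cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5) applies
# the same transform (plus a resize to 300x300).
pixels = np.array([0.0, 127.5, 255.0])
blob_values = (pixels - 127.5) * 0.007843
print(blob_values)  # approximately [-1, 0, 1]
```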

# import the necessary packages
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import time
import cv2

videoPath = "/home/user/desktop/test.mp4"

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--prototxt",
    default="/home/user/desktop/MobileNetSSD_deploy.prototxt",
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model",
    default="/home/user/desktop/MobileNetSSD_deploy.caffemodel",
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

# initialize the list of class labels MobileNet-SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

# initialize the video stream and the FPS counter
print("[INFO] starting video stream...")
vs = cv2.VideoCapture(videoPath)
time.sleep(2.0)
fps = FPS().start()
print("open:", vs.isOpened())

# loop over the frames from the video stream
while True:
    # grab the next frame and resize it to have a
    # maximum width of 400 pixels
    ret, frame = vs.read()
    if frame is None:
        break
    frame = imutils.resize(frame, width=400)

    # grab the frame dimensions and convert it to a blob
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5)

    # pass the blob through the network and obtain the
    # detections and predictions
    net.setInput(blob)
    detections = net.forward()

    # loop over the detections
    for i in np.arange(0, detections.shape[2]):
        # extract the confidence (i.e., probability)
        # associated with the prediction
        confidence = detections[0, 0, i, 2]

        # filter out weak detections by ensuring the `confidence`
        # is greater than the minimum confidence
        if confidence > args["confidence"]:
            # extract the index of the class label from the
            # `detections`, then compute the (x, y)-coordinates
            # of the bounding box for the object
            idx = int(detections[0, 0, i, 1])
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            # display the prediction
            label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
            print("[INFO] {}".format(label))
            cv2.rectangle(frame, (startX, startY), (endX, endY),
                COLORS[idx], 2)
            y = startY - 15 if startY - 15 > 15 else startY + 15
            cv2.putText(frame, label, (startX, y),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)

    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

    # update the FPS counter
    fps.update()

fps.stop()
vs.release()
cv2.destroyAllWindows()
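The `detections` array that net.forward() returns for an SSD model has shape (1, 1, N, 7); each row holds [image_id, class_id, confidence, x1, y1, x2, y2], with box coordinates normalized to [0, 1]. The decoding done in the loop above can be sketched with a fabricated detection (no video or model needed):

```python
import numpy as np

# One fabricated SSD detection: [image_id, class_id, confidence,
# x1, y1, x2, y2], coordinates normalized to [0, 1].
detections = np.zeros((1, 1, 1, 7), dtype=np.float32)
detections[0, 0, 0] = [0, 15, 0.9, 0.25, 0.25, 0.75, 0.75]  # class 15 = "person"

(h, w) = (300, 400)  # hypothetical frame height and width
for i in np.arange(0, detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.2:
        # scale the normalized box back to pixel coordinates,
        # exactly as the script above does
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")
        print(idx, startX, startY, endX, endY)  # 15 100 75 300 225
```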

If OpenCV cannot load video files, the build may lack the corresponding FFmpeg support, or it may conflict with an older installed version; in that case you can uninstall the old version and its dependencies.

References:
https://www.pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/
https://www.pyimagesearch.com/2016/10/24/ubuntu-16-04-how-to-install-opencv/
https://github.com/chuanqi305/MobileNet-SSD
