CMU Open Source: Real-Time Multi-Person Body Keypoint Detection


OpenPose is an open-source library written in C++ that uses OpenCV and Caffe to perform multithreaded real-time keypoint detection for multiple people. Its authors include Gines Hidalgo, Zhe Cao, Tomas Simon, Shih-En Wei, Hanbyul Joo, and Yaser Sheikh.

Upcoming (but already implemented): body + hand + face estimation display.

Although the library uses Caffe, the code can easily be ported to other frameworks, such as TensorFlow or Torch. If you implement any of this code, please submit a pull request and we will be happy to add your implementation to the library.

OpenPose is free for non-commercial use and may be redistributed under those conditions. Please check the license for further details. For commercial use, please contact the authors.

Main functionality of the OpenPose library:

Multi-person body pose estimation and rendering with 15 or 18 keypoints.

Multi-person hand estimation and rendering with 2×21 keypoints (to be open-sourced in the next 1-2 months).

Multi-person face estimation and rendering with 70 keypoints (to be open-sourced in the next 2-3 months).

Flexible and easily configurable multithreading module.

Image, video, and webcam readers.

Ability to save and load results in multiple formats (JSON, XML, PNG, JPG, ...).

Small display window and GUI for visualizing the results.

All of these features are encapsulated in an easy-to-use OpenPose wrapper class.

The pose estimation results are based on the ECCV 2016 demo "Realtime Multiperson Pose Estimation" by Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, and on its C++ code. The complete project repository contains the MATLAB and Python versions, as well as the training code.

ECCV 2016 pose estimation video

How to install:

The installation steps are described in the doc/installation.md file on GitHub.

How to get started quickly:

Most use cases do not require a deep understanding of the library; most users will only need the demo or the simple OpenPose wrapper, so you do not have to worry about the implementation details of the OpenPose library.

Demo:

If your use case only requires processing a folder of images, a video, or a webcam stream, and then displaying or storing the pose results, you do not need to care about the implementation details of the OpenPose library: just read the one-page doc/demo_overview.md file.
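For instance, a typical invocation might look like the sketch below. The binary path and the sample video path are assumptions based on a standard OpenPose build; doc/demo_overview.md lists the exact flags for your version:

    # Process a video, render the poses, and save them with the custom JSON writer.
    ./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_pose_json output/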

OpenPose Wrapper:

If your use case requires reading a specific input format, and/or adding specific post-processing functions, and/or implementing your own display or storage functions, you can (almost) ignore the implementation of the library itself and just look at the wrapper tutorials in examples/tutorial_wrapper/.

Note: you should not need to modify the OpenPose source code or examples, which means you can upgrade the OpenPose library at any time without changing your own code. You can create your custom code in examples/user_code/ and compile it from the OpenPose folder using the make all command.
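As orientation only, here is a minimal sketch of what such wrapper-based user code tends to look like. The names op::Wrapper, op::WrapperStructPose, emplaceAndPop(), and poseKeypoints are taken from recent OpenPose releases and may differ in other versions; treat them as assumptions and use examples/tutorial_wrapper/ as the authoritative reference:

    // Sketch: minimal user code on top of the OpenPose wrapper (API names assumed
    // from recent OpenPose releases; verify against examples/tutorial_wrapper/).
    #include <iostream>
    #include <openpose/headers.hpp>
    #include <opencv2/opencv.hpp>

    int main()
    {
        // Run the wrapper in asynchronous mode so we can push images ourselves.
        op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
        opWrapper.configure(op::WrapperStructPose{}); // default pose settings
        opWrapper.start();

        // Feed one image and pop the processed datum.
        const cv::Mat image = cv::imread("examples/media/COCO_val2014_000000000192.jpg");
        auto processed = opWrapper.emplaceAndPop(OP_CV2OPCONSTMAT(image));
        if (processed != nullptr && !processed->empty())
            // poseKeypoints stores the (x, y, score) triplets per person and part.
            std::cout << processed->at(0)->poseKeypoints << std::endl;
        return 0;
    }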

OpenPose Library:

If your use case requires changing the library's internal functions and/or extending its functionality, first take a look at the demo and the OpenPose wrapper; second, read the following sections: the OpenPose overview and how to extend its functionality.

1. OpenPose overview: learn the basics of our library's source code in doc/library_overview.md.

2. Extending functionality: learn how to extend our library in doc/library_extend_functionality.md.

3. Adding a new module: learn how to add a new module in doc/library_add_new_module.md.

Automatic Doxygen documentation generation

You can generate the documentation by running the commands below. The documentation will be generated at doc/doxygen/html/index.html; just double-click it and your default browser will open it.

cd doc/
doxygen doc_autogeneration.doxygen

Output:

1. Output format

There are two alternative ways to save the (x, y, score) locations of the body parts. The write_pose flag uses the default OpenCV cv::FileStorage formats, i.e., JSON, XML and YML. However, the JSON format is only available in OpenCV 3.0 and later. Therefore, the write_pose_json flag was added to save the human pose data in a custom JSON format. There, each JSON file contains a people array of objects, and each object has a body_parts array holding the location and detection confidence of each body part, formatted as x1,y1,c1,x2,y2,c2,.... The coordinates x and y can be normalized to the ranges [0,1], [-1,1], [0, source size], [0, output size], etc., depending on the scale_mode flag. The value c is the confidence score in the range [0,1].
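Concretely, the custom JSON layout described above looks roughly like this (a hand-made illustration: the numbers are invented and the ellipsis stands for the remaining body parts):

    {
        "people": [
            {
                "body_parts": [329.6, 118.9, 0.87, 331.2, 160.4, 0.79, ...]
            }
        ]
    }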

The keypoint ordering of the body, for both the COCO (18 body parts) and the MPI (15 body parts) formats, is described by the POSE_BODY_PART mapping in the header file include/openpose/pose/poseParameters.hpp. Taking the COCO format as an example:
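The example mapping itself did not survive this repost; for reference, the COCO entry in poseParameters.hpp is essentially the following (reproduced from the library's header, so treat the exact identifiers as an assumption and check the file itself):

    #include <map>
    #include <string>

    // POSE_COCO_BODY_PARTS: index -> body part name (COCO: 18 parts + background).
    const std::map<unsigned int, std::string> POSE_COCO_BODY_PARTS {
        {0,  "Nose"},      {1,  "Neck"},      {2,  "RShoulder"}, {3,  "RElbow"},
        {4,  "RWrist"},    {5,  "LShoulder"}, {6,  "LElbow"},    {7,  "LWrist"},
        {8,  "RHip"},      {9,  "RKnee"},     {10, "RAnkle"},    {11, "LHip"},
        {12, "LKnee"},     {13, "LAnkle"},    {14, "REye"},      {15, "LEye"},
        {16, "REar"},      {17, "LEar"},      {18, "Background"}
    };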

As for the heatmap storage format, instead of saving the 57 heatmaps (18 body parts + 1 background + 2×19 PAF channels) as separate images, the library concatenates them side by side into one huge (width × number of heatmaps) × (height) matrix, i.e., concatenated by columns. For example, columns [0, individual heatmap width] contain the first heatmap, columns [individual heatmap width + 1, 2 × individual heatmap width] contain the second heatmap, and so on. Note that some image viewers cannot display the resulting images because of their size; Chrome and Firefox, however, open them properly.
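To illustrate the layout, the hypothetical snippet below cuts the i-th heatmap out of the concatenated image; the file name is an assumption, and only the column-wise layout and the 57-map count come from the description above:

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Concatenated heatmaps: total width = singleWidth * numHeatmaps.
        const cv::Mat all = cv::imread("heatmaps.png", cv::IMREAD_GRAYSCALE); // assumed name
        const int numHeatmaps = 57;          // 18 parts + 1 background + 2*19 PAFs
        const int singleWidth = all.cols / numHeatmaps;

        // Extract the i-th heatmap as a region of interest over the big matrix.
        const int i = 1;                     // e.g. the "Neck" confidence map
        const cv::Mat heatmap = all(cv::Rect(i * singleWidth, 0, singleWidth, all.rows));
        cv::imwrite("heatmap_neck.png", heatmap);
        return 0;
    }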

The storage order is: body parts, then background, then PAFs. Each of these three can be disabled through the program's flags; if the background is disabled, the final image will be the body parts immediately followed by the PAFs. The body parts and the background follow the ordering of POSE_COCO_BODY_PARTS or POSE_MPI_BODY_PARTS, respectively.
