PETS-ICVS Datasets


Warning: you are strongly advised to view the smart meeting specification file before downloading any data. This will allow you to determine which part of the data is most appropriate for you. The total size of the dataset is 5.9 GB.

The JPEG images for PETS-ICVS may be obtained from the FTP site.

You can also download all the files under one directory using wget.
Please see http://www.gnu.org/software/wget/wget.html for more details.
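
If you prefer to script the download instead, the following is a minimal sketch using Python's standard ftplib to fetch every file in one directory. The host and directory names are placeholders, not the actual PETS-ICVS addresses; substitute the FTP site referred to on this page.

    # Minimal sketch: fetch one dataset directory over FTP using Python's
    # standard ftplib, as an alternative to wget. HOST and REMOTE_DIR are
    # hypothetical placeholders -- use the actual PETS-ICVS FTP address.
    from ftplib import FTP
    import os

    HOST = "ftp.example.org"        # placeholder host
    REMOTE_DIR = "/pets-icvs/jpeg"  # placeholder directory

    ftp = FTP(HOST)
    ftp.login()                     # anonymous login
    ftp.cwd(REMOTE_DIR)
    os.makedirs("pets-icvs", exist_ok=True)

    for name in ftp.nlst():         # list the files in the directory
        with open(os.path.join("pets-icvs", name), "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)
    ftp.quit()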

Note: there appear to be some problems accessing the FTP site using Netscape. If you have problems, please try using Internet Explorer instead, or access the site via direct FTP as shown above.

Important instructions on processing the datasets are given at the bottom of this page; please read these carefully.


Annotation of Datasets (Ground Truth)

The following annotations are available for the datasets:

1. Eye positions of people in scenarios A, B and D. The format is described in the specification file linked above, i.e.
Image0001.jpg 3 left_eye_center_x left_eye_center_y right_eye_center_x right_eye_center_y
(where left and right are as seen by the camera, rather than the person's left/right).
Image coordinates: the origin is in the top left.
Every 10th frame is annotated.
The annotation is available here (a short parsing sketch is given after this list).

2. Facial expression and gaze estimation for scenarios A and D, cameras 1-2.
The annotation is available here.

3. Gesture/action annotations for scenarios B and D, cameras 1-2.
The annotation is available here.
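
The eye-position annotation above is plain whitespace-separated text, so it is straightforward to load. The following is a minimal parsing sketch in Python; it assumes (this is not confirmed by the excerpt above) that the integer after the filename counts the annotated persons, each contributing four coordinate values.

    # Minimal sketch of a parser for the eye-position annotation format.
    # Assumption: the integer after the filename is the number of annotated
    # persons, each followed by four values: left_x left_y right_x right_y
    # (left/right as seen by the camera; image origin at the top left).

    def parse_eye_annotations(path):
        annotations = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                if not fields:
                    continue
                image, count = fields[0], int(fields[1])
                coords = [int(v) for v in fields[2:2 + 4 * count]]
                # Group the flat list into ((left eye), (right eye)) per person.
                annotations[image] = [
                    ((coords[i], coords[i + 1]), (coords[i + 2], coords[i + 3]))
                    for i in range(0, len(coords), 4)
                ]
        return annotations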

You are strongly encouraged to evaluate your results against the appropriate ground truth above and report this comparison in your paper.

PETS-ICVS consists of datasets for a smart meeting.


Two views of the smart meeting room without participants. The environment contains three cameras: one mounted on each of two opposing walls, and an omnidirectional camera at the centre of the room.


View from Camera 1.


View from Camera 2.


View from Camera 3.

The measurements for the smart meeting room may be found in the following calibration file (PowerPoint).

The Task

The overall task is to automatically annotate the smart meeting.

The dataset consists of four scenarios A, B, C and D.

Each scenario consists of a number of separate sub-tasks. For each frame, the requirement is to perform:

  • face localisation (centre location of the eyes)

  • recognition of facial expression

  • recognition of face/hand gesture

  • estimation of face/head direction (gaze)

  • recognition of actions

Note: there is no requirement for your paper to address all of the tasks stated above. You may address one or more of the tasks in any of the scenarios. For example, if you specialise in action recognition, you may wish to submit a paper which addresses this aspect alone, i.e. annotation on a frame-by-frame basis of the actions performed within one of the scenarios. Your annotation may be based on one or more of the 3 camera views.

A full specification of the dataset is available here, including details of scenarios A-D, the list of actions/gestures, and the facial expressions.

The results in your paper can be based on any of the data supplied in the dataset.
The images may be converted to any other format as appropriate, e.g. subsampled or converted to monochrome. All results reported in the paper should clearly indicate which part of the test data is used, ideally with reference to frame numbers where appropriate, e.g. scenario B, ...
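
One way to perform such a conversion, as a minimal sketch assuming the Pillow imaging library (any equivalent tool will do): subsample a frame to half resolution and convert it to monochrome before processing.

    # Minimal sketch, assuming the Pillow library: subsample a JPEG frame
    # to half resolution and convert it to monochrome, as permitted above.
    from PIL import Image

    frame = Image.open("Image0001.jpg")  # filename from the annotation example
    half = frame.resize((frame.width // 2, frame.height // 2))
    mono = half.convert("L")             # 8-bit greyscale
    mono.save("Image0001_mono.jpg")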

There is no requirement to use all of the test data; however, you are encouraged to test your method on as much of the test data as possible.

The results must be submitted along with the paper, and must be in XML format.
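
This page does not reproduce the required XML schema, so the element and attribute names in the sketch below are hypothetical; it only illustrates writing per-frame results with Python's standard xml.etree.ElementTree. The actual schema is given in the specification file.

    # Minimal sketch of writing per-frame results as XML using the standard
    # library. The element and attribute names are hypothetical -- the
    # required schema is defined in the specification file, not here.
    import xml.etree.ElementTree as ET

    results = ET.Element("results", scenario="B", camera="1")
    frame = ET.SubElement(results, "frame", number="0001")
    ET.SubElement(frame, "action", label="sit_down")  # hypothetical action label

    ET.ElementTree(results).write(
        "results.xml", encoding="utf-8", xml_declaration=True)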

The paper that you submit may be based on previously published tracking methods/algorithms (including papers submitted to the main ICVS conference). The important point is that your paper must report results using the datasets.

You are strongly encouraged to evaluate your results against the ground truth given above and include this evaluation in your paper.

Acknowledgements
The sequences have been provided by the consortium of the project FGnet (IST-2000-26434), http://www.fg-net.org, with additional support provided by the Swiss National Centre of Competence in Research (NCCR) on Interactive Multimodal Information Management (IM2). The NCCR is managed by the Swiss National Science Foundation on behalf of the Swiss federal authorities.

If you have any queries, please email pets-icvs@visualsurveillance.org.


From: http://www-prima.inrialpes.fr/fgnet/data/08-pets2003/pets-icvs-db.html
