A preliminary study of TOF


1.1 Introduction to TOF

TOF is short for "time of flight." Time-of-flight 3D imaging obtains the distance to a target by continuously emitting light pulses toward it, receiving the light returned from the object with a sensor, and measuring the round-trip (flight) time of the light pulses. The technique is similar in principle to a 3D laser scanner, except that the laser scanner measures point by point, whereas a TOF camera obtains depth information for the whole image at once. In construction, a TOF camera is similar to an ordinary machine-vision imaging system: it consists of a light source, optics, a sensor, control circuitry, and processing circuitry. Compared with a binocular measurement system, which serves very similar non-contact 3D detection applications, the TOF camera has a fundamentally different 3D imaging mechanism: binocular stereo matches the left and right images and then recovers depth by triangulation, while the TOF camera obtains the target distance by detecting the emitted and reflected light.
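To make the round-trip relation concrete, here is a minimal sketch in Python (illustrative only; the function name and sample value are not from any camera SDK) that converts a measured round-trip time t into a distance using d = c*t/2:

# Minimal illustration of the pulsed time-of-flight relation d = c * t / 2.
C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(t_seconds):
    """Return the target distance for a measured round-trip time."""
    return C * t_seconds / 2.0

# Example: a round-trip time of 20 ns corresponds to a target about 3 m away.
print(distance_from_round_trip(20e-9))  # ~2.998 m

The factor of two accounts for the light travelling to the target and back.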

TOF technology uses active optical detection. Unlike ordinary lighting, the purpose of the TOF illumination unit is not to illuminate the scene but to measure distance from the change between the emitted and the reflected light signal. The illumination unit therefore transmits light that is modulated at high frequency, for example pulses emitted by an LED or laser diode, with modulation frequencies up to 100 MHz. Like an ordinary camera, the TOF chip needs a lens in front of it to collect light. Unlike an ordinary optical lens, however, a band-pass filter must be added so that only light at the same wavelength as the light source reaches the sensor. At the same time, because the optical imaging system has a perspective effect, scenes at a given measured distance lie on concentric spheres of different radii rather than on parallel planes, so the subsequent processing units must correct for this error in practical use. As the core of the TOF camera, each pixel of the TOF chip records the phase of the light travelling between the camera and the object. The sensor structure is similar to that of an ordinary image sensor but more complex: it contains two or more shutters used to sample the reflected light at different times. For this reason, TOF pixels are much larger than ordinary image-sensor pixels, typically around 100 um. Both the illumination unit and the TOF sensor require high-speed signal control to achieve high depth-measurement accuracy. For example, a 10 ps offset in the synchronization between the illumination and the TOF sensor corresponds to a displacement of 1.5 mm, while a current 3 GHz CPU has a clock period of about 300 ps, which would correspond to a depth resolution of only 45 mm. The computation unit mainly performs data correction and calculation: the distance information is obtained from the relative phase shift between the emitted and the reflected light.
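As a rough check of the numbers quoted above, the following sketch (assumed textbook formulas, not any particular chip's datasheet) converts a timing offset into a distance error and recovers distance from the relative phase shift of a continuously modulated signal, d = c*delta_phi/(4*pi*f_mod):

import math

C = 299_792_458.0  # speed of light in m/s

def depth_error_from_timing(dt_seconds):
    """Distance error caused by a timing offset (the round trip halves it)."""
    return C * dt_seconds / 2.0

def distance_from_phase(delta_phi_rad, f_mod_hz):
    """Distance from the measured phase shift of a continuous-wave TOF signal."""
    return C * delta_phi_rad / (4.0 * math.pi * f_mod_hz)

print(depth_error_from_timing(10e-12))         # ~0.0015 m: 10 ps -> 1.5 mm
print(depth_error_from_timing(300e-12))        # ~0.045 m: 300 ps -> 45 mm
print(distance_from_phase(math.pi / 2, 20e6))  # ~1.87 m at 20 MHz modulation

Note that with continuous modulation the unambiguous range is c/(2*f_mod), about 7.5 m at 20 MHz, which is one reason TOF cameras are typically limited to roughly 10 m (see the disadvantages in section 1.4).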

The advantages of TOF: compared with stereo cameras or triangulation systems, a TOF camera is small, comparable in size to an ordinary camera, which makes it well suited to applications that require a light, compact camera. A TOF camera can compute depth information in real time, at frame rates from several tens of fps up to about 100 fps, whereas a binocular stereo camera must run complex correspondence algorithms and is therefore comparatively slow. TOF depth calculation is not affected by the gray level or surface features of the object, so it can measure very accurately, whereas a binocular stereo camera requires the target to have good feature variation, otherwise depth cannot be computed. The depth accuracy of TOF does not change with distance and remains stable at roughly the centimeter level, which is very valuable for some applications involving large-scale motion.

1.2 TOF Research Projects

<1> Dynamic 3D Vision (2006-2010):

Research areas: multi-chip 2D/3D sensors, dynamic scene reconstruction, object localization and recognition, and light-field computation

Official website: www.zess.uni-siegen.de/pmd-home/dyn3d

<2> ARTTS (2007-2010):

Full name: "Action recognition and Tracking based on time-of-flight sensors"

Official website: http://www.artts.eu

Research areas: development of smaller and cheaper next-generation TOF cameras; combining TOF with HDTV; multi-modal interfaces and interactive systems based on motion-tracking and recognition algorithms

<3> Lynkeus (2006-2009):

Official website: www.lynkeus-3d.de

Research area: High-resolution and robust TOF sensors for industrial applications, such as automation and robotic navigation

<4> 3D4YOU (2008-2010):

Official website: www.3d4you.eu

Research areas: building a 3D-TV production chain, acquiring point-cloud data in real time from film material and converting it for 3D display on home televisions. In the 3D4YOU application, TOF range cameras are used to initialize the depth for multiple high-definition cameras and for the 3D scene images.

<5> MOSES (2008-2012):

Full name: "Multi-modal Sensor Systems for Environmental ex-ploration (MOSES)"

Research areas: multi-sensor applications, including TOF-based human-computer interfaces and multi-sensor fusion

Official website: www.zess.uni-siegen.de/ipp_home/moses

1.3 Application Fields of TOF

The main areas of application of the TOF camera currently include:

<1> Logistics industry: quickly obtain package dimensions (volume) with a TOF camera to optimize packing and estimate freight charges;

<2> Security and surveillance: people counting, to verify that the number of people entering does not exceed an upper limit; counting pedestrian or vehicle flow in complex transport systems to support the statistical analysis and design of security systems; monitoring of sensitive areas. Machine vision: industrial positioning, industrial guidance and volume estimation, and replacing infrared-light-based production-safety control equipment that occupies a large amount of space at the workstation;

<3> Robotics: providing better obstacle-avoidance information for autonomous driving, and guiding robots in installation, quality control, and raw-material picking applications;

<4> Medical and biological: foot orthopedic modeling, patient activity/condition monitoring, surgical assistance, facial 3D recognition;

<5> Interactive Entertainment: Motion posture detection, expression recognition, entertainment advertising

1.4 Features of the TOF Camera

<1> Advantages:

1. Compared with two-dimensional images, distance information gives a richer description of the positional relationships between objects, i.e. it can distinguish foreground from background;

2. Depth information can still support traditional image-processing tasks such as segmentation, labeling, recognition, and tracking of the target;

3. With further processing, it can support 3D modeling and other applications;

4. Targets can be identified and tracked quickly;

5. The cost of the main components, including the CCD and ordinary LEDs, is relatively low, which favors future mass production and adoption;

6. Thanks to the characteristics of CMOS, a large amount of data and information can be acquired, which is very effective for judging the pose of complex objects;

7. No scanning equipment is needed to assist the measurement.

<2> Disadvantages:

1. Compared with an ordinary digital camera, its cost is still high, which limits how widely the product is currently used;

2. The camera itself is still constrained by hardware development, and models are replaced quickly;

3. The measurement range is shorter than that of conventional measuring instruments, generally no more than 10 meters;

4. The measurement results are affected by the properties of the measured object;

5. The measurements of most devices are noticeably disturbed by the external environment, especially by external light sources;

6. The resolution is relatively low; the PMD CamCube 2.0 studied in this article is currently the 3D camera with the highest resolution, yet that resolution is only 204x204 pixels;

7. Systematic and random errors have an obvious effect on the results and require post-processing.

2 Depth Camera Comparison

Current depth cameras include TOF, structured-light, and laser-scanning types, used mainly in robotics, interactive games, and other applications; many of them are TOF cameras. The main TOF camera manufacturers at present are PMD, Mesa, Optrima, and Microsoft. Mesa focuses on the scientific-research market and its cameras are compact; PMD is the only vendor whose TOF cameras can be used both indoors and outdoors and that offers a range of detection distances, so its cameras suit scientific research, industry, and other settings; Optrima and Microsoft (whose technology is not true TOF) mainly target home and entertainment applications at low prices.

<1> Mesa Company: SR4000

Official website: www.mesa-imaging.ch

<2> PMD Company: CamCube 3.0

Official website: www.pmdtec.com

<3> Canesta Company: XZ422

Official website: www.canesta.com

<4> Fotonic Co., Ltd.

Official website: http://www.fotonic.com/content/Company/Default.aspx

2.1 Mesa Series Introduction

MESA Imaging AG was founded in July 2006 and is dedicated to producing and selling the world's leading 3D time-of-flight (TOF) depth-mapping cameras. The camera's imaging-chip technology acquires three-dimensional data arrays (often called depth images) in real time and is integrated into a compact device. MESA has won technology innovation awards in this field for its SwissRanger, and its many successful projects allow it to offer customers customized camera solutions. Mesa's products achieve 3D imaging with a single camera. They use time of flight: a continuous train of light pulses is sent to the target, the light returned from the object is received by the sensor, and the round-trip (flight) time of the pulses yields the target distance. Compared with other stereoscopic imaging methods, this approach offers good real-time performance and no blind zone. Mesa's chips are produced by specialized manufacturers using a CCD/CMOS process, ensuring the independence and optimal configuration of the photoelectric function modules. As a result, the noise floor and distance-measurement capability of the chips used by Mesa are significantly better than those manufactured with a standard CMOS process. Its model is the SR4000.

The SR4000 3D range camera outputs 3D distance and amplitude values in real time at video frame rates. Based on the time-of-flight (TOF) principle, the camera has a built-in light source; the emitted light is reflected by objects in the scene and returns to the camera, and each pixel of the image sensor measures its own time interval and computes a distance value independently. Designed for indoor environments, the SR4000 connects easily to a computer or network via USB 2.0 or Ethernet and quickly generates real-time depth maps. It is Mesa's fourth-generation time-of-flight camera: it outputs stable distance values, has an attractive, rugged housing, and is small in size (x 68mm) (USB version). The SR4000 ships with a driver and a software interface library, with which users can build further applications.

2.2 PMDTec Series

The CamCube 3.0 is the world's first high-precision depth camera that can be used outdoors, which makes applications such as driver assistance and mobile robotics practical. In cars and other vehicles, parking and driving have traditionally relied on the driver's direct observation and experience, and errors of experience or lapses in attention inevitably lead to all kinds of incidents. With 3D sensing from a TOF camera, the external environment can be monitored conveniently, and the driver can be alerted and assisted while driving.

PMDTec, a German company, began as part of the Center for Sensor Systems (ZESS) at the University of Siegen in Germany. The company was spun off from the University of Siegen in 2002 and, after being acquired by another company, became the present PMDTechnologies. The company has studied 3D TOF (time-of-flight) imaging for more than 10 years. In 2011, Omek Interactive and PMDTechnologies announced a strategic partnership to provide gesture-recognition and body-tracking solutions, laying a solid foundation for future commercial applications. The company's products have reached the third generation, the CamCube 3.0. This 3D camera has a resolution of 200x200 and delivers depth information and grayscale images of the scene at 40 frames per second. The CamCube 3.0 is highly sensitive, which allows higher accuracy and longer detection distances with a shorter shutter time. Thanks to its proprietary SBI technology, it is one of the few TOF cameras that can be used both indoors and outdoors and can detect fast-moving targets. Its disadvantage is the price, about 12,000 dollars excluding tax, so it is only suitable for scientific research; civilian use is still a long way off.

PMDTec company website: http://www.pmdtec.com/

PMDTec wiki: http://en.wikipedia.org/wiki/pmdtechnologies

2.3 NATAL

Natal is not based on the TOF principle; PrimeSense provided Microsoft with its three-dimensional measurement technology, which was applied in Project Natal. PrimeSense's homepage mentions that it uses a technology called Light Coding. Unlike conventional TOF or structured-light measurement techniques, Light Coding uses continuous illumination (rather than pulses) and does not require a specially designed photosensitive chip, only an ordinary CMOS sensor, which lowers the cost of the solution. Light Coding, as the name implies, uses illumination to encode the space to be measured, so at bottom it is still a structured-light technique. Unlike traditional structured-light methods, however, its light source does not project a periodically varying two-dimensional pattern but a "volume code" over the three-dimensional depth range. This light source is a laser speckle pattern: the random diffraction spots formed when a laser illuminates a rough object or passes through frosted glass. These speckles are highly random, and their pattern changes with distance; in other words, for the Kinect the speckle patterns at any two positions in space are different. As long as such structured light is projected into the space, the whole space is marked: place an object into the space, look at the speckle pattern on the object, and you can tell where it is. Of course, the speckle patterns throughout the space must be recorded beforehand, so a light-source calibration is performed first. In PrimeSense's patent, the calibration works as follows: at regular intervals a reference plane is placed in the scene and the speckle pattern on that plane is recorded. Suppose the user activity space specified for Natal is 1 to 4 meters from the television and a reference plane is taken every 10 cm; the calibration then stores 30 speckle images. When a measurement is needed, a speckle image of the scene is captured and cross-correlated in turn with the 30 stored reference images, yielding 30 correlation images; wherever an object is present in space, a peak appears in the corresponding correlation image. By stacking these peaks and applying some interpolation, the three-dimensional shape of the whole scene is obtained.
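The calibration-and-correlation procedure described above can be sketched in a few lines of Python. This is a toy illustration of the idea only, not PrimeSense's actual algorithm; the function and variable names are invented, and it simply assigns each pixel the depth of the reference plane whose local speckle patch correlates best with the scene (e.g. 30 planes taken every 10 cm between 1 m and 4 m):

import numpy as np

def estimate_depth_from_speckle(scene, reference_stack, reference_depths, win=9):
    """Toy sketch of Light Coding depth estimation.

    scene            : 2D array, speckle image of the scene to measure
    reference_stack  : list of 2D arrays, speckle images recorded on the
                       calibration reference planes
    reference_depths : depth (in meters) of each reference plane
    win              : side length of the local correlation window (assumed)
    """
    h, w = scene.shape
    half = win // 2
    depth = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = scene[y - half:y + half + 1, x - half:x + half + 1]
            patch = patch - patch.mean()
            best_corr, best_depth = -np.inf, 0.0
            for ref, d in zip(reference_stack, reference_depths):
                ref_patch = ref[y - half:y + half + 1, x - half:x + half + 1]
                ref_patch = ref_patch - ref_patch.mean()
                denom = np.linalg.norm(patch) * np.linalg.norm(ref_patch)
                corr = (patch * ref_patch).sum() / denom if denom > 0 else 0.0
                if corr > best_corr:  # the correlation peak marks the plane the object sits on
                    best_corr, best_depth = corr, d
            depth[y, x] = best_depth
    return depth

# Example calibration grid: 30 reference planes, every 10 cm from 1 m to 4 m.
reference_depths = np.arange(1.0, 4.0, 0.1)

A real implementation would also interpolate between the best-matching planes to refine the depth, as described above.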

2.4 PrimeSense

The most common image-capture device today is the digital camera, which outputs a matrix of pixels, each representing a color value; this is a two-dimensional (2D) visual technology. 3D vision additionally captures the depth of a target (also called the Z axis, range, or distance) and its surroundings, beyond the spatial position (X and Y axes) and color of the target. A 3D vision system outputs both a depth view and a color view of each scene. PrimeSense is a fabless semiconductor company whose technology gives televisions, set-top boxes, living-room computers, and other consumer electronics the ability to support natural interaction. Its proudest word is: depth. Its PrimeSensor products include a Reference Design and the NITE middleware. The PrimeSensor Reference Design is a low-cost, plug-and-play, USB-powered device that can be placed on top of or beside a TV or monitor, or embedded in it. The Reference Design generates real-time depth, color, and audio data for a living-room scene. It works under a variety of indoor lighting conditions (including completely dark and very bright rooms), requires the user to wear or hold nothing, needs no calibration, and requires no computation from the host processor. The PrimeSensor design includes NITE, an advanced visual-data-processing middleware optimized for mass-market CE products. NITE provides an algorithmic framework for developing rich, natural interactive applications, and the NITE SDK (Software Development Kit) provides a documented API and framework for designing and developing GUIs (graphical user interfaces) and games.

http://labs.manctl.com/rgbdemo/index.php/Main/Download (RGBD calibration, must see!)

http://www.ee.oulu.fi/~dherrera/kinect/ (most important!)

http://nicolas.burrus.name/index.php/Research/KinectCalibration (ToF calibration)

http://www.rgbdtoolkit.com/
