About the past, present, and future of unmanned driving, this article is all you need

Transferred from: http://36kr.com/p/5097118.html

Unmanned driving: a future that is just beginning

The prologue to fully autonomous unmanned driving has opened. Before looking at the remarkable future behind the curtain, let us first review the history of Silicon Valley, and then look ahead to the future of unmanned driving.

As shown in Figure 1-1, modern information technology began in the 1960s, when Fairchild and Intel opened a new era with their innovations in silicon microprocessors; this is also the origin of Silicon Valley. Microprocessor technology greatly improved industrial productivity and advanced the development of modern industry. In the 1980s, with software systems such as Xerox Alto, Apple Lisa, and Microsoft Windows, the graphical interface came into wide use, the personal computer emerged and began to spread, and modern information technology entered its second stage.

In the early 21st century, as personal computers became widespread and large-scale applications emerged, Google used the Internet and search engines to connect people with a vast sea of information; at this point, modern information technology entered its third stage. Facebook, founded in 2004, pushed modern information technology into its fourth stage through an innovative social networking model. Human interaction expanded from offline to online, and human society began its initial migration onto the World Wide Web, gradually maturing there.

As the Internet population expanded, Airbnb and Uber brought the sharing-economy model of human society directly onto the Internet, using the Internet plus mobile devices to connect the economic behavior of different users directly, and achieved widespread success. Each stage of information technology development, and the innovation it subsequently drove, dramatically changed the way humans define and acquire the information they need. For the later stages in particular, the Internet is a fundamental precondition: most services reach end users via the Internet.

Now we enter the sixth stage of information technology development, in which robots begin to appear as the carriers of services; one concrete example is the unmanned driving product. Unmanned driving is not a single new technology but the integration of a whole series of technologies: through their effective integration, an unmanned vehicle can carry its passengers safely. This chapter describes the classification of unmanned driving and its key applications in ADAS, surveys the many technologies involved, and discusses how to integrate them safely and efficiently in an unmanned driving system.

Unmanned driving is coming

It is expected that by 2021, unmanned vehicles will enter the market, starting a new phase. The World Economic Forum estimates that the digital transformation of the automotive industry will create $67 billion in value for the industry and $3.1 trillion in social benefits, including improvements from unmanned vehicles, passenger connectivity, and the transformation of the entire transportation ecosystem.

It is estimated that the market potential of semi-autonomous and fully autonomous vehicles will be considerable in the coming decades. In 2035 alone, for example, there will be about 8.6 million autonomous vehicles in China, of which about 3.4 million will be fully autonomous and 5.2 million semi-autonomous.

"Sales of Chinese cars, buses, taxis and related transportation services are expected to exceed $1.5 trillion in annual revenue," said some industry authorities. "The Boston Consulting Group predicts that" the global market share of unmanned vehicles will reach 25%, which will take 15-20 years. "Since unmanned vehicles are expected to go public by 2021, this means that in 2035-2040, unmanned vehicles will account for 25% of the global market.

The impact of driverless cars on the automotive industry will be unprecedented. Research shows that unmanned driving can bring disruptive improvements in areas such as highway safety, traffic congestion, and air pollution.

Enhancing highway safety

Highway accidents are a major problem worldwide. In the United States, an estimated 35,000 people die each year in car accidents; in China the figure is about 260,000, and in Japan about 4,000 people die in highway accidents each year. According to the World Health Organization, 1.24 million people die every year in road accidents.

It is estimated that fatal car accidents cause a loss of $260 billion a year, while injuries from car accidents cost $365 billion, so highway accidents cause a total loss of $625 billion a year. Research by RAND in the United States shows that "39% of car crashes in 2011 involved drunk driving." It is almost certain that unmanned vehicles will bring significant improvements here, avoiding crashes and casualties. In China, about 60% of traffic accidents involve cyclists, pedestrians, or electric bicycles colliding with cars and trucks; in the United States, 94% of motor vehicle accidents are related to human error.

A study by the Insurance Institute for Highway Safety has shown that installing automatic safety devices can reduce fatalities in highway accidents by 31%, saving 11,000 lives a year. Such devices include forward collision warning, collision braking, lane departure warning, and blind spot detection.

Easing traffic congestion

Traffic congestion is a problem in almost every metropolis. In the United States, for example, each driver experiences an average of 40 hours of traffic congestion per year, at an annual cost of $121 billion. In Moscow, Istanbul, Mexico City, or Rio de Janeiro, the time wasted is even longer: every driver spends more than 100 hours a year in traffic jams. In China, 35 cities have more than 1 million vehicles and 10 cities have more than 2 million. In the busiest urban areas, about 75% of the roads are congested at peak hours. The total number of private cars in China has reached 126 million, an increase of 15%, with 5.6 million vehicles in Beijing alone.

Donald Shoup's study found that 30% of traffic congestion in metropolitan areas arises because drivers circle around business districts in search of nearby parking, an important cause of congestion, air pollution, and environmental deterioration; about 30% of the carbon dioxide emissions that contribute to climate change come from cars. In addition, an estimated 23% to 45% of urban traffic congestion occurs at intersections. Traffic lights and stop signs work poorly because they are static and cannot take traffic flow into account: the green and red phases are set in advance at fixed intervals, regardless of the traffic volume in each direction.

Once unmanned vehicles come into use and account for a large share of traffic, vehicle sensors will be able to work with intelligent transportation systems to optimize traffic flow at intersections. Traffic light intervals will become dynamic, depending on the real-time movement of traffic. This will increase the efficiency of vehicle traffic and ease congestion.

Reducing air pollution

The automobile is one of the main causes of declining air quality. Research by the RAND Corporation shows that "unmanned driving technology can improve fuel efficiency by 4% to 10% through smoother acceleration and deceleration than manual driving." Because smog in industrial areas is related to the number of cars, increasing the share of unmanned vehicles can reduce air pollution. A 2016 study estimated that cars pollute roughly 40% more while idling at red lights or sitting in traffic jams than while moving.

Unmanned vehicle sharing systems also bring the benefits of reduced emissions and energy savings. Researchers at the University of Texas at Austin studied sulfur dioxide, carbon monoxide, nitrogen oxides, volatile organic compounds, greenhouse gases, and fine particulate matter, and found that "the use of unmanned vehicle sharing systems not only saves energy, but also reduces emissions of various pollutants."

Uber has found that 50% of its rides in San Francisco and 30% in Los Angeles are multi-passenger carpools; globally, the figure is 20%. Whether the car is traditional or unmanned, the more passengers carpool, the better it is for the environment and the more congestion is relieved. Changing the one-person-per-car pattern will greatly improve air quality.

Levels of autonomous driving

In 2013, the National Highway Traffic Safety Administration (NHTSA), which establishes a wide range of regulations and standards, released a five-level standard for automotive automation, dividing autonomous driving functionality into levels 0 through 4 in response to the rapid growth of automotive active safety technology. Let us look at the NHTSA definitions, as shown in the figure.

Level 0: No automation

The driver has absolute control over all functions of the car, with no automated driving functions at all, and is responsible for starting, braking, steering, and observing road conditions. Any driver-assistance technology that still requires a person to control the car belongs to Level 0. Therefore, existing forward collision warning and lane departure warning, as well as automatic wipers and automatic headlights, still belong to Level 0 even though they have some intelligence.

Level 1: Single-function automation

The driver is still responsible for driving safety but can cede some control to the system. Certain functions are automated, such as the common adaptive cruise control (ACC), emergency brake assist (EBA), and lane keeping support (LKS). Level 1 is characterized by a single automated function: the driver cannot take both hands and feet off the controls at the same time.

Level 2: Partial automation

The driver and the car share control. In certain preset environments the driver can stop operating the car, taking hands and feet off the controls at the same time, but must remain on standby, responsible for driving safety and ready to take back control of the vehicle at short notice. An example is the car-following capability formed by combining ACC and LKS. The core of Level 2 is not having two or more automated functions, but that the driver is no longer the primary operator. Tesla's Autopilot is also a Level 2 feature.

Level 3: Conditional automation

Automated control in limited situations. On predefined road segments (such as highways or low-traffic urban roads), the autonomous system can take full responsibility for the entire vehicle; in an emergency the driver still needs to take over at some point, but with sufficient warning time (for example, road work ahead). Level 3 liberates the driver, who is no longer responsible for driving safety and does not need to monitor road conditions.

Level 4: Fully automated (unmanned), without driver or passenger intervention

From the point of departure to the destination, no human assistance is needed. Given only the start and end points, the car takes full responsibility for driving safety, completely independent of driver intervention, and can even drive with no one aboard (for example, on empty freight runs).

Another rating for autonomous driving comes from the Society of Automotive Engineers (SAE), which divides autonomous driving technology into levels 0 through 5. The SAE definitions are consistent with NHTSA's for levels 0 through 3, covering no automation, driver assistance, partial automation, and conditional automation.

The only difference is that SAE further subdivides NHTSA's full automation, emphasizing requirements on the environment and the road. Autonomous driving at SAE Level 4 must take place under certain road conditions, such as in a closed campus or a fixed lane, and can be described as highly automated driving for specific scenarios. SAE Level 5 places no restrictions on the driving environment and can automatically handle all kinds of complex vehicle, pedestrian, and road conditions.

To sum up, the autonomous driving functions achieved at each level build up layer by layer, and ADAS (Advanced Driver Assistance Systems) are high-level driving aids belonging to autonomous driving levels 0 through 2. As shown in Table 1-1, the functions implemented at L0 are limited to sensing and decision alarms, such as night vision systems, traffic sign recognition, pedestrian detection, and lane departure warning.

L1 implements a single control function, such as active emergency braking or adaptive cruise control; implementing any one of these is enough to reach L1.

L2 implements multiple control functions, such as a vehicle equipped with both AEB (automatic emergency braking) and LKA (lane keeping assist).

L3 enables autonomous driving under certain conditions; when those conditions are exceeded, a human driver takes over.

L4 in SAE refers to unmanned driving under certain conditions, such as inside a closed campus, for example the unmanned driving service operated by Baidu in the Wuzhen scenic area.

L5 in SAE is the ultimate goal: completely unmanned driving. Unmanned driving is the highest level of automation and the final form of autonomous driving.

Fully autonomous unmanned vehicles may be safer than semi-autonomous cars because they exclude human error and poor judgment while the vehicle is driving. For example, a survey by the Virginia Tech Transportation Institute showed that "the driver of an L3 autonomous vehicle needs an average of 17 seconds to respond to a takeover request; in that time a car travelling at 65 miles (105 kilometers) per hour has already covered 1,621 feet (494 meters), more than five football pitches." Baidu's engineers have found similar results.

It takes a human driver 1.2 seconds from seeing an obstacle on the road to hitting the brakes, much longer than the 0.2 seconds a car's computer needs. This time difference means that at 120 kilometers (75 miles) per hour, by the time a human driver brakes, the car has already travelled 40 meters (44 yards); if the car's computer makes the judgment, that distance is only 6.7 meters (7 yards). In many accidents, this gap determines the life and death of the passengers. Thus driverless driving, the highest level of automation, is the ultimate goal of the automotive industry's future development.
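
These reaction distances follow directly from speed multiplied by reaction time; here is a minimal sketch that reproduces the figures above (the labels and loop are purely illustrative):

```python
# Reaction distance = speed x reaction time; reproduces the figures above.
def reaction_distance_m(speed_kmh: float, reaction_time_s: float) -> float:
    return speed_kmh / 3.6 * reaction_time_s  # km/h -> m/s, then scale by time

for label, t_s in [("human driver", 1.2), ("onboard computer", 0.2)]:
    d = reaction_distance_m(120.0, t_s)
    print(f"{label}: {d:.1f} m travelled before braking at 120 km/h")
# human driver: 40.0 m; onboard computer: 6.7 m
```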

Introduction to the unmanned driving system

An unmanned driving system is a complex system. As shown in the figure, it consists of three parts: the algorithms, the client system, and the cloud platform. The algorithms cover the key steps of sensing, perception, and decision-making; the client system includes the robot operating system and the hardware platform; and the cloud platform covers data storage, simulation, high-precision map generation, and deep learning model training.

The algorithms extract meaningful information from raw sensor data to understand the surrounding environment and make decisions based on how it changes. The client system fuses the many algorithms to meet real-time and reliability requirements: for example, if the sensors produce raw data at 60 Hz, the client system must ensure that even the longest processing pipeline completes within 16 ms. The cloud platform provides offline computing and storage capabilities for unmanned vehicles; with it, we can test new algorithms, update high-precision maps, and train more effective recognition, tracking, and decision models.
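
A back-of-envelope sketch of that real-time budget follows; the stage names and latencies are hypothetical placeholders, not measured values:

```python
# A 60 Hz sensor leaves roughly 16.7 ms between frames; the whole
# sensing -> perception -> decision pipeline must fit inside that budget.
SENSOR_RATE_HZ = 60
frame_budget_ms = 1000.0 / SENSOR_RATE_HZ  # ~16.7 ms per frame

stage_latency_ms = {"sensing": 4.0, "perception": 7.5, "decision": 3.0}
total_ms = sum(stage_latency_ms.values())

print(f"budget {frame_budget_ms:.1f} ms, pipeline {total_ms:.1f} ms")
if total_ms > frame_budget_ms:
    print("pipeline too slow: frames will back up or be dropped")
```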

Unmanned driving algorithms

The algorithm system consists of several parts: first, sensing, which extracts meaningful information from raw sensor data; second, perception, which localizes the vehicle and understands the environment it is in; and third, decision-making, which lets the vehicle reach its destination reliably and safely.

Sensing

Typically, an unmanned vehicle is equipped with several different types of main sensors. Each type has its own advantages and disadvantages, so data from the different sensors must be fused effectively. The sensors commonly used in unmanned driving today fall into the following categories.

(1) GPS/IMU:

The GPS/IMU sensing system helps the unmanned vehicle localize itself by combining global position fixes with high-frequency inertial updates. GPS is a relatively accurate positioning sensor, but its update rate, only about 10 Hz, is too low to provide real-time position updates on its own. An IMU's accuracy degrades over time, so it cannot guarantee accurate position updates over long distances, but it updates at 200 Hz or higher, which satisfies the real-time requirement. By fusing GPS and IMU, we obtain position updates that are both accurate and timely enough for vehicle localization.

(2) LiDAR:

LiDAR is used for mapping, localization, and obstacle avoidance. Its accuracy is very high, so LiDAR is usually the main sensor in unmanned vehicle designs. Using a laser source, LiDAR completes a remote-sensing measurement by detecting the optical signal returned from the interaction between the laser and the object it hits. LiDAR can be used to generate high-precision maps, to localize a moving vehicle against those maps, and to meet obstacle-avoidance requirements. Taking the Velodyne 64-beam LiDAR as an example, it rotates at 10 Hz and delivers up to 1.3 million readings per second.

(3) Camera:

Cameras are widely used in object recognition and tracking, and are the main solution for tasks such as lane detection, traffic light detection, and pedestrian detection. To enhance safety, existing unmanned vehicles usually mount at least eight cameras around the body, covering the front, rear, left, and right to discover, recognize, and track objects. These cameras usually run at 60 Hz; when they all work simultaneously, they produce a whopping 1.8 GB of data per second.
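
That aggregate rate can be sanity-checked with some quick arithmetic; the per-pixel assumptions below are illustrative, not from the text:

```python
# Implied per-frame size for 8 cameras at 60 Hz producing 1.8 GB/s in total.
num_cameras, fps = 8, 60
total_rate_bytes_per_s = 1.8e9

per_frame_bytes = total_rate_bytes_per_s / (num_cameras * fps)
print(f"~{per_frame_bytes / 1e6:.1f} MB per frame")  # ~3.8 MB
# An uncompressed 1080p frame at 2 bytes per pixel is 1920*1080*2 ~= 4.1 MB,
# so the quoted aggregate rate is plausible for raw video streams.
```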

(4) Radar and Sonar:

Radar emits electromagnetic energy in a given direction in space; objects in its path reflect the wave, and from the reflection the radar extracts information about the object, including its distance, the rate of change of that distance (radial velocity), azimuth, and height. The radar and sonar system is the last line of defense in obstacle avoidance: the data it generates indicates the distance to the nearest obstacle in the vehicle's path. Once the system detects an obstacle ahead with a high risk of collision, the unmanned vehicle applies the emergency brakes to avoid it. The data from radar and sonar therefore needs little processing and is usually fed directly to the control processor, bypassing the main computing pipeline, to implement urgent maneuvers such as swerving, braking, or pre-tensioning the seat belts.
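
A minimal sketch of this direct safety path, with a hypothetical braking threshold and made-up range readings:

```python
# Range readings feed a simple control check without passing through the
# main computing pipeline; the threshold and readings are illustrative.
BRAKE_THRESHOLD_M = 5.0  # hypothetical emergency-braking distance

def obstacle_too_close(range_readings_m: list[float]) -> bool:
    """True if the nearest detected obstacle is inside the braking threshold."""
    return min(range_readings_m) < BRAKE_THRESHOLD_M

if obstacle_too_close([12.3, 7.8, 4.6]):
    print("obstacle within 5 m: apply emergency brake, pre-tension seat belts")
```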

Perception

After the raw data has been acquired, it is passed to the perception subsystem so the vehicle can fully understand its environment. The perception subsystem mainly does three things: localization, object recognition, and object tracking.

1) Positioning

GPS provides relatively accurate position information at a low update rate, while the IMU provides less accurate position information at a high update rate. Using a Kalman filter, we can combine the advantages of the two kinds of data and deliver position updates that are both accurate and real-time.

As shown in Figure 1-4, the IMU updates every 5 ms, but its error keeps accumulating, so its accuracy keeps degrading. Fortunately, every 100 ms we receive a GPS update that corrects the accumulated IMU error, and in this way we finally obtain real-time and accurate position information.
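
The sketch below illustrates this fusion with a minimal one-dimensional Kalman filter: the IMU drives the prediction every 5 ms and a GPS fix corrects it every 100 ms. All noise values and the simulated trajectory are illustrative assumptions, not parameters from the text:

```python
import numpy as np

dt = 1.0 / 200.0                       # IMU period: 5 ms, as in the text
F = np.array([[1.0, dt], [0.0, 1.0]])  # transition for state [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])    # how measured acceleration enters the state
H = np.array([[1.0, 0.0]])             # GPS observes position only
Q = np.diag([1e-4, 1e-3])              # process noise: models IMU drift
R = np.array([[1.0]])                  # GPS noise: ~1 m standard deviation

x = np.zeros((2, 1))                   # state estimate
P = np.eye(2)                          # estimate covariance

def imu_predict(accel: float):
    """200 Hz step: propagate the state; covariance grows (drift)."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def gps_update(gps_pos: float):
    """10 Hz step: correct the drifted estimate with a GPS fix."""
    global x, P
    y = np.array([[gps_pos]]) - H @ x   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 10.0          # hypothetical: 10 m/s, no acceleration
for step in range(200):                 # simulate one second of driving
    true_pos += true_vel * dt
    imu_predict(accel=rng.normal(0.0, 0.05))     # noisy "measured" acceleration
    if step % 20 == 19:                 # every 20th IMU step = every 100 ms
        gps_update(true_pos + rng.normal(0.0, 1.0))
print(f"estimated position {x[0, 0]:.2f} m, true {true_pos:.2f} m")
```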

However, we cannot rely on this combination alone, for three reasons: first, its accuracy is only within about a meter; second, GPS signals have an inherent multipath problem that introduces noise; third, GPS requires an unobstructed view of the sky, so it does not work in scenarios such as tunnels.

As a supplement, cameras are also used for localization. Simplifying somewhat, as shown in Figure 1-5, vision-based localization consists of three basic steps:

① Triangulate the stereo image pair to obtain a disparity map, from which the depth of each point is computed (a code sketch of this step follows the list).

② Match salient features between successive stereo image frames to establish correspondences between the frames, and estimate the motion between them.

③ Compare the captured salient features against points on a known map to compute the current position of the vehicle. However, vision-based localization is very sensitive to lighting conditions, so its usefulness and reliability are limited.
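
As promised in step ①, here is a minimal depth-from-disparity sketch using the classic pinhole-stereo relation Z = f·B/d; the focal length and baseline are illustrative assumptions:

```python
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (hypothetical)
BASELINE_M = 0.3    # distance between the two stereo cameras (hypothetical)

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Depth in meters per pixel; zero disparity (no match) becomes NaN."""
    d = np.where(disparity_px > 0, disparity_px, np.nan)
    return FOCAL_PX * BASELINE_M / d

toy_disparity = np.array([[20.0, 10.0], [5.0, 0.0]])  # toy disparity map
print(depth_from_disparity(toy_disparity))
# larger disparity means a closer point: 10.5 m, 21.0 m, 42.0 m, NaN
```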

Therefore, LiDAR combined with a particle filter is often used as the main sensor for vehicle localization. The point cloud generated by LiDAR gives a "shape description" of the environment, but individual points are not distinctive enough to tell apart on their own. With a particle filter, the system compares the observed shapes against the known map to reduce uncertainty about the vehicle's position.

To localize a moving vehicle on a map, the particle filter method correlates the known map with the LiDAR measurements. Particle filtering can achieve real-time localization with 10 cm accuracy and is particularly effective in complex urban environments. However, LiDAR has an inherent disadvantage: when particles such as raindrops or dust are suspended in the air, the measurements are badly disturbed.
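
The sketch below shows the weighting-and-resampling idea behind this on a one-dimensional toy map; all the numbers are illustrative, and a real localizer would work in 2-D or 3-D with full point clouds:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
WALL_POS_M = 50.0                         # known map: a wall at 50 m

particles = rng.uniform(0.0, 40.0, N)     # candidate vehicle positions
true_pos = 20.0

def lidar_range(pos: float) -> float:
    """Simulated LiDAR range to the wall, with sensor noise."""
    return WALL_POS_M - pos + rng.normal(0.0, 0.1)

def filter_step(z: float, sigma: float = 0.1):
    """Weight particles by measurement likelihood, then resample."""
    global particles
    expected = WALL_POS_M - particles                    # per-particle prediction
    w = np.exp(-0.5 * ((z - expected) / sigma) ** 2)     # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                     # resample by weight
    particles = particles[idx] + rng.normal(0.0, 0.05, N)  # jitter: motion noise

for _ in range(10):
    filter_step(lidar_range(true_pos))
print(f"estimated position {particles.mean():.2f} m (true {true_pos:.2f} m)")
```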

Therefore, as shown in Figure 1-6, we need multi-sensor fusion techniques to combine the data from all of these sensor types, integrating their individual strengths to achieve reliable and accurate localization.
