The key technology toward the self-driving car

Jianfeng Zhao (School of Automotive and Transportation Engineering, Shenzhen Polytechnic, Shenzhen, China)
Bodong Liang (School of Automotive and Transportation Engineering, Shenzhen Polytechnic, Shenzhen, China)
Qiuxia Chen (School of Automotive and Transportation Engineering, Shenzhen Polytechnic, Shenzhen, China)

International Journal of Intelligent Unmanned Systems

ISSN: 2049-6427

Article publication date: 2 January 2018


Abstract

Purpose

The successful commercial use of the self-driving (also called driverless, unmanned or automated) car will make human life easier. The paper aims to discuss this issue.

Design/methodology/approach

This paper reviews the key technology of the self-driving car. Four key technologies, namely, the car navigation system, path planning, environment perception and car control, are addressed and surveyed. The main research institutions and groups in different countries are summarized. Finally, the debates around the self-driving car are discussed and its development trend is predicted.

Findings

This paper analyzes the key technology of the self-driving car and illuminates the state of the art of the field.

Originality/value

The main research contents and key technologies are introduced, and the research progress as well as the main research institutions are summarized.

Citation

Zhao, J., Liang, B. and Chen, Q. (2018), "The key technology toward the self-driving car", International Journal of Intelligent Unmanned Systems, Vol. 6 No. 1, pp. 2-20. https://doi.org/10.1108/IJIUS-08-2017-0008

Publisher

Emerald Publishing Limited

Copyright © 2018, Jianfeng Zhao, Bodong Liang and Qiuxia Chen

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Nowadays, even though vehicle driving assistive technology has been deployed in premium cars on a large scale, the concept of the self-driving car constantly appears in news and reports (Ross, 2014; Ackerman, 2016a; Harris, 2016; Hassler, 2017; Computerworld, 2012). However, because only a few studies (Van Arem, 2014; Ibanez-Guzman et al., 2012) review the key technology of the self-driving car, many questions remain ambiguous, e.g.: What is the progress of the self-driving car? Is the large-scale commercial use of self-driving cars helpful to human society? To address these questions, this paper investigates the key technology of the self-driving car, discusses its implementation obstacles and summarizes the whole picture of the technology's progress, which is expected to help the reader understand the commercial use of the self-driving car.

Generally, the self-driving car (Berger, 2014; Berger and Dukaczewski, 2014; Walker et al., 2001; Thrun, 2010; Baruch, 2016; Barker et al., 2013; Litman, 2015; Levinson et al., 2011), also termed a wheeled mobile robot, is a kind of intelligent car that arrives at a destination based on information obtained from automotive sensors, including perception of the path environment, route information and car control. The main characteristic of the self-driving car is transporting people or objects to a predetermined target without a human driving the car. According to the National Highway Traffic Safety Administration, vehicle automation can be classified into five levels (Levels 0-4), as described in Table I. Because Levels 1 and 2 are mature, this paper discusses Levels 3 and 4.

2. The key technology of a self-driving car

Automatic control, architecture, artificial intelligence, computer vision and many other technologies are integrated into the self-driving car, which is a product of highly developed computer science, pattern recognition and intelligent control technology. From a different viewpoint, the technology of the self-driving car represents the level of scientific research and industrial strength of a country. However, few papers have surveyed the technology of the self-driving car due to its complexity. In view of this problem, this paper proposes a new classification, as shown in Figure 1, for the key technology of the self-driving car according to function implementation, which makes the description easy and clear.

Compared with manual driving, the key characteristic of a self-driving car is that automation equipment replaces the human driver. Based on this characteristic and on the functional requirements for driving and on-board equipment modules, the core technology of the self-driving car is classified into four key parts: the car navigation system, path planning, environment perception and car control. Detailed descriptions are provided in the following sections.

Compared with classification by automation level, this paper proposes a new classification according to the function realization of the self-driving car. This classification clearly expresses the technical requirements of a self-driving car, helping researchers and relevant enterprises to understand the technical implementation of the self-driving car; meanwhile, it clearly describes the key technologies for implementing the self-driving car and their latest progress.

From the view of classification, this paper divides the key technology of the self-driving car into four parts according to its functions: environment perception, car navigation, path planning and car control. Each part is independent of the others, with no overlapping coverage. This classification is inspired by the operation steps of human driving and is easy for researchers to understand.

2.1 Car navigation system

During self-driving, two issues must be resolved: the current location of the car and how to go from that location to the destination. In human driving, these two issues are solved by the human's own knowledge. In self-driving, however, the car must be able to automatically and intelligently locate its position and perform path planning to the destination. For this purpose, an on-board car navigation system is deployed on the self-driving car.

The structure of the car navigation system and its data processing model are depicted in Figure 2. In the car navigation system, a geographic information system and a global positioning system (GPS) receiver are equipped to receive location information, such as longitude and latitude, from satellites. This information, together with the road information generated by the location system and the digital map database, serves as the source data for the map-matching model, where path planning algorithms (e.g., the Dijkstra and Bellman-Ford algorithms) enable the path planning calculation. After this calculation, the self-driving car can locate itself. With the locations of the self-driving car and the destination, the driving route can be computed by the path planning model.
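
To make the first step of this pipeline concrete, the sketch below snaps a raw GPS fix to the nearest node of a digital map database using the standard haversine great-circle distance. It is a minimal illustration in Python: the `map_nodes` structure and both function names are assumptions, not part of any cited system.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two (lat, lon) fixes."""
        R = 6371000.0  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * R * math.asin(math.sqrt(a))

    def nearest_map_node(gps_fix, map_nodes):
        """Snap a raw (lat, lon) GPS fix to the closest node of the map.

        `map_nodes` is a hypothetical dict: node_id -> (lat, lon).
        """
        return min(map_nodes, key=lambda n: haversine_m(*gps_fix, *map_nodes[n]))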

2.2 Location system

The main purpose of the location system is to determine the vehicle location; location methods can generally be classified into relative location, absolute location and hybrid location. For relative location, the current position of the self-driving car is obtained by adding the moving distance and direction to the prior position. For instance, the inertial navigation system (INS) (Farrell and Barth, 1999) is a common relative location system. In an INS, the vehicle's angular velocity and acceleration are obtained by a gyroscope and an accelerometer installed in the car. By integrating these data (i.e., angular velocity and acceleration), the car's relative course angle and speed can be calculated. By integrating the course angle and speed once again, the car's direction and mileage can be obtained. Combined with the prior vehicle location, the current vehicle location can be calculated. However, due to vehicle vibration during motion, a deviation between the calculated location and the actual location is inevitable.
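
A minimal planar dead-reckoning step of the kind this paragraph describes might look as follows. It is a sketch under simplifying assumptions (a single yaw-rate gyro, a forward accelerometer, a fixed sample period) and ignores sensor biases and vibration, which is exactly why the deviation mentioned above accumulates over time.

    import math

    def dead_reckon(x, y, heading, speed, yaw_rate, accel, dt):
        """One planar dead-reckoning step (illustrative, not a full INS).

        Integrates the gyro yaw rate into heading, the forward
        acceleration into speed, and both into position; without an
        absolute correction, these integration errors accumulate.
        """
        heading += yaw_rate * dt   # integrate angular velocity -> course angle
        speed += accel * dt        # integrate acceleration -> speed
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        return x, y, heading, speed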

The absolute location method locates the vehicle's position according to information obtained from a positioning system. Common positioning systems are satellite based, such as GPS, GLONASS, Galileo and Beidou. However, the satellite signal is prone to interference from weather conditions and the urban environment, such as buildings and mountains, which causes error and noise in the location signal; thus, the measured absolute location is not always accurate.

The hybrid location method, which combines the characteristics of the above two methods, is the most common method for obtaining the position of a self-driving car. For instance, the self-driving car of Shanghai Jiao Tong University involves a typical hybrid location implementation, which uses a Gmouse UB-353 USB GPS module and an Analog Devices ADIS16300 INS (Yida, 2013) to obtain location information.

GPS/INS can be used not only for navigation but also for location-dependent maneuvers, such as turning. For instance, Zhu et al. (2012) proposed a new vehicle cross-road turning method based on GPS/INS information: the vehicle turning is achieved by following a predefined path, which is generated by line curve-fitting and prediction based on the location and road condition given by GPS/INS. Carnegie Mellon University (Urmson and Whittaker, 2017) made use of sparse GPS data combined with aerial imagery to locate its self-driving car, named Boss, on the road.

The major GPS/inertial measurement unit (IMU) manufacturers include NovAtel, Leica, CSI Wireless and Thales Navigation. NovAtel proposed the SPAN technology, which combines the absolute accuracy of GPS positioning with the stability of IMU gyro and accelerometer measurements to provide a solution with 3D position, velocity and attitude; even when the GPS signal is blocked, it can provide a stable and continuous solution. Based on the SPAN technology, NovAtel has two major GPS/IMU products: the SPAN-CPT integrated navigation system and the SPAN-FSAS distributed navigation system. SPAN-CPT uses a NovAtel professional high-precision GPS board and the German iMAR company's fiber-optic gyro IMU. Its solution accuracy can serve different positioning requirements in different modes, including SBAS, L-band (OmniSTAR and CDGPS) and RTK differential modes. This system has a best course accuracy of 0.05°, and the pitch/roll accuracy is 0.015°. SPAN-FSAS also uses iMAR's high-precision closed-loop IMU; its gyro bias is less than 0.75°/hour and its accelerometer bias is less than 1 mg. Combined with a NovAtel FlexPak6™ or ProPak6™ receiver, an integrated navigation solution can be achieved: the IMU-FSAS sends inertial measurement data to the GNSS receiver, and the GNSS+INS solution is output at up to 200 Hz.

2.3 Electronic map (EM)

An EM is used for digital map information storage, mainly including geographical characteristics, traffic information, building information, traffic signs, road facilities, etc. Nowadays, most EMs used in self-driving cars are EMs designed for humans. It is expected that special EMs for self-driving, supporting features such as automatic road sign recognition and the exchange of driving information among self-driving cars, will be developed in the future.

Now, an EM for self-driving cars named the HD map has already appeared. Compared with the traditional map, on the one hand, the accuracy of the absolute coordinates of an HD map is higher; for example, next-generation mapping products are declared to be accurate to centimeters. On the other hand, the road traffic information elements are richer and more detailed. In particular, the HD map is divided into three layers, the active layer, the dynamic layer and the analytic layer (a data-structure sketch follows the list below):

  1. The active layer, compared to the traditional map, adds HD road-level data (road shape, slope, curvature, paving, direction, etc.), lane attribute data (lane type, lane width, etc.) and large-target data such as elevated objects, guardrails, trees, road edge types and roadside landmarks.

  2. The dynamic layer updates real-time traffic data from other vehicles' sensors and road sensors; the update and supplement happen in real time. This is the second phase of the HD map, namely, network integration with collaborative perception.

  3. The analytic layer helps train the self-driving car by analyzing real-time big data of human driving records. With it, the HD map enters the third phase: network integration with coordinated decision-making and control.
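
The sketch below expresses the three layers as a simple data structure. All class and field names are illustrative assumptions for exposition, not an actual HD map schema from any vendor.

    from dataclasses import dataclass, field

    @dataclass
    class ActiveLayer:            # static HD road-level data
        lane_width_m: float
        slope: float
        curvature: float
        roadside_landmarks: list = field(default_factory=list)

    @dataclass
    class DynamicLayer:           # real-time data from vehicle/road sensors
        traffic_speed_kmh: float
        incidents: list = field(default_factory=list)

    @dataclass
    class AnalyticLayer:          # statistics mined from human driving records
        typical_speed_kmh: float
        braking_hotspots: list = field(default_factory=list)

    @dataclass
    class HDMapTile:              # one map tile carrying all three layers
        active: ActiveLayer
        dynamic: DynamicLayer
        analytic: AnalyticLayer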

At present, the ADAS map has the active-layer information and an accuracy of 1-5 m. For example, BMW's Adaptive Speed Recommendation (ASR) reminds the driver to slow down 50-300 meters ahead of a slowdown area; the exact distance adjusts with the current speed, the braking capability and the driver's response time. At a turn, ASR considers the road width, the number of lanes, the overall road condition, etc., to calculate a reasonable speed for the car.

The current HD map is at the ADAS level, which can be applied to L2/L3 self-driving. In the future, by incorporating the data processing facilities of the internet of vehicles brought by 5G, together with computer vision, 3D modeling technology, cloud computing, deep-learning-based environment perception and closed-loop real-time updating, the HD map will gradually reach a highly automated driving level. This paper expects that the HD map will mature as the 5G standard is established and artificial intelligence enters its mature stage, and that it will become one of the key technologies supporting the intelligent driving network.

2.4 Map matching

Map matching, which is the foundation of path planning, calculates the car's location by using the geographical information from GPS/INS and the map information from the EM. During the calculation, a fusing technique is employed to fuse the longitude, latitude and other coordinate information into the EM. From a practical viewpoint, the output of the car's location should be accurate and time efficient. In this regard, it is important to find a good method to fuse the information from GPS and INS. In fact, the satellite signal of GPS or the INS output can sometimes be lost; therefore, a good data fusion method that can integrate information from the existing location and route scenario will greatly enhance accuracy, robustness and reliability.
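
As one concrete illustration of such fusion, the following one-dimensional Kalman filter step combines an INS-predicted displacement with an absolute GPS fix. It is a deliberately minimal sketch under stated assumptions (scalar position, illustrative noise variances); a real system would fuse full position, velocity and attitude states.

    def kalman_step(x, p, u, z, q=0.5, r=4.0):
        """One 1D Kalman step fusing an INS displacement with a GPS fix.

        x, p: position estimate and its variance; u: INS-derived
        displacement since the last step; z: absolute GPS position.
        q, r: process (INS drift) and measurement (GPS) noise
        variances; all numeric values are illustrative.
        """
        # Predict: advance the position with the INS displacement.
        x_pred = x + u
        p_pred = p + q
        # Update: correct the prediction with the absolute GPS fix.
        k = p_pred / (p_pred + r)          # Kalman gain
        x_new = x_pred + k * (z - x_pred)
        p_new = (1 - k) * p_pred
        return x_new, p_new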

Making use of vehicle running characteristics in map matching is therefore a research hotspot, and several studies propose novel map-matching methods (Liu et al., 2017; Rohani et al., 2016; Zeng et al., 2016). Hidden Markov models (HMMs) and heuristic algorithms are competitive approaches among these methods. For example, Mohamed et al. (2017) present a method named SnapNet, which provides accurate real-time map matching for cellular-based trajectory traces and employs a novel incremental HMM algorithm. Jagadeesh and Srikanthan (2017) propose a map-matching solution that combines the widely used HMM approach with the concept of drivers' route choice. Similar articles using HMMs include Atia et al. (2017), Zhou et al. (2017) and Wang and Zimmermann (2014). We argue that more and more heuristic algorithms will be applied to map matching; for example, Gong et al. (2017) develop a map-matching model that considers local geometric/topological information and a global similarity measure simultaneously, and adopt the ant colony algorithm to accomplish the optimization goal of this complex model.
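
To show the core of an HMM map matcher concretely, the sketch below runs Viterbi decoding over candidate road segments. It is a generic illustration, not SnapNet or any other cited system: the Gaussian emission model, the `dist` and `trans_prob` helpers and the `sigma` value are all assumptions.

    import math

    def viterbi_map_match(fixes, segments, dist, trans_prob, sigma=10.0):
        """Most likely road-segment sequence for a series of GPS fixes.

        Hidden states are candidate road segments; observations are the
        fixes. `dist(fix, seg)` and `trans_prob(seg1, seg2)` are
        caller-supplied geometry helpers (hypothetical here).
        """
        def log_emit(fix, seg):
            # Gaussian emission from the fix-to-segment distance.
            return -0.5 * (dist(fix, seg) / sigma) ** 2

        score = {s: log_emit(fixes[0], s) for s in segments}
        backptrs = []
        for fix in fixes[1:]:
            new_score, ptr = {}, {}
            for s in segments:
                # Best predecessor of s under the transition model.
                prev = max(segments, key=lambda p: score[p]
                           + math.log(trans_prob(p, s) + 1e-12))
                new_score[s] = (score[prev] + log_emit(fix, s)
                                + math.log(trans_prob(prev, s) + 1e-12))
                ptr[s] = prev
            score = new_score
            backptrs.append(ptr)
        # Backtrack from the best final segment to recover the path.
        path = [max(score, key=score.get)]
        for ptr in reversed(backptrs):
            path.append(ptr[path[-1]])
        return list(reversed(path))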

2.5 Global path planning

Global path planning determines the optimal driving path between the start point and the end point. Typical path planning algorithms, such as the Dijkstra algorithm, the Bellman-Ford algorithm, the Floyd algorithm and heuristic algorithms (Seshan and Maitra, 2014), are employed to combine the EM information and calculate the optimal path. Because global path planning is mature and already implemented commercially on a large scale, this paper will not cover the topic in depth.
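
Although mature, the core of such planners is compact. The sketch below is a minimal Dijkstra implementation over a road graph; the adjacency-list representation and edge costs are illustrative assumptions, and production planners add many refinements (heuristics, turn costs, live traffic).

    import heapq

    def dijkstra(graph, start, goal):
        """Optimal path on a road graph.

        `graph` maps node -> list of (neighbor, cost) pairs, where the
        cost could be distance or expected travel time from the EM.
        """
        pq = [(0.0, start, None)]   # (cost so far, node, predecessor)
        prev, done = {}, set()
        while pq:
            cost, node, parent = heapq.heappop(pq)
            if node in done:
                continue
            done.add(node)
            prev[node] = parent
            if node == goal:        # reconstruct the route back to start
                path = [node]
                while prev[path[-1]] is not None:
                    path.append(prev[path[-1]])
                return cost, path[::-1]
            for nbr, w in graph.get(node, []):
                if nbr not in done:
                    heapq.heappush(pq, (cost + w, nbr, node))
        return float("inf"), []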

2.6 The next step of the navigation system

In path planning, the location module is required to integrate information from the EM. Even though the key technologies of location (i.e., the location system and the EM) in the self-driving car have matured and been implemented at the commercial level, several challenges remain:

  1. The tradeoff between cost and accuracy: the current location system of a self-driving car depends mainly on the satellite location system; however, achieving a stable and accurate satellite-based location requires high-accuracy location information extraction, and thus high spending on additional equipment. It is therefore necessary to reduce cost for future large-scale commercial use while maintaining location accuracy.

  2. The tradeoff between location accuracy and speed: the self-driving car must be accurately located even when moving at high speed; however, higher speed leads to faster updates of location information, so more information must be integrated. Due to the limited computation ability and processing speed (i.e., CPU) of the equipment, timely calculation of location information may not be achievable, leading to location inaccuracy. Obtaining a high-accuracy location under high-speed conditions is therefore a future research direction.

  3. A special EM for the self-driving car: at present, general-purpose EMs are used in self-driving, but a special EM for the self-driving car should be developed that considers human identity, e.g., hobbies and profession, which can reduce the response time of the EM.

2.7 Environment perception

Environment perception is the second module of a self-driving car. To provide the necessary information for the car's control decisions, the car is required to independently perceive the surrounding environment. The major methods of environment perception include laser navigation, visual navigation and radar navigation.

During environment perception, multiple sensors (e.g., laser and radar sensors) are deployed to sense comprehensive information about the environment, which is then fused to perceive it. Among the sensors, the laser sensor bridges the real world and the data world, the radar sensor is used for distance perception and the visual sensor is used for traffic sign recognition. A typical perception scheme is shown in Figure 3: the self-driving car fuses data from laser, radar and visual sensors, and generates the surrounding environment perception, such as road edge stones, obstacles, road markings and so on.

2.8 Laser perception

Strictly speaking, the laser perception system is a kind of radar system. In laser perception, a continuous laser beam or laser pulse is emitted toward the target, and the reflected signal is received at the transmitter. By measuring the reflection time, the reflected signal strength and the shift of the operating frequency, point-cloud data of the target can be generated, and then the object information, such as location (distance and angle), shape (size) and state (velocity and attitude), can be calculated.
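
The range computation described here reduces to d = c*t/2 for a round-trip time t. The sketch below converts one laser return into a 3D point in the sensor frame, assuming the beam's azimuth and elevation angles are known from the scanner geometry; it is illustrative, not any vendor's processing pipeline.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def lidar_point(return_time_s, azimuth_rad, elevation_rad):
        """Convert one laser return into a 3D point in the sensor frame.

        Range is half the round-trip time multiplied by the speed of
        light (d = c * t / 2); the beam angles come from the scanner
        geometry and are assumed known.
        """
        r = C * return_time_s / 2.0
        x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
        y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
        z = r * math.sin(elevation_rad)
        return x, y, z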

The laser sensor is the main sensor in environment perception (Ackerman, 2016b). According to the dimension of sensed information, laser sensors can be classified into single-line laser radar, multi-line laser radar and three-dimensional omnidirectional laser radar (Fei, 2012). These lasers usually work in a complicated outdoor environment, and each kind of laser radar has its defects. For instance, false detection often occurs with a single-line laser radar because it senses little information; in a multi-line laser radar, the asymmetric information from the different lines limits the output accuracy, and the field of view is smaller because of the small intersection region among the lines; and the large amount of data generated by a three-dimensional omnidirectional laser radar makes real-time output difficult for the algorithms. Therefore, in a complex outdoor environment, especially with moving cars and humans, it is challenging to reasonably configure different kinds of sensors to locate moving obstacles and to improve the output performance in terms of field of view, accuracy and real-time operation.

To speed up the introduction of the self-driving car into the market, many other issues, such as accuracy and cost, must be considered. Since the laser sensor is the main environment perception sensor of the self-driving car, the accuracy and reliability of the laser radar are important and represent the degree of maturity of a self-driving car. In addition, cost is an important factor in deploying laser sensors. For instance, the Velodyne HDL-64E 3D omnidirectional laser radar installed on the Google self-driving car achieves high accuracy and reliability; however, it costs up to $80,000. How to reduce the price of the laser sensor in a self-driving car is therefore another important research topic.

In the field of LiDAR, the major manufacturers are Velodyne LiDAR, Inc., Ibeo Automotive Systems GmbH and Quanergy Systems, Inc. Velodyne does not provide algorithm products, but supplies raw laser radar data to the automakers; it has three LiDAR products: the HDL-64E (64 lines), the HDL-32E (32 lines) and the VLP-16 (16 lines). Unlike Velodyne, Ibeo's products offer a complete solution including hardware and software; for current Ibeo self-driving cars, the miniLUX and LUX are the two products commonly used in multi-point layout combinations. Quanergy is a new venture company in the field of LiDAR; at CES 2016, Quanergy showed its new product, the S3, whose size is close to that of a business-card box.

2.9 Radar perception

Radar perception is generally used for distance detection, which is achieved by calculating the return time of the millimeter wave transmitted by the radar sensor. As radar distance detection is relatively mature, this paper will not cover the topic in depth, but will only introduce some products.

The major global suppliers of automotive millimeter wave radar are traditional enterprises with strengths in automotive electronics, such as Bosch, Continental, Hella, Fujitsu Ten, DENSO, TRW, Delphi, Autoliv and Valeo. Among them, the core product of Bosch is a long-range millimeter wave radar, mainly used in ACC systems; the latest product, the LRR4, can detect vehicles 250 meters away and is currently the millimeter wave radar with the farthest detection range and the highest market share, although Bosch's customers are concentrated on Audi and Volkswagen. Continental has a wide range of customers and a complete product line; its main products are 24 GHz millimeter wave radars, which have a high market share in the field of Stop & Go ACC. Hella has the widest range of customers in the 24 GHz ISM field: ten million of its 24 GHz millimeter wave radars have come off the assembly line and 6.5 million have been shipped, so Hella's market share ranks first in the world; its fourth-generation 24 GHz radar sensor will be ready for global production in 2017. Fujitsu Ten and DENSO dominate the Japanese market, where Fujitsu Ten has a slightly larger market share. Fujitsu, Panasonic and DENSO will be strong competitors in the 79 GHz radar market in the future.

2.10 Visual perception

Visual perception is necessary for a self-driving car; e.g., it is necessary to identify traffic signals. Nowadays, most traffic signals are designed for human vision; therefore, the self-driving car must recognize them itself. Besides, machine vision is also used for location, navigation, motion judgment and so on. However, vision-based environment perception is complex due to the large amount of information to process and the inefficiency of current algorithms; in particular, the most difficult problem in visual perception is how to ensure the reliability and robustness of the algorithm (Ben-Afia et al., 2014).

There are two main development directions in visual-perception-based intelligent vehicle navigation. One is visual Simultaneous Localization And Mapping (SLAM) based on the map. The other is visual understanding of captured images, which uses machine vision and machine learning to process the image so that the self-driving car can reconstruct the 3D scene for navigation and recognize traffic lights, traffic signs and stop lines (Gim Hee Lee and Pollefeys, 2013; Hane et al., 2015). Because machine vision is already intensively researched, this paper skips a detailed introduction; the literature (Ben-Afia et al., 2014), which details visual navigation, can be used as a reference. This paper mainly introduces SLAM.

The SLAM problem can be described as follows: a robot moves from an unknown location in an unknown environment; during the movement, the robot localizes itself according to position estimation and sensor data, and at the same time builds an incremental map. The schematic diagram of SLAM, including localization and mapping, is shown in Figure 4. In the figure, Sk is the data gathered from the sensors, Mk−1 is the local map at the previous time k−1 and Rk is the self-driving car's position at time k. Starting from an unknown location in an unknown environment, the robot locates itself and builds the incremental map as it moves, based on the position estimate Rk and the sensed data Sk.
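
The loop in Figure 4 can be summarized in code. The skeleton below is purely illustrative: `predict`, `correct` and `update` each stand for entire research areas (motion models, data association, map representation), and all object and method names are hypothetical.

    def slam_loop(robot, sensors, initial_map):
        """Skeleton of the SLAM cycle in Figure 4 (all helpers hypothetical).

        At each step k: predict the pose R_k from the motion model,
        correct it against the previous map M_{k-1} using the sensor
        data S_k, then extend the map incrementally.
        """
        m = initial_map                  # M_0: initially empty local map
        pose = robot.initial_pose()      # unknown start, arbitrary origin
        while robot.is_moving():
            s_k = sensors.read()             # S_k: sensor data at time k
            pose = robot.predict(pose)       # motion-model prediction of R_k
            pose = m.correct(pose, s_k)      # localize against M_{k-1}
            m = m.update(pose, s_k)          # build incremental map M_k
        return m, pose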

Four issues must be solved in SLAM:

  1. How to express the environment, namely, the environment map expression method.

  2. How to obtain the environment information: the robot roams in the environment and records the perception data from its sensors, which relates to robot localization and environment feature extraction.

  3. How to express the obtained environment information and refresh the map accordingly, which requires a suitable way to describe and handle uncertainty.

  4. How to develop a stable and reliable SLAM system.

To address the computational complexity of the Scale Invariant Feature Transform (SIFT), a new method called Iterative SIFT Monte Carlo Localization SLAM was proposed in the literature (Dongbo, 2012); the vision-based SLAM solution for an indoor robot in the same work is also helpful for a self-driving car. The environment perception function is prone to the influence of vehicle condition, vibration, performance degradation, road emergencies, weather conditions, accidental shading and so on, so improving the reliability and robustness of laser radar and visual sensors is a great challenge for a self-driving car. Nevertheless, to bring the self-driving car to market, these reliability and robustness challenges must be solved; they are key problems for the self-driving car's widespread application.

2.11 Vehicle control

Vehicle control mainly includes vehicle speed and direction control; its main functionalities are perception of the vehicle's status and execution of the vehicle's control method. The position of vehicle control in a self-driving framework is shown in Figure 5. To calculate the target vehicle speed and direction, the environment perception results, vehicle status, driving target, traffic regulations and driving knowledge are fed as input into the perception module; the vehicle control algorithm then calculates the control targets, which are passed to the vehicle control system. Finally, the vehicle control system executes these instructions to control the vehicle's direction, speed, lights, horn and so on.

The control platform is the core component of the self-driving car and controls the various systems of the vehicle, including the anti-lock braking system, drive anti-slip system, electronic stability program, Sensotronic Brake Control, electronic brake force distribution, auxiliary brake system, supplementary restraint system, radar anti-collision system, electronically controlled automatic transmission, continuously variable transmission, cruise control system, electronically controlled suspension, electric power steering system and so on. The control platform mainly includes two parts: the electronic control unit (ECU) and the communication bus. The ECU implements the control algorithm, whereas the communication bus realizes the communication between the ECU and the mechanical parts.

2.12 The perception of vehicle speed and direction

Perception of the vehicle's own status mainly includes speed and direction perception. A photoelectric code disc is usually used for speed perception, while both the photoelectric angle code disc and the potentiometer are employed for direction perception. The photoelectric angle code disc is a widely used digital encoding sensor, which converts the measured angular displacement into a digital signal output. There are two types: the absolute photoelectric code disc and the incremental photoelectric code disc. With an absolute photoelectric code disc, the vehicle angle is obtained by measuring the absolute position of the rotating object, while with the incremental photoelectric code disc, the vehicle angle is calculated by measuring the accumulated angular displacement of the rotating object during turning and integrating the accumulated angle over a period (Yongfeng, 2007).
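
As a worked example of the incremental case, the sketch below converts a pulse count into wheel speed; the encoder resolution and wheel radius are illustrative values, not taken from any cited system.

    import math

    def wheel_speed_mps(pulse_count, dt_s, pulses_per_rev=1024,
                        wheel_radius_m=0.3):
        """Vehicle speed from an incremental photoelectric code disc.

        Angular displacement = pulses * (2*pi / pulses_per_rev);
        dividing by the sample period gives angular velocity, and
        multiplying by the wheel radius gives linear speed.
        """
        angle_rad = pulse_count * (2.0 * math.pi / pulses_per_rev)
        return (angle_rad / dt_s) * wheel_radius_m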

On the other hand, the vehicle can use GPS/INS or an attitude and heading reference system (AHRS) to perceive its own status. An AHRS is a high-performance three-dimensional motion measurement system based on micro-electromechanical systems. It consists of a three-axis gyroscope, a three-axis inertial measurement unit (IMU), a three-axis electronic compass and other auxiliary three-axis motion sensors. An AHRS can provide timely 360-degree posture information in both static and dynamic environments; therefore, it is widely used in many automatic control and test systems.

2.13 Vehicle control method

Vehicle control methods usually adopt the PID algorithm or an improved PID algorithm. The PID control algorithm is the most common control algorithm in current industrial production processes. The principle of the PID algorithm is shown in Figure 6. In the figure, r(t) is the desired input (reference) signal, e(t) is the feedback error signal, u(t) is the control signal calculated by the PID algorithm and c(t) is the actual output signal of the controlled object. As the figure shows, the PID algorithm employs three parameterized mathematical operations, namely proportion, integration and differentiation, to control the target: the difference between the control target and the actual value is used as the input of the feedback loop to drive the output toward the target.
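
A discrete-time form of the loop in Figure 6 is sketched below; the gain values in the usage example are illustrative and would be tuned per vehicle.

    class PID:
        """Discrete PID controller following Figure 6: u(t) from e(t) = r(t) - c(t)."""

        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, r, c, dt):
            e = r - c                            # feedback error e(t)
            self.integral += e * dt              # integral term
            derivative = (e - self.prev_error) / dt
            self.prev_error = e
            return self.kp * e + self.ki * self.integral + self.kd * derivative

    # Example: regulate vehicle speed toward 20 m/s (gains illustrative).
    # controller = PID(kp=0.8, ki=0.1, kd=0.05)
    # throttle = controller.step(r=20.0, c=measured_speed, dt=0.05)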

The classic PID algorithm has problems such as complex parameter tuning and low adaptability; in particular, when the transmission system is highly nonlinear and the longitudinal interference is complex, the control accuracy becomes too low. The improved PID algorithms employed in recent self-driving cars can overcome these drawbacks. For instance, a vehicle control method including longitudinal and lateral control was proposed in the literature (Pan, 2012). The longitudinal control was achieved by an expert control method formulated from expert rules established from driving experience; it was found that control accuracy was greatly improved by this longitudinal control even when the system was nonlinear and the longitudinal interference complex. Moreover, a new lateral control algorithm for autonomous vehicles based on a composite of the Cerebellar Model Articulation Controller and PID control was proposed in the literature. Combined with the longitudinal control, the system can compensate automatically even when the model and input signal change quickly and unpredictably, so the self-driving car can drive steadily and accurately in various urban environments.

3. The main research institutions of self-driving car

The idea of the self-driving car is as old as the invention of the vehicle itself. Only in recent years, with the development of sensor technology, computing technology and the mobile internet, has the self-driving car reached the level of laboratory application. The self-driving cars that appear in trials represent the top level of this industry, so the main research institutions can be identified by reviewing several competitions.

The first competition is the DARPA Grand Challenge, which was held in 2004, 2005 and 2007 (DARPA, 2007). In 2004, 15 teams joined the first challenge, held in the Mojave Desert region of the USA; the goal was to pass through 240 km of desert, but no team completed the task. In 2005, five teams completed the task, with a winning time of 6 hours and 54 minutes. Different from the previous two challenges, the third DARPA challenge, held in 2007, was an urban challenge in which the test environment was an urban road. Six teams completed the competition requirements; the Stanford Racing team took second place, and the teams from Virginia Tech, Blacksburg, Virginia and MIT, Cambridge, Massachusetts came in third and fourth, respectively (Table II).

The second trial is the European Land-Robot Trial (ELROB) (Trial, 2017); the first ELROB took place in the infantry training area near Hammelburg on May 15-18, 2006. Unlike DARPA, ELROB bridges the gap between industry and research in the field of ground robotics. The competition is held in two forms, military and civilian, which have taken turns every year since 2007; the trials listed in Table III were held in odd-numbered years. The scenarios of the last four trials were reconnaissance and surveillance, autonomous navigation, camp security and transport mule; the transport mule is a typical scenario for a self-driving car.

The third competition, named the "China Smart Car Future Challenge", has been held since 2009. A major research plan, "visual and auditory information cognitive computing" (study period: 2008-2015), was launched by the National Natural Science Foundation of China (NNSFC) in August 2008. As an important component of this plan, the purposes of the China Smart Car Future Challenge are fourfold: first, to exchange experience in a real physical environment; second, to test the research progress of "visual and auditory information cognitive computing"; third, to explore efficient computational models and improve the computer's ability to understand complex perceptual information and process huge amounts of heterogeneous information efficiently; and fourth, to promote the major research plan and thus improve its original innovation. The first challenge held in a closed environment took place in 2014 and included a comprehensive road test (about 15 km of highway and suburban road) and a special road test (urban road). The emphasis of this challenge is the self-driving car's "4S" performance, namely safety, smartness, smoothness and speed.

From the above competitions, we can identify many famous self-driving car institutes, which are summarized in Tables III and IV.

Moreover, the development process of a self-driving car can be represented by some typical events, the details of which are described as follows:

In September 2011, a self-driving car trial named "Made in Germany" was conducted by the Free University of Berlin. The self-driving car set off from the Brandenburg Gate in Berlin, passed through the Berlin International Conference Center and finally returned safely to the starting point. During this journey, the self-driving car traveled nearly 20 kilometers, passing 46 traffic lights and two roundabouts.

In July 2013, a self-driving car developed by the Artificial Vision and Intelligent Systems Laboratory (VisLab) of the University of Parma, Italy, drove around the old section of Parma without any human participation. The self-driving car successfully passed two-lane roads and roundabouts, and recognized traffic lights, pedestrians crossing the road, man-made raised pavement and so on.

Other events in VisLab include:

  • In 2010, a driverless van completed the longest-ever trip of around 13,000 kilometers (8,077 miles), which began in Italy and ended in China, within three months (Kent, 2010).

  • In 2013, a driverless car testing program named PROUD-Car Test was held by VisLab; it showed that a car with nobody in the driver's seat can move successfully on a mixed traffic route (rural, freeway and urban) open to public traffic.

  • In 2014, a new driverless vehicle named Deeva was designed with an appearance similar to a normal vehicle (VisLAB, 2016).

Nonetheless, Google is the most famous agency in the area of the self-driving car.

Google's self-driving car is developed by a team headed by Sebastian Thrun, the Director of the Artificial Intelligence Laboratory at Stanford University. The accumulation of Google's self-driving car technology began in 2005, and the first driving license for Google's self-driving car was issued in Nevada, USA, in May 2012. By the end of 2014, the project's eight self-driving cars had been tested over more than 700,000 kilometers; even though the routes covered urban, highway, mountainous and various other roads, no at-fault accident happened. Before Christmas Eve 2014, Google announced the "first real build" of its self-driving vehicle.

4. The trend and discussion of self-driving car

From the previous sections, we know that experimental prototypes of the self-driving car have been developed, some typical self-driving cars have been tested over more than one million kilometers and test driving licenses for self-driving cars have been issued by some states in the USA. However, the realization of the self-driving car is influenced not only by the self-driving technology itself but also by vehicle cost, social habits, human psychology, law and so on; therefore, there is a long way to go before the full commercialization of the self-driving car.

At present, the self-driving car is equipped with many sensors that do not exist in a traditional car. For instance, the laser and vision sensors used for environment perception are the typical key sensors. These sensors are expensive and have demanding operating requirements; moreover, their service lifetime is greatly reduced when they are installed on a moving car, and their reliability is also a concern.

In addition, it takes time for human social habits to accept the self-driving car, for instance, the sharing of self-driving cars among people, the development of automatic supporting facilities for replenishing the self-driving car's energy, the degree of human psychological acceptance of the self-driving car and the development of laws to deal with traffic accidents involving self-driving cars.

Therefore, there are obstacles on the way toward the self-driving car, of which four are discussed below.

4.1 Contention between camera and LiDAR

There are two key sensors for the self-driving car to perceive the environment, LiDAR and vision, and each has its own advantages and disadvantages. The advantage of LiDAR is that it can perceive the surrounding environment of the vehicle more clearly and is less interfered with by external factors, especially light. The three-dimensional imaging laser radar is currently the most efficient sensor, and also the most accurate sensor for obtaining a wide-range three-dimensional scene image. The drawback is that the current LiDAR production process is complex and expensive: taking the Velodyne HDL-64 as an example, the current price is up to $80,000. In contrast, vision sensors are cheap, but their ability to perceive the environment is weaker than the laser's and is influenced by the quality of the algorithm and by the environment, especially light. These pros and cons cause the debate over environment perception: among the implementation technologies of the self-driving car, Google, as a representative, mainly uses LiDAR, while Tesla uses vision. Some literature (Harris, 2015) argues that cameras will replace LiDAR in the future; we believe instead that LiDAR production technology will achieve new breakthroughs driven by strong demand, so the cost of LiDAR will decrease significantly (Shchetko, 2014). The laser will be the main sensor of environment perception in the full self-driving mode, while vision will assist the self-driving car in perceiving the environment; in the future, a LiDAR-based, vision-assisted mixed environment perception model will be required.

4.2 Social habit

Social habit is an important issue in sociological research, and as self-driving matures it will have a great impact on people's transportation. First, taxi and truck drivers will be replaced, and it will be difficult to effectively reduce the impact this technological progress brings to various industries. Second is the impact on public transport: the self-driving car may make travel more convenient, but it may also disrupt the existing social transport model, leading to fewer bus services and more congested urban traffic. Finally, self-driving cars may bring different perceptions of wealth: today, the sharing economy is growing worldwide, and in the future the self-driving car is likely to promote the sharing economy in the field of road traffic, making human beings more willing to share. Typical literature, such as Richards and Stedmon (2016), Banks and Stanton (2016), Banks et al. (2014), Brooks (2017), Conejero et al. (2016), Yang and Coughlin (2014) and Surden and Williams (2016), discusses these issues.

4.3 Human psychology

This problem stems from two aspects: the human demand for security and social/ethical issues. For more than a century, people have been accustomed to controlling vehicles themselves. Unlike other new things, the self-driving car could cause passenger injuries or even death, which has a great impact on human psychology. Many people will not use a self-driving car in order to feel psychologically secure, while a few will not use self-driving because of their love of driving. Therefore, for a long time, self-driving cars and human-driven cars will co-exist. In terms of social ethics: how should the car choose between passenger safety and pedestrian safety when both are in danger? When danger occurs, how should it choose between a young child and an old man? How to make an emergency avoidance decision is always a human psychological problem. These issues have also attracted the attention of scholars (Kirkpatrick, 2015; Baruch, 2016; Diels and Bos, 2016; Li et al., 2016; Lee et al., 2015).

4.4 Law problem

The current legal system cannot yet accommodate the self-driving car. There are four problems. First is the license problem: at present, many countries have not made rules for self-driving cars; no country or region has given self-driving cars a license, and only California and some other American states have issued test permits. Moreover, as the self-driving car progresses, is it legal for existing vehicles to be equipped with a self-driving control system? Second is driving regulations: whether the driving regulations of self-driving cars should be determined according to the requirements of human driving is also an open issue in the legal profession. Third is the definition of responsibility: how should responsibility be defined? Whether someone should sit in the driver's seat, whether the passenger sitting in the driving position should have driving skills and whether the passengers should bear the corresponding responsibility are all legal problems of self-driving. Fourth is information security: does the self-driving car have the right to record the path it travels, and is the mapping done by a self-driving car related to the information security of a country or region?

In any case, the legal situation is moving forward amid dispute (Greenblatt, 2016). The Vienna Convention on Road Traffic, which governs road traffic management, was amended at the United Nations on March 23, 2016, removing an obstacle to applying the self-driving car in transportation. The 1958 Agreement, developed by the United Nations World Forum for Harmonization of Vehicle Regulations, proposes to remove the speed limit for the active steering function, which was expected to be discussed in 2017. The USA is doing its best to build a self-driving car legal framework: not only have some states enacted relevant test bills and rules, but the federal government has also formulated related rules and laws, including the Federal Automated Vehicles Policy (USDO Transportation, 2016) and the Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act (US Congress, 2017). Other countries are only enacting some testing-allowance terms and considering relevant legislation.

Nowadays, more and more driving assistance technologies originating from the self-driving car are being used in the traditional car. It can be predicted that the realization of the self-driving car will gradually develop from assisted driving, to self-driving in special environments (such as highways), and finally to total self-driving. Many driving assistance technologies, such as lane keeping assist and adaptive cruise control, have already been commercialized. In the near future, supervised commercial self-driving in some special sections, such as highways, will be developed, which will be a milestone of self-driving. Eventually, the fully self-driving car will be accepted as a common driving pattern.

Figures

Figure 1: A classification of the key technology for self-driving

Figure 2: On-board car navigation system

Figure 3: A typical perception scheme of self-driving car

Figure 4: SLAM schematic diagram

Figure 5: The position of vehicle control in a self-driving car

Figure 6: The principle of the PID algorithm

Table I The classification of vehicle automation by the National Highway Traffic Safety Administration (NHTSA)

Level Judgment standard
No-automation (Level 0) The driver completely controls the vehicle all the time
Function-specific automation (Level 1) Individual vehicle controls are automated, such as electronic stability control or automatic braking
Combined function automation (Level 2) At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping
Limited self-driving automation (Level 3) The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a “sufficiently comfortable transition time” for the driver to do so
Full self-driving automation (Level 4) The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars

Table II The DARPA Grand Challenge champions

Year Vehicle Team name Team home
2004 Sandstorm Red Team Carnegie Mellon University, Pittsburgh, Pennsylvania
2005 Stanley Stanford Racing Team Stanford University, Palo Alto, California
2007 Boss Tartan Racing Carnegie Mellon University, Pittsburgh, Pennsylvania

Table III The outstanding participants in self-driving-related projects of ELROB over the years

Year Project Team/source
2007 Urban scenario: situation awareness in urban environment University of Würzburg; Telerob; University of Hannover
2009 Transport – Mule (non-urban) University of Hannover; University of Kaiserslautern; Robotics Inventions
2011 Transport – Mule Fraunhofer FKIE; University of Hannover
2013 Autonomous navigation using GPS, GLONASS and Galileo MuCAR/University of the Bundeswehr Munich; RIS/LAAS/CNRS; NAMT/Nizhny Novgorod Automotive Technical School (NAMT)

Table IV The best players in the Future Challenge over the years

Year Team From
2009 Self-driving car Hunan University
2010 Intelligent Pioneer Institute of Advanced Manufacturing Technology, Hefei Institutes of Physical Science Chinese Academy of Science and Chery Central Research Institute
2011 Explore Lion National University of Defense Technology
2012 Brave Lion 3 Military Transportation University
2013 Smart 2 Beijing Institute of Technology and BYD Corporation
2014 Junjiao Lion Military Transportation University

References

Ackerman, E. (2016a), “After mastering Singapore’s streets, NuTonomy’s Robo-taxis are poised to take on new cities”, available at: https://spectrum.ieee.org/transportation/self-driving/after-mastering-singapores-streets-nutonomys-robotaxis-are-poised-to-take-on-new-cities (accessed October 5, 2017).

Ackerman, E. (2016b), “Lidar that will make self-driving cars affordable”, IEEE Spectrum, Vol. 53, p. 14.

Atia, M.M., Hilal, A.R., Stellings, C., Hartwell, E., Toonstra, J., Miners, W.B. and Basir, O.A. (2017), “A low-cost lane-determination system using GNSS/IMU fusion and HMM-Based multistage map matching”, IEEE Transactions on Intelligent Transportation Systems, Vol. 18 No. 11, pp. 1-11.

Banks, V.A. and Stanton, N.A. (2016), “Keep the driver in control: automating automobiles of the future”, Applied Ergonomics, Vol. 53, pp. 389-395.

Banks, V.A., Stanton, N.A. and Harvey, C. (2014), “Sub-systems on the road to vehicle automation: Hands and feet free but not ‘mind’ free driving”, Safety Science, Vol. 62, pp. 505-514.

Barker, J., Sam, M., Evan, B., Tim, B. and Justin, G. (2013), An Overview of the State of the Art in Autonomous Vehicle Technology and Policy, University of Washington School of Law, Washington, DC.

Baruch, J. (2016), “Steer driverless cars towards full automation”, Nature, Vol. 536, p. 127.

Ben-Afia, A., Deambrogio, L., Salos, D., Escher, A.-C., Macabiau, C., Soulier, L. and Gay-Bellile, V. (2014), “Review and classification of vision-based localisation techniques in unknown environments”, IET Radar, Sonar & Navigation, Vol. 8 No. 9, pp. 1059-1072, available at: http://digital-library.theiet.org/content/journals/10.1049/iet-rsn.2013.0389

Berger, C. (2014), “From a competition for self-driving miniature cars to a standardized experimental platform: concept, models, architecture, and evaluation”, Journal of Software Engineering for Robotics, Vol. 5 No. 1, pp. 63-79.

Berger, C. and Dukaczewski, M. (2014), “Comparison of architectural design decisions for resource-constrained self-driving cars – a multiple case-study”, Proceedings of the INFORMATIK, pp. 2157-2168.

Brooks, R. (2017), “The big problem with self-driving cars is people”, available at: https://spectrum.ieee.org/transportation/self-driving/the-big-problem-with-selfdriving-cars-is-people (accessed October 5, 2017).

Computerworld (2012), “Self-driving cars a reality for ‘ordinary people’ within 5 years, says Google’s Sergey Brin”, available at: www.computerworld.com/article/2491635/vertical-it/self-driving-cars-a-reality-for--ordinary-people--within-5-years--says-google-s-sergey-b.html (accessed March 13, 2018).

Conejero, J.A., Jord, N.C. and Sanabria-Codesal, E. (2016), “An algorithm for self-organization of driverless vehicles of a car-rental service”, Nonlinear Dynamics, Vol. 84 No. 1, pp. 107-114.

DARPA (2007), “Urban challenge”, available at: http://archive.darpa.mil/grandchallenge/ (accessed May 13, 2016).

Diels, C. and Bos, J.E. (2016), “Self-driving carsickness”, Applied Ergonomics, Vol. 53, pp. 374-382.

Dongbo, L. (2012), “The research on methods of mobile robot particle filter localization and mapping”, doctor thesis, Hunan University, Changsha.

Farrell, J. and Barth, M. (1999), The Global Positioning System and Inertial Navigation, McGraw-Hill, New York, NY.

Fei, Y. (2012), Real-Time Detecting and Tracking of Moving Objects using 3D LIDAR, Zhejiang University, Hangzhou.

Lee, G.H., Fraundorfer, F. and Pollefeys, M. (2013), “Motion estimation for self-driving cars with a generalized camera”, IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, pp. 2746-2753.

Gong, Y.-J., Chen, E., Zhang, X., Ni, L.M. and Zhang, J. (2017), “AntMapper: an ant colony-based map matching approach for trajectory-based applications”, IEEE Transactions on Intelligent Transportation Systems, Vol. 19 No. 2, pp. 390-401.

Greenblatt, N.A. (2016), “Self-driving cars and the law”, IEEE Spectrum, Vol. 53, pp. 46-51.

Hane, C., Sattler, T. and Pollefeys, M. (2015), “Obstacle detection for self-driving cars using only monocular cameras and wheel odometry”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, pp. 5101-5108.

Harris, M. (2015), “A cheaper way for robocars to avoid pedestrians”, IEEE Spectrum, Vol. 52 No. 7, p. 16.

Harris, M. (2016), “Meet Zoox, the Robo-Taxi start-up taking on Google and Uber”, available at: https://spectrum.ieee.org/transportation/advanced-cars/meet-zoox-the-robotaxi-startup-taking-on-google-and-uber (accessed October 5, 2017).

Hassler, S. (2017), “Self-driving cars and trucks are on the move”, IEEE Spectrum, Vol. 54 No. 1, p. 6.

Ibanez-Guzman, J., Laugier, C., Yoder, J.-D. and Thrun, S. (2012), “Autonomous driving: context and state-of-the-art”, in Eskandarian, A. (Ed.), Handbook of Intelligent Vehicles, Springer, London, pp. 1271-1310.

Jagadeesh, G.R. and Srikanthan, T. (2017), “Online map-matching of noisy and sparse location data with hidden Markov and Route choice models”, IEEE Transactions on Intelligent Transportation Systems, Vol. 18 No. 9, pp. 2423-2434.

Kent, J.L. (2010), “Driverless van crosses from Europe to Asia”, available at: http://edition.cnn.com/2010/TECH/innovation/10/27/driverless.car/ (accessed June 10, 2016).

Kirkpatrick, K. (2015), “The moral challenges of driverless cars”, Communications of ACM, Vol. 58 No. 8, pp. 19-20.

Lee, J.-G., Kim, K.J., Lee, S. and Shin, D.-H. (2015), “Can autonomous vehicles be safe and trustworthy? Effects of appearance and autonomy of unmanned driving systems”, International Journal of Human-Computer Interaction, Vol. 31 No. 10, pp. 682-691.

Levinson, J., Askeland, J., Becker, J., Dolson, J., Held, D., Kammel, S., Kolter, J.Z., Langer, D., Pink, O., Pratt, V., Sokolsky, M., Stanek, G., Stavens, D., Teichman, A., Werling, M. and Thrun, S. (2011), “Towards fully autonomous driving: systems and algorithms”, Intelligent Vehicles Symposium, Vol. 32 No. 14, pp. 163-168.

Li, J., Zhao, X., Cho, M.-J., Ju, W. and Malle, B. (2016), “From trolley to autonomous vehicle: perceptions of responsibility and moral norms in traffic accidents with self-driving cars”, SAE 2016 World Congress and Exhibition, Detroit.

Litman, T. (2015), “Autonomous vehicle implementation predictions: implications for transport planning”, Transportation Research Board 94th Annual Meeting, Washington, DC, January 11-15.

Liu, X., Liu, K., Li, M. and Lu, F. (2017), “A ST-CRF map-matching method for low-frequency floating car data”, IEEE Transactions on Intelligent Transportation Systems, Vol. 18 No. 5, pp. 1241-1254.

Mohamed, R., Aly, H. and Youssef, M. (2017), “Accurate real-time map matching for challenging environments”, IEEE Transactions on Intelligent Transportation Systems, Vol. 18 No. 4, pp. 847-857.

Pan, Z. (2012), Research on Motion Control Approaches of Autonomous Vehicle in Urban Environment, University of Science and Technology of China, Hefei.

Richards, D. and Stedmon, A. (2016), “To delegate or not to delegate: a review of control frameworks for autonomous cars”, Applied Ergonomics, Vol. 53, Part B, pp. 383-388.

Rohani, M., Gingras, D. and Gruyer, D. (2016), “A novel approach for improved vehicular positioning using cooperative map matching and dynamic base station DGPS concept”, IEEE Transactions on Intelligent Transportation Systems, Vol. 17 No. 1, pp. 230-239.

Ross, P.E. (2014), “Driverless cars: optional by 2024, mandatory by 2044”, available at: https://spectrum.ieee.org/transportation/advanced-cars/driverless-cars-optional-by-2024-mandatory-by-2044 (accessed October 1, 2017).

Seshan, J. and Maitra, S. (2014), “Efficient route finding and sensors for collision detection in Google’s driverless car”, International Journal of Computer Science and Mobile Computing, Vol. 3 No. 12, pp. 70-78.

Shchetko, N. (2014), “Laser eyes pose price hurdle for driverless cars”, available at: www.luxresearchinc.com/sites/default/files/WSJ_7-21-14.pdf (accessed June 13, 2016).

Surden, H. and Williams, M.-A. (2016), “Technological opacity, predictability, and self-driving cars”, Cardozo Law Review, Vol. 38, available at: https://ssrn.com/abstract=2747491; http://dx.doi.org/10.2139/ssrn.2747491 (accessed March 14, 2016).

Thrun, S. (2010), “Toward robotic cars”, Communications of the ACM, Vol. 53 No. 4, pp. 99-106.

Trial (2017), “ELROB – The European Land Robot Trial”, available at: www.elrob.org/elrob (accessed October 2, 2017).

Urmson, C. and Whittaker, W.R. (2017), “Self-driving cars and the urban challenge”, IEEE Intelligent Systems, Vol. 54 No. 1, p. 6.

US Congress (2017), “Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act”, available at: www.congress.gov/bill/115th-congress/house-bill/3388/text (accessed July 13, 2017).

USDO Transportation (2016), Federal Automated Vehicles Policy, available at: www.transportation.gov/AV/federal-automated-vehicles-policy-september-2016 (accessed May 8, 2017).

Van Arem, B. (2014), “Implications of self driving cars”, IEEE Intelligent Systems, Vol. 23, pp. 66-68.

VISLAB (2016), available at: http://vislab.it/automotive/ (accessed August 13, 2016).

Walker, G.H., Stanton, N.A. and Young, M.S. (2001), “Where is computing driving cars?”, International Journal of Human–Computer Interaction, Vol. 13 No. 2, pp. 203-229.

Wang, G. and Zimmermann, R. (2014), “Eddy: an error-bounded delay-bounded real-time map matching algorithm using HMM and online Viterbi decoder”, Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Dallas, TX.

Yang, J. and Coughlin, J.F. (2014), “In-vehicle technology for self-driving cars: advantages and challenges for aging drivers”, International Journal of Automotive Technology, Vol. 15 No. 2, pp. 333-340.

Yida, X. (2013), Autonomous Vehicle Platform and Key Technique Research Focusing on Urban Environment, Shanghai Jiao Tong University, Shanghai.

Yongfeng, X. (2007), Design and Realization of the Automatic Drive Electric Vehicle Substrate Control System, master’s thesis, Shanghai Jiao Tong University, Shanghai.

Zeng, Z., Zhang, T., Li, Q., Wu, Z., Zou, H. and Gao, C. (2016), “Curvedness feature constrained map matching for low-frequency probe vehicle data”, International Journal of Geographical Information Science, Vol. 30 No. 4, pp. 660-690.

Zhou, X., Ding, Y., Tan, H., Luo, Q. and Ni, L.M. (2017), “HIMM: An HMM-based interactive map-matching system”, in Candan, S., Chen, L., Pedersen, T.B., Chang, L. and Hua, W. (Eds), Database Systems for Advanced Applications: 22nd International Conference, DASFAA, Suzhou, China, March 27-30, Proceedings, Part II, Springer International Publishing, Cham, pp. 3-18.

Zhu, Y., Qu, C. and Chen, H. (2012), “GPS/INS-Based intersection turning of autonomous vehicle”, Journal of Transportation Systems Engineering and Information Technology, Vol. 12 No. 1, pp. 91-97.

Acknowledgements

The authors gratefully acknowledge the financial support from the Shenzhen Science and Technology Innovation Committee (Nos JCYJ20160429145314252, JCYJ20160527162817715, JCYJ20160407160609492), Guangdong Provincial Science and Technology Plan projects (No. 2016A010101039) and Shenzhen Polytechnic (Nos 601522k30007, 601522K30015).

Corresponding author

Dr Bodong Liang is the corresponding author and can be contacted at: liangbodong@szpt.edu.cn

About the authors

Jianfeng Zhao is a Lecturer in the School of Automotive and Transportation Engineering at Shenzhen Polytechnic. He received his Doctorate Degree in Fundamentals of Artificial Intelligence from Xiamen University. His current research interests include artificial intelligence, automotive technology and heuristic algorithms.

Dr Bodong Liang has been with Shenzhen Polytechnic (SZPT) as an Associate Professor since 2010. He received his PhD Degree in Computer Vision from The Chinese University of Hong Kong (CUHK) in 2006. Prior to joining SZPT, he was a Technical Marketing Manager at Zoran Corporation from 2006 to 2009 and an FAE at C-Cube Microsystems Inc. and LSI Logic Corporation from 1997 to 2002. His current research interests are computer vision, artificial intelligence and intelligent transportation systems.

Qiuxia Chen received her PhD Degree from the Hong Kong University of Science and Technology. She is currently a Lecturer in the School of Automotive and Transportation Engineering at Shenzhen Polytechnic. Her research interests focus on localization, sensor networks, RFID and cloud computing.
