A kind of infrared expand depth of field vision sensor in low-visibility road condition for safety-driving

Hui-Feng Wang (School of Electronic & Control Engineering, Chang’an University, Xi'an, China)
Gui-ping Wang (School of Electronic & Control Engineering, Chang’an University, Xi'an, China)
Xiao-Yan Wang (School of Electronic & Control Engineering, Chang’an University, Xi'an, China)
Chi Ruan (State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Xi'an, China)
Shi-qin Chen (School of Electronic & Control Engineering, Chang’an University, Xi'an, China)

Sensor Review

ISSN: 0260-2288

Article publication date: 18 January 2016


Abstract

Purpose

Bad weather drastically reduces the driver’s visual range and thus has a serious impact on transport safety. This study aims to consider active vision in low-visibility environments, to reveal the optical factors that affect visibility and to explore a method of obtaining different depths of field by multi-mode imaging.

Design/methodology/approach

A new mechanism and a core algorithm for obtaining a large depth-of-field image that can aid safe driving are designed and implemented. The mechanism is built on the atmospheric extinction principle and a field-expansion system, followed by an image registration and fusion algorithm for the Infrared Extended Depth of Field (IR-EDOF) sensor.

Findings

The experimental results show that the proposed approach works well in expanding the depth of field in low-visibility road environments and can serve as a new aided safety-driving sensor.

Originality/value

The paper presents a new kind of active optical extension and enhanced driving aid, which is an effective solution to the problem of weakened visual ability. It is a practical engineering sensor scheme for safe driving in low-visibility road environments.

Keywords

Citation

Wang, H.-F., Wang, G.-p., Wang, X.-Y., Ruan, C. and Chen, S.-q. (2016), "A kind of infrared expand depth of field vision sensor in low-visibility road condition for safety-driving", Sensor Review, Vol. 36 No. 1, pp. 7-13. https://doi.org/10.1108/SR-04-2015-0055

Publisher

Emerald Group Publishing Limited

Copyright © 2016, Authors. Published by Emerald Group Publishing Limited. This work is published under the Creative Commons Attribution (CC BY 3.0) Licence. Anyone may reproduce, distribute, translate and create derivative works of the article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licenses/by/3.0/legalcode .


1. Introduction

In the field of transportation, the influence of low-visibility conditions on safety has received much attention (Koetse and Rietveld, 2009). Low visibility reduces the scope and range of the driver’s vision, which can lead to serious traffic accidents (Clark et al., 2004). According to the statistics, the probability of traffic accidents occurring in low-visibility highway environments, such as at night or in smog and dense haze, is three times higher than that in good visibility (Leibowitz et al., 1998). It is therefore important to study enhanced-vision-aided driving in low-visibility highway environments.

Active-vision enhancement originated from military requirements and has gradually been applied to aided driving (Bertozzi et al., 2002; Tan et al., 2007), and some research results have been obtained (Liu et al., 2013; Garcia et al., 2014). However, the following problems remain:

  • the active depth of field is limited by the optical components of the imaging system, so the vision cannot meet the driver’s requirements;

  • due to back-scattering of the headlight by airborne particulates and smoke haze, dim targets ahead are submerged in the strong background; and

  • a single imaging mode cannot deliver high-quality imaging with an extended depth of field.

As is well known, the human eye is a near-perfect optical system that can easily obtain an ultra-large depth of field from infinity down to 250 mm by adjusting its pupil and focal length. In good road conditions, human visual ability is therefore quite good (Marcos et al., 1999). But in extreme conditions such as at night or in thick smog and fog (NHTSA, 2009), this ability is greatly restricted and a potential driving hazard may arise. It is therefore important to enhance the driver’s ability to see in low-visibility road conditions so as to guarantee road traffic safety. There are very few reports on the use of multi-modal infrared (IR) images and depth-of-field extension for enhanced vision in the field of transportation.

This article is organized as follows: Section 2 describes the basic principle of the multi-modal IR image method and analyzes the image effect in different external conditions. Section 3 describes a kind of scene-synthesis model which is created for the enhancement of field depth. Section 4 provides the experimental results. Both computer simulation and real data are used to validate the proposed method and model.

2. Dual-mode IR image in low-visibility conditions and components of the sensor

In this section, the atmospheric transmittance mechanism in low-visibility conditions (night and hazy weather) is first analyzed to identify suitable imaging modes for different road conditions, and the framework of the vision-aided driving system is then set up.

2.1 Atmospheric light-extinction mechanism

Vision of outdoor scenes can be severely limited by atmospheric aerosols, which absorb and scatter the target signal out of the optical path and scatter unwanted light into the optical path from the surroundings (Fang et al., 2014; Narasimhan and Nayar, 2002); we call this extinction. Figure 1 shows the light-wave transmittance and its interaction with the atmosphere.

2.1.1 Molecular absorption and atmospheric window

In clear atmospheric conditions, the extinction effect is dominated by molecular absorption, while molecular scattering is very weak. Molecules absorb electromagnetic waves strongly over a continuous spectrum, leaving only a few narrow wavelength bands with weak or no absorption, which form the so-called “atmospheric windows” (Hudson, 1969).

2.1.2 Atmospheric scattering

Light waves can be intensely scattered by aerosol particles, which are abundant in the atmosphere, so scattering is an important extinction mechanism. Scattering can be divided into Rayleigh scattering and Mie scattering.

Rayleigh scattering is mainly caused by macromolecules in the atmosphere. On the basis of scattering theory, the bulk coefficient is:

\[ \sigma_m = N \, \sigma_p \qquad (1) \]

\[ \sigma_p = \frac{8 \pi^3 (n^2 - 1)^2}{3 N^2 \lambda^4} \qquad (2) \]

where N is the number of macromolecules per unit volume (cm⁻³), σ_p is the scattering cross-section of a macromolecule (cm²), λ is the light wavelength (cm) and n is the refractive index of the particle. In general, according to the empirical formula, σ_m is:

\[ \sigma_m = \frac{0.827 \, N \, A^3}{\lambda^4} \qquad (3) \]

where A is the cross-sectional area of the macromolecule (cm²). We see from equation (3) that Rayleigh scattering is inversely proportional to the fourth power of the wavelength and proportional to the macromolecule density.
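As a rough numerical illustration of equation (3), the following Python fragment (added here, not part of the original paper) compares the Rayleigh scattering coefficient at a visible and a near-IR wavelength; the molecular density N and cross-sectional area A are assumed order-of-magnitude values.

```python
# Illustrative sketch of equation (3): sigma_m = 0.827 * N * A**3 / lambda**4
# N (molecules per cm^3) and A (molecular cross-section, cm^2) are assumed,
# order-of-magnitude values for sea-level air, not measurements from the paper.
N = 2.5e19          # molecules per cm^3 (assumed)
A = 1.0e-15         # molecular cross-sectional area in cm^2 (assumed)

def rayleigh_coeff(wavelength_cm):
    """Empirical Rayleigh bulk scattering coefficient (cm^-1), equation (3)."""
    return 0.827 * N * A**3 / wavelength_cm**4

vis = rayleigh_coeff(550e-7)   # 550 nm visible light (1 nm = 1e-7 cm)
nir = rayleigh_coeff(808e-7)   # 808 nm near-IR laser used later in the paper

# The inverse fourth-power law: moving from 550 nm to 808 nm cuts
# Rayleigh scattering by roughly (808/550)**4, i.e. about 4.7 times.
print(vis / nir)
```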

Mie scattering is usually caused by aerosol particles and depends on their size, density and refractive index. The attenuation model of light in a foggy atmosphere can be described as (Yang et al., 2007):

\[ \beta(\lambda) = \frac{3.91}{V_b} \left( \frac{\lambda}{0.55} \right)^{-\alpha} \qquad (4) \]

In this formula, V_b is the atmospheric visibility (km), λ is the wavelength (μm) and α is the wavelength correction factor, which is related to V_b. In different visibility conditions, the value of α is:

\[ \alpha = \begin{cases} 1.6, & V_b > 50\ \mathrm{km} \\ 1.3, & 6\ \mathrm{km} \le V_b \le 50\ \mathrm{km} \\ 0.585 \, V_b^{1/3}, & V_b < 6\ \mathrm{km} \end{cases} \qquad (5) \]

A French group led by Al Naboulsi (Naboulsi et al., 2004) corrected α at low visibility to:

\[ \alpha = \begin{cases} 0.16 \, V_b + 0.34, & 1\ \mathrm{km} \le V_b < 6\ \mathrm{km} \\ V_b - 0.5, & 0.5\ \mathrm{km} \le V_b < 1\ \mathrm{km} \\ 0, & V_b < 0.5\ \mathrm{km} \end{cases} \qquad (6) \]

From equation (4), we can clearly see that the longer the wavelength, the smaller the fog attenuation.
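To make the wavelength dependence of equations (4) to (6) concrete, the short sketch below (an illustration added here, not from the original paper) evaluates the attenuation coefficient at a visible and a near-IR wavelength for an assumed low visibility; the piecewise α follows the empirical model reconstructed above.

```python
# Sketch of equations (4)-(6): fog attenuation versus wavelength.
# Wavelengths in micrometres, visibility V_b in kilometres.

def alpha(v_b):
    """Wavelength-correction exponent, combining equations (5) and (6).
    Treat the break points as reconstructed assumptions."""
    if v_b > 50.0:
        return 1.6
    if v_b >= 6.0:
        return 1.3
    if v_b >= 1.0:
        return 0.16 * v_b + 0.34
    if v_b >= 0.5:
        return v_b - 0.5
    return 0.0

def attenuation(wavelength_um, v_b):
    """Attenuation coefficient beta (km^-1) from equation (4)."""
    return (3.91 / v_b) * (wavelength_um / 0.55) ** (-alpha(v_b))

v_b = 2.0  # assumed 2 km visibility (light fog / haze)
print(attenuation(0.55, v_b))   # visible, 550 nm
print(attenuation(0.808, v_b))  # near-IR laser, 808 nm: noticeably smaller
```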

2.2 Dual-mode IR-EDOF image in low-visibility conditions for aided safety driving

From the analysis given in Section 2.1, we can see that visible wavelengths are intensely scattered by haze particles because they are short. IR light, by contrast, is far less scattered and is diffracted around such particles, so it can pass through cloud, haze and fog that block visible light. In addition, it is also usable at night (Naboulsi et al., 2004; Zdunkowski et al., 1965).

Based on the principle above, we chose the IR atmospheric window and carried out comparison experiments on dual-mode (short-wavelength active and long-wavelength passive) IR images both at night and in hazy weather. We then designed the functional block diagram of the Infrared Extended Depth of Field (IR-EDOF) system.

Figure 2(a) shows an image taken with a 0.5 W, 808-nm near-IR laser in dense foggy weather, while Figure 2(b) shows an image taken without any additional illumination in the same conditions.

We can see that the definition of the image is improved by the IR laser. From Figure 3, we can see that the long-wavelength passive image is lower in quality than the near-IR actively illuminated image, which preserves more scene detail because of its frequency and bandwidth.

In addition, a polarization filter can be used to suppress the background and further enhance the object; the underlying principle can be found in the related literature (Dai and Khorram, 1999).

2.3 Dual-mode IR-EDOF image sensor system

Both near-IR active illumination and long-wave passive IR vision can penetrate a certain amount of haze. However, limited by the depth of field and range of the lens system, neither technology on its own can match the depth of field and range of the human eye, especially over long distances. A new kind of large depth-of-field sensor system based on dual-mode IR-EDOF technology is therefore presented to meet the requirements of safe vision for drivers. Figure 4 shows a block diagram of the sensor.

In our system, the two imaging modes are merged into a single large depth-of-field model whose output is an image covering the combined range L1 + L2.

3. The IR-EDOF image fusion algorithm

The dual-mode IR-image aided driving sensor system produces two images, each with a limited depth of field. It would be inefficient and inconvenient for the driver to observe two screens while driving, so a dual-mode IR-EDOF image fusion method must be studied. In this section, such an IR-EDOF fusion algorithm, based on image matching and fusion, is presented.

3.1 The image matching and scene fusion method

To form a sufficiently large depth of field, the dual-mode images are mosaicked and fused as shown in Figure 5. The far-range IR image is transformed and inlaid into the near-range image, and the two are then merged into a single large depth-of-field image for the driver to observe.

To realize the EDOF, an image scene matching, transformation and fusion method is provided here. The block diagram of the method is shown in Figure 6.

The images from the dual-mode camera are fed into the processing system. After feature matching, the scenes are inlaid and fused, and the resulting large depth-of-field EDOF image is output to the screen for the driver to observe.

3.2 Affine transformation model

The affine transformation is the basis of the image mosaic. It is a linear transformation combined with a translation (rigid-body and similarity transformations are special cases). In a two-dimensional coordinate frame, it can be written as:

\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix} \qquad (7) \]

In this equation, (x, y) and (x′, y′) are the coordinates before and after transformation, respectively, and (t_x, t_y) is the translation vector. Equation (7) contains six parameters, p = (a_1, a_2, t_x, a_3, a_4, t_y), which determine the transformation of the image, so three pairs of corresponding points are required to resolve the six parameters. By imposing restrictions on these parameters, we can obtain the familiar special-case transformations.

However, in this paper we adopt only the Rotation-Scale-Translation transformation, which is sufficient in most cases (Lin and Chen, 2008) and needs just four parameters. Before merging the images, we must extract the corresponding features of the two images and match them.
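As an illustrative sketch (not the authors' implementation), a four-parameter Rotation-Scale-Translation matrix of the restricted form of equation (7) can be built and applied with OpenCV's warpAffine; the scale, angle, offsets and frame sizes below are arbitrary assumptions.

```python
import numpy as np
import cv2

def rst_matrix(scale, angle_deg, tx, ty):
    """Build a 2x3 Rotation-Scale-Translation matrix, a restricted form of
    the six-parameter affine model in equation (7):
    a1 = a4 = s*cos(theta), a2 = -a3 = -s*sin(theta)."""
    t = np.deg2rad(angle_deg)
    c, s = scale * np.cos(t), scale * np.sin(t)
    return np.array([[c, -s, tx],
                     [s,  c, ty]], dtype=np.float64)

# Assumed example values: the far-range frame is scaled up, slightly
# rotated and shifted before being inlaid into the near-range frame.
M = rst_matrix(scale=1.8, angle_deg=1.5, tx=120.0, ty=40.0)

far = np.zeros((240, 320), dtype=np.uint8)  # stand-in for the far-range IR frame
h, w = 480, 640                             # near-range frame size (assumed)
warped_far = cv2.warpAffine(far, M, (w, h))
```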

3.3 Feature selection and image matching

Feature scales are chosen by imitating human vision. Because the relative positions of the two cameras are fixed after installation, manual matching can be used to determine the parameters in advance, which improves the image-processing speed.

After matching, the affine transformation of the matched point sets, P_1 = (p_{11}, p_{12}, …, p_{1n}) with p_{1i} = (x_{1i}, y_{1i})^T and P_2 = (p_{21}, p_{22}, …, p_{2n}) with p_{2i} = (x_{2i}, y_{2i})^T, i ∈ (1, n), can be expressed as:

\[ p_{2i} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} p_{1i} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}, \quad i = 1, \ldots, n \qquad (8) \]

To resolve the parameters of the affine transformation, the relation is rearranged, for each pair of matched points, as:

\[ \begin{aligned} x_{2i} &= a_1 x_{1i} + a_2 y_{1i} + t_x \\ y_{2i} &= a_3 x_{1i} + a_4 y_{1i} + t_y \end{aligned} \qquad (9) \]

Written in matrix form:

\[ \mathbf{b} = \mathbf{M}\,\mathbf{p}, \quad \mathbf{M} = \begin{bmatrix} x_{11} & y_{11} & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_{11} & y_{11} & 1 \\ \vdots & & & & & \vdots \\ x_{1n} & y_{1n} & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_{1n} & y_{1n} & 1 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} x_{21} \\ y_{21} \\ \vdots \\ x_{2n} \\ y_{2n} \end{bmatrix} \qquad (10) \]

Solving by least squares:

\[ \mathbf{p} = (\mathbf{M}^{\mathrm T}\mathbf{M})^{-1}\mathbf{M}^{\mathrm T}\mathbf{b} \qquad (11) \]

The affine parameters can be obtained.
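A compact NumPy sketch of the least-squares solution in equations (9) to (11) might look as follows; the point coordinates are hypothetical manually matched pairs, not measurements from the paper.

```python
import numpy as np

# Hypothetical manually matched point pairs (x1, y1) -> (x2, y2).
P1 = np.array([[100.0, 80.0], [220.0, 90.0], [150.0, 200.0], [300.0, 240.0]])
P2 = np.array([[215.0, 170.0], [430.0, 195.0], [305.0, 390.0], [575.0, 465.0]])

# Equation (10): stack the linear system M p = b with
# p = (a1, a2, tx, a3, a4, ty).
n = len(P1)
M = np.zeros((2 * n, 6))
b = np.zeros(2 * n)
for i, ((x1, y1), (x2, y2)) in enumerate(zip(P1, P2)):
    M[2 * i]     = [x1, y1, 1, 0, 0, 0]
    M[2 * i + 1] = [0, 0, 0, x1, y1, 1]
    b[2 * i]     = x2
    b[2 * i + 1] = y2

# Equation (11): least-squares solution p = (M^T M)^{-1} M^T b.
p, *_ = np.linalg.lstsq(M, b, rcond=None)
a1, a2, tx, a3, a4, ty = p
A = np.array([[a1, a2, tx],
              [a3, a4, ty]])
print(A)  # estimated 2x3 affine matrix
```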

4. Experiments

To keep the experimental system economical, the IR-EDOF sensor designed in this work adopts a single near-IR active imaging channel. It consists of one WATEC (WAT-902H2) CCD camera, a lens with a variable focal length of 8 to 50 mm and one 808-nm IR laser. The near scene and far scene are obtained by changing the focal length. A picture of the experimental sensor is shown in Figure 7, while Figure 8 shows far and near IR images of the same scene at night obtained by the experimental sensor.

Figure 9 shows the feature points and the matching of the feature points.

The affine parameters obtained from equations (10) and (11) are given in equation (12).

After affine transformation and fusion (Ludusan and Lavialle, 2012; Zhu and Guo, 2014), the IR-EDOF image to aid driving is as shown in Figure 10.
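The fusion step itself can be realized in several ways (the paper cites variational approaches); the fragment below is only a minimal fixed-weight blending sketch, assuming the warped far-range frame and the near-range frame are already co-registered and of equal size.

```python
import numpy as np

def fuse_edof(near, warped_far, weight=0.5):
    """Minimal sketch of IR-EDOF fusion: blend the near-range frame with
    the affine-warped far-range frame. A real system would use a
    focus-measure-driven or variational fusion instead of a fixed weight."""
    near = near.astype(np.float32)
    far = warped_far.astype(np.float32)
    fused = weight * near + (1.0 - weight) * far
    return np.clip(fused, 0, 255).astype(np.uint8)

# Stand-in frames (the real inputs would come from the dual-mode sensor).
near = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
warped_far = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
edof = fuse_edof(near, warped_far)
```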

Comparing the image in Figure 10 with those in Figure 8, we can see that both the far scene and the near scene in Figure 10 are clearer than in Figure 8(a) and (b), which means the depth of field is expanded over a long range.

To evaluate the effect of the IR-EDOF, the Tenengrad (Aslantas and Kurban, 2009) and entropy (Kamble and Bhurchandi, 2015) functions are adopted as objective definition assessment metrics. The quality assessment results are compared in Table I.
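For reference, a minimal implementation of the two definition metrics might look as follows; the exact formulation and normalisation used by the authors are not given in the paper, so this is an assumed, common form.

```python
import numpy as np
import cv2

def tenengrad(img):
    """Tenengrad sharpness: mean squared Sobel gradient magnitude
    (one common formulation; the normalisation is an assumption)."""
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    return np.mean(gx ** 2 + gy ** 2)

def entropy(img):
    """Shannon entropy of the grey-level histogram (bits per pixel)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Higher Tenengrad and entropy values indicate a sharper, more detailed
# image, which is how Table I compares the fused result with the inputs.
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
print(tenengrad(img), entropy(img))
```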

It can be seen from Table I that the definition of the image is improved after IR-EDOF fusion. To illustrate the practical effect of the method proposed in this paper, the enhancement effects of different methods were also analyzed and compared. The enhancement results are shown in Figure 11, and the hardware implementation and feasibility are listed in Table II.

From Figure 11, we can clearly see that the visual enhancement effects of the homomorphic filter and IR-EDOF are better than those of the other methods. However, the homomorphic filter is difficult to implement in hardware.

5. Conclusions

This paper addresses several challenges in an IR-EDOF vision sensor for aided safety driving in the following steps. First, we discussed the principles of dual-mode IR imaging in low visibility. Second, according to the results of the analysis, dual-mode IR-image experiments were conducted in low-visibility conditions and a new dual-mode IR-EDOF image sensor system design is presented. Third, for the IR-EDOF algorithm, an affine transformation method is adopted and good results are obtained. Finally, experiments were conducted based on the algorithms. The tests show that the system can expand the driver’s vision in low-visibility conditions. Further work that should be carried out is as follows:

  • both long-wave (LW) IR and near-wave (NW) IR should be adopted in the experiments;

  • vehicle-mounted experiments should be conducted in real low-visibility road conditions and the problems existing in those conditions should be analyzed; and

  • a dynamic scene video image processing algorithm should be studied.


Figure 1: Pictorial description of atmospheric extinction

Figure 2: (a) Image of building with near-infrared illumination in dense fog; (b) image of building in natural light and dense fog

Figure 3: Infrared image at night

Figure 4: Infrared light image system

Figure 5: Schematic diagram of the IR-EDOF image mosaic

Figure 6: Schematic diagram of the IR-EDOF image processing

Figure 7: The composition of the IR-EDOF sensor

Figure 8: The IR-EDOF image at night

Figure 9: The feature points registered

Figure 10: The IR-EDOF-aided driving image

Figure 11: Enhancement experiment results by different algorithms

Table I: Performance comparison of IR-EDOF

Table II: Performance of different algorithms

Corresponding author

Hui-Feng Wang can be contacted at: conquest8888@126.com

References

Aslantas, V. and Kurban, R. (2009), “A comparison of criterion functions for fusion of multi-focus noisy images”, Optics Communications, Vol. 282 No. 3, pp. 232-3242.

Bertozzi, M., Broggi, A., Cellario, M., Fascioli, A., Lombardi, P. and Porta, M. (2002), “Artificial vision in road vehicles”, Proceedings of the IEEE, Vol. 90 No. 7, pp. 1258-1271.

Clark, L., Tarek, S. and Francis, N. (2004), “A driver visual attention model: part 1: conceptual framework”, Canadian Journal of Civil Engineering, Vol. 31, pp. 463-472.

Dai, X.L. and Khorram, S. (1999), “A feature-based image registration algorithm using improved chain-code representation combined with invariant moments”, IEEE Transactions on Geoscience and Remote Sensing, Vol. 37 No. 5, pp. 2351-2362.

Fang, S., Xia, X.S., Xin, H. and Chen, C.W. (2014), “Image dehazing using polarization effects of objects and air light”, Optics Express, Vol. 22 No. 16, pp. 19523-19537.

Garcia, F., Garcia, J., Ponz, A., Escalera, D. and Armingol, J.M. (2014), “Context aided pedestrian detection for danger estimation based on laser scanner and computer vision”, Expert Systems with Applications, Vol. 41, pp. 6646-6661.

Hudson, R.D. (1969), Infrared System Engineering, 1st ed., Wiley-Interscience, New York.

Kamble, V. and Bhurchandi, K.M. (2015), “No-reference image quality assessment algorithms: a survey”, Optik, Vol. 126 Nos 11/12, pp. 1090-1097.

Koetse, M.J. and Rietveld, P. (2009), “The impact of climate change and weather on transport: an overview of empirical findings”, Transportation Research Part D, Vol. 14 No. 3, pp. 205-221.

Leibowitz, H.W., Owens, D.A. and Tyrrell, R.A. (1998), “The assured clear distance ahead rule: implications for nighttime traffic safety and the law”, Accident Analysis & Prevention, Vol. 30 No. 1, pp. 93-99.

Lin, Y.H. and Chen, C.H. (2008), “Template matching using the parametric template vector with translation, rotation and scale invariance”, Pattern Recognition, Vol. 41 No. 7, pp. 2413-2421.

Liu, Q., Zhuang, J.J. and Kong, S.F. (2013), “Detection of pedestrians for far-infrared automotive night vision systems using learning-based method and head validation”, Measurement Science and Technology, Vol. 24, p. 074022.

Ludusan, C. and Lavialle, O. (2012), “Multifocus image fusion and denoising: a variational approach”, Pattern Recognition Letters, Vol. 33 No. 10, pp. 1388-1396.

Marcos, S., Moreno, E. and Navarro, R. (1999), “The depth-of-field of the human eye from objective and subjective measurements”, Vision Research, Vol. 39 No. 12, pp. 2039-2049.

Naboulsi, M.A., Sizun, H. and de Fornel, F. (2004), “Fog attenuation prediction for optical and infrared waves”, Optical Engineering, Vol. 43 No. 2, pp. 319-329.

Narasimhan, S.G. and Nayar, S.K. (2002), “Vision and the atmosphere”, International Journal of Computer Vision, Vol. 48 No. 3, pp. 233-254.

National Highway Traffic Safety Administration (NHTSA) (2009), Driving With Visual Field Loss: An Exploratory Simulation Study (Technical Report), NHTSA, Washington, DC.

Tan, R.T., Pettersson, N. and Petersson, L. (2007), “Visibility enhancement for roads with foggy or hazy scenes”, Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, 13-15 June, pp. 13-15.

Yang, R.K., Ma, C.L., Han, X.E., Su, Z.L. and Jian, D.J. (2007), “Study of the attenuation characteristics of laser propagation in the atmosphere”, Infrared and Laser Engineering, Vol. 36, pp. 415-418.

Zdunkowski, W., Henderson, D. and Hales, J.V. (1965), “The influence of haze on infrared radiation measurements detected by space vehicles”, Tellus, Vol. 17 No. 2, pp. 148-165.

Zhu, G. and Guo, S.X. (2014), “Image-fusion-based multi-resolution active contour model”, Optik, Vol. 125 No. 17, pp. 4955-4957.


Acknowledgements

This research was financially supported by Fundamental Research Funds for the Central Universities (2013G1321046), the Science and Technology Development Plan Project of Shaanxi province of P.R. China (2013K09-17) and China Postdoctoral Science Foundation funded project (2015M580805).

