
Terrain Mapping in Real-Time: Sensors and Algorithms

Steven Scheding, Jeff Leal, Mark Bishop, Salah Sukkarieh

Australian Centre for Field Robotics
The University of Sydney
(Ph) +61 2 93514023, (Fax) +61 2 93517474
Email: scheding@acfr.usyd.edu.au

Abstract

This paper presents an overview of the work being conducted at the Australian Centre for Field Robotics (ACFR) into sensors and systems used for real-time terrain mapping.

The trade-offs between various spatial sensors are discussed within the context of terrain mapping at a local level. The development of field-deployable 3-D scanners capable of scanning an area in seconds (real-time surveying) is also highlighted. Finally, algorithms for storing and visualising the terrain data are presented.

Introduction

There are many applications within the Australian primary industries that require the rapid uptake and dissemination of terrain information. In particular, Precision Agriculture needs high-fidelity terrain information for the prediction of water run-off, crop yield, variable rate application and so on. Using modern high-speed scanning range sensors such as radar and laser, it is possible to obtain a coarse terrain image (metre resolution) in seconds, and finer resolutions (cm) in a matter of minutes. The advantages and disadvantages of the different sensor types are discussed in this paper.

The design and use of terrain databases is also presented. The ACFR has developed techniques for storing terrain information in a database that takes into account the uncertainty present in each sensor measurement. An inference engine (based on information theory) is used to extract information from the database for use in various contexts. For example, the inference engine may construct a ‘best-fit’ terrain for use as a ‘true’ terrain representation, or alternatively it may extract information relevant only to a particular crop. This is achieved simply by defining new rules for the inference engine.
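To make the rule idea concrete, the following is a minimal sketch in Python (with hypothetical names; it is not the ACFR implementation) of a database of elevation measurements stored with per-measurement variance, and two interchangeable inference rules over it:

    # Minimal illustrative sketch: elevation measurements stored with their
    # variance, queried through interchangeable inference rules.
    from dataclasses import dataclass

    @dataclass
    class Measurement:
        x: float    # easting (m)
        y: float    # northing (m)
        z: float    # elevation (m)
        var: float  # measurement variance (m^2)

    def best_fit_elevation(points):
        """'Best-fit' rule: variance-weighted mean of co-located measurements."""
        weights = [1.0 / p.var for p in points]
        return sum(w * p.z for w, p in zip(weights, points)) / sum(weights)

    def upper_bound_elevation(points):
        """Alternative rule (e.g. for clearance checks): mean + 2 sigma."""
        return max(p.z + 2.0 * p.var ** 0.5 for p in points)

    # Swapping the rule changes what the inference engine extracts,
    # without touching the stored data.
    cell = [Measurement(0.0, 0.0, 1.02, 0.01), Measurement(0.0, 0.0, 0.98, 0.04)]
    print(best_fit_elevation(cell))     # 1.012
    print(upper_bound_elevation(cell))  # 1.38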

The following section details the sensors used for real-time terrain imaging, namely laser and radar. The design of scanner units capable of gathering the sensor data is then presented. Finally, the algorithms used to store, retrieve and display the data are elucidated, followed by conclusions.

Sensors

This section details the sensors currently in use for real-time terrain imaging: laser and radar. Stereo vision systems are also candidates for this type of application; however, inherent limitations such as poor range accuracy and spurious data make such systems unsuitable.


Millimetre Wave Radar

The millimetre wave (mmWave) region of the electromagnetic spectrum is generally defined as the frequency range from 30 to 300GHz (or wavelengths between 1cm and 1mm).

The selection of a radar frequency is based primarily on the size of the antenna required to produce a sufficiently narrow beam, and on the attenuation of the signal by atmospheric effects such as dust, fog and rain (K. Button et al., 1981). In this regard mmWave radar offers a good compromise between microwave radars, which offer low attenuation but require large antennas, and lasers, which require only a small aperture but offer poor immunity to atmospheric effects.

Low-attenuation atmospheric windows occur at 35 and 94GHz, with an oxygen absorption band at 60GHz between them; the obvious frequency for a mmWave radar is thus 94GHz, or possibly 77GHz (the automotive radar frequency) (P. Bhartia et al., 1984).

Other windows occur at higher frequencies, but the cost of components becomes even more prohibitive and dust and water vapour attenuation both increase (F. Nathanson, 1969).

Since radar has a significantly larger beam ‘footprint’ than laser, it can typically be scanned at a much higher rate. For the radars designed at the ACFR the beam footprint is approximately 0.3m². This allows a complete scan in a matter of seconds, or faster if complete coverage is not required. For many Precision Agriculture applications this resolution is more than sufficient; however, it is offset by the cost of the radar unit.

Figure 1 shows the prototype mmWave radar used by the ACFR for data gathering.

Figure 1 - Prototype mmWave radar.

Laser

Compared with mmWave radar, the main advantages of laser-based systems are:

  • High Bandwidth.
  • Small Beam Divergence.
  • Small Beam Footprint.

These advantages are traded off against:

  • Low Immunity to Atmospheric Effects (such as fog and rain).

Like radar, the laser wavelengths chosen reflect peaks in the atmospheric transmission curves. Typical commercially available laser range finders use laser diodes operating at wavelengths in the near infrared of around 900nm. Many military systems use more powerful far infrared CO2 lasers operating at around 10000nm. Both of these wavelengths sit at plateaus of near-unity atmospheric transmittance. More recently, laser range finders have started to use lasers designed primarily for fibre optic communication systems, which operate at wavelengths of typically around 1550nm, also corresponding to a plateau in the atmospheric transmission curve.

Where lasers fail in comparison to radar is in their immunity to atmospheric effects such as fog, dust and rain. It can be shown that in the visible spectrum, transmission loss in fog is around 300dB/km, compared to only 0.1dB/km at mmWave frequencies. Rain and dust have similarly adverse effects. As a general rule of thumb, if a person can see through it, then so can a laser.
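As a rough worked example of what those figures imply (a sketch assuming a 100m fog bank and the attenuation values quoted above):

    # Rough worked example using the attenuation figures quoted above.
    def one_way_loss_db(atten_db_per_km, distance_m):
        return atten_db_per_km * distance_m / 1000.0

    FOG_VISIBLE = 300.0  # dB/km in fog at visible/laser wavelengths
    FOG_MMWAVE = 0.1     # dB/km in fog at mmWave frequencies

    d = 100.0  # metres of fog between sensor and target (assumed)
    # A ranging sensor suffers the loss twice (out and back):
    print(2 * one_way_loss_db(FOG_VISIBLE, d))  # 60.0 dB - laser return lost
    print(2 * one_way_loss_db(FOG_MMWAVE, d))   # 0.02 dB - radar unaffected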

Lasers provide a much more accurate picture of an environment than radar, due to their extremely small beam footprint and minute beam divergence. However, scanning an environment will necessarily take longer than with radar if full resolution is required.

Figure 2 shows the commercial laser range finder (Riegl) used by the ACFR.

Figure 2 - Riegl laser range finder.

Scanners and Imaging

To produce an image, it is necessary to scan the narrow beam generated by the radar or laser and to monitor the range and amplitude of the echo returns.

It is possible to exploit the beam footprint and the multiple target capability of radar to produce a complete 2D image from a single horizontal scan.

For most applications however, the terrain will not be sufficiently smooth to exploit this property, and it will be necessary to scan both vertically and horizontally to build up a complete 3D image of the environment. This technique also allows the use of both radar and laser sensors.

Rather than physically pointing the whole sensor, the ACFR is constructing a mirror scanner that can point the beam over 360° in one axis and 90° in the other to perform this function.

This scanner is designed to have a positioning accuracy of 0.2° in both axes, which, along with the range resolution of 25cm for the radar and 2.5cm for the laser, will give it exceptional imaging capability.
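As a sketch of how such a scan becomes a terrain image (the axis conventions and variable names here are our assumptions, not a scanner specification), each (pan, tilt, range) return is converted to a Cartesian point:

    import math

    def beam_to_point(pan_deg, tilt_deg, range_m):
        """Convert one (pan, tilt, range) return to Cartesian coordinates.
        Assumed convention: pan about the vertical axis, tilt from horizontal."""
        pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
        return (range_m * math.cos(tilt) * math.cos(pan),
                range_m * math.cos(tilt) * math.sin(pan),
                range_m * math.sin(tilt))

    # Sweeping both axes at the scanner's 0.2 degree step builds a point cloud;
    # at 50m range, one 0.2 degree step subtends about 50 * tan(0.2 deg) = 0.17m.
    cloud = [beam_to_point(p * 0.2, t * 0.2, 50.0)  # constant range for demo
             for p in range(1800)                   # 0 .. 360 degrees pan
             for t in range(450)]                   # 0 .. 90 degrees tilt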

Figure 3 shows the scanning mechanism currently being built at the ACFR in its radar configuration. The scanner is also capable of housing a laser range sensor.

Figure 3 - Radar scanning mechanism.

Algorithms

Stochastic Surface Reconstruction

Sensing devices and any previous knowledge are the only sources of information about an environment. This information is usually incomplete and imperfect, making it necessary to store relevant information, along with its uncertainty, in a compact manner if consistent future decisions are to be made. Consequently, these decisions are made under uncertainty (J. Leal et al., 2000).

In a terrain information extraction application, the environment generally consists of various features that are of interest, and therefore need to be extracted from the information sources and stored for future decisions (e.g. terrain elevation, terrain composition, crop productivity measurements, etc.).

In order to handle all this information stochastically, the ACFR has developed a rational agent based on information theory (J. Manyika, 1993). This rational agent can be thought of as an expert system. An expert system collects data related to a problem in a certain environment, and then uses its inference technique to extract appropriate information from its knowledge base to produce an answer, diagnosis, or description of a solution.

There are a few levels of reasoning that have to be employed when developing expert systems. These consist mainly of information extraction and representation, information fusion, information inference, and information management. The rational layers are depicted in Figure 4 and sketched in code below.

Figure 4 - Diagram of the rational agent approach.
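The layering of Figure 4 can be pictured as a small set of interfaces. The following skeleton is purely illustrative of that structure (the class and method names are assumptions, not the ACFR code):

    # Illustrative skeleton of the rational layers of Figure 4 (names assumed).
    class SensingLayer:
        def observe(self):
            """Return raw, imperfect data about the states of nature."""
            raise NotImplementedError

    class RepresentationLayer:
        def to_knowledge(self, raw):
            """Transform ambiguous raw data into a compact stochastic form."""
            raise NotImplementedError

    class KnowledgeBase:
        def fuse(self, knowledge):
            """Consistently merge new knowledge into the stored databases."""
            raise NotImplementedError

    class ManagementLayer:
        def decide(self, kb):
            """Infer on the KB and choose an action (e.g. where to sense next)."""
            raise NotImplementedError

    def step(sensing, representation, kb, management):
        """One sense-represent-fuse-decide cycle of the rational agent."""
        kb.fuse(representation.to_knowledge(sensing.observe()))
        return management.decide(kb)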

In Figure 4, the environment consists of a set of world features, in which the system is embedded. The subset of features that are of interest to the system are defined as the states of nature. The sensing layer is characterized by one or more sources that interact actively or passively with the environment, providing mainly indirect, imperfect, and incomplete knowledge about the states. The objective of the information representation is to transform this ambiguous data into relevant knowledge about the states, in a compact description that can easily be fused into the knowledge base (KB). One method of describing relevant information about certain states in the KB is a particle-based representation; other information can be expressed, for example, as colour, texture, quantity values, etc.
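One way such a particle description could be generated (a sketch assuming Gaussian noise in range and in both scan angles, with magnitudes borrowed from the sensor figures above) is to expand each beam return into a cloud of samples of the possible true surface point:

    import math, random

    def measurement_to_particles(pan_deg, tilt_deg, range_m,
                                 sigma_range=0.025,  # 2.5cm laser resolution
                                 sigma_angle=0.2,    # 0.2 degree pointing
                                 n=50):
        """Sample n particles consistent with one noisy beam return.
        The Gaussian noise model is an assumption for illustration."""
        particles = []
        for _ in range(n):
            r = random.gauss(range_m, sigma_range)
            p = math.radians(random.gauss(pan_deg, sigma_angle))
            t = math.radians(random.gauss(tilt_deg, sigma_angle))
            particles.append((r * math.cos(t) * math.cos(p),
                              r * math.cos(t) * math.sin(p),
                              r * math.sin(t)))
        return particles

Denser particle clusters then indicate more likely surface locations, matching the qualitative reading of Figure 7 below.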

The information fusion relies on consistent methodologies to maintain and update knowledge in the KB that is both compact and relevant to the current application. The knowledge base consists of a set of independent databases holding different information about the states of nature, such as terrain data, texture data, temperature data, etc. Finally, there is an information management layer, a decision-making mechanism based on decision theory. This implies inferring on the data in the knowledge base in order to decide on an action to perform. For example, the action can be one that forces the system to interact with the environment (such as repositioning sensors to maximize the information gathered in subsequent observations), or one that updates the knowledge base in a certain fashion to maximize the objectives of the current application (such as minimizing survey errors locally or globally, feature extraction, etc.).
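A toy version of such a decision rule is shown below (the real utilities are application-specific, as noted above; the variance measure, region names and threshold here are assumptions):

    # Toy decision rule for the management layer: act where the KB is least
    # certain. 'regions' maps a region name to its current elevation variance.
    def next_action(regions, threshold=0.01):
        region, variance = max(regions.items(), key=lambda kv: kv[1])
        if variance > threshold:
            return ("rescan", region)  # reposition sensors to gain information
        return ("stop", None)          # objectives met; no further sensing

    print(next_action({"north_paddock": 0.002, "creek_bank": 0.08}))
    # -> ('rescan', 'creek_bank')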

This system description makes the approach general for various applications. The user only needs to provide a means of obtaining information (sensors, or previous knowledge in the form of independent databases), knowledge base fusion methods, an inference technique to extract information from the KB, and utilities to obtain the required results. At first this may seem a lot to provide, but for a general terrain mapping application all that is needed are the utilities, since the system already contains the remaining rational layers.

Results

Results were obtained using this novel approach with a multiple sensor, multiple viewpoint system in an outdoor environment. Figure 5 depicts the reconstructed terrain from various viewpoints, using laser data gathered from a sensor mounted on a pan-tilt head. Figures 5(a,b) were obtained from a low resolution scan of 3 degrees in both pan and tilt axes. Figures 5(c,d) were obtained at a 1 degree resolution in the pan axis only. Additionally, the information management layer was set to maximize the resolution of the reconstructed surface by limiting the normal error of every triangle to zero variance, resulting in a highly triangulated terrain. In Figure 6(a,b), on the other hand, the utilities were set to obtain a lower resolution triangulation by limiting the normal error of every triangle to 0.0005 variance. This small change from the first example produced a less densely triangulated surface that still represents a highly accurate map: the surface is triangulated more densely where surface variation is greater, and more coarsely where the variation is minimal (flat regions).

Figure 5 - The triangulated reconstructed surface. (a, b) after a scan from the first viewpoint with a three degree scanning resolution in both pan and tilt axes; (c, d) after an additional scan from the second viewpoint with a one degree scanning resolution.

Figure 6 - The triangulated reconstructed surface from multiple viewpoints and with multiple scanning resolutions. (a, b) after an additional scan from the second viewpoint with a one degree scanning resolution in the pan axis and three degrees in the tilt axis.

Figure 7 depicts two levels of knowledge held in two independent databases. One is the triangulation, which is designed to fit optimally to the information in the other database: the particle representation. The optimality of the fit is defined by the utilities specified by the application’s objectives. The particle representation captures the uncertainty in the sensor data: each particle represents a possibility that the true surface passes through it. The likelihoods of these possibilities are not represented in the figures, but the local density of the particles gives a rough approximation. The utilities defined in these examples are to minimize local errors, enforcing a threshold on the global error in the final reconstructed surface.

Figure 7 - The representation of two levels of knowledge about the surface: the particle representation and the triangulated surface.

Figure 8 shows the decisions performed each time new knowledge is obtained, along with the number of triangles at every step. An interesting feature is that up to the stage where the first, lower resolution scan is complete (approximately observation 500), the number of triangles increases faster than it does afterwards. The reason is that initially there is no information in certain regions, so any knowledge in these regions produces a large knowledge gain, and hence triangulation of those regions. After this, new information is fused with existing knowledge and the gain is not as great. Also notice the plateau in the number of merges around the 1200th observation. This is due to a large variation in the terrain in the viewing direction of the sensor, forcing the algorithm to decide to split more often. The effect is visually more noticeable in Figure 8(c), although it affects all cases. A sketch of the underlying split criterion follows the figure.

Figure 8 - Number of operations and number of triangles at every stage of new information fusion with normal variance set to (a) 0; (b) 0.0005; (c) 0.001; (d) 0.002.
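As a minimal sketch of the split criterion as described above (the variance estimate and the names are our assumptions): a triangle is split while the scatter of the surface normals supported by its particles exceeds the user-set threshold, so rough terrain is refined and flat terrain stays coarse.

    # Sketch of the split decision driven by the normal-variance thresholds
    # used above (0, 0.0005, 0.001, 0.002 in Figure 8). Details are assumed.
    def normal_variance(normals):
        """Scatter of unit normals about their mean direction."""
        n = len(normals)
        mean = [sum(v[i] for v in normals) / n for i in range(3)]
        return sum(sum((v[i] - mean[i]) ** 2 for i in range(3))
                   for v in normals) / n

    def should_split(triangle_normals, threshold):
        """Split where the local surface varies; keep flat regions coarse."""
        return normal_variance(triangle_normals) > threshold

    flat = [(0.0, 0.0, 1.0)] * 4                   # flat region: variance 0
    rough = [(0.0, 0.0, 1.0), (0.3, 0.0, 0.95),
             (0.0, 0.3, 0.95), (-0.3, 0.0, 0.95)]  # varying normals
    print(should_split(flat, 0.0005), should_split(rough, 0.0005))  # False True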

Conclusion

This paper has discussed several sensing technologies applicable to real-time three-dimensional imaging of terrain. The pros and cons of each sensor type were examined. Finally, an algorithm for performing real-time tessellation for presentation to a user was presented, along with several concrete examples using real GIS data.

References

K. Button and J. Wiltse (eds), Infrared and Millimeter Waves: Millimeter Systems, Academic Press, 1981, vol. 4.

P. Bhartia and I. J. Bahl, Millimeter Wave Engineering and Applications, John Wiley and Sons, 1984.

F. Nathanson, Radar Design Principles, McGraw Hill, 1969.

J. Leal, S. Scheding, G. Dissanayake, Probabilistic 2D Mapping in Unstructured Environments, Proceedings of the Australian Conference on Robotics and Automation, August 2000, Melbourne, Australia, pp. 19-24.

J. Manyika, An Information-Theoretic Approach to Data Fusion and Sensor Management, Ph.D. Thesis, Dept. of Engineering Science, University of Oxford, 1993.
