
Pictures from space for precision farming and ranching: are you smiling yet?

Jack F. Paris

Paris & Associates Inc.
1172 South Main Street, #255
Salinas, California, U.S.A., 93901-2204

(831) 769-9840 (Phone & Fax)
paris@redshift.com

Abstract

Today, many farmers and ranchers are using geospatial information technology. In most cases, this use involves the Global Positioning System (GPS), which, with even a cheap GPS receiver, provides absolute map-coordinates within a few meters. GPS-related geospatial information technology includes yield monitoring; variable-rate seeds, nutrients, pesticide, herbicide, and/or water applications; precision scouting and soil surveying, and even GPS-guided tractors. In addition, Geographic Information System (GIS) software can help in plotting and analyzing the resulting geodata.

This paper is about Pictures from Space (a.k.a., remote sensing) and how these could be an important source of up-to-date geoinformation in your overall farm or ranch management. However, to be most useful, expert services are needed to extract quantitative and consistent geoinformation from the pictures and to align this information with other GIS data layers. Until now, most efforts to provide Pictures from Space and/or the related geoinformation have not always been quantitative, consistent, sharp, easy to use, or timely enough to meet the needs of agriculture. Consequently, not many agriculturalists have incorporated Pictures from Space into their management activities.

This paper first introduces remote sensing technology as related to precision ranching and farming. Then, needed improvements are discussed. The paper finishes with a forecast of how remote-sensing services and products will likely evolve in the near future. Practical examples are included about how information from aerial imagery can be used for farm and ranch management.

Cameras in Space

“Smile! You’re on candid camera!” is the motto of a popular TV show. Ever since the early 1970s, cameras, whether candid or not, have been taking Pictures from Space of cropland and ranchland with the intention of improving farm and ranch management. For most of that time, these pictures have been too coarse, too infrequent, too boring (lacking useful information), too late, too difficult to get and to handle, and too expensive. So, they have given you little reason to smile. Now, in 2001, this will all change as many companies and even government agencies strive to address the difficulties of the past and provide useful information at last. This might even lead you to smile.

Let’s review the history of Cameras in Space and thereby bring you up to date.

Landsat in the 1970s and 1980s

On July 23, 1972, almost exactly 29 years before the date of this GIA conference, the U.S. National Aeronautics & Space Administration (NASA) launched the world’s first remote-sensing satellite designed specifically for land mapping and monitoring from an outer-space perspective – Landsat 1.

The primary multispectral camera on Landsat 1 was the Multispectral Scanner System (MSS). Landsats 2, 3, 4, and 5 (launched in 1975, 1978, 1982, and 1984, respectively) also included the MSS. MSS captured pictures in three spectral regions as 4 bands of imagery – GL, RL, and two NIR bands (see Table 1). Two of these pictures – GL and RL – are visible to human eyes. The other two – in the NIR – are not. MSS, and the multispectral cameras that later came to a space platform over your neighborhood (see Table 1), allowed the extraordinary perception of the otherwise invisible world of NIR and MIR imaging.

Table 1. Spectral Regions for Selected Spacecraft-Based Earth Mapping & Monitoring Imaging Systems (80-m or Better Resolution). See the text for definitions of spacecraft imaging systems.

General name of spectral region | Abbreviation (in this paper) | Approximate wavelength band (nanometers) | Spacecraft imaging systems that include this spectral region
Blue Light | BL | 400 to 500 | TM, LISS, ETM+, Ikonos, QuickBird
Green Light | GL | 500 to 600 | MSS, TM, LISS, XS, XI, ETM+, Ikonos, QuickBird
Red Light | RL | 600 to 700 | MSS, TM, LISS, XS, XI, ETM+, Ikonos, QuickBird
Near Infrared | NIR | 700 to 1,000 | MSS, TM, LISS, XS, XI, ETM+, Ikonos, QuickBird
Middle Infrared 1 | MIR1 | 1,500 to 1,800 | TM, LISS, XI, ETM+
Middle Infrared 2 | MIR2 | 2,000 to 2,400 | TM, ETM+
Thermal Infrared | TIR | 8,000 to 14,000 | TM, ETM+
Panchromatic | PanVIS | 500 to 700 | SPOT 1-3 Pan
Panchromatic NIR | PanNIR | 500 to 900 | LISS, ETM+, Ikonos, QuickBird

Spectral coverage and the number of “colors” or bands are not the only important characteristics of a digital camera. A camera’s spatial resolution describes its ability to record the details of a scene.

A digital camera collects pictures as an array (a.k.a. a raster) of picture elements (called pixels). Each MSS pixel recorded the brightness of a small square patch of ground that was about 80-m by 80-m, which is an area of 0.64 hectares (ha). Another way to express spatial resolution is in terms of the number of pixels per hectare. There were only about 1.6 MSS pixels per ha.
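This pixels-per-hectare arithmetic recurs throughout the paper, so a minimal sketch may help; the helper function below is illustrative only, with pixel sizes taken from Table 1 and the text.

```python
# Pixels per hectare: one hectare is 10,000 square meters, so the
# density is 10,000 divided by the ground area of a single pixel.
def pixels_per_hectare(pixel_size_m):
    return 10_000 / (pixel_size_m ** 2)

for name, size_m in [("MSS", 80), ("TM", 30), ("SPOT XS", 20),
                     ("SPOT Pan", 10), ("Ikonos MS", 4), ("Ikonos Pan", 1)]:
    print(f"{name}: {pixels_per_hectare(size_m):.1f} pixels/ha")
# MSS: 1.6, TM: 11.1, SPOT XS: 25.0, SPOT Pan: 100.0,
# Ikonos MS: 625.0, Ikonos Pan: 10000.0
```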

Starting with Landsat 4 (1982) and continuing with Landsat 5 (1984), NASA added a greatly improved 7-band multispectral camera called the Thematic Mapper (TM). The TM camera had significantly better band placement than the MSS did. In fact, TM covered all of the available broad-band regions of the electromagnetic spectrum – BL, GL, RL, NIR, MIR1, MIR2, and TIR – where remote sensing of the ground is possible through a cloud-free atmosphere (see Table 1).

Except for the coarser TIR band, TM pixels recorded the brightness of a patch of ground that was only 30-m by 30-m in size, which is an area of 0.09 ha. So, there were about 11 TM pixels per ha – a significant improvement over the spatial resolution of MSS (which had 1.6 pixels per ha). TIR pixels are 120-m by 120-m (1.44 ha) – only about 0.7 pixels per ha for these surface-temperature-sensitive TIR images.

TM also detected more levels of scene brightness than did MSS by factors of 2 to 4. This improved radiometric resolution increased the information content of TM greatly. In addition, the wavelength placements of the TM spectral bands were significantly better than MSS in terms of avoiding strong absorption by atmospheric gases and focusing on surface-material spectral differences (for land cover mapping).

During the operational phases of Landsats 1, 2, 3, 4, and 5, several other non-USA earth-imaging space satellites were placed into operation (all during the 1980s), as addressed in the following subsections.

SPOT in the 1980s

France launched the first SPOT Image Corporation satellite – SPOT 1 – in 1986. At about the same time, the US Government privatized Landsat operations. Amazingly, SPOT 1 is still in operation today! These two events opened the Age of Commercialism for sources of Pictures from Space.

With its smaller 20-m by 20-m pixels, the SPOT multispectral camera (commonly called XS) broke the 30-m spatial-resolution limit that the US Government had imposed on non-military US cameras in space. However, XS covers only 3 spectral regions – GL, RL, and NIR – in 3 bands (one in each region). SPOT 1 also carried the first pushbroom imager. This approach to picture taking also led to better geometric fidelity along a given image line than is the case for mirror-scan imagers like TM.

An XS pixel sees a ground area of 20-m by 20-m in size, an area of 0.04 ha. So, SPOT XS significantly raised the pixel density to 25 pixels per ha (compared to 11 pixels per ha for TM). In addition, SPOT 1 has a 10-m by 10-m resolution panchromatic band, which produces 100 pixels per ha. A smile was beginning to form on the faces of many users!

As Table 1 indicates, there are two basic kinds of panchromatic bands on cameras in space. One covers only the spectral region from 500 to 700 nanometers; I call this PanVIS (since it covers mostly the visible region: 400 to 700 nanometers). Other “panchromatic” imagers (put in space later) extend well into the NIR region; I call these PanNIR.

SPOT Panchromatic, with its 530- to 730-nanometer band, is an example of PanVIS.

Indian Remote Sensing Satellite in the 1980s

The multispectral cameras on Indian Remote-Sensing (IRS) satellites are called LISS (for Linear Imaging Self-Scanning). IRS-1A’s LISS-I, launched in 1988, had a spatial resolution of 72 m, which is similar to MSS; but it collected images in bands similar to the first 4 spectral regions and bands of TM – BL, GL, RL, and NIR. Later, many IRS satellites, with many improved LISS cameras, were launched in the 1990s (as discussed below). There are so many kinds of IRS satellites and LISS cameras that it is difficult to keep up with their varied characteristics.

Landsat in the 1990s

The 1990s started off with a disaster: Landsat 6 – launched in 1993 – failed to make orbit. Smiles began to fade. Fortunately, Landsat 5 continued to function, well beyond its original design lifetime of 3 years!

In 1999, NASA launched Landsat 7, which carried an improved TM camera called the Enhanced Thematic Mapper Plus (ETM+). With Landsat 7, the US Government returned to the tax-supported data policy of the first Landsats. While ETM+ data are not free, they are cheap. For as little as $600 (U.S.), a full scene of ETM+ data can be purchased within about a week of its acquisition. This is considerably less expensive than Landsat 5 data, which had been going for $4,400 (U.S.) for the same scene. In addition, ETM+ has an 8th band – a 15-m resolution panchromatic (PanNIR, actually) band – and an improved-resolution TIR band (60-m instead of 120-m).

Some end users not only smiled, but also began to laugh!

If ordered as Level 1R, ETM+ data are provided on a 16-bit scale that preserves the 11 bits of radiometric accuracy of the radiance-corrected image data. Compared to the 11 pixels per ha density of the multispectral ETM+ or TM bands, ETM+ Band 8 (the PanNIR band) has 44 pixels per ha. This is not as good as the 100 pixels per ha of SPOT panchromatic images; but it is an image that, when corrected for elevation differences, can meet mapping scales of 1:24,000, a standard in the U.S. for topographic mapping. Some frowns appeared when users realized that they could not have both the excellent 16-bit radiometric resolution and excellent geometric accuracy in the same Landsat Level 1 product. [To discuss this at length would require another paper.] So, let’s look at other developments of Cameras in Space in the 1990s.

SPOT in the 1990s

SPOT Image Corporation launched three more SPOT satellites in the 1990s – SPOT 2, 3, and 4 (in 1990, 1993, and 1998, respectively). SPOT 2 and 4 are still operational today, joined by the resilient SPOT 1. For these satellites, there was no change in spatial resolution from the original SPOT 1.

However, SPOT 4’s multispectral camera (called XI) was different from the previous SPOT cameras: one MIR1 band was added. For agricultural applications, this allows significantly better mapping of soil properties, especially peat soils versus non-peat soils. The SPOT 4 panchromatic band was also changed, to be sensitive only to RL. Apparently, this was done to provide high-resolution (10-m) images based on RL for the purpose of better characterizing the chlorophyll content of plants; absorption by chlorophyll is strongest in the RL region. SPOT Image Corporation has decided to abandon this narrow-band “panchromatic” design when it launches SPOT 5 in 2002. With SPOT 5, they will return to the older PanVIS concept.

Indian Remote Sensing Satellite in the 1990s

IRS-1B, with LISS-II cameras, was launched in 1991 and had a spatial resolution of 36 m, which is similar to TM. The spectral coverage was unchanged – BL, GL, RL, and NIR. In 1995, IRS-1C with LISS-III was launched. It carried a 5.8-m panchromatic (PanNIR) camera and a multispectral camera sensitive to GL, RL, and NIR with 23-m pixels and to MIR1 with 70.5-m pixels. IRS-1D, in an unplanned elliptical orbit, has characteristics similar to IRS-1C. With a PanNIR resolution of 5.8 m, IRS-1C and IRS-1D produce about 297 pixels per ha – about 3 times the density of the SPOT panchromatic pictures. Those who need high-resolution pictures broke out into a wide grin when IRS panchromatic pictures became available.

High-Resolution Commercial Cameras of the 21st Century

In 1999, Space Imaging, Inc. launched its Ikonos satellite. Ikonos has an integrated multispectral (BL, GL, RL, and NIR) and panchromatic (PanNIR) camera (also called Ikonos). The spatial resolution of the Ikonos camera is 4-m by 4-m pixels for the multispectral bands and 1-m by 1-m pixels for the PanNIR band (for pictures taken at nadir – directly beneath the satellite); thus, the pixel density is 625 pixels per ha for the multispectral pictures and 10,000 pixels per ha for the PanNIR band.

In 2000, EarthWatch Inc. launched a similar multispectral (BL, GL, RL, NIR) plus PanNIR camera/satellite called QuickBird 1. Unfortunately, QuickBird 1 failed to reach orbit. Undaunted, EarthWatch Inc. plans to launch QuickBird 2 in October 2001. QuickBird 2 will have the same spectral bands as QuickBird 1 (and as the Ikonos camera). However, the spatial resolution of QuickBird 2 will be much improved over that of Ikonos. The pixels will be 2.5-m by 2.5-m for the multispectral bands and 0.61-m by 0.61-m for the PanNIR band. When operational in early 2002, QuickBird 2 will produce 1,600 pixels per ha for the multispectral bands and an incredible 26,900 pixels per ha for the PanNIR band. Thus, QuickBird 2 will be able to meet, from space, mapping standards associated with large-scale airborne photography.

QuickBird 2 will cause customers to smile from ear to ear!

Information from Pictures from Space

So far in this paper, I have talked about the many, many cameras that have been, are, or will be in space. While a picture is truly worth a thousand words, there is more to the value of a picture than the density of its pixels or even the number of its spectral bands. It is possible, depending on the camera, to obtain quantitative information about plants and soils from the pictures. However, the characteristics of the camera system and of the ground processing have to be adequate to allow the information to be useful and timely for agricultural customers.

From the beginning of terrestrial remote sensing in 1972 with Landsat 1 to the incredible anticipated capabilities of QuickBird 2 in 2001, NASA has heralded agriculture as one of the major applications for spacecraft-based multispectral pictures. During these nearly three decades, NASA and many international government agencies have sponsored hundreds of research efforts aimed at exploring and developing agricultural applications for multispectral images from spacecraft.

The most common denominator among the many kinds of space-borne cameras (listed in Table 1) is the inclusion of three particular spectral bands – GL, RL, and NIR. These three spectral bands have a longer history than even the Landsat series: they go back to World War II and the invention of color-infrared (CIR) photography.

CIR photography was used in World War II for camouflage detection. Artificial green vegetation that looks real to ordinary human eyes appears, in CIR photography, very different from live green vegetation. CIR photography’s usefulness is based on the fact that dense leafy green biomass has a very high NIR reflectance (up to 60%) and a very low RL reflectance (down to 3%). Thus, the ratio of NIR reflectance to RL reflectance for leafy green biomass is as high as 20. In contrast, bare soil has much more nearly equal reflectances in these two spectral bands, with a NIR-to-RL reflectance ratio of about 1.15.

On a standard CIR photograph, dense green leafy vegetation has bright, saturated red colors while bare soil has dark to medium-bright cyan (a combination of blue and green) colors. CIR colors are called false color since humans cannot see the NIR component of reflected sunlight (not without assistance from technology). Trees and shrubs have darker, but still saturated, red colors. The physical cause of these CIR colors is the significant amount of chlorophyll pigments in green plants: chlorophyll strongly absorbs RL and only weakly absorbs NIR radiation. In addition, the buildup of leaf area index (total leaf area per unit ground area) raises NIR reflectance to a high level of about 60% (dependent on leaf angles).

Another advantage of CIR photography is the absence of blue light (BL) in the pictures. Since about half of the BL heading from the surface toward a camera on an aircraft or spacecraft actually comes from the atmosphere, pictures in BL look hazy. CIR, being responsive only to GL, RL, and NIR, exhibits significantly less atmospheric haze. Also, water bodies on the surface are very distinct in CIR photos (as compared to natural-color photos). CIR photos make image analysts smile a lot!

Thousands of photo interpreters learned the value of CIR photography for a wide variety of mapping needs. They also fully appreciated the value of the spatial detail visible in panchromatic photography. Natural-color photography is useful too, due to the subtle effects of plant pigments and soil composition on its colors. Thus, modern digital camera systems like QuickBird 2 and Ikonos seek to take advantage of all five of their spectral bands – BL, GL, RL, NIR, and PanNIR (the highest resolution).

NASA’s biggest program with respect to agriculture, conducted in conjunction with the U.S. Department of Agriculture (USDA), was the AgRISTARS Program, where AgRISTARS stands for Agriculture and Resources Inventory Surveys Through Aerospace Remote Sensing.

Researchers developed many quantitative ways and means for combining two or more MSS-type spectral bands into numerical indicators (indexes) of specific vegetation and soil properties. In fact, except for a few algorithms, most of these efforts concentrated on just two bands – RL and NIR. Investigators soon learned about another advantage of the spacecraft perspective over the aircraft perspective. To cover a large area, an aircraft-based camera must view a scene over a wide range of viewing angles (from 0 to 45 degrees in one picture). From an earth-orbiting spacecraft, the range of viewing angles is much smaller – only a few degrees. With a narrow range of viewing angles, the variations in digital brightness are driven by variations in surface reflectance rather than by variations in atmospheric path length. A narrow range of view angles also reduces the bidirectional-reflectance effects that cause hot spots and darkened areas in aircraft-based photography. Thus, information-extraction algorithms work better with spacecraft-based images than with aircraft-based images.

Vegetation Indexes (VIs)

The well-known, significant contrast between RL reflectance and NIR reflectance for dense herbaceous green vegetation, as compared to bare soils, has been noted above. Before going further, it is useful to consider how reflectance (a property of surface materials) is related to radiant parameters such as radiance, L, and irradiance, E.

Let’s do this in a mathematical way (since the analysis of Pictures from Space is often done with the aid of mathematical algorithms in computer programs). I will also use some Greek symbols (as used in much of the remote-sensing literature).

Let ρ be the (hemispheric-to-hemispheric) reflectance factor for a reflective surface material – perhaps a complex mixture of plant and soil materials, including the effects of shadows and of radiative transfer in the vegetative canopy. The modifier hemispheric-to-hemispheric is used here to indicate that reflectance is a comparison between upwelling radiation (over all possible angles) and downwelling solar radiation (over all possible angles). In practice, most of the downwelling solar radiation comes from the small part of the sky where the sun is.

If the collection of surface materials behaves like a diffuse scattering medium, then

ρ = π L / E (Eq. 1)

where π = 3.1415927…, L is the upwelling radiance referenced at the surface, and E is the downwelling irradiance referenced at the surface. To be diffuse, L must be the same for all upwelling directions of travel. A camera that is close to the surface records, for each pixel, a signal proportional to L.

With the right equipment, almost anyone can measure L and E at the surface and then use Equation (1) to estimate the value of ρ. Normally, ρ is estimated by comparing L from a diffuse, non-absorbing reference panel (where ρ = 1) to L from the object being measured (with E being the same for both measurements of L).
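As a minimal sketch of Equation (1) and the reference-panel method just described – the function names and numbers are illustrative, not from the paper:

```python
import math

def reflectance_from_radiance(L, E):
    """Equation (1): rho = pi * L / E, valid for a diffuse surface."""
    return math.pi * L / E

def reflectance_from_panel(L_target, L_panel, rho_panel=1.0):
    """Field practice: compare target radiance to a diffuse reference
    panel viewed under the same irradiance E, so E cancels out."""
    return rho_panel * L_target / L_panel

# Example: a canopy whose radiance is 60% of the white panel's.
print(reflectance_from_panel(L_target=12.0, L_panel=20.0))  # 0.6
```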

If L is observed to vary with different look-down angles, then a more complicated equation is needed. But, for purposes of illustration in this paper, assume that Equation (1) is valid. The point here is that estimating the reflectance of plants and/or soils, or of mixtures of these, is relatively straightforward if you are on the ground.

But this paper is about remote sensing. Remote here implies that the observer is nowhere near the ground. In fact, the observer may be well above the ground in an aircraft or a spacecraft. Or, at least, the camera is far above the ground.

If you are using a calibrated camera (a very important kind of camera to use for quantitative remote sensing), then reasonably good measurements can be made of the apparent radiance of the ground, LC, as seen through the atmosphere. A simple mathematical relationship exists between LC, as measured from an aerial platform, and L, as measured just above the ground, as follows:

LC = LP + L tP (Eq. 2)

where LP is the component of LC that comes from the atmospheric path (between the aerial platform and the ground) and tP is the transmittance (a fraction between 0 and 1) describing the loss that L suffers as the radiation travels from the ground to the aerial platform.

If LP and tP are known (through some kind of atmospheric calibration method), then it is a simple matter to invert Equation (2) to estimate L from the remotely sensed measurement of LC.
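A minimal sketch of that inversion, assuming LP and tP have already been supplied by some atmospheric-calibration method (the numbers are illustrative only):

```python
def surface_radiance(L_C, L_P, t_P):
    """Invert Equation (2): L = (LC - LP) / tP."""
    if not 0.0 < t_P <= 1.0:
        raise ValueError("t_P must be a fraction between 0 and 1")
    return (L_C - L_P) / t_P

# Example: haze adds path radiance (L_P) and dims the signal (t_P).
L = surface_radiance(L_C=18.0, L_P=6.0, t_P=0.8)  # -> 15.0
```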

An important point, which I want to make by resorting to this simple mathematical modeling, is that the digital picture (taken from space or even from an aircraft) contains scene-brightness components (which may be in terms of LC or in terms of a brightness number in an image raster related to LC) that are NOT the same as the surface reflectance factors, ρ. Only in the case of perfectly transparent air, where LP is zero and tP is 1, does the observed radiance, LC, equal the radiance, L, coming from the ground at the level of the ground.

Furthermore, even if this were the case (it is nearly true only for very long wavelengths in the MIR2), you would still have to know the downwelling irradiance, E, in order to estimate the reflectance, ρ. It should be obvious that estimating E is perhaps even more difficult than estimating LP and tP (if you are not on the ground at the time that the picture from space was taken).

Information-extraction algorithms are all based on combinations of reflectance values for two or more spectral bands – NOT on combinations of raw digital picture data. However, many users of digital picture data make exactly this mistake: they try to extract information from the raw digital picture data instead of from estimated reflectance-factor data. Ignoring the effects of the atmosphere, and probably also the effects of sensor calibration (which relates raw digital picture numbers to LC, the radiance at the camera), will cause the resulting plant and/or soil information to be inaccurate and inconsistent from date to date, and possibly within a given scene.
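Putting the pieces together, here is a hedged end-to-end sketch of the chain just described: raw digital number (DN) to at-sensor radiance LC, then to ground radiance L, then to reflectance ρ. The linear gain/offset sensor model and every constant below are assumptions for illustration; real values come from the sensor’s calibration data and an atmospheric-correction method.

```python
import math

def dn_to_reflectance(dn, gain, offset, L_P, t_P, E):
    L_C = gain * dn + offset   # sensor calibration: DN -> LC (assumed linear)
    L = (L_C - L_P) / t_P      # Equation (2), inverted: LC -> L
    return math.pi * L / E     # Equation (1): L -> rho

# Hypothetical NIR-band pixel; all parameter values are made up.
rho_nir = dn_to_reflectance(dn=142, gain=0.1, offset=1.0,
                            L_P=4.0, t_P=0.85, E=70.0)  # ~0.59
```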

So, let’s look at some examples of algorithms for plant- and soil-information extraction. These examples are all based on two spectral bands – RL and NIR. In every case, the raw digital picture data need to have been converted to accurate reflectance-factor estimates.

One of the earliest algorithms for indicating the amount of green vegetation present in an agricultural field is the Difference Vegetation Index (DVI), which is:

DVI = ρNIR - ρRL (Eq. 3)

Suppose that (ρRL , ρNIR) = (0.17, 0.20) for bare agricultural soil (BAS) and (0.03, 0.60) for dense herbaceous green vegetation, DHGV (100% cover with a high green leaf area index, GLAI). For the BAS, DVI would be 0.03. For the DHGV, DVI would be 0.57. This is certainly a significant change in DVI. For simple linear mixtures of BAS and DHGV – say, 50% each – DVI would be exactly in the middle of these two extremes, i.e., DVI would equal 0.30. But this kind of simple mixture occurs only for short DHGV canopies in short row crops where the percent cover of DHGV is 50% and the percent cover of BAS is 50%; also, there can be no shadows, as might be the case at midday with north-south rows. In addition, it would be useful to have a VI with a value of 0.00 for BAS and a value of 1.00 for DHGV. Such a VI would then be numerically equal to the fractional cover of DHGV.
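The arithmetic of that example, as a small sketch (the endmember reflectances are the ones quoted in the text):

```python
def dvi(rho_nir, rho_rl):
    """Difference Vegetation Index, Equation (3)."""
    return rho_nir - rho_rl

bas = {"rl": 0.17, "nir": 0.20}   # bare agricultural soil
dhgv = {"rl": 0.03, "nir": 0.60}  # dense herbaceous green vegetation

print(dvi(bas["nir"], bas["rl"]))    # 0.03
print(dvi(dhgv["nir"], dhgv["rl"]))  # 0.57

# 50/50 linear mixture: the reflectances mix first, then the index.
f = 0.5
rl_mix = f * dhgv["rl"] + (1 - f) * bas["rl"]     # 0.10
nir_mix = f * dhgv["nir"] + (1 - f) * bas["nir"]  # 0.40
print(dvi(nir_mix, rl_mix))  # 0.30
```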

VIs of the type expressed in Equation (3) are called Perpendicular Green Vegetation Indexes (PGVI). A more general expression would be as follows:

PGVI = aNIR ρNIR + aRL ρRL (Eq. 4)

where a is a fixed coefficient for each spectral band. In the case of Equation (3), aNIR = +1.00 and aRL = -1.00.

If more than two bands are involved, e.g., the 6 reflective multispectral bands of TM or ETM+, then Equation (4) becomes a more general linear combination, such as

PGVI = a1 ρ1 + a2 ρ2 + a3 ρ3 + a4 ρ4 + a5 ρ5 + a7 ρ7 (Eq. 5)

Historically, Equation (5), with the proper coefficients, is the Kauth-Thomas Tasseled Cap Greenness formula. Similar equations exist for Soil Brightness, Yellowness, Wetness, and Non-Such.
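As a sketch of Equation (5) in code: a greenness-style index is just a dot product over the six reflective-band reflectances. The coefficients below are the widely cited reflectance-based TM Tasseled Cap greenness values; treat them as illustrative assumptions and verify against the original Kauth-Thomas and Crist publications before use.

```python
# Assumed (commonly cited) reflectance-based TM greenness coefficients.
GREENNESS = {"b1": -0.2848, "b2": -0.2435, "b3": -0.5436,
             "b4": +0.7243, "b5": +0.0840, "b7": -0.1800}

def linear_index(rho, coeffs=GREENNESS):
    """Equation (5): a fixed linear combination of band reflectances."""
    return sum(coeffs[b] * rho[b] for b in coeffs)

# Dense green vegetation: low RL (b3), high NIR (b4).
rho = {"b1": 0.04, "b2": 0.06, "b3": 0.03,
       "b4": 0.60, "b5": 0.20, "b7": 0.10}
print(linear_index(rho))  # ~0.39 (strongly positive for a green canopy)
```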

Unfortunately, perpendicular indices of the form in Equation (5) have been found to be less useful than other formulations for estimating certain plant and soil properties, such as Green Leaf Area Index (GLAI), the amount of Absorbed Photosynthetically Active Radiation (APAR), Green Biomass Density (GBD), and even Percent Green Vegetative Cover (PGVC). Nevertheless, Kauth-Thomas-type transformations remain popular and appear in even the most recent papers in the scientific remote-sensing literature (see Figure 1).

Figure 1 shows several sets of isovegetation points [labeled for GLAI ranging from 0 (bare soil) to 4 (dense herbaceous green vegetation)]. The variations within each GLAI category are due to differences in the soil background, which varied from very dark to very light soils.

Lines of constant PGVI, when plotted over these points, show clearly that PGVI is not a good indicator of the amount of green vegetation, except when GLAI < 0.25 m2 m-2.

Figure 1. Isovegetation points with PGVI isolines.

Figure 2. Isovegetation points with NDVI isolines.

A significantly different VI formula is NDVI (see Figure 2). NDVI is defined in terms of the reflectance factors [see Equation (6)], NOT in terms of raw digital picture brightness values.

Most users of the NDVI formula, which is found in every image-processing software package, do not calibrate the raw digital picture data to reflectance factors BEFORE applying the NDVI formula.

Figure 2 shows that isolines of NDVI fit the isovegetation lines better than did isolines of PGVI (Figure 1).

NDVI is defined as follows:

NDVI = (ρNIR - ρRL) / (ρNIR + ρRL) (Eq. 6)

Note that NDVI is equal to DVI divided by the sum of ρNIR and ρRL. Many investigators have found that NDVI is better than PGVI. The reason, I believe, is that NDVI expresses better than DVI or PGVI the way that plant and soil spectral properties mix in nature. That is, vegetation usually exists in a more-or-less continuous canopy over the soil background. So, the spectral mixing occurring between vegetation spectra and soil spectra is modulated by and through a translucent vegetation medium.
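NDVI per Equation (6), applied to the same illustrative endmembers used in the DVI example above:

```python
def ndvi(rho_nir, rho_rl):
    """Normalized Difference Vegetation Index, Equation (6)."""
    return (rho_nir - rho_rl) / (rho_nir + rho_rl)

print(ndvi(0.20, 0.17))  # bare agricultural soil -> ~0.08
print(ndvi(0.60, 0.03))  # dense green canopy     -> ~0.90
```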

While NDVI performs better than PGVI under widely varying GLAI conditions, NDVI is far from perfect. The isovegetation points cut across lines of constant NDVI. Thus, as has often been observed, variations in soil background cause NDVI to change even when the green vegetation canopy (i.e., GLAI) has not changed.

In the late 1980s and 1990s, better models of spectral mixing were developed, and creative experiments were performed in the field to investigate how isovegetation points behaved in NIR-versus-RL plots (like those in Figures 1 and 2). This eventually led to new ideas about vegetation indices: the soil-adjusted vegetation indices – whether transformed, atmospherically adjusted, or modified in various ways – perform better than NDVI or PGVI (one member of this family is sketched below). There is not time enough, nor room enough in this paper, to explore all of these ideas. It is sufficient, I believe, to state that it is time for plant-and-soil information providers (1) to pay attention to the need to calibrate raw digital picture data to reflectance factors and (2) to stop using PGVI or NDVI, which do not perform as well as more modern expressions.
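The paper does not single out which soil-adjusted index to prefer; as one well-known member of the family, here is a sketch of SAVI (Huete, 1988), which adds a soil-adjustment constant to the NDVI denominator:

```python
def savi(rho_nir, rho_rl, l=0.5):
    """Soil-Adjusted Vegetation Index. l = 0.5 is the customary value
    for intermediate vegetation densities; l = 0 reduces SAVI to NDVI."""
    return (1.0 + l) * (rho_nir - rho_rl) / (rho_nir + rho_rl + l)

print(savi(0.20, 0.17))  # bare agricultural soil -> ~0.05
print(savi(0.60, 0.03))  # dense green canopy     -> ~0.76
```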

The excellent understanding that has emerged out of three decades of research should give everyone reason to smile.

Beyond Cameras and Information Extraction Algorithms

With all of the cameras (TM, ETM+, XS, XI, LISS, Ikonos, and soon QuickBird) that are operating on earth-orbiting spacecraft (Landsat, SPOT, IRS, Ikonos, and soon QuickBird 2) and with all of the information extraction algorithms that have been developed and verified, one would think that there should be more farm and range managers using Pictures from Space and the geoinformation that can be extracted from them.

But, in fact, only a few agriculturalists are using these sources of geoinformation. Something else appears to be missing. What might that be?

Timeliness of Delivery Issue

Except for the low-resolution weather satellites, every source of Pictures from Space has had difficulty providing digital images to expert analysts and/or to end users in a timely manner. Many studies have shown that production agriculture needs access to fresh information within 72 hours of the time that the related pictures were acquired. Some aircraft-based vendors have met this timeliness need in limited markets. However, aircraft-based pictures present a host of other problems that make the extraction of GIS-ready, quantitative, and consistent information difficult.

Frequency of Coverage and Length of Coverage

Very often, would-be end users of Pictures from Space are able to obtain these data only once in a while. As a result, continuity from date to date and coverage throughout a growing season are not an option.

Cost

In some cases, purchasers of Pictures from Space are asked to buy or lease access to whole scenes of data when a small part of a scene is all that is needed. Relatively inexpensive digital Pictures from Space do exist, e.g., Landsat 7 ETM+, which costs only $600 (U.S.) per scene for about 31 thousand km2 (about 3.1 million ha) of coverage – only about 0.02 cents (U.S.) per ha!

Lack of Geospatial Information Technology Expertise on the Farm or Ranch

Researchers involved in the development of ways and means of handling digital imagery (Pictures from Space) usually are experts in most, if not all, of the essential components of geospatial information technology:

Remote Sensing:

  • How light interacts with vegetation and soils and how variations in plant and soil properties affect reflectance spectra
  • How light interactions with gases and particles in the atmosphere affect spectral signatures of plants and soils as seen from an aircraft or spacecraft
  • How illumination and viewing geometries affect the relationship between multispectral images and plant and soil properties

Image Processing:

  • How to enhance (better visualize) image data
  • How to calibrate raw image data to reflectance rasters
  • How to use algorithms to extract information from one or more sets of Pictures from Space
  • How to manage and use complex image processing software

GIS:

  • How to work with various GIS formats for image data and for vector, CAD, TIN, and other non-image GIS data
  • How to georeference raw digital image rasters to standard map projections and datums
  • How to correct distorted imagery to conform to a given map projection and datum (orthorectification)
  • How to provide GIS-ready products in formats that are compatible with a host of end-user GIS software packages (or perhaps to other non-GIS software packages that might be used to view the products)
  • How to serve GIS-data via the Internet (for end users who do not have a functioning GIS)

It is unlikely that all of these skills exist among the staff employed by farm or ranch managers.

Consistency of Extracted Geoinformation

Due mostly to the lack of geospatial information technology expertise (noted above), geoinformation products from Pictures from Space often are inconsistent and insufficiently quantitative. The real information content of a picture can be diluted or even eliminated during subsequent handling. This especially affects the assessment of changes in plant and/or soil conditions from date to date.

Conclusions and a Look Towards the Future

As you can see, there are many cameras taking Pictures from Space. A few companies are offering expert geospatial information services to help agriculturalists gain access to these Pictures. Until now, no one has met the needs of the marketplace for timely, frequent, quantitative, consistent, and easy-to-use space-based agricultural information products at reasonable prices and at the range of spatial resolutions appropriate to mapping and monitoring needs.

No one, that is, until EarthWatch Incorporated began its AgroWatch™ Program.

EarthWatch’s AgroWatch Program

I conclude with examples from the new AgroWatch™ Program of EarthWatch Incorporated in Longmont, Colorado, U.S.A. I have been an advisor to EarthWatch for about one year. I have helped them develop state-of-the-art proprietary algorithms and processing approaches that address all of the shortcomings discussed in this paper. Recently, EarthWatch Incorporated started offering products, based on existing multispectral satellite data, to targeted test markets within the U.S. One of those test markets is in California. EarthWatch has a strategy to bring these products to a wider world market in 2002 and beyond.

Customers are enrolled in the program and provided with initial AgroWatch™ Startup Products (example below). Then, EarthWatch schedules satellite multispectral image-data acquisitions on a weekly basis, with the data delivered to EarthWatch within a few hours of collection. Within 48 hours (or less), EarthWatch provides customers with an AgroWatch™ Update Products set. These GIS-ready files, in industry-standard formats, are ready to be used with any of the commercial off-the-shelf GIS software packages. Each Update Products set includes:

  • A calibrated Green Vegetation (GV) Index Map
  • A calibrated Soil Zone (SZ) Map (designed to complement the GV Map)
  • A calibrated GV Change Map, called a ScoutAide™ (SA) Map (SA Maps are available starting with the 2nd AgroWatch™ Update Products set).
  • A metadata file (industry standard descriptive information about the GIS-ready products being provided: GV, SZ, and SA).

AgroWatch™ Update Products (examples on the following pages) are shipped to customers as email attachments. These GIS-ready products are extracted to the extents of the customer’s Areas of Interest (AOIs), which may be as small as one section of land (about 260 ha or 640 acres). The products are based on orthorectified and calibrated Pictures from Space (from selected existing sources) and are presented as georeferenced rasters having cell sizes appropriate to the source (e.g., 10-m cells for the 2001 test market).
