Image maps
a covering term for the least abstract map type
Imaging systems
- Cameras
- Optical light paths to a sensing plane (two dimensional)
- Image captured at once.
- Examples:
- classical camera with film
- digital (2D) sensing plate (digital camera)
- Scanners
- Moves a sensing instrument through two dimensions (sweep: LANDSAT)
- OR sweeps a 1D sensor through the other dimension (push broom: SPOT)
- One element of the motion can be the flight path of the platform
(aircraft or satellite)
Analogue or Digital
- Film uses silver halides, dyes on film grains (somewhat
uniformly distributed, but not exact)
- Digital sensors use rectangular geometry, digital counts
summed for a PIXEL (picture element)
Resolution is a major concern with either system, but
measured differently...
More on the topic of remote sensing instruments in next
lecture
Image Vantage Point
Yes, here the perspective does matter.
Images are map-like when vertical (or nearly so)
Stereo coverage comes from two views from somewhat different
angles
<diagram>
Oblique perspectives (out an airplane window) are not map-like
How does an image work?
- Little abstraction
- Each location is represented by its reflectance (or emission)
in some part of the energy spectrum.
- Textures built up by variation between values
- Patterns of light and shade give clues to 3D shapes (variations
in illumination)
Interpreted in terms of the graphic
variables applied to maps:
CANNOT change size, texture, shape OF THE SYMBOLS
Can ONLY change color : Hue, lightness, saturation
Hue assigned by allocating a given dye to a given emulsion
- "true" color: dyes match sensitivity of sensor
(emulsion, scanner)
- "false color" : senses nonvisible bands (such as
infrared), assigns to a visible dye (usually red)
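The band-to-dye assignment above can be sketched digitally. A minimal illustration, assuming three hypothetical band arrays (the small arrays and values here are made up for demonstration, not real sensor data): a standard false-color composite shows the nonvisible near-infrared band as red, and shifts the red and green bands to green and blue.

```python
import numpy as np

# Hypothetical digital counts (0-255) for a tiny 2x2 scene;
# real data would come from a sensor such as SPOT or LANDSAT.
near_infrared = np.array([[200, 180], [50, 60]], dtype=np.uint8)
red_band      = np.array([[80, 90], [40, 45]], dtype=np.uint8)
green_band    = np.array([[70, 75], [35, 40]], dtype=np.uint8)

# False-color assignment: NIR -> red channel, red -> green, green -> blue.
false_color = np.dstack([near_infrared, red_band, green_band])
print(false_color.shape)  # (2, 2, 3): rows, columns, display channels
```

Vegetation reflects strongly in the near infrared, which is why it appears bright red in composites built this way.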
Geometry:
- Camera works by similar triangles <diagram>
- Flight height and lens control the scale of the image (distance
on sensor :: distance on ground)
- Satellites fly smoothly, but aircraft "crab" due
to wind, change height (scale) ...
- Central axis of lens (principal point) may not be pointed
directly "down" (plumb = gravity)
- Scale variation: outer corners of image at much different
angles; radial displacement
- Cameras carefully calibrated with "fiducial marks"
(distances on film plane measured)
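The similar-triangles geometry above gives two standard formulas. A small sketch with illustrative numbers (the focal length, flying height, and object height are assumptions, not values from the lecture): photo scale is focal length over flying height, and the radial displacement of a tall object grows with its distance from the principal point.

```python
# Scale of a vertical aerial photo by similar triangles:
#   scale = focal_length / flying_height   (same units for both)
focal_length_m = 0.152     # a common 152 mm mapping-camera lens (assumed)
flying_height_m = 3040.0   # height above ground (assumed)

scale = focal_length_m / flying_height_m
print(f"scale 1:{round(1 / scale)}")   # 1:20000

# Relief (radial) displacement of a point imaged at radial distance r
# from the principal point:  d = r * h / H,  h = object height.
r_mm = 80.0   # radial distance on the photo, in mm (assumed)
h_m = 100.0   # object height above the datum (assumed)
d_mm = r_mm * h_m / flying_height_m
print(f"displacement {d_mm:.2f} mm")   # 2.63 mm, pushed radially outward
```

This is why outer corners of the image show the largest displacement, and why orthophoto correction needs elevation data.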
Correction process
"Orthophotographs" removes center point distortion;
all points from directly above.
Originally done with mirrors, regenerating the optical image,
now done digitally, linked to measurement of height (digital elevation
matrix).
Accessing images:
Repeated coverage, high resolution, and the small format of sensors
make image collections bulky
Mosaics: composites of multiple images (if not corrected, they
will not match perfectly)
Indexes: maps showing center points of images (rather than edges)
grouped by flight lines
Products derived from images
Interpretation
drawing lines, distinguishing objects, using the image as base
(layer model of maps)
Classification
converting the measurements of dark and light (usually in different
spectral bands) into recognized classes, usually through some form of
cluster analysis: pixels with similar values are probably reflecting
off similar materials... see the Remote Sensing lectures
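The cluster-analysis idea can be sketched with a bare-bones k-means loop (the pixel values and the choice of two clusters are illustrative assumptions; real classifiers are more elaborate): pixels whose band values sit close together end up in the same class.

```python
import numpy as np

# Each row is one pixel's digital counts in two spectral bands
# (made-up numbers: two dark pixels, two bright pixels).
pixels = np.array([[10., 12.], [11., 13.], [200., 190.], [198., 195.]])

centers = pixels[[0, 2]].copy()   # seed two cluster centers
for _ in range(10):
    # assign each pixel to its nearest center (Euclidean distance)
    dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    # move each center to the mean of its assigned pixels
    centers = np.array([pixels[labels == k].mean(axis=0) for k in range(2)])

print(labels)  # dark pixels fall in one class, bright in the other
```

Each resulting class is then given a ground-cover interpretation (water, vegetation, bare soil, ...) by the analyst.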
Version of 13 January 2000