Camera Systems & Sensors

Metric and non-metric cameras, digital frame and linear array sensors, interior orientation, camera calibration, and integration with LiDAR and multispectral systems.

Overview#

The camera is the fundamental measurement instrument in photogrammetry. Unlike a consumer camera optimized for visual appeal, a photogrammetric camera is an optical-mechanical system engineered for geometric precision. Every three-dimensional coordinate extracted from imagery depends on knowing -- with high accuracy -- the internal geometry of the camera that captured the photographs. Without that knowledge, the mathematical relationship between image coordinates and ground coordinates breaks down.

Photogrammetric cameras have evolved dramatically over the past century: from glass-plate film cameras in the early 1900s, through large-format film aerial cameras that dominated mapping from the 1940s through the 1990s, to the digital frame and linear array sensors that define modern practice. Throughout this evolution, the core requirement has remained constant -- the camera's internal geometry must be known, stable, and reproducible.

"The camera is the basic instrument used in photogrammetry. In a broad sense, the camera exposes a sensitized medium (film or digital sensor) to light reflected from objects, thereby creating an image that is a central perspective projection of the objects." -- Wolf, Dewitt & Wilkinson, Elements of Photogrammetry with Applications in GIS (4th Ed.), Ch. 4, p. 63

For exam preparation, understanding camera systems means understanding the difference between metric and non-metric cameras, how interior orientation parameters define the camera's geometry, what camera calibration accomplishes, and how modern digital sensors have transformed data acquisition. These concepts underpin everything that follows in photogrammetric processing.

Metric vs. Non-Metric Cameras#

The distinction between metric and non-metric cameras is foundational. A metric camera is one that has been designed and calibrated for photogrammetric measurement. Its internal geometry is stable, known, and documented in a calibration certificate. A non-metric camera is any camera not specifically designed for measurement -- consumer DSLRs, smartphone cameras, and general-purpose industrial cameras all fall into this category.

Key characteristics of metric cameras:

| Feature | Metric Camera | Non-Metric Camera |
|---|---|---|
| Calibrated focal length | Known to micrometers | Nominal only |
| Principal point location | Precisely determined | Unknown or approximate |
| Lens distortion | Characterized and documented | Unknown |
| Fiducial marks | Present (film cameras) | Absent |
| Geometric stability | Designed for stability | Variable |
| Calibration certificate | Issued by manufacturer or lab | None |

Fiducial marks are reference markers built into film-based metric cameras, typically located at the four corners or four midpoints of the image frame. When the film is exposed, these marks are recorded on every photograph. During photogrammetric processing, the fiducial marks define the internal coordinate system of the image, allowing the principal point and all interior orientation parameters to be applied. Digital cameras do not have fiducial marks because the sensor array itself defines the image coordinate system through its pixel geometry.

"Fiducial marks define the coordinate axes of the image space. Their positions, together with the calibrated focal length and principal point offset, constitute the inner orientation of the camera." -- Ghilani & Wolf, Elementary Surveying: An Introduction to Geomatics (15th Ed.), Ch. 27, p. 803
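In practice, coordinates measured on a scanned film image (in a scanner or comparator system) are reduced to the fiducial-defined image system with a 2D transformation, commonly a six-parameter affine solved by least squares against the calibrated fiducial coordinates. A minimal sketch, using hypothetical measured and calibrated values:

```python
import numpy as np

def solve_affine(measured, calibrated):
    """Solve the 6-parameter 2D affine transform mapping measured
    (scanner/comparator) coordinates to calibrated fiducial coordinates
    by least squares. Requires at least 3 point pairs."""
    A, L = [], []
    for (xm, ym), (xc, yc) in zip(measured, calibrated):
        A.append([xm, ym, 1, 0, 0, 0])
        A.append([0, 0, 0, xm, ym, 1])
        L.extend([xc, yc])
    params, *_ = np.linalg.lstsq(np.array(A, float), np.array(L, float),
                                 rcond=None)
    return params  # a, b, c, d, e, f:  x' = a*x + b*y + c;  y' = d*x + e*y + f

# Hypothetical scanner coordinates of four corner fiducials (mm)
measured = [(0.012, 0.008), (229.985, 0.015),
            (230.002, 229.991), (0.005, 230.010)]
# Calibrated fiducial coordinates from the camera certificate (mm),
# origin at the principal point (values illustrative)
calibrated = [(-113.0, -113.0), (113.0, -113.0),
              (113.0, 113.0), (-113.0, 113.0)]

a, b, c, d, e, f = solve_affine(measured, calibrated)

def to_image(x, y):
    """Transform a measured point into the fiducial coordinate system."""
    return a * x + b * y + c, d * x + e * y + f
```

With four fiducials the system is overdetermined (8 equations, 6 unknowns), so the least-squares residuals also give a check on scanner and measurement quality.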

Non-metric cameras can still be used for photogrammetric work -- modern Structure from Motion (SfM) software routinely processes imagery from consumer cameras -- but the interior orientation parameters must be solved during processing through self-calibration. This adds unknowns to the adjustment and generally reduces the geometric rigor compared to a pre-calibrated metric camera.

Film-Based Aerial Cameras#

Film-based aerial cameras dominated photogrammetric mapping from the mid-twentieth century through the early 2000s. These instruments were precision-engineered, large-format cameras designed for vertical aerial photography from manned aircraft.

Notable Systems

| Camera | Manufacturer | Format | Focal Length Options |
|---|---|---|---|
| RC-30 | Wild/Leica | 23 cm x 23 cm | 88 mm, 153 mm, 210 mm, 305 mm |
| RMK Top | Zeiss | 23 cm x 23 cm | 88 mm, 153 mm, 210 mm, 305 mm |
| LMK | Zeiss Jena/Jenoptik | 23 cm x 23 cm | 153 mm, 305 mm |

The standard aerial film format was 23 cm x 23 cm (approximately 9 inches x 9 inches), using 240 mm wide roll film. This large format provided high ground resolution and geometric redundancy. A typical 153 mm (6-inch) focal length lens combined with the 23 cm format produced a field of view well suited to topographic mapping at standard flight altitudes.

Film-based cameras used forward motion compensation (FMC) -- a mechanism that moved the film platen during exposure to compensate for aircraft motion, preventing image blur at slow shutter speeds. The Leica RC-30, for example, included FMC and was considered the gold standard for aerial survey photography through the 1990s.
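A rough calculation illustrates why FMC was necessary. The ground speed, exposure time, and photo scale below are assumed illustrative values, not from a specific project:

```python
# Image motion during exposure without FMC (illustrative values)
ground_speed = 70.0      # aircraft ground speed, m/s
exposure = 1.0 / 300.0   # shutter open time, s
scale_number = 10_000    # photo scale 1:10,000

ground_motion = ground_speed * exposure               # metres moved during exposure
image_motion_um = ground_motion / scale_number * 1e6  # blur at the film plane, micrometres

print(f"image motion without FMC: {image_motion_um:.0f} um")
```

Roughly 23 micrometres of smear is significant relative to the measurement precision expected of a metric image, which is why the platen was physically translated during exposure.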

"The aerial mapping camera is a precision instrument designed to make photographs suitable for use in preparing maps and for other photogrammetric applications. It must produce sharp images over a large format area, and its inner orientation must remain stable." -- Wolf, Dewitt & Wilkinson, Elements of Photogrammetry with Applications in GIS (4th Ed.), Ch. 4, p. 65

Film cameras are now largely obsolete in production environments, replaced by digital sensors offering immediate data availability, no film processing, and broader dynamic range. However, vast archives of film-based aerial photography remain important for historical analysis, change detection, and retracement surveys. The USGS Earth Resources Observation and Science (EROS) Center maintains an archive of millions of historical aerial photographs dating back to the 1930s, many of which are scanned and available for photogrammetric processing using modern softcopy systems.

Understanding film-based systems remains relevant for exam preparation because the principles of interior orientation, fiducial marks, and calibration that were developed for these cameras form the conceptual foundation for all photogrammetric geometry.

Digital Frame Cameras#

Digital frame cameras capture the entire image simultaneously on a two-dimensional sensor array, directly analogous to how a film camera exposes the entire frame at once. These systems have replaced film cameras for virtually all production aerial mapping.

Major Systems

| System | Manufacturer | Sensor Type | Ground Pixel Size (typical) |
|---|---|---|---|
| DMC III | Leica Geosystems | Multi-head CMOS | 3--5 cm (low altitude) |
| UltraCam Eagle M3 | Vexcel Imaging | Multi-head CCD/CMOS | 3--5 cm |
| UltraCam Osprey 4.1 | Vexcel Imaging | Multi-head, oblique + nadir | 3--5 cm |
| PhaseOne iXM-RS150F | Phase One | Single-head CMOS (150 MP) | 2--5 cm |

Large-format digital cameras achieve their sensor size through multi-head configurations. The DMC (Digital Mapping Camera), for instance, uses multiple camera heads whose imagery is stitched together in post-processing to create a single, seamless virtual frame. This approach overcomes the physical size limitations of individual CCD or CMOS sensor chips while maintaining the geometric properties of a frame camera.

Key advantages of digital frame cameras over film:

  • No film processing -- images are immediately available as digital files
  • Higher radiometric resolution -- 12-bit or 14-bit pixel depth vs. ~8-bit effective for film
  • Broader dynamic range -- better performance in shadow and highlight areas
  • Multispectral capability -- simultaneous capture of red, green, blue, and near-infrared bands
  • Forward motion compensation via electronic TDI (Time Delay Integration) rather than mechanical systems

"Digital cameras offer significant advantages over film-based systems, including immediate availability of imagery, superior radiometric resolution, and the capability for multispectral data acquisition." -- Mikhail, Bethel & McGlone, Introduction to Modern Photogrammetry, Ch. 3, p. 95

Linear Array (Pushbroom) Sensors#

Linear array sensors -- also called pushbroom sensors -- capture imagery one line at a time as the platform moves forward. Each line of pixels is exposed sequentially, building up a two-dimensional image strip from thousands of individual scan lines. This is fundamentally different from a frame camera, which captures the entire image in a single exposure.

The most prominent airborne pushbroom system is the Leica ADS (Airborne Digital Sensor) series. The ADS captures forward-looking, nadir, and backward-looking image strips simultaneously using three linear CCD arrays, providing along-track stereo coverage in a single pass.

Frame vs. Pushbroom Comparison

| Characteristic | Frame Camera | Pushbroom (Linear Array) |
|---|---|---|
| Exposure | Entire frame at once | Line by line |
| Stereo method | Overlap between frames | Multi-line (forward/nadir/backward) |
| Exterior orientation | One set per frame | Varies continuously per line |
| Geometric model | Central perspective | Each line has own perspective center |
| IMU/GPS dependency | Moderate | Critical (position for every line) |
| Swath width | Fixed by sensor format | Fixed by array length |
| Along-track coverage | Discrete frames with overlap | Continuous strip |

The principal challenge of pushbroom sensors is that every scan line has its own exterior orientation. The aircraft's position and attitude change continuously during acquisition, so the position (X_0, Y_0, Z_0) and rotation (\omega, \phi, \kappa) must be known -- or solved -- for each line. This makes pushbroom sensors heavily dependent on high-quality IMU (Inertial Measurement Unit) and GNSS data for direct georeferencing.
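Conceptually, the exterior orientation for a given scan line is obtained by interpolating the navigation solution at that line's exposure time. A simplified sketch with hypothetical epoch data (real systems use much higher-rate IMU data and careful handling of angle wrap-around):

```python
from bisect import bisect_right

def interp_eo(t, times, states):
    """Linearly interpolate an exterior-orientation record
    (X0, Y0, Z0, omega, phi, kappa) at scan-line time t from the
    bracketing GNSS/IMU epochs. Simplified illustrative sketch."""
    i = bisect_right(times, t) - 1
    i = max(0, min(i, len(times) - 2))        # clamp to a valid interval
    t0, t1 = times[i], times[i + 1]
    w = (t - t0) / (t1 - t0)                  # interpolation weight
    return tuple(a + w * (b - a) for a, b in zip(states[i], states[i + 1]))

# Hypothetical navigation epochs: (X0, Y0, Z0, omega, phi, kappa)
times = [0.0, 1.0]
states = [(1000.0, 2000.0, 3000.0, 0.5, -0.2, 90.0),
          (1070.0, 2000.0, 3000.0, 0.7,  0.0, 90.2)]

# EO for a scan line exposed at t = 0.5 s
eo_line = interp_eo(0.5, times, states)
```

Each of the thousands of scan lines in a strip gets its own interpolated record like this, which is why navigation quality feeds directly into pushbroom geometric accuracy.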

"Because each line of a linear array image is acquired at a different instant in time, the sensor position and orientation must be known for every line. This requires precise integration of GNSS and inertial navigation data." -- ASPRS, Manual of Photogrammetry (6th Ed.), Ch. 7, p. 357

Interior Orientation Parameters#

Interior orientation (IO) defines the internal geometry of the camera at the moment of exposure. It establishes the relationship between the physical sensor (or film plane) and the perspective center of the lens. Three parameters are fundamental:

  1. Calibrated focal length (f or c): The perpendicular distance from the perspective center to the image plane. Not the same as the nominal lens focal length -- it is determined through calibration and accounts for the actual optical geometry.

  2. Principal point offset (x_0, y_0): The point where a line from the perspective center, perpendicular to the image plane, intersects the image plane. In a perfect camera, this would coincide with the geometric center of the sensor; in practice, it is offset by small amounts determined during calibration.

  3. Lens distortion parameters: Systematic deviations from the ideal central perspective caused by imperfections in the lens system.

The relationship between image coordinates and the ideal undistorted coordinates is:

x_{\text{corrected}} = x - x_0 - \Delta x_r - \Delta x_d

y_{\text{corrected}} = y - y_0 - \Delta y_r - \Delta y_d

where (x_0, y_0) is the principal point offset, \Delta x_r, \Delta y_r are radial distortion corrections, and \Delta x_d, \Delta y_d are decentering distortion corrections.

Camera Calibration#

Camera calibration is the process of determining the interior orientation parameters. For a metric camera, calibration is performed under controlled laboratory conditions or through analytical self-calibration during a photogrammetric adjustment. The result is a camera calibration certificate that documents the measured parameters.

Calibration Methods

| Method | Description | Application |
|---|---|---|
| Laboratory (stellar/collimator) | Camera images a known target field (e.g., goniometer, collimator array) | Metric cameras; gold standard |
| Test-field calibration | Camera photographs a precisely surveyed ground target field | Metric and non-metric cameras |
| Self-calibration (in-situ) | IO parameters solved as additional unknowns in bundle adjustment | Non-metric cameras, UAS workflows |

A calibration certificate typically reports:

  • Calibrated focal length (c)
  • Principal point coordinates (x_0, y_0)
  • Radial distortion coefficients (K_1, K_2, K_3)
  • Decentering distortion coefficients (P_1, P_2)
  • Fiducial mark coordinates (film cameras)
  • Date of calibration and serial number
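As a sketch of how these certificate values travel through a processing workflow, they can be held in a simple record like the following. The field names and numeric values are illustrative, not a standard certificate format:

```python
from dataclasses import dataclass, field

@dataclass
class CalibrationCertificate:
    """Container for the quantities a calibration report typically lists.
    Illustrative structure only -- not an actual certificate schema."""
    camera_serial: str
    calibration_date: str
    focal_length_mm: float            # calibrated focal length c
    principal_point_mm: tuple         # (x0, y0)
    k: tuple                          # radial coefficients (K1, K2, K3)
    p: tuple                          # decentering coefficients (P1, P2)
    fiducials_mm: list = field(default_factory=list)  # film cameras only

# Hypothetical example values
cert = CalibrationCertificate(
    camera_serial="12345",
    calibration_date="2024-01-15",
    focal_length_mm=153.124,
    principal_point_mm=(0.008, -0.012),
    k=(2.5e-8, -1.1e-13, 0.0),
    p=(1.2e-7, -3.4e-8),
)
```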

"The camera calibration report provides the calibrated focal length, the coordinates of the principal point of autocollimation, and the radial and decentering lens distortion parameters. These values define the inner orientation of the camera and are essential for rigorous photogrammetric solutions." -- Wolf, Dewitt & Wilkinson, Elements of Photogrammetry with Applications in GIS (4th Ed.), Ch. 4, p. 78

For non-metric cameras used in SfM workflows, self-calibration is the norm. The software solves for IO parameters simultaneously with exterior orientation during bundle adjustment. While convenient, self-calibration requires strong image network geometry -- convergent imagery, multiple image scales, and sufficient ground control -- to produce reliable results. A weak network can yield IO parameters that absorb systematic errors rather than correctly modeling the camera geometry.

Lens Distortions#

Lens distortions are systematic departures from the ideal central perspective projection caused by imperfections in lens manufacturing and assembly. Two primary types are characterized in photogrammetric calibration.

Radial Symmetric Distortion

The dominant distortion in most lenses. It causes points to be displaced radially inward or outward from the principal point. The magnitude depends only on the radial distance r from the principal point:

\Delta r = K_1 r^3 + K_2 r^5 + K_3 r^7

where K_1, K_2, K_3 are the radial distortion coefficients determined during calibration. The component corrections in x and y are:

\Delta x_r = (K_1 r^2 + K_2 r^4 + K_3 r^6) \cdot \bar{x}

\Delta y_r = (K_1 r^2 + K_2 r^4 + K_3 r^6) \cdot \bar{y}

where \bar{x} = x - x_0 and \bar{y} = y - y_0 are the image coordinates relative to the principal point, and r = \sqrt{\bar{x}^2 + \bar{y}^2}.

Decentering Distortion

Caused by the imperfect alignment of lens elements along the optical axis. Unlike radial distortion, decentering distortion has both radial and tangential components:

\Delta x_d = P_1(r^2 + 2\bar{x}^2) + 2P_2 \bar{x}\bar{y}

\Delta y_d = 2P_1 \bar{x}\bar{y} + P_2(r^2 + 2\bar{y}^2)

where P_1 and P_2 are the decentering distortion coefficients.
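Putting the principal-point, radial, and decentering corrections together, the full image-coordinate reduction can be written as a single function. The coefficient values in the example are order-of-magnitude placeholders, not a real camera's calibration:

```python
def correct_image_coords(x, y, x0, y0, K1, K2, K3, P1, P2):
    """Reduce a measured image coordinate (mm) to the corrected,
    distortion-free coordinate relative to the principal point,
    following the correction equations in the text."""
    xb, yb = x - x0, y - y0            # coordinates relative to principal point
    r2 = xb * xb + yb * yb             # squared radial distance
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3
    dx_r, dy_r = radial * xb, radial * yb                 # radial corrections
    dx_d = P1 * (r2 + 2 * xb * xb) + 2 * P2 * xb * yb     # decentering, x
    dy_d = 2 * P1 * xb * yb + P2 * (r2 + 2 * yb * yb)     # decentering, y
    return xb - dx_r - dx_d, yb - dy_r - dy_d

# Illustrative coefficients (order of magnitude only, not a real camera)
xc, yc = correct_image_coords(90.0, -60.0, 0.008, -0.012,
                              K1=2.5e-8, K2=-1.1e-13, K3=0.0,
                              P1=1.2e-7, P2=-3.4e-8)
```

With these placeholder coefficients, a point near the format edge moves by a few tens of micrometres, consistent with the magnitudes the calibration literature describes for uncorrected wide-angle imagery.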

"Radial lens distortion is the most significant systematic error in photogrammetric imagery. For wide-angle lenses, radial distortion at the corners of the format can exceed 100 micrometers if uncorrected." -- Ghilani & Wolf, Elementary Surveying: An Introduction to Geomatics (15th Ed.), Ch. 27, p. 806

In modern digital cameras with well-designed lenses, radial distortion is typically small near the image center and increases toward the edges. For high-accuracy mapping, even small residual distortions must be modeled and removed.

Integration: LiDAR + Camera Systems#

Modern airborne mapping platforms routinely combine cameras with LiDAR (Light Detection and Ranging) sensors on a single aircraft. This integration leverages the complementary strengths of both technologies:

| Capability | Camera | LiDAR |
|---|---|---|
| Spectral/visual information | Excellent | None (intensity only) |
| Vegetation penetration | Poor | Excellent (multiple returns) |
| Feature interpretation | Rich texture and color | Geometric only |
| Dense 3D point generation | Via dense matching | Direct measurement |
| Breakline definition | Good (stereo compilation) | Requires processing |
| Accuracy (vertical) | Dependent on GSD and GCP | Direct, typically 5--15 cm |

Integrated LiDAR-camera systems share a common IMU/GNSS navigation solution, ensuring that the LiDAR point cloud and the imagery are registered in the same coordinate system. The camera imagery is used for orthophoto production and visual interpretation, while LiDAR provides bare-earth elevation models (penetrating vegetation canopy) and high-accuracy 3D coordinates.

Multispectral cameras -- capturing near-infrared (NIR) in addition to RGB -- add vegetation analysis capability. The Normalized Difference Vegetation Index (NDVI) computed from NIR and red bands enables land cover classification, crop health assessment, and environmental monitoring directly from the same data acquisition flight.
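The NDVI calculation itself is simple: the difference of the NIR and red band values divided by their sum, yielding a value between -1 and 1. A minimal per-pixel sketch with illustrative reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for a pair of band values.
    Returns a value in [-1, 1]; 0.0 when both bands are zero."""
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

# Healthy vegetation reflects strongly in NIR and absorbs red,
# so it produces a high positive NDVI; bare soil sits near zero.
veg = ndvi(nir=0.50, red=0.08)    # illustrative reflectances
soil = ndvi(nir=0.30, red=0.25)
```

Applied band-wise across a multispectral ortho, the same formula drives the land cover and crop health products mentioned above.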

"The combination of LiDAR and digital camera imagery acquired from a single platform offers significant advantages. LiDAR provides precise elevation data including ground points under vegetation canopy, while the camera provides the spectral information and visual context needed for interpretation and orthophoto production." -- ASPRS, Manual of Photogrammetry (6th Ed.), Ch. 13, p. 611

For surveying professionals, understanding this integration matters because deliverables increasingly combine both data sources. A topographic survey might use LiDAR-derived bare-earth elevations for contours while using camera-derived orthophotos for planimetric features -- and the accuracy of the combined product depends on understanding the capabilities and limitations of each sensor.

Key Takeaways#

  • Metric cameras are designed and calibrated for photogrammetric measurement; non-metric cameras can be used but require self-calibration during processing
  • Interior orientation (focal length, principal point, lens distortion) defines the camera's internal geometry and must be known for rigorous photogrammetric solutions
  • Film-based aerial cameras (23 cm x 23 cm format) dominated mapping for decades; fiducial marks defined the image coordinate system
  • Digital frame cameras (DMC, UltraCam) capture the full frame simultaneously and offer superior radiometric performance, multispectral capability, and immediate data availability
  • Linear array (pushbroom) sensors capture imagery line by line, requiring continuous exterior orientation from IMU/GNSS -- they offer seamless strip coverage but greater processing complexity
  • Camera calibration determines IO parameters through laboratory, test-field, or self-calibration methods; the calibration certificate is a critical document
  • Radial and decentering lens distortions are modeled mathematically (K_1, K_2, K_3 and P_1, P_2) and must be corrected for accurate coordinate extraction
  • LiDAR-camera integration combines direct 3D measurement with rich spectral imagery, producing complementary deliverables from a single acquisition

References#

  1. Wolf, P.R., Dewitt, B.A. & Wilkinson, B.E. Elements of Photogrammetry with Applications in GIS (4th Ed.). McGraw-Hill, 2014.
  2. Ghilani, C.D. & Wolf, P.R. Elementary Surveying: An Introduction to Geomatics (15th Ed.). Pearson, 2018.
  3. Ghilani, C.D. & Wolf, P.R. Elementary Surveying: An Introduction to Geomatics (13th Ed.). Pearson, 2012.
  4. ASPRS. Manual of Photogrammetry (6th Ed.). American Society for Photogrammetry and Remote Sensing, 2013.
  5. Mikhail, E.M., Bethel, J.S. & McGlone, J.C. Introduction to Modern Photogrammetry. John Wiley & Sons, 2001.
  6. ASPRS. Manual of Photogrammetry (5th Ed.). American Society for Photogrammetry and Remote Sensing, 2004.