Statistics provides the mathematical tools to analyze measurements, quantify uncertainty, and make informed decisions about data quality. For the surveyor, statistics is not an academic abstraction -- it is a daily working necessity. Every time you report a distance, an elevation, or a coordinate, you are implicitly making a statistical statement about the reliability of that value. Understanding the statistical foundations covered here allows you to determine the most probable value from repeated measurements, express the uncertainty of your results in a rigorous and defensible manner, detect and deal with outliers, and combine measurements of varying quality into a single best estimate.
"The theory of probability and statistics forms the foundation of the adjustment of observations, quality analysis, and much of the decision-making process in the geospatial sciences." -- Ghilani & Wolf, Elementary Surveying, 13th Ed., Ch. 2
## The Arithmetic Mean
The arithmetic mean is the most fundamental statistic in surveying. When a surveyor takes multiple measurements of the same quantity under the same conditions, the mean of those measurements is the most probable value (MPV).
For $n$ equally weighted measurements $y_1, y_2, \ldots, y_n$, the arithmetic mean is:

$$\bar{y} = \frac{y_1 + y_2 + \cdots + y_n}{n} = \frac{1}{n}\sum_{i=1}^{n} y_i$$
The mean has two important optimality properties:
- The sum of the residuals equals zero: $\sum_{i=1}^{n} v_i = 0$, where $v_i = y_i - \bar{y}$. The mean is the balancing point of the data.
- The sum of the squared residuals is a minimum: $\sum_{i=1}^{n} v_i^2 = \text{minimum}$. No other value produces a smaller sum of squared deviations.

These properties are not arbitrary -- they follow directly from calculus. If you take the derivative of $\sum_{i=1}^{n} (y_i - \hat{y})^2$ with respect to $\hat{y}$ and set it to zero, you get $\hat{y} = \bar{y}$. This is the mathematical basis for the method of least squares, which extends the same principle to more complex adjustment problems.
### Example
A surveyor measures a baseline distance five times:

| Measurement | Value (m) |
|---|---|
| 1 | 285.332 |
| 2 | 285.338 |
| 3 | 285.335 |
| 4 | 285.330 |
| 5 | 285.340 |

The most probable value is the mean:

$$\bar{y} = \frac{285.332 + 285.338 + 285.335 + 285.330 + 285.340}{5} = 285.335 \text{ m}$$
## Residuals
A residual is the difference between an individual measurement and the mean:

$$v_i = y_i - \bar{y}$$
Residuals are the foundation of statistical analysis in surveying. They tell us how individual measurements deviate from the most probable value. For the example above:
| Measurement | $y_i$ (m) | $v_i$ (m) | $v_i^2$ (m²) |
|---|---|---|---|
| 1 | 285.332 | $-0.003$ | 0.000009 |
| 2 | 285.338 | $+0.003$ | 0.000009 |
| 3 | 285.335 | $0.000$ | 0.000000 |
| 4 | 285.330 | $-0.005$ | 0.000025 |
| 5 | 285.340 | $+0.005$ | 0.000025 |

Check: $\sum v_i = 0$ (as expected), and $\sum v_i^2 = 0.000068$.
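These checks are easy to reproduce in code. A minimal Python sketch using the five baseline measurements from the worked example:

```python
import statistics

# Baseline measurements from the worked example (metres)
y = [285.332, 285.338, 285.335, 285.330, 285.340]

y_bar = statistics.mean(y)             # most probable value
v = [yi - y_bar for yi in y]           # residuals v_i = y_i - y_bar

print(f"mean = {y_bar:.3f} m")                                     # 285.335
print(f"sum of residuals = {sum(v):.6f}")                          # zero (to floating point)
print(f"sum of squared residuals = {sum(vi**2 for vi in v):.6f}")  # 0.000068
```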
The residuals themselves carry important information. Large residuals suggest either low precision in the measurement process or the possible presence of a blunder. The pattern of residuals -- whether they appear random or systematic -- can reveal problems with the measurement procedure that raw values alone might not expose.
## Standard Deviation
The standard deviation quantifies the spread (dispersion) of measurements around the mean. It is the most commonly used measure of precision in surveying.
### Standard Deviation of a Single Measurement

For a sample of $n$ measurements, the standard deviation of a single measurement is:

$$s = \sqrt{\frac{\sum_{i=1}^{n} v_i^2}{n-1}}$$

The denominator is $n-1$, not $n$. This is known as Bessel's correction and accounts for the fact that we used one degree of freedom to compute the mean. Since the residuals are computed from $\bar{y}$ rather than from the true (unknown) population mean $\mu$, dividing by $n$ would systematically underestimate the true variance. Using $n-1$ produces an unbiased estimate of the population variance.
"The denominator is the number of degrees of freedom and is equal to the number of observations minus the number of unknowns determined from them." -- Ghilani, Adjustment Computations, 6th Ed., Ch. 2
Continuing the example:

$$s = \sqrt{\frac{0.000068}{5-1}} = \sqrt{0.000017} = \pm 0.0041 \text{ m}$$

This tells us that any single measurement in this set is expected to be within about $\pm 0.004$ m of the mean, roughly 68% of the time.
### Standard Deviation of the Mean

The mean of $n$ measurements is more precise than any single measurement. The standard deviation of the mean is:

$$s_{\bar{y}} = \frac{s}{\sqrt{n}}$$

For our example:

$$s_{\bar{y}} = \frac{0.0041}{\sqrt{5}} = \pm 0.0018 \text{ m}$$

This is a powerful result: by taking multiple measurements, the uncertainty of the reported value decreases in proportion to $1/\sqrt{n}$. Doubling the precision requires four times the measurements -- a relationship of diminishing returns that every surveyor should keep in mind when planning fieldwork.
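Both statistics can be computed directly from the example data; note that Python's `statistics.stdev` already applies Bessel's correction:

```python
import math
import statistics

y = [285.332, 285.338, 285.335, 285.330, 285.340]

s = statistics.stdev(y)           # sample standard deviation, n - 1 denominator
s_mean = s / math.sqrt(len(y))    # standard deviation of the mean

print(f"s      = ±{s:.4f} m")       # ±0.0041 m
print(f"s_mean = ±{s_mean:.4f} m")  # ±0.0018 m
```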
### The 68-95-99.7 Rule

For data that follows a normal distribution:

- 68.3% of measurements fall within $\pm 1\sigma$ of the mean
- 95.4% of measurements fall within $\pm 2\sigma$ of the mean
- 99.7% of measurements fall within $\pm 3\sigma$ of the mean

A measurement lying beyond $\pm 3\sigma$ is expected only 0.3% of the time -- roughly 3 in 1,000 observations. Values this extreme warrant investigation as potential blunders.
## Variance
The variance is the square of the standard deviation:

$$s^2 = \frac{\sum_{i=1}^{n} v_i^2}{n-1}$$

While the standard deviation has the same units as the measurements (making it more intuitive to interpret), the variance has a critical mathematical property: variance is additive. If two independent error sources contribute variances $\sigma_1^2$ and $\sigma_2^2$, the total variance is:

$$\sigma_{\text{total}}^2 = \sigma_1^2 + \sigma_2^2$$
This property makes variance the natural quantity to work with in error propagation. When computing a result from multiple measured quantities, each with its own uncertainty, the variance of the result is a function of the individual variances. Standard deviations, by contrast, do not simply add.
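A small sketch illustrating why variances, not standard deviations, are the quantities that add. The two error-source magnitudes below are assumed, illustrative values, not taken from the text:

```python
import math

# Two assumed independent error sources (illustrative values),
# e.g. instrument centring and target centring
sigma_1 = 0.002   # m
sigma_2 = 0.003   # m

var_total = sigma_1**2 + sigma_2**2    # variances add for independent sources
sigma_total = math.sqrt(var_total)

print(f"combined sigma = ±{sigma_total:.4f} m")   # ±0.0036 m, not ±0.005 m
```

The combined standard deviation (0.0036 m) is smaller than the naive sum of the two standard deviations (0.005 m), which is exactly the point.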
### Covariance and Correlation

When two measured quantities are not independent -- for example, two angles measured from the same instrument setup -- the relationship between their errors matters. The covariance between two variables $x$ and $y$ is:

$$\sigma_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n-1}$$

The correlation coefficient normalizes covariance to a dimensionless value between $-1$ and $+1$:

$$\rho = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$$

A correlation of $0$ indicates the errors are uncorrelated; $\pm 1$ indicates perfect linear dependence. In surveying, correlated errors arise in situations such as GPS baselines sharing a common endpoint, angles observed from the same setup, and leveling circuits with shared benchmarks. Ignoring correlations when they exist leads to overly optimistic (or pessimistic) estimates of uncertainty in adjusted results.
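The two formulas can be sketched directly. The paired observations below are assumed values, purely for illustration:

```python
import math

# Assumed paired observations (illustrative), e.g. two directions
# observed repeatedly from the same setup
x = [10.1, 10.3, 10.2, 10.4, 10.0]
y = [20.2, 20.5, 20.3, 20.6, 20.1]

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n

# Sample covariance, n - 1 denominator
cov = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / (n - 1)

# Correlation: covariance normalized by the two standard deviations
s_x = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (n - 1))
s_y = math.sqrt(sum((yi - y_bar) ** 2 for yi in y) / (n - 1))
rho = cov / (s_x * s_y)

print(f"covariance  = {cov:.5f}")   # 0.03250
print(f"correlation = {rho:.3f}")   # 0.991 -- strongly correlated
```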
## The Normal Distribution
Survey measurements subject to many small, independent, random influences tend to follow the normal (Gaussian) distribution. This is not an assumption made for convenience -- it is a consequence of the Central Limit Theorem, which states that the sum (or mean) of a large number of independent random variables tends toward a normal distribution, regardless of the underlying distribution of the individual variables.
The probability density function (PDF) of the normal distribution is:

$$f(y) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(y-\mu)^2}{2\sigma^2}}$$

where $\mu$ is the population mean and $\sigma$ is the population standard deviation. The curve is symmetric about $\mu$, and the mean, median, and mode all coincide.
### Standard Normal Distribution and $z$-Scores

Any normal distribution can be transformed to the standard normal distribution (with $\mu = 0$ and $\sigma = 1$) using the $z$-score transformation:

$$z = \frac{y - \mu}{\sigma}$$

The $z$-score tells you how many standard deviations a particular measurement lies from the mean. Standard normal tables (or software) then give the probability of observing a value at least that extreme. For example, $z = 1.96$ corresponds to the 97.5th percentile, meaning 95% of the distribution lies between $-1.96$ and $+1.96$.
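The standard library's `statistics.NormalDist` evaluates these probabilities without tables; a brief sketch, reusing the sample statistics from the worked example for the $z$-score:

```python
from statistics import NormalDist

std = NormalDist()   # mu = 0, sigma = 1

# Probability mass between -1.96 and +1.96: the familiar 95%
p = std.cdf(1.96) - std.cdf(-1.96)
print(f"P(-1.96 < z < +1.96) = {p:.4f}")   # 0.9500

# z-score of one example measurement relative to the sample statistics
z = (285.340 - 285.335) / 0.0041
print(f"z = {z:.2f}")                      # 1.22
```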
"The normal distribution, or bell-shaped curve, applies when a large number of random errors are present. It describes the expected frequency of various sized errors and provides the basis for confidence interval estimation." -- Ghilani & Wolf, Elementary Surveying, 13th Ed., Ch. 2
### Why Surveying Measurements Tend Toward Normality
A single distance measurement is affected by many small, independent sources of error: atmospheric refraction, instrument centering, pointing, reading, target centering, plumbing, temperature gradients, and more. Each source contributes a small random component. The Central Limit Theorem guarantees that the aggregate effect of these sources will be approximately normally distributed -- even if any individual source is not. This is why the normal distribution is the default model for random measurement errors in surveying.
## Confidence Intervals
A single number (the mean) without an associated uncertainty statement is incomplete. Confidence intervals provide a range within which the true value is expected to lie, at a specified level of probability.
### Using the Normal Distribution (Large Samples)

For large samples ($n \geq 30$), the confidence interval for the mean is:

$$\bar{y} \pm z_{\alpha/2} \frac{s}{\sqrt{n}}$$

where $z_{\alpha/2}$ is the critical value from the standard normal distribution for the desired confidence level. Common values:
| Confidence Level | $z_{\alpha/2}$ |
|---|---|
| 68.3% | 1.000 |
| 90.0% | 1.645 |
| 95.0% | 1.960 |
| 99.0% | 2.576 |
| 99.7% | 3.000 |
### The $t$-Distribution for Small Samples

In surveying, we rarely have 30 or more repeated measurements of the same quantity. For small samples, the standard normal distribution underestimates the true uncertainty because the sample standard deviation $s$ is itself uncertain. The Student's $t$-distribution accounts for this additional uncertainty.

The $t$-distribution is similar to the standard normal but has heavier tails. The shape depends on the degrees of freedom $\nu = n - 1$. As $\nu \to \infty$, the $t$-distribution converges to the standard normal.

The confidence interval using the $t$-distribution is:

$$\bar{y} \pm t_{\alpha/2,\,\nu} \frac{s}{\sqrt{n}}$$

For example, with $n = 5$ measurements ($\nu = 4$) and a 95% confidence level, $t_{0.025,\,4} = 2.776$. This is notably larger than the normal value of $1.960$, reflecting the greater uncertainty inherent in small samples.
### Example (Continued)

For our baseline measurement with $\bar{y} = 285.335$ m, $s = 0.0041$ m, $n = 5$:

$$285.335 \pm 2.776 \times \frac{0.0041}{\sqrt{5}} = 285.335 \pm 0.005 \text{ m}$$

We can state with 95% confidence that the true distance lies between 285.330 m and 285.340 m.
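The interval can be reproduced in a few lines, taking the tabulated $t$ critical value for four degrees of freedom as given:

```python
import math

# Sample statistics from the worked example
y_bar, s, n = 285.335, 0.0041, 5
t_crit = 2.776   # tabulated t value for nu = 4, 95% confidence

half_width = t_crit * s / math.sqrt(n)
print(f"95% CI: {y_bar - half_width:.3f} m to {y_bar + half_width:.3f} m")
# 95% CI: 285.330 m to 285.340 m
```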
### Surveying Convention

The surveying profession generally reports results at 95% confidence. This corresponds to approximately $\pm 2\sigma$ (more precisely, $\pm 1.96\sigma$ for large samples). When you see a survey accuracy reported with a $\pm$ value, it typically implies a 95% confidence interval unless stated otherwise. Always specify the confidence level when reporting uncertainty.
## Weighted Mean
Not all measurements are created equal. A distance measured with a total station and a distance measured with a tape have different precisions. A GPS baseline observed for four hours is more reliable than one observed for fifteen minutes. When combining measurements of different quality, we need a weighted mean.
The weighted mean is:

$$\bar{y}_w = \frac{\sum_{i=1}^{n} w_i y_i}{\sum_{i=1}^{n} w_i}$$

where the weight assigned to each measurement is inversely proportional to the square of its standard deviation:

$$w_i = \frac{1}{\sigma_i^2}$$
This weighting scheme gives more influence to precise measurements and less to imprecise ones. It can be shown that the weighted mean, computed in this way, is the minimum-variance unbiased estimator -- the most probable value considering the varying quality of the observations.
### Example
A distance is measured by two methods:
| Method | Value (m) | $\sigma$ (m) | Weight $w = 1/\sigma^2$ |
|---|---|---|---|
| Total station | 500.325 | 0.003 | 111,111 |
| GPS | 500.331 | 0.008 | 15,625 |
The weighted mean is:

$$\bar{y}_w = \frac{111{,}111(500.325) + 15{,}625(500.331)}{111{,}111 + 15{,}625} = 500.326 \text{ m}$$

The result is pulled strongly toward the total station measurement because it carries much greater weight. The GPS observation still contributes, but its influence is proportional to its precision.

The standard deviation of the weighted mean is:

$$s_{\bar{y}_w} = \sqrt{\frac{1}{\sum w_i}} = \sqrt{\frac{1}{126{,}736}} = \pm 0.003 \text{ m}$$
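A compact sketch of the weighted-mean computation using the two observations from the example:

```python
import math

# Observations and standard deviations from the example
values = [500.325, 500.331]   # total station, GPS (m)
sigmas = [0.003, 0.008]       # m

weights = [1 / sig**2 for sig in sigmas]   # w_i = 1 / sigma_i^2
y_w = sum(w * val for w, val in zip(weights, values)) / sum(weights)
s_w = 1 / math.sqrt(sum(weights))          # std deviation of the weighted mean

print(f"weighted mean = {y_w:.3f} m")    # 500.326 m
print(f"s_w           = ±{s_w:.4f} m")   # ±0.0028 m
```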
"In combining quantities measured with differing precisions, the weighted mean gives the most probable value. The weights are inversely proportional to the variances." -- Ghilani, Adjustment Computations, 6th Ed., Ch. 3
## Rejection of Outliers
Occasionally, a measurement set will contain one or more values that are far removed from the rest. These outliers may result from blunders (reading errors, transposition mistakes, equipment malfunction) or from genuinely extreme random errors. The question is: should they be included in the analysis or rejected?
### Chauvenet's Criterion

Chauvenet's criterion provides a systematic rule for outlier rejection. A measurement is rejected if the probability of obtaining a residual as large as (or larger than) its residual is less than $\frac{1}{2n}$, where $n$ is the number of observations.
The procedure is:
- Compute $\bar{y}$ and $s$ from all observations (including the suspect value).
- Compute the ratio $|v_i| / s$ for the suspect observation.
- Look up the probability $P$ of exceeding this ratio in a normal distribution.
- If $P < \frac{1}{2n}$, reject the observation.

Equivalently, for a given $n$, there is a maximum allowable ratio $|v| / s$:
| $n$ | Max $\lvert v \rvert / s$ |
|---|---|
| 3 | 1.38 |
| 5 | 1.65 |
| 7 | 1.80 |
| 10 | 1.96 |
| 15 | 2.13 |
| 25 | 2.33 |
| 50 | 2.57 |
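The procedure above can be sketched as a single-pass check. The `chauvenet_reject` helper and the appended 285.420 blunder are illustrative assumptions, not from the text:

```python
import statistics
from statistics import NormalDist

def chauvenet_reject(obs):
    """Flag observations failing Chauvenet's criterion (single pass only)."""
    n = len(obs)
    mean = statistics.mean(obs)
    s = statistics.stdev(obs)
    flagged = []
    for y in obs:
        ratio = abs(y - mean) / s
        # Two-tailed probability of a residual at least this extreme
        p = 2 * (1 - NormalDist().cdf(ratio))
        if p < 1 / (2 * n):
            flagged.append(y)
    return flagged

# The example baseline with a deliberate (assumed) blunder appended
obs = [285.332, 285.338, 285.335, 285.330, 285.340, 285.420]
print(chauvenet_reject(obs))   # [285.42] -- the blunder is flagged
```

The helper deliberately makes only one pass; any flagged value should still be investigated and documented before rejection, per the cautions that follow.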
### Cautions
Outlier rejection must be handled with care. There is an inherent tension between two types of error: rejecting a valid measurement (which discards real information and biases the result) and keeping a blunder (which corrupts the entire analysis). Several principles should guide your approach:
- Investigate before rejecting. A measurement should not be discarded purely on statistical grounds. Check the field notes. Was there a known equipment issue? An unusual atmospheric condition? A recording error?
- Never iterate blindly. After rejecting an outlier and recomputing the mean and standard deviation, another value may now appear to be an outlier. Repeated application of rejection criteria can strip away legitimate data, artificially inflating apparent precision.
- Document everything. Any rejected observation should be noted in the project records with a reason for rejection. This is both good practice and, in many jurisdictions, a professional requirement.
"Suspected outliers should never be discarded simply because they fail a statistical test. The reasons for their rejection should be justified and documented." -- Ghilani & Wolf, Elementary Surveying, 13th Ed., Ch. 2
## Key Takeaways
- The arithmetic mean is the most probable value of equally weighted measurements and minimizes the sum of squared residuals.
- Residuals reveal the quality of individual measurements and provide the raw material for computing standard deviation.
- Standard deviation quantifies precision; the standard deviation of the mean decreases as $1/\sqrt{n}$, providing a clear incentive for repeated measurement.
- Variance is additive, making it the natural quantity for error propagation calculations.
- Survey measurements tend toward the normal distribution by the Central Limit Theorem, validating the use of Gaussian statistics.
- Confidence intervals express uncertainty at a stated probability level; surveying convention uses 95% confidence, and the $t$-distribution should be used for small samples.
- The weighted mean properly combines measurements of different precision by weighting inversely with variance.
- Outlier rejection (e.g., Chauvenet's criterion) must be applied judiciously -- always investigate before rejecting, and document every decision.
## References
- Ghilani, C. D., & Wolf, P. R. (2012). Elementary Surveying: An Introduction to Geomatics (13th ed.). Pearson.
- Ghilani, C. D. (2018). Adjustment Computations: Spatial Data Analysis (6th ed.). John Wiley & Sons.
- Mikhail, E. M., & Gracie, G. (1981). Analysis and Adjustment of Survey Measurements. Van Nostrand Reinhold.