9.4 Linear Algebra, Adjustments, and Least-Squares Foundations
Key Takeaways
- Linear algebra organizes repeated observation equations, unknown coordinates, residuals, and corrections in a compact form.
- Least-squares adjustment seeks a best-fit solution when redundant measurements do not close perfectly.
- Weights reflect relative confidence; a higher-weight observation has more influence than a lower-weight observation.
- Residuals should be inspected for pattern and magnitude instead of treated as meaningless leftovers.
Linear Algebra Behind Survey Adjustments
Linear algebra appears in the FS Applied Mathematics and Statistics area because modern surveying uses many observations to estimate a smaller set of unknowns. A traverse, level network, Global Navigation Satellite System baseline network, or control adjustment may contain redundancy. Redundancy is useful because it allows checks, but it also means the observations rarely agree perfectly. Linear algebra gives a clean way to organize the equations and compute corrections.
At the simplest level, a linear equation relates unknowns to observations. Several equations can be stacked into a system. In matrix language, the design matrix describes how each observation depends on each unknown, the unknown vector contains coordinate or correction values to be estimated, and the observation vector contains measured or derived quantities. You do not need to love matrix notation to understand the workflow: organize relationships, solve for unknowns, compute residuals, and evaluate the fit.
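That workflow can be made concrete with a tiny level network. In the sketch below, every number and point name is invented for illustration: benchmark A is held at 100.00 m, and three observed elevation differences over-determine the elevations of points B and C.

```python
import numpy as np

# Unknowns: x = [h_B, h_C], the elevations of points B and C (metres).
# Benchmark A is held fixed at 100.00 m (hypothetical values).
# Observations (elevation differences, metres):
#   A -> B: 2.10   gives  h_B         = 102.10
#   B -> C: 1.05   gives  h_C - h_B   =   1.05
#   A -> C: 3.12   gives  h_C         = 103.12
A = np.array([[ 1.0, 0.0],
              [-1.0, 1.0],
              [ 0.0, 1.0]])            # design matrix
l = np.array([102.10, 1.05, 103.12])   # observation vector

# Least-squares solution of the over-determined system A x ~ l.
x, *_ = np.linalg.lstsq(A, l, rcond=None)

# Residuals (here: adjusted value minus observation, one common convention).
v = A @ x - l
print(x)  # adjusted elevations of B and C
print(v)  # residuals, each about 1 cm
```

Note that the three observations cannot all be satisfied exactly; the solution spreads the roughly 3 cm misclosure across the network as small residuals.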
Adjustment Vocabulary
| Term | Practical meaning | FS interpretation |
|---|---|---|
| Observation | Measured distance, angle, elevation difference, or coordinate relation | Input with possible error |
| Unknown | Coordinate, elevation, correction, or parameter to estimate | Quantity being solved |
| Residual | Difference between an observation and its adjusted value (sign depends on convention) | Evidence of remaining mismatch |
| Weight | Relative confidence in an observation | More reliable observations influence solution more |
| Redundancy | More observations than the minimum required | Allows checks and statistical evaluation |
Least-squares adjustment minimizes the weighted sum of squared residuals. Squaring prevents positive and negative residuals from canceling and penalizes large residuals more strongly. Weighting matters because not every observation has the same precision. A carefully repeated distance may deserve more influence than a rough measurement. A long sight in poor conditions may deserve less influence than a short, well-controlled observation.
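The effect of weighting shows up even in the smallest possible adjustment: two measurements of the same distance combined into one estimate. The values below are invented; for a single unknown, the weighted least-squares estimate reduces to the weighted mean, pulled toward the higher-weight observation.

```python
import numpy as np

# Two hypothetical measurements of the same distance (metres), with
# weights reflecting relative confidence (e.g. proportional to 1/sigma^2).
obs = np.array([10.02, 10.08])
w = np.array([4.0, 1.0])   # first observation trusted four times as much

# Minimizing sum(w * v^2) for one unknown gives the weighted mean.
x_hat = np.sum(w * obs) / np.sum(w)
print(x_hat)  # about 10.032, much closer to the weight-4 observation

# Same result from the general normal equations x = (A^T W A)^-1 A^T W l,
# with a design matrix A = [[1], [1]] (both observations measure x).
A = np.ones((2, 1))
W = np.diag(w)
x_mat = np.linalg.solve(A.T @ W @ A, A.T @ W @ obs)
print(x_mat[0])
```

With equal weights the estimate would be the simple mean, 10.05; the 4:1 weighting moves it to about 10.032.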
The FS exam may ask conceptual questions about least squares, residuals, or weights rather than requiring a full network adjustment. Know that a residual is not automatically a mistake. It is the remaining difference after the best-fit solution is applied. A large residual, a pattern of residuals, or a residual inconsistent with expected precision may indicate a blunder, an incorrect model, or a weighting problem.
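To see why residual inspection matters, consider repeated measurements of one distance that include a single blunder. The numbers below are invented, and the screening rule (flag residuals larger than three times the median absolute residual) is just one simple robust check, not a standard the FS exam prescribes.

```python
import numpy as np

# Hypothetical repeated measurements of one distance (metres);
# the last value contains a deliberate 10 cm blunder.
obs = np.array([25.012, 25.008, 25.010, 25.009, 25.011, 25.110])

# The least-squares estimate of a single unknown is the simple mean.
x_hat = obs.mean()
v = obs - x_hat   # residuals (observation minus adjusted value here)

# The blunder drags the mean and inflates every residual a little,
# but its own residual still dwarfs the rest. Flag anything beyond
# three times the median absolute residual.
threshold = 3.0 * np.median(np.abs(v))
flagged = np.where(np.abs(v) > threshold)[0]
print(v.round(3))
print(flagged)   # only the last observation is flagged
```

The pattern, not the mere existence, of residuals is the signal: five small, similar residuals and one large outlier point to a blunder rather than random measurement noise.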
Linear algebra also supports transformations. A two-dimensional coordinate transformation can use translation, rotation, and scale parameters. A regression line can be found by fitting coefficients to data. Error propagation can use matrices to carry variances and covariances from observations to computed results. These topics are connected by the same idea: organize relationships in a structured system.
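The same machinery handles a two-dimensional conformal (similarity) transformation. In the sketch below, the four parameters and all coordinates are invented: target coordinates are generated from known parameters, then recovered by stacking the linear equations E = a·x − b·y + tE and N = b·x + a·y + tN and solving by least squares.

```python
import numpy as np

# Four-parameter conformal transformation, linear in a, b, tE, tN:
#   E = a*x - b*y + tE,   N = b*x + a*y + tN
# where a = scale*cos(theta) and b = scale*sin(theta).
a_true, b_true, tE_true, tN_true = 0.9998, 0.0200, 500.0, 1000.0

# Hypothetical local control coordinates (x, y).
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
E = a_true * src[:, 0] - b_true * src[:, 1] + tE_true
N = b_true * src[:, 0] + a_true * src[:, 1] + tN_true

# Stack two observation equations per control point into one system.
rows, obs = [], []
for (x, y), e, n in zip(src, E, N):
    rows.append([x, -y, 1.0, 0.0]); obs.append(e)
    rows.append([y,  x, 0.0, 1.0]); obs.append(n)
A = np.array(rows)
l = np.array(obs)

params, *_ = np.linalg.lstsq(A, l, rcond=None)   # [a, b, tE, tN]
scale = float(np.hypot(params[0], params[1]))
print(params)
print(scale)   # recovered scale, about 1.0000
```

With four control points and four parameters the system is redundant (eight equations), so the same solve also yields residuals that would expose a bad control point in real data.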
When studying, focus on interpretation as much as computation. If a problem asks which observation receives greater influence, look at the weight or standard deviation. If a problem asks why redundancy is useful, think checks and reliability. If a problem asks what least squares minimizes, remember weighted squared residuals. These ideas help with both dedicated Applied Mathematics questions and computation questions elsewhere in the FS exam.
Review Questions
What does least-squares adjustment generally minimize?
What does a higher weight usually indicate in an adjustment?
Why is redundancy valuable in a survey network?