Least Squares Adjustment Models and Error Propagation
Least Squares Adjustment: Fundamental Models
1. Conditional Equations Model
This section outlines the formulation and solution for conditional equations in least squares adjustment.
- B: Jacobian matrix of the conditional equations with respect to the observations.
- A: Misclosure vector (constant terms) of the conditional equations, evaluated at the observed values.
- P: Weight matrix of the observations; for uncorrelated observations, Pᵢᵢ = σ₀² / σᵢ².
- Q: Cofactor matrix of the observations, Q = P⁻¹.
- Qe: Cofactor matrix of the conditional equations, Qe = B * Q * Bᵀ.
- K: Vector of Lagrange multipliers (correlates), K = -Qe⁻¹ * A.
- V: Vector of residuals, V = Q * Bᵀ * K.
- L̂: Adjusted observations, L̂ = Lobserved + V.
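Below is a minimal sketch of this solution sequence in Python/NumPy. The function name and the triangle-closure example are illustrative additions, not from the original notes; B, A, and Q follow the definitions above.

```python
import numpy as np

def conditional_adjustment(B, A, Q):
    """Least squares adjustment by condition equations.

    B : (r, n) Jacobian of the r conditions w.r.t. the n observations
    A : (r,)   misclosure vector evaluated at the observed values
    Q : (n, n) cofactor matrix of the observations (Q = P^-1)
    """
    Qe = B @ Q @ B.T             # cofactor matrix of the conditions
    K = np.linalg.solve(Qe, -A)  # Lagrange multipliers: K = -Qe^-1 * A
    V = Q @ B.T @ K              # residuals
    return V, K

# Example: three measured triangle angles must sum to 180 degrees.
L = np.array([59.998, 60.003, 60.005])  # hypothetical observations (deg)
B = np.array([[1.0, 1.0, 1.0]])         # one condition: angle sum
A = np.array([L.sum() - 180.0])         # misclosure
Q = np.eye(3)                           # equal weights
V, K = conditional_adjustment(B, A, Q)
L_hat = L + V                           # adjusted angles now sum to 180
```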
Linearized Conditional Observation Equation
A common form of a linearized conditional observation equation is:
(-b * cosA) * VA + (a * cosB) * VB + (sinB) * va + (-sinA) * vb + (a * sinB - b * sinA) = 0
This is the linearized sine-rule condition a * sinB - b * sinA = 0 for a triangle, where VA, VB are the angle residuals, va, vb are the side residuals, and the last term is the misclosure evaluated at the observed values.
Precision of Adjusted Observations
- σ̂²L̂L̂: Variance–covariance matrix of the adjusted observations, σ̂²L̂L̂ = σ̂₀² * QL̂L̂.
- QL̂L̂: Cofactor matrix of the adjusted observations, QL̂L̂ = Q - QVV.
- QVV: Cofactor matrix of the residuals, QVV = Q * Bᵀ * (B * Q * Bᵀ)⁻¹ * B * Q.

(Note: the original formula Q * Bᵀ * (-Q⁻¹) * B * Q was likely a typo or simplified notation; the form above is the standard residual cofactor matrix. QL̂L̂ = QVV has also been corrected to QL̂L̂ = Q - QVV, since adjustment makes L̂ more precise than the raw observations.)
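Continuing the NumPy sketch above, these precision measures follow directly (variable names continue the triangle-closure example):

```python
# Precision of the adjusted observations (continuing the example above).
Qe_inv = np.linalg.inv(B @ Q @ B.T)
Qvv = Q @ B.T @ Qe_inv @ B @ Q   # cofactor matrix of the residuals
Ql_hat = Q - Qvv                 # cofactor matrix of adjusted observations
r = B.shape[0]                   # redundancy = number of conditions
sigma0_sq = (V @ np.linalg.inv(Q) @ V) / r  # posterior variance factor
Sigma_l_hat = sigma0_sq * Ql_hat  # variance-covariance of adjusted obs
```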
2. Parametric Equations Model
This section details the formulation and solution for parametric equations in least squares adjustment.
- A: Design matrix (Jacobian) of the observation equations with respect to the parameters.
- L: Vector of observed-minus-computed values, L = Lobserved - Lcomputed (sign convention adjusted as needed).
- P: Weight matrix of the observations; for uncorrelated observations, Pᵢᵢ = σ₀² / σᵢ².
- N: Normal matrix, N = Aᵀ * P * A.
- T: Right-hand-side vector, T = Aᵀ * P * L.
- X: Vector of parameter estimates (corrections), X = N⁻¹ * T.
- V: Vector of residuals, V = A * X - L.
- L̂: Adjusted observations, L̂ = Lobserved + V.
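The same sequence as a hedged NumPy sketch; the function name and the line-fit example are illustrative, with A, L, and P as defined above:

```python
import numpy as np

def parametric_adjustment(A, L, P):
    """Least squares adjustment by observation (parametric) equations.

    A : (n, u) design matrix (n observations, u parameters)
    L : (n,)   observed-minus-computed vector
    P : (n, n) weight matrix of the observations
    """
    N = A.T @ P @ A            # normal matrix
    T = A.T @ P @ L            # right-hand side
    X = np.linalg.solve(N, T)  # parameter estimates: X = N^-1 * T
    V = A @ X - L              # residuals
    return X, V, N

# Example: fit y = c0 + c1*x to four hypothetical observations
# (the "computed" values are zero here, so L is just the observations).
x = np.array([0.0, 1.0, 2.0, 3.0])
L = np.array([1.02, 1.98, 3.01, 3.97])
A = np.column_stack([np.ones_like(x), x])
P = np.eye(4)
X, V, N = parametric_adjustment(A, L, P)
```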
3. Specific Matrix Definitions (Type 2)
- Matrix A: The Jacobian (derivative) matrix of the observation equations.
- Matrix L: The vector of observed (or observed-minus-computed) values.
Linearized Observation Equation (Type 2)
Vs = (∂α / ∂xa) * Δxa + (∂α / ∂ya) * Δya - (αobserved - αcomputed)
(the sign of the constant term follows the convention V = A * X - L used above)
4. Error Analysis and Precision Measures
Posterior Variance Factor
The estimated variance of unit weight (posterior variance factor) is calculated as:
σ̂₀² = (Vᵀ * P * V) / r
where V is the vector of residuals (V = A * X - L) and r is the redundancy (degrees of freedom). The hat marks σ̂₀² as the a posteriori estimate of the a priori variance factor σ₀².
Precision of Coordinates
The covariance matrix of the adjusted parameters (coordinates) is:
Σxx = σ̂₀² * N⁻¹
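A short continuation of the line-fit sketch above (again an illustrative example, not from the notes):

```python
# Posterior variance factor and parameter covariance,
# continuing the line-fit example above.
n, u = A.shape
r = n - u                                # redundancy
sigma0_sq = (V @ P @ V) / r              # V' * P * V / r
Sigma_xx = sigma0_sq * np.linalg.inv(N)  # covariance of the parameters
```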
Error Ellipse Parameters
For a 2D error ellipse, the parameters are derived from the covariance matrix:
- Eigenvalues of Σxx: The larger eigenvalue λ₁ gives the semi-major axis (√λ₁) and the smaller eigenvalue λ₂ the semi-minor axis (√λ₂) of the standard error ellipse.
- Orientation Angle: The angle (φ) of the semi-major axis is given by:
tan(2φ) = (2σxy) / (σy² - σx²)
(with φ measured from the y-axis in the surveying convention; the quadrant of 2φ follows from the signs of numerator and denominator)
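A minimal sketch of these formulas, assuming a 2x2 covariance matrix ordered as [[σx², σxy], [σxy, σy²]]:

```python
import numpy as np

def error_ellipse(Sigma_xx):
    """Semi-axes and orientation of the 2D standard error ellipse.

    Sigma_xx : 2x2 covariance matrix [[sx2, sxy], [sxy, sy2]].
    Returns (semi_major, semi_minor, phi), phi in radians measured
    from the y-axis (surveying convention).
    """
    sx2, sxy, sy2 = Sigma_xx[0, 0], Sigma_xx[0, 1], Sigma_xx[1, 1]
    lam_min, lam_max = np.linalg.eigvalsh(Sigma_xx)  # ascending eigenvalues
    phi = 0.5 * np.arctan2(2.0 * sxy, sy2 - sx2)     # tan(2phi) = 2sxy/(sy2-sx2)
    return np.sqrt(lam_max), np.sqrt(lam_min), phi
```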
Review of Advanced Concepts
1. Lagrange's Canonical Form and Derivatives
This section reviews the transformation of quadratic forms into canonical form and their derivatives.
Quadratic Form Example
Consider the quadratic form: q = x₁² + 3x₂² + 4x₁x₂
Completing the square:
q = (x₁² + 4x₁x₂) + 3x₂²
= (x₁ + 2x₂)² + 3x₂² - 4x₂² (adding and subtracting 4x₂²)
= (x₁ + 2x₂)² - x₂²
Canonical Form Transformation
Let y₁ = x₁ + 2x₂ and y₂ = x₂. The canonical form is then q = y₁² - y₂².
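The identity can be checked symbolically; a minimal sketch assuming SymPy is available:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
q = x1**2 + 3*x2**2 + 4*x1*x2
canonical = (x1 + 2*x2)**2 - x2**2
assert sp.simplify(q - canonical) == 0  # the two forms agree
```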
Derivative of Quadratic Form
For q = x₁² + 3x₂² + 4x₁x₂:
∂q / ∂x₁ = 2x₁ + 4x₂ = 2xᵀ * a₁
∂q / ∂x₂ = 4x₁ + 6x₂ = 2xᵀ * a₂
where aᵢ is the i-th column of the symmetric matrix A = [[1, 2], [2, 3]] associated with q. In general, the derivative of a quadratic form q = xᵀ * A * x (with A symmetric) is:
- Row vector: ∂q / ∂x = 2xᵀ * A
- Column vector: (2xᵀ * A)ᵀ = 2Aᵀ * x = 2A * x
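A quick numerical check of the gradient formula (illustrative values):

```python
import numpy as np

# Symmetric matrix of the quadratic form q = x1^2 + 3*x2^2 + 4*x1*x2.
A = np.array([[1.0, 2.0],
              [2.0, 3.0]])

def q(x):
    return x @ A @ x

x = np.array([0.7, -1.3])
grad_analytic = 2.0 * A @ x  # column-vector form 2Ax

# Central-difference check of the analytic gradient.
eps = 1e-6
grad_numeric = np.array([(q(x + eps * e) - q(x - eps * e)) / (2 * eps)
                         for e in np.eye(2)])
assert np.allclose(grad_analytic, grad_numeric)
```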
2. Eigenvalues and Probability Distributions
This section touches upon concepts related to eigenvalues and probability, possibly for confidence intervals.
For a random variable X bounded by A and B, with cumulative distribution function F and scaling factor σ, standardizing z = X / σ gives:
(A / σ) < z < (B / σ)
so the probability P(A < X < B) can be expressed through F:
P(A < X < B) = F(B / σ) - F(A / σ)
(The original inequality F(B / σ) - F(A / σ) > z most likely expressed the condition that this probability exceed a required confidence level.)
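For a concrete reading, assuming X is zero-mean normal so that F is the standard normal CDF (SciPy's norm.cdf), hypothetical bounds give:

```python
from scipy.stats import norm

# P(A < X < B) for X ~ N(0, sigma^2), via the standard normal CDF F.
A, B, sigma = -2.0, 2.0, 1.5  # hypothetical bounds and scale
p = norm.cdf(B / sigma) - norm.cdf(A / sigma)
print(p)  # about 0.82 for these values
```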
3. Redundancy Check
Redundancy (r) is the number of degrees of freedom in an adjustment, calculated as the number of observations (n) minus the number of unknowns (n₀).
Example: n = 4 observations and n₀ = 3 unknowns give r = 4 - 3 = 1.
4. Linearized Observation Equations (Specific Examples)
Parametric Form Example
For an angle observation (e.g., between points A, B, C):
α = αBC - αBA = atan((xc - xb) / (yc - yb)) - atan((xa - xb) / (ya - yb))
The linearized form for the residual (Vs) is:
Vs = (∂α / ∂xa) * Δxa + (∂α / ∂ya) * Δya - (αobserved - αcomputed)
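A sketch of this angle and its numeric partials with respect to the coordinates of A; the point coordinates below are hypothetical:

```python
import numpy as np

def angle_at_B(xa, ya, xb, yb, xc, yc):
    """Angle alpha = azimuth(B->C) - azimuth(B->A), in radians."""
    az_BC = np.arctan2(xc - xb, yc - yb)  # surveying azimuth: atan(dx/dy)
    az_BA = np.arctan2(xa - xb, ya - yb)
    return az_BC - az_BA

# Hypothetical coordinates and central-difference partials w.r.t. A.
xa, ya, xb, yb, xc, yc = 0.0, 0.0, 100.0, 50.0, 60.0, 180.0
eps = 1e-6
d_alpha_dxa = (angle_at_B(xa + eps, ya, xb, yb, xc, yc)
               - angle_at_B(xa - eps, ya, xb, yb, xc, yc)) / (2 * eps)
d_alpha_dya = (angle_at_B(xa, ya + eps, xb, yb, xc, yc)
               - angle_at_B(xa, ya - eps, xb, yb, xc, yc)) / (2 * eps)
```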
Conditional Form Example
A general conditional equation can be written as B * V + D = 0.
Example of a linearized conditional equation:
(-b * cosA) * VA + (a * cosB) * VB + (sinB) * va + (-sinA) * vb + (a * sinB - b * sinA) = 0
Where:
- B: Row of coefficients for the residuals V. Example: B = [(-b * cosA) / f, (a * cosB) / f, sinB, -sinA] (assuming f is a scaling factor and this represents one row of B).
- V: Vector of residuals, here V = [VA, VB, va, vb]ᵀ.
- D: Constant term (misclosure), often (observed - computed). Example: D = a * sinB - b * sinA.
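As a final sketch, the sine-rule condition row and misclosure can be assembled and fed to the conditional_adjustment function from the first example (all numeric values are hypothetical, and the unit weighting glosses over the mixed angle/distance units):

```python
import numpy as np

# Hypothetical measured sides (a, b) and angles (A, B) of a triangle,
# linked by the sine-rule condition F = a*sinB - b*sinA = 0.
a, b = 120.003, 99.998
ang_A, ang_B = np.radians(61.0), np.radians(47.0)

# One row of B (coefficients of V = [VA, VB, va, vb]) and misclosure D.
B_row = np.array([[-b * np.cos(ang_A),  # dF/dA
                   a * np.cos(ang_B),   # dF/dB
                   np.sin(ang_B),       # dF/da
                   -np.sin(ang_A)]])    # dF/db
D = np.array([a * np.sin(ang_B) - b * np.sin(ang_A)])

V, K = conditional_adjustment(B_row, D, np.eye(4))  # from the first sketch
```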