Numerical Computing & Linear Algebra Essentials


Floating Point Systems & Numerical Error

A Floating Point (FP) System represents numbers as x = ± (d0 + d1/β + d2/β^2 + ... + dt-1/β^(t-1)) ⋅ β^e, where β is the base, t is the precision, and e is the exponent. The Unit Roundoff (u) is defined as εmachine/2, where εmachine is the smallest positive number such that fl(1 + εmachine) > 1.

Rounding to Nearest

When rounding to the nearest representable number, fl(x) = x(1 + ε) where |ε| ≤ u.
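A quick NumPy sketch (assuming IEEE 754 double precision, where β = 2 and t = 53) makes the unit roundoff concrete:

```python
import numpy as np

# Machine epsilon for IEEE double precision: the gap between 1.0 and the
# next representable number. The unit roundoff u is half of it.
eps = np.finfo(np.float64).eps   # 2^-52
u = eps / 2                      # 2^-53

# With round-to-nearest, adding u to 1.0 ties and rounds back to 1.0,
# while adding the full machine epsilon does not.
print(1.0 + u == 1.0)    # rounds back: fl(1 + u) == 1
print(1.0 + eps > 1.0)   # fl(1 + eps) > 1
```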

IEEE 754 Standard for Floating Point

Normalized Numbers

If the exponent (e) is not equal to 0, it's a normalized FP number. The value is x = (-1)^sign ⋅ β^(e - offset) ⋅ (1.d1 d2...dt-1).

Denormalized Numbers

If the exponent (e) is 0, the number is denormalized. The value is x = (-1)^sign ⋅ β^(e - offset + 1) ⋅ (0.d1 d2...dt-1). The leading bit 0 is free (not stored) because it is determined by the value of the exponent e.

Exceptional Values

  • If all bits of e are 1s and the mantissa is all 0s, then x = ± Inf (Infinity).
  • If all bits of e are 1s and there is at least one nonzero value in the bits of the mantissa, x = NaN (Not a Number).
  • If all bits excluding the sign are 0, x = ± 0 (Zero).
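These cases can be observed directly in NumPy (a sketch assuming IEEE 754 doubles; `np.finfo(...).tiny` is the smallest normalized positive double):

```python
import numpy as np

info = np.finfo(np.float64)
print(info.tiny)       # smallest normalized positive double, ~2.2e-308

# Halving it produces a denormalized (subnormal) number: still nonzero,
# but represented with a leading 0 bit and reduced precision.
sub = info.tiny / 2
print(sub > 0)

# The exceptional bit patterns behave as described above:
print(np.inf > 1e308)     # all-ones exponent, zero mantissa -> Inf
print(np.nan != np.nan)   # nonzero mantissa -> NaN, unequal even to itself
```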

Cancellations in Numerical Computation

Cancellations occur when subtracting nearby numbers that contain roundoffs. In such cases, the absolute error is small, but the relative error is much larger.
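A classic illustration (a sketch; the choice of sqrt(x + 1) - sqrt(x) is an example, not from the original notes): for large x the two square roots agree in most digits, so the naive subtraction cancels them, while the algebraically equivalent reciprocal form avoids the subtraction entirely.

```python
import math

x = 1e12
# Naive form: subtracts two nearly equal numbers, losing significant digits.
naive = math.sqrt(x + 1) - math.sqrt(x)

# Equivalent form (multiply by the conjugate): no cancellation occurs.
stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

print(naive, stable)   # both near 5e-7, but naive has far fewer correct digits
```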

Mean Value Theorem

The Mean Value Theorem states: f'(c) = (f(b) - f(a))/(b - a) for some c between a and b.

Taylor Series Expansion

The Taylor Series expansion of a function f(x) around a point c is: f(x) = f(c) + f'(c)(x-c) + f''(c)/2! ⋅ (x-c)^2 + ... In summation form, f(x) = ∑_(k=0..∞) f^(k)(c)/k! ⋅ (x-c)^k.

Truncation Error

An example of truncation error: truncating the Taylor series of e^x after the quadratic term leaves the remainder E3 = (e^ξ/3!) ⋅ x^3 for some ξ between 0 and x.
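A numerical check of this remainder (a sketch; taking ξ = x gives the worst-case bound on [0, x] since e^ξ is increasing):

```python
import math

# Truncate the Taylor series of e^x after the quadratic term:
# e^x ≈ 1 + x + x^2/2, with remainder (e^ξ / 3!) x^3 for some ξ in (0, x).
x = 0.1
approx = 1 + x + x**2 / 2
trunc_err = math.exp(x) - approx

# Worst-case bound, taking ξ = x:
bound = math.exp(x) / math.factorial(3) * x**3
print(trunc_err, bound)   # the actual error sits below the bound
```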

Computational Error

Computational Error = Roundoff Error + Truncation Error.


Vector Spaces & Linear Algebra Fundamentals

Vector Space Axioms

A vector space V over a scalar field (e.g., ℝ) satisfies the following axioms for all vectors u, v, w in V and scalars a, b:

  • 1. u + (v + w) = (u + v) + w (Associativity of addition)
  • 2. u + v = v + u (Commutativity of addition)
  • 3. There exists a zero vector 0 in V such that v + 0 = v (Additive identity)
  • 4. For every v in V, there exists an additive inverse -v in V such that v + (-v) = 0
  • 5. a(bv) = (ab)v (Associativity of scalar multiplication)
  • 6. 1v = v (Multiplicative identity)
  • 7. a(u + v) = au + av (Distributivity of scalar multiplication over vector addition)
  • 8. (a + b)v = av + bv (Distributivity of scalar multiplication over scalar addition)

Proof: Null Space (Kernel) as a Subspace

Let v = αv1 + βv2 for any v1, v2 in ker(A) and α, β in ℝ. Then A v = A(αv1 + βv2) = αA v1 + βA v2 = α ⋅ 0 + β ⋅ 0 = 0. Thus, v is in ker(A), proving it is a subspace.

Column Space (Range or Image)

The Column Space of a matrix is the same as its range or image.

Rank of a Matrix

The Rank of a matrix is the dimension of its Column Space (Col(A)).

Linearly Independent Vectors

Vectors v1, v2, ..., vn are Linearly Independent if and only if α1v1 + α2v2 + ... + αnvn = 0 implies that αi = 0 for all i = 1...n.
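In practice, independence can be tested numerically: n vectors are linearly independent exactly when the matrix stacking them as columns has rank n. A small sketch (the matrices here are illustrative, not from the original notes):

```python
import numpy as np

# Two independent columns in R^3: rank equals the number of columns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Second column is a scalar multiple of the first: dependent.
B = np.column_stack([A[:, 0], 2 * A[:, 0]])

print(np.linalg.matrix_rank(A))   # 2 -> independent
print(np.linalg.matrix_rank(B))   # 1 -> dependent
```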

Matrix Invertibility (Non-Singular Matrices)

A matrix A is invertible (non-singular) exactly when the linear system Ax = b has a unique solution for every right-hand side b.

Equivalent Statements for Invertibility

  • 1. A has an inverse A^-1 satisfying A ⋅ A^-1 = A^-1 ⋅ A = I (Identity Matrix).
  • 2. The columns of A are linearly independent.
  • 3. The dimension of ker(A) = 0.
  • 4. det(A) ≠ 0 (Determinant of A is non-zero).
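All four equivalent conditions can be checked numerically on the same matrix (a sketch; the 2×2 example is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

det = np.linalg.det(A)             # condition 4: nonzero determinant
rank = np.linalg.matrix_rank(A)    # conditions 2-3: full rank means
                                   # independent columns and dim ker(A) = 0
A_inv = np.linalg.inv(A)           # condition 1: the inverse exists

print(det)                             # 5.0 for this matrix
print(np.allclose(A @ A_inv, np.eye(2)))   # A @ A^-1 == I
```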

Eigenpairs and Eigenspaces

An Eigenpair consists of a vector v and a scalar λ such that Av = λv. Eigenvectors of A that share the same eigenvalue form a linear subspace called an eigenspace.
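The defining relation Av = λv can be verified for each computed pair (a sketch using `np.linalg.eig`; the symmetric 2×2 example with eigenvalues 1 and 3 is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # eigenvalues 1 and 3
vals, vecs = np.linalg.eig(A)

# Column i of vecs pairs with eigenvalue vals[i]:
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))   # Av = λv holds for every pair
```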

Orthogonality of Vectors

Vectors x and y are said to be orthogonal if their dot product is 0: x ⋅ y = ||x|| ||y|| cos θ = 0.

Dense and Sparse Matrices

  • Dense Matrix: A matrix with a small number of zero entries.
  • Sparse Matrix: A matrix where most elements are zero.

Gaussian Elimination

Gaussian Elimination puts a matrix in row-echelon form. It has a computational complexity of O(N^3).
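A minimal sketch of the algorithm (with partial pivoting for stability and back substitution added to recover x; the helper name `gaussian_eliminate` is my own):

```python
import numpy as np

def gaussian_eliminate(A, b):
    """Reduce [A | b] to row-echelon form, then back-substitute.

    The two nested elimination loops over an n x n matrix give O(n^3) work.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest entry in column k to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_eliminate(A, b))   # solves Ax = b
```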

Vector Norms

A vector norm || ⋅ || in ℝ^n satisfies the following properties:

  • 1. ||x|| ≥ 0, and ||x|| = 0 if and only if x = 0.
  • 2. ||αx|| = |α| ||x|| for any scalar α.
  • 3. ||x + y|| ≤ ||x|| + ||y|| (Triangle Inequality).

P-Norm

The P-Norm is defined as ||x||_p = (∑_(i=1..n) |x_i|^p)^(1/p).

Euclidean Norm

The Euclidean norm is the p-norm with p = 2.
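The common p-norms are available directly in NumPy (a sketch; p = ∞ gives the max-norm, the limit of the p-norm as p grows):

```python
import numpy as np

x = np.array([3.0, 4.0])

print(np.linalg.norm(x, 1))        # 7.0 = |3| + |4|
print(np.linalg.norm(x, 2))        # 5.0 = sqrt(3^2 + 4^2), Euclidean norm
print(np.linalg.norm(x, np.inf))   # 4.0 = max |x_i|
```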

Matrix/Operator Norms

A matrix/operator norm || ⋅ || satisfies the following properties:

  • 1. ||A|| ≥ 0, and ||A|| = 0 if and only if A = 0.
  • 2. ||αA|| = |α| ||A|| for any scalar α.
  • 3. ||A + B|| ≤ ||A|| + ||B|| (Triangle Inequality).

Relative Error for Linear Systems

For a linear system Ax = b, the relative error is defined as ||x* - x_approx|| / ||x*||, where x* is the exact solution and x_approx is the approximate solution.

Residual Vector

The Residual vector is r = b - A ⋅ x_approx (or, with the opposite sign convention, A ⋅ x_approx - b).

Condition Number of a Matrix

The Condition Number of a matrix A is cond(A) = ||A|| ||A^-1||. It controls the error bound ||x* - x_approx|| / ||x*|| ≤ cond(A) ⋅ ||r|| / ||b||: if cond(A) is not "large" and the relative residual ||r||/||b|| is small, then the relative error is also small.
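The converse failure is worth seeing: when cond(A) is large, a tiny change in b (hence a tiny residual) can still move the solution a lot. A sketch with an ill-conditioned 2×2 system (the matrix is illustrative, not from the original notes):

```python
import numpy as np

# Nearly parallel rows -> large condition number.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])      # exact solution x* = (1, 1)
x_star = np.array([1.0, 1.0])

cond = np.linalg.cond(A)         # ||A|| ||A^-1||, here about 4e4

# Perturb b by 1e-5 and solve: the solution moves by about 10%.
x_pert = np.linalg.solve(A, b + np.array([0.0, 1e-5]))
rel_err = np.linalg.norm(x_star - x_pert) / np.linalg.norm(x_star)
print(cond, rel_err)             # tiny data change, large solution change
```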

