Gram-Schmidt - Linear Algebra

Overview

Definition

The Gram-Schmidt process converts a set of linearly independent vectors into an orthonormal set spanning the same subspace. It uses inner products to sequentially orthogonalize and then normalize the vectors.

Purpose

Purpose: Construct orthonormal bases for vector spaces, simplify computations, and improve numerical stability in linear algebra problems.

Scope

Scope: Applicable in finite-dimensional inner product spaces over real or complex fields.

"The Gram-Schmidt process is a cornerstone of numerical linear algebra, enabling efficient orthonormalization of vector sets." -- Gilbert Strang

Historical Context

Originators

Introduced independently by Jørgen Pedersen Gram (1883) and Erhard Schmidt (1907), who formalized it as an algorithm for orthogonalization.

Development

Development: Evolved from classical orthogonalization techniques in functional analysis and matrix theory.

Impact

Impact: Foundation for QR decomposition, spectral theory, and numerical linear algebra algorithms.

Conceptual Framework

Inner Product Spaces

Inner product: A positive-definite, (conjugate-)symmetric pairing of vectors that defines length and angle; essential for orthogonality and projection.

Orthogonality

Orthogonality: Two vectors are orthogonal if their inner product is zero; basis vectors become mutually perpendicular.

Normalization

Normalization: Scaling vectors to unit length to form an orthonormal set.

Mathematical Formulation

Notation

Given a set {v₁, v₂, ..., vₙ} of linearly independent vectors in an inner product space.

Orthogonalization Step

Define u₁ = v₁.

For k = 2 to n:

u_k = v_k - sum_{j=1}^{k-1} proj_{u_j}(v_k), where proj_{u_j}(v_k) = (⟨v_k, u_j⟩ / ⟨u_j, u_j⟩) u_j

Normalization Step

Set e_k = u_k / ||u_k||, where ||u_k|| = sqrt(⟨u_k, u_k⟩).
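A minimal numeric sketch of one orthogonalization-plus-normalization step, using illustrative vectors in ℝ² (not taken from the text) and plain Python with no external libraries:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Illustrative vectors in R^2.
v1, v2 = (3.0, 1.0), (2.0, 2.0)

u1 = v1
# Orthogonalization: subtract the projection of v2 onto u1.
c = dot(v2, u1) / dot(u1, u1)                  # <v2,u1> / <u1,u1> = 8/10
u2 = tuple(x - c * y for x, y in zip(v2, u1))  # approx (-0.4, 1.2)

# Normalization: scale each u_k to unit length.
e1 = tuple(x / math.sqrt(dot(u1, u1)) for x in u1)
e2 = tuple(x / math.sqrt(dot(u2, u2)) for x in u2)

print(dot(e1, e2))  # approximately 0, up to floating-point round-off
```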

Summary:

u₁ = v₁

For k = 2 to n: u_k = v_k - ∑_{j=1}^{k-1} (⟨v_k, u_j⟩ / ⟨u_j, u_j⟩) u_j

e_k = u_k / ||u_k||

Algorithmic Procedure

Stepwise Algorithm

Input: Linearly independent vectors {v₁, ..., vₙ}.

Step 1: Initialize u₁ = v₁.

Step 2: For each k = 2 to n, subtract projections onto previous u_j.

Step 3: Normalize each u_k to get e_k.

Pseudocode

function GramSchmidt(V):
    U = []
    for v in V:
        u = v
        for prev_u in U:
            proj = (inner_product(v, prev_u) / inner_product(prev_u, prev_u)) * prev_u
            u = u - proj
        U.append(u)
    E = [u / norm(u) for u in U]
    return E
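The pseudocode above translates directly into runnable Python; this is a minimal sketch of classical Gram-Schmidt over plain lists, with the standard dot product assumed:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(V):
    """Classical Gram-Schmidt: return an orthonormal list spanning span(V).

    V must contain linearly independent vectors; otherwise some u_k is
    zero and the normalization divides by zero.
    """
    U = []
    for v in V:
        u = list(v)
        for prev_u in U:
            # Projection coefficient <v, prev_u> / <prev_u, prev_u>.
            c = dot(v, prev_u) / dot(prev_u, prev_u)
            u = [a - c * b for a, b in zip(u, prev_u)]
        U.append(u)
    return [[a / math.sqrt(dot(u, u)) for a in u] for u in U]

E = gram_schmidt([(2.0, 0.0), (1.0, 1.0)])
print(E)  # [[1.0, 0.0], [0.0, 1.0]]
```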

Computational Complexity

Complexity: O(mn²) operations for n vectors of dimension m (O(n³) when m = n), dominated by the repeated inner products and vector updates.

Properties

Orthogonality

Resulting vectors e_k satisfy ⟨e_i, e_j⟩ = 0 for i ≠ j.

Normalization

Each e_k has unit norm: ||e_k|| = 1.

Span Preservation

Span{e₁,...,e_k} = Span{v₁,...,v_k} for each k ≤ n.
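The span-preservation property can be checked numerically: each v_k is exactly recovered from its projections onto e₁, …, e_k. A sketch using the vectors of the worked example further below (exact closed forms 1/√2, 1/√6, 1/√3; any independent set would do):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Orthonormal vectors produced from v1=(1,1,0), v2=(1,0,1), v3=(0,1,1).
s2, s6, s3 = math.sqrt(2), math.sqrt(6), math.sqrt(3)
E = [(1/s2, 1/s2, 0.0), (1/s6, -1/s6, 2/s6), (-1/s3, 1/s3, 1/s3)]
V = [(1.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]

# Span preservation: v_k equals the sum of its projections onto e_1..e_k.
for k, v in enumerate(V, start=1):
    recon = [0.0, 0.0, 0.0]
    for e in E[:k]:
        c = dot(v, e)
        recon = [r + c * x for r, x in zip(recon, e)]
    assert all(abs(a - b) < 1e-12 for a, b in zip(recon, v))
```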

Uniqueness

Uniqueness: For a fixed ordering of the input vectors the output is uniquely determined; different orderings generally yield different orthonormal bases.

Applications

QR Decomposition

QR factorization: Gram-Schmidt orthonormalizes the columns of a matrix A to form Q; the upper-triangular factor R records the projection coefficients, giving A = QR.
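As a sketch (plain-Python helpers, no external libraries), running the process on the columns of A yields the columns of Q, and R[j][k] = ⟨e_j, a_k⟩ comes out upper-triangular with A = QR:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Columns of A (must be linearly independent).
A_cols = [(1.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]

# Gram-Schmidt on the columns gives the columns of Q.
Q_cols = []
for a in A_cols:
    u = list(a)
    for e in Q_cols:
        u = [x - dot(a, e) * y for x, y in zip(u, e)]
    n = math.sqrt(dot(u, u))
    Q_cols.append([x / n for x in u])

# R[j][k] = <e_j, a_k>; entries below the diagonal vanish (up to round-off),
# so R is upper-triangular and A = QR.
R = [[dot(e, a) for a in A_cols] for e in Q_cols]
print(R[1][0], R[2][0], R[2][1])  # all approximately 0
```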

Signal Processing

Used for constructing orthonormal bases in Fourier and wavelet transforms.

Numerical Linear Algebra

Improves stability and efficiency in solving linear systems and eigenvalue problems.

Machine Learning

Feature orthogonalization, dimensionality reduction, and kernel methods.

Worked Example

Given Vectors

Let v₁ = (1, 1, 0), v₂ = (1, 0, 1), v₃ = (0, 1, 1) in ℝ³ with standard inner product.

Step 1: Compute u₁ and e₁

u₁ = v₁ = (1, 1, 0)

||u₁|| = sqrt(1² + 1² + 0²) = sqrt(2)

e₁ = u₁ / sqrt(2) = (1/√2, 1/√2, 0)

Step 2: Compute u₂ and e₂

proj_{u₁}(v₂) = (⟨v₂, u₁⟩ / ⟨u₁, u₁⟩) u₁

⟨v₂, u₁⟩ = 1*1 + 0*1 + 1*0 = 1

⟨u₁, u₁⟩ = 2

proj = (1/2) * (1,1,0) = (0.5, 0.5, 0)

u₂ = v₂ - proj = (1,0,1) - (0.5,0.5,0) = (0.5, -0.5, 1)

||u₂|| = sqrt(0.5² + (-0.5)² + 1²) = sqrt(0.25 + 0.25 + 1) = sqrt(1.5) ≈ 1.2247

e₂ = u₂ / 1.2247 ≈ (0.408, -0.408, 0.816)

Step 3: Compute u₃ and e₃

proj_{u₁}(v₃) = (⟨v₃, u₁⟩ / ⟨u₁, u₁⟩) u₁

⟨v₃, u₁⟩ = 0*1 + 1*1 + 1*0 = 1

proj_{u₁}(v₃) = (1/2)(1,1,0) = (0.5, 0.5, 0)

proj_{u₂}(v₃) = (⟨v₃, u₂⟩ / ⟨u₂, u₂⟩) u₂

⟨v₃, u₂⟩ = 0*0.5 + 1*(-0.5) + 1*1 = 0 - 0.5 + 1 = 0.5

⟨u₂, u₂⟩ = 1.5

proj_{u₂}(v₃) = (0.5/1.5)(0.5, -0.5, 1) = (1/3)(0.5, -0.5, 1) = (0.1667, -0.1667, 0.3333)

u₃ = v₃ - proj_{u₁}(v₃) - proj_{u₂}(v₃) = (0,1,1) - (0.5,0.5,0) - (0.1667,-0.1667,0.3333) = (-0.6667, 0.6667, 0.6667)

||u₃|| = sqrt((-0.6667)² + 0.6667² + 0.6667²) ≈ sqrt(3 * 0.4444) = sqrt(1.3332) ≈ 1.1547

e₃ = u₃ / 1.1547 ≈ (-0.577, 0.577, 0.577)

Result

Orthonormal set: e₁ ≈ (0.707, 0.707, 0), e₂ ≈ (0.408, -0.408, 0.816), e₃ ≈ (-0.577, 0.577, 0.577)
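The whole worked example can be replayed in a few lines of plain Python (a sketch of classical Gram-Schmidt as defined above, reproducing the rounded values):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

V = [(1.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]
U, E = [], []
for v in V:
    u = list(v)
    for p in U:                   # subtract projections onto earlier u_j
        c = dot(v, p) / dot(p, p)
        u = [a - c * b for a, b in zip(u, p)]
    U.append(u)
    n = math.sqrt(dot(u, u))      # ||u_k||
    E.append([a / n for a in u])

for e in E:
    print([round(x, 3) for x in e])
# [0.707, 0.707, 0.0]
# [0.408, -0.408, 0.816]
# [-0.577, 0.577, 0.577]
```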

Numerical Considerations

Stability Issues

Classical Gram-Schmidt suffers from loss of orthogonality in finite-precision arithmetic, especially when the input vectors are nearly linearly dependent.

Modified Gram Schmidt

Modified Gram-Schmidt rearranges the computations, projecting the running residual rather than the original vector, which better preserves orthogonality in floating point.
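A sketch of the modified variant: in exact arithmetic it returns the same basis as the classical version, but each projection coefficient is computed from the current residual u rather than the original v, which limits error amplification:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def modified_gram_schmidt(V):
    """Modified Gram-Schmidt: project the running residual, not the original vector."""
    E = []
    for v in V:
        u = list(v)
        for e in E:
            # Coefficient uses the *updated* u: the key difference from classical GS.
            c = dot(u, e)
            u = [a - c * b for a, b in zip(u, e)]
        n = math.sqrt(dot(u, u))
        E.append([a / n for a in u])
    return E

E = modified_gram_schmidt([(1.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)])
print([round(x, 3) for x in E[1]])  # [0.408, -0.408, 0.816]
```

On well-conditioned inputs like these the classical and modified variants agree; the difference only shows with nearly dependent vectors in floating point.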

Round-off Errors

Accumulated round-off errors can cause vectors to deviate from strict orthogonality; re-orthogonalization sometimes necessary.

Alternatives

Alternatives: Householder transformations and Givens rotations offer more stable orthogonalization methods.

Extensions and Generalizations

Infinite-Dimensional Spaces

Gram-Schmidt extends to Hilbert spaces, producing countable orthonormal bases from countable linearly independent sets.

Non-Euclidean Inner Products

Applicable to complex inner product spaces with conjugate symmetry.
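In complex spaces the inner product carries a conjugate; a hedged sketch, assuming the convention ⟨a, b⟩ = Σ aᵢ · conj(bᵢ):

```python
# Hermitian inner product: <a, b> = sum a_i * conj(b_i),
# conjugate-symmetric: <a, b> = conj(<b, a>).
def cdot(a, b):
    return sum(x * y.conjugate() for x, y in zip(a, b))

v1, v2 = (1 + 0j, 1j), (1 + 0j, 1 + 0j)
u1 = v1
c = cdot(v2, u1) / cdot(u1, u1)                 # complex projection coefficient
u2 = tuple(x - c * y for x, y in zip(v2, u1))   # (0.5+0.5j, 0.5-0.5j)

print(abs(cdot(u1, u2)))  # approximately 0: orthogonal under the Hermitian product
```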

Block Gram Schmidt

Block variants process multiple vectors simultaneously for performance optimization.

Weighted Inner Products

Generalizes to weighted inner products, adapting projections accordingly.
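A weighted inner product ⟨a, b⟩_w = Σ wᵢ aᵢ bᵢ (with wᵢ > 0) slots into the same projection formula unchanged; a sketch with illustrative weights:

```python
import math

w = (2.0, 1.0, 0.5)  # positive weights (illustrative choice)

def wdot(a, b):
    return sum(wi * x * y for wi, x, y in zip(w, a, b))

v1, v2 = (1.0, 1.0, 0.0), (1.0, 0.0, 1.0)
u1 = v1
c = wdot(v2, u1) / wdot(u1, u1)                # projection in the weighted geometry
u2 = tuple(x - c * y for x, y in zip(v2, u1))
e1 = tuple(x / math.sqrt(wdot(u1, u1)) for x in u1)
e2 = tuple(x / math.sqrt(wdot(u2, u2)) for x in u2)

# Orthonormal with respect to <.,.>_w (not the standard dot product):
print(wdot(e1, e2), wdot(e1, e1))  # approximately 0 and 1
```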

Comparison with Other Methods

Householder Transformations

More numerically stable; reflect vectors to eliminate subdiagonal entries in QR factorization.

Givens Rotations

Use rotations to zero specific vector components; preferred for sparse matrices.

Gram Schmidt Advantages

Simplicity, conceptual clarity, easy implementation, useful for theoretical insights.

Gram Schmidt Disadvantages

Numerical instability, order dependence, less efficient for large-scale problems.

Tables and Formulas

Projection Formula

proj_u(v) = (⟨v, u⟩ / ⟨u, u⟩) u

Norm Definition

||v|| = sqrt(⟨v, v⟩)

Summary Formula

For k = 1 to n:

u_k = v_k - ∑_{j=1}^{k-1} proj_{u_j}(v_k)

e_k = u_k / ||u_k||

References

  • Strang, G., Introduction to Linear Algebra, Wellesley-Cambridge Press, 5th edition, 2016, pp. 210-220.
  • Golub, G.H., Van Loan, C.F., Matrix Computations, Johns Hopkins University Press, 4th edition, 2013, pp. 203-215.
  • Axler, S., Linear Algebra Done Right, Springer, 3rd edition, 2015, pp. 98-105.
  • Trefethen, L.N., Bau, D., Numerical Linear Algebra, SIAM, 1997, pp. 120-130.
  • Stewart, G.W., Introduction to Matrix Computations, Academic Press, 1973, pp. 50-65.