Definition and Properties

Linear Transformations

Definition: A map T: V → W between vector spaces V and W over a field F is linear if for all u, v ∈ V and all α ∈ F, T(u+v) = T(u) + T(v) and T(αu) = αT(u).

Properties

Additivity and homogeneity together define linearity. A key consequence is T(0) = 0, since T(0) = T(0·0) = 0·T(0) = 0. Linear maps therefore preserve the vector space structure.
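The two axioms can be spot-checked numerically. The sketch below, a minimal NumPy example, samples random vectors and scalars; the maps `matrix_map` and `shift_map` are illustrative assumptions, not from the text. Such a test can refute linearity but never prove it.

```python
import numpy as np

def is_linear(T, dim, trials=100, tol=1e-9, seed=0):
    """Numerically spot-check additivity and homogeneity of T: R^dim -> R^m.

    This can only refute linearity on the sampled vectors, not prove it.
    """
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        u, v = rng.normal(size=dim), rng.normal(size=dim)
        a = rng.normal()
        if not np.allclose(T(u + v), T(u) + T(v), atol=tol):
            return False
        if not np.allclose(T(a * u), a * T(u), atol=tol):
            return False
    return True

A = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 2.0]])
matrix_map = lambda v: A @ v       # linear: satisfies both axioms
shift_map = lambda v: v + 1.0      # affine but not linear: T(0) != 0
```

Note that the affine shift fails precisely because T(0) = 0 must hold for any linear map.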

Examples of Vector Spaces

Typical examples: ℝⁿ, ℂⁿ, polynomial spaces Pₙ(F), function spaces C[a,b].

Common Examples

Zero Transformation

Maps every vector to 0 vector: T(v) = 0 ∀v ∈ V.

Identity Transformation

Maps every vector to itself: T(v) = v ∀v ∈ V.

Scaling Transformation

Multiplication by scalar λ: T(v) = λv.

Projection

Maps each vector onto a subspace U of V; a projection is idempotent: T² = T.

Rotation

In ℝ² or ℝ³, rotates vectors around an axis or origin by fixed angle.
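A planar rotation is the standard concrete example. This short NumPy sketch builds the rotation matrix for ℝ²; the angle π/2 is an illustrative choice.

```python
import numpy as np

def rotation_2d(theta):
    """Matrix of the rotation of R^2 about the origin by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation_2d(np.pi / 2)         # quarter turn counterclockwise
e1 = np.array([1.0, 0.0])
# Rotations are linear and length-preserving: R is an orthogonal matrix.
```

A quarter turn sends e₁ = (1, 0) to (0, 1), and RᵀR = I confirms orthogonality.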

Matrix Representation

Coordinate Vectors

Choosing bases B for V and C for W allows representing T by matrix A such that [T(v)]_C = A [v]_B.

Construction of Matrix

Matrix columns: A_j = [T(b_j)]_C where b_j are basis vectors of V.
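The column-by-column construction can be carried out directly for the standard bases. In this sketch, the map `T` is an assumed example; `matrix_of` applies T to each standard basis vector and stacks the results as columns, exactly as A_j = [T(b_j)]_C prescribes.

```python
import numpy as np

def matrix_of(T, n):
    """Matrix of a linear map T: R^n -> R^m relative to the standard bases.

    Column j is T applied to the j-th standard basis vector b_j.
    """
    basis = np.eye(n)
    return np.column_stack([T(basis[:, j]) for j in range(n)])

T = lambda v: np.array([v[0] - v[1], 2.0 * v[2]])   # assumed example R^3 -> R^2
A = matrix_of(T, 3)                                 # 2x3 matrix
```

By construction, A @ v agrees with T(v) for every v, and A has shape m×n = 2×3.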

Dimension Dependence

If dim(V) = n, dim(W) = m, then A is m×n matrix over F.

Change of Basis Effect

The matrix representation changes under a change of basis; for an operator on a single space with the same basis on both sides, this is a similarity transformation.

Given change of basis matrices P for the domain and Q for the codomain, the new matrix is A' = Q⁻¹ A P.

Kernel and Image

Kernel (Null Space)

Definition: Ker(T) = { v ∈ V : T(v) = 0 }. Subspace of V. Measures vectors mapped to zero.

Image (Range)

Definition: Im(T) = { w ∈ W : w = T(v) for some v ∈ V }. Subspace of W.

Properties

Ker(T) and Im(T) are subspaces. Ker(T) trivial ⇔ T injective. Im(T) = W ⇔ T surjective.

Example

For T: ℝ³ → ℝ², T(x,y,z) = (x - y, 0), Ker(T) = {(a,a,c) | a,c ∈ ℝ}.
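The kernel in this example can be computed numerically. The sketch below, assuming the standard matrix of T, extracts an orthonormal null-space basis from the SVD; the tolerance is an illustrative choice.

```python
import numpy as np

# Matrix of T(x, y, z) = (x - y, 0) in the standard bases.
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 0.0]])

def null_space(A, tol=1e-10):
    """Orthonormal basis of Ker(A), computed from the SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T             # right-singular vectors beyond the rank

K = null_space(A)                  # 3x2: the kernel is two-dimensional
# Every column of K lies in Ker(T) = {(a, a, c)}: first two entries equal.
```

The two columns of K span the plane x = y, matching the kernel described above.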

Rank-Nullity Theorem

Statement

For linear T: V → W with V finite-dimensional, dim(V) = rank(T) + nullity(T).

Definitions

Rank(T) = dim(Im(T)), Nullity(T) = dim(Ker(T)).

Implications

Dimension of domain splits into dimensions of image and kernel.

Example            Dim(V)   Rank(T)   Nullity(T)
T: ℝ⁴ → ℝ²         4        2         2
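The entries in the table can be verified numerically. The matrix below is an assumed surjective map ℝ⁴ → ℝ² chosen to match the example; rank comes from `matrix_rank` and nullity from the theorem.

```python
import numpy as np

# Assumed surjective example T: R^4 -> R^2 matching the table.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank        # rank-nullity: dim(V) = rank + nullity
```

Here rank = 2 and nullity = 2, so their sum recovers dim(ℝ⁴) = 4.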

Invertibility and Isomorphisms

Definition of Invertibility

T is invertible if ∃ T⁻¹: W → V s.t. T⁻¹ ∘ T = id_V and T ∘ T⁻¹ = id_W.

Isomorphisms

An invertible linear transformation is an isomorphism; when one exists, V and W are isomorphic vector spaces.

Conditions

T invertible ⇔ T bijective ⇔ Ker(T) = {0} and Im(T) = W.

Matrix Criterion

A square matrix A representing T is invertible if and only if det(A) ≠ 0.

Invertibility check: for A ∈ M_n(F), A is invertible ⇔ det(A) ≠ 0.
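In floating-point arithmetic the determinant is compared against a tolerance rather than exact zero. A minimal sketch, with an assumed example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])         # assumed example; det(A) = 1

if abs(np.linalg.det(A)) > 1e-12:  # tolerance, not an exact zero test
    A_inv = np.linalg.inv(A)
# A_inv @ A is the identity, mirroring T^-1 after T = id_V.
```

In numerical practice the condition number (`np.linalg.cond`) is a more reliable invertibility diagnostic than the raw determinant, which scales poorly with matrix size.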

Composition of Transformations

Definition

Given T: U → V and S: V → W, composition S ∘ T: U → W defined by (S ∘ T)(u) = S(T(u)).

Linearity

Composition of linear transformations is linear.

Associativity

For T, S, R linear, (R ∘ S) ∘ T = R ∘ (S ∘ T).

Matrix Representation

If T and S have matrices A and B, then matrix of S ∘ T is BA.
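The order BA (not AB) can be checked directly: applying T then S agrees with multiplying by the single matrix B A. The shapes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 2))        # matrix of T: R^2 -> R^3
B = rng.normal(size=(4, 3))        # matrix of S: R^3 -> R^4

u = rng.normal(size=2)
# (S o T)(u) computed two ways: compose the maps, or use the product B @ A.
lhs = B @ (A @ u)
rhs = (B @ A) @ u
```

Note the shape bookkeeping: B A is 4×2, exactly the matrix of a map ℝ² → ℝ⁴.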

Change of Basis

Motivation

Different bases yield different matrix representations of same T.

Change of Basis Matrices

Given bases B, B' of V, change of basis matrix P satisfies [v]_B' = P [v]_B.

Effect on Matrix

Matrix A' of T relative to new bases: A' = Q⁻¹ A P where P, Q are change matrices for domain and codomain.

Similarity Transformations

For V = W and B = C, change of basis corresponds to conjugation A' = P⁻¹ A P.

Notation   Definition
P          Change of basis matrix for the domain
Q          Change of basis matrix for the codomain
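The formula A' = Q⁻¹ A P can be sketched with random (almost surely invertible) change of basis matrices; all matrices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 3))        # matrix of T: R^3 -> R^2 in the old bases
P = rng.normal(size=(3, 3))        # assumed invertible: domain basis change
Q = rng.normal(size=(2, 2))        # assumed invertible: codomain basis change

A_new = np.linalg.inv(Q) @ A @ P   # A' = Q^-1 A P
v_new = rng.normal(size=3)
# A and A' represent the same T; when V = W and P = Q this is P^-1 A P.
```

Consistency check: converting A'·v back through Q must agree with applying A to the converted coordinates P·v, since both describe the same underlying map.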

Diagonalization Basics

Definition

T is diagonalizable if ∃ basis of V s.t. matrix of T is diagonal.

Criteria

T diagonalizable ⇔ V has basis of eigenvectors of T.

Benefits

Diagonal form simplifies computing powers and exponentials of T.
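Because A = V D V⁻¹ implies Aᵏ = V Dᵏ V⁻¹, a matrix power reduces to scalar powers of the eigenvalues. A minimal sketch with an assumed diagonalizable example:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])         # assumed example; eigenvalues 5 and 2

evals, V = np.linalg.eig(A)        # columns of V are eigenvectors
D = np.diag(evals)

# A = V D V^-1, hence A^5 = V D^5 V^-1: powers of scalars only.
A5 = V @ np.diag(evals**5) @ np.linalg.inv(V)
```

This matches repeated matrix multiplication but costs one eigendecomposition regardless of the exponent.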

Non-Diagonalizable Cases

Defective matrices lack sufficient eigenvectors; Jordan form applies.

Eigenvalues and Eigenvectors

Definition

Eigenvector v ≠ 0 satisfies T(v) = λv for scalar λ ∈ F called eigenvalue.

Characteristic Polynomial

Defined as p(λ) = det(A − λI); its roots in F are the eigenvalues of T.

Eigen Space

The set of all eigenvectors for λ, together with the zero vector, forms the eigenspace E_λ = Ker(T − λI), a subspace of V.

Computation

Find λ: solve det(A − λI) = 0.
Find v: solve (A − λI)v = 0.
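The two steps can be carried out numerically: extract the characteristic polynomial, find its roots, then solve the homogeneous system for each eigenvalue via a null-space computation. The matrix is an assumed symmetric example.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])         # assumed example; eigenvalues 3 and 1

# Step 1: eigenvalues as roots of the characteristic polynomial.
char_poly = np.poly(A)             # coefficients of det(lambda*I - A)
lambdas = np.roots(char_poly)

# Step 2: for each lambda, solve (A - lambda I) v = 0 via the SVD null space.
lam = 3.0
M = A - lam * np.eye(2)
_, _, vt = np.linalg.svd(M)
v = vt[-1]                         # null-space vector: eigenvector for lambda = 3
```

In practice `np.linalg.eig` does both steps at once and is numerically preferable; the root-finding route mirrors the hand computation above.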

Applications

Systems of Linear Equations

The coefficient matrix of a linear system is a linear transformation; its invertibility determines whether a unique solution exists.

Computer Graphics

Transformations model scaling, rotation, projection of images.

Quantum Mechanics

Operators on state spaces are linear transformations; eigenvalues relate to observables.

Data Science

PCA uses diagonalization of covariance matrices to reduce dimensionality.
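A minimal PCA sketch, assuming toy data: eigendecompose the sample covariance matrix and project onto the top eigenvector. The synthetic dataset and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Assumed toy data: 200 points in R^3 lying near a single direction.
direction = np.array([1.0, 2.0, -1.0])
X = rng.normal(size=(200, 1)) * direction + 0.01 * rng.normal(size=(200, 3))

Xc = X - X.mean(axis=0)            # center the data
C = Xc.T @ Xc / (len(X) - 1)       # sample covariance matrix (3x3)

evals, evecs = np.linalg.eigh(C)   # eigh: C is symmetric; eigenvalues ascending
top = evecs[:, -1]                 # principal axis = top eigenvector
reduced = Xc @ top                 # project each point to 1 dimension
```

Because the data is nearly one-dimensional, the top eigenvalue dominates and the principal axis aligns with the generating direction, so one coordinate retains almost all the variance.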

Summary and Key Takeaways

Core Concepts

Linear transformations preserve addition and scalar multiplication, and are represented by matrices once bases are fixed.

Structural Insights

Kernel and image are fundamental subspaces; rank-nullity links their dimensions.

Matrix Tools

Invertibility, diagonalization, and eigen-analysis reveal the structure of a transformation.

Practical Use

Linear transformations underpin numerous applied and theoretical domains in mathematics and science.
