Markov Chains

Definition and Basic Concepts

Markov Chain Definition

Markov chain: sequence of random variables {X_n} with discrete or continuous state space. Future state depends only on current state, not past history. Time parameter: discrete or continuous.

State Space

Finite, countable, or uncountable set of possible states. Denoted S. Examples: integers, finite sets, vectors.

Transition Mechanism

Transitions between states governed by probabilities or rates. Captured by transition matrices (discrete-time) or generators (continuous-time).

Markov Property

Definition

Conditional probability: P(X_{n+1}=j | X_n=i, X_{n-1}=i_{n-1}, ..., X_0=i_0) = P(X_{n+1}=j | X_n=i). Memoryless property.

Implications

Simplifies dependence structure. Enables tractable analysis and computation. Basis for Markovian models in diverse fields.

Non-Markovian Processes

Processes with memory, history-dependent transitions. Markov models approximate or ignore such dependencies.

Discrete-Time Markov Chains (DTMC)

Definition

Sequence {X_n} indexed by n = 0, 1, 2, ... with the Markov property. State space S countable (finite or countably infinite).

Transition Probabilities

P_{ij} = P(X_{n+1}=j | X_n=i). Time-homogeneous if P_{ij} independent of n.

Chapman-Kolmogorov Equations

Relate multi-step transitions: P^{(m+n)} = P^{(m)} P^{(n)}. Fundamental for analysis and computation.
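A quick numerical check of the factorization, as a Python/numpy sketch (the 2-state matrix is hypothetical, chosen only for illustration):

import numpy as np

# Hypothetical one-step transition matrix for a 2-state chain.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Chapman-Kolmogorov: the (m+n)-step matrix factors as P^(m) P^(n).
m, n = 2, 3
lhs = np.linalg.matrix_power(P, m + n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)
assert np.allclose(lhs, rhs)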

Transition Matrix and Probabilities

Transition Matrix

Matrix P = (P_{ij}) with rows summing to 1. Represents one-step transition probabilities.

Properties

Nonnegative entries, row stochastic. Powers of P give multi-step transition probabilities.

Example

State   To State 1   To State 2   To State 3
1       0.5          0.3          0.2
2       0.1          0.6          0.3
3       0.2          0.4          0.4

Multi-Step Transitions

Computed by matrix powers: P^{(n)} = P^n.

P^{(n)} = P × P × ... × P (n times)
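A minimal numpy sketch using the 3-state matrix from the example table above:

import numpy as np

# The 3-state example matrix from the table above.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

# n-step transition probabilities: P^(n) = P^n.
P2 = np.linalg.matrix_power(P, 2)
print(P2[0, 2])  # P(state 1 -> state 3 in two steps) = 0.27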

Classification of States

Communicating States

States i and j communicate if reachable from each other with positive probability.

Irreducibility

Chain is irreducible if all states communicate. Ensures connectedness.

Recurrence and Transience

Recurrent: starting from the state, the chain returns to it with probability 1, so the state is visited infinitely often. Transient: return probability < 1, so the state is visited only finitely often with probability 1.

Periodic and Aperiodic States

Period of a state i: gcd of all n ≥ 1 with P^{(n)}_{ii} > 0 (the possible return times). Aperiodic if period = 1.

Absorbing States

States that once entered, cannot be left. P_{ii} = 1 and P_{ij} = 0 for j ≠ i.

Stationary and Limiting Distributions

Stationary Distribution

Probability vector π satisfying πP = π and ∑_i π_i = 1. Represents equilibrium distribution.

Existence and Uniqueness

Exists and is unique if the chain is irreducible and positive recurrent.

Limiting Distribution

Limit of distribution as n → ∞. Equals stationary distribution under ergodicity.

πP = π, ∑_i π_i = 1

Calculation Methods

Solve the linear system directly, or use iterative schemes such as the power method.
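A sketch of the linear-system method in Python/numpy, reusing the 3-state example matrix; one balance equation is redundant and is replaced by the normalization constraint:

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

# pi P = pi  <=>  (P^T - I) pi^T = 0; drop one row, add sum(pi) = 1.
n = P.shape[0]
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)  # stationary distribution; check: np.allclose(pi @ P, pi)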

Ergodicity and Convergence

Ergodic Chain

Irreducible, aperiodic, positive recurrent. Ensures convergence to unique stationary distribution.

Convergence Theorems

For ergodic chains, distribution converges regardless of initial state.

Mixing Time

Time required for the distribution to come within a given tolerance of the stationary distribution, typically measured in total variation distance.
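A brute-force sketch of this definition (total variation distance and the tolerance eps = 1e-3 are my choices, not fixed by the text):

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])
pi = np.linalg.matrix_power(P, 200)[0]  # near-stationary row

def mixing_time(P, pi, eps=1e-3):
    # Smallest n with max_i TV(P^n[i, :], pi) <= eps (worst-case start).
    Pn = np.eye(P.shape[0])
    for n in range(1, 10000):
        Pn = Pn @ P
        if 0.5 * np.abs(Pn - pi).sum(axis=1).max() <= eps:
            return n
    raise RuntimeError("did not mix within the step budget")

print(mixing_time(P, pi))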

Continuous-Time Markov Chains (CTMC)

Definition

Markov chains with continuous time parameter t ≥ 0. Transitions occur at random times.

Generator Matrix

Q = (q_{ij}): off-diagonal entries q_{ij} ≥ 0 are transition rates, diagonal entries q_{ii} = -∑_{j≠i} q_{ij}, so each row sums to 0.

Kolmogorov Forward and Backward Equations

Differential equations governing transition probabilities P(t).

Forward equation: dP(t)/dt = P(t)Q. Backward equation: dP(t)/dt = QP(t).
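For a time-homogeneous chain both equations are solved by the matrix exponential P(t) = exp(tQ) with P(0) = I; a sketch with scipy and a hypothetical 2-state generator:

import numpy as np
from scipy.linalg import expm

# Hypothetical generator: off-diagonals are rates, rows sum to 0.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

t = 0.5
P_t = expm(t * Q)       # P(t) = exp(tQ)
print(P_t)
print(P_t.sum(axis=1))  # each row sums to 1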

Relation to DTMC

Embedded DTMC (jump chain) records the sequence of states visited; holding time in state i is exponential with rate λ_i = -q_{ii}, and the jump chain moves to j ≠ i with probability q_{ij}/λ_i.

Applications

Queueing Theory

Model customers arriving and being served. M/M/1 queue via CTMC.
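A sketch of the generator for an M/M/1 queue truncated at capacity K (the truncation and the rate values are my assumptions; the true M/M/1 state space is infinite):

import numpy as np

lam, mu, K = 1.0, 2.0, 3  # arrival rate, service rate, capacity
Q = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    if i < K:
        Q[i, i + 1] = lam  # arrival: queue length i -> i + 1
    if i > 0:
        Q[i, i - 1] = mu   # service completion: i -> i - 1
    Q[i, i] = -Q[i].sum()  # diagonal makes the row sum to 0
print(Q)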

Statistical Physics

Model particle systems, spin states, diffusion processes.

Finance

Model credit ratings, stock price movements, risk assessment.

Biology

Gene sequence modeling, population dynamics, epidemiology.

Computer Science

Markov decision processes, algorithms, reliability modeling.

Simulation Algorithms

Discrete-Time Simulation

Generate the next state by sampling from the discrete distribution P_{i·} (row i of P) with pseudorandom numbers.
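A sketch in Python, again with the 3-state example matrix:

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

def simulate_dtmc(P, state, n_steps, rng):
    # Each next state is drawn from the current row P[state, :].
    path = [state]
    for _ in range(n_steps):
        state = rng.choice(P.shape[0], p=P[state])
        path.append(state)
    return path

print(simulate_dtmc(P, 0, 10, np.random.default_rng(0)))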

Gillespie Algorithm (CTMC)

Simulate jump times and next states from rates in Q. Exact stochastic simulation.

Monte Carlo Methods

Estimate distributions, expectations via repeated simulation runs.

Gillespie Algorithm:
1. Initialize state i and time t = 0
2. Calculate total rate λ = -q_{ii}
3. Sample time to next event Δt ~ Exponential(λ)
4. Choose next state j with probability q_{ij}/λ
5. Update t = t + Δt, state = j
6. Repeat
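A direct translation of these steps into Python (the 2-state generator is hypothetical):

import numpy as np

Q = np.array([[-2.0,  2.0],   # hypothetical generator, rows sum to 0
              [ 1.0, -1.0]])

def gillespie(Q, state, t_end, rng):
    t, path = 0.0, [(0.0, state)]
    while True:
        rate = -Q[state, state]           # step 2: total rate lambda
        if rate == 0.0:                   # absorbing state: stop
            break
        t += rng.exponential(1.0 / rate)  # step 3: Delta t ~ Exp(lambda)
        if t >= t_end:
            break
        probs = Q[state].copy()
        probs[state] = 0.0
        state = rng.choice(len(probs), p=probs / rate)  # step 4
        path.append((t, state))           # step 5: record jump
    return path

print(gillespie(Q, 0, 5.0, np.random.default_rng(1)))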

Illustrative Examples

Random Walk on a Line

States: the integers; from i the walk moves to i-1 or i+1, each with probability 0.5. Simple DTMC with P_{i,i+1} = P_{i,i-1} = 0.5.
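A one-line simulation sketch (symmetric walk started at 0):

import numpy as np

# Partial sums of i.i.d. +/-1 steps give the walk's path.
rng = np.random.default_rng(2)
steps = rng.choice([-1, 1], size=1000)
path = np.concatenate([[0], np.cumsum(steps)])
print(path[-1])  # position after 1000 steps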

Two-State Weather Model

States: Sunny, Rainy. Transition probabilities based on historical data.

From/To   Sunny   Rainy
Sunny     0.8     0.2
Rainy     0.4     0.6

Absorbing Markov Chain Example

States include absorbing and transient. Used to compute absorption probabilities and expected absorption times.
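A sketch with a small assumed chain: a gambler's-ruin walk on {0, 1, 2, 3} with p = 0.5, where states 0 and 3 are absorbing. Writing P in block form with T for transient-to-transient and R for transient-to-absorbing transitions, the fundamental matrix N = (I - T)^{-1} yields both quantities:

import numpy as np

T = np.array([[0.0, 0.5],   # transient states {1, 2}: 1 <-> 2
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],   # transient -> absorbing states {0, 3}
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(2) - T)  # expected visit counts
B = N @ R                         # absorption probabilities
t = N @ np.ones(2)                # expected steps to absorption

print(B)  # from state 1: absorbed at 0 w.p. 2/3, at 3 w.p. 1/3
print(t)  # expected absorption time is 2 steps from either state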
