Definition and Purpose
Concept
Null hypothesis (H0): default assumption of no effect, no difference, or status quo in statistical inference.
Purpose
Serves as baseline to test evidence against; foundation for statistical decision-making and inference.
Context
Used in scientific experiments, surveys, clinical trials, quality control, and social sciences.
Formulation of Null Hypothesis
Expression
Typically stated as equality: parameter equals specified value (e.g., μ = μ0, p = p0).
Parameter Types
Mean, proportion, variance, correlation coefficient, regression slope, etc.
Notation
Denoted as H0, contrasted with alternative hypothesis HA or H1.
Role in Hypothesis Testing
Framework
Defines null condition to be tested; statistical test evaluates observed data under assumption H0 true.
Decision Rule
Reject H0 if data contradict null beyond threshold; fail to reject otherwise.
Objective
Control error rates; maintain objectivity by presuming no effect until evidence shows otherwise.
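As a minimal sketch of this decision rule, the following stdlib-only one-sample z-test compares a p-value to α = 0.05; all sample numbers (mean 103, known σ = 10, n = 50, H0: μ = 100) are hypothetical.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test_decision(xbar, mu0, sigma, n, alpha=0.05):
    """Two-sided one-sample z-test: reject H0 when p-value < alpha."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    p_value = 2.0 * (1.0 - normal_cdf(abs(z)))
    return z, p_value, p_value < alpha

# Hypothetical data: sample mean 103 under H0: mu = 100
z, p, reject = z_test_decision(103.0, 100.0, 10.0, 50)
print(f"z = {z:.3f}, p = {p:.4f}, reject H0: {reject}")
```

Note the asymmetry built into the rule: the data must contradict H0 beyond the threshold before the default assumption is abandoned.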
Test Statistics and Null Hypothesis
Definition
Numerical summary computed from sample data to assess compatibility with H0.
Types of Test Statistics
t-statistic, z-statistic, chi-square, F-statistic, depending on data type and hypothesis.
Distribution Under H0
Sampling distribution of test statistic assumed known or approximated under null hypothesis.
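To make the test-statistic computation concrete, here is a t-statistic calculated by hand from a small hypothetical sample (H0: μ = 5.0); under H0 it follows a t distribution with n − 1 degrees of freedom.

```python
import math
import statistics

# Hypothetical sample; H0: mu = 5.0
sample = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.4, 4.7]
mu0 = 5.0
n = len(sample)
xbar = statistics.mean(sample)
s = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
t_stat = (xbar - mu0) / (s / math.sqrt(n))
# Under H0, t_stat follows a t distribution with n - 1 = 7 degrees of freedom
print(f"t = {t_stat:.4f} with {n - 1} degrees of freedom")
```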
P-Value and Decision Making
Definition
Probability of observing test statistic as extreme or more extreme than observed, assuming H0 true.
Interpretation
Small p-value indicates data unlikely under H0, suggesting evidence against null.
Thresholds
Common significance levels: α = 0.05, 0.01, 0.10; compare p-value to α for decision.
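The threshold comparison can be sketched as follows for a hypothetical observed z-statistic of 2.3: the same p-value leads to rejection at α = 0.10 and 0.05 but not at 0.01.

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value for a z statistic under the standard normal null."""
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

p = two_sided_p_from_z(2.3)  # hypothetical observed statistic
decisions = {alpha: p < alpha for alpha in (0.10, 0.05, 0.01)}
for alpha, reject in decisions.items():
    print(f"alpha = {alpha}: reject H0 -> {reject}")
```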
Type I and Type II Errors
Type I Error (α)
Rejecting true null hypothesis; false positive; controlled by significance level.
Type II Error (β)
Failing to reject false null hypothesis; false negative; depends on test power.
Trade-off
At a fixed sample size, reducing one error type increases the other; the balance depends on sample size, effect size, and the chosen α level.
| Error Type | Description | Consequence |
|---|---|---|
| Type I | Reject true H0 | False alarm |
| Type II | Fail to reject false H0 | Missed detection |
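The meaning of α as a controlled false-alarm rate can be checked by simulation: under a true null (a fair coin), a nominal α = 0.05 test should falsely reject in roughly 5% of repeated experiments. The sample sizes and trial count below are arbitrary choices for illustration.

```python
import math
import random

random.seed(42)
# Simulate many experiments under a TRUE null (fair coin, p = 0.5)
# and count how often a nominal alpha = 0.05 z-test falsely rejects.
alpha, n, trials = 0.05, 400, 2000
z_crit = 1.959963984540054  # two-sided critical value for alpha = 0.05
false_rejects = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(n))
    z = (heads / n - 0.5) / math.sqrt(0.25 / n)
    if abs(z) > z_crit:
        false_rejects += 1
type_i_rate = false_rejects / trials
print(f"empirical Type I rate ~ {type_i_rate:.3f} (nominal {alpha})")
```

The empirical rate fluctuates around the nominal level; it is not exactly 0.05 because the binomial count is discrete and the number of trials is finite.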
Alternative Hypothesis
Definition
Contrasts null; represents presence of effect, difference, or relationship.
Types
One-sided (directional): e.g., μ > μ0 or μ < μ0. Two-sided (non-directional): μ ≠ μ0.
Role
Defines rejection region; guides interpretation of results beyond null assumption.
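The effect of the alternative's direction on the rejection region shows up directly in the p-value. For a hypothetical observed z of 1.8, the one-sided test (in the matching direction) rejects at α = 0.05 while the two-sided test does not.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

z = 1.8  # hypothetical observed z statistic
p_greater = 1.0 - normal_cdf(z)                  # HA: mu > mu0
p_less = normal_cdf(z)                           # HA: mu < mu0
p_two_sided = 2.0 * (1.0 - normal_cdf(abs(z)))   # HA: mu != mu0
print(f"one-sided (>) p = {p_greater:.4f}, two-sided p = {p_two_sided:.4f}")
```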
Practical Examples
Example 1: Mean Comparison
H0: μ = 100 (population mean equals 100). HA: μ ≠ 100.
Example 2: Proportion Test
H0: p = 0.5 (coin is fair). HA: p ≠ 0.5 (coin biased).
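The coin example can be tested with a normal-approximation z-test for a proportion; the data (60 heads in 100 flips) are hypothetical.

```python
import math

# H0: p = 0.5 (fair coin); hypothetical data: 60 heads in 100 flips
heads, n, p0 = 60, 100, 0.5
p_hat = heads / n
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```

With p just under 0.05, the fair-coin null would be rejected at the conventional α = 0.05 level but not at α = 0.01.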
Example 3: Regression Slope
H0: β = 0 (no linear relationship). HA: β ≠ 0 (predictor is linearly related to the response).
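The slope test in Example 3 reduces to a t-statistic built from the least-squares estimate and its standard error; the paired data below are invented for illustration.

```python
import math

# Hypothetical paired data; H0: beta = 0 (slope of y on x)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
beta = sxy / sxx                        # least-squares slope estimate
intercept = ybar - beta * xbar
resid = [yi - (intercept + beta * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in resid) / (n - 2)   # residual variance
se_beta = math.sqrt(s2 / sxx)
t_stat = beta / se_beta  # compare to a t distribution with n - 2 df
print(f"slope = {beta:.4f}, t = {t_stat:.2f}")
```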
Assumptions and Conditions
Random Sampling
Sample data collected randomly and independently from population.
Distributional Assumptions
Normality (often assumed for parametric tests), homoscedasticity, and independence.
Sample Size
Sufficient size for reliable approximations of sampling distributions.
Limitations and Criticisms
Misinterpretation
Rejecting H0 ≠ proving HA true; p-value ≠ probability H0 true.
Overemphasis on Significance
Neglect of effect size and practical relevance; dichotomous decision oversimplifies.
Dependence on Sample Size
Large samples detect trivial effects; small samples may miss real effects.
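This sample-size dependence is easy to demonstrate: the same tiny shift of 0.1 standard deviations is non-significant at n = 25 but highly significant at n = 10,000 (all numbers illustrative).

```python
import math

def two_sided_p(xbar, mu0, sigma, n):
    """Two-sided z-test p-value for a sample mean."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# The same tiny effect (shift of 0.1 with sigma = 1) at three sample sizes
p_small = two_sided_p(0.1, 0.0, 1.0, 25)
p_medium = two_sided_p(0.1, 0.0, 1.0, 400)
p_large = two_sided_p(0.1, 0.0, 1.0, 10000)
print(f"n=25: p = {p_small:.3f}; n=400: p = {p_medium:.4f}; n=10000: p = {p_large:.2e}")
```

The effect size never changes; only the statistical significance does, which is why significance alone says nothing about practical relevance.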
Extensions and Related Concepts
Bayesian Hypothesis Testing
Incorporates prior beliefs; compares hypotheses using Bayes factors.
Confidence Intervals
Alternative approach summarizing parameter uncertainty without binary decision.
Multiple Testing
Adjustments (e.g., Bonferroni) to control error rates across many hypotheses.
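The Bonferroni correction itself is a one-line rule: with m tests, compare each p-value to α/m. The five p-values below are hypothetical.

```python
# Bonferroni correction: with m tests, compare each p-value to alpha / m
p_values = [0.001, 0.012, 0.020, 0.043, 0.300]  # hypothetical results
alpha = 0.05
m = len(p_values)
threshold = alpha / m
rejected = [p < threshold for p in p_values]
print(f"threshold = {threshold}, rejections: {rejected}")
```

Note that four of the five p-values fall below the unadjusted α = 0.05, but only one survives the corrected threshold of 0.01, which is exactly the conservatism the adjustment buys.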
Key Formulas
Test Statistic for Mean (Z-test)
z = (x̄ - μ₀) / (σ / √n)
Test Statistic for Mean (t-test)
t = (x̄ - μ₀) / (s / √n)
P-Value Calculation
For a two-sided test:
p-value = 2 × P(T ≥ |t|) under the H₀ distribution
Power of Test
Power = 1 - β = P(reject H₀ | Hₐ true)
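The power formula above can be evaluated in closed form for a one-sided z-test: the rejection threshold is set under H0, and power is the probability of crossing it when the true mean is μ1. The numbers (μ0 = 100, μ1 = 103, σ = 10, n = 50, α = 0.05) are hypothetical.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test_power(mu0, mu1, sigma, n):
    """Power of a one-sided z-test (HA: mu > mu0) at true mean mu1, alpha = 0.05."""
    z_crit = 1.6448536269514722  # upper 0.05 critical value of the standard normal
    shift = (mu1 - mu0) / (sigma / math.sqrt(n))
    return 1.0 - normal_cdf(z_crit - shift)

power = z_test_power(100.0, 103.0, 10.0, 50)
print(f"power = {power:.3f}")
```

Increasing n, the effect size, or α all raise power, mirroring the Type I/Type II trade-off discussed earlier.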