Point Estimation
Learn about point estimators and their properties: bias, efficiency, and consistency, and how to choose the best estimator.
What is Point Estimation?
A point estimate is a single value used to estimate an unknown population parameter.
| Parameter | Symbol | Point Estimator | Symbol |
|---|---|---|---|
| Population mean | μ | Sample mean | x̄ |
| Population proportion | p | Sample proportion | p̂ |
| Population variance | σ² | Sample variance | s² |
| Population SD | σ | Sample SD | s |
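A minimal Python sketch (the data values are made up for illustration) showing how each estimator in the table is computed from a sample:

```python
import numpy as np

# Hypothetical sample of 10 measurements (made-up values for illustration)
sample = np.array([4.8, 5.1, 5.3, 4.9, 5.0, 5.4, 4.7, 5.2, 5.0, 4.9])

x_bar = sample.mean()          # point estimate of the population mean μ
s2 = sample.var(ddof=1)        # sample variance s²: divides by n-1
s = sample.std(ddof=1)         # sample standard deviation s
p_hat = np.mean(sample > 5.0)  # a sample proportion, e.g. fraction of values above 5.0

print(f"x̄ = {x_bar:.3f}, s² = {s2:.4f}, s = {s:.4f}, p̂ = {p_hat:.2f}")
```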
Properties of Good Estimators
What makes one estimator better than another? Three key properties:
1. Unbiasedness
An estimator is unbiased if its expected value equals the parameter being estimated.
An estimator θ̂ is unbiased for θ if:
E(θ̂) = θ
The estimator is “correct on average.”
Unbiased: Sample mean x̄ for estimating μ
- E(x̄) = μ ✓
Biased: Sample range for estimating population range
- Sample range tends to underestimate population range, since the sample minimum and maximum can never fall outside the population's extremes
Bias of an estimator:
Bias(θ̂) = E(θ̂) - θ
If Bias = 0, the estimator is unbiased.
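Both claims are easy to check by simulation. A minimal sketch (the normal and uniform settings are our choices for illustration): the average of many sample means recovers μ, while the average sample range falls short of the true range.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 100_000

# Unbiased: averaging many sample means recovers mu
mu, sigma = 10.0, 2.0
x_bars = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
print("average x̄:", x_bars.mean())  # ≈ 10.0 = mu

# Biased: for Uniform(0, 1) the population range is 1, but the
# expected sample range is (n-1)/(n+1) = 19/21 ≈ 0.905 for n = 20
ranges = np.ptp(rng.uniform(0.0, 1.0, size=(reps, n)), axis=1)
print("average sample range:", ranges.mean())  # ≈ 0.905 < 1
```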
2. Efficiency
Among unbiased estimators, the one with smallest variance is most efficient.
Comparing two unbiased estimators θ̂₁ and θ̂₂, the relative efficiency is:
Efficiency = Var(θ̂₂) / Var(θ̂₁)
If this ratio is greater than 1, θ̂₁ is more efficient.
For estimating μ of a normal distribution:
| Estimator | Variance | Efficiency |
|---|---|---|
| Sample mean x̄ | σ²/n | Most efficient |
| Sample median | 1.57σ²/n | Less efficient |
The mean uses all the data; the median ignores some information.
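A quick simulation sketch (sample size, seed, and replication count are arbitrary) reproduces the variance ratio for normal data:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.0, 1.0, 101, 50_000

samples = rng.normal(mu, sigma, size=(reps, n))
var_mean = samples.mean(axis=1).var()          # ≈ σ²/n
var_median = np.median(samples, axis=1).var()  # ≈ 1.57·σ²/n for large n

print("Var(mean):  ", var_mean)
print("Var(median):", var_median)
print("ratio:      ", var_median / var_mean)   # → π/2 ≈ 1.57 as n grows
```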
3. Consistency
An estimator is consistent if it converges to the true parameter as sample size increases.
An estimator θ̂ₙ is consistent for θ if:
As n → ∞, θ̂ₙ → θ (in probability)
The Sample Mean
Properties:
- E(x̄) = μ (unbiased)
- Var(x̄) = σ²/n
- Consistent (variance → 0 as n → ∞)
The sample mean is the minimum variance unbiased estimator (MVUE) of μ for normal populations.
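A sketch (all parameters arbitrary) making consistency concrete: the simulated variance of x̄ tracks σ²/n and shrinks as n grows, so x̄ piles up around μ.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, reps = 5.0, 3.0, 5_000

# Var(x̄) = σ²/n → 0 as n grows: x̄ concentrates around μ (consistency)
for n in (10, 100, 1000):
    x_bars = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    print(f"n={n:>4}: Var(x̄) ≈ {x_bars.var():.5f}   (σ²/n = {sigma**2 / n:.5f})")
```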
The Sample Variance
s² = Σ(xᵢ - x̄)² / (n - 1)
Why n-1? Division by n-1 makes s² an unbiased estimator of σ²: E(s²) = σ².
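A simulation sketch (true σ² = 4 and n = 10 are arbitrary choices) showing the bias directly: dividing by n averages out to (n-1)/n · σ², while dividing by n-1 averages out to σ².

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2, n, reps = 4.0, 10, 100_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

biased = samples.var(axis=1, ddof=0)    # divide by n
unbiased = samples.var(axis=1, ddof=1)  # divide by n-1

print("true σ²:                ", sigma2)
print("average divide-by-n:    ", biased.mean())    # ≈ 3.6 = (n-1)/n · σ²
print("average divide-by-(n-1):", unbiased.mean())  # ≈ 4.0, unbiased
```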
Sample Proportion
Properties:
- E(p̂) = p (unbiased)
- Var(p̂) = p(1-p)/n
- Consistent
Mean Squared Error (MSE)
Sometimes we accept a small bias in exchange for lower variance. The MSE balances both:
MSE(θ̂) = E[(θ̂ - θ)²] = Var(θ̂) + [Bias(θ̂)]²
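The same simulation setup as above (our choice of σ² = 4, n = 10) shows the trade-off in action: dividing by n is biased, yet its MSE is lower than that of the unbiased s², because its variance is smaller.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, n, reps = 4.0, 10, 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

for ddof, label in ((1, "divide by n-1 (unbiased)"), (0, "divide by n (biased)")):
    est = samples.var(axis=1, ddof=ddof)
    bias = est.mean() - sigma2
    mse = np.mean((est - sigma2) ** 2)  # E[(estimator - parameter)²] = Var + Bias²
    print(f"{label:25s} Bias={bias:+.3f}  Var={est.var():.3f}  MSE={mse:.3f}")
```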
Maximum Likelihood Estimation (MLE)
Maximum Likelihood Estimation finds the parameter value that makes the observed data most likely.
Given data x₁, x₂, …, xₙ and parameter θ:
Likelihood:
L(θ) = f(x₁; θ) · f(x₂; θ) · … · f(xₙ; θ)
MLE: Find θ that maximizes L(θ)
Flip a coin 100 times, get 60 heads.
What value of p makes this most likely?
L(p) = P(60 heads in 100 flips | p) = C(100, 60) p⁶⁰(1-p)⁴⁰
Taking the derivative of the log-likelihood and setting it to 0:
d/dp [60 ln p + 40 ln(1-p)] = 60/p - 40/(1-p) = 0
MLE: p̂ = 60/100 = 0.60
(This matches our intuition!)
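The calculus answer can also be checked numerically. A grid-search sketch over the log-likelihood (dropping the constant binomial coefficient, which doesn't affect the argmax):

```python
import numpy as np

heads, flips = 60, 100

# Log-likelihood ℓ(p) = 60·ln(p) + 40·ln(1-p), up to an additive constant
p = np.linspace(0.01, 0.99, 9801)
log_lik = heads * np.log(p) + (flips - heads) * np.log(1 - p)

print("MLE p̂ =", p[np.argmax(log_lik)])  # ≈ 0.60 = 60/100
```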
Properties of MLEs
- Asymptotically unbiased: Bias → 0 as n → ∞
- Consistent: Converges to true value
- Asymptotically efficient: Minimum variance as n → ∞
- Invariant: MLE of g(θ) = g(MLE of θ)
Comparing Estimators
Estimating population center μ:
| Estimator | Unbiased? | Efficiency | Robust? |
|---|---|---|---|
| Sample mean | Yes | Best for normal | No (sensitive to outliers) |
| Sample median | Yes (for normal) | Less efficient | Yes (robust to outliers) |
| Trimmed mean | Slightly biased | Moderate | Moderately robust |
Choice depends on:
- Distribution shape (normal? skewed? outliers?)
- Sample size
- Research goals
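A simulation sketch illustrating the robustness column (the 5% contamination model and all parameters are our own invention): gross outliers drag the mean upward but barely move the median or the trimmed mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
mu, n, reps = 50.0, 50, 20_000

# Normal(50, 5) data, with ~5% of values contaminated by a +100 shift
samples = rng.normal(mu, 5.0, size=(reps, n))
samples += (rng.random(size=(reps, n)) < 0.05) * 100.0

estimates = {
    "mean": samples.mean(axis=1),                    # pulled toward the outliers
    "median": np.median(samples, axis=1),            # nearly unaffected
    "10% trimmed mean": stats.trim_mean(samples, 0.1, axis=1),
}
for name, est in estimates.items():
    print(f"{name:17s} average = {est.mean():7.3f}   (true center μ = {mu})")
```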
Standard Error
The standard error is the standard deviation of an estimator.
For the sample mean:
SE(x̄) = σ/√n
Estimated by (when σ is unknown):
SE(x̄) ≈ s/√n
| Estimator | Standard Error |
|---|---|
| Sample mean | σ/√n (or s/√n) |
| Sample proportion | √[p(1-p)/n] (or √[p̂(1-p̂)/n]) |
| Difference in means | √(σ₁²/n₁ + σ₂²/n₂) |
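Plugging the table's formulas in (the summary statistics below are illustrative; the first matches practice problem 1):

```python
import numpy as np

# SE of a sample mean: s/√n
n, s = 64, 8.0
print("SE(x̄) =", s / np.sqrt(n))  # 8/8 = 1.0

# SE of a sample proportion: √[p̂(1-p̂)/n]
n, p_hat = 400, 0.30
print("SE(p̂) =", np.sqrt(p_hat * (1 - p_hat) / n))  # ≈ 0.0229

# SE of a difference in means: √(s₁²/n₁ + s₂²/n₂)
n1, s1, n2, s2 = 30, 4.0, 40, 5.0
print("SE(x̄₁ - x̄₂) =", np.sqrt(s1**2 / n1 + s2**2 / n2))  # ≈ 1.076
```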
Summary
In this lesson, you learned:
- Point estimate: Single value to estimate a parameter
- Unbiased: E(estimator) = parameter
- Efficient: Smallest variance among unbiased estimators
- Consistent: Converges to true value as n → ∞
- MSE = Variance + Bias²: Overall accuracy measure
- Sample mean x̄ is unbiased for μ
- Sample variance s² uses n-1 to be unbiased for σ²
- MLE maximizes the likelihood of observed data
- Standard error measures variability of an estimator
Practice Problems
1. A sample of 64 has mean 50 and standard deviation 8. What is the standard error of the sample mean?
2. An estimator has E(θ̂) = θ + 2 and Var(θ̂) = 9. a) Is it unbiased? b) What is its MSE?
3. Why do we divide by n-1 instead of n when calculating sample variance?
4. You flip a coin 80 times and get 52 heads. What is the MLE for the probability of heads?
Answers
1. SE(x̄) = s/√n = 8/√64 = 8/8 = 1
2a. No, it is biased. E(θ̂) = θ + 2 ≠ θ, so Bias = 2
2b. MSE = Var(θ̂) + Bias² = 9 + 2² = 9 + 4 = 13
3. Dividing by n produces a biased estimator that underestimates σ².
When we calculate s², we use x̄ instead of the true μ. Since x̄ is calculated from the same data, the deviations from x̄ are artificially small (x̄ minimizes Σ(xᵢ - x̄)²).
Dividing by n-1 corrects for this, producing an unbiased estimator. (n-1 = degrees of freedom)
4. MLE for a proportion: p̂ = successes/trials = 52/80 = 0.65
Next Steps
Build on your estimation knowledge:
- Confidence Intervals - Interval estimates
- Sample Size Determination - Planning studies
- Hypothesis Testing Basics - Testing claims