How Many People Do You Need to Survey?

A survey of 1,067 people gives a plus or minus 3% margin of error at 95% confidence whether the population is 100,000 or 300 million. Here is why, and how to calculate the number you actually need.

6 min read · Updated March 1, 2026 · by Samir Messaoudi

The Most Counterintuitive Result in Statistics

A survey of 1,067 people provides a plus or minus 3% margin of error at 95% confidence whether the target population is 100,000 or 300 million. Required sample size does not scale with population size; it scales with the precision you want. This is why national political polls sample approximately 1,000 people and legitimately claim to represent the entire country within a known error range.

This result comes from the mathematics of random sampling. Each additional randomly selected respondent adds independent information, but with sharply diminishing returns. Going from 100 to 200 respondents cuts the margin of error from roughly plus or minus 9.8% to plus or minus 6.9%. Going from 1,000 to 2,000 cuts it only from about plus or minus 3.1% to plus or minus 2.2%. To cut the margin of error in half, you must quadruple the sample size. Precision is expensive.

Three decisions drive sample size: confidence level (95% for most business research; 99% for safety-critical decisions), margin of error (plus or minus 5% is standard; 2-3% for high-stakes conclusions), and expected proportion (when unknown, use 50%, the conservative assumption that maximizes required sample size).
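These three inputs feed the standard formula for an infinite population, n = z^2 * p(1-p) / e^2. A minimal Python sketch (the z-score values and the round-up convention are assumptions; calculators that round to the nearest integer report 1,067 where rounding up gives 1,068):

```python
import math

def sample_size(z: float = 1.96, moe: float = 0.05, p: float = 0.5) -> int:
    """Required sample size for an infinite population.

    z: z-score for the confidence level (1.96 for 95%, 2.576 for 99%)
    moe: desired margin of error as a proportion (0.03 for plus or minus 3%)
    p: expected proportion (0.5 is the conservative worst case)
    """
    n = z ** 2 * p * (1 - p) / moe ** 2
    return math.ceil(n)  # round up so precision is never undershot

print(sample_size(moe=0.03))  # 1068 rounded up; ~1,067 as commonly cited
print(sample_size(moe=0.05))  # 385 rounded up; often quoted as 384
```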

Calculate your required sample size

Enter your confidence level, desired margin of error, and population size to find exactly how many responses you need, with finite population correction applied.

Calculate Sample Size

How to Determine the Right Sample Size

  1. Choose your confidence level based on decision stakes

    95% is standard for most business research and academic surveys. It means that in repeated sampling, 95% of resulting confidence intervals would contain the true value; equivalently, there is a 5% chance your result falls outside the margin of error. Use 99% for safety-critical or high-financial-stakes decisions. Use 90% for quick internal directional checks.

  2. Set your margin of error based on required precision

    Plus or minus 5% is the default for most surveys. Use 3% for important strategic decisions. Use 1-2% only when the cost of being wrong is very high. Note: halving the margin of error roughly quadruples the required sample size.

  3. Estimate the population proportion

    If you do not know what proportion will answer your key question a certain way, use 50%. This conservative assumption gives the maximum required sample size, ensuring you do not undersample. If your true proportion is more extreme (15% or 85%), the required sample is smaller, but 50% guarantees adequate precision regardless.

  4. Apply finite population correction for small populations

    For populations under roughly 10,000, the required sample is smaller than the standard formula suggests. For a population of 500 with plus or minus 5% margin of error at 95%, you need 217 responses (43% of the population) rather than the 384 the infinite-population formula would give. The calculator applies this correction automatically.

  5. Plan subgroup analysis before sampling

    If you need statistically valid conclusions about subgroups, each subgroup needs its own adequate sample, not just the overall total. A sample of 400 that is 15% from one subgroup gives only 60 responses in that subgroup, providing a plus or minus 12.6% margin of error for subgroup conclusions. Plan subgroup targets before fielding.
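The finite population correction from step 4 can be sketched in Python (the z-scores and the round-up convention are assumptions; step 4's figure of 217 reflects nearest-integer rounding where rounding up gives 218):

```python
import math

def sample_size_fpc(population: int, z: float = 1.96,
                    moe: float = 0.05, p: float = 0.5) -> int:
    """Required sample size with the finite population correction applied."""
    n0 = z ** 2 * p * (1 - p) / moe ** 2   # infinite-population requirement
    n = n0 / (1 + (n0 - 1) / population)   # finite population correction
    return math.ceil(n)

print(sample_size_fpc(500))          # ~218: about 43% of a population of 500
print(sample_size_fpc(300_000_000))  # 385: population size barely matters at scale
```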

How to Evaluate Research Statistics You Encounter

When encountering a published statistic, three questions establish validity: How many respondents? What was the margin of error? Was the sample randomly selected? Sample size determines precision. Margin of error tells you the uncertainty range. Random selection determines whether the precision applies to the target population or only to the specific group surveyed.

A study of 50 people claiming conclusions about a national population has a margin of error of approximately plus or minus 13.9% at 95% confidence. A reported finding of 60% support could represent anything from 46.1% to 73.9% in the actual population, a range too wide to support strong conclusions. Use the calculator in reverse mode to evaluate any research: enter the sample size to find the margin of error it produces.
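That reverse calculation is just the margin-of-error formula solved for the error term. A minimal sketch, assuming the worst-case proportion p = 0.5 and the 95% z-score:

```python
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Worst-case margin of error (as a proportion) for a random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(50):.2%}")    # 13.86%: the 50-person study above
print(f"{margin_of_error(60):.2%}")    # 12.65%: the 60-response subgroup from step 5
print(f"{margin_of_error(1067):.2%}")  # 3.00%: the national-poll benchmark
```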

Non-random samples are a separate issue that no sample size can fix. An opt-in internet survey with 10,000 responses cannot be generalized to the broader population if respondents self-selected in a biased way. Participation method, recruitment approach, and representativeness all affect generalizability in ways that sample size alone cannot address.

Frequently Asked Questions

What is the minimum acceptable sample size?

Statistical convention considers 30 the floor for the Central Limit Theorem to apply meaningfully. For surveys making business decisions, 100 completed responses is a practical minimum for overall conclusions. For reliable subgroup analysis across 4-5 segments, 300-400 total is a common floor. For national-level claims at plus or minus 3%, approximately 1,067 is required at 95% confidence.

What does 95% confidence actually mean?

It means: if you repeated this exact sampling process many times, 95% of the resulting confidence intervals would contain the true population value. It does not mean there is a 95% probability the true value is in this specific interval; once collected, the data either contains the true value or it does not. The 95% describes the reliability of the process, not a probability for any specific result.
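This process interpretation is easy to check by simulation. A sketch, assuming an illustrative true proportion of 60% and samples of 400 (both values are arbitrary choices for the demonstration):

```python
import random

random.seed(1)
true_p, n, z = 0.60, 400, 1.96   # assumed true proportion; sample size; 95% z-score
trials, covered = 2000, 0
for _ in range(trials):
    hits = sum(random.random() < true_p for _ in range(n))   # simulate one survey
    p_hat = hits / n
    moe = z * (p_hat * (1 - p_hat) / n) ** 0.5
    covered += (p_hat - moe <= true_p <= p_hat + moe)        # did the CI cover truth?
coverage = covered / trials
print(coverage)  # typically close to 0.95
```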

Does a convenience sample change the required size?

The sample size formula assumes random sampling. Non-random samples (convenience samples, opt-in surveys, snowball samples) may be biased in ways no sample size increase can correct. Reporting sample size for a non-random sample as evidence of statistical validity is misleading; the appropriate response is to acknowledge the sampling limitation when interpreting results.

How do I calculate sample size for A/B testing?

A/B test sample size depends on: the baseline conversion rate, the minimum detectable effect (smallest difference worth detecting), statistical significance threshold (typically 95%), and statistical power (typically 80%). These inputs produce per-variant sample requirements. Dedicated A/B test calculators handle the two-proportion comparison math correctly.
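As a rough illustration of how those four inputs combine, here is the standard two-proportion normal-approximation formula in Python (the baseline and effect-size values are hypothetical, and dedicated calculators may apply additional corrections):

```python
import math

def ab_sample_size(p1: float, p2: float,
                   z_alpha: float = 1.96, z_beta: float = 0.8416) -> int:
    """Per-variant sample size for a two-proportion z-test.

    Defaults assume 95% significance (two-sided) and 80% power.
    p1: baseline conversion rate; p2: baseline plus minimum detectable effect.
    """
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical test: 10% baseline, smallest effect worth detecting is 2 points.
print(ab_sample_size(0.10, 0.12))  # on the order of 3,800+ per variant
```

Note how the required size explodes as the minimum detectable effect shrinks: detecting a 1-point lift instead of 2 roughly quadruples the per-variant requirement.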

When does population size matter?

Only when sampling a significant fraction of a small population. For populations above roughly 20,000, the infinite-population formula gives essentially the same result as the finite-population correction. For populations of 1,000, the correction meaningfully reduces the required sample. For populations under 200, surveying the entire population is often more practical than sampling.

Why does using 50% as the proportion maximize sample size?

The formula contains the term p(1-p), which is maximized when p equals 0.5, giving 0.25. As p moves away from 0.5 toward 0 or 1, p(1-p) decreases and required sample size falls. Using 50% when the true proportion is unknown guarantees you will not undersample regardless of the actual result.
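A quick numerical check of that claim, holding z = 1.96 and a plus or minus 5% margin of error fixed:

```python
def required_n(p: float, z: float = 1.96, moe: float = 0.05) -> float:
    """Infinite-population sample size for a given expected proportion p."""
    return z ** 2 * p * (1 - p) / moe ** 2

# Sample size falls as p moves away from the 0.5 worst case.
for p in (0.5, 0.7, 0.85, 0.95):
    print(f"p = {p:.2f} -> n = {required_n(p):.0f}")
```

The requirement drops from 384 at p = 0.5 to 73 at p = 0.95, which is why 50% is the safe default when the true proportion is unknown.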

Plan your survey sample size

Calculate the exact number of responses needed for statistically valid conclusions, or enter an existing sample to find the margin of error it provides.

Calculate Required Sample Size