This piece draws on real data from Videotape’s Hispanic QSR Study (2025), capturing hundreds of survey responses and selfie-style video feedback.
Understanding sample size and statistical significance (without the jargon)
Have you ever opened a research report and thought, what the hell does any of this mean? Ever heard someone talk about "stat sig findings" and not known exactly what they meant?
You’re not alone — these concepts are confusing to everyone at times. Us included.
Charts, confidence intervals, margins of error, little arrows that say “statistically higher” — it can all feel like alphabet soup. And yet, these numbers shape massive business decisions.
So what does it all mean?
To start: every chart, survey, and percentage you’ve ever seen is built on two principles: sample size and statistical significance.
They’re what turn a handful of opinions into something you can act on. Get them wrong, and everything else starts to unravel. Get them right, and you supercharge your business decision-making.
Before we get too deep into AI or quant-meets-qual storytelling, it’s worth pausing to unpack how researchers decide what’s real and what’s just random noise.
What does sample size even mean?
A sample is simply the group of people who take part in your study.
If you’re studying every person in your target audience, that’s a population. But since that’s rarely possible, we use a sample — a smaller group that represents the larger one.
When you see something along the lines of
n=1000
on a chart, it simply means that’s the number of people who answered that specific question. Note that depending on the survey, not everyone will see every question (or answer option), so the n value can differ widely within a single study, or even within a single chart.
The goal of all of this is confidence.
The bigger and more representative your sample, the more certain you can be that your findings reflect reality.
If 60% of 100 people like a new ad, your true number might fall anywhere between 50% and 70%. Ask 1,000 people, and that range tightens — maybe 57% to 63%.
That narrowing is what we call confidence. Bigger samples reduce the “margin of error,” giving you a clearer picture of the truth.
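If you’re curious where those ranges come from, here’s a quick back-of-the-envelope sketch using the standard formula for a proportion’s 95% margin of error. The numbers are the ad example above, not figures from the study:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an observed proportion p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# 60% of 100 people like the ad: roughly +/- 9.6 points (so ~50% to ~70%)
print(round(margin_of_error(0.60, 100) * 100, 1))   # 9.6

# The same 60% from 1,000 people: roughly +/- 3.0 points (so ~57% to ~63%)
print(round(margin_of_error(0.60, 1000) * 100, 1))  # 3.0
```

Notice the pattern: ten times the sample only cuts the margin of error by about a factor of three, which is why samples don’t need to be enormous, just big enough and representative.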
But bigger isn’t always better. What matters most is that your sample actually reflects the audience you care about — not just the easiest group to recruit.
What statistical significance (stat sig) tells you
Even with a solid sample, chance can play tricks.
Maybe Group A liked your campaign 54% of the time and Group B liked it 51%. Is that real, or just random noise?
That’s where statistical significance comes in. It’s the math that tells you whether a difference is likely to hold up if you ran the same study again.
Typically, when researchers say a result is statistically significant, it means they’re 95% confident the finding is real, and not due to random chance.
It's not a guarantee, but it means it's worth paying attention to.
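Under the hood, a comparison like the one above is often checked with a two-proportion z-test. Here’s a minimal sketch; the group sizes of 500 are assumptions for illustration, not figures from any study:

```python
import math

def two_prop_z_test(p1: float, n1: int, p2: float, n2: int):
    """Two-proportion z-test; returns (z score, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 54% of 500 people vs 51% of 500 people (sample sizes assumed)
z, p = two_prop_z_test(0.54, 500, 0.51, 500)
print(abs(z) < 1.96, p > 0.05)  # True True: not significant at the 95% level
```

With these assumed sample sizes, the 54% vs 51% gap doesn’t clear the 95% bar, so it would not earn an arrow: it could easily be random noise.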
So when you see a chart with little arrows — ↑ or ↓ — what it’s really saying is this: A statistical test was run on that bar, and the direction you’re seeing can be trusted. It's "stat sig."
In response to the question: "What is your typical spend per person when dining out?"
The ↑ next to the 14% isn’t decoration. It’s telling us, with 95% confidence, that non-Hispanic people really are more likely than Hispanic people to spend less than $10 when eating out.
Simply put, it means that: "You can be confident that this difference holds up beyond this study."
But there's an important caveat.
Significance doesn't always mean important.
We see statistically significant results that are practically meaningless: a 1% difference on a metric that doesn’t matter, or things that are simply obvious.
This is a chart pulled from a female-only study run on Videotape. It confirms, with 95% confidence, that women are in fact… women.
This is absolutely great news for our survey targeting.
Not so great for being useful.
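The first case is easy to reproduce with the same z-test arithmetic. With a big enough sample (the 50,000-per-group sizes below are made up for illustration), even a 1-point gap counts as "stat sig":

```python
import math

# A 1-point difference (51% vs 50%) with 50,000 respondents per group
# (sizes assumed for illustration)
p1, n1, p2, n2 = 0.51, 50_000, 0.50, 50_000

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(z > 1.96)  # True: significant at the 95% level...
# ...but a 1-point lift may still be far too small to act on.
```

Statistical significance tells you a difference is real; it’s up to you (or your tools) to decide whether it’s big enough to matter.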
On Videotape, our AI uses statistical significance to generate key takeaways for every chart you see to help you contextualize and separate what's important from the noise.
Why it matters now more than ever
It’s easy to forget, in a landscape full of new tools and bold promises — from AI-moderated focus groups to synthetic agents — that the insights driving business decisions still depend on sound, reliable methods.
If your research platform isn’t helping you understand what matters (and what doesn’t) with statistical significance and deeper analysis, it’s worth considering the business risks of relying on anything less.
Because at the end of the day, technology can make research faster — but only good methodology makes it right.
The future of insight isn’t just about more data or smarter algorithms. It’s about trust. It’s about being able to say, with confidence, that what you’re seeing reflects something real, and knowing exactly how sure you can be.



