Key Takeaways
- Proper sample size calculation ensures the statistical validity and credibility of clinical research results in eye care.
- Understanding statistical power and type I/II errors is essential for designing impactful optometry studies and interpreting data accurately.
- Using published studies and statistical tools like G*Power or SPSS helps clinicians and researchers calculate accurate sample sizes for meaningful outcomes.

Understanding sample size calculation is essential for any eye care professional conducting clinical research. Whether you're practicing in a private clinic, academic institution, or commercial setting, knowing how many subjects you need for your study can make or break the integrity of your results. In this Four Eyed Professor episode, Dr. Chris Lievens is joined by Dr. Nancy Briggs, Director of the Statistical Consulting Unit at the University of New South Wales, to demystify the math and methodology behind proper study design. Their insights simplify the process of designing statistically sound studies and make the world of statistical power and clinical research design more approachable to optometrists, ophthalmologists, and students alike.
Table of Contents
- Key Takeaways
- The Importance of Starting with the Right Sample Size
- Using Previous Research to Guide Your Numbers
- What to Do When Results Don't Match Expectations
- Making Sense of Type I and Type II Errors
- When Your P-Value Is "Close, But Not Quite"
- Tools and Software for Calculating Sample Size
- Embracing Research in Everyday Practice
The Importance of Starting with the Right Sample Size
Before diving into a study, one of the most critical steps is determining how many participants are needed. Dr. Briggs emphasized that this calculation isn’t something to be guessed: “It is really integral to the very start of the research process.” Choosing arbitrary numbers like 10 or 40 participants because they’re “nice round numbers” doesn’t cut it in evidence-based research.
A proper sample size calculation depends on:
- Your study design
- The primary outcome
- How data will be analyzed
- The desired level of statistical power
- The acceptable type I error rate (alpha)

By understanding these components, researchers can avoid common pitfalls and design studies that yield meaningful, publishable results.
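These inputs map directly onto the textbook sample-size formula for comparing two group means. As a rough sketch (not from the episode; the effect size and standard deviation below are hypothetical placeholders you would replace with values from prior studies), the normal-approximation sample size per group is n = 2 * ((z_(1-alpha/2) + z_(1-beta)) * sigma / delta)^2:

```python
import math
from statistics import NormalDist  # standard-normal quantiles, no extra packages needed

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate participants per group for a two-group comparison of means,
    using the normal approximation:
        n = 2 * ((z_(1-alpha/2) + z_(1-beta)) * sigma / delta) ** 2
    delta = smallest difference worth detecting; sigma = expected standard deviation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided type I error threshold
    z_beta = z(power)           # power = 1 - beta (type II error)
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)        # always round up to be conservative

# Hypothetical example: detect a 2-unit difference when sigma = 4
# (a standardized effect of 0.5) with 80% power and alpha = 0.05.
print(sample_size_per_group(delta=2, sigma=4))  # 63 per group
```

Note how sensitive the result is to the assumed effect size: halving delta (to 1 unit) roughly quadruples the required n, which is why the conservative estimates discussed below matter so much.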
Using Previous Research to Guide Your Numbers
If you’re not sure where to begin, previous studies are your best starting point. “You can use the mean and standard deviation from published studies to help inform your calculations,” Dr. Briggs explains. But it comes with a caution: be sure the populations and timelines are comparable.
For instance, if a study measured outcomes at 12 months and your study only runs for three, you should expect and adjust for potentially smaller effect sizes. This is where conservative estimation comes into play. It’s better to plan for more variability and smaller effects than to overestimate your power and underdeliver on findings.
Tip: When in doubt, aim for a slightly larger sample size than needed. Conservative estimates help ensure your study remains statistically valid even if real-world variability throws a wrench in your assumptions.
What to Do When Results Don’t Match Expectations
Let’s say you complete your study and discover your three-month outcome data looks vastly different from the prior 12-month study you used as a guide. What now?
Dr. Briggs advises against enrolling more participants on the fly. “That was not part of the research design,” she cautions. Particularly with randomized clinical trials (RCTs), protocols should be pre-registered, and changes must follow clearly defined procedures, like interim analysis plans.
Instead, publish your results. Even unexpected or inconclusive findings contribute to the scientific community and future meta-analyses. As Dr. Briggs puts it, “Whatever result you do get is important and worthy of reporting.”
Making Sense of Type I and Type II Errors
Understanding statistical errors is foundational to determining sample size and interpreting results. In short:
- Type I Error (α): Finding an effect that isn't really there. Commonly set at 5% (p < 0.05).
- Type II Error (β): Missing a real effect. Controlled by setting power (1 − β), typically at 80% or 90%.

Dr. Briggs points out that while most researchers focus on avoiding Type I errors, minimizing Type II errors is equally crucial. This is achieved through rigorous research design, using validated measures, and having enough subjects to detect real effects.
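Both error rates can be made concrete with a small simulation (a hypothetical illustration, not from the episode): when two groups are truly identical, p < 0.05 should occur about 5% of the time (the type I error rate), and when a real difference exists, the rate at which p < 0.05 occurs is the study's power, i.e. 1 minus the type II error rate:

```python
import math
import random
from statistics import NormalDist, mean, variance

def two_sample_p(x, y):
    """Two-sided p-value from a two-sample z-test (normal approximation)."""
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    z = (mean(x) - mean(y)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def rejection_rate(delta, n, trials=2000, alpha=0.05, rng=random.Random(1)):
    """Fraction of simulated studies with p < alpha when the true
    group difference is delta (in standard-deviation units)."""
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(delta, 1) for _ in range(n)]
        hits += two_sample_p(a, b) < alpha
    return hits / trials

# delta = 0: any "significant" result is a type I error; rate should be near 0.05.
# delta = 0.5 with n = 64 per group: rate is the power, which should land near 0.80,
# implying a type II error rate near 0.20.
```

This also shows why power is a design-time decision: with too few subjects per group, the rejection rate under a real effect drops well below 80%, and real effects are routinely missed.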
When Your P-Value Is “Close, But Not Quite”
A P-value of 0.051 often sparks debate: is the result statistically significant or not? According to Dr. Briggs, the rigid cutoff of 0.05 shouldn’t overshadow the broader context.
Instead of dismissing findings just above the threshold, she recommends:
- Interpreting the P-value as evidence against the null hypothesis
- Presenting the effect size and confidence intervals
- Evaluating consistency with secondary outcomes

"If you find a P-value of 0.051, I would still say we have some evidence," she explains. The bigger picture—context, design, and additional outcomes—matters as much as the decimal points.
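The recommendation to report the effect size with a confidence interval can be sketched as a small helper (hypothetical code, assuming a simple two-group comparison of means with the normal approximation):

```python
import math
from statistics import NormalDist, mean, variance

def diff_with_ci(x, y, conf=0.95):
    """Mean difference between two groups and its normal-approximation
    confidence interval. Returns (estimate, (lower, upper))."""
    d = mean(x) - mean(y)
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    z = NormalDist().inv_cdf(0.5 + conf / 2)  # e.g. 1.96 for a 95% interval
    return d, (d - z * se, d + z * se)

# Illustrative data only: a 95% CI that barely crosses zero corresponds to a
# p-value just above 0.05. Reporting the estimate and interval conveys far
# more than "not significant".
estimate, (lo, hi) = diff_with_ci([1, 2, 3, 4, 5], [0, 1, 2, 3, 4])
```

An interval like (−0.96, 2.96) around an estimate of 1 tells readers both the plausible size of the effect and how much uncertainty remains, which is exactly the context a bare p = 0.051 hides.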
Tools and Software for Calculating Sample Size
For eye care professionals who want to run their own calculations, Dr. Briggs offers several accessible options:
- G*Power – Free and reliable for basic calculations
- University-hosted websites – Especially those focused on epidemiology
- R Statistical Software – Free, open-source with extensive packages (coding required)
- SPSS (Version 27 and up) – Now includes power and sample size modules
- SAS and Stata – Longstanding tools with comprehensive statistical features
- PASS (Power Analysis and Sample Size Software) – A premium but powerful option with advanced features for grant-funded researchers

Regardless of which tool you use, understanding your inputs—study design, expected effect size, variability, and desired power—remains essential.
Embracing Research in Everyday Practice
Dr. Lievens opened the episode by noting how research often feels “unknown” or even intimidating to many practicing clinicians. Yet, with resources like accessible statistical support, user-friendly software, and expert collaborators like Dr. Briggs, conducting high-quality research is more feasible than ever.
This episode highlights that every clinician, regardless of practice setting, can contribute valuable data to the field. As long as the research is well-designed, statistically sound, and ethically executed, the door to clinical inquiry is open.
A solid sample size calculation is the foundation of reliable clinical research. From understanding the principles of statistical power and error types to using software tools and interpreting borderline P-values, Dr. Chris Lievens and Dr. Nancy Briggs offer a masterclass in applied research for eye care professionals.
As this Four Eyed Professor episode demonstrates, research isn’t just for academics. Private practitioners and students alike can lead innovative studies when armed with the right knowledge and tools. And the best place to start? Ask the right question and make sure you have enough people to answer it.