
Understanding sample size calculation is essential for any eye care professional conducting clinical research. Whether you’re practicing in a private clinic, academic institution, or commercial setting, knowing how many subjects you need for your study can make or break the integrity of your results. In this Four Eyed Professor episode, Dr. Chris Lievens is joined by Dr. Nancy Briggs, Director of the Statistical Consulting Unit at the University of New South Wales, to demystify the math and methodology behind proper study design. Their insights simplify the process of designing statistically sound studies and make the world of statistical power and clinical research design more approachable to optometrists, ophthalmologists, and students alike.
Before diving into a study, one of the most critical steps is determining how many participants are needed. Dr. Briggs emphasizes that this calculation isn’t something to be guessed: “It is really integral to the very start of the research process.” Choosing arbitrary numbers like 10 or 40 participants because they’re “nice round numbers” doesn’t cut it in evidence-based research.
A proper sample size calculation depends on:
- The study design
- The expected effect size
- The variability of the outcome measure
- The desired significance level and statistical power
By understanding these components, researchers can avoid common pitfalls and design studies that yield meaningful, publishable results.
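To make those components concrete, here is a minimal sketch of the classic normal-approximation formula that turns them into a required sample size. The effect size, alpha, and power used below are illustrative assumptions, not figures from the episode:

```python
# Sketch: normal-approximation sample size per group for comparing two
# independent groups. All numeric inputs are illustrative assumptions.
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2 participants per group."""
    z_alpha = norm.ppf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_power = norm.ppf(power)          # about 0.84 for 80% power
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

# Assumed medium standardized effect (Cohen's d = 0.5):
print(round(n_per_group(0.5)))  # about 63 per group (round up in practice)
```

Notice that the answer is driven entirely by the inputs: a smaller assumed effect or higher desired power pushes the required sample size up.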
If you’re not sure where to begin, previous studies are your best starting point. “You can use the mean and standard deviation from published studies to help inform your calculations,” Dr. Briggs explains. But this approach comes with a caveat: make sure the populations and timelines are comparable.
For instance, if a study measured outcomes at 12 months and your study only runs for three, you should expect and adjust for potentially smaller effect sizes. This is where conservative estimation comes into play. It’s better to plan for more variability and smaller effects than to overestimate your power and underdeliver on findings.
Tip: When in doubt, aim for a slightly larger sample size than needed. Conservative estimates help ensure your study remains statistically valid even if real-world variability throws a wrench in your assumptions.
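As a hedged illustration of why conservative estimates matter, here is a sketch (using statsmodels, one of several libraries that can run power calculations) of how shrinking the assumed effect size for a shorter study inflates the required sample. Both effect sizes are illustrative assumptions:

```python
# Sketch: a conservative (smaller) assumed effect size demands more
# participants. Effect sizes below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
for d in (0.5, 0.35):  # optimistic vs. conservative standardized effect
    n = power_calc.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"effect size {d}: {n:.0f} participants per group")
```

Shrinking the assumed effect from 0.5 to 0.35 roughly doubles the required sample, which is exactly the cushion a conservative plan buys you.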
Let’s say you complete your study and discover your three-month outcome data looks vastly different from the prior 12-month study you used as a guide. What now?
Dr. Briggs advises against enrolling more participants on the fly. “That was not part of the research design,” she cautions. Particularly with randomized clinical trials (RCTs), protocols should be pre-registered, and changes must follow clearly defined procedures, like interim analysis plans.
Instead, publish your results. Even unexpected or inconclusive findings contribute to the scientific community and future meta-analyses. As Dr. Briggs puts it, “Whatever result you do get is important and worthy of reporting.”
Understanding statistical errors is foundational to determining sample size and interpreting results. In short:
- A Type I error (false positive) occurs when a study detects an effect that doesn’t really exist; the significance level (alpha, commonly 0.05) caps this risk.
- A Type II error (false negative) occurs when a study misses a real effect; statistical power (commonly 80%) is the probability of avoiding it.
Dr. Briggs points out that while most researchers focus on avoiding Type I errors, minimizing Type II errors is equally crucial. This is achieved through rigorous research design, using validated measures, and having enough subjects to detect real effects.
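One way to build intuition for Type II errors is a small simulation: generate many underpowered studies of a genuinely real effect and count how often the test misses it. The sample size and effect size below are illustrative assumptions:

```python
# Sketch: Monte Carlo estimate of the Type II error rate (missed real
# effects) for a two-sample t-test. All numbers are illustrative.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
true_effect = 0.5   # assumed real standardized difference between groups
n_per_group = 20    # deliberately small (underpowered) study
trials = 2000
misses = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_effect, 1.0, n_per_group)
    if ttest_ind(a, b).pvalue >= 0.05:
        misses += 1  # the effect is real, but the test failed to detect it

print(f"Type II error rate at n=20 per group: {misses / trials:.2f}")
```

With only 20 participants per group, well over half of these simulated studies miss a real medium-sized effect, which is why adequate sample size is inseparable from avoiding Type II errors.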
A P-value of 0.051 often sparks debate: is the result statistically significant or not? According to Dr. Briggs, the rigid cutoff of 0.05 shouldn’t overshadow the broader context.
Instead of dismissing findings just above the threshold, she recommends:
- Reporting the exact P-value rather than a binary significant/not-significant verdict
- Interpreting the result in the context of the study’s design
- Weighing it alongside any additional outcomes
“If you find a P-value of 0.051, I would still say we have some evidence,” she explains. The bigger picture—context, design, and additional outcomes—matters as much as the decimal points.
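As a sketch of what reporting that bigger picture can look like in practice, the snippet below reports an effect estimate with a 95% confidence interval alongside the P-value rather than a bare verdict. The data are simulated purely for illustration:

```python
# Sketch: report the effect estimate and 95% CI alongside the P-value.
# Data are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(0.0, 1.0, 30)
treated = rng.normal(0.4, 1.0, 30)

t_stat, p_value = stats.ttest_ind(treated, control)

# 95% CI for the mean difference (with equal group sizes, the pooled
# and Welch standard errors coincide); df = 30 + 30 - 2 = 58.
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 30 + control.var(ddof=1) / 30)
ci_half = stats.t.ppf(0.975, df=58) * se
print(f"p = {p_value:.3f}, difference = {diff:.2f} "
      f"(95% CI {diff - ci_half:.2f} to {diff + ci_half:.2f})")
```

Whatever the P-value lands on, the interval communicates the size and uncertainty of the effect, which is the context a 0.051 result deserves.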
For eye care professionals who want to run their own calculations, Dr. Briggs offers several accessible options:
Free Tools:
Paid Tools:
Regardless of which tool you use, understanding your inputs—study design, expected effect size, variability, and desired power—remains essential.
Dr. Lievens opened the episode by noting how research often feels “unknown” or even intimidating to many practicing clinicians. Yet, with resources like accessible statistical support, user-friendly software, and expert collaborators like Dr. Briggs, conducting high-quality research is more feasible than ever.
This episode highlights that every clinician, regardless of practice setting, can contribute valuable data to the field. As long as the research is well-designed, statistically sound, and ethically executed, the door to clinical inquiry is open.
A solid sample size calculation is the foundation of reliable clinical research. From understanding the principles of statistical power and error types to using software tools and interpreting borderline P-values, Dr. Chris Lievens and Dr. Nancy Briggs offer a masterclass in applied research for eye care professionals.
As this Four Eyed Professor episode demonstrates, research isn’t just for academics. Private practitioners and students alike can lead innovative studies when armed with the right knowledge and tools. And the best place to start? Ask the right question and make sure you have enough people to answer it.