By James Lyons-Weiler at Brownstone dot org.
In January 2019, the New England Journal of Medicine published a study that was immediately hailed as the final verdict on vitamin D: it doesn't work. The study, known as the VITAL trial, was large, well-funded, and led by respected researchers from Harvard. Its conclusion, that vitamin D supplementation did not reduce the risk of invasive cancer or major cardiovascular events, rapidly diffused across headlines, textbooks, and clinical guidelines.
But the VITAL study didn't fail because vitamin D failed. It failed because it was never designed to test the right question. This article walks through the anatomy of that failure, why it matters, and what we must fix if we are to take prevention seriously in modern medicine.
The Trial That Didn't
On the surface, VITAL looked impeccable: over 25,000 participants, randomized and placebo-controlled, testing 2000 IU of vitamin D3 daily for a median of 5.3 years. The primary endpoints were the incidence of any invasive cancer and a composite of major cardiovascular events (heart attack, stroke, or death from cardiovascular causes).
But there is a foundational problem: most participants weren't vitamin D deficient to begin with. Only 12.7% had levels below 20 ng/mL, the threshold generally associated with increased risk. The mean baseline level was 30.8 ng/mL, already at or near sufficiency. It's the equivalent of testing whether insulin helps people who don't have diabetes.
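The dilution problem can be made concrete with back-of-envelope arithmetic. In the sketch below, only the 12.7% deficiency share comes from the trial; the absolute risks and the 50% benefit confined to deficient participants are assumed purely for illustration.

```python
# Illustrative arithmetic: how a benefit confined to a deficient
# minority dilutes into a small effect measured over the whole cohort.
deficient_frac = 0.127   # share of VITAL participants below 20 ng/mL
base_risk = 0.10         # ASSUMED 5-year event risk if sufficient
deficient_risk = 0.15    # ASSUMED 5-year event risk if deficient
benefit = 0.5            # ASSUMED relative risk reduction, deficient only

placebo_risk = (deficient_frac * deficient_risk
                + (1 - deficient_frac) * base_risk)
treated_risk = (deficient_frac * deficient_risk * benefit
                + (1 - deficient_frac) * base_risk)

overall_rrr = 1 - treated_risk / placebo_risk
print(f"overall relative risk reduction: {overall_rrr:.1%}")  # roughly 9%
```

Under these assumptions, even a substantial 50% benefit in the deficient subgroup shrinks to a single-digit overall effect, which a trial powered on the full cohort can easily report as null.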
Further eroding the study's contrast, participants in the placebo arm were allowed to take up to 800 IU/day of vitamin D on their own. By year 5, more than 10% of the placebo group was exceeding that limit. The intervention, in effect, became a test of high-dose vitamin D versus medium-dose vitamin D, not against a true control.
Add to that the decision to use broad, bundled endpoints like "any invasive cancer" or "major cardiovascular events" without regard to mechanisms, latency, or stage-specific progression, and the trial becomes a precision instrument for finding nothing.
The Important Real Signal They Missed
The one glimmer of benefit appeared in cancer mortality. While incidence rates were similar between groups, the vitamin D arm showed a lower rate of cancer deaths. This effect emerged only after two years of follow-up and became statistically significant once early deaths were excluded. Even more telling, among participants whose cause of death could be adjudicated with medical records (rather than death certificate codes), the benefit was stronger.
This suggests a biologically plausible mechanism: vitamin D may not prevent cancer from starting, but it may slow its progression or reduce metastasis. That theory aligns with preclinical models showing vitamin D's role in cellular differentiation, immune modulation, and suppression of angiogenesis.
And yet, VITAL buried this signal. The paper acknowledged a significant violation of the proportional hazards assumption for cancer mortality, a red flag that the standard time-to-event model was inappropriate. Instead of adjusting with statistical models built for non-proportional hazards, the authors sliced the data post hoc to generate a story and dismissed the result as exploratory. Meanwhile, they mentioned in passing that fewer advanced or metastatic cancers occurred in the vitamin D group, but offered no data.
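To see why a late-emerging survival effect gets flattened by a single whole-period comparison, here is a minimal simulation. The hazards are assumed for illustration, not taken from the trial: the treated arm's death hazard drops only after year two, mimicking a progression-slowing rather than incidence-preventing effect, and a two-year landmark analysis recovers a much stronger signal than the full-period risk ratio.

```python
import random

random.seed(0)
N, FOLLOWUP, LANDMARK = 100_000, 5.0, 2.0  # per-arm size, years

def sim_death_time(h_early, h_late):
    """Death time under a piecewise-constant hazard:
    h_early before the landmark, h_late after it."""
    t = random.expovariate(h_early)
    if t <= LANDMARK:
        return t
    return LANDMARK + random.expovariate(h_late)

# ASSUMED hazards (deaths per person-year): the treated arm's benefit
# begins only after year 2.
placebo = [sim_death_time(0.04, 0.04) for _ in range(N)]
treated = [sim_death_time(0.04, 0.02) for _ in range(N)]

def death_frac(times, start=0.0):
    """Fraction dying by end of follow-up among those alive at `start`."""
    at_risk = [t for t in times if t > start]
    return sum(t <= FOLLOWUP for t in at_risk) / len(at_risk)

full_rr = death_frac(treated) / death_frac(placebo)
landmark_rr = death_frac(treated, LANDMARK) / death_frac(placebo, LANDMARK)
print(f"risk ratio, full follow-up:        {full_rr:.2f}")
print(f"risk ratio, after 2-year landmark: {landmark_rr:.2f}")
```

In this sketch the whole-period risk ratio sits near 0.7 while the post-landmark ratio falls near 0.5: averaging over the early years, when no effect yet exists, dilutes a real late benefit, which is exactly the pattern a proportional-hazards violation warns about.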
How Design Choices Shape Public Understanding
The public interpretation of VITAL has been simple and sweeping: vitamin D doesn't help. That perception has reshaped policy, funding, and clinical guidance. Compounded by errant policy built on acknowledged errors, it is dangerous and a risk to public health.
But what the trial actually tested was much narrower: Does high-dose vitamin D provide additional benefit in a mostly vitamin D-sufficient, highly compliant, aging American cohort already permitted to take moderate doses on their own? And does it do so within 5 years?
Given those conditions, the null result was foreordained.