Medicine is built on research — but what if much of that research can’t be trusted? We’re surrounded by headlines about “new medical breakthroughs,” but how do you know which ones are worth your attention?
Over the past few decades, a series of whistleblowers and trailblazers have exposed the shortcomings of modern research — and offered concrete ways to fix them. Here’s a brief history of metascience, and why it matters to all of us: scientists, laypeople, and journalists alike.
In my last post, Consensus-Followers, Reformers, and Contrarians, I noted that much of our published scientific research is unreliable. While this view hasn’t yet become mainstream, it is gaining traction within the scientific community. That recognition has given rise to the field of metascience, or meta-research — essentially, the scientific study of science itself.
I admitted I’m a latecomer to this school of skepticism, but I’ve enjoyed diving into its history. I believe it’s vital that we all become better consumers of scientific research. So at the end of this piece, I’ll share a brief history of the field.
That history — and the subject of metascience more broadly — is aimed mainly at clinicians and academics. Following the recommendations of the thinkers I’ll cite would help us better parse studies we see in journals or at professional meetings, and design and conduct stronger research in the first place.
But what about outside the clinic or the lab? The rest of us are constantly bombarded by “breaking medical news” from mainstream outlets, algorithm-driven social media posts, and “healthcare influencers” whose motives often extend beyond your well-being. How can you know which research deserves attention and which to ignore?
And what about journalists, tasked with writing quick blurbs on new studies? Asking the right questions is essential for deciding whether a study is newsworthy — and how to present it responsibly if it is. The way these stories are covered doesn’t just satisfy curiosity; it shapes patient decisions, physician practices, and even public policy.
I’ll circle back soon to how to assess science reporting in everyday life — and how journalists can do a better job of it. To whet your appetite: just yesterday my wife flagged an article on hormone replacement therapy and Alzheimer’s risk. The coverage is better than most, yet it still highlights how tricky it is to properly evaluate research and report it responsibly.
Hormone Replacement Therapy Timing Linked to Alzheimer’s Disease Risks
A Brief History of Metascience
1972 – Archie Cochrane publishes Effectiveness and Efficiency, a short book arguing that medicine should be guided by randomized trials and judged by both effectiveness and efficiency. It became a manifesto for evidence-based medicine.
1980s – Iain Chalmers, Murray Enkin, and Marc Keirse lead the charge for systematic reviews. Their Effective Care in Pregnancy and Childbirth (1989) pulled together every trial they could find in obstetrics. This approach led directly to the founding of the Cochrane Collaboration in 1993, named in honor of Archie Cochrane and dedicated to synthesizing medical evidence across fields.
1992 – The Evidence-Based Medicine Working Group, led by David Sackett and Gordon Guyatt, coins the term “evidence-based medicine” in JAMA. Their aim: to make clinical decisions explicitly grounded in critically appraised evidence.
1994 – Douglas Altman publishes The Scandal of Poor Medical Research in BMJ. His blunt message: “We need less research, better research, and research done for the right reasons.”
2005 – John Ioannidis publishes Why Most Published Research Findings Are False in PLoS Medicine. His probability model showed how low study power, weak prior odds, and bias make many “statistically significant” findings unreliable. The paper became one of the most cited in modern science.
2006 – Richard Smith, longtime editor of BMJ, publishes The Trouble with Medical Journals in the Journal of the Royal Society of Medicine, noting publication bias, conflicts of interest, poor peer review, and a tendency to favor flashy results over solid science.
2009 – Iain Chalmers and Paul Glasziou publish Avoidable Waste in the Production and Reporting of Research Evidence in The Lancet. They estimate that as much as 85% of medical research is wasted — through asking the wrong questions, using poor methods, failing to publish results, or reporting them inadequately.
2010s – The Replication Crisis hits home.
2012 – Amgen Cancer Reproducibility Study
C. Glenn Begley and Lee Ellis reported in Nature (Raise Standards for Preclinical Cancer Research) that Amgen scientists tried to replicate 53 “landmark” preclinical cancer studies — and succeeded in only 6. The failures highlighted how weak methods, bias, and poor reporting undermine even high-profile biomedical research.

2015 – Open Science Collaboration
Brian Nosek and colleagues reported in Science (Estimating the Reproducibility of Psychological Science) that of 100 psychology studies replicated, only 36% produced significant results again, and effect sizes were often smaller.

2015 – Vinay Prasad and Adam Cifu publish Ending Medical Reversal, showing how many accepted treatments collapse under stronger evidence.
Today – Metascience (or meta-research) has matured into a formal field, with centers like the Cochrane Collaboration, the Center for Open Science, and the Meta-Research Innovation Center at Stanford (METRICS). Special mention, too, to the founders of and contributors to the Substack Sensible Medicine, which I have found to be an invaluable resource. Their shared mission is to study how science is done, expose systemic flaws, and promote reforms for more reliable research.