For decades, randomized clinical trials have been the gold standard for demonstrating that a potential new medicine is safe and effective.
Nowadays, however, payers, policymakers and even patients are demanding more. They want to know how that drug will behave in the complex, unpredictable real world: in which patients it will be most effective, how it compares with other treatments, and which treatments are most cost-effective.
“Real-world evidence is saying, rather than just looking at the effects of the drug in a controlled (clinical trial) environment, let's understand what happens in actual clinical practice,” and use that to inform treatment and reimbursement decisions, says Glen Schumock, Pharm.D., Ph.D., professor and head of the Department of Pharmacy Systems, Outcomes and Policy at the University of Illinois at Chicago.
That’s why biopharmaceutical companies and academic institutions are increasingly employing teams of researchers and analysts, with access to terabytes of data, to better understand how prescription drugs are being used by physicians and patients in clinics, hospitals and patients’ homes.
Randomized clinical trials remain important
That doesn’t mean randomized clinical trials are going away. They remain the gold standard for determining whether a new drug is safe and efficacious, and they are required for the approval of new medicines by global regulatory bodies, including the U.S. Food and Drug Administration, the European Medicines Agency and many others.
Clinical trials are carefully designed, focusing on very specific types of patients. Those patients take their medicines according to a pre-defined schedule, attend frequent follow-up visits and have routine lab tests. This enables clinical trials to answer the question, “Is this medicine safe and effective for its intended use?”