If you're reading this as a biostatistician or clinical trial professional, you know that the pharmaceutical industry is shifting from classical to adaptive clinical trial design in an effort to reduce the NDA failure rate, lower the cost of research and development, and accelerate the precision medicine movement. As a member of the Talent & Culture team at Veristat (a CRO), I spend a great deal of time speaking with people like you, many of whom are potential candidates excited about the opportunity to work at a company where they may get exposure to an adaptive design trial. Noticing the consistency across these conversations, I began to wonder whether their interest in adaptive designs is driven by a current trend, or whether adaptive design truly offers increased efficiencies. As a result, I have spent some time researching when adaptive design might not be right for a clinical trial and have found two articles that discuss when it is appropriate to use one.
The first article, titled “Adaptive Design – Recent Advancement in Clinical Trials,” was written by Mark Chang and John Balser (in full disclosure, I should tell you that Dr. Chang and Dr. Balser are both members of Veristat’s statistical consulting group) and published last month in the Journal of Bioanalysis & Biostatistics. It focuses on how adaptive design can be a more cost-efficient clinical trial approach than traditional design because it optimizes and streamlines the drug development process. The article also provides a high-level overview of the different types of adaptive designs and how to select the appropriate one based on the type of clinical trial. Chang and Balser also review many of the controversies surrounding adaptive trials.
The second article, a discussion between Wade Wirta and Steven Schwager of Medidata Solutions, is titled “When is Adaptive Design Right for your Clinical Trial?” This piece covers topics similar to those of the first article, but leaves out the statistical formulations that clarify common points of confusion among statisticians who have not had much exposure to adaptive trials. Wirta notes that adaptive trials are not just a change in methodology but a change in the entire process, including the technology used to run the trial. Additional complexities arise from operational challenges, such as the need for a more agile supply chain.
Drawing on these two articles and my conversations with members of the Biostatistics & Statistical Programming Department at Veristat, I have listed three actions I recommend you take to determine whether you should avoid an adaptive design trial:
- Consider the practicality – Everything from "will the recruitment speed jeopardize the adaptive design?" to "can the interactive voice response system (IVRS) support the adaptive design?"
- Recognize the complexity – The statistical methodologies for complicated adaptive designs are still being developed, while methods are readily available for more commonplace adaptive designs like futility analysis.
- Examine the work environment – Adaptive designs only succeed in truly collaborative environments because they require rapid integration of abilities and knowledge from various disciplines into the decision-making process.
Now that you know the challenges posed by adaptive designs, you can go about assessing whether an adaptive design is right for your trial. Remember that it doesn’t have to be a complicated adaptation. It may be as simple as a futility analysis, a straightforward and cost-effective adaptation that has the ethical advantage of stopping patient exposure to ineffective drugs.
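To make the futility idea concrete: futility rules are often framed in terms of conditional power, the probability of a statistically significant final result given the interim data, typically computed under the assumption that the current trend continues. Neither article prescribes a specific formula, so the sketch below is only an illustration of the general technique; the 1.96 critical value and the 10% futility threshold are common but arbitrary choices.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z_interim: float, info_fraction: float,
                      z_crit: float = 1.96) -> float:
    """Conditional power at an interim look, assuming the current trend
    continues (Brownian-motion approximation of the test statistic).

    z_interim     -- interim test statistic
    info_fraction -- fraction of total information observed (0 < t < 1)
    z_crit        -- final-analysis critical value (1.96 ~ two-sided 5%)
    """
    t = info_fraction
    drift = z_interim / math.sqrt(t)  # drift estimated from interim data
    return normal_cdf((drift - z_crit) / math.sqrt(1.0 - t))

def stop_for_futility(z_interim: float, info_fraction: float,
                      threshold: float = 0.10) -> bool:
    """Flag futility when conditional power drops below the threshold."""
    return conditional_power(z_interim, info_fraction) < threshold
```

For example, with half the information in hand and essentially no observed effect (z = 0 at t = 0.5), conditional power is well under 1%, so this rule would flag futility; a promising interim statistic of z = 2.0 at the same look would not.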
Robin Brodrick is a Talent Acquisition Consultant at Veristat and an aspiring minimalist. Follow Robin on Twitter or LinkedIn for a unique mix of minimalism, job search, and recruiting advice. To learn more about Veristat and its open positions, click here!