Clinical Trials in Cardiology
Pinnacle or Inflection Point?
Controlled clinical trials provide the research that completes the causal argument between a treatment and a disease’s control. Yet this pinnacle of clinical research is itself afflicted. Chronic recruitment failure vitiates the potency of our research efforts. In addition, the collision of end-point multiplicity (the drive to measure multiple end points) with the requirement of statistical parsimony (ie, the need to reduce the number of interpretable end points to control the overall type I error) induces a core inefficiency in clinical trial productivity by reducing the number of end-point findings that are generalizable to the population at large. Unless clinical trialists engage these problems with vigor and imagination, our pinnacle may be nothing more than an inflection point leading to decline.
The 16th and 17th centuries resonated with excitement as Europe accepted the presence of a New World to the west. Yet ships suffered cataclysmic disasters at sea,1 and voyages took far longer than anticipated, because of the inability to determine longitude at sea. If this problem were not solved, New World exploration would expire. Yet, rather than turn its back on exploration, the community put supreme effort into solving the problem, and succeeded.
We may not be at a breaking point in clinical trials, but we are much closer than is comfortable to contemplate. The reputation of clinical trials, once considered sacrosanct by health care researchers, is now in some jeopardy.2 And, like the mariners and inventors of 400 years ago, it is up to us to solve the problems bedeviling clinical trial methodology or the research culture will simply turn elsewhere. What follows is a summary of 2 of our major problems and how we might approach them.
Unsuccessful recruitment is the leading cause of clinical trial failure, generating underpowered trials that consume substantial funds and hundreds or thousands of person-hours yet yield only inconclusive data with little return on investment. In fact, the ethics of conducting such underpowered clinical studies have been questioned.3 Recognizing the problem for many years, we clinical trialists have worked to overcome recruitment obstacles by developing referral networks and satellite treatment centers.4 We disseminate news about our trials, first by mass mailings, then via the internet and social media, and we promulgate useful recruitment-strategy templates among ourselves.5 The Thrombus Aspiration in ST-Elevation Myocardial Infarction in Scandinavia (TASTE) trial,2 which successfully recruited patients by drawing potential subjects from a well-developed underlying registry, demonstrates what is possible when a solid patient database is available.
Yet, in the United States, failures abound. Pfizer’s unsuccessful attempt to ease the burden on participants by permitting them to take part in a clinical trial from home suggests that the problem runs deeper than participant effort alone.6 Despite our efforts, clinical trials continue to lag in recruitment, and clinical trialists, uncertain what more to do beyond their best, are reduced to reporting recruitment problems in the hope that doing so will help devise strategies to overcome them.7 Similarly, poor retention cripples the execution of clinical studies. Eloquent calls for education of the public have been made,8 but about what should the public be educated?
This problem is all the more frustrating because its cause is not a paucity of subjects. There are millions of people in the United States who meet the inclusion and exclusion criteria of clinical trials but either do not know of the option or choose not to participate. They are within our sight but seemingly outside of our grasp. This is the price we pay for a simple yet critical failure of the scientific and public health communities to enlighten the public about the individual and direct benefits that are derived from their participation in clinical trials—a critical education lapse for which we are each responsible.
One of the most frequent reasons that people do not enroll in clinical trials is the sense that there is a good chance they will not receive the intervention being tested; they therefore see no value in investing their time and effort merely to receive an inert substance.9 Researchers have responded in the past by offering clinical trials that randomize more subjects to the active group than to the control group. However, perhaps a more helpful response would be to recognize the flaw in the potential subject’s assumption, a belief that flies in the face of one of the most consistent observations in clinical trials yet is unfortunately treated like a secret.
Subjects recruited to control groups have a superior performance and experience beyond expectation.10
There are several reasons for this salubrious experience. One is the placebo effect: patients who think that they will improve actually demonstrate an improvement.11 Another is conditioning, the finding that, through prior experience, individuals can demonstrate improvement even though they have received nothing further to drive it; this has even been demonstrated in animals.12 Layered onto these is the volunteer effect.13
However, there are substantial additional benefits. Subjects who have successfully navigated a trial’s inclusion and exclusion criteria are assured that they have no excluding disease or life-threatening illness, a self-selection process that increases the likelihood of a normal outcome. Finally, and predominantly, subjects who enter clinical trials receive the best medical care: frequent visits to specialists, state-of-the-art diagnostic and imaging tests, and vigilant follow-up. These 5 influences combine to confer on the patient recruited to the placebo arm of the trial a healthful experience that decreases serious adverse events and deleterious outcomes.
It is not too much of an exaggeration to conclude that one of the most salutary experiences available to a subject is membership in the control group of a clinical trial, which raises the question: if the public understood this, would they be so reluctant to participate? This is a powerful message that the public, looking for relief from historically complicated, befuddling, and expensive health care systems, can embrace if they are educated about the possibility.
Of course, there are other messages that must also be delivered. We as clinical trialists certainly know that a subject’s right to privacy is protected, yet the public may not know that. We also ensure an individual’s right to leave the study, although eligible populations may not know that either. Surrounded by an inchoate media, it would be a mistake for us to think that the public has the same understanding of subject protection as we do. Consequently, the United States public should be given the opportunity to openly consider best-quality treatment for their condition, with their privacy and right to choose protected. They are currently not educated about that choice. Reversing this deficiency would produce the welcome problems of asking why there are not more clinical trials and why we do not recruit at an active-to-control ratio of 1:2 rather than 2:1.
This thesis can be put to the test simply: conduct a small clinical trial in each of 2 small communities. In the first, the trial is begun using current state-of-the-art recruitment activities. In the second, the announcement of the trial is preceded by community meetings, radio discussions of the experiences of patients in clinical trials, and clarification of the responsibilities of the researchers and oversight boards. The researchers can then compare the recruitment results of the 2 studies.
Recruiting Underrepresented Minorities
Recruiting underrepresented minorities is a vexing subset of the general under-recruitment problem that afflicts clinical trials. However, its solution is already within our grasp. Our next step toward a solution is a step beyond what the National Institutes of Health currently mandates, namely, that the clinical trials it funds report the number of women and the number of subjects of underrepresented races and ethnicities who are randomized to the study. This activity has an essential role in measuring the under-recruitment problem, but it does not suggest how the problem may be solved. A useful and relatively easy next step is to report recruitment at each stage of selection, permitting a first assessment of where selection bias is located. This requires reporting, for example, the number of women in each of the screened, consented, enrolled, and randomized populations. Such data, when of sufficiently high quality, permit an evaluation of how the percentage of women changes across these stages, allowing a direct assessment of the presence of selection bias. Learning that many women are screened but relatively few are consented to the study is a clear signal of a bias (however innocent) operating during the consenting process. Alternatively, observing that many women are consented but relatively few successfully navigate the inclusion/exclusion criteria suggests that the solution lies at another level. This would be a helpful improvement because it would quantify the magnitude of the bias and allow us to measure its improvement. Unfortunately, some clinical trials do not provide even the most elementary data on the screened population.14
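The stage-wise reporting proposed above can be illustrated with a minimal sketch. The counts below are hypothetical, invented purely to show how tracking the percentage of women from screening through randomization localizes the stage at which a selection bias operates.

```python
# Sketch: locating selection bias by tracking the percentage of women
# at each stage of selection. All counts are hypothetical.
stages = [
    # (stage name, total subjects, number of women)
    ("screened",   2000, 1000),
    ("consented",  1200,  420),
    ("enrolled",    900,  310),
    ("randomized",  850,  292),
]

def percent_women(stages):
    """Return the percentage of women at each selection stage."""
    return {stage: round(100 * women / total, 1)
            for stage, total, women in stages}

pcts = percent_women(stages)
for stage, total, women in stages:
    print(f"{stage:>10}: {pcts[stage]}% women ({women}/{total})")
```

In this invented example, the drop from 50.0% of women at screening to 35.0% at consent, with little change thereafter, would point to the consenting process as the locus of the bias, exactly the kind of signal the paragraph above describes.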
To explore whether selection bias operates locally versus nationally, each clinical center should report this screening, consenting, and enrollment information for its own center. Because participation by women and by underrepresented racial and ethnic minorities can be affected by local cultures and relationships with clinical trialists, this is a natural first place to turn our attention.
A next step begins with the truism that every community has its own culture. The degree to which that culture differs from those commonly discussed in national conversations is a measure of the problem clinical trialists face when interacting with it. Adding to the difficulty is the perception, whether the research is to be executed in disenfranchised and poor neighborhoods or in affluent communities, that the lives of a community’s residents are valued less by strangers and outsiders. This sense is aggravated by sensationalist stories in the media that create the impression that rare neighborhood crimes are commonplace.
In addition, despite the important emphasis on ethics in modern clinical research, the ghastly experiences of the Tuskegee syphilis experiment15 and the Guatemalan syphilis work16 aggravate the wound. Language barriers require additional efforts to bridge the divide. The combination of these influences can produce a community whose residents are hostile to the idea of research being conducted on its members. Because a candidate’s trust must be earned before he or she agrees to participate in a clinical trial, the deeper the prior mistrust, the more work the researcher must invest to gain that trust. Efforts to undo these multiple layers of mistrust can feel like trying to melt a glacier with a candle.
The question is therefore not how to recruit more women and ethnic minorities into clinical trials, but how best to demonstrate the value of clinical research to the individuals themselves. Every community has the right to be seen as a unique locale and therefore requires its own relationship with the investigators. That relationship can begin with connections between the researchers and community leaders. However, it must be clear that its purpose is not only to get information about the study into the community but also to learn from community leaders how the investigators can best communicate with residents.
It would be a mistake to think that the community must simply be educated. It is the investigative team that must be educated in the concerns and language of the community so that it can use that language clearly to ensure that community leaders understand the role of the investigator and the role of the subject in research. Individuals must be allowed to understand, in language and concepts with which they are comfortable, that their privacy, lives, and livelihoods will not be sacrificed in the research effort. Merely supplying a 30-page informed consent document is a recipe for failure. Who among us would wish to be treated this way?
There is no question about the sacrifice of time and effort required by the investigators to achieve this. Outreach to each community, not just those of underrepresented populations, would benefit from having the same goal, although the effort depends on the community involved. Yet, given the investment of time on the part of the subjects to surmount the learning curve and overcome their own biases and misperceptions of the modern clinical trial, the investigators’ sacrifices should be seen as equitable.
Relationship-building now is likely to return handsome dividends in the future for clinical trials, allowing us to overcome one of the greatest obstacles facing us.
End Point Rules
We clinical trial investigators are caught in a 2-front war. On one side is the requirement that our clinical trial bear a rich bounty of valuable results. To that end, we strive to collect and analyze all relevant data that we have permission to obtain. This natural tendency to use the clinical trial’s data set to the fullest, to learn all that is learnable from the study, generates many analyses. These families of analyses satisfy our curiosity, and the evaluations have a solid basis in epidemiology. The Bradford Hill causality tenets17 motivate the need to identify dose–response relationships, fuel the drive to examine therapy effects on different but related end points, and motivate our exploration of possible mechanisms of action for the therapy. Investigators, who at heart are driven to learn, want to and enjoy supplying good answers to these good questions. We simply want to learn all that we can.
However, this push to examine and report all of the analyses is opposed by statistical reasoning. Recruiting and studying only a sample from a population, rather than evaluating the entire population, introduces sampling error, which, like gravity, has invisible yet powerful effects. The lessons of the vesnarinone studies,18,19 the Evaluation of Losartan in the Elderly Study (ELITE) I and II,20,21 and the Prospective Randomized Amlodipine Survival Evaluation (PRAISE) I and II22,23 show the hazards of believing that exploratory (nongeneralizable) results are really generalizable (confirmatory) ones. Simply put, exploratory analyses are untrustworthy and must be replicated in a confirmatory analysis before they can be safely integrated into our fund of knowledge.
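The arithmetic behind this hazard is simple and worth making explicit. If every analysis is tested at a significance level of 0.05 and none of the therapies truly works, the probability that at least one test is spuriously "significant" climbs rapidly with the number of independent tests, a standard textbook calculation sketched below.

```python
# Sketch: why a family of unadjusted exploratory analyses is untrustworthy.
# Under the null (no true effects), each independent test at level alpha
# has probability alpha of a false positive, so the family-wise chance of
# at least one false positive is 1 - (1 - alpha)^n.
def family_wise_error(alpha: float, n_tests: int) -> float:
    """Probability of >= 1 false positive among n independent null tests."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 20, 100):
    print(f"{n:>3} tests: P(at least one false positive) = "
          f"{family_wise_error(0.05, n):.2f}")
```

With 20 exploratory analyses, the chance of at least one false-positive "finding" is roughly 64%, which is why an exploratory signal must be replicated in a confirmatory study before it can be trusted.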
Cardiovascular clinical trialists have received this message. We all now appreciate that only those analyses that are prospectively declared and those that control the family-wise type I error rates are considered primary. Other prospectively declared analyses are secondary, and nonprospectively declared analyses are exploratory.
However, just because we must accept this status quo does not mean that we should be satisfied with it. The number of analyses that can be performed in a clinical trial runs into the hundreds, yet only a small number (and, with a single primary outcome, only one) of evaluations are considered confirmatory. This injects tremendous inefficiency into clinical trial interpretation because, by these rules, most results cannot be generalized. The fact is appreciated by the private sector, where hundreds of millions of dollars of investment ride on the finding for a single primary outcome in a phase III clinical trial. One of the best examples is the United States Carvedilol program,24–30 in which the null finding for exercise tolerance was confirmatory and accepted, whereas powerful findings for the drug’s mortality effect were relegated to exploratory status, requiring a second study to confirm them.
The clear demarcation of prospectively declared versus exploratory analyses should not be ignored. But let us also acknowledge that this is only a dividing line, not a finishing line.
There are 2 concerns. The first is the divide between prospective and exploratory analyses. The problems with exploratory analyses are (1) the inability to draw a sample optimally designed to answer every interesting question of a research enterprise and (2) a limitation of statistical theory requiring that we follow the sequence of first choosing the end point, then selecting the sample, and then using the sample to estimate the end-point effect. The first issue can be addressed by cardiologists: with good prospective thought, we can intelligently choose a sample that will provide representative data to address the cardiology questions. The second issue, however, lies in the biostatistical domain. What is required is a new class of estimators that convert exploratory analyses into confirmatory ones. Such estimators would profoundly increase the efficiency of clinical trials.
Setting exploratory evaluations aside, cardiology researchers understand that, even if all of the end points are prospectively declared, we are likely to make a mistake in attempting to generalize all estimates from the sample to the population, a problem that forces us to separate the primary end points (for which we tightly control the type I error) from secondary end points that are seen as merely supportive. This, too, is inefficient. What is required here is hypothesis testing that expends so little type I error that there is no penalty for performing multiple analyses. The original type I error concerns arose from a manure experiment in the 1920s,31,32 and epidemiologists have long argued that P value obsession is counterproductive,33–40 yet P values in their current configuration remain a benchmark of a clinical trial’s success, saddling us with the multiple comparisons problem. For reasons that are principally historical, we have had to live with this to be funded, to gain regulatory approval, and to be published.41
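The penalty described above can be made concrete with one of the standard family-wise corrections, Holm’s step-down procedure. The sketch below, with hypothetical P values for 4 prospectively declared end points, shows how controlling the family-wise type I error discards findings that would look "significant" at an unadjusted 0.05 level.

```python
# Sketch of the standard remedy: controlling the family-wise type I error
# across multiple end points with Holm's step-down procedure.
# P values below are hypothetical, for illustration only.
def holm_adjust(p_values, alpha=0.05):
    """Return indices of hypotheses rejected by Holm's procedure."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    rejected = []
    for rank, i in enumerate(order):
        # Compare the rank-th smallest p-value against alpha / (m - rank).
        if p_values[i] <= alpha / (len(p_values) - rank):
            rejected.append(i)
        else:
            break  # once one test fails, all larger p-values fail too
    return rejected

# Four end points: three fall below 0.05 unadjusted, but only the
# smallest survives the family-wise correction.
p = [0.004, 0.030, 0.041, 0.200]
print(holm_adjust(p))  # -> [0]
```

This is precisely the inefficiency the text laments: findings with nominally small P values are sacrificed to keep the overall type I error in check.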
So, we have gone as far as we can in cardiovascular clinical trials to solve this end-point issue and must now call on the biostatistical community. Given the advances in computing, is there no one who can provide a practical new statistical estimator and decision process that functions reliably without the accumulation of type I error? Perhaps we need a new longitude prize to help generate a solution.
Sources of Funding
Dr Moyé is currently funded by an NHLBI grant.
- © 2014 American Heart Association, Inc.
- Sobel D
- Frank G
- Silverman Ed
- 9. National Cancer Institute. Design dilemma. The debate over using placebos in cancer clinical trials. NCI Cancer Bull. 2011;May 3:8.
- Ader R, Cohen N
- Barrett JF, Hannah ME, Hutton EK, Willan AR, Allen AC, Armson BA, Gaffni A, Joseph KS, Mason D, Ohlsson A, Ross S, Sanchez JJ, Asztalos EV
- 16. Findings from a CDC Report on the 1946–1948 US Public Health Service Sexually Transmitted Disease (STD) Inoculation Study. U.S. Department of Health & Human Services; September 30, 2010.
- Cohn JN, Goldstein SO, Greenberg BH, Lorell BH, Bourge RC, Jaski BE, Gottlieb SO, McGrew F 3rd, DeMets DL, White BG
- Pitt B, Poole-Wilson PA, Segal R, Martinez FA, Dickstein K, Camm AJ, Konstam MA, Riegger G, Klinger GH, Neaton J, Sharma D, Thiyagarajan B
- Packer M
- 25. Food and Drug Administration Center for Drug Evaluation and Research. Transcript for the May 2, 1996 Cardiovascular and Renal Drugs Advisory Committee. Adelphi: University of Maryland University College; 1996.
- Fisher RA
- Fisher RA
- Walker AM