Principle 1 describes the component design characteristics of a RCT that combine to resolve important uncertainties about the effects of a health intervention.
RCTs should help to resolve important uncertainties about the effects of health interventions. Depending on the context, the results may be needed to determine whether to proceed with development or further evaluation of the intervention or to inform regulatory licensing, clinical guidelines, and/or health policy. In each case, any uncertainties applying to the specific question(s) that remain at the end of the RCT should be sufficiently small to allow meaningful decisions to be made.
This requires a combination of design features; good RCTs should include the following:
Key Message: The eligibility criteria should be tailored to the question the RCT sets out to answer. Inclusion criteria should not be unnecessarily restrictive. Efforts should be made to include a broad and varied population (e.g. with appropriate sex, age, ethnic and socioeconomic diversity) unless there is a good medical or scientific justification for doing otherwise.
Exclusion criteria should be focused on identifying individuals whose participation would place them at undue risk by comparison with any potential benefits (e.g. based on their medical history or concomitant medication), or for whom the benefits have already been reliably demonstrated.
Why this is important: Inclusive eligibility criteria increase the relevance of the findings. They sometimes allow assessment of whether there is good evidence of material differences in the effects (beneficial or adverse) and/or acceptability of an intervention or its delivery in any particular subgroup (e.g. based on specific genetic, demographic, or health characteristics). However, statistical power to detect such differences may be limited.
Key Message: Randomization requires generation of an unpredictable allocation schedule with concealment of which intervention will be allocated to a particular participant until after the point of randomization. It should be impossible to predict in advance which intervention an individual trial participant or individual cluster (e.g. hospital or city in a cluster RCT) is likely to be allocated to, so that investigators, health care providers and other staff involved, and potential participants are not aware of the intervention to which they will be assigned.
Why this is important: Randomization allows like-with-like comparisons, so that subsequent differences in health outcomes between the groups (beneficial or adverse) are due either to the play of chance or causally to differences between the study interventions. Measures such as minimization may be used to reduce the size of random differences between intervention groups, provided that they are implemented in such a way that potential participants and those enrolling them cannot predict which intervention will be allocated at the point of randomization. The absence of adequate allocation concealment prior to randomization can lead to selection bias (i.e. the decision to enter a particular participant in a trial can be influenced by knowledge of which intervention they are likely to be assigned to).
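As an illustration of schedule generation only (not a prescription of any particular system), the sketch below produces a permuted-block allocation list for a hypothetical two-arm trial; the arm labels, block size, and participant numbers are assumptions for the example. In practice the schedule would be generated and held by a central randomization service so that the next assignment cannot be predicted at the point of enrolment.

```python
# Illustrative sketch only: permuted-block allocation schedule for a
# hypothetical two-arm trial. In practice the schedule is generated and held
# by a central randomization system so that the next assignment cannot be
# predicted at the point of enrolment.
import secrets

def permuted_block_schedule(n_participants, block_size=4, arms=("Control", "Active")):
    """Generate an allocation list using randomly permuted blocks."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = secrets.SystemRandom()      # unpredictable source of randomness
    schedule = []
    while len(schedule) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)            # random order within each block
        schedule.extend(block)
    return schedule[:n_participants]

# The allocation for a given participant would be revealed only after they are
# irreversibly enrolled (e.g. via a central web or telephone service).
print(permuted_block_schedule(12))
```

Note that with small fixed blocks, later assignments within a block can become predictable if earlier allocations are revealed; varying the block size or relying on central allocation mitigates this.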
Key Message: A RCT should be sufficiently large and statistically powered to provide a robust answer to the question it sets out to address.
Why this is important: If the effects of health interventions are to be reliably detected or reliably refuted then, in addition to randomization (to minimize biases), random errors must be small by comparison with the anticipated size of the effect of the intervention. The best way to minimize the impact of random errors is to study sufficiently large numbers (noting that RCTs assessing impact on discrete health outcomes such as mortality will require more participants than those assessing impact on continuous measures such as laboratory results, as is often the case in early phase trials).
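As a rough illustration of why trials assessing discrete outcomes need larger numbers, the sketch below uses the standard normal-approximation formula for comparing two proportions; the event rates, effect sizes, alpha, and power shown are hypothetical.

```python
# Rough sample size illustration (normal approximation; hypothetical rates).
from scipy.stats import norm

def n_per_group(p_control, p_active, alpha=0.05, power=0.90):
    """Approximate per-group size to compare two proportions."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_control + p_active) / 2
    delta = abs(p_control - p_active)
    return (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / delta ** 2

# Detecting a reduction in mortality from 10% to 8% needs roughly 4,300
# participants per group ...
print(round(n_per_group(0.10, 0.08)))
# ... whereas detecting a 0.5 standard-deviation shift in a continuous
# laboratory measure needs fewer than 100 per group.
print(round(2 * (norm.ppf(0.975) + norm.ppf(0.90)) ** 2 / 0.5 ** 2))
```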
There are some scenarios in which it is inappropriate or challenging to randomize sufficiently large numbers of participants, such as trials assessing interventions in rare diseases. For such trials, it may be helpful to contribute to a broader collaboration to conduct the RCT, or to select a clinically relevant outcome for which the effect size is expected to be larger (e.g. a physiological or imaging biomarker). It may also be possible to reduce the impact of random errors through the statistical analyses that are done (e.g. analyses of a continuous outcome adjusted for its baseline value would typically increase statistical power compared with analyses of either mean follow-up levels or mean changes from baseline) or by making assessments at a time when the effects of the intervention are anticipated to be greatest.
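The following small simulation, with purely illustrative parameters, shows why an analysis of a continuous outcome adjusted for its baseline value typically yields a more precise (better powered) estimate of the treatment effect than analysing follow-up values or change scores alone.

```python
# Simulation sketch with illustrative parameters: precision of unadjusted,
# change-score, and baseline-adjusted analyses of a continuous outcome.
import numpy as np

rng = np.random.default_rng(1)
n_per_arm = 200
treat = np.repeat([0.0, 1.0], n_per_arm)
baseline = rng.normal(50, 10, 2 * n_per_arm)          # baseline outcome value
true_effect = -2.0                                     # assumed treatment effect
follow_up = 15 + 0.7 * baseline + true_effect * treat + rng.normal(0, 8, 2 * n_per_arm)

def treatment_se(y, covariates):
    """Ordinary least squares standard error of the treatment coefficient."""
    X = np.column_stack([np.ones_like(y), *covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return np.sqrt((sigma2 * np.linalg.inv(X.T @ X))[1, 1])   # treatment is column 1

print("follow-up values only :", treatment_se(follow_up, [treat]))
print("change from baseline  :", treatment_se(follow_up - baseline, [treat]))
print("adjusted for baseline :", treatment_se(follow_up, [treat, baseline]))
# The baseline-adjusted analysis gives the smallest standard error, i.e. the
# greatest statistical power, for the same number of participants.
```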
Key Message: Knowledge of the allocated trial intervention may influence the behaviour of participants, those who care for them, or those assessing study outcomes (particularly if these are subjective in nature). This can be avoided through use of placebo medications or dummy interventions or by ensuring that those individuals or systems responsible for assessing participant outcomes are unaware of the treatment allocation.
Why this is important: In some RCTs, knowledge of the allocated intervention can influence the nature and intensity of clinical management, reporting of symptoms, or the assessment of functional status or clinical outcomes. This is particularly important for trials in which blinding of the allocated intervention is not feasible or desirable. Masking (or blinding) participants, investigators, health care providers, or those assessing outcomes to the assigned intervention can help prevent such issues, as can the use of information that is recorded separately from the clinical trial (e.g. routine clinical databases and disease registries). These considerations are important for the assessment of both the efficacy and the safety of the intervention, including processes relating to adjudication of outcomes and considerations of whether an individual health event is believed to have been caused by the intervention.
Key Message: Efforts should be made to facilitate and encourage adherence to the allocated intervention(s).
Why this is important: If trial participants allocated to the active intervention do not receive it as planned, or if those allocated to the control group (e.g. placebo or usual care) start to receive the active intervention, then the contrast between the two study groups is reduced. Consequently, the ability to detect any differences (beneficial or harmful) between the interventions is diminished, and it becomes more likely that the trial will falsely conclude that there is no meaningful difference between the interventions when in fact there is one.
Key Message: Participant outcomes should be ascertained for the full duration of the RCT, regardless of whether a trial participant continues to receive the allocated intervention or ceases to do so (e.g. because of perceived or real adverse effects of the intervention). In some cases, it may also be appropriate to continue follow-up for many years beyond reporting the main analyses.
Why this is important: Continued follow-up of all randomized participants (even if some stop taking their assigned intervention) maintains the like-with-like comparison produced by the randomization process. Premature cessation of follow-up or post-randomization exclusion of participants should therefore be avoided since it may introduce systematic bias, particularly as the type of people excluded from one intervention group may differ from those excluded from another. Incomplete follow-up may reduce the statistical power of a RCT (i.e. the ability to distinguish any differences between the interventions) and underestimate the true effects (benefits or hazards) of the intervention. Extended follow-up can allow for detection of beneficial or harmful effects of the study intervention that may persist or emerge months or years after the initial randomized comparison.
Key Message: The outcomes that are assessed in a RCT need to be relevant to the question being addressed. These may include physiological measures, symptom scores, participant-reported outcomes, functional status, clinical events, or healthcare utilization. The way in which these are assessed should be sufficiently robust and interpretable (e.g. used in previous trials or validated in a relevant context).
Why this is important: The ways by which the consequences of the randomized intervention are measured should be sensitive to the anticipated effects of the intervention and appropriate to the study question, and in general should be applicable and meaningful for the relevant population. The choice of outcomes may vary depending on the extent of prior knowledge of the effects of the intervention (e.g. early trials may assess the effects on imaging and laboratory markers and later trials the effects on clinical outcomes). It is rarely possible or desirable to assess the full range of potential outcomes in a single RCT. Instead, there should be a focus on providing a robust answer to the specific, well-formulated question.
Key Message: Data collection should focus on those aspects needed to assess and interpret the trial results as specified in the protocol and should not be excessive. The extent to which information (e.g. on participant characteristics, concomitant treatments, clinical events, and laboratory markers) is detected and recorded, and the means and level of detail with which this is done, should be tailored to each RCT. This should take into account what is needed to answer the trial question and the level of existing knowledge about the background health condition and the intervention being studied. The choice of data collection approach may also be influenced by considerations such as suitability, availability, and usability, as well as the extent to which such information is sufficiently accurate, comprehensive, detailed, and timely.
Tools and methods for data collection, storage, exchange, and access should enable the RCT to be conducted as designed, support privacy and security, and enable reliable and consistent analyses. Digital technology and routine healthcare data can provide alternative or complementary means to record information about participants and their health at study entry, during the initial intervention and follow-up period, and for many years beyond, where appropriate.
Why this is important: The volume, nature, and level of detail of data collection should be balanced against its potential value. Disproportionate data collection wastes time and resource. It places unnecessary burden on trial participants and staff, distracts attention from those aspects of the trial that have greatest consequence for the participants, and reduces the scale (number of participants, duration of follow-up) of what is achievable with available resources. In some trials, it may be appropriate to measure some features (e.g. intermediary biomarkers) in a subset of participants, chosen on the basis of baseline characteristics or random selection, or at a limited number of timepoints. The choice of method used for data collection can have an important bearing on trial reliability and feasibility. Use of data standards can help ensure data quality and data integrity. Use of digital technology and routine healthcare data can improve the relevance and completeness of information collected (e.g. reducing loss to follow-up).
Key Message: Processes for ascertaining study outcomes should be the same in all randomized groups. This includes the frequency and intensity of assessments. Particular care should be taken to ensure that the people assessing, clarifying, and adjudicating study outcomes are not influenced by knowledge of the allocated intervention (i.e. blinded or masked outcome assessment). Equally, the methods for acquiring, processing, and combining sources of information (e.g. to define participant characteristics or clinical outcomes) should be designed and operated without access to the intervention allocation for individual participants or knowledge of the unblinded trial results.
Why this is important: If the methods used to assess, clarify, or classify outcomes differ between the assigned interventions, the results may be biased in one direction or the other, leading to inappropriate conclusions about the true effect of the intervention. Therefore, the approach used to assess what happens to participants should be the same regardless of the assigned intervention, and those making judgements about the occurrence or nature of these outcomes should be unaware of the assigned intervention (or of features, such as symptoms or laboratory assays, that would make it easier to guess the assignment) for each participant.
Key Message: Trial results should be analyzed in accordance with the protocol and statistical analysis plan, which should be developed prior to knowledge of the study results. Any post-hoc analyses should be clearly identified as such. The main analyses should follow the intention-to-treat principle, meaning that outcomes should be compared according to the intervention arm to which the participants were originally allocated at randomization, regardless of whether some of those participants subsequently received some or none of the intended intervention, and regardless of the extent to which the post-randomization follow-up procedures were completed.
Subgroup analyses should be interpreted cautiously, especially if they are not pre-specified or are multiple in number (whether pre-specified or not). In general, any prognostic features that are to be used in analyses of intervention effects in RCTs should be irreversibly recorded (or sample collected) before randomization.
Why this is important: The strength of a RCT is that there is a randomized control group with which to compare the incidence of all health events. Consequently, it is possible to distinguish those events that are causally impacted by allocation to the intervention versus those that are part of the background health of the participants. Analyzing all participants according to the intervention to which they were originally allocated (‘intention-to-treat’ analysis) is important because even in a properly randomized trial, bias can be inadvertently introduced by the post-randomization removal of certain individuals from analyses (such as those who are found later not to meet the eligibility criteria, who are non-adherent with their allocated study treatment, or who commence active intervention having been allocated to a control group).
Additional analyses can also be reported, for example, when describing the frequency of a specific side effect. It may be justifiable to record its incidence only among those who received the active intervention, because randomized comparisons may not be needed to assess large effects. However, in assessing moderate effects of the treatment, ‘on-treatment’ or ‘per protocol’ analyses can be misleading, and ‘intention-to-treat’ analyses are generally more trustworthy for assessing whether there is any real difference between the allocated trial interventions in their effects.
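The simulation sketch below, using hypothetical numbers, illustrates how a ‘per protocol’ comparison can be misleading when the participants who stop the active intervention differ systematically from those who continue, whereas the intention-to-treat comparison does not.

```python
# Simulation sketch (hypothetical numbers): intention-to-treat vs 'per protocol'
# analysis when frailer participants are more likely to stop the active
# intervention. The true effect of the intervention here is zero.
import numpy as np

rng = np.random.default_rng(7)
n_per_arm = 100_000
treat = np.repeat([0, 1], n_per_arm)
frail = rng.random(2 * n_per_arm) < 0.30                         # 30% of participants are frail
event = rng.random(2 * n_per_arm) < np.where(frail, 0.20, 0.05)  # frailty drives events
# Half of the frail participants allocated to the active arm stop treatment:
adherent = ~((treat == 1) & frail & (rng.random(2 * n_per_arm) < 0.5))

itt = event[treat == 1].mean() - event[treat == 0].mean()
per_protocol = event[(treat == 1) & adherent].mean() - event[treat == 0].mean()
print(f"Intention-to-treat difference : {itt:+.3f}")            # close to zero
print(f"'Per protocol' difference     : {per_protocol:+.3f}")   # spuriously favourable
```

In this sketch the active intervention does nothing, yet the ‘per protocol’ comparison suggests a reduction of roughly two percentage points in the event rate, purely because the excluded non-adherent participants were frailer than average.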
One of the most important sources of bias in the analysis is undue concentration on just part of the evidence (e.g. selective emphasis of the result in one subgroup of many, or in a subgroup that is defined after consideration of the data). Apparent differences between the therapeutic effects in different subgroups of study participants can often be produced solely by the play of chance. Subgroups therefore need to be relevant, pre-specified, and limited in number. Analysis of results in subgroups determined by characteristics observed post-randomization should be avoided because, if the recorded value of some feature is (or could be) affected by the trial intervention, then comparisons within subgroups that are defined by that factor might be biased. It is important to interpret results in specific subgroups (e.g. men vs. women) cautiously and to consider whether they are consistent with the overall result. Failure to do so can lead to people in those subgroups being treated inappropriately (given an intervention that is ineffective or harmful) or untreated inappropriately (not given an intervention that would benefit them) when there is no good evidence that the effect varies between them.
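The following simulation, with illustrative subgroup counts and trial sizes, shows how readily multiple subgroup analyses of a trial with no true effect anywhere produce at least one nominally ‘significant’ subgroup finding by chance alone.

```python
# Simulation sketch (illustrative sizes): chance of at least one nominally
# 'significant' subgroup result when the true effect is identical (zero) in
# every subgroup.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_trials, n_subgroups, per_arm = 2_000, 20, 100
false_positive_trials = 0
for _ in range(n_trials):
    for _ in range(n_subgroups):
        control = rng.normal(0, 1, per_arm)
        active = rng.normal(0, 1, per_arm)                 # no real effect anywhere
        diff = active.mean() - control.mean()
        se = np.sqrt(active.var(ddof=1) / per_arm + control.var(ddof=1) / per_arm)
        if 2 * (1 - norm.cdf(abs(diff) / se)) < 0.05:
            false_positive_trials += 1
            break
# With 20 subgroups examined at the 5% level, roughly 1 - 0.95**20 (about 64%)
# of null trials show at least one spurious 'significant' subgroup.
print(f"Trials with >=1 spurious subgroup finding: {false_positive_trials / n_trials:.0%}")
```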
Key Message: Data generated during the course of conducting a RCT may reveal new information about the effects of the intervention which is sufficiently clear to alter the way the trial is conducted and participants are cared for, or is sufficiently compelling to change the use of the intervention both within and outside the trial. Potential harms of the intervention should be considered alongside potential benefits and in the wider clinical and health context.
Why this is important: Not every health event that happens in a trial is caused by one of the interventions; individuals involved in a trial may suffer health events that have nothing to do with the trial or the interventions being studied (the less healthy the participants in the RCT, the more likely it is that any given health event is related to factors other than the intervention).
Comparing whether signals (e.g. rates of clinical events or laboratory abnormalities) seen among those allocated to receive a health intervention are significantly more or less frequent than in the control group provides a reliable assessment of the impact of the intervention. It provides a fair assessment of which events are causally impacted by allocation to the intervention and which are part of the background health of the participants. In an ongoing RCT, such unblinded comparisons should be conducted by a group (such as a Data Monitoring Committee) that is independent of (or firewalled from) the trial team to avoid prematurely unblinding the emerging results to those involved in running the trial.
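By way of illustration (with hypothetical counts), the sketch below shows the kind of between-arm comparison described here: estimating the difference in the frequency of an adverse event between randomized groups with a confidence interval, rather than relying on individual causality judgements.

```python
# Illustrative between-arm comparison of an adverse event (counts hypothetical).
from scipy.stats import norm

events_active, n_active = 48, 2_000
events_control, n_control = 30, 2_000

p1, p0 = events_active / n_active, events_control / n_control
diff = p1 - p0
se = (p1 * (1 - p1) / n_active + p0 * (1 - p0) / n_control) ** 0.5
z = norm.ppf(0.975)
print(f"Risk difference: {diff:.2%} "
      f"(95% CI {diff - z * se:.2%} to {diff + z * se:.2%})")
```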
By contrast, reports of individual events that are believed (e.g. by the participant or a doctor) to be caused by the intervention are much less informative, due to the lack of a comparison with the incidence of the event in the control group and the inherently imprecise judgement of causality. The exceptions are events that are rare in the types of people involved in the trial but known to be potentially strongly associated with particular interventions (e.g. anaphylaxis or bone marrow failure in association with certain drugs).
Harmful and beneficial effects of health interventions may differ in impact or frequency, may follow different time courses, and may occur in particular groups of individuals. Some interventions (e.g. surgery, chemotherapy) may offer little benefit, or even cause harm, in the short term but provide longer-term benefit. It should also be recognized that for many interventions the benefits may not be apparent on an individual basis, such as where a detrimental outcome has been prevented (e.g. a stroke or infection).
Key Message: An independent Data Monitoring Committee (DMC) provides a robust means to evaluate safety and efficacy data from an ongoing RCT, including unblinded comparisons of the frequency of particular events, without prematurely unblinding any others involved in the design, conduct, or governance of the trial. For many RCTs, particularly earlier phase trials, the functions of a DMC could be provided internally, but those involved should nonetheless be adequately firewalled from the trial team to ensure that awareness of results does not introduce bias (or the perception of bias). Some trials may not require a DMC (e.g. if the trial is short-term and would not be modified regardless of interim data).
Why this is important: All those involved in the design, conduct and oversight of an ongoing RCT should remain unaware of the interim results until after the study conclusion so as not to introduce bias into the results (e.g. by stopping the trial early when the results happen by chance to look favourable or adverse). The requirement for, and timing and nature of, any interim analyses should be carefully considered so as not to risk premature decision-making based on limited data.
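A small simulation, with illustrative parameters, shows why repeated unblinded interim looks at accumulating data need pre-specified safeguards: even when there is no true treatment effect, the chance that some interim analysis looks nominally ‘significant’ grows well beyond 5% as the number of looks increases.

```python
# Simulation sketch (illustrative parameters): with no true treatment effect,
# the chance that *some* unadjusted interim look reaches p < 0.05 grows with
# the number of looks, which is why interim monitoring needs pre-specified
# rules and is usually confined to an independent DMC.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
n_trials, looks = 5_000, (250, 500, 750, 1_000)
threshold = norm.ppf(0.975)
crossed_at_some_look = 0
for _ in range(n_trials):
    diffs = rng.normal(0, 1, looks[-1])   # per-pair treatment-control differences, true effect zero
    for k in looks:
        z = diffs[:k].mean() / (diffs[:k].std(ddof=1) / np.sqrt(k))
        if abs(z) > threshold:
            crossed_at_some_look += 1
            break
# Nominal false-positive rate is 5% for a single analysis, but is roughly
# 12-13% when the same accumulating data are examined at four interim looks.
print(f"Null trials 'significant' at some look: {crossed_at_some_look / n_trials:.0%}")
```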
A DMC should include members with relevant skills to understand and interpret the emerging safety and efficacy data. A DMC should review analyses of the emerging data, unblinded to the randomized intervention group. The DMC should advise the RCT organisers when there is clear evidence to suggest a change in the protocol or procedures, including cessation of one or more aspects of the trial. Such changes may be due to evidence of benefit, harm, or futility (where continuing the trial is unlikely to provide any meaningful new information). In making such recommendations, a DMC should take account of both the unblinded analyses of the RCT and information available from other sources (including publications from other trials).