Research Note: How should competing clinical interventions be compared in dentistry? – A simulation-based investigation

Background: Inferences based on naïve-indirect comparisons, particularly concerning the effectiveness of different types of tooth restorations, remain relatively common in reviews of the dental literature. The aim of this study was therefore to illustrate, using a simple trial simulation, the potential impact of naïve-indirect comparison on effect estimates.

Methods: Clinical trials were simulated by assuming comparisons of two interventions with dichotomous outcomes. The treatment effect of both interventions was set to be equally successful (Risk ratio 1.00; p = 1.00). The same simulated data were compared using either naïve-indirect comparison (no randomisation) or direct comparison (randomisation). A percentage of subjects in each study was assumed to carry a confounding ‘trait X’ that caused failure with either intervention; this percentage was determined at random for each trial. For each comparison type, the data from both interventions per study were entered into a random-effects meta-analysis, and a pooled Risk ratio (RR) with 95% Confidence interval (CI) was computed. Agreement between the results of the two comparison types was calculated (kappa).
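The sketch below is a minimal illustration of this simulation design, not the authors' actual code. It assumes specific values for the number of trials, arm size, baseline failure risk, and the range of ‘trait X’ prevalence, and it assumes that the naïve-indirect comparison pairs arms drawn from different, non-randomised study populations, whereas the direct comparison keeps both arms within the same randomised trial. Pooling uses a DerSimonian–Laird random-effects model on log risk ratios. With these assumed parameters the sketch will not reproduce the published figures; it only shows how breaking within-trial balance of the confounder can distort the pooled estimate.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

N_TRIALS = 20        # number of simulated trials (assumed)
N_PER_ARM = 200      # subjects per arm (assumed)
BASE_FAIL = 0.10     # baseline failure risk, identical for both interventions (assumed)

def simulate_arm(prevalence_x):
    """Count failures in one arm: 'trait X' carriers always fail, others fail at BASE_FAIL."""
    has_x = rng.random(N_PER_ARM) < prevalence_x
    fails = np.where(has_x, True, rng.random(N_PER_ARM) < BASE_FAIL)
    return int(fails.sum())

def log_rr(fail_a, fail_b, n=N_PER_ARM):
    """Log risk ratio (arm A vs arm B) and its variance, with a 0.5 continuity correction."""
    a, b, na, nb = fail_a + 0.5, fail_b + 0.5, n + 1.0, n + 1.0
    return np.log((a / na) / (b / nb)), 1/a - 1/na + 1/b - 1/nb

def pool_dl(effects, variances):
    """DerSimonian–Laird random-effects pooling of log risk ratios."""
    y, v = np.asarray(effects), np.asarray(variances)
    w = 1.0 / v
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    w_star = 1.0 / (v + tau2)
    pooled, se = np.sum(w_star * y) / w_star.sum(), np.sqrt(1.0 / w_star.sum())
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

direct, indirect = [], []
for _ in range(N_TRIALS):
    # Direct (randomised): both arms share the same trait-X prevalence within a trial.
    p_shared = rng.uniform(0.0, 0.4)
    direct.append(log_rr(simulate_arm(p_shared), simulate_arm(p_shared)))

    # Naïve-indirect: each arm comes from a different, non-randomised study,
    # so the trait-X prevalence is drawn independently for each arm.
    indirect.append(log_rr(simulate_arm(rng.uniform(0.0, 0.4)),
                           simulate_arm(rng.uniform(0.0, 0.4))))

for label, data in [("Direct", direct), ("Naïve-indirect", indirect)]:
    rr, lo, hi = pool_dl(*zip(*data))
    print(f"{label}: pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Under randomisation, the confounder cancels within each trial and the pooled RR stays close to 1.00; when arms are compared across studies, differences in trait-X prevalence enter the effect estimate directly.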

Results: The pooled Risk ratios for naïve-indirect and direct comparison were RR 1.64 (95% CI: 1.22 – 2.19; p = 0.001) and RR 1.00 (95% CI: 0.96 – 1.04; p = 0.99), respectively. Agreement between the results of the two comparison types was poor (kappa = 0.06).

Conclusion: Naïve-indirect comparison, in contrast to direct comparison of the same data, substantially inflated the effect estimate. Naïve-indirect comparison is therefore unacceptable and should be avoided.