Comparison of outcome reporting in clinical study reports, journal publications and registry reports in the field of telecardiology

ID: 

179

Session: 

Poster session 1

Date: 

Sunday 16 September 2018 - 12:30 to 14:00

All authors in correct order:

Angelescu K1, Schell L1, Glinz D2, Schulz A1, Knelangen M1
1 Institute for Quality and Efficiency in Health Care (IQWiG), Germany
2 Basel Institute for Clinical Epidemiology and Biostatistics, Switzerland
Presenting author and contact person

Presenting author:

Konstanze Angelescu

Contact person:

Abstract text
Background:
A comprehensive information base is crucial for systematic reviews. The Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen (IQWiG; Institute for Quality and Efficiency in Health Care) uses three primary information sources: clinical study reports (CSRs) provided by manufacturers, journal publications, and trial registry information. In contrast to pharmaceutical trials, generating CSRs is not compulsory for trials of medical device interventions (MDI). A recent IQWiG report on telemonitoring was based on a substantial number of CSRs and thus allows the significance of CSRs in the outcome reporting of MDI to be evaluated.

Objectives:
To determine the completeness and consistency of patient-relevant outcomes reported in journal publications and trial registries compared with CSRs.

Methods:
Based on the IQWiG report, we identified studies for which results were available from both a public source (journal article, trial registry, or both) and a non-public source (CSR). By comparing the two sources and assuming the completeness of the CSRs, we assessed the number of studies with incomplete results in public sources and counted the missing outcomes. For results reported in both public and non-public sources, we examined whether they deviated from the CSR.

Results:
For 7/17 studies included in the IQWiG report, usable data were available from both types of source. Published information consisted of journal articles (three studies), registry information (one study), or both (three studies). In 6/7 cases, only incomplete results had been published; 14/45 outcomes were missing. For the seven studies reporting mortality, the data posted in trial registries were incomplete for four studies. In five studies, minor deviations in outcome data occurred; in all cases, these were statistically non-significant differences that remained non-significant.

Conclusions:
Although the published results are largely in line with the available CSRs, there are some gaps, suggesting potential outcome reporting bias. For a comprehensive assessment, CSRs are therefore necessary and should be mandatory for MDI as well. For this analysis, we presumed completeness of reporting in the CSRs, although data on planned outcomes (serious adverse events and quality of life) were missing from the CSRs available. The quality of CSRs for MDI therefore also has to be evaluated, and regulation on MDI trial reporting should be implemented.

Patient/healthcare consumer involvement:
Patients were not involved.

Relevance to patients and consumers: 

This analysis deals with one aspect of publication bias. Avoiding bias in systematic reviews helps to generate a valid information base that enables patients, doctors, and funding agencies to make appropriate decisions on whether or not patients benefit from interventions.