Estimating diagnostic accuracy when primary studies selectively report cut-offs: individual participant data meta-analysis versus three methods for analyzing published data

Session: 

Oral session: Diagnostic test accuracy review and prognostic methods (2)

Date: 

Tuesday 18 September 2018 - 14:00 to 14:10

Location: 

All authors in correct order:

Levis B1, Rücker G2, Jones HE3, Thombs BD1, Benedetti A1, DEPRESSD Research Group4
1 McGill University, Canada
2 Medical Faculty and Medical Center - University of Freiburg, Germany
3 University of Bristol, United Kingdom
4 N/A
Presenting author and contact person:

Brooke Levis
Abstract text
Background: In studies of diagnostic test accuracy of ordinal tests, results are sometimes reported only for cut-off thresholds that generate desired results in a given study (e.g. high combined sensitivity and specificity). When results are combined in meta-analyses, selective cut-off reporting may bias accuracy estimates. One way to overcome this bias is individual participant data meta-analysis (IPDMA). Another is to use published results but to model the missing cut-off data using statistical techniques.
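
To make the selection mechanism concrete, the following minimal Python sketch simulates it. Everything in it is an assumption for illustration (the cut-off grid, true accuracy values, study sizes, and a publish-the-best-Youden-index rule are hypothetical, not data from this study), and each cut-off's result is drawn independently for simplicity, ignoring the within-study correlation of real ordinal scores.

    # Illustrative simulation of selective cut-off reporting (hypothetical
    # numbers throughout). Each simulated primary study evaluates every
    # cut-off but "publishes" only the one maximizing Youden's index in its
    # own sample; naive pooling of the published results is then biased.
    import numpy as np

    rng = np.random.default_rng(42)
    CUTOFFS = range(8, 13)  # hypothetical cut-off grid
    TRUE_SENS = {8: 0.95, 9: 0.92, 10: 0.88, 11: 0.82, 12: 0.75}
    TRUE_SPEC = {8: 0.75, 9: 0.81, 10: 0.86, 11: 0.90, 12: 0.93}

    published = {c: [] for c in CUTOFFS}  # (sens, spec) per published study
    for _ in range(200):  # 200 hypothetical primary studies
        n_cases, n_controls = 40, 160
        results = {}
        for c in CUTOFFS:
            sens_hat = rng.binomial(n_cases, TRUE_SENS[c]) / n_cases
            spec_hat = rng.binomial(n_controls, TRUE_SPEC[c]) / n_controls
            results[c] = (sens_hat, spec_hat)
        # selective reporting: keep only the cut-off with the best Youden's J
        best = max(results, key=lambda k: results[k][0] + results[k][1] - 1)
        published[best].append(results[best])

    for c in CUTOFFS:
        if published[c]:
            sens = np.mean([s for s, _ in published[c]])
            spec = np.mean([s for _, s in published[c]])
            print(f"cut-off {c}: published sens {sens:.2f} "
                  f"(true {TRUE_SENS[c]:.2f}), spec {spec:.2f} "
                  f"(true {TRUE_SPEC[c]:.2f})")

Because a cut-off is published only when sampling error flatters it, the pooled published estimates tend to exceed the true values; this is the bias the modeling approaches below aim to correct.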

Objectives: To compare IPDMA of data from all studies and cut-offs with three approaches for estimating accuracy from published data when cut-off data are missing: conventional bivariate random-effects meta-analysis, and modeling of missing cut-off data using the multiple cut-off models developed by Steinhauser and colleagues and by Jones and colleagues.
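
As a reference point for the first of these approaches, here is a minimal Python sketch of random-effects pooling of logit-transformed sensitivities from per-study 2x2 counts, using DerSimonian-Laird weights. The counts are hypothetical, and the sketch is a simplification: the full bivariate model additionally estimates the between-study correlation of logit-sensitivity and logit-specificity, which is omitted here. In practice such analyses are typically run in R (the Steinhauser multiple cut-off model, for example, is implemented in the diagmeta R package); Python is used here only for a self-contained illustration.

    # Sketch: DerSimonian-Laird random-effects pooling of logit proportions
    # (e.g. sensitivities at one cut-off). Hypothetical data; the bivariate
    # model in the abstract also models sens/spec correlation, omitted here.
    import numpy as np

    def pool_logit(events, totals):
        """Pool event proportions on the logit scale; return pooled value."""
        events = np.asarray(events, dtype=float) + 0.5  # continuity correction
        totals = np.asarray(totals, dtype=float) + 1.0
        y = np.log(events / (totals - events))          # per-study logits
        v = 1.0 / events + 1.0 / (totals - events)      # approximate variances
        w = 1.0 / v                                     # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)              # Cochran's Q
        tau2 = max(0.0, (q - (len(y) - 1)) /
                   (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_re = 1.0 / (v + tau2)                         # random-effects weights
        y_re = np.sum(w_re * y) / np.sum(w_re)
        return 1.0 / (1.0 + np.exp(-y_re))              # back to proportion

    # hypothetical true positives and case counts at a single cut-off
    print(pool_logit(events=[18, 25, 30], totals=[20, 30, 40]))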

Methods: We analyzed data collected for an IPDMA of the diagnostic accuracy of the Patient Health Questionnaire-9 (PHQ-9) depression screening tool. We compared sensitivity and specificity estimates from conventional meta-analysis of published results, from the two modeling approaches, and from the IPDMA. The modeling approaches were applied to the published dataset blind to the IPDMA results.

Results: We analyzed 15,020 participants (1,972 cases) from 45 studies. All methods produced similar specificity estimates. Compared to IPDMA, conventional bivariate meta-analysis underestimated sensitivity for cut-offs < 10 and overestimated sensitivity for cut-offs > 10 (mean absolute difference: 6%). For both modeling approaches, sensitivity was slightly underestimated for all cut-offs (mean underestimation: 2%).

Conclusions: IPDMAs are the gold standard for evidence synthesis, but they are labor intensive. When cut-off data are missing, applying modeling approaches to published data is more efficient than IPDMA and yields accuracy estimates closer to the IPDMA results than conventional meta-analysis does. However, in our case study the modeling approaches slightly underestimated sensitivity, and analyses of published data preclude assessing accuracy in participant subgroups.

Patient or healthcare consumer involvement: There was no consumer involvement in this project.

Relevance to patients and consumers: 

There was no consumer involvement in this project, but the results can be used to generate more accurate estimates of diagnostic test accuracy, which can ultimately improve patient care. In studies of diagnostic test accuracy of ordinal tests, missing cut-off data in primary studies can lead to biased meta-analytic accuracy estimates that do not reflect clinical reality. Modeling the missing cut-off data statistically offers a feasible way to estimate diagnostic accuracy without needing to obtain primary data.