Data Abstraction Assistant, a new tool, saves time without compromising the accuracy of data abstraction during systematic reviews

Session: 

Oral session: Innovative solutions to challenges of evidence production (4)

Date: 

Tuesday 18 September 2018 - 15:10 to 15:20


All authors in correct order:

Saldanha I1, Smith B1, Jap J1, Canner J2, Schmid C1
1 Brown School of Public Health, USA
2 Johns Hopkins Medicine, USA
Presenting author and contact person

Presenting author:

Tianjing Li

Contact person:

Abstract text
Background:
Data abstraction during systematic reviews is typically error-prone and resource-intensive. We developed Data Abstraction Assistant (DAA), a free and open-source tool that facilitates data verification and reproducible abstraction by allowing abstractors to link each abstracted item to its source location by dropping 'flags' in study articles.

Objective:
To compare the relative effectiveness of three data abstraction approaches in reducing the time taken and errors made:
A) DAA-facilitated single abstraction plus verification;
B) non-DAA-facilitated single abstraction plus verification; and
C) non-DAA-facilitated independent dual abstraction plus adjudication.

Methods:
We enrolled data abstractors and organized them into pairs based on experience. We randomized each pair to abstract data from six studies (two studies under each of approaches A, B, and C). Across all pairs, abstraction covered 48 studies drawn from four systematic reviews. We defined an 'error' as either the omission or incorrect abstraction of information for a given item on the data abstraction form, compared with the information abstracted by two investigators (TL and IJS). Participants self-recorded the total time spent on data abstraction per study, including initial abstraction and verification/adjudication.

Results:
All 52 enrolled abstractors (26 pairs) completed the DAA Trial. The data abstraction forms had a median of 121 data items per study (IQR 102 to 150). Mean error proportions were similar across approaches (18%, 17%, and 17% for approaches A, B, and C, respectively), with no statistically significant differences. Mean time per study was similar for approaches A and B (approximately 90 minutes each) but was significantly longer for approach C (142 minutes), a difference of 52 minutes (95% CI 33 to 71).

Conclusions:
Error proportions were similar among the three data abstraction approaches, but time spent on single abstraction plus verification (approaches A and B) was much lower than on independent dual abstraction plus adjudication (approach C). Because this time saving comes without compromising accuracy, systematic reviewers should reconsider their choice of data abstraction approach, for example by avoiding independent dual abstraction. Importantly, by linking abstracted data with their exact source, DAA provides an audit trail that is crucial for reproducible research.

Relevance to patients and consumers: 

Patients and consumers rely on systematic reviewers to summarize accurately the available evidence for questions that matter to them and other decision-makers. This abstract describes the results of a randomized controlled trial that tested whether a new software tool ('Data Abstraction Assistant') helps systematic reviewers make fewer data abstraction errors, thereby reducing the potential for harm to patients treated on the basis of systematic review findings.