Authors

Marta Michas1, Sandy Campbell2, Liza Chan2,3, Liz Dennett2, Victoria Eke4, Laura Hamonic2,5, Janice Kung2, Linda Slater2, Maria Tan2, Lisa Tjosvold2,6, Erica Wright2,6


Institutions

1MacEwan University Library, Edmonton, Canada

2J.W. Scott Health Sciences Library, University of Alberta, Edmonton, Canada

3Alberta Innovates, Edmonton, Canada

4Concordia University of Edmonton, Edmonton, Canada

5Canadian Patient Safety Institute, Edmonton, Canada

6Institute of Health Economics, Edmonton, Canada


Abstract

Introduction

High-quality searches and transparent reporting of search methods help minimize bias in systematic reviews (SRs). Many published SRs are potentially biased because poorly designed and executed searches may have missed relevant studies, and poor reporting prevents replication. This presentation outlines the initial stages of a project involving the post-publication peer review of systematic review search strategies in top-tier medical journals from 2017 to 2019. To evaluate the searches effectively for quality, replicability, and librarian involvement, the team developed a standardized tool to extract data from systematic review articles.

Methods

The team developed an extraction tool in Google Forms to capture authors' demographic data and details of search strategy documentation, and to evaluate the quality of the search methods reported in published systematic reviews. Ten health sciences librarians participated in form development and calibration (i.e., the pilot phase). The form was refined and calibrated through consensus in an iterative process to ensure consistent data extraction. Nine team members reviewed the pilot articles and met to discuss discrepancies and ambiguities. One researcher was responsible for developing the form and for creating and revising a codebook to guide users. After three calibration rounds, redundancy was reached and no further changes were necessary.

Results

The data extraction form included 34 questions: 4 identification questions, 22 questions on the reported search mechanics and methodology, 7 modified PRESS (Peer Review of Electronic Search Strategies) items, and 1 overall evaluation question (a score on a scale from 0 to 5). A comments section followed most questions so that extractors could add their observations. If the search was not available for evaluation, the PRESS assessment was not performed and the search scored 0 on the overall evaluation. For published searches, the PRESS assessment was performed and the search strategy was scored on a scale from 1 (poor quality: several significant errors introduced significant bias) to 5 (comprehensive).
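
The scoring rule can be summarized in a short sketch (illustrative only: the form was built in Google Forms, and the ExtractionRecord and overall_evaluation names and field groupings below are hypothetical, not the authors' implementation):

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ExtractionRecord:
        # Hypothetical grouping of the form's 34 questions.
        identification: dict                # 4 identification questions
        search_methods: dict                # 22 search mechanics/methodology questions
        press_items: Optional[dict] = None  # 7 modified PRESS items; None if the
                                            # search was unavailable for evaluation
        comments: dict = field(default_factory=dict)

    def overall_evaluation(record: ExtractionRecord,
                           press_rating: Optional[int] = None) -> int:
        """Scoring rule from the Results: unavailable searches score 0;
        published searches receive a PRESS-based rating from 1 (poor
        quality, significant bias) to 5 (comprehensive)."""
        if record.press_items is None:
            return 0  # PRESS not performed: search not available
        if press_rating is None or not 1 <= press_rating <= 5:
            raise ValueError("published searches must be rated 1-5")
        return press_rating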

Discussion

The pilot phase was integral to formulating an extraction form that is clear and concise and that standardizes how project members respond to questions. The team provided ample feedback throughout the calibration process, which resolved uncertainties in how the questions were interpreted. The pilot phase, although time-consuming, produced a rigorous extraction process that will underpin our research project. Viewers of this poster can apply this method to evaluate search strategies in comprehensive reviews.

