Radiologist Peer Review by Group Consensus.
- H. Benjamin Harvey; Tarik K. Alkasab; Anand M. Prabhakar; Elkan F. Halpern; Daniel I. Rosenthal; Pari Pandharipande; G. Scott Gazelle
- The objective of this study was to evaluate the feasibility of the consensus-oriented group review (COGR) method of radiologist peer review within a large subspecialty imaging department. This study was institutional review board approved and HIPAA compliant. Radiologist interpretations of CT, MRI, and ultrasound examinations at a large academic radiology department were subject to peer review using the COGR method from October 2011 through September 2013. Discordance rates and sources of discordance were evaluated on the basis of modality and division, with group differences compared using a χ² test. Potential associations between peer review outcomes and the time after the initiation of peer review or the number of radiologists participating in peer review were tested by linear regression analysis and the t test, respectively. A total of 11,222 studies reported by 83 radiologists were peer reviewed using COGR during the two-year study period. The average radiologist participated in 112 peer review conferences and had 3.3% of his or her available CT, MRI, and ultrasound studies peer reviewed. The rate of discordance was 2.7% (95% confidence interval [CI], 2.4%-3.0%), with significant differences in discordance rates on the basis of division and modality. Discordance rates were highest for MRI (3.4%; 95% CI, 2.8%-4.1%), followed by ultrasound (2.7%; 95% CI, 2.0%-3.4%) and CT (2.4%; 95% CI, 2.0%-2.8%). Missed findings were the most common overall cause of discordance (43.8%; 95% CI, 38.2%-49.4%), followed by interpretive errors (23.5%; 95% CI, 18.8%-28.3%), dictation errors (19.0%; 95% CI, 14.6%-23.4%), and recommendation discrepancies (10.8%; 95% CI, 7.3%-14.3%).
Discordant cases, compared with concordant cases, were associated with a significantly greater number of radiologists participating in the peer review process (5.9 vs 4.7 participating radiologists, P < .001) and were significantly more likely to lead to an addendum (62.9% vs 2.7%, P < .0001). COGR permits departments to collect highly contextualized peer review data to better elucidate sources of error in diagnostic imaging reports, while reviewing a sufficient case volume to comply with external standards for ongoing performance review.
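The abstract's proportion confidence intervals are consistent with a simple normal-approximation (Wald) interval; the paper does not state which method was used, so the sketch below is an assumption, and the discordant count of 303 is inferred from the reported 2.7% of 11,222 studies rather than stated in the abstract.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion.

    An illustrative reconstruction, not the method stated in the paper.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# Overall discordance: ~303 of 11,222 studies (count inferred from the reported 2.7%)
p, lo, hi = wald_ci(303, 11222)
print(f"{p:.1%} (95% CI, {lo:.1%}-{hi:.1%})")  # → 2.7% (95% CI, 2.4%-3.0%)
```

Applying the same formula to the modality subgroups (e.g., MRI at 3.4%) reproduces the reported interval widths, which scale with the smaller per-modality denominators.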
Year: 2016
Type of Publication: Article
Journal: J Am Coll Radiol