Abstract
BACKGROUND AND PURPOSE: The American College of Radiology is now offering an accreditation pathway for programs that use peer learning. Here, we share feasibility and outcome data from a pilot peer learning program in a pediatric neuroradiology section that, in its design, follows the American College of Radiology peer learning accreditation pathway criteria.
MATERIALS AND METHODS: We retrospectively reviewed metrics from a peer learning program with 5 participating full-time pediatric neuroradiologists during 1 year: 1) number of cases submitted, 2) percentage of radiologists meeting targets, 3) monthly attendance, 4) number of cases reviewed, 5) learning points, and 6) improvement actions. In addition, a faculty survey was conducted and is reported here.
RESULTS: Three hundred twenty-four cases were submitted (mean, 7 cases/faculty/month). There was no month in which all faculty met the monthly submission target. Peer learning meeting attendance was 100%. One hundred seventy-nine cases were reviewed during the peer learning meetings. There were 22 learning points throughout the year and 30 documented improvement actions. The faculty survey yielded the highest ratings (4.8 of 5) for ease of meeting the 100% attendance requirement and for the learning value of the peer learning sessions. The lowest rating (4.2 of 5) was given for the effectiveness of improvements resulting from peer learning discussions.
CONCLUSIONS: Implementing a peer learning program that follows the American College of Radiology peer learning accreditation pathway criteria is feasible. Program metric documentation can be time-consuming. Participant feedback led to meaningful program improvements, such as improving trust, expanding case submission categories, and delegating tasks to administrative staff. Efforts to make peer learning operations more efficient and more effective are underway.
ABBREVIATIONS:
- ACR = American College of Radiology
- CME = Continuing Medical Education
- PL = peer learning
The American College of Radiology (ACR) is now offering an accreditation pathway for programs that use peer learning (PL).1 To qualify, a PL program should have a PL policy, explicit program targets, and annual documentation of program metrics. Specifically, the annual report should include the total number of case submissions to the PL program, the number and percentage of radiologists meeting targets as defined in the facility practice policy, a determination of whether PL activities met the minimum standard as defined by the facility practice policy, and a summary of related quality-improvement effort and accomplishments.1
Many radiology practices in the United States are adopting PL in lieu of or in addition to traditional score-based peer review.2-4 PL is an approach to performance improvement that is based on quality and safety concepts found in high-reliability organizations.5 PL builds a safety culture by creating a safe environment for error disclosure, facilitating joint learning from mistakes, and creating opportunities for improvement through group discussions that elucidate sources of error.6-12 Higher case submission rates have been observed after switching from score-based peer review to PL, indicating greater engagement of radiologists.6,7
Here, we share feasibility and outcomes data from a pilot PL program in a pediatric neuroradiology section that, in its design, follows the ACR PL accreditation pathway criteria.1 Our program uses several PL metrics, including radiologist participation rates, number of cases submitted, number of cases reviewed, tangible lessons learned, and improvement projects completed.
MATERIALS AND METHODS
This quality-assurance study was exempt from institutional review board approval. The data were collected at Children's Healthcare of Atlanta (CHOA), a freestanding academic pediatric hospital with nearly 300,000 examinations annually. Five full-time pediatric neuroradiologists participated in the PL program during the 1-year study period, January 1, 2021, through December 31, 2021. A total of 24,724 neuroradiology examination reports were issued during this time.
PL Program
In December 2020, we incorporated an additional pediatric site into our practice and added substantially to our pediatric neuroradiology faculty, resulting in a separation of pediatric from adult neuroradiology service lines. This created an opportunity for implementing a pilot PL program for the pediatric neuroradiologists who previously participated in score-based peer review.
Our PL program is informed by a written policy that incorporates the elements recommended by the ACR accreditation checklist for PL.1 Our section chief defined the program targets as follows: PL conferences to occur monthly, 100% faculty attendance, and 5 PL cases submitted each month per pediatric neuroradiologist. The annual documentation of our PL program metrics includes the following: a statement of commitment to sequestering PL from performance evaluations, the total number of case submissions to the PL program, the number and percentage of radiologists meeting targets as defined in the facility practice policy, a determination of whether PL activities met the minimum standard as defined by the facility practice policy, and a summary of related quality-improvement effort and accomplishments.1
PL conferences occur monthly throughout the calendar year and are recorded for asynchronous viewing. The meetings last 1 hour, from 12:00 to 1:00 pm, when, in most instances, there is service coverage by a fellow. Two dedicated faculty members alternate monthly in selecting and presenting cases. During the study period, we reviewed not only discrepancies of perception, interpretation, or communication but also interesting cases. Each month, cases submitted during the previous month were reviewed. Cases were presented as anonymized PowerPoint slides (Microsoft). The discussion of each case was documented on a case-review form, along with any learning points and improvement actions. Each session was recorded (Teams; Microsoft) and saved in an online location outside the institution’s health records system, where it is protected under state peer review law. Recordings are shared only with faculty and PL staff and can be accessed for remote viewing by those who could not attend the in-person session. During the study period, each improvement action was immediately assigned to a faculty volunteer, who set a deadline; the action was then followed to completion at the beginning of subsequent PL meetings.
Data Collection
We analyzed the following items that were collected monthly: 1) the number of cases submitted per faculty per month, 2) the percentage of radiologists meeting PL program targets for case submissions (5 per month per faculty), 3) monthly faculty PL attendance (target of 100% live attendance or asynchronous viewing of session recordings), 4) the number of cases reviewed during the PL session, 5) the number and nature of learning points, and 6) the number and nature of improvement actions with assigned faculty volunteer and documented completion.
Faculty Survey
An 11-item survey (Online Supplemental Data) was developed and face-validated by the radiology quality director (N.K.). Responses were collected anonymously in January 2022. There were 2 yes/no questions, 3 open-comment items, and 6 Likert-type items requesting a star rating on a 5-star scale.
Data Analysis
Descriptive statistics were performed in Excel (Microsoft).
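The tabulations behind the program metrics are straightforward. As an illustration only (our analysis was performed in an Excel workbook, not in code), the sketch below shows how the total submissions, the mean submissions per faculty per month, and the monthly percentage of radiologists meeting the submission target could be computed from a per-faculty submission log. All counts and faculty labels in the sketch are hypothetical placeholders, not study data.

```python
# Illustrative computation of the PL program metrics described above.
# All counts and faculty labels are hypothetical placeholders, not study data.

TARGET_PER_MONTH = 5   # program target: 5 case submissions per faculty per month
MONTHS = 12

# Hypothetical monthly submission counts for 5 faculty members (A-E), one value per month
submissions = {
    "A": [5, 7, 3, 1, 6, 8, 5, 4, 9, 6, 7, 5],
    "B": [2, 4, 0, 1, 3, 5, 6, 2, 4, 3, 5, 4],
    "C": [8, 10, 6, 2, 9, 12, 7, 5, 11, 8, 9, 6],
    "D": [1, 0, 2, 0, 1, 3, 2, 1, 0, 2, 1, 0],
    "E": [0, 1, 0, 0, 2, 1, 0, 1, 0, 1, 2, 0],
}

# Total case submissions and mean submissions per faculty per month
total_submitted = sum(sum(counts) for counts in submissions.values())
mean_per_faculty_per_month = total_submitted / (len(submissions) * MONTHS)

# Percentage of radiologists meeting the monthly submission target, for each month
pct_meeting_target_by_month = [
    100 * sum(counts[m] >= TARGET_PER_MONTH for counts in submissions.values()) / len(submissions)
    for m in range(MONTHS)
]

print(f"Total cases submitted: {total_submitted}")
print(f"Mean submissions per faculty per month: {mean_per_faculty_per_month:.1f}")
print(f"Percent of faculty meeting target, by month: {pct_meeting_target_by_month}")
```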
RESULTS
PL Program Metrics
The number of monthly case submissions varied widely. During the year, 324 cases were submitted for the PL meetings, with an average of 7 case submissions per faculty per month, and monthly submissions ranging from 0 to 26 cases for a single faculty member (Online Supplemental Data and Fig 1).
There was no month during which >80% of the faculty met the monthly submission target of 5 cases (Fig 2). The low case-submission rate in January may reflect the newness of the program (it started in December 2020), and the low submission rate in April coincided with high case volumes and reduced staffing that month (data not shown).
PL meeting attendance was 100% for each faculty member.
A total of 179 cases were reviewed throughout the year, approximately 55% of all case submissions (179/324). On average, we reviewed approximately 15 cases per PL meeting, with 6-24 case reviews per session.
The session moderator documented any learning points and improvement actions during each PL meeting. There were 22 learning points throughout the year, an average of approximately 2 learning points per session. Lessons learned included the importance of accurate use of overnight agree/disagree statements, potential pitfalls in image interpretation, the importance of report proofreading, instances when it is appropriate to reference normative values for measurements, and imaging signs of rare diagnoses.
There were 30 documented improvement actions throughout the year, an average of 2.5 improvements identified per session. Improvements resulting from the PL program thus far included changes to CT and MR imaging protocols, education of radiologists and technologists, changes to reporting templates, changes to EPIC workflows, and modifications of team communications.
PL Faculty Survey
All faculty members responded to the survey (response rate, 100%) (Online Supplemental Data). All section members had previously participated in randomized score-based peer review, and only 2 faculty members had experienced PL previously.
When asked to list any differences between random score-based review and PL that favor random score-based review, respondents noted that it is faster, more objective, and simple; that it provides an easy metric; and that the mix of agree/disagree scores (versus reporting only disagreements) gives a sense of accuracy. Differences listed in favor of PL included more fun, more learning, discovery of improvement opportunities, group discussion, interactive and constructive feedback, and a better experience overall.
The highest ratings (4.8 of 5) were given for ease of meeting the 100% attendance requirement and for the learning value of the PL sessions. A slightly lower rating (4.6 of 5) was given for feeling safe during case discussions, for the ease of submitting cases, and for the ability to gain Continuing Medical Education (CME) credit for session participation. The lowest rating (4.2 of 5) was given for effectiveness of improvements as a result of PL discussions.
Additional general comments included suggestions to lower the participation target to 80% and to include good calls rather than only discrepancies, as well as concerns that there are too many case-submission tools and that improvement actions felt rushed and reactive.
DISCUSSION
We were able to set up a PL program in pediatric neuroradiology that incorporates the checklist items for the new ACR accreditation pathway for PL, demonstrating feasibility in program design and implementation. However, we have not yet sought ACR accreditation through this pathway.
Notably, generating the data required for ACR reporting adds to the overall time commitment for running a PL program. Although we did not measure this formally, we estimate that the annual time commitment for the physician leaders is 56 hours: 4 hours/month to collate, select, prepare, and discuss cases for the monthly PL conference; 0.5 hour/month to transcribe PL program data and submit CME materials; and 2 hours to write the annual report. We have now trained an administrative assistant who reviews the PL session recordings to track attendance, fill out the case-review forms, and handle any activities related to CME credit. Although obtaining CME credit for PL was rated as less important in our survey, we will continue to offer it because our administrative staff now manages this aspect of the program. The more time-intensive effort of PL programs compared with score-based peer review has been acknowledged by others.13
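For reference, the annual estimate above is simply the sum of the stated components:

$12 \times 4\ \text{h} + 12 \times 0.5\ \text{h} + 2\ \text{h} = 48\ \text{h} + 6\ \text{h} + 2\ \text{h} = 56\ \text{h per year}$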
The monthly PL meeting attendance target was easily met by allowing faculty who could not attend the live session to attest to viewing the session recordings. Faculty rated the ease of compliance with this target very favorably. Similar to others, we adopted the virtual format because of coronavirus disease 2019 (COVID-19) conditions14 but now find that it remains the best option for participation from the various sites and practice locations within our system. Notably, we are not using any incentives or penalties to drive up our faculty participation rate.15
There was not a single month during which our entire faculty met the target for case submissions. Two faculty members (Fig 1, faculty D and E) disclosed not entirely trusting the separation of learning from performance assessment and, therefore, avoiding case submissions; this was also reflected in the survey by lower ratings for perceived safety during PL meetings. Another faculty member struggled with the multitude of reporting tools to be used, ie, RADPEER (https://www.acr.org/Clinical-Resources/RADPEER), EPIC, and e-mails. In response to these concerns, we have uninvited an external PL session participant who represented the system Peer Practice Evaluation Committee. We also informed our faculty about the educational nature of the PL program and its protection under state peer review legislation, and a reminder slide is now included in the introductory portion of the PL meeting slides. As another change in response to these concerns, we now keep case discussions completely anonymous, meaning that we no longer allow faculty to self-identify in any way during a case discussion. Regarding the case-submission tools, we are currently still required to use the ACR RADPEER tool for ongoing professional practice evaluation. Unfortunately, our RADPEER installation is not set up to allow reviews of past faculty readers, nor can we submit cases when the current and past reader is the same person. In those instances, we have configured a quality reporting tool in EPIC, but it can be used only as long as the report has not been finalized. For all other cases, we notify the PL leaders by e-mail so that cases can be included. We are currently developing an alternate performance review system for ongoing professional practice evaluation16,17 so that we can abandon RADPEER and replace all current submission options with a single tool.
At the start of the PL program, we arbitrarily set the target for monthly case submissions per faculty on the basis of what seemed "reasonable." Because our faculty never met that target, we propose several changes. In our program, we reviewed a maximum of 24 cases in a single PL conference, which can help calibrate the number of case submissions to request per faculty per month. For example, for our general pediatric radiology section, which currently has 18 faculty, it was decided to maintain a minimum submission of 2 cases per month per faculty; this yields up to 36 submissions per month, a surplus over the at most 24 cases reviewed per conference that allows the PL program leads to select the cases with the highest yield for discussion and to omit redundant or repetitive cases. If we continue to fail to meet our monthly case-submission target in pediatric neuroradiology, we may lower the monthly target below 5 cases or set the target at the section rather than the individual level. Another option to consider, especially for smaller radiology subspecialties, could be to expand PL programs across multiple institutions to broaden the shared learning experience and the variety of cases.18,19
On the basis of the collected data on learning points and the survey responses rating the learning value highly, our program performs similarly to those of others who have reported higher rates of satisfaction and learning.6,20,21 On the basis of the feedback submitted in the survey, we have expanded the submission categories from discrepancies only to also include good calls,22 interesting cases, and cases for any type of group discussion (communication, protocols, imaging technique, and so forth). Sources of PL cases in our program include the routine workflow, clinical conferences, consultations, and a provider feedback submission system. In the future, we may be able to integrate artificial intelligence applications that can identify cases with radiology-pathology correlations.23
The lowest survey ratings from our faculty were issued for the improvement effectiveness of the PL program. On further inquiry, faculty members were concerned that improvement actions were decided too quickly without deeper reflection on root causes and balancing measures. We are now documenting any improvement ideas that are mentioned during PL conferences, but we hold off on initiating improvements until a subsequent discussion with the section director has occurred.
Of note, our PL process eliminates faculty “voting” on discrepancies of perception, interpretation, and communication. In our system, the radiologist who identifies a discrepancy is in charge of immediately addressing any patient care issues and notifying the original interpreting radiologist of the discrepancy. He or she can suggest that the original radiologist should act, eg, by issuing an addendum to a report. Whether the recommended action is implemented by the original radiologist, however, is left to that radiologist’s professional decision. Any concerns regarding a radiologist’s clinical practice or behaviors are to be submitted to our system’s Peer Practice Evaluation Committees, which review any physician practice or behavior concerns and determine possible actions.
This study has several limitations. While we assume that PL is more effective than score-based peer review when it comes to improved practice, we do not have any data to show this to be true. Some programs use addendum rates as a proxy for improvement effects and show higher addendum rates with PL compared with score-based peer review.8,15 Our survey supports the notion that PL is a valued activity for our faculty, and that at a minimum, it creates opportunities for teambuilding and collaboration.24 Some of the submitted discrepancies may be unproven, disputed, or clinically insignificant. We have not yet needed a system to address disputes. We currently have the person identifying a discrepancy notify the original reader and indicate that either no further action is needed on the basis of an existing follow-up report or an action would be helpful on the basis of a clinician request or patient care impact. It is then up to the radiologist receiving this feedback to act appropriately and responsibly.
CONCLUSIONS
We show the feasibility of a PL program in a pediatric neuroradiology section that follows the ACR PL accreditation pathway criteria. At our academic institution, PL is currently being piloted in the pediatric radiology sections. Soliciting feedback from PL program participants has been helpful in changing certain aspects of the program, such as improving trust in the PL program, including more meaningful case-submission types, and implementing more thoughtful improvement actions. While radiologists favor PL over score-based review, the lack of tools and support to run PL meetings efficiently and effectively may present a barrier to a widespread replacement of score-based review with PL. We are currently developing a submission and data-collection tool that supports semiautomated reporting for the ACR accreditation pathway, and we are exploring aspects of the PL process that can be handed off to administrative staff.
Acknowledgments
Special thanks go to Jennifer Broder, MD, who provided insight and support throughout the process of designing and implementing our peer learning program.
Footnotes
Disclosure forms provided by the authors are available with the full text and PDF of this article at www.ajnr.org.
- Received June 27, 2022.
- Accepted after revision September 12, 2022.
- © 2022 by American Journal of Neuroradiology