Assessing new screening strategies: collaboration and persistence pay off
One strategy that I've found ensures a long and productive academic career is collaborating with talented and curious colleagues. I first put it into practice as a medical officer at the Agency for Healthcare Research and Quality, writing evidence summaries and systematic reviews with other staffers and rotating Preventive Medicine residents. At Georgetown, I collaborated with a terrific team of health researchers to write papers on interpreting and managing test results for Zika virus and COVID-19. I've collaborated with members of the AAFP's Commission on Health of the Public and Science on clinical practice guidelines on COPD exacerbations and blood pressure targets in adults with hypertension. I've collaborated on articles on social media for doctors and advocacy curricula in family medicine residency programs. Through the Lown Institute, I have collaborated on original work estimating the U.S. rate of overuse of screening colonoscopy and the associated harms. And most recently, I worked with a longtime mentor and sometime mentee on an analysis piece in BMJ Evidence Based Medicine that assesses evidence thresholds for proposals to update established screening strategies.
How this paper came about deserves some explanation. In spring 2021, a company interested in developing a new screening test approached me to ask whom they should consult about the type of evidence required for that test to be endorsed by guideline panels and be covered by insurance. I recommended Russ Harris, a former U.S. Preventive Services Task Force (USPSTF) member who had retired from his academic position at UNC-Chapel Hill but was still interested in pursuing new scholarly projects. Russ then reached out to me in an e-mail: "I don't think the methods are well worked out for replacing an established screening strategy with a new one in the absence of a formal RCT. Do you have any interest in [doing] some scholarly work on making a contribution to this problem?" I responded: "I agree with you that it's not clear what kind of evidence is needed to recommend a new screening strategy without an RCT with mortality outcomes. I've been puzzling over this problem since the USPSTF first recommended CT colonography and FIT-DNA for colorectal cancer screening, apparently on the basis of being able to plug their diagnostic accuracy studies into a decision model. I don't see how the TF got to 'high certainty of substantial net benefit' for these tests."
Not long after that, we recruited the third member of our team: Alison Huffstetler, whom I met when she was completing her policy fellowship at Georgetown (we both saw patients in the same family medicine office on Fridays) and who had subsequently taken a research faculty position at Virginia Commonwealth University. We decided that we would each study a different condition for which the USPSTF had adopted a new screening strategy (cervical, colorectal, and breast cancer) and formulate some common principles that would drive the paper's analysis. Ambitiously, in early May I proposed a timeline that would allow us to "get the bulk of the paper written in the second half of June / first half of July [2021]."
It took a bit longer than that. I moved back to DC from Utah, then moved to Lancaster the following summer to start a new job. Alison got married, changed jobs, and had a baby. Russ kept us on task and, through many Zoom calls, we kept making progress.
Finally, in January 2023 (a year and a half later), we completed a manuscript that we submitted to the American Journal of Preventive Medicine. It was peer reviewed but rejected. We did some rewriting and resubmitted it to JAMA Internal Medicine in June 2023. It was "desk rejected" (not sent out for review) by JAMA-IM, then, in rapid succession, transferred to and rejected by JAMA Health Forum and JAMA Network Open. We moved on to the Journal of Clinical Epidemiology, Annals of Internal Medicine, and then the Journal of General Internal Medicine: same outcome. In the meantime, I presented our paper at the Preventing Overdiagnosis conference in Copenhagen and received some encouraging feedback, which provided enough motivation to keep trying. We embarked on another rewrite and resubmitted the paper as an analysis piece to BMJ in January 2024. The journal's editors rejected it after peer review, but they advised that it could be a good fit for BMJ Evidence Based Medicine. We revised our paper in response to the reviews and submitted it to BMJ-EBM in April 2024. More revision requests ensued. Finally, on August 21, our paper was accepted, and it was published online on September 3.
If you've been keeping count, 8 journals had a chance to publish our manuscript but passed after varying degrees of editorial and peer review. Number 9 turned out to be the charm. This paper required not only collaboration but also dogged persistence and an unswerving belief that we had something of value to contribute to the dialogue around screening. I hope you will agree.