Association Between Maintenance of Certification Examination Scores and Quality of Care for Medicare Beneficiaries
Eric S. Holmboe, MD; Yun Wang, PhD; Thomas P. Meehan, MD, MPH; Janet P. Tate, MPH; Shih-Yieh Ho, PhD, MPH; Katie S. Starkey, MHA; Rebecca S. Lipner, PhD
Arch Intern Med. 2008;168(13):1396-1403. doi:10.1001/archinte.168.13.1396
The authors include highly paid employees of ABIM. Physicians were grouped into quartiles based on their performance on the American Board of Internal Medicine (ABIM) MOC examination. The main outcome measures were the associations between physician MOC examination scores and three quality measures: a composite of diabetes care processes (including hemoglobin A1c testing and retinal screening), mammography screening, and lipid testing in patients with cardiovascular disease. The intent was to see whether doctors who scored higher on their MOC exams did a better job of ordering screening tests for their patients.
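To make the study design concrete, here is a minimal sketch of the quartile stratification on toy data. The column names (physician_id, moc_score) and scores are illustrative assumptions; the actual ABIM scoring variables are not public.

```python
import pandas as pd

# Toy roster of physicians with hypothetical MOC exam scores.
physicians = pd.DataFrame({
    "physician_id": range(1, 9),
    "moc_score": [310, 455, 520, 610, 380, 700, 540, 480],
})

# pd.qcut splits physicians into four equal-sized groups by exam score,
# mirroring the paper's quartile grouping (Q1 = lowest, Q4 = highest).
physicians["score_quartile"] = pd.qcut(
    physicians["moc_score"], q=4, labels=["Q1", "Q2", "Q3", "Q4"]
)
print(physicians.sort_values("moc_score"))
```

The paper then compared care-process rates across these quartiles, adjusting for physician and patient characteristics.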
Physicians scoring in the top quartile on the MOC exam were more likely to perform the diabetes processes of care and mammography screening than physicians in the lowest quartile, even after adjustment for multiple factors. There was no significant association between physicians’ MOC examination performance and lipid testing in patients with cardiovascular disease.
The study concludes “…physician cognitive skills, as measured by a maintenance of certification examination, are associated with higher rates of processes of care for Medicare patients.” Notice the authors do not conclude that doing MOC improves processes of care. Instead, they use MOC test scores as a surrogate for “cognitive skills.” The conclusion could also be summarized as: doctors who do better on tests are more likely to do a better job ordering screening tests for their patients. No causal association is stated, just an implication.

In fact, the authors give it away in the next-to-last paragraph, the final part of the Limitations section: “Finally, we excluded physicians who did not take an MOC examination. We fitted additional models with a dummy variable for physicians who did not take the test and found that there was no difference in performance between physician groups with and without MOC scores. However, the distribution of scores from the physicians’ initial certification examination was similar to that of the analyzed cohort and thus likely explains the lack of an association.” Thus, in the very paper often used by ABMS to support MOC, the authors state there was no difference in patient care outcomes between patients cared for by physicians who did and those who did not participate in MOC.
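For readers unfamiliar with the dummy-variable check quoted above, here is a hedged sketch of the idea on simulated data. The variable names (got_mammography, took_moc, patient_age) are illustrative assumptions, and the authors’ actual hierarchical models and covariates were more elaborate; the point is only to show what “a dummy variable for physicians who did not take the test” means.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated patient-level data (purely illustrative, not the study's data).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "got_mammography": rng.integers(0, 2, size=n),  # 1 = screening performed
    "took_moc": rng.integers(0, 2, size=n),         # 1 = physician took the MOC exam
    "patient_age": rng.normal(75, 6, size=n),       # example adjustment covariate
})

# Logistic model with the exam-taking indicator ("dummy") plus an adjuster.
# A coefficient on took_moc near zero (as the authors report) means taking
# the exam is not associated with the care process.
model = smf.logit("got_mammography ~ took_moc + patient_age", data=df).fit()
print(model.params)
```

In the paper’s version of this check, the coefficient on the take/no-take indicator was null, which is exactly the result the commentary above highlights.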
Furthermore, we see comparisons between high and low scorers, but we never see “passed” vs. “failed.” Pass or fail is the only “grade” the public ever sees regarding MOC; i.e., the public is told the doctor is either Participating in MOC or Not Participating in MOC. No one but the physician sees the actual score.
The authors also point out that one limitation of the study is that most of the clinical tests evaluated (LDL levels, hemoglobin A1c levels, etc.) could be implemented by nonphysician office staff. So another explanation is that high MOC scores correlate not so much with knowing more and being a better doctor as with being better at organizing a good system of care in your office, i.e., having a good system run by nurses and assistants.
Finally, the differences between high and low scorers on the measured care processes are not that dramatic: for example, physicians in the lowest quartile had a compliance rate only 6.2 percentage points lower than physicians in the top quartile for mammography screening, and only 5.0 percentage points lower for the diabetes composite measure.
This is one of the higher-quality studies in the portfolio. As noted above, the authors used linked Medicare data to measure quality of primary care (the ordering of guideline-indicated screening tests) and examined its association with “cognitive skill” (scores on the internal medicine MOC examination). In my opinion, the analyses are well conducted given the inherent limitations of any observational study and demonstrate a fairly convincing association between test performance and quality of care. However, as Dr. Teirstein notes, the study may simply be measuring the association between physician intelligence (or even test-taking ability) and performance rather than the association between participating in MOC and performance. Thus, if one wanted to identify a group of physicians most likely to deliver high-quality care (according to the outcome measures selected), it would be better to select individuals who performed well on MOC exams rather than those who simply took the exam. I agree that this type of study does not really make the case that participating in MOC leads to improved care, but it would help me pick a doctor who would provide better care.
This manuscript is a retrospective analysis of 10 primary care performance measures at 4 VA medical centers. A total of 105 primary care physicians (71 with time-limited ABIM certification and 34 with time-unlimited certification), with a mean panel size of 610 patients, were surveyed. Before statistical adjustment, time-unlimited physicians performed better in 3 of the 10 categories, but after adjustment there were no differences in outcomes by certification status.
This analysis compared processes of care for diabetes, mammography screening, and lipid testing among groups of internal medicine physicians stratified by first-attempt test score on the ABIM MOC examination. In adjusted analyses, physicians in the highest quartile of scores performed better on the diabetes processes of care and referral to mammography, but not on lipid testing. It is notable that the authors do not present data stratified by the conventional pass/fail cut point that ultimately determines recertification status. This analysis would have been easy to conduct (see the sketch below) and is the more relevant one for determining whether maintenance of certification is actually associated with outcomes. Additionally, it is somewhat intuitive that physicians who do better on a standardized test might perform better in practice than those who do not, but this does not mean that the MOC process itself is the reason. That the comparison between physicians who did not take the test and those who did (in the limitations) showed no difference in outcomes is particularly notable, especially when one considers that these physicians’ distribution of scores on the initial certification examination was similar to that of the analyzed cohort. This suggests that it is overall test-taking performance (and not the MOC examination) that may be driving the results.
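As a sketch of the pass/fail comparison this reviewer says would have been easy to run, the quartile grouping can simply be replaced with a single indicator at the passing cut point. The names (moc_score, composite_met) and the PASSING_SCORE value are hypothetical; ABIM’s actual cut score and the study’s adjusted models are not reproduced here.

```python
import pandas as pd

PASSING_SCORE = 366  # illustrative placeholder, not ABIM's real cut score

# Toy physician-level data: exam score and whether the diabetes composite
# measure was met for that physician's panel (hypothetical values).
df = pd.DataFrame({
    "moc_score":     [310, 455, 520, 610, 380, 700, 540, 480],
    "composite_met": [0,   1,   1,   1,   0,   1,   1,   0],
})
df["passed"] = (df["moc_score"] >= PASSING_SCORE).astype(int)

# Compare the care-process rate between passers and failers; in the actual
# study this indicator would feed the same adjusted models used for quartiles.
print(df.groupby("passed")["composite_met"].mean())
```

Because pass/fail is the only signal the public ever sees, this is the stratification that would speak directly to whether MOC status, rather than raw test performance, tracks quality of care.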