Hess BJ1, Weng W, Holmboe ES, Lipner RS.
Acad Med. 2012 Feb;87(2):157-63. doi: 10.1097/ACM.0b013e31823f3a57.
This paper is similar to paper #4 above by Holmboe et al. It is also written by highly paid employees of the ABIM. It is a more recent paper (2012 vs. 2008) but examines only 676 physicians. The paper aims to correlate physician scores on the MOC exam with physicians ordering screening tests (retinal exam, foot exam, blood pressure control, A1c at goal, LDL control). The authors conclude, “Physicians’ cognitive skills significantly relate to their performance on a comprehensive composite measure for diabetes care. Although significant, the modest association suggests that there are unique aspects of physician competence captured by each assessment alone and that both must be considered when assessing a physician’s ability to provide high-quality care.”
As with the paper above, MOC scores are being used as a surrogate for cognitive skills. There is no evidence that doing MOC improves the outcome measured. The study shows only that doctors who do better on a written test are also better at ordering screening tests for their patients. No causation is claimed.
In my view, a major problem with this small study is that the investigators measured patient outcomes (the number of screening tests patients received) by looking at each physician’s Practice Improvement Module (PIM). The PIM is part of the MOC process. Physicians are asked to abstract the charts of 25 of their patients to see how often the screening tests were done and how often their patients were in good diabetes control. The data are entirely self-reported with no auditing. The physicians who were part of the study only managed to abstract an average of 21 charts, not 25. I don’t see how this paper can be quoted by ABMS as supportive of MOC when the data on physician behavior are self-reported by the very physicians being tested.
It is also noteworthy that the authors’ conclusion is not very strong, “Although the associations that we observed were statistically significant, they were modest and not surprising given the complexity of clinical practice.”
The authors also point out that one limitation of the study is that most of the clinical tests evaluated (LDL levels, hemoglobin A1c levels, etc.) could be implemented by non-physician office staff. So, another explanation is that high MOC scores correlate not so much with knowing more and being a better doctor, but with being better at organizing a good system of care in your office, i.e., having a good system run by nurses and assistants.
The statistical methods employed by this study are reasonable. As noted above, the main finding of this study is that scores on the internal medicine MOC exam demonstrated modest correlation with a composite measure of diabetes care, with a stronger correlation specifically with the endocrinology component of the MOC. I agree that the main limitation of this study is that the outcome measure (diabetes composite score) was derived from what are essentially self-reported data from review of approximately 20 patient charts. There is no assurance that these charts are truly representative or sequential, or even abstracted correctly. In addition, it is possible that these differences relate more to intrinsic characteristics of the physicians (e.g., intelligence, learning ability, etc.) rather than to taking the MOC, per se.
This analysis correlates ABIM MOC Examination scores from 676 physicians with time-limited certification in internal medicine with their practice performance, using a composite diabetes measure based upon the ABIM’s diabetes practice improvement module (PIM, derived through physician chart audits). Overall examination scores correlated with the diabetes composite score, and the correlation was highest for performance on the endocrine disease component of the examination. Examination scores also correlated with the process subcomposite measure as well as the patient experience measure, although the latter was the weakest association. Notably, the overall explanatory power of the model was only 13%. This analysis does not dichotomize between failing and passing scores (what certification really represents) and does not enable a distinction between overall cognitive/test-taking ability vs. the effect of participation in the MOC process.
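To put that 13% figure in perspective, the arithmetic below is a minimal sketch (not from the paper itself) showing what an explained variance of 0.13 implies: for a single predictor, R² is the square of the correlation coefficient, so the implied correlation is a modest ~0.36, leaving 87% of the variation in the diabetes composite score unexplained by exam performance.

```python
import math

# The authors report the model explains only 13% of variance (R^2 = 0.13).
r_squared = 0.13

# For a simple one-predictor model, R^2 is the square of the
# correlation r, so the implied correlation is modest:
r = math.sqrt(r_squared)

# The remaining variance in the diabetes composite score is
# unexplained by exam scores:
unexplained = 1 - r_squared

print(f"implied correlation r ~ {r:.2f}")       # about 0.36
print(f"variance unexplained: {unexplained:.0%}")  # 87%
```

This is why a correlation can be "statistically significant" in a sample of 676 physicians while still telling us very little about any individual physician's quality of care.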