MOC Journal Club

In their marketing materials, ABMS member boards provide lists of publications they claim support the beneficial impact of MOC on patient outcomes. NBPAS asked two uninvolved clinical researchers to formally review the major studies in this area. Below we provide the reviews from the two independent reviewers as well as Dr Teirstein (President of NBPAS). The studies were drawn from the ABMS member boards’ marketing materials (the one exception is paper #2, Hayes et al., JAMA 2014, which is absent from ABMS marketing materials) and were selected because they appeared to be the most robust research in this area.



Dr Teirstein (PST) is the Chief of Cardiology at Scripps Clinic and the President of NBPAS. He has modest experience in the design, execution and review of clinical trials. He describes himself as anti-MOC.

Dr Cohen (DJC) is the Vice Chairman of Medicine for Research, Beth Israel Deaconess Medical Center, Boston, MA. He has extensive experience in the design, execution and review of clinical trials. He describes himself as neutral with respect to the MOC controversy.

Dr Ajay Kirtane (AJK) is Associate Professor of Medicine at Herbert Irving Columbia University Medical Center (CUMC) and Director of the Cardiac Catheterization Laboratories at NewYork-Presbyterian (NYP) Hospital / CUMC. He has extensive experience in the design, execution and review of clinical trials as well as in the design and execution of educational programs (including self-assessment/MOC programs) for practicing physicians and fellows. He describes himself as of two minds with respect to the recent MOC controversy and requirements: While recognizing the critical need to maintain physician competency, he is firmly convinced that the mechanisms by which such competency is attained (and evaluated) must be clinically relevant and demonstrably worth the considerable efforts and costs involved for practicing physicians.


Paul Teirstein M.D. (PST) Comments/Overall Assessment:

When evaluating studies on the impact of MOC on patient outcomes, I believe there are several important issues warranting consideration.

  1. The overwhelming majority of articles in this space are authored by highly paid (>$300,000–$400,000/yr) employees of ABMS member boards. The two most notable are Rebecca Lipner and Eric Holmboe. Both are seasoned, senior researchers. Their obvious conflict of interest does not mean the work is not trustworthy. However, the conflict of interest should be noted.
  2. The reader should be careful to distinguish papers that examine the impact of initial board certification from those examining maintenance of certification (MOC). In ABMS marketing materials, papers evaluating initial certification are sometimes mixed with papers evaluating MOC. Below, we have not critiqued data concerning initial board certification as we do not believe initial ABMS member board certification is controversial.
  3. In this field, there is no robust, level A evidence. The only means to achieve real scientific evidence, i.e. on the level one would use to evaluate a medical intervention, would be to randomize physicians to either doing MOC or not doing MOC and then look at patient outcomes. Randomization would have to be blinded, i.e., the physician would have to somehow not know to which arm of the study they were assigned. Such a study would be impossible to execute. Therefore, most of the literature consists of registries and surveys. Furthermore, it is very difficult to show differences in low frequency, “hard” patient outcomes like mortality. As a result, most studies use surrogate patient outcomes, like the number of times the physician ordered lipid levels or checked a patient’s retina for diabetic disease. The lack of level A data and hard outcomes is an important limitation of much of the literature.
  4. I believe most of the research on MOC, including the articles written by conflicted authors, has been conducted and reported honestly. My criticism is the interpretation of the studies by the ABMS member boards in their marketing materials. Note how most of the studies reviewed below are listed by the ABIM as supporting the benefits of MOC. However, if you read the actual papers referenced, you will find the data unconvincing.

David Cohen (DJC) Comments/Overall Assessment:

In general, I would say that the literature is mixed as to whether MOC improves patient care or outcomes and that the effects that were noted in the positive studies were fairly modest (although this is hardly surprising, given the complexity of patient care). Only one of the studies that I was provided was a randomized trial (which provides the strongest type of evidence), and that study was largely negative. Several observational studies do suggest a relationship between board certification, time since certification, or MOC processes, and outcomes. However, as with all observational studies, there is the possibility that the results are explained by unmeasured factors other than MOC, per se. On the other hand, even though the methodology of these observational studies is necessarily complex, I do not see any obvious or egregious methodologic errors with these analyses. Several of the studies are purely qualitative and should be seen as descriptive; they do not provide a lot of meaningful data.

Ajay Kirtane (AJK) Comments/Overall Assessment:

In reviewing the 10 manuscripts provided, I was struck by the limitations of the evidence base specifically regarding the current implementation of MOC. Several of the studies are descriptive only, and even these illustrate the difficulties in execution of some of the MOC content (e.g. performance improvement modules). Some of the studies do not draw meaningful distinctions between initial certification and subsequent MOC. Additionally, the issue of “grandfathering” (something directly contrary to the concept of ongoing MOC) is not adequately addressed in the published literature. The studies examining the association between exam performance and outcomes do not assess performance at the currently utilized pass/fail mark, but rather simply link outcomes to those who perform best on exams, without examining how MOC itself modified/influenced this association. There are limited observational data that show improvements in process outcomes with MOC-type implementations, but in my opinion the effects are mild-to-modest at best. The sole randomized trial in the literature was quite underwhelming, but most importantly illustrates the challenges in MOC implementation (quite possibly why the study was negative).

In many respects, the MOC concept makes intuitive sense, but the design and execution (and the data) seem to lag significantly behind the intuitive concept. It is therefore no surprise that the survey of physician attitudes on MOC obtained the results it did, with so many physicians feeling dissatisfied with its current implementation.


1. Association Between Imposition of a Maintenance of Certification Requirement and Ambulatory Care–Sensitive Hospitalizations and Health Care Costs

Bradley M. Gray, PhD; Jonathan L. Vandergrift, MS; Mary M. Johnston, MS; James D. Reschovsky, PhD; Lorna A. Lynn, MD; Eric S. Holmboe, MD; Jeffrey S. McCullough, PhD; Rebecca S. Lipner, PhD

JAMA. 2014;312(22):2348-2357. doi:10.1001/jama.2014.12716

PST Comments

This is a key article because a) it is a negative study that found no difference in clinical outcomes when patients were treated by physicians with lifetime certification compared with those requiring MOC; b) it is written by highly paid employees of the ABIM; and c) the ABMS often uses this article as an example of MOC supportive literature because of the small reduction in cost found by the authors.

This study uses a very complex statistical analysis comparing outcomes and costs when patients are cared for by MOC-required physicians compared with grandfathered physicians who are not required to do MOC. The study found imposition of the MOC requirement was not associated with a difference in any clinical outcomes, but was associated with a small reduction in the growth of costs for Medicare beneficiaries ($167 per patient annually). Note that this paper found no differences in clinical outcomes, but is often described as supportive of MOC in ABMS marketing materials. MOC advocates often point to the small cost reduction per patient as being significant when multiplied by the large number of patients treated in the U.S. annually. There are several problems with this conclusion. First, the paper is written by highly paid ($300,000 – $400,000/yr) employees of the ABIM. Second, no differences were observed between groups until a highly adjusted statistical analysis was performed (propensity matching followed by a regression analysis). Third, as noted in Table 2 of the manuscript, Emergency Department visits are somewhat lower in patients cared for by MOC-grandfathered physicians (p=0.07), a finding that is not supportive of MOC but is not mentioned in the text. It is not clear whether the economic measures were a pre-specified endpoint, so it is possible the authors, who are enormously conflicted, conducted a fishing expedition to find any benefit they could correlate with MOC and came up with the 2% reduction in growth of costs. While the authors performed extensive statistical manipulations in order to compare the two groups, there were large differences in characteristics between them. The grandfathered physicians were older, more likely to be male, more likely to have graduated from an international medical school, and had lower scores on the initial internal medicine examination. These factors, plus others not reported or measured, like geographic region of the country and academic versus private practice, could explain the small differences in costs. Despite the severe conflicts of interest, the authors’ own formally stated conclusion at the end of the article is: “Imposition of the MOC requirement was not associated with a difference in the increase in ‘clinical outcomes’ (ambulatory-care sensitive hospitalizations) but was associated with a small reduction in the growth differences of costs for a cohort of Medicare beneficiaries.”

DJC Comments

This is one of the higher quality studies in the group, because of the quasi-experimental design that was used (quasi-experimental design refers to a study that is not truly randomized but takes advantage of a “natural experiment” that closely approximates randomization). The basic approach of this study was to use Medicare data to compare outcomes and costs of care for 2 different groups of patients according to their primary care provider (which was identified through a complex statistical algorithm). One group was patients who were cared for by primary care physicians who were initially certified in 1991 and were required to undergo MOC/recertification in 2001, and the other group was patients whose primary care physicians were initially certified in 1989 and therefore were “MOC grandfathered”. The analytic approach used was a “difference in differences” approach that used sophisticated statistical analytic techniques to compare the growth in health care spending between the 2 patient groups from 1999-2000 (before MOC) to 2002-2005 (after MOC for the non-grandfathered group). This is an appropriate analytic technique because it helps to adjust for the fact that health care utilization and costs increase as patients age, and thus by looking at the difference between patient groups over the same years, one is able to account for the aging of the population.
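
To make the difference-in-differences logic concrete, the sketch below shows how such an estimate is typically computed. This is an illustration only, not the authors’ actual analysis (which also involved propensity matching and many patient and physician covariates); the variable names, the toy data, and the use of Python with pandas/statsmodels are assumptions made purely for illustration.

```python
# Minimal difference-in-differences sketch using hypothetical data.
# Not the authors' code: variable names and values are invented.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level records:
#   cost          annual cost per Medicare beneficiary (dollars)
#   post          1 if observed in 2002-2005 (after MOC), 0 if in 1999-2000
#   moc_required  1 if the physician faced the MOC requirement (1991 cohort),
#                 0 if grandfathered (1989 cohort)
df = pd.DataFrame({
    "cost":         [6200, 6900, 6400, 7150, 6300, 6850, 6500, 7300],
    "post":         [0, 1, 0, 1, 0, 1, 0, 1],
    "moc_required": [1, 1, 1, 1, 0, 0, 0, 0],
})

# The coefficient on the interaction term is the difference-in-differences
# estimate: the change over time in the MOC group minus the change over
# time in the grandfathered group. Secular cost growth that affects both
# groups equally cancels out of this contrast.
model = smf.ols("cost ~ post * moc_required", data=df).fit()
print(model.params["post:moc_required"])
```

Because both physician cohorts’ patients age over the same calendar years, any background growth in utilization and costs is differenced away, which is the point made above about accounting for the aging of the population.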

The main findings of the study were that compared with the reference period, there were no differences in clinical outcomes tracked (ambulatory-care sensitive hospitalizations) between the MOC group and the non-MOC group. Nonetheless, there was a significant difference in adjusted costs between the 2 groups with the MOC group having annual per beneficiary costs that were $167/patient lower than for the non-MOC group (a difference of 2.5% of overall costs). The results were reasonably robust to a variety of sensitivity analyses. The fact that cost per patient fell while hospitalizations were unchanged suggests that the main driver of the cost savings was more “efficient” ambulatory care—presumably better and wiser use of things like diagnostic tests and specialty referral.

This is a highly sophisticated and technical analysis of a “natural experiment” that certainly uses state of the art analytic techniques. The results do suggest that MOC leads to a small reduction in medical care costs although the precise mechanism of these savings is unknown. Other than the fact that the study is incredibly complex (which is appropriate given the analytic challenges), I do not see any clear “red flags” in the methodology. In particular, the use of the non-MOC group who were certified just 2 years earlier than the MOC group as the control group and the difference in differences design of the analysis are state of the art for such an observational study. Nonetheless, at the end of the day, the results are fairly unimpressive with respect to both the relative (2.5%) and absolute ($167/year) magnitude of cost savings achieved.

AJK Comments

This manuscript assessed differences among Medicare beneficiaries treated by two cohorts of ABIM-certified primary care physicians over time: those who were subject to MOC requirements and those who were not due to grandfathered status. Because the study assesses changes over time among patients treated by both groups of physicians, this type of analysis (natural experiment) should be regarded as higher quality than other purely observational analyses. In the primary analysis, there were no differences between the two comparison groups in the incidence of ambulatory care-sensitive hospitalizations (ACSHs) among patients treated by these physician cohorts after propensity matching. There was a small annual per-beneficiary cost savings difference of $167 (2.5%, realized through lower imaging, laboratory, and specialist costs) observed between the two groups (in favor of the physicians subjected to MOC). What is not definitive is to what extent the MOC itself resulted in this difference (versus other differences between these two physician groups).


2. Association Between Physician Time-Unlimited vs Time-Limited Internal Medicine Board Certification and Ambulatory Patient Care Quality

John Hayes, MD; Jeffrey L. Jackson, MD, MPH; Gail M. McNutt, MD; Brian J. Hertz, MD; Jeffrey J. Ryan, MD; Scott A. Pawlikowski, MD

JAMA. 2014;312(22):2358-2363. doi:10.1001/jama.2014.13992

PST Comments

The authors are not conflicted. This is a comparison of clinical outcomes when patients are cared for by grandfathered versus non-grandfathered ABMS certified physicians in four VA hospitals. The study found no difference in patient outcomes. It completely and simply supports our position that MOC has no impact on patient care quality. ABMS always leaves this paper out of their marketing materials.

DJC Comments

This is a fairly straightforward observational study comparing processes and intermediate outcomes of care between physicians with time-unlimited board certification vs. time-limited certification (which is considered a proxy for MOC). The study was conducted in primary care clinics of 4 VA hospitals and demonstrated no differences across 10 different outcome measures. The authors note that improving processes and outcomes of care is just one goal of MOC, but certainly the one that is most straightforward to quantify.

AJK Comments

This manuscript is a retrospective analysis of 10 primary care performance measures at 4 VA medical centers. 105 primary care physicians (71 with time-limited ABIM certification and 34 with time-unlimited certification) with a mean panel size of 610 patients were included. Before statistical adjustment, time-unlimited physicians performed better in 3/10 categories, but after adjustment, there were no differences in outcomes by certification status.

The strengths of this study include the ability to ascertain performance within the well-characterized VA system. Additionally, the time from initial certification among time-unlimited physicians was long – approximately 30 years compared with approximately 15 years for the time-limited certification group – suggesting a robust comparison. Stated limitations of the study include the VA-only design (representing a system with ongoing performance benchmarking), and possible roles of medical school affiliations and continuous review within the hospitals in the study.


3. Association of Physician Certification in Interventional Cardiology With In-Hospital Outcomes of Percutaneous Coronary Intervention

Paul N. Fiorilli, MD; Karl E. Minges, MPH; Jeph Herrin, PhD; John C. Messenger, MD; Henry H. Ting, MD; Brahmajee K. Nallamothu, MD; Rebecca S. Lipner, PhD; Brian J. Hess, PhD; Eric S. Holmboe, MD; Joseph J. Brennan, MD; Jeptha P. Curtis, MD

Circulation. 2015;132:1816-1824. doi:10.1161/CIRCULATIONAHA.115.017523

PST Comments

The authors include highly compensated employees of the ABIM. This is a comparison of patient outcomes following interventional cardiology procedures, stratified according to the certification status of the performing physician. Outcomes of patients receiving PCI from board-certified physicians (i.e., those who participated in a training program and passed the initial boards) were compared with outcomes of patients treated by never-certified physicians (i.e., practice pathway) and by physicians who were once certified but let their certification lapse (i.e., were certified but did not do MOC). Outcomes were better in the board-certified physician group (i.e., physicians who did a fellowship and passed the initial certification exam) but no different in the group who let their certification lapse (non-MOC) after ten years. Thus, this paper supports initial certification, but not recertification, i.e. MOC.

DJC Comments

This is a fairly straightforward observational study that used data from the ACC-NCDR database to examine the association between board certification in interventional cardiology (either initial certification or possibly recertification) and in-hospital PCI outcomes. The analysis was adjusted for patient characteristics, physician experience and volume, and hospital characteristics. The main results of the study were that board certification in interventional cardiology was associated with a small but statistically significant reduction in in-hospital mortality and need for emergency bypass surgery but no differences in other outcomes including bleeding, vascular complications, or a composite adverse outcome indicator. There were also no differences in procedural appropriateness. In a secondary analysis in which the group of non-board certified practitioners was separated according to whether they were never certified or were initially certified but lapsed (because they did not complete maintenance of certification), the excess risk associated with lack of board certification appeared to be confined to the group who were never board certified and was not seen in those practitioners who were originally board certified but allowed their certification to lapse.

Overall, this study does suggest that there may be some value to initial board certification with respect to patient outcomes, but the effect is quite weak and may not be clinically important given the low rates of the complications that were affected. This is probably the main limitation of the study in that PCI outcomes are so favorable in the current era that detecting small to modest differences across physician groups may be very challenging. The fact that the benefit of initial board certification appeared to be similar whether or not one completed recertification and MOC suggests that the real benefit is in the initial certification process—not necessarily MOC. However, it should be noted that the subgroup of lapsed practitioners only accounted for 5% of the PCIs performed during the study period and, as such, the comparisons of this group vs. the group with both initial certification and maintenance of certification are relatively underpowered (as reflected by the wide confidence intervals for the adjusted odds ratios). This limitation mainly affects the comparison of emergency CABG rates and is less of an issue for in-hospital death where the upper bound of the confidence limit for the adjusted odds ratio only extends to 1.06.

AJK Comments

This manuscript was a retrospective analysis of data from a large national registry of PCI procedures. The analysis assessed in-hospital outcomes of procedures performed by interventional cardiologists within the registry. The crude outcomes (across endpoints of in-hospital mortality, bleeding, vascular complications, emergency CABG, and the composite endpoint) were indistinguishable among analysis groups. In adjusted analyses, the odds of in-hospital death and emergent CABG were higher among non-certified physicians. Interestingly, when the analyses were further stratified, it appeared that the absence of initial certification was associated with the increased odds of in-hospital mortality or emergent CABG; the concept of further recertification and/or MOC was not directly assessed. Notably, the rates of most complications were among the lowest in physicians with lapsed certification, and the group of physicians who were initially certified but trained in Cardiovascular Disease prior to 1999 (i.e., the longest time from initial training to outcomes assessment in this study) was used as the referent group, indirectly suggesting no improvements in outcomes with recertification/MOC.


4. Association Between Maintenance of Certification Examination Scores and Quality of Care for Medicare Beneficiaries

Eric S. Holmboe, MD; Yun Wang, PhD; Thomas P. Meehan, MD, MPH; Janet P. Tate, MPH; Shih-Yieh Ho, PhD, MPH; Katie S. Starkey, MHA; Rebecca S. Lipner, PhD

Arch Intern Med. 2008; 168(13):1396-1403. doi: 10.1001/archinte.168.13.1396

PST Comments

The authors include highly paid employees of ABIM. Physicians were grouped into quartiles based on their performance on the American Board of Internal Medicine MOC examination. The main outcome measures were the associations between physician scores on MOC exams and quality of care, assessed using a diabetes composite measure (including hemoglobin A1c testing and retinal screening), mammography screening, and lipid testing in patients with cardiovascular disease. The intent was to see if doctors who scored higher on their MOC exams did a better job ordering screening tests for their patients with diabetes.

Physicians scoring in the top quartile on the MOC exam were more likely to perform processes of care for diabetes and mammography screening compared to physicians in the lowest quartile, even after adjustment for multiple factors. There was no significant association between physicians’ performance on the MOC examination and lipid testing in patients with cardiovascular disease.

The study concludes “...physician cognitive skills, as measured by a maintenance of certification examination, are associated with higher rates of processes of care for Medicare patients.” Notice the authors do not conclude that doing MOC improves processes of care. Instead, they are using MOC test scores as a surrogate for “cognitive skills.” The conclusion could also be summarized as: Doctors who do better on tests are more likely to do a better job ordering screening tests for their patients. There is no causal association stated. Just an implication. In fact, the authors give it away in the next-to-last paragraph, which is the final part of the Limitations section: “Finally, we excluded physicians who did not take an MOC examination. We fitted additional models with a dummy variable for physicians who did not take the test and found that there was no difference in performance between physician groups with and without MOC scores. However, the distribution of scores from the physicians' initial certification examination was similar to that of the analyzed cohort and thus likely explains the lack of an association.” Thus, in the very paper often used by ABMS to support MOC, the authors state there was no difference in patient care outcomes between patients cared for by physicians who did and did not participate in MOC.

Furthermore, we see comparisons between high and low scorers, but we never see “passed” vs “failed.” Pass or fail is the only “grade” the public ever sees regarding MOC, i.e. the public is told the doctor is either Participating in MOC or Not Participating in MOC. No one but the physician sees the actual score.

The authors also point out that one limitation of the study is that most of the clinical tests evaluated (LDL levels, hemoglobin A1c levels, etc.) could be implemented by non-physician office staff. So, another explanation is that high MOC scores correlate not so much with knowing more and being a better doctor, but with being better at organizing a good system of care in your office, i.e., having a good system run by nurses and assistants.

Finally, the differences between high scorers and low scorers on patient outcomes are not that dramatic: for example, those physicians in the lowest quartile had a compliance rate only 6.2% lower than physicians in the top quartile for mammography screening and only 5.0% lower for the diabetes composite measure.

DJC Comments

This is one of the higher quality studies in the portfolio. As noted above, the authors used linked Medicare data to examine the association between “cognitive skill” (as assessed by scores on an internal medicine MOC exam) and quality of primary care (as assessed by ordering of guideline-indicated screening tests). In my opinion, the analyses are well-conducted given the inherent limitations of any observational study and demonstrate a fairly convincing association between test performance and quality of care. However, as Dr. Teirstein notes, the study may simply be measuring the association between physician intelligence (or even test-taking ability) and performance rather than the association between participating in MOC and performance. Thus, if one were to want to identify a group of physicians who would be most likely to deliver high quality care (according to the outcome measures selected), it would be better to select individuals who performed well on MOC exams rather than simply took the exam. I agree that this type of study does not really make the case that participating in MOC leads to improved care. But it would help me to pick a doctor who would provide better care.

AJK Comments

This analysis compared processes of care for diabetes, mammography screening, and lipid testing among groups of internal medicine physicians stratified by first-attempt test score on the ABIM MOC examination. In adjusted analyses, physicians in the highest quartile of scores had higher performance in processes of care for diabetes and referral to mammography but not in lipid testing. It is notable that the authors do not present data stratified by the conventional pass/fail cut point that ultimately determines recertification status. This analysis would have been easy to conduct and is the more relevant analysis to determine whether maintenance of certification is actually associated with outcomes. Additionally, it is somewhat intuitive that physicians who do better on a standardized test might perform better than those who do not, but this does not mean that the MOC process itself is the reason. That the comparison between physicians who did not take the test and those who did (in the limitations) showed no difference in outcomes is particularly notable, especially when one considers that these physicians’ distribution of scores on the initial certification examination was similar to that of the analyzed cohort. This suggests that it is overall test-taking performance (and not the MOC examination) that may be driving the results.


5. The association between physicians' cognitive skills and quality of diabetes care.

Hess BJ, Weng W, Holmboe ES, Lipner RS.

Acad Med. 2012 Feb;87(2):157-63. doi: 10.1097/ACM.0b013e31823f3a57.

PST Comments

This paper is similar to paper #4 above by Holmboe et al. It is also written by highly paid employees of the ABIM. It is a more recent paper (2012 vs 2008) but only looks at 676 physicians. The paper aims to correlate physician scores on the MOC exam with physicians ordering screening tests (retinal exam, foot exam, blood pressure control, A1c at goal, LDL control). The authors conclude, “Physicians' cognitive skills significantly relate to their performance on a comprehensive composite measure for diabetes care. Although significant, the modest association suggests that there are unique aspects of physician competence captured by each assessment alone and that both must be considered when assessing a physician's ability to provide high-quality care.”

As with the paper above, MOC scores are being used as a surrogate for cognitive skills. There is no evidence that doing MOC improves the outcome measured. The study shows doctors who do better on a written test do better ordering laboratory tests on their patients. No causation is claimed.

In my view, a major problem with this small study is the investigators measured patient outcomes (number of screening tests patients received) by looking at each physician's Practice Improvement Module (PIM). The PIM is part of the MOC process. Physicians are asked to abstract the charts of 25 of their patients to see how often the screening tests were done and how often their patients were in good diabetes control. The data is entirely self-reported with no auditing. The physicians who were part of the study only managed to abstract an average of 21 charts, not 25. I don’t see how this paper can be quoted by ABMS as supportive of MOC when the data on physician behavior is self-reported by the physicians being tested.

It is also noteworthy that the authors’ conclusion is not very strong, “Although the associations that we observed were statistically significant, they were modest and not surprising given the complexity of clinical practice.”

The authors also point out that one limitation of the study is that most of the clinical tests evaluated (LDL levels, hemoglobin A1c levels, etc.) could be implemented by non-physician office staff. So, another explanation is that high MOC scores correlate not so much with knowing more and being a better doctor, but with being better at organizing a good system of care in your office, i.e., having a good system run by nurses and assistants.

DJC Comments

The statistical methods employed by this study are reasonable. As noted above, the main finding of this study is that scores on the internal medicine MOC exam demonstrated modest correlation with a composite measure of diabetes care, with a stronger correlation specifically with the endocrinology component of the MOC. I agree that the main limitation of this study is that the outcome measure (diabetes composite score) was derived from what are essentially self-reported data from review of approximately 20 patient charts. There is no assurance that these charts are truly representative or sequential, or even abstracted correctly. In addition, it is possible that these differences relate more to intrinsic characteristics of the physicians (e.g., intelligence, learning ability, etc.) rather than to taking the MOC, per se.

AJK Comments

This analysis correlates ABIM MOC Examination scores from 676 physicians with time-limited certification in internal medicine with their practice performance using a composite diabetes measure based upon the ABIM’s diabetes practice improvement module (PIM, derived through physician audits). Overall examination scores correlated with the diabetes composite score, and the correlation was highest for performance on the endocrine disease component of the examination. Examination scores also correlated with the process subcomposite measure as well as the patient experience measure, although the latter was the weakest association. Notably, the overall explanatory power of the model was only 13%. This analysis does not dichotomize between failing/passing scores (what certification really represents) and does not enable a distinction between overall cognitive/test-taking ability vs. the effect of participation in the MOC process.


6. Effect of Board Certification on Antihypertensive Treatment Intensification in Patients With Diabetes Mellitus

Alexander Turchin, MD, MS; Maria Shubina, DSc; Anna H. Chodos, BA; Jonathan S. Einbinder, MD, MPH; Merri L. Pendergrass, MD, PhD

Circulation. 2008;117:623-628

PST Comments

I did not find any conflicts on the part of the authors.

This was a retrospective cohort study looking at the association between the number of years since the physician’s last board certification and the probability of pharmacological antihypertensive treatment intensification at a given visit. The authors found the frequency of treatment intensification decreased from 26.7% for physicians who were board certified the previous year to 6.9% for physicians who were board certified 31 years before the visit. “Treatment intensification rate was 22.5% for physicians certified <10 years ago versus 16.9% for physicians last certified >10 years ago (P<0.0001).” The authors conclude that “intensification of pharmacological therapy for blood pressure levels above the recommended treatment goals decreases with time since the last board certification.”

If one looks at the key data in Figure 1 of the manuscript, there is a precipitous drop in treatment intensification 30 years after certification. These are the much older doctors, often with a different kind of practice compared to the recently certified. If you exclude this group (likely a very small number of doctors; we never learn how many), the lower figure goes from about 6% to 16%, i.e. the docs who were just certified or recertified intensified treatment 26% of the time vs only 16% for the docs years away from taking the boards. This is not a very impressive difference. Neither group demonstrated an impressive amount of treatment intensification, so the impact of recertification, if it exists, is rather small. Also, we are not told how high the patients’ blood pressure was. Is it possible that as doctors age, their patients’ blood pressure is more likely to be in control, so that while the BP may have been high on the day of the appointment, in the older, more established practices it was not high enough to warrant medication intensification? A doctor who has seen a patient for a decade might see an isolated BP of 140/85 and say, “Come back and check it again in two weeks.”

Another criticism is that this data does not imply a causal relationship between recertification/MOC and intensification of hypertension treatment. Rather, it is just an association.

DJC Comments

I agree with the summary of findings by Dr. Teirstein above. However, other than the observational nature of this study (which is true of virtually all the studies in this field), I do not find a lot of obvious methodologic flaws in this study. The authors did their best to adjust for differences in physician age as well as for the actual level of blood pressure and the frequency of BP checks, so many of the potential confounding factors noted have at least been considered in the analysis. The fact that the association between time from certification (or recertification) and the frequency of treatment intensification is somewhat weak does not detract from the main finding that there was a relationship in the hypothesized direction.

Overall, the fact that this is one of the few positive studies for MOC that was not written by an employee of ABIM or sponsored by ABIM lends further credence to the results.

AJK Comments

This observational analysis assessed treatments of diabetic patients with hypertension, specifically addressing whether physicians differed in their frequency of intensification of antihypertensive treatment based upon years since last board certification. The study was conducted among patients treated by academically-affiliated internists at the MGH and Brigham from 2000-2005. Rates of treatment intensification (for BP above target range) were low overall, and there was a negative association between years after last board certification and the frequency of treatment intensification among these patients. This association was most pronounced for physicians >30 years after board certification, but remained relatively constant for physicians who were within 15 years of board certification. Physician age was not associated with the frequency of treatment intensification when years from board certification was also introduced into the multivariable model. Overall, these data do suggest some association between board certification and the outcome measured.

7. The Impact of a Preventive Cardiology Quality Improvement Intervention on Residents and Clinics: A Qualitative Exploration

Elizabeth C. Bernabeo, MPH, Lisa N. Conforti, MPH, Eric S. Holmboe, MD

Am J Med Qual 2009;24: 99-107

PST Comments

This work was sponsored by ABIM and the authors include highly paid employees of the ABIM. This study explored the impact of the Preventive Cardiology Practice Improvement Module (PC-PIM) on residency clinics. Residents did Practice Improvement modules that are part of MOC. The authors then interviewed the residents and training program directors, asking them questions about how they felt about the module. There is no data provided. The results section is a series of anecdotes. They state, “results from 22 clinic interviews indicated merit in using the PC-PIM to teach QI during residency. Many residents reported increased knowledge and confidence, particularly regarding the value of QI. The majority recognized that QI often leads to improved patient care and outcomes, even in resource poor environments.” This paper is written by conflicted authors who interviewed conflicted subjects. The results section is simply a description of what the subjects thought and felt about the experience. An analogy might be if your boss called you on the telephone and asked if you thought you worked for a good company or a bad company. It would not be a surprise if everyone reported they worked for a good company.

DJC Comments

This is a qualitative research study, meaning it was based on structured but open-ended interviews and is largely descriptive—looking for themes. Whether the residents who were interviewed were conflicted is difficult to know, but it appears that a number of the interviews were conducted jointly with both a resident and a supervising faculty member, who was often a champion for the quality-improvement project. Therefore, it would not be surprising that some degree of bias was induced by the joint interview process.

AJK Comments

This study explored the impact of the Preventive Cardiology Practice Improvement Module (PC-PIM) on residency clinics through interview methodology. Outcomes or process changes were not reported here. The overall study included 720 participating internal medicine residents at 23 ambulatory sites; the results in this publication include those from 22 interviews at 15 training sites with faculty and resident “champions” (in 7 cases both together). Descriptive terminology is used, and in general “most clinics reported a successful experience and results demonstrated multiple aspects of positive impact from implementing the PC-PIM”. Another summary description states “Overall, the themes that emerged to describe the value of doing the PC-PIM at the resident level were the following: (1) learning the value of a QI team, as well as how to form one; (2) recognizing that QI often leads to improved patient care and outcomes, even in resource poor environments; and (3) increased knowledge and confidence regarding performing QI activities.” Whether these were influenced by the inherent selection bias among interviewees who participated is not known.


8. Promoting Physicians’ Self-Assessment and Quality Improvement: The ABIM Diabetes Practice Improvement Module

Eric S. Holmboe, MD; Thomas P. Meehan, MD, MPH; Lorna Lynn, MD; Paula Doyle, BS, MBA; Tierney Sherwin; and F. Daniel Duffy, MD

The Journal of Continuing Education in the Health Professions, Volume 26, pp. 109-119.

PST Comments

The authors include highly paid employees of the ABIM. The study was funded by the ABIM.

Sixteen practicing general internists and endocrinologists with 10-year time limited certification participated in a beta test of the ABIM’s diabetes practice improvement module (PIM) as part of their recertification program. A PIM consists of a self-directed medical record audit, practice system survey, and patient survey.

Fourteen physicians completed the diabetes PIM. All but 1 physician found the medical record audit provided important information about the practice. Of the 11 physicians who completed a follow-up interview, 10 stated that the quality improvement education specialist helped improve their practice.

The authors conclude, “Self-assessment using practice improvement modules as part of maintenance of certification programs can lead to meaningful behavioral change by physicians in quality improvement.”

The major criticism of this study can be found in the “Limitations” paragraph: “The study has several limitations. First, the sample size was small, and the physicians were self-selected.” In my opinion this means the physicians whose work was studied all volunteered to do the Practice Improvement Module and were likely inclined to support it.

DJC Comments

Like the preceding study (#7), this is a qualitative research project that attempted to explore physicians’ impressions of the diabetes quality improvement module as part of the maintenance of certification process for internal medicine. As a qualitative study, it is mainly meant to be descriptive and hypothesis generating. It does suggest that the physicians who participated in this exercise found the QI module to be of some value to their practice. Whether this truly leads to improved care cannot be determined. In addition, it is difficult to know from the study’s methodology whether the physicians who participated were representative of general practicing physicians and to what extent they were predisposed to be in favor of a practice improvement module.

AJK Comments

This manuscript reports the self-reported and interview-based outcomes of 16 practicing internists and endocrinologists in Connecticut after participation in a beta test of the ABIM’s diabetes practice-improvement module (PIM). The analysis is largely descriptive, and the most valued items described by the physicians included the practice audit and the patient survey. Notably, 21 physicians began the module and started the baseline record assessment, but only 16 completed it (a dropout rate of >20%). Outcomes related to changing practice (either temporary or lasting) related to performance of the PIM are not reported in this publication. This publication well characterizes the challenges in implementation of a PIM in real-world clinical practice.


9. Improving Asthma Care Through Recertification. A Cluster Randomized Trial

Jan Simpkins, MA; George Divine, PhD; Mingqun Wang, MS; Eric Holmboe, MD; Manel Pladevall, MD, MS; L. Keoki Williams, MD, MPH

Arch Intern Med. 2007;167(20):2240-2248

PST Comments

The authors include highly paid employees of the ABIM. The study was funded by the ABIM.

As part of recertification, the American Board of Internal Medicine requires completion of at least 1 practice improvement module (PIM). The authors assessed whether completing an asthma-specific PIM resulted in improved patient outcomes.

The primary outcome was the dispensing of an inhaled corticosteroid (ICS) after an office visit for asthma. Secondary outcomes included patient-reported processes of care, asthma-related health care use, and asthma severity.

For the primary outcome, patients seen by intervention group physicians were not more likely to fill an ICS prescription in the post-intervention period than patients seen by control group physicians (adjusted odds ratio = 1.0). Patients seen for asthma by intervention group physicians were less likely to receive a written action plan than patients seen by control group physicians (adjusted odds ratio = 0.67). However, patients seen by the intervention group were more likely to discuss potential asthma triggers and had lower self-reported asthma severity measures (not the primary endpoint). The authors conclude that a “PIM designed to improve asthma care did not improve filling of ICS prescriptions but may have lessened asthma severity through an increased discussion of asthma triggers.”

This is an odd report in that it is used by ABIM in their marketing and also quoted in their editorials as supporting MOC, yet the study was undeniably negative. The only data point supporting the intervention was that patients had “lower self-reported asthma severity measures (unadjusted P=.03).” But this was not a pre-specified end point and the difference in asthma severity is never provided. All we are told in the results section is: “In the unadjusted analysis, patient-reported asthma severity (ie Asthma Symptom Utility Index score) was significantly lower in patients seen by physicians in the intervention group (P=.03) but was of borderline significance after adjustment (P=.09) (Table 5).” I am surprised ABMS continues to promote such a negative trial in their marketing materials as being supportive of MOC.

DJC Comments

The main strength of this particular study is that it is one of the few randomized trials in the field to assess the relationship between recertification (in this case, a specific practice improvement module [PIM]) and processes of care. The design of the study was a cluster randomized trial conducted at the practice level to assess whether performing a practice improvement module led to increased fill rates for inhaled steroids in asthma patients (an established quality measure). And as noted above, for the primary endpoint, the study was unequivocally negative (albeit somewhat underpowered). The authors did find a relationship between assignment to PIM completion and follow-up asthma severity, but this was a secondary (or possibly tertiary) endpoint. As such, the findings can only be considered hypothesis-generating despite the fact that they occurred in the setting of a randomized trial. Despite the authors’ arguments to the contrary, it is very likely that this positive finding (among a large number of endpoints tested) is a false positive. As such, I agree with Dr. Teirstein’s review that this study provides fairly weak evidence of the benefit of PIM in improving asthma care or outcomes. One thing to note about this study is that the rate of completion of the PIM intervention was quite modest, and this may have biased the study toward the null hypothesis.
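
Because randomization here was at the practice level rather than the patient level, outcomes of patients within the same practice are correlated, and any analysis has to account for that clustering. The sketch below is a minimal illustration of one standard way to do this (generalized estimating equations with an exchangeable working correlation); it uses hypothetical data and invented variable names, and it is not the trial’s actual statistical model.

```python
# Minimal sketch of a cluster-aware analysis for a practice-level
# randomized trial. Hypothetical data; not the trial's actual model.
import pandas as pd
import statsmodels.api as sm

# Hypothetical patient-level records:
#   filled_ics    1 if an inhaled corticosteroid prescription was filled
#   intervention  1 if the patient's practice was randomized to the PIM arm
#   practice_id   cluster identifier (randomization was by practice)
df = pd.DataFrame({
    "filled_ics":   [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    "intervention": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "practice_id":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
})

# GEE with an exchangeable working correlation treats patients in the same
# practice as correlated, so the standard error of the intervention effect
# reflects the number of practices rather than the number of patients.
model = sm.GEE.from_formula(
    "filled_ics ~ intervention",
    groups="practice_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(model.summary())
```

Ignoring the clustering (analyzing patients as if they had been independently randomized) would overstate the effective sample size, which is one reason cluster trials of this size are easily underpowered, as noted above.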

AJK Comments

This is the report of a cluster randomized trial of 16 practices in which physicians underwent a practice improvement module for asthma care. The primary outcome was fill rates for inhaled corticosteroids, and was no different between the intervention group and the control group. Remarkably, only 5 of the intervention physicians (26%) actually completed the intervention (even though 10 of 19 initiated it), clearly biasing towards the null, but perhaps emphasizing the challenges of implementation of such a module (in a randomized trial that was designed to study it)! A written action plan was less likely to be received by patients treated in practices randomized to intervention, although these patients were more likely to discuss asthma triggers. Of note, while the unadjusted asthma acuity score was improved in the intervention group, this was not significant in adjusted analyses (only the unadjusted p value is reported in the abstract). Overall, this trial failed its primary endpoint, and among multiple secondary comparisons, only the significant ones are reported, which is not appropriate.


10. Mayo Clinic: Physician Attitudes About Maintenance of Certification – A Cross-Specialty National Survey

David A. Cook, MD, MHPE, Morris J. Blachman, PhD, Colin P. West, MD, PhD, Christopher M. Wittich, MD, PharmD

PST Comments

Mayo Clinic Proceedings: A survey of physicians’ attitudes on MOC in 2016. Only 15% agree with the statement “MOC is worth the time and effort.” In the online addendum you will find data indicating that if physicians who only “slightly agree” with the above statement are removed, this number drops to 4%. This is just a survey, but it is worth including because it shows the near-universal physician opinion that MOC is not helpful.

AJK Comments

This reports the results of an internet and paper survey on MOC across specialties. The response rate of 988/4583 is fair, but expected for a survey of this kind. Remarkably, only 24% of physicians agreed that MOC activities were relevant to their patients, and 15% felt they were worth the time and effort. A total of 27% felt that they had adequate support in completing MOC activities and 12% felt that these activities were well integrated with clinical practice. A total of 81% felt that MOC was a burden to them, and 9.1% felt that patients cared about MOC status. In secondary questions, the pessimistic outlook on MOC (including PIMs) persisted across this sample, with 22% feeling that MOC self-assessment activities contributed to professional development. In further analyses, there were no associations between these perceptions and other demographic characteristics of the surveyed physicians. Overall, these survey results are remarkable for the pessimism expressed toward the MOC process.