Third Annual Survey Now Open

Image courtesy American Medical Writers Association
Our third annual survey of best practices opened on September 22, 2016. Click here to begin the questionnaire. New this year: questions about key quality indicators, market conditions, and pricing. Everyone who completes the survey will receive the raw results by Thanksgiving. This is an ideal professional and business development activity for members of both the American Medical Writers Association and the Alliance for Continuing Education in the Health Professions. The survey will close on Sunday night, October 9. Follow me on this blog, on LinkedIn, or on Twitter (@CME_Scout) for more info.

Award-Winning Poster


This poster won the "People's Choice" award during the annual meeting of the Alliance for Continuing Education in the Health Professions in January 2016. The original poster measured 56 inches across the top. To receive a PDF of the original, send an email to don (at) hartingcom.com. A text report is also available upon request.

What's All This Buzz About SQUIRE?

Medical writers and editors active in the accredited CME space may be interested to learn a bit more about SQUIRE, a new standard outcomes reporting format used by a growing number of peer-reviewed journals. SQUIRE stands for Standards for Quality Improvement Reporting Excellence and was pioneered a few years ago by the British Medical Journal, better known as the BMJ. A special version of SQUIRE has been developed for use by members of the Alliance for Continuing Education in the Health Professions. A conference call aimed at recruiting grant support for wider use of the "ACEhp SQUIRE tool" was held this afternoon, and I was fortunate enough to be invited.

Here's a link via Dropbox to a very brief (7-slide) presentation based on our call, pared way back to focus on the elements of interest to my fellow medical writers active in the accredited CME space.

Isn't it cool how SQUIRE originated with an effort to promote EXCELLENT WRITING???

By the way, the photo above comes from a brochure promoting a writers' conference to be held next month at Dartmouth College in New Hampshire. Sounds like fun; I sure wish I could go.

Wanted: A Deep Dive into CME Needs Assessment

The CME Cycle
The American Medical Writers Association just asked for my input on a survey aimed at helping our organization develop a plan for serving members in the future. I filled it out and added this when asked for additional suggestions on educational topics:


"A deep dive into the CME needs assessment as a special type of document: what it is, why it is so important, how to write one, sources of evidence, how to present the evidence, how to reference and attribute your sources, how to get hired to write NAs, what the NA of the future will look like, how much to charge, how much lead time to insist upon, how to include the patient perspective, how PI-CME and QI-CME are affecting how we write needs assessments, and how to overcome the most common barriers encountered while writing needs assessments, including the appearance of commercial bias."

2nd Annual Survey of Best Practices in CME Needs Assessment

The CME Needs Assessment Cycle
We had such good results last fall we are coming back for more. This year's survey contains 5 totally new questions and 5 repeat questions. We also have a new co-investigator: Nathalie Turner, MS, ELS. Survey closes Oct. 9; the first 100 people who respond will receive the raw results by Thanksgiving. Here's your chance to keep abreast of this fast-changing field, and contribute to our profession at the same time.

Another Reality Check: Good Charting Vs. Good Care

This article by Greene and Moreo is one of my favorites because the journal is top-tier and the writing style is so clear and pragmatic. For example, in the "Lessons and Limitations" section, the authors note that some of the physicians involved in the chart audit pointed out that just because a patient's chart is not up to date doesn't mean the patient is receiving poor-quality care. Again, the sample size is rather small (N=30 gastroenterologists), but the choice of quality measures seems unassailable because they align with both the National Quality Strategy and the Physician Quality Reporting System.



Here's a Published Example of a Failed Intervention

I love how this article from Canada by Shah et al. doesn't mince words: "The cardiovascular disease management toolkit failed to reduce clinical events or improve quality of care for patients with diabetes." I also love how the article includes a plain and simple "Editors' Summary" near the top, the style of which will be familiar to anyone who has used the Standards for Quality Improvement Reporting Excellence (SQUIRE). Funding for this large-scale intervention came from the Canadian government, and the poor outcomes highlight how we need to do more -- much more -- than just mail boxes of brightly colored printed materials to family physicians if we want to improve the quality of diabetes care.


EHRs Frustrate British Docs, Too

This outcomes report by Michael and Patel from a hospital in England is a favorite because it sheds light on how physicians on the other side of the Atlantic are trying to cope with 2 problems often encountered in the United States: docs' frustration with electronic health records (EHR) and poor weekend safety for hospital patients. While measurable improvement in EHR use was noted even after the training ended, nearly half of the physicians surveyed still said they found using the computer too time-consuming.


Another Scalp on This Lead Author's Belt

This article by S. Stowell, et al. is an easy read, no doubt in part because the lead author has been involved in writing so many CME outcomes reports in recent years. It's also shorter than many other articles, and it deals with an unusual topic: shared medical visits, where a single physician cares for more than one diabetes patient at a time. A nice feature of the intervention is how a control group was included -- you don't see that very often. Outcomes showed significant increases in several important areas, even at the 30-day follow-up.


They Even Published the Meeting Agenda

This article from 2014 remains a favorite for a number of reasons. First off, the lead author, Wendy Nickel, works next door at the American College of Physicians in Philadelphia. Next, the full text is available on a fancy new Elsevier open access platform. It's a train-the-trainer type project, aimed at building self-confidence among leaders of quality improvement projects by giving them some easy practice. The specific intervention involved preventing infections from catheters. I'm no clinician, but that sounds like a simple way to get started. Colorful graphics make the article easy to read. Here's the cherry on top: Elsevier's editors saw fit to publish the live meeting agenda, complete with learning objectives, continental breakfast, lunch, and 2 breaks (!).


When CME Becomes Foreign Aid, and Vice Versa

The project reported in this article by J. Lowe, et al. comes from the country of Guyana on the continent of South America. While technically this is a continuing medical education intervention, it is also an example of foreign aid, as the project was funded by the Canadian government. I like how the intervention was based on a firm understanding of adult learning principles, and how it involved an interprofessional team of key opinion leaders. Best of all: reported outcomes are strong. A total of 340 Guyanese health professionals were trained to improve care for diabetics, and a significant reduction in the number of foot amputations was observed.


When No Change is Good News

This article from London tears at my heartstrings. P. Lachman, et al. based their intervention in a small inpatient ward of a children's hospital and aimed to determine ways that both children and parents could more effectively report any harm being done to them by hospital employees. The overall goal was to measure nurses' perceptions of any change in the ward's safety culture over the course of a year. Investigators tested 10 different versions of a reporting tool. (Not surprisingly, the kids liked the version with the most pictures.) The bad news is also the good news: safety climate scores were already high, and no significant change was observed during the study period.


A Report on 50 Reports

This study from Italy is interesting for how it is designed as a meta-analysis of performance improvement (PI) initiatives aimed at increasing compliance with clinical guidelines on the management of sepsis. It's a bit long, with many data tables, but the upshot appears to be that implementation of PI programs increases guideline compliance and decreases mortality in patients with sepsis. Clearly the Italian definition of "performance improvement" differs from the American definition, as the authors based their meta-analysis on 50 studies (!).


This Project Changed Physician Prescribing Patterns


This report of a PI-CME project by Greenspan et al, published in Journal of Women's Health, is notable because the intervention resulted in a statistically significant change in physician prescribing patterns. Prescriptions for denosumab rose, while prescriptions for estrogen declined. The sample size of 75 physicians seems pretty encouraging when compared to even smaller numbers in other PI-CME projects.


Significant Difference: P Doesn't Always Equal .05

Here's another gem where the name of the lead author, Georgetown University's John Marshall, MD, will be familiar to many in this important therapeutic area: colorectal cancer. The article in Journal of Oncology Practice does a nice job of telling the story of a typical 3-step PI-CME project. While only 27 people (mostly physicians) participated, outcomes improved significantly, especially with respect to patient safety and supportive care.

Here's something unusual: The small sample size required investigators to choose a non-standard significance level of 10% (P < .10) to detect important trends.





These Investigators Also Sought to Measure Compassion

This article by F.R. Hirsch, et al. in Cancer Control is one of my favorites. It's a classic PI-CME intervention in one of my favorite tumor types: non-small cell lung cancer. Admittedly the sample size is small (22 physicians), but that's par for the course for PI-CME projects. Very cool how the outcomes show care improvements not just in the use of molecular markers for treatment decisions, but also in assessment of patients' emotional well-being.




Old Folks Feeling Better in Fort Worth

This article by C. Reid, et al. in Annals of Long-Term Care is a nice write-up of a small-scale QI initiative related to pain management. The work took place at a long-term care facility in Fort Worth, Texas. Sponsors deserve a lot of credit for trying to help elderly patients, for involving clinicians in planning and implementing the project, and for basing their outcomes report on results of chart reviews.  It's also quite interesting how the use of opioids decreased while the use of neuropathic agents increased.


A Hidden Gem from Reading, Pennsylvania

Can anything good come out of Reading, Pennsylvania? The answer is yes! This article by R. Hingorani, et al. in the Journal of Community Hospital Internal Medicine Perspectives is a hidden gem, despite originating from a town that has seen better days. Readers can gain free access via PubMedCentral, the results are recent, the authors focus on changes in physician performance, and the intervention involves clinical decision support tools in a highly relevant therapeutic area: acute respiratory infections. Too bad the journal name abbreviates to J-CHIMP.




A Methods Article That Actually Reports Outcomes

This article by R. Gold, et al. in Implementation Science is another one of my faves. It's very recent, the authors registered the project as a clinical trial, they used a control group, and the disease state (diabetes) is highly prevalent. The journal editor labeled it a methodology article, but it reports outcomes. The authors even obtained funding from the National Heart, Lung, and Blood Institute. What's not to like?


A Healthy Dose of Reality

This article by R.C. Amland, et al. in the Journal for Healthcare Quality is one of my favorites. It's recent, it's based on a large sample size, the outcomes show impressively large changes, and the disease state is common. It also includes a healthy dose of reality -- the authors state in the "Implications for Practice" section that physicians were annoyed when interrupted by computerized flags on the decision support reminders. But how was the study funded?