Friday, August 14, 2015

Another Reality Check: Good Charting Vs. Good Care

This article by Greene and Moreo is one of my favorites because the journal is top-tier and the writing style is so clear and pragmatic. For example, in the "Lessons and Limitations" section, the authors note that some of the physicians involved in the chart audit pointed out that just because a patient's chart is not up to date doesn't mean that the patient is receiving poor quality care. Again, the sample size is rather small (N=30 gastroenterologists), but the choice of quality measures seems unassailable because they align with both the National Quality Strategy and the Physician Quality Reporting System.



Here's a Published Example of a Failed Intervention

I love how this article from Canada by Shah et al. doesn't mince words: "The cardiovascular disease management toolkit failed to reduce clinical events or improve quality of care for patients with diabetes." I also love how the article includes a plain and simple "Editors' Summary" near the top, the style of which will be familiar to anyone who has used the Standards for Quality Improvement Reporting Excellence (SQUIRE). Funding for this large-scale intervention came from the Canadian government, and the poor outcomes highlight how we need to do more -- much more -- than just mail boxes of brightly colored printed materials to family physicians if we want to improve the quality of diabetes care.


EHRs Frustrate British Docs, Too

This outcome report by Michael and Patel from a hospital in England is a favorite because it sheds light on how physicians on the other side of the Atlantic are trying to cope with 2 problems often encountered in the United States: docs' frustration with electronic health records (EHR) and poor weekend safety for hospital patients. While measurable improvement in EHR use was noted, nearly half of the physicians surveyed said that, even after the training ended, they found using the computer too time-consuming.


Another Scalp on This Lead Author's Belt

This article by S. Stowell, et al. is an easy read, no doubt in part because the lead author has been involved in writing so many CME outcomes reports in recent years. It's also shorter than many other articles, and it deals with an unusual topic: shared medical visits, where a single physician cares for more than one diabetes patient at a time. A nice feature of the intervention is that a control group was included -- you don't see that very often. Outcomes showed significant increases in several important areas, even after 30 days of follow-up.


They Even Published the Meeting Agenda

This article from 2014 remains a favorite for a number of reasons. First off, the lead author, Wendy Nickel, works next door at the American College of Physicians in Philadelphia. Next, the full text is available on a fancy new Elsevier open access platform. It's a train-the-trainer type project, aimed at building self-confidence among leaders of quality improvement projects by giving them some easy practice. The specific intervention involved preventing infections from catheters. I'm no clinician, but that sounds like a simple way to get started. Colorful graphics make the article easy to read. Here's the cherry on top: Elsevier's editors saw fit to publish the live meeting agenda, complete with learning objectives, continental breakfast, lunch, and 2 breaks (!)


When CME Becomes Foreign Aid, and Vice Versa

The project reported in this article by J. Lowe, et al. comes from the country of Guyana on the continent of South America. While technically this is a continuing medical education intervention, it is also an example of foreign aid, as the project was funded by the Canadian government. I like how the intervention was based on a firm understanding of adult learning principles, and how it involved an interprofessional team of key opinion leaders. Best of all: the reported outcomes are strong. A total of 340 Guyanese health professionals were trained to improve care for diabetics, and a significant reduction in the number of foot amputations was observed.


When No Change is Good News

This article from London tears at my heartstrings. P. Lachman, et al. based their intervention in a small inpatient ward of a children's hospital and aimed to determine ways that both children and parents could more effectively report any harm being done to them by hospital employees. The overall goal was to measure nurses' perceptions of any change in the ward's safety culture over the course of a year. Investigators tested 10 different versions of a reporting tool. (Not surprisingly, the kids liked the version with the most pictures.) The bad news is also the good news: safety climate scores were already high, and no significant change was observed during the study period.


A Report on 50 Reports

This study from Italy is interesting because it is designed as a meta-analysis of performance improvement (PI) initiatives aimed at increasing compliance with clinical guidelines for the management of sepsis. It's a bit long, with many data tables, but the upshot appears to be that implementation of PI programs increases guideline compliance and decreases mortality in patients with sepsis. Clearly the Italian definition of "performance improvement" differs from the American definition, as the authors based their meta-analysis on 50 studies (!)


This Project Changed Physician Prescribing Patterns


This report of a PI-CME project by Greenspan et al., published in the Journal of Women's Health, is notable because the intervention resulted in a statistically significant change in physician prescribing patterns. Prescriptions for denosumab rose, while prescriptions for estrogen declined. The sample size of 75 physicians seems pretty encouraging when compared with the even smaller numbers in other PI-CME projects.


Significant Difference: P Doesn't Always Equal .05

Here's another gem where the name of the lead author, Georgetown University's John Marshall, MD, will be familiar to many in this important therapeutic area: colorectal cancer. The article in the Journal of Oncology Practice does a nice job of telling the story of a typical 3-step PI-CME project. While only 27 people (mostly physicians) participated, outcomes improved significantly, especially with respect to patient safety and supportive care.

Here's something unusual: The small sample size led the investigators to choose a non-standard significance level of 10% (P < .10) in order to detect important trends.
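(For readers who don't live and breathe statistics: "significant" simply means the P value came in under whatever threshold was chosen before looking at the data. Here's a tiny Python sketch of the idea. The chart-audit counts are invented for illustration only; they are not from the Marshall article.)

```python
from scipy.stats import fisher_exact

# Hypothetical chart-audit counts (charts that met / did not meet a quality
# measure) -- made up for illustration, not taken from any published study.
pre_met, pre_not = 10, 17     # baseline audit of 27 charts
post_met, post_not = 18, 9    # follow-up audit of 27 charts

# Fisher's exact test on the 2x2 table of before-vs-after counts.
_, p_value = fisher_exact([[pre_met, pre_not], [post_met, post_not]])

print(f"P = {p_value:.3f}")
print("Significant at the usual P < .05 threshold? ", p_value < 0.05)
print("Significant at the relaxed P < .10 threshold?", p_value < 0.10)
```

The same data can "count" as a trend at one threshold and not at the other, which is exactly the trade-off the authors made explicit.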





These Investigators Also Sought to Measure Compassion

This article by F.R. Hirsch, et al. in Cancer Control is one of my favorites. It's a classic PI-CME intervention in one of my favorite tumor types: non-small cell lung cancer. Admittedly, the sample size is small (22 physicians), but that's par for the course for PI-CME projects. Very cool how the outcomes show care improvements not just in the use of molecular markers for treatment decisions, but also in the assessment of patients' emotional well-being.




Old Folks Feeling Better in Fort Worth

This article by C. Reid, et al. in Annals of Long-Term Care is a nice write-up of a small-scale QI initiative related to pain management. The work took place at a long-term care facility in Fort Worth, Texas. The sponsors deserve a lot of credit for trying to help elderly patients, for involving clinicians in planning and implementing the project, and for basing their outcomes report on the results of chart reviews. It's also quite interesting how the use of opioids decreased while the use of neuropathic agents increased.


A Hidden Gem from Reading, Pennsylvania

Can anything good come out of Reading, Pennsylvania? The answer is yes! This article by R. Hingorani, et al. in the Journal of Community Hospital Internal Medicine Perspectives is a hidden gem, despite originating from a town that has seen better days. Readers can gain free access via PubMed Central, the results are recent, the authors focus on changes in physician performance, and the intervention involves clinical decision support tools in a highly relevant therapeutic area: acute respiratory infections. Too bad the journal name abbreviates to J-CHIMP.




A Methods Article That Actually Reports Outcomes

This article by R. Gold, et al. in Implementation Science is another one of my faves. It's very recent, the authors registered the project as a clinical trial, they used a control group, and the disease state (diabetes) is highly prevalent. The journal editor labeled it a methodology article, but it reports outcomes. The authors even obtained funding from the National Heart, Lung, and Blood Institute. What's not to like??


A Healthy Dose of Reality

This article by R.C. Amland, et al. in the Journal for Healthcare Quality is one of my favorites. It's recent, it's based on a large sample size, the outcomes show impressively large changes, and the disease state is common. It also includes a healthy dose of reality -- the authors state in the "Implications for Practice" section that physicians were annoyed when interrupted by the computerized decision-support reminders. But how was the study funded?


Friday, August 7, 2015

RT to WIN $100! Your Fun Guide to CME Outcomes Publishing


One of the things I love about the CME field is how the landscape is constantly changing. There's never a dull moment for a medical writer who develops content for continuing education. That's why I call myself @CME_Scout on Twitter.

A good example is the new trend of publishing articles containing outcomes of continuing education interventions in peer-reviewed journals. This trend started a few years ago but now seems to be gaining momentum as more articles about quality improvement (QI) and performance improvement (PI) are published.

As a former editor of a peer-reviewed education journal, I think this is a terrific step forward for our profession. So does my friend Sandra Binford, an experienced medical education designer and outcomes researcher who recently won a Rising Star Award from the Alliance for Continuing Education in the Health Professions.

Rising star: Sandra Haas Binford, MAEd
Sandra and I are teaming up to offer you a special treat. We plan to give our followers on Twitter a guided tour of this exciting new body of literature. Our back-to-school tour will take the form of a month-long series of daily tweets, from mid-August to mid-September. 

Here's what you can expect if you follow us: I will tweet from August 17 to 31, and Sandra will tweet from September 1 to 15. Each of our 30 tweets will take you to a different one of our favorite articles, along with a note about why you will find it worth reading. These won't be just any old articles. To be worthy of mention in one of my tweets, an article has to meet my stringent criteria. (Sandra gets to have her own criteria, which she will explain in a separate message.)

To be the topic of one of my tweets, an article must be:

1) published in a peer-reviewed journal
2) indexed in PubMed
3) dated 2010 or later
4) available as free full text
5) an outcomes report from a continuing education initiative (QI or PI initiatives are OK, too)
6) focused on clinician education, rather than patient education
7) written in English (!)


Our guided tour will interest anyone who is curious about what's involved in getting educational outcomes data published in the peer-reviewed literature. This includes my fellow medical writers and editors who want to learn more about the process, as well as
  • research analysts
  • program managers
  • medical directors
  • proposal writers
  • continuing educators
  • publishing house executives
  • medical education company officers

To give the tour lasting value, we will be curating our tweets in an accessible web-based archive for later reference. (Sandra loves this part -- I keep telling her she missed her calling as a librarian.) 

To make our project interactive, we will give you a chance to express your opinion on each article. Just add a comment when you retweet, or leave a comment when you see the note about the article posted on one of our blogs.



Retweet to WIN!
Now for the really fun part: We are going to make this a bit of a contest, where you can actually earn money while following us on Twitter. Each person who retweets one of our daily article tweets will have his or her name entered in a drawing (max one entry per person per day). At the end of 30 days, we will draw 2 names (Twitter handles) at random from among all the entries.

FIRST PRIZE: The Twitter handle with the larger following will win a $100 Amazon gift card courtesy of Harting Communications LLC. 

SECOND PRIZE: The Twitter handle with the smaller following will win a $50 Amazon gift card courtesy of Full Circle Clinical Education.
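(For the curious, here's a toy Python sketch of how a drawing like this could work. The handles, follower counts, and entry records are completely made up, and this is only an illustration of the rules described above, not the script we will actually run.)

```python
import random

# Hypothetical entries: at most one per person per day (handles and
# follower counts below are invented for illustration only).
entries = [
    (1, "@alice_cme"), (1, "@bob_writes"),
    (2, "@alice_cme"), (2, "@cme_fan"),
    (3, "@bob_writes"),
]
followers = {"@alice_cme": 1200, "@bob_writes": 350, "@cme_fan": 90}

# Enforce "max one entry per person per day," then pool all entries.
pool = [handle for (_day, handle) in set(entries)]

# Draw until we have 2 distinct winners: more entries means better odds,
# but nobody can win twice.
winners = []
while len(winners) < 2:
    pick = random.choice(pool)
    if pick not in winners:
        winners.append(pick)

# First prize goes to the winning handle with the larger following.
first, second = sorted(winners, key=lambda h: followers[h], reverse=True)
print("$100 Amazon gift card:", first)
print("$50 Amazon gift card: ", second)
```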

Be sure to follow one of us when you retweet so we can notify you via direct message (DM) if you win!