We spend a lot of time criticizing health-care practitioners, facilities, insurance and pharmaceutical companies because when it comes to harming patients, some of them deserve it. We’re always happy when somebody within the medical establishment looks at its problems with as much skepticism as we do.
Dr. Bob Wachter is a hospitalist—that is, a physician who works solely treating sick patients in hospitals—and the author of Wachter’s World. The blog, as its tagline says, offers “lively and iconoclastic ruminations on hospitals, hospitalists, quality, safety and more.”
In a recent post, Wachter comments on U.S. News & World Report’s annual “Honor Roll” of “America’s Best Hospitals” for 2012-13. He applauds that U.S. News has moved from rating facilities almost exclusively by their reputations among the medical community to including more patient satisfaction factors as a measure of quality. We, too, have found fault with ratings that exclude the actual patient experience.
For the first time in more than 10 years of the magazine’s ratings, Wachter’s hospital, the University of California, San Francisco Medical Center (UCSF), has fallen out of the Top 10, moving from No. 7 last year to 13. But even more compelling to Wachter is that Johns Hopkins Hospital in Baltimore lost the No. 1 position it held for 21 years.
“When US News launched its Best Hospitals list with its April 30, 1990 issue,” Wachter wrote, “the entire ranking (which, then and now, considers only large teaching hospitals with advanced technologies) was based on reputation—a survey of 400 experts in each specialty rated the best hospitals in their field. Was this a measure of quality and safety? Maybe a little. But I’d bet that the rankings had more to do with the prominence of each hospital’s senior physicians, its publications and [National Institutes of Health] portfolio, the quality of its training programs and its successes in philanthropy than with the quality of the care it delivered. While the magazine changed the methodology to include some nonreputational outcome and process data in 1993, the reputational survey remained the most important factor.”
But this year, the magazine rejiggered the metrics, making “reputation” worth less than one-third of the total, and promoting other measures of patient-safety quality, such as nurse staffing.
Now, the Top 10 list is less predictable—less, say, like the Yankees in the playoffs (again) and more like the Kings winning the Stanley Cup (say what?). Former Top 10 institutions fell to the teens, and others fell off the list (which ends at 17) altogether. Former also-ran hospitals have pushed them aside.
Medicare’s introduction of the Hospital Compare website in 2003 was a wake-up call not only for any organization that rates hospitals, but for the institutions themselves, whose compensation will depend in part on how well they perform in patient-safety areas such as readmissions. Medicare also requires hospitals to have patients complete satisfaction surveys.
After UCSF scored relatively poorly in some categories on the 2003 U.S. News rating, Wachter says the medical center transformed its approach to quality, safety and patient experience. “Without question,” he writes, “UCSF is a far better hospital today than it was then, and I don’t think that would have happened without public reporting and rankings.”
When it comes to putting a priority on patients, we also like Planetree, a nonprofit organization that promotes patient-centered care in hospital design and management, and The Leapfrog Group, an organization of businesses that promotes high-quality, cost-effective health care as part of employee benefits.
There’s no shortage of hospital rankers. As Wachter points out, “Americans love rankings, and the hospital ranking game has become big business.” Even The Joint Commission, an independent, nonprofit organization that accredits and certifies more than 19,000 health-care organizations and programs in the U.S., has joined the ranking ranks.
Like Yelp or Angie’s List, hospital surveys that include subjective consumer criteria can skew results if enough people are more interested in retribution than in informing. And more important, as Wachter notes, hospitals might be motivated to address ranking criteria at the expense of other important but unmeasured factors. “Just consider all of the attention being lavished on preventing hospital falls and central line infections, safety problems that are not nearly as consequential or common as diagnostic errors (which have received considerably less attention because they’re so hard to measure). Great performance on some measures—like ultra-tight glucose control or the four-hour door-to-antibiotics measure for pneumonia—was ultimately proven to be harmful to patients.
“And, as long as many of the outcome measures (such as mortality and readmission rates) are judged based on ‘observed-to-expected’ ratios, hospitals will find it a lot easier to improve their ranking by changing the ‘expected’ number (through changing their documentation and coding) than by actually improving the quality of care.”
Still, we’re glad that U.S. News & World Report now measures a more complete picture of quality care, and we agree with Wachter that the benefits of hospital rankings that embrace this wider range of measures and patient input outweigh the pitfalls. “Ranking and public reporting does serve to motivate hospitals to take quality and safety seriously, and to invest in systems and people to improve them.”