So, I’ll admit that I tend to think of myself as being a pretty awesome NP. Kind of a rock star.
But I recognize that I might be slightly biased.
My mom, my husband, my kids are all similarly convinced that I, generally speaking, rock when it comes to being an NP.
And they’re, most definitely, totally and completely biased as well. (And more than a little unqualified to judge.)
Isn’t that human nature, though? To think of ourselves as being pretty damn good at what we do? Probably true for everyone, but maybe even more so for clinicians?
I mean, this job can be hard. REALLY hard. Crazy schedules, crazy patients, crazy admins, crazy insurance companies. If we don’t think we’re particularly great at it, that we’re bringing something special and unique and AWESOME to the table – and I mean even more than that other guy over there – what’s the point?
But we all know that all clinicians are not equally awesome.
Every clinic everywhere has “good” providers and “bad” providers and “okay” providers. It’s not written down anywhere, but when a friend asks a friend who knows some things about who they should ask to see, certain names come up, over and over again, while others are conspicuously omitted.
I’ve been thinking about this topic recently because of a conversation I had with a certain administrative type from a certain clinic this week. While we were chatting about unexciting clinic business, he let slip that roughly half of the patient complaints the clinic receives are consistently about one particular clinician. He then brushed them aside: he thinks of this clinician as being stingy with prescribing antibiotics, so the fact that 50% of the clinic’s complaints are about 1/10 of the clinical staff may not be the best measure of the quality of this guy’s work. Besides, he sees A LOT of patients. Very, very quickly. No drama. Admins love a high volume:drama ratio.
There’s currently this great experiment underway to figure out how to measure which clinicians are particularly good at what they do, and to reward those who are the MOST AWESOME in an effort to inspire everyone to be even MORE AWESOME than they may already be.
On one level, this makes perfect sense. Why NOT pay more money for better care? Why not reward high quality? Isn’t that how the rest of the world works?
But measuring quality in health care isn’t the same as counting how many widgets an assembly worker makes, or how many five-star Yelp reviews you get.
Briefly, a probably-incomplete rundown of what seem to be the most common approaches to quality measurement in healthcare:
a) Volume. How many patient visits can you squeeze into X minutes? Easy to measure – the greater the number, the greater the reward. In a fee-for-service environment, the reason for this relationship is obvious: more visits = more cash. But even outside the FFS world, this idea – that part of what makes one clinician better than another is how on-time they run, or how soon a patient can get an appointment – is a pervasive one. And who knows, maybe it should be. Access matters, right?
b) Patient satisfaction. How happy was your patient with the perceived quality of the care provided? If health care were just like any other consumer-oriented business, this would make 100% perfect sense.
BUT. Studies show that patients may not always be the best judges of the actual quality of the care they just received. Scores go up when providers hand out more antibiotics and more narcotics. Scores go up when providers order more expensive tests and procedures. Satisfaction matters, but so does averting a future in which antibiotics no longer work, avoiding the unintentional promotion of opioid addiction, and keeping medical costs from bankrupting the nation.
c) Adherence to clinical guidelines. How many patients got a flu shot this year? How about a Pap smear, or a mammogram? Hard to see what could be wrong here. Lots of people who know lots of things spent lots of time creating those guidelines. Flu shots are good. Early detection and treatment of cancer is good.
But what about shared decision making? Does judging clinicians based on their adherence to guidelines, above all else, result in strong-arming patients into taking medications, or having tests performed, or whatever it might be that they don’t want, or can’t afford, or may not even be considered appropriate care in 5 years? (cough mammograms cough)
Having a conversation with the patient and discussing guidelines, and the evidence behind the guidelines, in a shared and mutually understandable language is always a good thing. But is it possible that overreliance on adherence to guidelines as a marker of quality misses something?
d) Measuring outcomes. How many of your patients’ LDLs, blood pressures, and A1Cs are within normal limits? This, also, seems reasonable. Considering that part of our job (most of our job?) is to try to make these things better, it makes sense that we be judged on how well we do.
But we also all know that controlling chronic illness is harder in some populations than others. Access to care matters. Wealth matters. Community norms and expectations matter. Does rewarding clinicians who can show better outcomes unwittingly drive them away from caring for more at-risk populations?
Back to this unpopular-with-the-patients clinician. Sure, he’s decent at avoiding prescribing antibiotics for colds. But I also happen to know he’s not great with the interpersonal skills. He also doesn’t bother to spend much time explaining what he’s thinking and why – to anyone, really, about anything, let alone his diagnostic and care plans with his patients. So you can consider me completely unsurprised to learn that he receives a far-greater-than-average number of “He did nothing!” complaints.
Not for nothing, but I’m also known for being stingy about prescribing antibiotics. And yet I consider it a point of pride that even my most pre-visit-antibiotic-committed patients leave their visit agreeing with me that their nasal congestion is viral.
I don’t actually have any answers here. I feel like each of these quality measurements dances around an aspect of care that matters, but also somehow misses the bigger point. It feels like there’s got to be a way to measure and reward the intangibles.