January 26, 2018

Quick quiz: where is an infant more likely to die, in Bulgaria or north Minneapolis?  The correct answer, sadly, is the latter.  In fact, if the black population of Minnesota were its own country, it would rank 80th in the world in infant mortality, around the same as Thailand and Croatia.  White Minnesota would be about the same as Denmark.

Minnesota children of color lag behind white Minnesota children in many measures of health.  While our state consistently ranks among the best for children overall, it also has among the biggest health disparities.  Infant mortality, immunization rates, poverty, school readiness – all are worse for children of color, in some cases by an order of magnitude.

To draw attention to this problem, Gov. Dayton declared January Health Equity Month.  While it’s great to raise awareness, eliminating health disparities will require action, bold action.  At most 20% of health outcomes are related to medical care; the remainder is shaped by environmental, social, and behavioral factors.  The health disparities we see in Minnesota (and frankly the rest of the country – they’re just more marked here) stem from deep underlying issues of poverty and inequality, issues so deep it can seem impossible to fix them.  What can those of us who care for kids do?

One thing Children’s Minnesota has done is to begin to focus on all those social determinants of health in our patients.  Our Community Connect program, launched in our St. Paul primary care clinic in 2017 and now rolling out to other areas, screens patients and families not only for medical issues like immunizations and allergies, but also non-medical ones such as poverty, housing and food insecurity, joblessness, and immigration issues, all of which have a direct impact on a child’s health.  When needs are identified, we can refer families to community resources for help.  So far we have screened nearly 2,000 families.  Unfortunately, insurers don’t generally see this as a billable medical expense, but our program has been generously supported by philanthropy, including grants from the Children’s Hospital Association and UCare, among others.

We also need to take a hard look at ourselves.  While social issues predominate, racial and ethnic disparities in medical care itself have also been identified.  For example, children of color with appendicitis are less likely to receive pain medication than their white counterparts.  When we saw a similar issue in our own emergency department we implemented education and guidelines to help correct it.

But just as with patient safety, we need to look at the root causes.  Health disparities arise from the same issues of historical oppression and systemic racism that underlie a host of other issues.  Referring people to food banks and educating doctors about pain protocols is like giving ibuprofen to someone with an infection; it treats the symptom, not the disease.  If our children of color are to thrive as much as our white children, we need to dig far deeper, and advocate for real change.  January is a start, but it’s going to take much more than a month.

Leading from Below

December 18, 2017

“How would you describe your leadership style?” It’s a pretty common question, almost a cliché.  Mindful of the scorn heaped on President Obama for his “leading from behind” concept by some of the more bellicose talking heads who thought it sounded weak, I hesitated.  “I’d call it leading from below.”

Let me explain. Rather than a military analogy, as Obama’s was, mine is a musical one.  For many years I had the great fortune to play tuba with a group of wonderful Milwaukeeans far more talented than I, in a brass quintet called esprit d’Brass.  The modern brass quintet features 2 trumpets, French horn, trombone, and of course tuba.  There is no conductor.  So who leads it, you might ask?  Go ahead, ask.  OK, I’ll tell you.  You might think it’s the first trumpet, which most often carries the melody, but in fact, it is the tuba.

First, we’d decide on what to play, and I was the one who had to remind everyone where in our collection of music books the piece was. Literally making sure we were on the same page.  Next, we’d tune.  And not to some nasal oboe.  In a brass quintet, as in a concert band (like the charming Minnesota Freedom Band I currently play with), the tuning note is played by…the tuba.  The rest of the band tunes to my B flat.  When my horn is cold from sitting in the car all day it’s closer to an A, but they have to tune to it.  That’s power.

Now, the tuba rarely has the melody or a solo. In many pieces, the tuba simply plays the downbeats.  But in the absence of someone waving a baton in the front, that downbeat sets the tempo.  Playing that bass part requires careful listening, so that if someone is straying a little ahead or behind, I can subtly emphasize the beat to restore the desired tempo.  Also pretty powerful.  (Of course, if I wanted to I could intentionally speed us up or slow us down too much, but that would be an abuse of power.)

Note that unlike the conductor of an orchestra, who may be familiar with how to play many different instruments, I would never presume to tell the trumpet or horn players how to play their instruments. I simply provide the right environment and let them do what they do well.

So, setting the direction, getting everyone on the same page, setting the tone and the tempo, and letting the experts excel. It may not sound sexy, but that’s leading – from below.

Live And Let Die

November 8, 2017

My adult son and I watched the most recent James Bond movie the other day. The level of violence and mayhem is, of course, astounding, so it’s a bit surprising that it is rated PG-13. On the other hand, violence has become so much a part of our culture that perhaps it doesn’t even make sense to try to “shield” kids from it in our entertainment.  And one could also ask, does seeing a violent movie even make a difference given the pervasiveness of violence and guns in the America of 2017?

According to a fascinating recent study in JAMA Pediatrics, the answer is yes, it does matter.  Researchers at Ohio State studied 104 children ages 8-12.   All children were individually shown a 20-minute clip from one of two PG-rated movies (The Rocketeer and National Treasure, in case you were wondering).  Half the children were randomly selected to view a clip with guns, and half saw a clip without guns.  After the movie viewing, a pair of children (who had both watched the same movie) was taken to a different room with toys and told they could play with any of the toys while they waited.  Also in the room was a cabinet containing a 9-mm handgun (modified to be unfirable).  During the 20-minute waiting period, researchers and parents monitored the children using a hidden camera.

Thanks to the randomization, there was no difference between those who watched movies with and without guns with respect to their demographics, prior media watching habits, aggressiveness, or attitudes toward guns. Overall, 83% of children found the gun, and almost half picked it up.  There was no difference between gun-watching and non-gun-watching participants in regard to finding or picking up the gun.  But children who had just finished watching a movie containing guns held the gun 3 times longer, and pulled the trigger 22 times more often, than children who saw the gun-free movie clip.  Kids who had watched the movie with guns were also more likely to point the gun at the other child in the room and use threatening language.

This is a single study, with inherent limitations, but the findings are startling and provocative, albeit none too unexpected. Seeing violence begets violence, whether in real life or on the screen.  The morning after we watched 007, I read in the paper about the latest mass shooting (26 people killed in a church in Texas).  I couldn’t help but wonder about the connection.

Papa Was A Rolling Stone

October 12, 2017

It was the third of September

That day I’ll always remember, yes I will

Cause that was the day that my daddy died…

Papa was a rolling stone

Wherever he laid his hat was his home

And when he died all he left us was alone


Few issues define the cultural divide as sharply as one’s stance on family structure.  Senator Ron Johnson, for example, in a presentation I heard at Children’s Hospital of Wisconsin, placed the blame for America’s fiscal and other woes on single parent families.   (This didn’t go over so well at a workplace that is overwhelmingly female and with more than a few highly successful single parents.)  A fascinating recent article in Pediatrics may shed some light on this debate, or may simply generate more heat.

Researchers from Michigan and Princeton looked at the association between loss of a father and telomere length, a chromosomal marker of stress that is itself associated with a variety of adverse health outcomes.  (Telomeres shorten with age, and when they become sufficiently short cells die.  Thus telomere length has been called a “biological clock.”)  In this study, children who lost a father for any reason had significantly shorter telomeres than those who had not.  This effect was strongest for the death of a father, somewhat less for incarceration, and least for separation or divorce.  And it was stronger for boys than for girls.

Traditionalists might use this as evidence for the superiority of raising children in a two-parent (specifically, mother and father) home.  But not so fast.  This study only examined children who started out in a home with a mother and father and then lost the father.  Loss and absence are not necessarily the same thing.  Also, at least for loss due to separation or divorce, nearly all of the effect (95%) is explained by lost income.  So one could just as easily say this is evidence for the superiority of a living wage, and of equal pay for men and women.

These findings also support the need for a change in the mass incarceration policy in this country.  The millions of men in prison – many of them men of color – are leaving behind millions of children who we now know suffer not only emotional but biological damage as a result.  This public policy crisis is creating a public health crisis.

One other tidbit in this study was intriguing.  The effect of loss of a father on telomere length was strongest among those children with a genetic variant in molecules involved in processing certain neurotransmitters.  How one copes with adverse events, like the loss of a parent, isn’t simply a matter of one’s character, or the strength of one’s support system.  Biology may not be destiny, but it sure tilts the playing field.

A rolling stone may gather no moss, but it can sure leave a lot of havoc in its wake.

Take This Job and Shove It

October 2, 2017

This song about burnout on the job was quite popular in 1977 (original version by Johnny Paycheck; subsequently also recorded by Dead Kennedys). While many of you are not old enough to have been assaulted by the recording on AM radio, the sentiment probably isn’t at all foreign.  The phenomenon of burnout among medical professionals has been the subject of both serious research and discussion in the lay press.  A 2012 study in JAMA Internal Medicine revealed high levels of self-reported burnout among physicians, especially in “front-line” specialties such as family practice and emergency medicine, where over half of physicians reported some form of burnout.  (Fortunately, both primary care pediatrics and pediatric subspecialties had below average rates, although a recent study among pediatric emergency physicians was concerning.)  Also, physicians had higher rates of burnout than the general population.  (There are studies showing similar statistics for nurses, but I haven’t been able to find any studies specifically dealing with burnout among advanced practice providers.  I think we can assume it’s fairly similar.)

Burnout is defined as “a syndrome characterized by a loss of enthusiasm for work (emotional exhaustion), feelings of cynicism (depersonalization), and a low sense of personal accomplishment.”  Effects of burnout include symptoms of depression and/or anxiety, loss of empathy and objectification of patients and co-workers, unprofessional behavior, high rates of error, and turnover or leaving the profession entirely.  Thus, burnout is a problem for providers, for their patients, and ultimately for the system.

While a good bit has been written about the prevalence of burnout, there seems to be little data on what can be done to prevent it.  Burnout doesn’t appear to correlate strongly with hours worked, income, or satisfaction with work-life balance, but data are limited.  Consistent with what has been described, an internal survey of physicians at Children’s Minnesota revealed two overarching themes.  The first is dissatisfaction with the conditions of work themselves – things that make it difficult to do a job, everything from lack of staff to cumbersome electronic health records to dealing with insurance companies.  But the more important is dissatisfaction with one’s ability to influence the conditions of work.  This includes lack of input into decisions, as well as the feeling that external forces – changes in the nature of healthcare – are inexorable.

For example, a commonly cited factor contributing to burnout is the electronic health record.  We physicians have been complaining about paperwork and charting since well before Epic and Cerner were a glimmer in anyone’s eye.  And honestly, even a digital non-native like me, with a bit of advance prep, is able to complete an EHR record in no more time than I could a written one, certainly when adjusted for quality and completeness.  So is the computer really the problem?  It seems to me what is different is that I have less control over the EHR.  With a written chart, I could decide how thorough to be, and what format to use.  But now a bunch of bureaucrats, administrators, and Millennial programmers determine what we need to document, and how, without seeming to care when it doesn’t make sense.

There’s good news and bad news here.  The good is that both of these themes can be addressed.  The conditions of work can be ameliorated – the EHR can be modified, scribes hired, staffing needs addressed.  And those doing the work, including physicians and other clinical staff, can be empowered to participate in decision-making.  At the same time we have to recognize that health care has changed and will continue to, in fundamental ways.  As the cost of health care now consumes almost one-fifth of the entire US economy, resources will be more limited, forcing us to make hard choices about those conditions of work.  Scribe, or clinic nurse, or staff to screen for social determinants of health?  Unlike in the past, we can no longer afford all three.  And no matter how much providers participate in making those choices, the locus of control has moved away from physicians.  Many of us came up in the era when the doctor had the final word.  When no one would ever question his (and it was usually his) productivity.  Accepting that decisions are made at least multilaterally, and increasingly by patients and families as consumers, is in my opinion a key underlying factor driving dissatisfaction and burnout.  Whether this will be as true for the newer generation, which doesn’t have the same inherent expectations of command and control, remains to be seen.

Burnout is real. There are many proximate causes, and we should try to address those to the extent possible.  But the root causes are related to the fact that being a health professional doesn’t mean what it used to.  At least not in the way we deliver health care.  Doctors have neither the status nor the authority nor the autonomy they once did.  Yet the mission of improving the health of those we serve hasn’t changed.  It’s still health care, and we still do care.  Anecdotally, keeping that front and center has kept me and many of my colleagues this side of shouting Johnny Paycheck’s lyrics at the top of our lungs in the clinic.  Perspective matters.  So if you need a little pick-me-up, watch this.

The New Professionalism

September 7, 2017

When I first get out of bed the word unkempt comes to mind, mainly because my hair is unruly. But after I clean up you wouldn’t call me “kempt,” or my hair “ruly.”  These adjectives don’t, at least in modern use, yield words with opposite meaning by dropping the prefix “un-.”  The same appears to be the case with “unprofessional.”  When applied to a doctor, for example, it most often signifies behavior that is unethical, unbecoming, and unacceptable for a member of the medical profession.  But when someone does live up to those standards, we rarely use the term “professional” to describe them.  Professional in the affirmative more often is used in distinction to “amateur.”  As with the concepts of kemptness and ruliness (neither of which exists according to my spell checker), the concept of professionalism is most often acknowledged in the breach.  It is easier to note its absence than its presence, which makes it difficult to reinforce a positive notion of professionalism, one of six core competencies recognized as essential to becoming a successful physician.

It’s also difficult because professionalism in medicine is a complex and slippery concept.  It includes commitment to ethical principles such as integrity, respect, and compassion; service to patients and society that supersedes self-interest; responsiveness to the needs of diverse patients; and accountability to the profession.  Some of these, like integrity and compassion, have changed little since the time of Hippocrates.  But others evolve with time.  For example, there was a time when doctors were expected to put patients first to the degree that they could even endanger themselves.  While some degree of self-sacrifice remains laudable, few at present would condone such dangerous self-experimentation as Dr. William Harrington’s exchange transfusion between himself and a thrombocytopenic patient, or Walter Reed and colleagues intentionally infecting themselves with yellow fever in search of a cure. (Not to mention 100-hour work weeks and 36-hour days.)

Yet the degree to which we are expected to put our patients’ needs ahead of our own continues to be a matter of disagreement.  In part, at least, this appears to reflect generational differences.  Now, unlike some of my Baby Boomer compatriots, I do not subscribe to the notion that Millennials are inherently less altruistic.  If anything, my experience with my own children and their peers suggests exactly the opposite.  But I do believe there are differences in how they show it.  My generation was more willing to sacrifice time, while Gen Y foregoes money (though both only to an extent).  And as medicine has become corporatized and more transactional, those who have only experienced that will see it as the norm.  Physicians who have always practiced in an environment where their contribution to patients is measured in relative value units (RVUs) are naturally going to be influenced by RVU considerations.  Those of us who trained 30 years ago should be neither surprised by this nor feel superior.  (Only a couple of my medical school classmates work for free, after all.)

There are also differences in how we view “accountability to the profession.” Consistent with their overall greater attachment to organizations and institutions, earlier generations were raised with the notion that medicine was a sort of fraternity, and that as members we had an inherent responsibility to it.  If declining membership in medical societies is any indicator, I doubt most Millennials feel that same sort of attachment to the institutions of medicine.  And this isn’t necessarily bad.  Medicine, like many professions, is largely self-regulatory.  The traditional unwillingness to speak ill of a colleague has, fortunately, given way to a greater openness to mutually recognizing, acknowledging, and learning from errors.

Since the more seasoned (i.e., older) of us are the ones in a position to both teach the precepts of professionalism and judge adherence to them, we need to be aware of how the concept evolves. Otherwise we may fall into the trap – described by George Thorogood in the song “Get A Haircut And Get A Real Job” – of believing our younger colleagues are inherently unkempt, unruly, and unprofessional.


August 21, 2017

“Well, I’m an accountant, and consequently too boring to be of interest,” says John Cleese in response to a question in a man-in-the-street interview on Monty Python’s Flying Circus. Accounting, and accountants, have a perhaps-not-entirely-undeserved reputation for being necessary but dull.  (Disclosure – my father was an accountant, so I know whereof I speak.)  Yet accountability, which shares a word root, is all the rage.  I just did a Google search and a half dozen news headlines containing the word “accountability” from just this week appeared.

Moreover, at a Children’s Minnesota leadership forum last week, we addressed the topic of accountability, which leaders raised as a concern in a survey. Specifically, there is a belief that we have trouble with accountability.  Our leaders are not alone: every place I have worked believes it has a unique problem holding people accountable.  So it’s clearly not unique.  But is it true?  I think it is, albeit not in the way people think.

Let’s start with what we mean by accountability. It comes from the Latin root meaning to count, calculate, or reckon.  In early use, it signified an ability to explain, specifically to reconcile an amount of money or valuables with which one was trusted.  The concept of consequences was added later; it came to mean being able to prove that one has done right, to justify either reward or punishment.  In current usage, it often focuses solely on the consequence part, and specifically on punishment.  Every time I have had someone tell me an organization has a problem with accountability, what they mean is “So and so does a terrible job and they haven’t been punished or fired yet.”

Which may or may not be true. For one thing, firing is only the end of a long chain of consequences for poor performance, the rest of which may be invisible.  (How many of us could say whether anyone we work with has been given a warning, put on a performance improvement plan, or sacrificed some compensation because of their actions?)  And those consequences are only the end of the long process of accountability.  If we look at the full meaning of the word, it includes:

1) establishing expectations;

2) communicating expectations;

3) measuring progress toward expectations (these three are necessary for the part that means “proving one has done right”);

4) providing feedback on that progress and establishing a plan for improvement if expectations are not being met;

5) identifying gaps that may be contributing to not meeting expectations (either fixing system problems that create barriers or helping the accountable person develop the necessary skills);

6) progressive consequences for continuing to fail to meet expectations.

In other words, accountability is really just what we call “performance management.” Which means it is a process. But too often we think of accountability as an event – that person who is incompetent or lazy or unprofessional loses his or her job.

So do we have an accountability problem? Yes, we do.  Holding ourselves and others accountable is difficult.  First, we don’t always devote the time and effort we need to steps 1-3.  I once had a chair who, despite never meeting me face to face, would send me an annual progress summary that read, in its entirety, “I have serious concerns about your progress toward promotion.”  (Spoiler alert – I did OK.)  As for feedback, that can be hard for multiple reasons.  There is a saying, “Feedback is a gift.”  Yet few of us approach receiving feedback with nearly the enthusiasm we have for Christmas or a birthday.  Too often feedback is criticism, not a gift.  We don’t want to be on the receiving end, and often not on the giving end.  Especially in pediatric healthcare, where we are all nice and cheerful and positive.  And in a place like ours, where people stick around and have worked up through the ranks, they may be giving that “gift” to someone who was once a colleague and friend.  Now imagine having to actually change someone’s pay or relieve them of their job.

Yet at the same time, we usually do most of this quite well. For one thing, accountability is about justifying both punishments and rewards.  If we were truly terrible about it, good people would be fleeing to places that recognized their values and rewarded them.  And yes, there are always going to be challenging cases.  But in an organization like ours that takes its mission of caring for kids seriously, the large majority of problems are actually dealt with quite effectively.  We just don’t advertise them.

So to quote another of my favorite movies, The Princess Bride, when I hear people say we have a problem with accountability, I think “You keep using that word. I don’t think it means what you think it means.”  The actual process of accountability – as opposed to the event of the final punishment that people usually think of as accountability – is, like accountants, dull but necessary, and often in the background.
