Live And Let Die

November 8, 2017

My adult son and I watched the most recent James Bond movie the other day. The level of violence and mayhem is, of course, astounding, so it’s a bit surprising that it is rated PG-13. On the other hand, violence has become so much a part of our culture that perhaps it doesn’t even make sense to try to “shield” kids from it in our entertainment. And one could ask, does seeing a violent movie even make a difference, given the pervasiveness of violence and guns in the America of 2017?

According to a fascinating recent study in JAMA Pediatrics, the answer is yes, it does matter. Researchers at Ohio State studied 104 children ages 8 to 12. All children were individually shown a 20-minute clip from one of two PG-rated movies (The Rocketeer and National Treasure, in case you were wondering). Half the children were randomly selected to view a clip with guns, and half saw a clip without guns. After the movie viewing, a pair of children (who had both watched the same movie) was taken to a different room with toys and told they could play with any of the toys while they waited. Also in the room was a cabinet containing a 9-mm handgun (modified to be unfirable). During the 20-minute waiting period, researchers and parents monitored the children using a hidden camera.

Thanks to the randomization, there was no difference between those who watched movies with and without guns with respect to their demographics, prior media-watching habits, aggressiveness, or attitudes toward guns. Overall, 83% of children found the gun, and almost half picked it up. There was no difference between gun-watching and non-gun-watching participants in finding or picking up the gun. But children who had just finished watching a movie containing guns held the gun three times longer, and pulled the trigger 22 times more often, than children who saw the gun-free clip. Kids who had watched the movie with guns were also more likely to point the gun at the other child in the room and to use threatening language.

This is a single study, with inherent limitations, but the findings are startling and provocative, if not entirely unexpected. Seeing violence begets violence, whether in real life or on the screen. The morning after we watched 007, I read in the paper about the latest mass shooting (26 people killed in a church in Texas). I couldn’t help but wonder about the connection.


Papa Was A Rolling Stone

October 12, 2017

It was the third of September

That day I’ll always remember, yes I will

Cause that was the day that my daddy died…

Papa was a rolling stone

Wherever he laid his hat was his home

And when he died all he left us was alone

 

Few issues define the cultural divide as sharply as one’s stance on family structure. Senator Ron Johnson, for example, in a presentation I heard at Children’s Hospital of Wisconsin, placed the blame for America’s fiscal and other woes on single-parent families. (This didn’t go over so well at a workplace that is overwhelmingly female and includes more than a few highly successful single parents.) A fascinating recent article in Pediatrics may shed some light on this debate, or may simply generate more heat.

Researchers from Michigan and Princeton looked at the association between loss of a father and telomere length, a chromosomal marker of stress that is itself associated with a variety of adverse health outcomes. (Telomeres shorten with age, and when they become sufficiently short, cells die. Thus telomere length has been called a “biological clock.”) In this study, children who lost a father for any reason had significantly shorter telomeres than those who had not. This effect was strongest for the death of a father, somewhat less for incarceration, and least for separation or divorce. And it was stronger for boys than for girls.

Traditionalists might use this as evidence for the superiority of raising children in a two-parent (specifically, mother-and-father) home. But not so fast. This study examined only children who started out in a home with a mother and father and then lost the father. Loss and absence are not necessarily the same thing. Also, at least for loss due to separation or divorce, nearly all of the effect (95%) is explained by lost income. So one could as easily say this is evidence for the superiority of a living wage and equal pay for men and women.

These findings also support the need for a change in the mass incarceration policy in this country.  The millions of men in prison – many of them men of color – are leaving behind millions of children who we now know suffer not only emotional but biological damage as a result.  This public policy crisis is creating a public health crisis.

One other tidbit in this study was intriguing. The effect of loss of a father on telomere length was strongest among children with variants in genes involved in processing certain neurotransmitters. How one copes with adverse events, like the loss of a parent, isn’t simply a matter of one’s character or the strength of one’s support system. Biology may not be destiny, but it sure tilts the playing field.

A rolling stone may gather no moss, but it can sure leave a lot of havoc in its wake.


Take This Job and Shove It

October 2, 2017

This song about burnout on the job was quite popular in 1977 (original version by Johnny Paycheck; subsequently also recorded by Dead Kennedys). While many of you are not old enough to have been assaulted by the recording on AM radio, the sentiment probably isn’t at all foreign.  The phenomenon of burnout among medical professionals has been the subject of both serious research and discussion in the lay press.  A 2012 study in JAMA Internal Medicine revealed high levels of self-reported burnout among physicians, especially in “front-line” specialties such as family practice and emergency medicine, where over half of physicians reported some form of burnout.  (Fortunately, both primary care pediatrics and pediatric subspecialties had below average rates, although a recent study among pediatric emergency physicians was concerning.)  Also, physicians had higher rates of burnout than the general population.  (There are studies showing similar statistics for nurses, but I haven’t been able to find any studies specifically dealing with burnout among advanced practice providers.  I think we can assume it’s fairly similar.)

Burnout is defined as “a syndrome characterized by a loss of enthusiasm for work (emotional exhaustion), feelings of cynicism (depersonalization), and a low sense of personal accomplishment.”  Effects of burnout include symptoms of depression and/or anxiety, loss of empathy and objectification of patients and co-workers, unprofessional behavior, high rates of error, and turnover or leaving the profession entirely.  Thus, burnout is a problem for providers, for their patients, and ultimately for the system.

While a good bit has been written about the prevalence of burnout, there seems to be little data on what can be done to prevent it.  Burnout doesn’t appear to correlate strongly with hours worked, income, or satisfaction with work-life balance, but data are limited.  Consistent with what has been described, an internal survey of physicians at Children’s Minnesota revealed two overarching themes.  The first is dissatisfaction with the conditions of work themselves – things that make it difficult to do one’s job, everything from lack of staff to cumbersome electronic health records to dealing with insurance companies.  The second, and more important, is dissatisfaction with one’s ability to influence those conditions of work.  This includes lack of input into decisions, as well as the feeling that external forces – changes in the nature of healthcare – are inexorable.

For example, a commonly cited factor contributing to burnout is the electronic health record.  We physicians have been complaining about paperwork and charting since well before Epic and Cerner were a glimmer in anyone’s eye.  And honestly, even a digital non-native like me can, with a bit of advance prep, complete an EHR record in no more time than a written one, certainly when adjusted for quality and completeness.  So is the computer really the problem?  It seems to me what is different is that I have less control over the EHR.  With a written chart, I could decide how thorough to be and what format to use.  But now a bunch of bureaucrats, administrators, and Millennial programmers determine what we need to document, and how, without seeming to care when it doesn’t make sense.

There’s good news and bad news here.  The good is that both of these themes can be addressed.  The conditions of work can be ameliorated – the EHR can be modified, scribes hired, staffing needs addressed.  And those doing the work, including physicians and other clinical staff, can be empowered to participate in decision-making.  At the same time, we have to recognize that health care has changed, and will continue to change, in fundamental ways.  With the cost of health care now consuming almost one-fifth of the entire US economy, resources will be more limited, forcing us to make hard choices about those conditions of work.  Scribe, or clinic nurse, or staff to screen for social determinants of health?  Unlike in the past, we can no longer afford all three.  And no matter how much providers participate in making those choices, the locus of control has moved away from physicians.  Many of us came up in the era when the doctor had the final word, when no one would ever question his (and it was usually his) productivity.  Accepting that decisions are made at least multilaterally, and increasingly by patients and families as consumers, is in my opinion a key underlying factor driving dissatisfaction and burnout.  Whether this will be as true for the newer generation, which doesn’t have the same inherent expectations of command and control, remains to be seen.

Burnout is real. There are many proximate causes, and we should try to address those to the extent possible.  But the root causes are related to the fact that being a health professional doesn’t mean what it used to, at least not in the way we deliver health care.  Doctors have neither the status nor the authority nor the autonomy they once did.  Yet the mission of improving the health of those we serve hasn’t changed.  It’s still health care, and we still care.  Anecdotally, keeping that front and center has kept me and many of my colleagues this side of shouting Johnny Paycheck’s lyrics at the top of our lungs in the clinic.  Perspective matters.  So if you need a little pick-me-up, watch this.


The New Professionalism

September 7, 2017

When I first get out of bed, the word unkempt comes to mind, mainly because my hair is unruly. But after I clean up, you wouldn’t call me “kempt,” or my hair “ruly.”  These adjectives don’t, at least in modern use, yield words of opposite meaning when the prefix “un-” is dropped.  The same appears to be the case with “unprofessional.”  When applied to a doctor, for example, it most often signifies behavior that is unethical, unbecoming, and unacceptable for a member of the medical profession.  But when someone does live up to those standards, we rarely use the term “professional” to describe them.  “Professional” in the affirmative is more often used in distinction to “amateur.”  As with the concepts of kemptness and ruliness (neither of which exists according to my spell checker), the concept of professionalism is most often acknowledged in the breach.  It is easier to note its absence than its presence, which makes it difficult to reinforce a positive notion of professionalism, one of six core competencies recognized as essential to becoming a successful physician.

It’s also difficult because professionalism in medicine is a complex and slippery concept.  It includes commitment to ethical principles such as integrity, respect, and compassion; service to patients and society that supersedes self-interest; responsiveness to the needs of diverse patients; and accountability to the profession.  Some of these, like integrity and compassion, have changed little since the time of Hippocrates.  But others evolve with time.  For example, there was a time when doctors were expected to put patients first to the degree that they could even endanger themselves.  While some degree of self-sacrifice remains laudable, few at present would condone such dangerous self-experimentation as Dr. William Harrington’s exchange transfusion between himself and a thrombocytopenic patient, or Walter Reed’s colleagues intentionally infecting themselves with yellow fever in their quest to understand the disease. (Not to mention 100-hour work weeks and 36-hour days.)

Yet the degree to which we are expected to put our patients’ needs ahead of our own continues to be a matter of disagreement.  In part, at least, this appears to reflect generational differences.  Now, unlike some of my Baby Boomer compatriots, I do not subscribe to the notion that Millennials are inherently less altruistic.  If anything, my experience with my own children and their peers suggests exactly the opposite.  But I do believe there are differences in how they show it.  My generation was more willing to sacrifice time, while Gen Y forgoes money (though both only to an extent).  And as medicine has become corporatized and more transactional, those who have only experienced that environment will see it as the norm.  Physicians who have always practiced in a setting where their contribution to patients is measured in relative value units (RVUs) are naturally going to be influenced by RVU considerations.  Those of us who trained 30 years ago should neither be surprised by this nor feel superior.  (Only a couple of my medical school classmates work for free, after all.)

There are also differences in how we view “accountability to the profession.” Consistent with their overall greater attachment to organizations and institutions, earlier generations were raised with the notion that medicine was a sort of fraternity, and that as members we had an inherent responsibility to it.  If declining membership in medical societies is any indicator, I doubt most Millennials feel that same sort of attachment to the institutions of medicine.  And this isn’t necessarily bad.  Medicine, like many professions, is largely self-regulatory.  The traditional unwillingness to speak ill of a colleague has, fortunately, given way to a greater openness to mutually recognizing, acknowledging, and learning from errors.

Since the more seasoned (i.e., older) of us are the ones in a position to both teach the precepts of professionalism and judge adherence to them, we need to be aware of how the concept evolves. Otherwise we may fall into the trap – described by George Thorogood in the song “Get A Haircut And Get A Real Job” – of believing our younger colleagues are inherently unkempt, unruly, and unprofessional.


Accountability

August 21, 2017

“Well, I’m an accountant, and consequently too boring to be of interest,” says John Cleese in response to a question in a man-in-the-street interview on Monty Python’s Flying Circus. Accounting, and accountants, have a perhaps-not-entirely-undeserved reputation for being necessary but dull.  (Disclosure – my father was an accountant, so I know whereof I speak.)  Yet accountability, which shares a word root, is all the rage.  I just did a Google search, and a half dozen news headlines containing the word “accountability” appeared from this week alone.

Moreover, at a Children’s Minnesota leadership forum last week, we addressed the topic of accountability, which leaders had raised as a concern in a survey. Specifically, there is a belief that we have trouble with accountability.  In this, our leaders are not alone: every place I have worked believes it has a unique problem holding people accountable, so the problem is clearly not unique.  But is it true?  I think it is, albeit not in the way people think.

Let’s start with what we mean by accountability. It comes from the Latin root meaning to count, calculate, or reckon.  In early use, it signified an ability to explain, specifically to reconcile an amount of money or valuables with which one was trusted.  The concept of consequences was added later; it came to mean being able to prove that one has done right, to justify either reward or punishment.  In current usage, it often focuses solely on the consequence part, and specifically on punishment.  Every time I have had someone tell me an organization has a problem with accountability, what they mean is “So and so does a terrible job and they haven’t been punished or fired yet.”

Which may or may not be true. For one thing, firing is only the end of a long chain of consequences for poor performance, the rest of which may be invisible.  (How many of us could say whether anyone we work with has been given a warning, put on a performance improvement plan, or sacrificed some compensation because of their actions?)  And those consequences are themselves only the tail end of the longer process of accountability.  If we look at the full meaning of the word, it includes: 1) establishing expectations; 2) communicating expectations; 3) measuring progress toward expectations (these three are necessary for the part that means “proving one has done right”); 4) providing feedback on that progress and establishing a plan for improvement if expectations are not being met; 5) identifying and addressing gaps that may be contributing to not meeting expectations (either fixing system problems that create barriers or helping the accountable person develop the necessary skills); and 6) imposing progressive consequences for continued failure to meet expectations.

In other words, accountability is really just what we call “performance management.” Which means it is a process. But too often we think of accountability as an event – that person who is incompetent or lazy or unprofessional loses his or her job.

So do we have an accountability problem? Yes, we do.  Holding ourselves and others accountable is difficult.  First, we don’t always devote the time and effort we need to steps 1-3.  I once had a chair who, despite never meeting me face to face, would send me an annual progress summary that read, in its entirety, “I have serious concerns about your progress toward promotion.”  (Spoiler alert – I did OK.)  As for feedback, that can be hard for multiple reasons.  There is a saying, “Feedback is a gift.”  Yet few of us approach receiving feedback with anything like the enthusiasm we have for Christmas or a birthday.  Too often feedback is criticism, not a gift.  We don’t want to be on the receiving end, and often not on the giving end either.  Especially in pediatric healthcare, where we are all nice and cheerful and positive.  And in a place like ours, where people stick around and work their way up through the ranks, the giver of that “gift” may be handing it to someone who was once a colleague and friend.  Now imagine having to actually change someone’s pay or relieve them of their job.

Yet at the same time, we usually do most of this quite well. For one thing, accountability is about justifying both punishments and rewards.  If we were truly terrible at it, good people would be fleeing to places that recognized their value and rewarded it.  And yes, there are always going to be challenging cases.  But in an organization like ours that takes its mission of caring for kids seriously, the large majority of problems are actually dealt with quite effectively.  We just don’t advertise them.

So to quote another of my favorite movies, The Princess Bride, when I hear people say we have a problem with accountability, I think “You keep using that word. I don’t think it means what you think it means.”  The actual process of accountability – as opposed to the event of the final punishment that people usually think of as accountability – is, like accountants, dull but necessary, and often in the background.


Shadow of Leadership

July 24, 2017

When I was at Children’s Hospital of Wisconsin, I frequently had to go back and forth between the hospital and our office building. In the middle of the day, when traffic was light, I would often cross against the light, as did most other people.  One day Mike Thiel, our director of security, saw me do this and pulled me aside.  “You know, you have to think of the shadow you cast as a leader.  If you cross against the light, others will do the same, and they’ll think it’s OK not to follow the rules because that’s what you do.”  So from that day I would dutifully wait for the light to change – my inner New Yorker seething – and lo and behold, everyone else waited too.

Most of us are leaders to some extent – at work it may be in an informal if not a formal role, and certainly in our families and communities. We need to be aware that as leaders we do cast a shadow.  In setting an example, our actions truly speak louder than our words.

The shadow of leadership is at least as important in producing desired results as in avoiding undesired ones (like walking in front of a moving car). The other day one of our staff at Children’s Minnesota ran into me in the hall.  “I took an example from you,” he said, “and bought a bike.” (Most of you know I’m a regular bicycle commuter.)  “First time I’ve had a bike in 20 years,” he said, “and I’ve been riding it everywhere, more than I can ever remember.  It feels great.”

Wow, that felt great to me, too. To make a positive difference by setting an example is the best way to cast that shadow of leadership.  I just hope he doesn’t cross against the light.


Baseball, Hot Dogs, Apple Pie, and Inequality

June 12, 2017

The official name of Obamacare, the Affordable Care Act, is actually a bit of a misnomer. The goal of the legislation was not to lower the cost of health care.  Rather, it was to expand coverage for millions of people without insurance.  It achieved this goal, in part by expanding Medicaid, the government program primarily for the poor, and in part by providing subsidies to individuals purchasing private insurance on newly established exchanges.  And these subsidies did make insurance more affordable for those receiving them.  Thus the ACA sharply reduced income-based disparities in coverage.  But while the exchanges, by spurring competition among plans, did temporarily drive down prices for plans offered there, the law did little to address the underlying drivers of high health care spending, and prices are now again on the rise.  So for all the good it accomplished, “affordable care” is something of a stretch.

On the other hand, the American Health Care Act, passed by the House and now being rammed through the Senate under a cloak of secrecy, fully lives up to its name.  It is as American as it gets.  In rolling back Obamacare, it will restore the vast inequalities in health care that are frankly a hallmark of the American health care “system.”  It not only rolls back the Medicaid expansion, it sharply reduces Medicaid spending to below pre-ACA levels (a report done for the Children’s Hospital Association estimates the negative impact on Medicaid funding for children alone at $48-72 billion).  Moreover, it reduces the subsidies, shifts the remaining subsidies toward those with higher incomes, and returns the rest in the form of tax cuts for higher-income individuals.  A foundation of the AHCA is to shift decision-making to the states.  Medicaid would take the form of block grants, which states would have great latitude to distribute as they wish.  States could also opt to eliminate many of the coverage requirements of Obamacare, which prevent insurers from discriminating against people with pre-existing conditions or mental health disorders, against women, and against others.

A study from the Urban Institute underscores the risk of this policy.  It showed a strong inverse correlation between the African-American population of a state and the coverage of its antipoverty programs.  For example, in Vermont, which has one of the lowest proportions of African-Americans (1%), 78% of poor families receive benefits under the Temporary Assistance for Needy Families (TANF) program, compared with only 4% of such poor families in Louisiana, which is 32% black.  When decisions about helping the needy are made at the state level, old patterns of inequality re-surface.

Such disparities in health insurance coverage, and in support for addressing needs related to the social determinants of health, lead, not surprisingly, to disparities in health outcomes.  These have been well documented for some time, but in case we need to be reminded yet again, a study in JAMA Pediatrics demonstrated that children living in areas of high income inequality had higher rates of preventable hospitalizations.  Another recent study, in Health Affairs, showed that the gap in health between the richest and poorest Americans is among the largest in the world.  Of 32 higher-income countries studied, only Chile and Portugal had a wider disparity.

The Senate leaders may be using stealth in getting their program passed (knowing how very unpopular it is), but by calling it the American Health Care Act, they are making very clear their intentions: less health care for more people.

