Where Are The Leaders?

January 20, 2026

“The ultimate measure of a man is not where he stands in moments of convenience and comfort, but where he stands at times of challenge and controversy.” – The Rev. Dr. Martin Luther King, Jr.

Trigger warning: For those who think it’s facile and overwrought to make comparisons between 2020s America and 1930s Germany, you may want to skip this essay. Also, I urge you to read William Shirer’s The Rise and Fall of the Third Reich. Maybe it will become clearer to you.

For the second time in five years, Minneapolis – and America – is in crisis. For the second time, an unarmed civilian was murdered in broad daylight by uniformed officers of the state, an act that was captured on video and circulated around the world. For the second time, the people of the city are responding to state-sanctioned violence with resistance, with vigilance, and with mutual aid. But this time, the response of so-called leaders could not be more different.

In 2020, just a few days after George Floyd was choked to death by a Minneapolis police officer, the CEOs of nearly every major company in the state issued a statement condemning his killing and the systemic racism it so vividly demonstrated. They also committed to taking action to address racial inequities and advance social justice. Now, I know from my own experience at the time that within the companies we led, and the communities we served, there were widely divergent opinions about Mr. Floyd’s killing, policing, and the issues of equity and inclusion. So this wasn’t necessarily an easy step for these CEOs to take in that moment. It was an act of courage, an expression of principle over expedience.

Contrast that to now. There has been near silence from business leaders about the murder of Renee Good by an ICE agent, and about the mass deployment of paramilitary forces engaged in widespread acts of harassment, intimidation, and violence in Minneapolis and many other communities in our state. And let’s be clear: this has little to do with immigration. It is about intimidation.  Citizens and non-citizens alike are being indiscriminately kidnapped and assaulted. For goodness’ sake, four of those detained are members of the Oglala Sioux Nation! (Oh, and despite the rhetoric about going after criminals, DHS’ own data shows that fewer than half of those captured have any criminal record at all, and only about 5% for violent crimes. Less than at a reunion of January 6th rioters.)

Where are our leaders, as our community is under assault? As our nation is engaging in neo-colonial undertakings around the world and alienating our friends and allies? As the rule of law is being dismantled in front of our eyes? Where the hell are our leaders?

Many would argue that the main responsibility of a CEO is to maintain the value of their company, to act in the best interests of its owners, the shareholders. Others would expand that to include other stakeholders such as customers and workforce. But the leader’s duty is to their company, first and foremost.

Even taking that view of a business leader’s obligations, it’s hard to argue that the ICE surges, and more broadly the Trump Administration’s upending of legal processes, are good for business. It can’t be good for business if employees and customers alike are afraid to leave their homes. It can’t be good for business if trust in institutions is further undermined. It can’t be good for business if contracts can be arbitrarily canceled, if economic policy is conducted by the whim of a president who is at best narcissistic and at worst unstable, if society is becoming further and further fragmented.

My first thought was that the silence is due to fear. This would be understandable, given the demonstrated tendency of Trump to use the full force of the federal government to punish anyone who disagrees with him. Indeed, unlike the business community, elected officials in Minnesota have been strong in their condemnations of the administration’s actions and they are paying for their courage. (Well, at least some of them, including Minneapolis Mayor Jacob Frey and Gov. Tim Walz. Not sure what’s happening on the other side of the aisle.)

Then, I discovered the writings of Ernst Fraenkel, a Jewish lawyer who fled Nazi Germany in 1938 for the US. In his book The Dual State, he noted that the fascist regime built its totalitarian power gradually. Prof. Aziz Huq writes in The Atlantic, “As Fraenkel explained it, a lawless dictatorship does not arise simply by snuffing out the ordinary legal system of rules, procedures, and precedents. To the contrary, that system – which he called the “normative state” – remains in place while dictatorial power spreads across society. What happens, Fraenkel explained, is insidious. Rather than completely eliminating the normative state, the Nazi regime slowly created a parallel zone in which ‘unlimited arbitrariness and violence unchecked by any legal guarantees’ reigned freely. In this domain, which Fraenkel called the “prerogative state,” ordinary law didn’t apply.” Like the proverbial frog in the pot of heating water, we don’t recognize when we have slipped over completely to rule by executive prerogative until it is too late. This is especially likely when the loss of legal guarantees applies only to marginalized groups: Jews and Communists in 1930s Germany, people of color in 2020s America. It is easy for those in positions of privilege and power to be in denial.

We do still have much of our normative state. Large companies are allowed to engage in mega-mergers, individuals are still allowed to challenge government actions in court, people file their taxes and get passports and vote. But make no mistake: what we are seeing in Minneapolis today is an example of the prerogative state. When George Floyd was killed, the legal system followed normal procedure and justice was done. When Renee Good was killed, the Department of Justice simply decided it was justified, they would not investigate, and they would not help the state of Minnesota do so either. Our companies may be able to conduct business more or less normally for now. But for how long?

I’m not suggesting we are on the verge of a full-fledged fascist dictatorship (though it is a possibility that is distressingly more realistic than I would have ever imagined). I am suggesting that unless something is done, further erosion of the rule of law is almost certain. And as we have seen throughout history, that can be successfully resisted.

Whether out of fear or denial, our business leaders have been silent. We need them to speak up. We need them to resist. Perhaps it is starting. Last week the Minneapolis Chamber of Commerce, not exactly a hotbed of radicalism, released a statement that, while measured, was clear in calling out the current harassment being conducted by ICE and the need for redress when government acts unconstitutionally. There’s still a lot of crickets, but I hope more voices will follow. Because as Dr. King also said, “In the end, we will remember not the words of our enemies, but the silence of our friends.”


Taking the Public Out of Public Health

January 13, 2026

It is perhaps an indication of what a nerd I am, but one of the thrillers I vividly recall from my childhood was a short movie I saw in 6th grade about the World Health Organization’s Smallpox Eradication Program. Seriously. I was spellbound by the story of health workers traveling around Bangladesh, Ethiopia, and Somalia, the last countries known to have the disease, tracking down reports of individuals who might have contracted the disease and then vaccinating everyone around them who was at risk.  They were on the brink of completely eradicating one of humankind’s deadliest foes – smallpox having nearly wiped out the indigenous population of the Americas after its arrival with Europeans in 1492, and killing an estimated 300 million people in the 20th century alone – and victory was finally declared in 1980. Man, what an exciting public health adventure!

Ask any expert about the greatest public health advances in human history, and vaccines are almost sure to be in the top three, along with clean water/sanitation and maternal health. And now, as with so many other things in 2020s America, we appear to be moving backwards. Earlier this month, the US Department of Health and Human Services dropped 6 vaccines – hepatitis A and B, rotavirus, COVID-19, influenza, and meningococcal disease – from the list of those routinely recommended in childhood, reducing that number from 17 to 11. It’s a move that is frankly bewildering. It is opposed by essentially all experts in child health and infectious diseases, including the American Academy of Pediatrics (AAP) and the Center for Infectious Disease Research and Policy (CIDRAP).

Now, in the spirit of full disclosure, I have had a certain skepticism about routine vaccines in the past. When I started my pediatric training in the mid-1980s (yes, I am that old), we basically gave 3 shots for a total of 7 diseases, a recommendation that had been in place for nearly 20 years. During my residency one more was added, for Haemophilus influenzae type b (Hib). Then starting in the mid-1990s, the number seemed to explode. By 2005 there were 13 diseases on the schedule. As a pediatrician, and as a parent, I had some questions. The earlier vaccines were for things that had significant mortality or morbidity. Smallpox vaccine is kind of a no-brainer. And stories of polio panics and the thrilling race to deliver diphtheria antitoxin to Nome via dog sled to contain a deadly outbreak, now fortunately historical artifacts, showed just how dangerous and terrifying those illnesses were. But chicken pox? Flu? Do we really need vaccines for those?

Decisions like those require a careful balancing of many considerations. How effective are the vaccines, what are the side effects, what are the costs? How many and which people are at risk of the diseases, and what is the morbidity and mortality? What are the treatment options? In short, what are the risks and benefits? And as importantly, who bears the risks, and who reaps the benefits? Vaccines are one of those things where there is both an individual and a societal benefit. Herd immunity refers to the concept that for most infectious diseases, a certain pool of susceptible individuals is required to maintain the disease in the population. If enough people are immune, either from prior infection or from vaccines, then the disease ceases to spread in that population. Those who cannot be vaccinated or develop natural immunity (the immunosuppressed, those with cancer, etc.) are thus protected. Routine immunization may therefore be justified even when few people are at risk of more severe disease, if costs and risks are sufficiently low and herd immunity can be expected. The newer vaccine recommendations made sense in that context.
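To make the herd immunity math concrete, here is a rough back-of-the-envelope sketch – my own illustration, not anything from CDC or ACIP. The classic approximation says that if a disease has a basic reproduction number R0 (the average number of people one case infects in a fully susceptible population), spread stalls once about 1 - 1/R0 of the population is immune. The R0 values below are commonly cited rough figures:

```python
# A minimal sketch of the classic herd immunity threshold, HIT = 1 - 1/R0.
# The R0 values are rough, commonly cited figures, used here for illustration only.

def herd_immunity_threshold(r0: float) -> float:
    """Approximate fraction of the population that must be immune to halt spread."""
    return 1.0 - 1.0 / r0

for disease, r0 in [("measles", 15.0), ("polio", 6.0), ("seasonal influenza", 1.5)]:
    pct = herd_immunity_threshold(r0)
    print(f"{disease}: R0 ~ {r0} -> roughly {pct:.0%} of the population must be immune")
```

The punchline for policy: the more contagious the disease, the closer to universal immunization we need to get before that indirect, societal benefit kicks in.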

These are not simple decisions.  They require lots of data, and a balanced view across many stakeholders with varying perspectives.  Which is exactly how these decisions have been made since the mid-1960s, when the Advisory Committee on Immunization Practices (ACIP) was established by the US Surgeon General.

Until now.

It was bad enough that DHHS Secretary Robert F. Kennedy, Jr., fired all of the members of ACIP and replaced them with individuals of varying levels of expertise, nearly all with demonstrated anti-vaccine bias. But the most recent de-listing of vaccines was done without even their input. It was done by the Secretary to comply with an executive order from the White House.

Anti-vaccine activists like Kennedy often cite safety concerns. Clearly, safety is paramount. It is one of the major things considered before vaccines are approved. And with post-licensing monitoring and the provisions of the National Childhood Vaccine Injury Act of 1986, vaccines may be among the most closely scrutinized pharmaceutical products in the country. Yet no new evidence of significant safety concerns has emerged for previously recommended vaccines. Indeed, the most widely circulated concerns, such as MMR-linked autism, have been so thoroughly debunked it’s hard to believe even hard-core conspiracy theorists truly believe them.

Safety is not really the issue here. I suspect this is part of a larger agenda. The most recent announcement emphasized the need for individual decision-making about vaccines. Each person needs to decide what is right for them and their child. But this has always been an element of medical care. These vaccines are recommended for routine use. Individuals have the option of not following those recommendations, and often do. Still, a recommendation that all children should receive a particular vaccine carries far more weight with deciding parents than a recommendation that a vaccine merely be considered. So what is the rationale for keeping some of these vaccines in the recommended-for-every-child category, while reclassifying others as recommended only on the basis of shared clinical decision-making?

A look at some of the diseases that have been demoted gives some hints. One of them is COVID-19. This has become a libertarian lightning rod for reasons that have nothing to do with the safety of the vaccine. But a large part of the rationale (though not the only one) for giving the vaccine to all children, even though the majority of children are not themselves at high risk, is to reduce the chance that a child will spread the disease to higher-risk contacts. The compelling rationale for universal COVID-19 vaccines for kids is societal protection, rather than protection of the individual. The same is true for hepatitis A (another reclassified vaccine).

Even more telling is the case of hepatitis B. The most common routes of infection in the US are perinatal transmission from an infected mother, and sexual or bloodborne transmission later in life. The recommendation for universal newborn immunization was based in part on the fact that many infected individuals are asymptomatic, and therefore unaware they are at risk of transmitting the virus to their infant. It also recognizes that whether we like it or not, adolescents frequently experiment with risky behaviors; early childhood vaccination offers them protection. I suspect that removing hep B from the recommended-for-all list is a way to further stigmatize and marginalize at-risk groups.

By putting more of the emphasis on what the risks and benefits are for the individual, rather than for the individual and others, DHHS and CDC appear to be engaging in an ideologically motivated effort to de-emphasize the societal benefit of vaccines. It’s taking the public out of public health.


I’m Back

January 8, 2026

Well, it’s been a minute… It’s hard to believe my last post here was over 3 years ago. Not that I’ve been idle. I’ve had an “official” blog at Children’s Minnesota, and I’ve written a book, Saving Our Kids, that lays out a public health approach to the gun violence crisis that I talked about in my last Starting With Curious post. But to be honest, I felt that despite the disclaimers about the views here being my own, it was difficult to keep it completely separate from my role as a CEO. I thought having this blog on hiatus was best for my organization.

But as of this past summer, I am retired! I can more freely write about things affecting the health and wellbeing of kids. And man, is there a lot to write about.

I’m back!


Guns Move Into First Place

April 21, 2022

Recent studies in Pediatrics and the New England Journal of Medicine show that thanks to a sharp decline in deaths from motor vehicle crashes, and an increase in gun deaths (especially suicide), in 2019 guns became the leading cause of death for all US youth ages 0-19.  More than childhood cancer, drowning, or poisoning – combined.  This has generated some media attention, which I have mixed feelings about; after all, firearm injury has been the leading cause of death for Black youth since 2001.  Also, while I’d love to think this might jolt policy-makers into action, it is unlikely, despite the inevitable hand-wringing, that we will see any of the common-sense measures to reduce this epidemic such as those proposed by the study authors – the same types of regulatory measures that have dramatically improved motor vehicle safety while still allowing free access to automobiles.  Perhaps it’s time to follow the advice of singer-songwriter Cheryl Wheeler.


Why Black History Month Matters

February 28, 2022

In 1721, Boston was facing an outbreak of smallpox.  Onesimus, a Black man originally most likely from what is now Ghana, then kidnapped and enslaved, told his enslaver of a practice common in parts of Asia and Africa for centuries, of inoculating an individual with matter from a healing smallpox lesion to prevent them from getting the disease, a practice known as variolation.  A physician was convinced to try it, and it helped mitigate the outbreak.  This was 75 years before the much more widely known English physician Edward Jenner developed the technique of preventing smallpox by inoculation with matter from a cowpox lesion (also known as vaccinia, hence the name vaccination).

Many people see Black History Month – established in 1976 and observed in February – as a chance to learn about people like Onesimus and his achievement.  I certainly appreciate learning about and celebrating the many ways that Black women and men, both individually and collectively, have enriched and contributed to the history of this nation. But its significance is far greater than that.  It is a time to reflect on why, for example, I have known about Edward Jenner since middle school but only learned about Onesimus this month. It’s a time to reflect on why we need a Black History Month.

I recently finished Caste: The Origins of Our Discontents, a 2020 book by Isabel Wilkerson. (This is where I heard the story of Onesimus.)  She provides a compelling framework for understanding the answer to that question, and for understanding much about the persistence of systemic racism in the US.  In short, she draws on the work of numerous scholars to show that America is a caste society, one where the defining characteristic determining caste is race.  For centuries, Blacks were kept in the lowest caste by legal means: at first slavery, then a host of discriminatory laws collectively referred to as Jim Crow, economic and physical segregation, and acts of sanctioned violence. These were reinforced by many social conventions designed to buttress Blacks’ inferior status.  This included denigrating the intelligence or attractiveness of Black people through humor and popular culture, and enforced deference and subservience on the part of Black people toward Whites.

Since the 1950s and 60s, when the legal means of suppressing Black people largely evaporated, it has been left to these social conventions – the many insults, overt and subtle, we now refer to as microaggressions – to prop up the caste system.  I have to admit that I have wondered why microaggressions have been such a focus of antiracist action.  After all, it’s not as bad as segregated facilities and poll taxes, right? Understanding the outsized role microaggressions play in maintaining White supremacy in an era where the laws are (at least technically) race-neutral helped me appreciate the importance of confronting them.

The fact that Edward Jenner is widely known while Onesimus is generally ignored is one of those microaggressions. Highlighting the historical achievements of White people while downplaying those of Black women and men is a way to reinforce the heinous myth of White superiority and Black inferiority.  Admittedly, Jenner’s technique was far safer and represented an improvement.  But why did the supposedly inferior races in Asia and Africa have a method for preventing smallpox for hundreds of years before the Europeans figured it out?  Better to distort the historical record than to raise an uncomfortable question like that.  Black History Month is a step toward correcting the record.


Discomfort

January 26, 2022

Dear Dr. Watson:

As Dean of the medical school, you have a responsibility to train the next generation of health care providers.  I believe the time has come for some significant changes in our curriculum.

First, we should prohibit physicians from asking questions about illness and health.  We are taught, “first, do no harm.”  But talking about their symptoms might make some people uncomfortable.  They might feel guilty about the fact that they still smoke, or experience mental anguish when confronted with the need to reduce their salt intake.

Similarly, we should not be prescribing treatments that have any side effects, even if it is a minor discomfort.  It is not right to make someone uncomfortable solely because they have an illness.

You could argue that if we don’t ask questions about health or provide treatments, patients would not get healthy.  But honestly, I think we are making too big a deal about health.   These days, it seems everyone is trying to make everything about health. You can’t turn on the TV or the radio, or look at social media, without someone raising the issue of health. Health might have been an issue a long time ago, but now that we have antibiotics, illness isn’t a problem anymore.  People need to just get over it.

Now that we have finally stopped making people uncomfortable by talking and teaching about race, thanks to legislators in Tennessee, Texas, Wisconsin, and other places, it’s time to stop making people uncomfortable about their health status.


MLK’s Legacy Misappropriated

January 17, 2022

Today is a day set aside to remember the Rev. Dr. Martin Luther King, Jr., and to reflect on his legacy.  It’s also important to call out when that legacy is being misrepresented, as has been happening by those who want to limit the teaching and discussion of the subject of race in schools.

Over the past couple of years, and particularly following George Floyd’s murder in 2020, the notion of systemic and institutional racism became a topic of national conversation.  Recently there has been a backlash, as school boards and state legislatures have passed a number of bills intended to restrict what is taught about race and how.  Tennessee Senate Bill 0623, for example, prohibits teaching that could lead a student to “feel discomfort, guilt, anguish or another form of psychological distress solely because of the individual’s race or sex.”  In Texas, House Bill 3979 forbids teaching that “slavery and racism are anything other than deviations from, betrayals of, or failures to live up to, the authentic founding principles of the United States.”  It also specifically bans assigning the 1619 Project, based on a series of articles in The New York Times Magazine, as a required resource.

How American history and the topic of race are taught is a legitimate and important area of discussion.  But what is disturbing is that proponents of these laws are citing none other than Dr. King to support their stance.  Here is a representative quote: “Martin Luther King once said that he had a dream that his grandkids would be judged not by the color of their skin but by the content of their character. But what you have going on … they’re trying to make everything about skin color.”  And another: “Critical race theory goes against everything Martin Luther King has ever told us, don’t judge us by the color of our skin, and now they’re embracing it.”

As is true of most things in life, context is everything. Yes, Dr. King spoke hopefully and eloquently of a day when color will not be relevant.  But he had no illusions that such was the case in his own time.  Consider the opening of that same speech:

“Five score years ago a great American in whose symbolic shadow we stand today signed the Emancipation Proclamation. This momentous decree was a great beacon light of hope to millions of Negro slaves who had been seared in the flames of withering injustice. It came as a joyous daybreak to end the long night of their captivity. But 100 years later the Negro still is not free. One hundred years later the life of the Negro is still badly crippled by the manacles of segregation and the chains of discrimination. One hundred years later the Negro lives on a lonely island of poverty in the midst of a vast ocean of material prosperity. One hundred years later the Negro is still languished in the corners of American society and finds himself in exile in his own land. So we’ve come here today to dramatize a shameful condition.”

He went on to call the Declaration of Independence and Constitution a “promissory note,” pledging the blessings of life, liberty, and the pursuit of happiness to all people.  But, King said, “It is obvious today that America has defaulted on this promissory note insofar as her citizens of color are concerned. Instead of honoring this sacred obligation, America has given the Negro people a bad check which has come back marked ‘insufficient funds.’” It was only after he laid out what he saw as the reality of persistent racism in the present that he went on to share his beautiful, hopeful dream of a better future.

If we look at some of Dr. King’s other speeches and writings, the context becomes even clearer.  In his 1967 speech “The Other America,” for example, he notes “we will never solve the problem of racism until there is a recognition of the fact that racism still stands at the center of so much of our nation, and we must see racism for what it is.”  However, in “Where Do We Go From Here,” written that same year, he laments that not everyone sees the same reality. “Whites, it must frankly be said, are not putting in a similar mass effort to reeducate themselves out of their racial ignorance. It is an aspect of their sense of superiority that the white people of America believe they have so little to learn.”

Yet learn we must, adults and children, White and Black. We must understand our American history and our American present if we hope to create a better American future of the type Dr. King dreamed of. How we do that – what we teach and how – is a complex subject, and a debate we should have. But that debate needs to be honest.  Dr. King may have had a wonderful dream, but he was not subject to illusions.  We should not misuse his words to defend a position he would not support. Today is a day to honor Dr. King’s legacy. Let’s make sure we get it right.


COVID Orphans

December 27, 2021

Since early in the pandemic, there has been a myth that kids are not affected.  It is true that severe COVID-19 illness, including the need for hospitalization, is less common among children than adults.  That doesn’t mean it doesn’t happen: we have been averaging around 15 kids in the hospital on any given day, with up to a third requiring intensive care. And it remains unclear whether MIS-C (multisystem inflammatory syndrome in children) will have the potential for long-term effects like its cousin, Kawasaki disease.

But there are known serious and long-lasting impacts of COVID-19 on kids.  First, the rate of overweight and obesity has skyrocketed: the rate of increase in BMI among children under 19 years has doubled in the past 2 years compared with pre-pandemic increases (which were already alarming).  There has also been a dramatic acceleration of mental health problems in children and teens, including eating disorders, anxiety, and depression.  At our hospital we have seen a more than 30% increase in emergency department visits for acute mental health issues, and a 50% increase in children and adolescents requiring hospitalization for a mental health condition, similar to our peers across the country.  And keep in mind, in the year before the pandemic suicide was already the second leading cause of death among youth ages 10-24.  This has led the American Academy of Pediatrics and the Children’s Hospital Association, along with other organizations, to declare a mental health crisis for kids.  Having seen it firsthand, I think the often-overused word “crisis” is not too strong here.

On top of this, we now know of one more devastating effect of this pandemic: a recent study in Pediatrics estimates that over 140,000 US children – 1 in 500 – have lost a parent or primary caregiver to COVID-19.  As with the mental health crisis, kids from underserved communities, including Black, Latino/a, and Indigenous youth, are disproportionately represented among these COVID orphans.

These impacts on the physical, mental, and social well-being of kids are of the type that are likely to be life-long.  We keep wondering when this pandemic will “end.” But for too many kids, the answer is not for a very, very long time.


DRIVE for Results

August 27, 2021

It may be true, as Heraclitus said, that change is the only constant, but the pace of change is variable.  The current environment is one of exceptionally rapid and momentous change.  A key lesson from the COVID-19 pandemic has been the importance of agility for organizations of all sizes.  For large healthcare organizations, which tend to be relatively change-averse and less than nimble, this has created unease and dissatisfaction with the speed and quality of decisions affecting both current and future operations.

What gets in the way of rapid and effective decisions?  Often, it is lack of clarity about how the decision is to be made.  For important decisions, especially in large or complex settings, it is most helpful to have an explicit framework to guide the process.  One that I have found useful is described by the acronym DRIVE.

Decision.  What exactly is being decided? Who has the ultimate authority to make the decision?  Is it an individual or a group, and if a group, how will the decision be made – consensus, majority vote, etc.?  Are there any parameters that will define limits on the decision (e.g., budget, regulatory considerations)?

Recommendation.  Often the decision will be to accept or reject a recommendation, or to choose one of several recommendations.  Who will be charged with making the recommendation and presenting it to the decision-maker?  (For a relatively straightforward issue, the recommendation is likely to be made by the decision-maker themselves.)  What is the timeframe for developing the recommendation?  Since the drafting of a plan (or several plan options) is a creative process, the recommending body should have the right expertise to inform the product, but be small enough that the work is not slowed down.  The recommendation should include a summary of the input provided (see below), and it may be helpful to have someone play the role of “devil’s advocate” to ensure that the full range of input is considered.

Input.  This may be the most critical element to define.  Whose input will be sought as the recommendation is developed and the decision ultimately made?  This group can and should be broader than the one drafting the recommendation, and should be able to reflect the perspectives of all key stakeholders, as well as the appropriate content expertise.  On the other hand, it need not be exhaustive; those providing input should be able to speak on behalf of individuals or areas other than their own.  For example, a decision regarding changes in the operating suite should be informed by input from both employed and independent surgeons, as well as those who do primarily inpatient vs. outpatient cases, but not necessarily every single surgeon on staff.  Explicitly identifying what input is being sought and from whom will help avoid information gaps during the process, while heading off complaints about missing perspectives from people who may disagree with the ultimate decision.

Those whose input is sought need to understand their role, which is to provide information that may be relevant to a decision that has yet to be made.  While not every piece of input will be incorporated into a decision, every piece of input should have the potential to influence that decision. If the recommendation is already finalized or the decision made, seeking additional “input” would be disingenuous.

Vetting.  The development of the recommendation is often an iterative process.  After an initial round of input, a draft recommendation is developed, and can be refined based on additional rounds of input.  Once the recommendation is final (or close to it), it is often useful to vet it with another group of stakeholders before the decision is made.  This could include a subset of those who provided the earlier input, or others not previously involved.  The purpose of this vetting is twofold.  The first is to prepare the recommendation to be brought forward to the decision-maker for action.  It provides a final opportunity to ensure that no important perspectives were omitted, and to gauge reaction from key stakeholders.  If there are any whose assent will be critical, this is a good time to solidify that.  The second purpose is to start to get thoughts on how the decision will be communicated, and to identify key execution risks that need to be considered.

Execution.  Any decision is only as good as its execution.  Who will be responsible for carrying out the decision?  What key dependencies are there?  What risks have been identified, and what are the plans for mitigating them?  How will the progress be monitored?

While this degree of planning and specification may seem like overkill, it has several important advantages.  First, while we cannot eliminate all the sources of bias that come into play when human judgment is involved, a rigorously defined process can help minimize their effect. Second, as mentioned above, laying out the process including who was involved at each step can aid in obtaining buy-in from those who may not agree completely with the decision.  Finally, having a process greatly facilitates delegation of decision-making. Once the delegator and the delegee have agreed on the various steps, the latter should be empowered to proceed without fear of being second-guessed on how they came to their decision.

Here is an example I recently went through. Due to truly unprecedented summer patient volumes, and pandemic-related staff turnover, we were facing staffing challenges for many roles in various parts of the organization.  Several executives were developing plans for the workforce in their area, but a consistent, organization-wide approach was needed, and quickly.  Here was the process:

D. The COO would be the ultimate decision-maker on a plan to add staff in all patient-facing roles where we had identified shortages, with immediate, short-, and long-term components.  As long as the plan was consistent with our contract obligations, and it did not put us significantly at risk for failing to meet our financial goal of break-even for the year, she could make the decision without my approval.

R. The recommendation would be drafted by a subset of the executive leadership team (COO, CFO, CNO, CHRO), to be ready for a final decision by the COO within 5 days.

I.  In addition to the recommending group, input would be obtained from labor relations, the senior director of talent strategy, nursing leaders, legal, finance, and the equity and inclusion team.

V.  Draft recommendation would be vetted with the full executive leadership team (me and my direct reports).

E.  Execution would be org-wide, with particular involvement by the HR team.  Key measures of progress and success would be a decrease in critical staffing shortages, new hires (including position fill rates, time to fill, and workforce diversity), and budget variances.

With this clarity, those involved had their marching orders, and a plan was developed and a decision made within a week – practically warp speed for us!  There was widespread buy-in from HR, front-line managers, and the unions, and based on anecdotal information obtained during rounding, appreciation from front-line staff that the senior team was addressing their primary source of stress.
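For those who like to make frameworks like this concrete, here is a minimal sketch of how a DRIVE charter might be written down as an explicit artifact before work begins – purely my own illustration (the structure and field names are invented, not part of the framework itself), using the staffing example above:

```python
# An illustrative sketch of a DRIVE charter as a simple data structure.
# The field names are invented for illustration; DRIVE itself is just a set
# of questions to answer explicitly before the work begins.
from dataclasses import dataclass, field

@dataclass
class DriveCharter:
    decision: str                # D: what is being decided
    decision_maker: str          #    ...and who has final authority
    recommenders: list[str]      # R: who drafts the recommendation
    recommendation_due: str      #    ...and by when
    input_sources: list[str]     # I: whose input will be sought
    vetting_group: list[str]     # V: who reviews the near-final draft
    execution_owner: str         # E: who carries out the decision
    success_measures: list[str] = field(default_factory=list)

# The staffing example above, encoded as a charter:
charter = DriveCharter(
    decision="Staffing plan for patient-facing roles with identified shortages",
    decision_maker="COO (within contract and break-even constraints)",
    recommenders=["COO", "CFO", "CNO", "CHRO"],
    recommendation_due="5 days",
    input_sources=["labor relations", "talent strategy", "nursing leaders",
                   "legal", "finance", "equity and inclusion team"],
    vetting_group=["full executive leadership team"],
    execution_owner="HR team",
    success_measures=["fewer critical staffing shortages", "position fill rates",
                      "time to fill", "workforce diversity", "budget variance"],
)
print(f"Final decision rests with: {charter.decision_maker}")
```

The point is not the code, of course; it is that every element must be filled in explicitly, which is exactly what makes delegation and buy-in possible.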


Don’t Deny It

June 30, 2021

United Healthcare became the most recent insurer to announce a policy of denying payment for what is deemed “unnecessary” emergency department visits.  Within days of its announcement, it said it was delaying the policy change in response to an outcry from healthcare providers and patient advocates. (Anthem put forward a similar policy in 2018, which it subsequently modified substantially under pressure.) While the goal of having the right care in the right place at the right time and at the right cost is reasonable and necessary, the punitive approach being pursued by payers is ill-conceived, unfair, and likely to be counterproductive.

First, let’s be clear about the problem.  I’ve seen no data to suggest that any significant proportion of ED visits are unnecessary in the sense that the patient didn’t actually need medical attention.  Rather, a substantial fraction of visits to the ED (estimates range from around 20% to over 60%) are for problems that could be managed in a different setting (e.g., primary care, urgent care). Because charges for ED visits are typically far higher than for those other settings (I say charges because there is some disagreement about whether the cost of such visits is actually higher, but that’s a blog for another time…), having that care provided in a different setting when appropriate could decrease health care spending while maintaining the effectiveness of the care and perhaps improving the experience.

So why do people go to the ED when they could go someplace else?  Policies based on a financial disincentive seem to be predicated on the idea that people are intentionally misusing the system: I know I could go someplace else, but my insurance is paying for it, so what the heck.  But research has shown that ED visits for non-urgent problems are correlated largely with lack of accessible, quality alternatives; lower levels of health literacy; or a true belief that something is or at least might be an emergency. (Chest pain is an emergency whether it ends up being a heart attack or indigestion.)  In all my years of practicing emergency medicine, I encountered far more people being gamed by the system than those trying to game it.

Which is why such policies are ill-conceived – they don’t address the root cause, which is lack of real access to lower-priced options.  They are also unfair.  Unfair to providers, because emergency departments are legally and morally obligated to treat all who come to them.  Threatening not to pay them for the services they provide puts them at risk.  Unfair to patients, because non-payment may prevent those without alternatives from getting care they need.  And barriers to accessing other sites of care are more prevalent among those with public insurance, those living in higher-poverty neighborhoods, and those in neighborhoods with more people of color. Denial of payment becomes one more source of inequity.

What’s worse, such policies are unlikely to achieve their goal of reducing spending, and may even increase it.  To start with, the vast majority of denied claims end up getting paid.  For example, in 2020 when Aetna was sued for inappropriate denial of emergency claims in California, the court found that 93% of the denied claims should have been allowed according to Aetna’s criteria.  However, the process requires the provider to appeal, generating a ton of paperwork and additional expense.  Second, the process for validating the “appropriateness” of the claim is likely to drive unnecessary utilization.  Approval of the claim is based in part on “the intensity of diagnostic services performed” and treatments provided (for example, visits in which IV medications or fluids are given are automatically approved).  We know that a good deal of diagnostic testing is unnecessary; this policy would incentivize additional testing as a way to justify the visit.  It would also incentivize therapeutic escalation – for instance, IV fluids instead of the equally effective oral rehydration.  All of this would actually add to the cost of care.

Excessive spending for care in emergency departments that could reasonably and safely be provided elsewhere is a problem.  Better and more equitable access to less expensive alternatives (effective triage lines, expanded primary care and urgent care hours, virtual care) would be a better approach than punishing patients and providers by denying payment.