Stanford Medicine (the university’s medical school and hospital) got 5,000 doses of COVID-19 vaccine. Not enough for everybody – nowhere is there enough for everybody yet – but it was a great start. Stanford’s in Santa Clara County, where cases are soaring.
On Tuesday, December 15th, they announced they’d be giving the first shots. Huzzah! They notified people who were slated to get the vaccine.
They had formed a committee which created an algorithm to distribute the vaccine in an “ethical and equitable” way. But apparently not an intelligent one. More about algorithms later.
Maybe you heard. It went to hell. By Tuesday, the administration realized they had messed up. Their darling algorithm had left out most residents and fellows, even though these are the doctors providing medical care up close and (cough cough) personal. To people frighteningly sick with COVID-19. Out of 1,300 residents, Stanford Medicine had plans to vaccinate 7 in the first round. (What about nurses?)
Oh well, that’s how it goes. Too late, right? They explained: the algorithm had given priority to older staff, because age increases vulnerability to COVID-19. This included older staff who didn’t see patients, who were working from home. The algorithm had also been told to distinguish between staff who worked in more dangerous units and those who worked in units with fewer COVID-19 cases. But since residents aren’t assigned to particular units, no priority for them! (What about nurses?)
The residents wrote an angry letter. On Friday they held a protest that made national news. “HEALTHCARE HERO Support is ZERO” said one widely-reported sign. In the letter the residents also said it was important to consider early vaccination of nurses, therapists, janitors, and other support staff who may be in danger, and who also seemed to have been left out of the algorithm.
Apologies commenced.
A reply to the letter said “This should not have happened — we value you and the work you do so highly. We had been told that residents and fellows would be in the first wave. This should never have happened nor unfolded the way it did.” (We didn’t do anything! It just… unfolded!) Stanford said they would change the list. They said senior faculty would be asked if they’d give up their slots to staff actually working with COVID19 patients.
In a video, Dr. Lawrence Katznelson, Associate Dean of the Medical School, said “I feel awful that this happened…. It was no one’s malicious fault, but it was a bad outcome.”
And here’s a statement of apology, edited for space:
…We take complete responsibility and profusely apologize to all of you. We fully recognize we should have acted more swiftly to address the errors that resulted in an outcome we did not anticipate. We are truly sorry.
As you know, we formed a committee to ensure the vaccine’s equitable distribution. Though our intent was to ensure the development of an ethical process, we recognize that the plan had significant gaps. We also missed the opportunity to keep you more informed throughout this process.
We are working quickly to address the flaws in our plan and develop a revised version. ….we will provide continuous communication in an effort to engage our entire community in this process….
…We deeply value each and every member of our community and the outsized contributions you make to our mission every day – especially during this particularly challenging year.
We take complete responsibility for the errors in the execution of our vaccine distribution plan. Our intent was to develop an ethical and equitable process for distribution of the vaccine. We apologize to our entire community, including our residents, fellows, and other frontline care providers, who have performed heroically during our pandemic response. We are immediately revising our plan to better sequence the distribution of the vaccine.
We see they stopped talking about the algorithm. Now it’s a “plan.”
Is that apology good? They do say they’re sorry for the lousy (death-dealing) plan. They do apologize for not acting to fix it right away. They’re acting now.
But they don’t name what they did, or its impact. Significant gaps? Could mean anything. They don’t say these gaps, flaws, and errors were bound to harm those working the hardest to save COVID-19 patients, those in the most danger.
They say they take complete responsibility, and they admit they should’ve kept people informed about the planning. But they also plead that their intentions were good! All they ever wanted was to be fair! They don’t talk about how things went wrong, how the algorithm produced such a rotten list.
Okay, algorithms. Impressive word, maybe intimidating, something to do with math?
Although algorithms can use complicated mathematics and/or computer coding, when you get down to it, an algorithm is just a set of rules for doing something. Used by a person or a computer. A recipe is an algorithm. “Women and children first” is an algorithm if you use it to load the lifeboats.
You could give a computer that algorithm and a list of passengers on an ocean liner – a database – and instruct it to spit out an order of people to board the lifeboats. (And if you didn’t think of the possibility, it’s going to say a month-old baby can get on the lifeboat long before its father can – too bad if it doesn’t have a mother along and has to board alone. Good luck, baby!)
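Here’s that lifeboat rule written out as a tiny program – a toy sketch, with invented passengers, just to show that an algorithm does exactly what its rules say and nothing more:

```python
# Toy sketch: "women and children first" as a boarding rule.
# Passengers are invented for illustration.
passengers = [
    {"name": "Mrs. Gray", "age": 60, "sex": "F"},
    {"name": "Baby Lee",  "age": 0,  "sex": "M"},  # one month old
    {"name": "Mr. Lee",   "age": 30, "sex": "M"},  # the baby's father
]

def boarding_group(p):
    # The rule, exactly as stated. Lower group number boards first.
    return 0 if p["sex"] == "F" or p["age"] < 13 else 1

for p in sorted(passengers, key=boarding_group):
    print(p["name"])
# Baby Lee boards in the first group; Mr. Lee waits behind.
# Nowhere did we tell it "keep infants with a caregiver," so it doesn't.
```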
So Stanford Medicine’s committee constructed an algorithm that said, among other things, “give priority to older people.” It did not say “don’t give priority to dermatologists.” It did not say “if they’re working from home, they’re not a priority.” It did not say to give priority to young staff working extra ICU shifts to cover the caseload. We don’t know what else it said. We don’t know if members of the algorithm committee got listed.
(‘Give priority to vulnerable older people’ is a fine idea, and an early part of the vaccine rollout across the country. But notice that this vaccine batch was sent to Stanford Medicine. Not to the law school, not to the business school, not to any of the other departments at Stanford that have vulnerable older faculty. It was sent to the place where sick people are being cared for. Hint hint.)
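To make the mechanism concrete: we haven’t seen Stanford’s actual formula, but here’s a hypothetical sketch of how “age points plus unit-exposure points” can quietly zero out the residents. Every number and unit name below is invented:

```python
# Hypothetical sketch, NOT Stanford's real algorithm: a score built from
# age plus the COVID exposure of an assigned unit. All values invented.
UNIT_COVID_EXPOSURE = {"ICU": 0.9, "ER": 0.8, "dermatology": 0.1}

def priority_score(age, assigned_unit):
    age_points = age / 100.0                      # older = higher score
    # Residents rotate and have no single assigned unit, so the lookup
    # quietly contributes zero for them. No error, no warning.
    exposure_points = UNIT_COVID_EXPOSURE.get(assigned_unit, 0.0)
    return age_points + exposure_points

print(priority_score(66, "dermatology"))  # 0.76 – could be working from home
print(priority_score(29, None))           # 0.29 – resident on the COVID wards
```

Each rule looks defensible on its own; together they put the work-from-home dermatologist ahead of the resident intubating COVID-19 patients.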
Initially, they blamed the algorithm. ‘We love you and your work, but the list of who gets vaccinated is AN ALGORITHM AND MUST BE HEEDED.’
Algorithms are assumed to be impartial. If you’re not on the list, it’s not because the algorithm is hostile, it’s BECAUSE SCIENCE.
Ahem. Garbage in, garbage out. Algorithms function by following rules, and if the rules for making a list are flawed, the list will be flawed.
They can be racist – if you tell an algorithm to pick job candidates who resemble successful past hires, and you never hired black people before… the algorithm will perpetuate that. Even if that’s not what you meant! At all! (Highly recommended, very readable, not responsible for our errors: Weapons of Math Destruction, by Cathy O’Neil.)
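As a sketch of how that happens – invented data and a naive matching rule, illustrating the mechanism rather than any real hiring system:

```python
# Sketch: "pick candidates who resemble past hires." If every past hire
# came from one school and one zip code, look-alikes win forever.
past_hires = [
    {"school": "State U", "zip": "94301"},
    {"school": "State U", "zip": "94301"},
]

def resemblance(candidate):
    # How many attributes match the most-similar past hire.
    return max(sum(candidate[k] == h[k] for k in h) for h in past_hires)

applicants = [
    {"school": "State U",      "zip": "94301"},
    {"school": "City College", "zip": "60623"},  # never hired from here
]
for a in sorted(applicants, key=resemblance, reverse=True):
    print(a["school"], resemblance(a))
# State U 2, City College 0 – the history is the verdict.
```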
Anyway, if you create an algorithm, you should check the results it produces. Proofread it. Maybe tinker with it, until it MAKES SENSE. Not just revere it because algorithm is such a cool word.
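What does “proofreading” an algorithm look like in practice? It can be as simple as asserting properties the output must have before anyone acts on it. A minimal sketch, with made-up names and thresholds:

```python
# Sketch of a sanity check on an allocation list before it ships.
# Function names and the 50% figure are made up for illustration.
def proofread(vaccine_list, residents, frontline_staff):
    listed = set(vaccine_list)
    assert len(listed & residents) > 0.5 * len(residents), \
        "residents nearly absent from the first round - rules are broken"
    assert listed & frontline_staff, \
        "no frontline staff made the list at all"
```

Seven residents out of 1,300 fails that check instantly – no protest signs required.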
The apology acknowledges that the process produced a flawed list, and the flaws weren’t noticed, and when they were, they were at first brushed off. But it doesn’t say they’ll look into what was wrong with the process.
We’ll make a new list! We’ll let you look at it! All fixed!
Image Credits: Photo: Robert Skolmen. Creative Commons Attribution-Share Alike 3.0 International license.
Has there been any follow-on to this?
Oh yes. Here’s a San Francisco Chronicle article where they actually got to see the algorithm, and compare it with, say, that of UCSF:
https://www.sfchronicle.com/health/article/How-Stanford-s-vaccine-algorithm-caused-a-major-15824918.php
They quote Ziad Obermeyer, who wrote about this for MIT Technology Review, as saying “…the goal of algorithms should be to make… prioritization fair, transparent, and defensible. No one should hide behind ‘an error in the algorithm,’ as if the algorithm has a mind of its own, when they literally designed the algorithm.”
Stanford Medicine seems to worship algorithms. My spouse (70+, male) sees a cardiologist in the Stanford system. The cardio practice uses an algorithm to determine who should and should not be on statins. This algorithm is constructed in such a way that given his age, there is literally no combination of other information that can be fed in that won’t produce a recommendation that the patient should go on a statin. Excellent cholesterol and blood pressure numbers? Doesn’t matter, the almighty algo still says “take a statin.”
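(For what it’s worth, that behavior is easy to sketch. This is a toy score, not the real clinical calculator: once the age term alone can clear the treatment threshold, no other input matters.)

```python
# Toy illustration only, not the actual statin guideline math:
# if the age term alone exceeds the threshold, nothing else matters.
def statin_recommended(age, cholesterol_ok, bp_ok):
    risk = age * 0.15            # age term dominates (made-up weight)
    if cholesterol_ok:
        risk -= 1.0              # best-case credit for good numbers
    if bp_ok:
        risk -= 1.0
    return risk >= 7.5           # made-up threshold

print(statin_recommended(72, True, True))    # True  (10.8 - 2.0 = 8.8)
print(statin_recommended(45, False, False))  # False (6.75)
```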
Artificial intelligence is no match for natural stupidity!