Should governments delay the second dose of COVID vaccines, administer lower dosages, or otherwise depart from protocol in order to vaccinate more people earlier?
Overview of the dilemma
Nir Eyal, Henry Rutgers Professor of Bioethics, Rutgers Center for Population-Level Bioethics
The three vaccines for COVID-19 authorized for emergency use in some Western countries as of mid-January 2021—by Moderna, Pfizer (and BioNTech), and AstraZeneca—were tested for two doses, 3-4 weeks apart. In recent weeks, however, several options for lower dosing, spaced vaccinations, or mixing and matching different vaccines have been discussed:
I. One and a half doses: By mistake, AstraZeneca administered to some participants a regimen of 1.5 doses instead of 2, and these participants did even better than other groups, constituting very limited evidence in support of an unconventional 1.5-dose regimen.
II. A single dose in a single jab: Some suggested experimenting with giving only one dose of the Pfizer and Moderna vaccines in order to save more doses for others, on the grounds that in trials of both vaccines the sharpest drop in disease in the vaccinated group started before active-arm participants received the second dose. Others suggested that rolling out single doses is justified even without further experimentation, because of the expected public health windfall from vaccinating more people earlier, and because some unconventional evidence already exists that even a single dose probably works; in their view, that is sufficient evidence to warrant the “gamble”.
III. Spacing out: The UK’s Chief Medical Officers decided to lengthen to 12 weeks the interval between doses of the Pfizer and AstraZeneca vaccines, which had been authorized for use in the UK with 3-4 weeks between doses. The main goal was to reserve doses for vaccinating more people earlier. In the US, many opponents warned about the dearth of trial evidence in its support. But on an assumption of 52% efficacy after a single dose, modeling suggested that giving the first shot to more people, rather than keeping enough to ensure a second dose at 3-4 weeks, would prevent more cases of COVID-19. And President Biden announced that he would immediately deliver all available doses (that is, if any are), which would mean spacing out doses. The UK is now moving fast on immunizing the population and claims to have evidence of clinical benefit from even further spacing of the AstraZeneca vaccine. Israel, however, is reporting that the first dose of the Pfizer vaccine is less effective in the field than one might have hoped.
IV. A single dose divided into two half-dose jabs: Some American experts are excited about the option of distributing Moderna’s vaccine as two half doses instead of two full ones. There is some data to support it, per senior US proponents, though others oppose this option as well. While a lower dose may or may not enable more vaccine to be distributed early on, it would definitely leave more vaccine for others.
V. A single dose of one vaccine followed by one of another vaccine: While this wouldn’t reduce by much the number of doses needed, it should resolve logistical issues for a second vaccination when the original vaccine is locally unavailable. Yet, the vaccines were tried with two doses of a single vaccine.
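The modeling logic cited under option III can be made concrete with a back-of-the-envelope sketch. All numbers below are assumptions for illustration (the 52% one-dose efficacy is the figure mentioned above; the 95% two-dose efficacy, the supply figure, and the attack rate are stand-ins, not estimates from any trial):

```python
# Illustrative comparison of "first doses first" versus holding back
# second doses. All parameter values are assumptions for illustration.

def infections_averted(people_vaccinated: int, efficacy: float,
                       attack_rate: float) -> float:
    """Expected infections averted among vaccinees over the period."""
    return people_vaccinated * efficacy * attack_rate

DOSES = 1_000_000        # doses available this period (assumed)
ATTACK_RATE = 0.10       # baseline infection risk over the period (assumed)

# Strategy A: standard protocol -- half as many people, full protection.
standard = infections_averted(DOSES // 2, 0.95, ATTACK_RATE)

# Strategy B: first doses first -- twice the people, partial protection.
spaced = infections_averted(DOSES, 0.52, ATTACK_RATE)

print(f"standard protocol: ~{standard:,.0f} infections averted")
print(f"first doses first: ~{spaced:,.0f} infections averted")
# Under these assumptions, one dose for N people averts more infections
# than two doses for N/2 people whenever one-dose efficacy exceeds half
# of two-dose efficacy (0.52 > 0.95 / 2 here).
```

Real models add waning protection, transmission dynamics, and rollout speed; this toy comparison only shows why the 52% figure was pivotal to the argument.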
Many crucial factual questions remain open. What is the likeliest efficacy of the various dosing and spacing approaches? What are the not-very-unlikely worst-case scenarios for each (is undermining public trust, or creating vaccine resistance, likely? Does sheer discussion of these options already confuse the public and undermine trust?), and how bad would they be? There are some conflicting signals even in the evidence we have. For example, in the Moderna trial, there were fewer cases in the active arm than in the control arm a mere two weeks after first vaccination; indeed, some commentators added, “Because [usually] we do not expect a protective immune response in the initial 14 days after immunization, this suggests that once immune response is more mature, the efficacy of a single dose may be higher”. However, in early testing, the vaccines had seemed quite inefficacious after a single dose—their promise of efficacy emerged only after two doses.
There are also questions in philosophy of science and epistemology. How should we understand the likelihood that a regimen that wasn’t tried as such will work, fail to work, or cause harm? Can we put a number on those chances?
Moral questions also surface.
1. Protecting population health vs. protecting individual health: In a pandemic, many accept that health authorities should generally prioritize population needs over those of individuals. But some may doubt that it is ever OK to give a patient less certainty of any effect whatsoever from an intervention in their body, pandemic notwithstanding.
2. Maximizing beneficiaries vs. maximizing fairness and protection of those at high priority: An egalitarian (pro-equality) argument for option II above was that “providing effective protection for as many people as soon as possible is more ethical because it distributes the scarce commodity more justly.” An argument for option III was fairness to priority groups, which would often serve equality as well: “In terms of protecting priority groups, a model where we can vaccinate twice the number of people in the next 2 to 3 months is obviously much more preferable in public health terms than one where we vaccinate half the number but with only slightly greater protection.” But what if a major effect of these diversions from maximal protection of vaccine recipients is to significantly lower the protection conferred to the initial recipients, who are usually in priority groups? Would that suboptimal protection of those with the strongest entitlement and, sometimes, the strongest moral claim to protection be sufficiently justified by the earlier vaccination of more members of lower-priority groups?
3. Maximizing human health vs. maximizing rich-country population health: In the US, the manufacturers, some serious experts, and the FDA remain skeptical of any departure from authorized protocols, while other serious experts support such departures, which they think are likelier to promote US public health. But options I and IV above, compared to the authorized regimen, would leave far more doses within manufacturers’ production capacity available for other nations. That alone could free up (for purchase or donation) enough doses to vaccinate Mexico, Central America, the Caribbean, and large swaths of Latin America, even without any COVAX or other incoming vaccines. Couldn’t this momentous humanitarian benefit break the tie on what is very best for US public health, and decide in favor of low-dose options?
FDA Emergency Use Authorization requires adherence to dose and dosing schedule
Eddy Bresnitz, Medical Advisor to the New Jersey Department of Health on the COVID-19 response
On January 31, 2020, the Secretary of the US Department of Health and Human Services declared a Public Health Emergency under section 319 of the Public Health Service Act in response to emerging COVID-19 infections in the US. This declaration allowed the US Food and Drug Administration (FDA) to issue an Emergency Use Authorization (EUA) to “…allow unapproved medical products or unapproved uses of approved medical products to be used in an emergency to diagnose, treat, or prevent serious or life-threatening diseases (…) when there are no adequate, approved, and available alternatives” and the benefits of the intervention outweigh the risks.
In December 2020, the FDA issued EUAs for the use of two novel vaccines for the prevention of COVID-19. These EUAs were based on the agency’s thorough reviews of data from the pivotal trials, and on recommendations from its Vaccines and Related Biological Products Advisory Committee. The two vaccines were developed on a messenger RNA (mRNA) platform, a technology with no precedent among licensed vaccines. Both vaccines were tested as two-dose series, with doses separated by either 3 or 4 weeks. The pivotal trials indicated that, after the second dose, both vaccines had a vaccine efficacy (VE) of approximately 95%, with similar VE in various sub-groups (age, race, ethnicity, underlying medical conditions). The trials also showed that the vaccines caused significant reactions at the site of injection, such as pain or swelling, and systemic reactions such as fever, headache, muscle pain and fatigue; however, these effects were mild to moderate, lasting 1 to 3 days, and self-limited. Based on these findings, the FDA concluded that the benefits of both vaccines outweighed the risks and issued the EUAs.
The EUAs require that healthcare providers use the vaccines as described in the authorizations. The salient requirement is that the vaccines be used in a two-dose regimen, with the second dose given at 3 or 4 weeks after the first, depending on the vaccine. Health care providers are obligated to adhere to the requirements of the EUA. Following issuance of the EUAs, the Advisory Committee on Immunization Practices (ACIP) and the CDC issued guidance on interim clinical considerations of the use of mRNA vaccines based on the conditions of the EUA.
A surge in the pandemic beginning in the fall of 2020, and increasing incidence of disease in 2021, motivated a discussion in the scientific literature and the media about changing the dosing regimen in order to more quickly vaccinate a larger share of the public with a single dose. This debate prompted the FDA to issue a statement expressing concern about changing vaccine regimens, given the available data: “Using a single dose regimen and/or administering less than the dose studied in the clinical trials without understanding the nature of the depth and duration of protection that it provides is concerning, as there is some indication that the depth of the immune response is associated with the duration of protection provided. If people do not truly know how protective a vaccine is, there is the potential for harm because they may assume that they are fully protected when they are not, and accordingly, alter their behavior to take unnecessary risks. (…) Until vaccine manufacturers have data and science supporting a change, we continue to strongly recommend that health care providers follow the FDA-authorized dosing schedule for each COVID-19 vaccine.”
At this point, neither the ACIP nor the CDC has issued recommendations or guidance on altering the dosing or schedule for administering these vaccines. Without FDA authorization, ACIP recommendations, and CDC guidance, states and health care providers are unlikely to recommend use of the vaccines outside of the requirements of the EUAs. Until vaccine manufacturers provide the FDA with additional data that would support altering the dose or schedule, the current authorized emergency use of the vaccines is likely to remain unchanged.
Consistent messaging is key to our public health mission
Phyllis Tien, Professor of Medicine, UCSF
Phase 3 COVID-19 vaccine trials in the US are currently ongoing, and two vaccines have received an Emergency Use Authorization (EUA) from the FDA. The design of these Phase 3 trials was based upon careful review and analysis of data from Phase 1 and 2 trials that tested different vaccine doses for effectiveness as well as safety. We are now also in the midst of a COVID-19 surge that is worse than the one of nearly a year ago, further compounded by increasing reports of mutated strains of SARS-CoV-2, possibly more infectious and transmissible, circulating in communities. As a result, distributing vaccines rapidly is of critical importance to public health, but dosing and dose scheduling should be based upon the available scientific data.
Consistent messaging regarding prevention efforts, including vaccine dosing, vaccine scheduling, mask-wearing and social distancing, is needed to maintain public trust. Mixed messaging in our national response to the pandemic has likely aggravated barriers to the COVID-19 vaccine roll-out, including vaccine hesitancy and fear of adverse effects from the vaccine among parts of the population. Still, many among us are eagerly awaiting vaccination in order to return to some normalcy. Until a significant proportion of our population is vaccinated, it remains critical to send a clear message that the benefits of the vaccine outweigh the risks and that, even for those vaccinated, precautions such as masking and social distancing must be adhered to.
On the bright side, with the advent of potentially new vaccine candidates that could obtain an EUA by early spring, and the promise of consistent public health mandates to curb the US pandemic, we may be able to accomplish our public health mission of distributing vaccines in a timely manner while also adhering to the available scientific data.
If changing the vaccination protocol stops the pandemic sooner, change it!
Dan Hausman, Research Professor of Bioethics, Rutgers Center for Population-Level Bioethics
Two goals govern policy for COVID-19 vaccination: saving lives and preventing other harms from COVID-19; and ending the economic fallout from the public health measures imposed to limit the spread of the virus. These goals are largely, but not perfectly, aligned. In an emergency such as the current pandemic, consequentialist reasoning comes to the fore. Although policies should (of course) avoid violating rights, the central ethical questions are factual questions: which vaccination policies stop the pandemic most rapidly without causing other untoward consequences of comparable importance?
If delaying the second dose or lowering dosages is ineffective at preventing disease, then clearly neither should be adopted. If these measures are just as effective at preventing disease as the current protocol, then the second dose should be delayed or the dosage lowered. This conclusion might be questioned, because the confusion and doubts caused by a change of protocol might wind up deterring people from being vaccinated and thereby prolonging the pandemic. This disastrous consequence is highly uncertain. In circumstances such as these, where the immediate positive effects of an action are certain and the harms are speculative, I think that one should proceed with the action.
The facts seem to be that a single dose or two half-doses of any of the three vaccines whose emergency use has been authorized provide less protection than the standard two doses; and it is unknown how quickly the protection provided by a single dose will fade or what effect a delayed second dose will have. Formal modeling can tell us the consequences of assumptions concerning relevant but unknown parameters, and sensitivity analysis can give us some confidence concerning the risks that changing the protocol will have bad effects. The wild card again is the damage that confusion and doubt may cause. I would make the same response: proceed with the action.
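A minimal sketch of the kind of sensitivity analysis this paragraph appeals to: sweep the unknown one-dose efficacy and see for which values the policy ranking flips. Every parameter here is an illustrative assumption, not an estimate:

```python
# Sensitivity analysis over the unknown one-dose efficacy: for each
# assumed value, which policy averts more infections this period?

def averted(n_people: float, efficacy: float, attack_rate: float) -> float:
    """Expected infections averted among vaccinees over the period."""
    return n_people * efficacy * attack_rate

DOSES = 1_000_000          # doses available this period (assumed)
ATTACK_RATE = 0.10         # baseline infection risk over the period (assumed)
TWO_DOSE_EFFICACY = 0.95   # assumed full-regimen efficacy

for one_dose_efficacy in (0.30, 0.40, 0.52, 0.70, 0.90):
    standard = averted(DOSES / 2, TWO_DOSE_EFFICACY, ATTACK_RATE)
    delayed = averted(DOSES, one_dose_efficacy, ATTACK_RATE)
    winner = "delay second dose" if delayed > standard else "standard protocol"
    print(f"one-dose efficacy {one_dose_efficacy:.0%}: {winner}")
# The crossover sits at half the two-dose efficacy (47.5% here): below
# it the standard protocol averts more infections; above it, delaying does.
```

The wild cards mentioned above (waning protection, eroded confidence) would enter such an analysis as further swept parameters.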
Unless a single dose or two half doses turn out to provide poor and short-lived protection against disease and against transmission, those reluctant to be vaccinated will see their unvaccinated neighbors getting ill, unlike their vaccinated acquaintances. Will the qualms engendered by changes in protocols outweigh this persuasive experience?
If splitting doses and deferring the second dose have immediate effects in limiting disease and infection, then the change in protocol is warranted, even if there is a potential medium- or long-run risk of undermining confidence in vaccination. There is no issue of physicians violating their obligations to patients, because physicians are not making the dosing decisions; and no rights are being violated. So the policy question boils down to the empirical question of which policy stops the pandemic more rapidly. There is no way to know for sure; but, with thousands dying daily, let’s do whatever will help today and worry tomorrow about more speculative harms.
Can spacing out vaccination be justified to all?
Bastian Steuwer, Postdoctoral Associate, Rutgers Center for Population-Level Bioethics
Bottlenecks in distributing COVID-19 vaccines have led to a slower than hoped-for start of the vaccination program in many countries, including the United States. The United Kingdom has taken the unusual step of putting on hold the distribution of the second vaccine shot and using the available doses to vaccinate more people with a first dose.
The UK Chief Medical Officers’ rationale for taking this step was that doing so maximizes the number of people receiving vaccines, and thereby saves the most lives in the aggregate. In part, what underlies this thinking are contested scientific matters. Vaccine efficacy trials tested two-dose regimens. There are only preliminary and less reliable data from these trials showing efficacy from the first dose. The UK Chief Medical Officers estimate a level of protection of over 70 percent. Other writers have been more optimistic, citing 80 to 90 percent protection. Less optimistic data suggest that the Pfizer-BioNTech vaccine is 52 percent effective before the booster shot, although alternative statistical analysis may show it to be higher.
The question is not, however, exclusively scientific. An optimistic answer to the scientific question raises an ethical question: is it ethically defensible to lower the prospects of some by failing to give them a booster shot in order to improve the prospects of others by giving them a first shot?
This is a question of population-level bioethics. We need to consider the health of everyone in society, instead of adopting the perspective of a clinician charged with the interests of their patient. One contrast in population-level bioethics that is helpful for reasoning about this dilemma is the contrast between aggregative and non-aggregative reasoning. Aggregative reasoning asks about the population-level effects, as in the rationale employed by the UK Chief Medical Officers: the overall number of lives saved would be higher, they reason, under a policy of spacing out vaccines. The overall amount of benefits in terms of lives saved justifies the lesser protection afforded to those who will not receive their second shot as planned. Non-aggregative reasoning, by contrast, asks whether a policy can be justified to each individual: instead of justifying a social decision by the aggregate effect, we need to ask whether any individual could object to the decision. One might think that from this perspective the UK’s decision is problematic. Could not a person who will not receive their second shot as planned object that they now have to live with less than optimal protection?
However, I want to suggest that there is a non-aggregative rationale for spacing out vaccine doses. Our current vaccine priority-setting is already following such an approach. It is informed largely by trying to identify individuals at highest risk to give the vaccine to them first. The idea is that those at high risk have a stronger claim to the vaccine than those at low risk.
Consider a simple model of distributing vaccines. We start by giving out as many doses as are being produced to persons at high risk. We continue this for three to four weeks, and then we face a choice: do we now give new vaccine doses to the originally vaccinated persons, thereby increasing their level of protection from the preliminary level to the full efficacy level? Or do we give the new doses to not-yet vaccinated persons, thereby giving them some preliminary protection? The originally vaccinated persons should no longer be treated with the same initial priority. Their risk has already been reduced, and their claim to a further risk reduction will not be as strong as the initial claim they had when they were at higher risk. Perhaps we should treat a person aged 75 who has received one shot of the vaccine like a person aged 65 who has not been vaccinated yet.
Whether we should space out vaccine doses, then, depends in part on our overall speed of vaccination. Continuing with my simple model, if after three to four weeks everyone over the age of 65 is already vaccinated and the choice is between persons aged 65 and persons aged 75 who have been given the first vaccine shot, then spacing out achieves little. However, if even after three to four weeks there are still a number of unvaccinated people left who are at higher risk, then spacing out appears more reasonable.
This simple model, as all simple models, leaves out many important considerations. It does not consider indirect effects on either vaccine hesitancy or on vaccine resistance, and it depends on finding a reasonable estimate for the level of protection from the available data. What the model suggests, however, is that opposition to aggregative reasoning does not directly translate into opposing a policy of spacing out vaccine doses.
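The residual-risk idea in this simple model can be sketched in code. The baseline risks and the 52% one-dose protection below are invented for illustration, chosen so that a once-dosed 75-year-old and an unvaccinated 65-year-old come out roughly comparable:

```python
# Toy priority rule: a person's claim to the next dose is proxied by
# their residual risk -- baseline risk discounted by the protection they
# already have. All numbers are illustrative assumptions, not estimates.

BASELINE_RISK = {"age 75": 0.08, "age 65": 0.04}  # assumed severe-outcome risks
ONE_DOSE_PROTECTION = 0.52                         # assumed first-dose efficacy

def residual_risk(age_group: str, doses_received: int) -> float:
    """Risk remaining after whatever protection the person already has."""
    protection = ONE_DOSE_PROTECTION if doses_received >= 1 else 0.0
    return BASELINE_RISK[age_group] * (1 - protection)

candidates = {
    "75-year-old, one dose": residual_risk("age 75", 1),   # 0.08 * 0.48
    "65-year-old, unvaccinated": residual_risk("age 65", 0),
}
# The next dose goes to whoever retains the higher residual risk.
next_in_line = max(candidates, key=candidates.get)
print(next_in_line)
```

With these assumed numbers the residual risks are 0.0384 versus 0.04, so the unvaccinated 65-year-old edges ahead, echoing the suggestion above that a once-dosed 75-year-old might be treated like a somewhat younger unvaccinated person.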
Now that some COVID vaccines have been authorized, can it be ethical to (continue to) test these and further COVID vaccines, and how?
Overview of the dilemma
Nir Eyal, Henry Rutgers Professor of Bioethics, Rutgers Center for Population-Level Bioethics
Several Western countries have now authorized use of the first few COVID-19 vaccines following placebo-controlled efficacy testing. More vaccines may be authorized in the next few weeks. But further COVID vaccine research, of the following types, remains necessary:
I. Continued/new studies of authorized or approved vaccines in the conventional regimen, e.g. to ascertain their impact on infection and infectiousness, the correlates and duration of vaccine protection, their success against new viral strains, their efficacy and safety in population groups excluded from the initial studies such as children and pregnant women, rates of rare complications, and impact outside the trial setting. The original trial results and the swabs collected from trial subjects before unblinding get at some of these questions, but there is room for more.
II. New studies of authorized or approved vaccines under new regimens (e.g. on spaced out dosing regimens, or half-doses—see our previous Dilemma).
III. New studies of new vaccines. New vaccines remain necessary should authorized vaccines turn out to have short-lived efficacy, or to protect recipients without reducing their infectiousness to others; and for areas of the world where authorized vaccines are impossible to store, deliver, or procure.
This necessary research could be a combination of (a) epidemiological observations, (b) collecting more samples in existing trials or even switching subjects between trial arms—a “blinded crossover”, (c) temporally controlled field trials (e.g. initiating a new trial that would compare people who receive the authorized vaccine early to ones who receive it later), (d) placebo controlled field trials, (e) active controlled field trials (e.g. comparing authorized vs. promising new vaccines), (f) immune-bridging studies, or (g) challenge trials.
Outcomes of interest could include (i) infection status and level; (ii) disease status and level; (iii) likely infectiousness status and level; (iv) immune response status and level (as in an immune bridging study); (v) adverse events; (vi) some of the former outcomes among participants’ contacts.
Such studies could take place in (1) countries in which the approved vaccines are already being rolled out to some people (either among those people and/or their contacts, or in other people and/or their contacts), or in (2) global populations who will not have access to currently-approved vaccines anytime soon.
This Dilemma explores which combinations of object of study (I-III), research type (a-g), outcome type (i-vi), and study population type (1 and 2) might be ethically permissible.
Many bioethicists would consider it unethical to give placebo to a control group when a known safe and effective vaccine exists. That is worse for their prospects and, normally, for those of their contacts. These bioethicists would especially object when the vaccine being tested has already been approved or authorized. They would be furious if the participants put on placebo would otherwise have access to the tried and tested vaccine outside the trial. But not all bioethicists consider placebo control unethical under these circumstances. And there may be a way around some of the ethical complications here. In particular, for a limited period, some in rich nations where vaccines are rolling out will lack access (e.g. young and healthy people who are not considered frontline or essential workers), such that their immediate prospects will not be worsened by being in a trial. Can very short trials in these populations provide helpful outcomes?
Similarly, the prospects of those who would not have access to the proven vaccine outside of a trial (say, populations in less-developed countries who will not get the vaccines for some time) would not be worsened by placebo-controlled trial participation. These participants’ own nations may stand to benefit greatly from the development of vaccines that are easier for them to procure or deliver. In the past, a WHO group on placebo controlled vaccine studies in developing countries noted a number of conditions that may justify use of placebo for vaccines already known to be safe and efficacious. Still, is it ethical to rely on these populations’ lack of access to the tested vaccines, when that lack of access results from rich/vaccine producing nations’ having hoarded or outbid potential participants’ nations? Does it matter who does the testing, what nations are likely to use the product, and whether post-trial access is guaranteed to participants and their fellow citizens?
And is the ethical challenge limited to placebo control? Any controlled study compares different options, and once one option is authorized, some of its participants will have to be assigned to an unauthorized option—often one left unauthorized precisely for the person’s own protection.
We can probably get some useful data out of careful observations, and much useful data from immune-bridging studies and challenge trials, but this discussion will focus especially on the questions above.
Nir Eyal’s work on this issue was supported by an award by the National Science Foundation (NSF 2039320)
The ethics of continuing trials: does the data justify the risks?
David Wendler, Senior investigator, Department of Bioethics, NIH Clinical Center
Several vaccines for COVID-19 have been found safe and highly efficacious and are now being made available to select groups through emergency use authorizations (EUAs) and other mechanisms. At the same time, there is still significant value to continuing current trials and testing additional vaccine candidates, raising the question of whether and to what extent it is acceptable to give research participants unproven vaccines and placebos after identification of ones that are safe and efficacious.
Some commentators argue that clinical trials are ethically acceptable only as long as there is insufficient evidence that the intervention offered in one arm is superior to what is offered in another arm, or to what is available outside the trial. This view implies that it would be unethical to continue placebo-controlled trials given the findings of efficacy. It also implies it would be unethical to test other unproven vaccine candidates. This view fails to recognize that the obligations researchers have to their participants are distinct from the obligations that clinicians have to their patients.
Codes and guidelines around the world permit researchers to expose participants in clinical trials, including vaccine trials, to some risks to collect socially valuable data that cannot be obtained in a less risky way. These guidelines reveal that researchers are not obligated to provide placebo recipients with a safe and efficacious vaccine once one has been identified. Instead, researchers are obligated to ensure that any plans to conduct placebo-controlled trials remain ethically appropriate given current evidence.
Continuing a trial after the vaccine candidate has been found to be safe and efficacious can provide an opportunity to collect several types of socially valuable data. Of greatest importance, continuing trials can provide a more reliable and more precise point estimate of the vaccine’s efficacy and offer an opportunity to collect additional safety data, including data on any uncommon or delayed side effects. Continuing trials can also help to assess how long the vaccine’s protective effect lasts; offer insight into the vaccine’s impact in various subgroups, such as older individuals or those with comorbidities; and evaluate whether the vaccine candidate protects against infection itself.
Once a vaccine candidate is found to be efficacious, participants in the placebo arm of that trial are known to be at higher risk of symptomatic disease than the participants in the active arm of the trial. How much higher depends on the chances that participants in the placebo arm will become infected, the risks they face if they are, and how much protection the efficacious vaccine offers. The chances that participants in the placebo arm will be infected depend on the local transmission rate, preventive measures they adopt, and the amount of time they remain on placebo. When participants are on placebo for a short time, the chances of infection are correspondingly low. Remaining on placebo for a few weeks, rather than accessing an efficacious vaccine, poses a low chance of substantial harm. Continuing on placebo for even longer periods also poses a low chance of substantial harm to individuals at low risk for severe disease.
Remaining on placebo for an extended period can pose considerable risks to individuals at high risk of severe disease. The extent of these risks depends critically on what options are available to them. In the setting of few effective treatments and potentially strained hospital systems, receiving placebo for an extended period rather than a safe and efficacious vaccine can pose substantial risks. However, if high risk individuals would not have access to a safe and efficacious vaccine outside of research— for example, when there is only enough supply for the trial or when they are not part of a prioritized group that will receive the vaccine during the time of the trial—receiving placebo in a clinical trial poses few additional risks to them.
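The risk factors enumerated above can be combined into a rough expected-harm sketch. All parameter values below are illustrative assumptions:

```python
# Rough sketch: the excess risk of remaining on placebo scales with
# local transmission, time on placebo, individual severity risk, and
# the protection forgone. All numbers are illustrative assumptions.

def excess_risk(weekly_infection_prob: float, weeks_on_placebo: int,
                p_severe_if_infected: float, vaccine_efficacy: float) -> float:
    """Extra probability of severe disease from staying on placebo
    instead of receiving a vaccine of the given efficacy."""
    p_infected = 1 - (1 - weekly_infection_prob) ** weeks_on_placebo
    return p_infected * p_severe_if_infected * vaccine_efficacy

# Low-risk participant, short extension (assumed numbers):
low = excess_risk(0.005, 4, 0.01, 0.95)
# High-risk participant, long extension (assumed numbers):
high = excess_risk(0.005, 26, 0.15, 0.95)

print(f"low-risk participant, 4 weeks:   {low:.5f}")
print(f"high-risk participant, 26 weeks: {high:.5f}")
```

Under these toy numbers the excess risk for a high-risk participant over a long extension is roughly two orders of magnitude above that of a low-risk participant over a short one, which is the asymmetry driving the argument in this and the following paragraphs.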
There is no algorithm for determining how much social value a given clinical trial has and whether its social value justifies the risks participants face. As a result, IRBs tend to focus on ensuring that a trial has the potential to collect important data and that the risks of substantial harm are low. Trials with the potential to collect data helpful for addressing a global pandemic have considerable social value. Inviting competent adults to participate in such trials can be ethical when doing so poses a small increase in their risk of experiencing substantial harm. This suggests that it can be ethically acceptable to continue a placebo-controlled trial for a short period after the vaccine candidate has been found to be safe and efficacious, even when participants might be able to access the vaccine candidate outside the trial, for example, through an EUA.
By contrast, if continuing the trial does not offer the opportunity to collect socially valuable data, or comparable data can be obtained in less risky ways, continuing the trial with a placebo arm for any length of time would be ethically problematic. Inviting participants who are at low risk of severe disease to remain blinded and stay in the trial for a longer period can be acceptable when it offers the potential to collect data that might be helpful for addressing the pandemic. In most cases, continuing a blinded, placebo-controlled design with high-risk individuals for longer periods will not yield data of sufficient value to justify it. Exceptions might include when the individuals cannot access an efficacious vaccine outside the trial and their participation is needed to collect valuable data, or they are in a group for whom no efficacious vaccine candidate has been identified.
Otherwise, individuals at high risk of severe disease should be unblinded and those on the placebo arm offered the vaccine within a redesigned study or given the opportunity to seek the vaccine outside the trial. When the value of the data to be collected does not justify the risks of continuing the trial as designed, researchers have several options. They can unblind participants, offer placebo recipients the vaccine, possibly as part of an expanded access program, and follow them to collect additional data. Alternatively, researchers might redesign the trial, for example, to include a crossover in which the blind is maintained and those on the placebo arm receive the vaccine after they complete the placebo arm. Finally, in some cases, it may make sense to simply stop the trial and unblind participants, thus allowing those in the placebo arm to seek the vaccine elsewhere.
Let’s distribute the “standard of prevention” equitably before testing new vaccines
Rieke van der Graaf, Associate Professor, University Medical Center Utrecht, Julius Center for Primary Care and Health Sciences, Department of Medical Humanities, Netherlands
To answer the question of whether it can be ethical to continue testing further COVID vaccines now that some have been authorized, it may first be helpful to look at relevant international ethical guidance documents. For example, the CIOMS guidelines (2016) set out that
As a general rule, the research ethics committee must ensure that research participants in the control group of a trial of a diagnostic, therapeutic, or preventive intervention receive an established effective intervention. Placebo may be used as a comparator when there is no established effective intervention for the condition under study, or when placebo is added on to an established effective intervention.
When there is an established effective intervention, placebo may be used as a comparator without providing the established effective intervention to participants only if:
- there are compelling scientific reasons for using placebo; and
- delaying or withholding the established effective intervention will result in no more than a minor increase above minimal risk to the participant and risks are minimized, including through the use of effective mitigation procedures.
The CIOMS guidelines also explain that “established effective interventions may need further testing, especially when their merits are subject to reasonable disagreement among medical professionals and other knowledgeable persons” and that in some cases this may include testing against placebo. At the time of this writing, the Pfizer and Moderna vaccines are authorised for Emergency Use by the FDA in the United States and by the European Commission, following evaluation by the EMA, to prevent COVID in the United States and the European Union, respectively. The AstraZeneca vaccine has also been authorised in the EU. The FDA found no specific safety concerns and determined, for one of them, that “the vaccine was 95% effective in preventing COVID occurring at least 7 days after the second dose”. CIOMS defines an established effective intervention as follows: “an established effective intervention for the condition under study exists when it is part of the medical professional standard. The professional standard includes, but is not limited to, the best proven intervention for treating, diagnosing or preventing the given condition.” Given the absence of safety concerns, the high effectiveness of these vaccines and the fact that they have been authorized in the EU and the US for prevention of COVID, these vaccines seem to fall into the category of an established effective preventive method.
At the same time, there can be legitimate reasons to do further testing despite this standard because there are still many uncertainties, as set out in the overview of this dilemma. Whether this testing can be done in the form of randomization is a further question. In the short term, there may be participants who are not yet eligible for vaccination outside the trial. But in the longer term, there will be a tipping point where vaccination through the regular national health program provides people with an established effective vaccine earlier than the experimental vaccine would. Researchers, sponsors and research ethics committees should be sensitive to that moment when approving new trials. Moreover, at some point, the world will regard the now-authorized vaccines in the EU and the US as part of the so-called standard of prevention package: a term used in discussions of HIV prevention methods, designating the comprehensive package of methods to prevent HIV, including condoms, and pre- and post-exposure prophylaxis, which are approved for clinical use (see UNAIDS, Van der Graaf et al and Singh). In HIV prevention trials all participants (both in the experimental and control arm) must receive access to this package that is recommended by WHO. It is reasonable to assume that for COVID a similar package of preventive methods, recommended by an organization such as WHO, will come into existence. This package may consist of a range of preventive methods, running from hand hygiene and facial protection to vaccines. It may provide participants with more protection, while making it more complex to start new trials for preventive methods, not only for vaccines, but also for other preventive methods such as monoclonal antibodies when used as a prevention strategy. This dilemma is well-known within the field of HIV prevention.
Another question is whether it is ethical to develop and test new vaccines in low-resource settings that do not have access to the vaccines available in the US and the EU. The CIOMS guidelines recognize that there is a dilemma when placebo-controlled trials are proposed in a low-resource setting when an established effective intervention cannot be made available for economic or logistic reasons:
In some cases, an established effective intervention for the condition under study exists, but for economic or logistic reasons this intervention may not be possible to implement or made available in the country where the study is conducted. In this situation, a trial may seek to develop an intervention that could be made available, given the finances and infrastructure of the country (for example, a shorter or less complex course of treatment for a disease). This can involve testing an intervention that is expected or even known to be inferior to the established effective intervention, but may nonetheless be the only feasible or cost-effective and beneficial option in the circumstances. Considerable controversy exists in this situation regarding which trial design is both ethically acceptable and necessary to address the research question. Some argue that such studies should be conducted with a non-inferiority design that compares the study intervention with an established effective method. Others argue that a superiority design using a placebo can be acceptable. The use of placebo controls in these situations is ethically controversial for several reasons: 1. Researchers and sponsors knowingly withhold an established effective intervention from participants in the control arm. However, when researchers and sponsors are in a position to provide an intervention that would prevent or treat a serious disease, it is difficult to see why they are under no obligation to provide it. They could design the trial as an equivalency trial to determine whether the experimental intervention is as good or almost as good as the established effective intervention. 2. Some argue that it is not necessary to conduct clinical trials in populations in low-resource settings in order to develop affordable interventions that are substandard compared to the available interventions in other countries. 
Instead, they argue that drug prices for established treatments should be negotiated and increased funding from international agencies should be sought. When controversial, placebo-controlled trials are planned, research ethics committees in the host country must: 1. seek expert opinion, if not available within the committee, as to whether use of placebo may lead to results that are responsive to the needs or priorities of the host country…; and 2. ascertain whether arrangements have been made for the transition to care after research for study participants …, including post-trial arrangements for implementing any positive trial results, taking into consideration the regulatory and health care policy framework in the country.
The particular dilemma for COVID may be that vaccines proposed, for logistical reasons, to be tested against placebo are already known to be less safe and effective than the vaccines available in the EU and the US. On the one hand, as long as it is reasonable to assume that these trials may lead to a vaccine that is easier to scale up than existing ones, and so help to stop the pandemic in these settings, this may be an argument in favour of starting these trials. On the other hand, what currently seems to make the start of new vaccine trials for local production in low-resource settings impermissible is that 172 countries have made agreements by means of Covax to secure “2 billion doses from five producers, with options on more than 1 billion more doses” (see WHO). These doses have not been delivered yet, but the aim is to make them available before the end of 2021. Before considering and approving further trials in resource-poor settings, research ethics committees, researchers, sponsors, manufacturers, national health authorities, regulators and others should consider whether more can be done first to ensure global equitable access to existing COVID vaccines through Covax.
A more favorable view of (some) trials in developing countries
Brian Berkey, Assistant Professor of Legal Studies and Business Ethics, Wharton School, University of Pennsylvania
Despite the fact that we now have several authorized COVID vaccines, continued research remains necessary. Some of the trials that could provide valuable information require that at least some participants don’t receive one of the authorized vaccines during the trial period. Consider, for example, trials involving new vaccines. These trials are important because new vaccines could have important advantages in comparison with those already approved. They might, for example, provide greater protection against emerging variants of the virus, or be storable at temperatures that would make them easier to distribute in developing countries.
Other trials that could provide valuable information require that participants receive altered regimens of authorized vaccines (e.g. two half-doses instead of two full doses). If these trials were to show that an altered regimen involving less vaccine per person is roughly as effective as the approved regimens, this could allow for quicker vaccination of the global population.
Wealthy countries have procured most of the current supply of the authorized vaccines, and can be expected to control most of it for some time. In addition, these countries are already in the process of vaccinating their populations using the approved regimens. Because of these facts, the prospects for trials of new vaccines and altered regimens of approved ones to be effectively carried out may be greatest in developing countries, where citizens will likely not have access to the authorized vaccines and regimens for some time.
Since there are powerful reasons to think that it’s unjust that citizens of wealthy countries have access to the authorized vaccines long before those in poorer countries will, there are grounds to worry that if trials are conducted in poorer countries while those in richer countries are being vaccinated using the approved regimens, those conducting the trials would be wrongfully exploiting the participants. Some would claim that this is the case even if the participants give informed consent, face limited risks, and may benefit significantly from their participation. One argument for this conclusion relies on the claim that it’s objectionable to take advantage of those who are vulnerable, at least if their vulnerability is the result of injustice. Proponents of this argument hold that taking advantage of unjust vulnerabilities constitutes wrongful exploitation.
I think that the charge of wrongful exploitation would be correct in some cases. Perhaps the most obvious are cases in which trials in developing countries are run by, and stand to benefit, agents that are among those responsible for or benefitting from the injustice in access to authorized or approved vaccines and regimens that makes those countries especially suitable sites for further trials. Consider, for example, the governments of wealthy countries, which, in my view, have acted unjustly by procuring the bulk of current and near-future vaccine supplies for their own citizens, rather than allowing for a more equitable distribution. If these governments were to fund studies in poorer countries with the aim of using the knowledge gained primarily to further benefit their own citizens, this would constitute wrongful exploitation. But importantly, in my view this is primarily because the governments of wealthy countries have an independent obligation to contribute to ensuring an equitable distribution of approved vaccines globally. Instead of funding these trials, they should be funding greater provision of approved vaccines to more of the poor around the world, and seeking to promote further trials that distribute both the risks and potential benefits more justly among the global population. If wealthy country governments did not have these obligations, and the trials could reasonably be expected to benefit the participants and others in the countries in which they might take place, it is harder to see on what basis we might object to them.
Because of this, I think that we should have a more favorable view toward at least some trials that could be run in developing countries, despite the fact that it may only be because of unjust disparities between richer and poorer countries in access to the approved vaccines and regimens that they’re possible. Consider, for example, a pharmaceutical company that hasn’t yet produced an authorized or approved vaccine, but has a promising candidate that requires further testing. If developing countries had access to authorized or approved vaccines that was comparable to what wealthy countries enjoy, running trials in developing countries may not have the prospect of generating results that would be as informative. In a sense, then, if the company runs trials in developing countries, it would be taking advantage of the fact that participants unjustly lack equitable access to approved vaccines.
But since it’s at least plausible that companies that don’t yet have an authorized or approved vaccine aren’t obligated to contribute directly to the equitable distribution of other companies’ authorized or approved vaccines, there’s reason to think that their running trials for promising candidates wouldn’t be wrong. So long as familiar obligations such as securing informed consent and ensuring the safety of participants as much as possible are met, the fact that participants would improve their prospects by taking part, in comparison with the status quo, provides sufficient grounds for preferring that these trials take place.
A more difficult case to assess is one in which a company that’s produced an authorized or approved vaccine intends to test alternative regimens in developing countries that currently lack equitable access to supplies of that vaccine. If the company is obligated to contribute substantially to providing equitable access (by, for example, reserving some of the existing supply and selling it to poorer countries at a discount), but is failing to meet that obligation, then running such trials is wrongfully exploitative. Even if that is the correct conclusion, however, we likely still have reason to prefer, from a moral perspective, that the trials are run rather than not. After all, as long as participants would improve their prospects by taking part, failing to run them would leave their unjust disadvantages entirely unaddressed, rather than mitigated at least a bit. This means that if there’s nothing that can be done to get the relevant companies to satisfy their obligation to promote equitable access to approved vaccines, we shouldn’t attempt to stand in the way of their running trials that would benefit unjustly disadvantaged participants.
Unsatisfactory justifications for COVID trials in developing countries
Monica Magalhaes, Program Manager, Center for Population-Level Bioethics, Rutgers University
In developed countries that were able to buy up the first authorized COVID vaccines early, a complication is arising for continuing COVID vaccine research. As highly efficacious vaccines are rolled out to the general population, some vaccine trial participants and potential participants now have their health prospects lowered by participating in controlled studies. Participants are dropping out of studies to get vaccinated as they become eligible, at the expense of the final quality of the data and of knowledge that would be gained from these studies.
One apparent solution for this complication is to conduct any further COVID vaccine trials in developing countries where vaccination prospects for the vast majority of the population will remain low for the foreseeable future. Where no one has access to the vaccine outside a trial, no one’s prospects of accessing a vaccine are worsened by participating in a trial. The concern about this option, as put in the overview of this dilemma, is that this justification relies on much of the world’s population lacking access to the same vaccines that are, or will soon be, widely available for the minority living in richer nations. That seems to be unjust, or at least exploitative of an injustice.
As with any disease, someone(s) will have to be in the studies that will continue to advance COVID prevention and treatment after the first line of prevention and therapy is found and made available. Studies that withhold or withdraw proven interventions to test experimental interventions raise particular ethical concerns, but they are not unique to COVID—as Rieke van der Graaf explains in this dilemma, this ethical territory has been trodden before, and we have guidelines and years of debate in research ethics to show for it. And yet, the fact that someone has to do it does not seem like a satisfactory justification for the fact that these someones will predictably be in developing countries where access to vaccines will be unjustly slower to arrive.
One possible way to justify this predictable outcome is to argue that, even though background inequalities are unjust, trials in developing countries are justified by the individual benefits to participants whose health prospects are increased by participating in the trial; and by the societal benefits of improved prospects for the participants’ compatriots’ access to vaccines, compared to what their prospects would be had the country not hosted trials. This seems unsatisfactory too, because these societal benefits will go only as far as the trial sponsors’ post-trial obligations or agreements extend (a simple obligation to provide the vaccine to all trial participants would not do much for the country), and only as far as these obligations or agreements are enforced or lived up to. Developing countries are rarely well-placed to demand or enforce strict obligations from large corporations based in developed countries, or from developed countries themselves; and hosting a trial has yet to catapult a poor country towards the front of the line.
Another way to soothe worries about relying on background inequities in access to vaccines is to appeal to the expectation that trial findings would benefit mainly poorer countries and their populations—for example, in trials seeking to establish safety and effectiveness of new vaccines that are cheaper or easier to store, transport, or administer. But this too is only persuasive up to a point: the benefit from discovering alternative vaccines will accrue to the entire world, as lower costs and easier logistics would help even the richest of countries to get their populations vaccinated sooner and faster. While it is true that developing countries need these benefits more, that rationale itself relies on developing countries’ lower level of resources for health, health personnel, and infrastructure. This line of thought should refocus, rather than appease, our equity concerns.
As vaccines start to roll out, we all watch as the gap between rich and poor countries predictably widens. A “catastrophic moral failure” results from institutions that enable vaccine nationalism by countries that can pay the highest prices, to the exclusion of much of the world. We ought to remain uneasy about relying on those without access to vaccines to participate in future COVID-related research. Globally fair distribution of the risks, burdens and benefits of the COVID research that remains to be done requires globally fair distribution of effective vaccines and interventions.
Vaccine trials in the developing world, exploitation, and post-trial responsibilities
Daniel Wang, Associate Professor, Fundação Getúlio Vargas School of Law, Brazil
By joining a placebo-controlled COVID vaccine trial in a vaccine-deprived developing nation, individual participants will not lose anything that they would have received had they not joined the trial. Nobody is made worse-off by participating in such research. Indeed, some or all will gain. Those who participate will have at least the possibility of being vaccinated effectively against the disease. In addition, every participant (including those in the placebo arm) will usually benefit from additional tests (which are usually more beneficial than burdensome) and (if the trial is otherwise conducted ethically) from optimal care during and after the trial if they fall sick. In short, placebo-controlled trials are Pareto improvements (because they harm no one and benefit some), and perhaps strong Pareto improvements for the primary stakeholders (because they may benefit participants, certainly ex ante and by and large ex post).
Moreover, even for those in the placebo arm, the risk of having serious COVID may not be enormous if compared with the risks normally accepted in clinical trials. Certainly if the use of placebo is accepted in countries where vaccines have been approved, then it must be accepted in vaccine-deprived countries.
From the perspective of communities, the countries where access is currently limited or nonexistent are the main beneficiaries of more trials. They are far behind in the race for accessing approved vaccines and will benefit from more options. Research on vaccines that are cheaper and easier to administer is particularly responsive to the health needs of these populations. Even if this is not the case, more competition means more vaccines available in the global market, which would possibly facilitate access, reduce price, and give countries some bargaining power in negotiations with pharmaceutical companies.
Any “exploitation” objection to placebo-controlled COVID vaccine trials in countries without vaccine access is far more plausible in situations where sponsors do research in vaccine-deprived countries but sell their products, once approved, exclusively (or mostly) in the developed world. It is then important that sponsors are committed to fulfilling their post-trial responsibilities. At a bare minimum, they need to guarantee vaccine availability in the country where the trial was conducted. Sponsors must be committed to applying as soon as possible for regulatory approval (including emergency/conditional approval) of their products in the countries where they conducted their trials (see CIOMS, Guideline 2) and to distributing their successful vaccine products there.
Availability, however, does not guarantee access. Availability refers to the presence of an intervention in an intended place and time, while access refers to the use of such an intervention by an individual. Sponsors need to make reasonable efforts to promote access, for instance, through donation, price reduction, technology transfer, training, and support to build infrastructure. The more that is done to promote access, the less the concern about exploitation.
Conducting research in low-resource settings often raises difficult ethical questions. What constitutes exploitation? Should mutually beneficial exploitation be allowed? Can an intervention be tested against the local standard of care if this is inferior to the best current treatment available where sponsors and researchers come from? What is owed to research participants and their communities? There will be reasonable disagreement about these issues in general and in particular cases, so it is important to consider who will make the decision on whether a trial is ethical.
In many developing countries, there are institutions that apply international scientific and ethical standards to assess research protocols. For instance, in Brazil, where access to approved vaccines is still very limited, vaccine trials cannot take place without the approval of the drugs agency (ANVISA) and the National Ethics Committee (CONEP). The ethics committees/review boards of funding bodies, academic institutions, and companies in the developed world should avoid blocking trials that are Pareto improvements before local institutions are given the opportunity to make their own evaluation on these difficult ethical issues. Such institutions, particularly if they allow public involvement and participation, will probably have a much better understanding of the circumstances and social values in their own countries.
Finally, there is merit in the argument that the insufficiency of approved vaccines globally does not justify allowing in the developing world research that would be unacceptable in developed countries. The root of the problem, the argument goes, lies in global inequality, lack of international aid, and patent laws. However, those making micro-decisions about whether to give an ethical approval for a trial to go ahead will rarely have the power to address these large gaps in global justice.
Testing vaccines when an effective vaccine exists: if that’s all I can get…
Sarah Conly, Professor of Philosophy, Bowdoin College and vaccine trial participant
I was a participant in the Phase III Moderna COVID vaccine trial. I received my two injections four weeks apart in August and September, 2020, and on January 2, 2021 I was very pleased to learn that I had received the actual vaccine, not the placebo.
I was motivated to participate in the trial by three things: First and foremost, I hoped I would get the vaccine, and not the placebo. At that point, and even now (January, 2021) there would have been no other way for me to get access to the vaccine, and, since I am 68, I thought COVID could prove quite dangerous for me. Second, I wanted to contribute to research on the vaccine. Third, I thought it would make a good story, especially for my Bioethics students. I should note that we were also paid by Moderna for each visit to the clinic, but for me that was not a consideration: it was nice, but didn’t affect my decision. The hope of getting an effective vaccine was what led me to brave the two-hour drive from Maine through the hell of Boston traffic, and, of course, to accept the possibility of known or unknown side effects.
This makes me think that offering placebo-controlled trials in places where the vaccine is not available, or to those to whom it is not available even where it exists, is morally acceptable. Of course, there shouldn’t be inequality in healthcare around the world, but there is. Given this, I think participants could very rationally decide that the 50% chance of getting a possibly effective vaccine is much, much better than nothing, especially where rates of infection are currently high. Why not take a gamble with positive expected value? Of course, it would be better if there were a vaccine available to everyone everywhere; but since we can’t make that happen, for many people a placebo-controlled trial is the best chance for getting a vaccine. And, of course, it furthers the research that we still need. To me, this makes it a win-win proposition.
Once COVID-19 vaccines are widely available, under what conditions would it be permissible for governments to create “immunity passports” that facilitate conditioning of services on prior vaccination?
Overview of the dilemma
Societies’ best ticket back to normalcy is, at this time, vaccinating enough of the population to reach or approach herd immunity, particularly if vaccines continue to be shown to reduce COVID-19 transmission. To increase vaccination rates, governments must procure and provide vaccines, remove access barriers, and make the case that the vaccines are safe and efficacious. In addition, governments and institutions can create incentives for becoming vaccinated or disincentives for staying unvaccinated.
One way for governments to achieve that is to institute some form of documentation (a paper card, a smart card, a phone app) to prove vaccination status, which government agencies or private businesses can then require before e.g. rendering services that involve sharing of public spaces. Such immunity passports, or “green passports,” could be required, for example, for boarding a plane, train, bus or taxi, attending a gym or dining in a restaurant, or continuing to work at a hospital, at least absent an up-to-date negative COVID test, evidence of natural antibodies from recent COVID, medical exemption from vaccination, and perhaps other narrowly defined exemptions.
The federal government is creating standards for such passports, and the government of the State of New York is backing a specific initiative. Rutgers University, CPLB’s own home institution, has announced that students will need proof of vaccination (or a medical or religious exemption) to return to campus in the fall of 2021. Such approaches may become trends across US states and US institutions of higher education.
This dilemma asks what affects the permissibility of green passports. For example:
Does it matter whether only private sector services are conditioned on a green passport, or also government ones?
Letting private businesses be the ones conditioning service on passports (which may be in the interests of many businesses, and may save the government from some confrontations) raises a further question: Does it matter whether the government forces, encourages, or merely makes it legal for businesses to condition service on green passports? Leaving individual businesses or institutions free to make their own policy allows for different approaches to be tried (without randomization) and “compete” in the marketplace, but may reduce both the passports’ (perceived) coerciveness and their impact on vaccination rates.
Does it matter whether the government’s use of green passports aims to increase vaccination rates, or, alternatively, any such predictable increase is a mere side effect of their use to achieve other aims, such as protecting other users of shared public spaces, increasing public trust in the safety of public spaces, facilitating the reopening of many kinds of businesses and activities, and making those who voluntarily choose not to be vaccinated internalize the effects of their decisions on others instead of free-riding? What if the government welcomes these side effects of green passports and feels that they would provide ample justification, but is driven by the importance of increasing vaccination rates?
Would merely partial (or partially equitable) access to vaccines completely rule out use of green passports, since it would penalize those with deficient vaccine access? Or should these compounded disadvantages merely be added into the overall calculation of the benefits, costs, (in)equality, and other effects of green passports on equity, many of which will be positive? Could the correct approach lie in the middle, say, adding these compounded disadvantages to the calculus, but lending them extra weight, because they (allegedly) come from the government’s own actions?
Does it matter whether the goods and services conditioned on having such passports are “essential” (bus access, in-person school access), or only “elective” (cruise-ship access, dine-in restaurant access)? If conditioning essential services on vaccination (with appropriate exemptions) makes incentives for vaccination more efficacious, and if those deprived of these services could resume their access at any time by getting vaccinated, then what, if anything, is wrong with conditioning essential services on vaccination? Conversely, could conditioning of even elective services accumulate to a point where the resulting differences in access threaten what political philosophers call “relational equality” by forming a two-class society? And even if to some degree they do, is this merely an “expressive cost” that is worth paying to save more lives?
What will be the effects of green passports on global inequality? If international travel comes to be conditioned on vaccination, many citizens of rich countries and only a select few from elsewhere will probably be able to travel freely, at least for some time to come. Does that ethically rule out the use of these passports? Or should these bad effects be weighed against the potential economic benefits to (many) developing countries of reopening the tourism industry and invigorating rich-country purchases of goods and raw materials from developing countries?
Immunity passports: what is the true dilemma?
Ruth W. Grant, Professor Emerita of Political Science, Duke University
The key condition that legitimizes limiting access to various services and public spaces to the vaccinated is that the unvaccinated have freely chosen their status. Practically speaking, for this condition to be met, the vaccine has to be readily available to all who want it. If this condition is met, or if COVID tests are readily available and negative results are accepted in lieu of proof of vaccination, it is hard to see an ethical dilemma here. It is an easy call. Governments have a responsibility to act to promote public health and safety. Private enterprises have a similar responsibility, but on different grounds. And individuals do not have absolute freedoms. Individual freedom is always limited by considerations of harm to others. In principle, then, governments may condition services on prior vaccination. And businesses may do the same. Moreover, governments may not legitimately prohibit businesses from doing so.
Stated in this way, it looks as if there is no ethical conflict. But to many people, immunity passports would undoubtedly appear to be an illegitimate government imposition on individuals who choose not to be vaccinated. What is the alternative? If the unvaccinated are not excluded, the vaccinated are disadvantaged. They cannot trust that an airplane or a sports stadium, for example, is a safe place to be. The result may well be further delay in the opening of public spaces. In other words, there is not a neutral policy option: either the unvaccinated or the vaccinated will have their options constrained. If this is the case, the choice is clear—it is the unvaccinated who are a threat to others and to the public good. And, as has been true since the start of the pandemic, the same policies that promote public health hasten the opening of the economy.
The really difficult dilemma here, I think, is not on the level of conflicting ethical principles. It is on the level of empirical and political realities. Establishing a vaccine passport system might be perfectly legitimate, and still be the wrong thing to do. In the United States right now, everything related to the pandemic is so politicized that it would be hard to predict whether immunity passports would encourage people to get vaccinated or cause a serious backlash. The details would matter a lot: would a state- and local-level policy be more effective than a national one? How would the limitations on the unvaccinated be enforced, especially where it is private businesses imposing the constraints? What sorts of public messaging could lead people to see the immunity passport as a welcome step forward in the fight against the pandemic?
How to permissibly distinguish the vaccinated and the unvaccinated
In a recent position paper, we (and colleagues) outlined the main justifications for policies that distinguish between the vaccinated and the unvaccinated (“green passport policies”), for instance in access to cultural events, leisure activities, indoor dining, and so on. We also discussed the main conditions under which such policies may be morally permissible.
Importantly, our paper is based on the factual situation in Israel in recent months, and it should be read in that context. Perhaps the most important feature of the Israeli context is that vaccines are widely available, free of charge and typically in easily accessible locations, to all within Israel proper. Vaccines should be similarly available everywhere else as well. Of course, if vaccines are expensive, unavailable, or not realistically accessible, this strongly affects the permissibility of green passport policies. We here assume a situation in which vaccines are widely and easily available. We also assume that, absent inoculation, high infection rates lead to (justifiable) severe restrictions with harsh economic, social, educational, and other consequences. This is the situation in Israel, in most of the United States, and in many parts of the world.
In such circumstances, for most people, getting a vaccine that has been shown to be both safe and effective is both rationally and morally called for. It is the main way in which one can play a role in the collective effort to battle the essentially collective phenomenon of the pandemic. Yes, some uncertainty about the long-term effects of the vaccines (and indeed of contracting COVID) remains, but given the certain harms, both direct and indirect, of the pandemic, vaccination is clearly called for. This does not mean, of course, that anyone refusing to get the vaccine is to blame, but it does mean that some green passport policies may be justified.
On what terms, though? We argue that green-passport-based distinctions may be justified, as long as they are effective at promoting compelling goals, and as long as they satisfy a proportionality requirement. The justifiable ends we point to include reducing the numbers of infections and controlling pandemic-related harms and derivative general health harms (e.g., lower quality of care due to hospital congestion); returning to economic and social normalcy; imposing the costs of the decision not to be vaccinated on those making it; and incentivizing inoculation.
The decision about each proposed use of green passports should be made with these ends and with the proportionality requirement in mind, and no general recipe for a decision can be supplied. Here, though, are some important guidelines:
• How pandemics work, and how this challenges traditional categories: A pandemic is, by its very nature, a collective phenomenon. Given this collective nature—and perhaps especially, the exponential pattern of infection—pandemics challenge the traditional liberal protection of a private sphere of a person’s behavior, which is no one’s business but their own. During a pandemic, one person’s decision to not get vaccinated and nevertheless interact with others (often without their knowledge of that choice) imposes costs—often serious costs—on others. This does not mean that people should be forced to get the vaccine, nor does it warrant a vaccination mandate backed by a criminal sanction. It does mean, however, that at least in the paradigmatic case, there is no plausible objection to policies that impose the costs of the decision not to get the vaccine on those making it.
Thus, if the risk of opening up restaurants for indoor dining or campuses for in-person classes is too high in the absence of sufficiently high vaccination rates, there is no reason to make the vaccinated bear the cost of the unjustified decision by others not to get the vaccine. In such cases, then, it is justifiable to open up such activities under green passport restrictions.
• Equality and discrimination: What this means, of course, is that policies that distinguish between the vaccinated and the unvaccinated are not discriminatory. There are relevant distinctions between the groups that justify (some) restrictions on the unvaccinated. Currently there is no similar justification for restrictions on the vaccinated.
• Sensitivity to facts: Such green-passport-based distinctions and restrictions are not a penalty, but rather a form of risk regulation and cost allocation that should be fully sensitive to the ever-changing facts. If, for instance, the rates of vaccination are high enough to approximate herd immunity, so that an individual’s decision not to get the vaccine imposes no cost on others, there will no longer be any justification for restrictions.
• Distinctions, distinctions: Because proportionality is crucial here, different cases must be treated differently. For instance: access to vital locations and services such as polling places and hospitals should remain available to all. Access to places like restaurants and movie theaters, while undoubtedly important, may be restricted. Decisions on specific cases should also be sensitive to the available—even if not quite as good—non-risky alternatives. So long as deliveries are an option, for instance, access to grocery stores may be restricted. Similarly for university campuses, at least as long as distance learning is a viable (if less than perfect) option.
• Trust and incentives: Incentivizing vaccination is a legitimate government purpose at this time. Still, it should be pursued wisely. And some incentivizing measures may be counterproductive. Perhaps in some cases a more effective policy will be focused on attempts to foster trust (especially with populations wherein mistrust of government agencies is both entrenched and arguably justified). Countries where it will take some time until vaccines are sufficiently widely available to make green passport policies a viable option, such as the US, should work on building trust in vaccination now. But the importance of building trust and of rationally convincing people to get the vaccine does not rule out the potential contribution of incentivizing vaccination, or the permissibility of green-passport-based distinctions.
• Equality and impact: Vaccine refusal and vaccine hesitancy are unfortunately correlated with membership in marginalized groups and with low socioeconomic status. Arguably, then, green passport policies will disproportionately harm the most vulnerable. This is a valid concern, of course, which should affect permissible policies. It should also affect the effort and resources put into establishing trust and rationally persuading the population with regard to vaccines. Notably, however, the indirect effects of the pandemic—of lockdowns and closures, of economic slowdowns, of higher unemployment rates, and so on—also fall disproportionately on the vulnerable and marginalized. So considerations of impact on the most vulnerable cut both ways in this dilemma. Green passport policies, by incentivizing inoculation, protecting from health-related harms, and reducing the economic effects of the pandemic, may ultimately serve an important role in mitigating these negative effects.
For now, benefits are too uncertain to justify green passports
Nicole Hassoun, Professor of Philosophy, Binghamton University
To decide if immunity passports are a good idea, it is important to get clear on: 1) what objectives we are trying to achieve by implementing them; 2) whether passports will achieve those objectives; 3) any ethical problems that remain if they do; 4) whether there are better ways of securing the same benefits without the ethical costs; and 5) whether there are ways of limiting the costs these passports create or expanding access to the benefits they provide.
Some argue that immunity passports will limit health risks while letting economies return to normal—but what risk levels are acceptable? Will passports reduce the risk below that threshold? Can we lower risk levels sufficiently without implementing passports? And can we compensate people for, or limit, passports' ethical costs?
I do not believe the data supports implementing an immunity passport system, at least given the considerations outlined above. To date, there is limited data on how the vaccines affect transmission rates. Moreover, there is significant uncertainty in tests for natural immunity. Especially with a quickly evolving virus, it is difficult even to figure out how long immunity will last, never mind whether we can achieve any particular public health objective with a passport system. Economic benefits are likewise uncertain—immunity passports will keep some people from accessing some public spaces even as they allow others to do so, such that the overall economic effect may be positive or negative. And social distancing and other policies may also help us lower health risks and secure economic benefits, which reduces the expected benefit from passports that could not be achieved by other, less burdensome means.
If we do implement immunity passports, I believe that (at a minimum) they should not constrain people’s access to the objects of their human rights and that we should try to limit and compensate for passports’ ethical costs. Global vaccine distribution is highly inequitable. Most people, even in rich countries, have not been vaccinated to date. Some cannot ever be vaccinated for health reasons, and most of those in poor countries may have to wait years for a vaccine. I believe that implementing an international passport system would give us even more reason to help everyone around the world access the vaccines as quickly as possible. Rich countries might compensate for any negative economic effects of passport systems on poor countries, for instance by providing unconditional international aid. Passport systems should also include exceptions to allow people who are willing to take appropriate precautions to access public spaces when they need to do so for important (e.g. health or family) reasons.
Of course, it is possible that COVID transmission and health risks will be worse if we do not implement passport systems than if we do, but, at least insofar as they constrain individual freedom and will exacerbate existing inequity in vaccine distribution, the burden of proof for establishing that they are justified falls squarely on those who advocate for them. Proponents of passports, even if they defend this measure on philosophical grounds, cannot simply assume or make up facts to justify their preferred policies.
In a pandemic, governments can require vaccination
Mark Budolfson, Assistant Professor, Rutgers Center for Population-Level Bioethics
In an infectious disease pandemic, governments should require vaccination if the infectious disease is bad enough and the vaccine is good enough. Incentivizing vaccination is not enough in such a case; governments should require vaccination, as long as the vaccine has been confirmed sufficiently safe and effective and is freely available to all. Whether this applies to the actual COVID situation depends on several empirical questions about risk and other factors that I highlight below, which should be answered by empirical experts. Thus, a policy of merely incentivizing vaccination and requiring immunity passports for many activities may not go far enough—even setting aside worries about whether immunity passports are feasible, given that records are often not kept of who has been vaccinated, and given that vaccine cards are easy to forge.
Governments can require vaccination in some circumstances for the same reason we can require people not to drive drunk and engage in other unacceptably risky behaviors: if an antisocial behavior creates an unacceptable risk of death or other serious noncompensable harm to others, and if that behavior can be prohibited without significantly harming anyone or imposing unreasonable costs, then the correct response is to require people not to engage in that unacceptably risky antisocial behavior. This explains why the correct public policy is to require people not to drive drunk, and why it would not be enough merely to tax drunk drivers based on the expected monetized value of the lives that will be lost due to their behavior. In other words, given the magnitude of the noncompensable risks imposed on others, it is not enough merely to create incentives; we must require people not to drive drunk in order to protect the basic rights of others. Similarly, in a context in which an excellent vaccine is freely available to all, and the unvaccinated would run an unacceptable risk of killing and maiming others, we must require people to be vaccinated (except in a small number of cases where there is a clear medical reason why they cannot be vaccinated).
In a pandemic, those who do not get vaccinated may impose an unacceptable risk to others, and vaccination can easily remove that unacceptable risk. Under those conditions, the correct policy is to require vaccination to protect basic rights. Note that this argument remains silent on the question of whether vaccines should be required in other important public health contexts in which there is a less dramatic risk that an unvaccinated individual will harm others. The correct policy in those other contexts depends on more thorny questions about the ethics of collective action and public goods—namely, exactly when and how governments might be justified in providing public goods in the domain of health and beyond. In contrast, these contested questions need not be answered when a pandemic is bad enough and a vaccine is good enough, because requiring vaccination is then justified by a more urgent and uncontroversial need to protect people’s basic rights.
To illustrate the key factor regarding risk, note that in the first year of the COVID pandemic, on the order of 1 in 1,000 U.S. adults died from complications of novel coronavirus disease. It is therefore realistic to imagine a bad pandemic in which a person who chooses to remain unvaccinated in a population where many are not fully protected could impose an additional 1 in 1,000 risk of infecting and causing the death of another person, and an even higher risk of causing other serious harm (i.e. serious illness). Even if the risk were lowered by vaccinating, say, 50% of a population, the risk imposed on others by those who choose to remain unvaccinated could remain unacceptably high (as many others may remain vulnerable, vaccines do not fully protect against death and serious illness, and the unvaccinated may promote mutations that create new risks of death and serious illness even for the vaccinated). In a bad pandemic, the degree of risk imposed by an unvaccinated person could realistically be on the same order as the risks involved in drunk driving and other antisocial behaviors that uncontroversially must be prohibited based on a government’s most fundamental obligation to protect basic rights to life and bodily integrity.
In contrast, someone who opts out of a vaccine for measles in a developed nation may impose a risk of death on the order of 1 in 10 million. While one could think that vaccination should still be required in such a case, that would require an additional argument based on a more contested set of questions about the ethics of collective action and public goods, given that imposing a 1 in 10 million risk on others is not generally thought to be a degree of risk imposition that must be prohibited to protect basic rights. The case for requiring vaccination in a bad pandemic does not depend on contested questions, since it depends only on recognizing the obvious need for governments to prohibit actions that impose as unacceptable a risk to others as a 1 in 1,000 risk of death.
Thus, when a pandemic is bad enough, the ethics of collective action and public goods takes a backseat to the more urgent ethics of protecting basic rights: the reason outlined above to require vaccination in order to protect basic rights provides an independently decisive case that trumps whatever additional reasons we may have to promote welfare. Even libertarians who reject the idea that people should be required to contribute to public goods should still agree that vaccination can be required in a pandemic, as even those libertarians agree that government should outlaw behavior that imposes an unacceptable risk of noncompensable harm to others.
In sum, no one would think that the correct policy response to drunk driving is merely to charge a tax of $1,000 a year for those who want to drive drunk and do nothing further. For similar reasons, while creating incentives to vaccinate is sometimes better than nothing, it may not be enough. The correct policy may be to require vaccination, depending on empirical facts about risk and other factors highlighted above. Choosing to remain unvaccinated may impose an unacceptable risk of noncompensable harm to others, and if so it should be prohibited (except when there are medical reasons not to vaccinate a specific individual) given that requiring vaccination isn’t an overly costly intervention into people’s lives in light of what is at stake for others.
When employers vaccinate eligible employees against COVID-19, what kinds of sub-prioritization criteria are permissible?
Overview of the dilemma
As part of the national effort to roll out COVID-19 vaccination, some states are allocating some of the vaccine doses at their disposal to large employers, so that the employer can distribute the vaccine to its employees. Early on in the vaccination effort, when health care workers were a priority, hospitals received vaccines to administer to their staff directly; now, as vaccine eligibility expands, employers may apply to set up their own vaccine dispensation points. CPLB’s own home institution, Rutgers University, has been approved by the state of New Jersey to administer vaccines on campus when vaccine supplies become available.
The number of people eligible for employer-provided vaccination according to state criteria will often exceed the number of doses available. Employers then need to sub-prioritize, or set rules to allocate vaccines among the eligible population, keeping in mind that that population would usually be entitled to get vaccinated through the state as well.
Early on in the vaccination campaign, some hospitals and nursing homes impermissibly vaccinated donors, trustees, board members, and relatives of executives in violation of state eligibility rules. But how should employers that honor those rules allocate vaccines?
On one possible approach, employers’ sub-prioritization ought to serve the same goals as the state’s criteria for allocating the vaccines across the state do. On that approach, the employer cannot permissibly bring additional goals into its sub-prioritization decisions. The state’s general goals can be served in two quite different ways:
- The employer replicates state allocation criteria. For instance, if the state prioritizes based on age and comorbidities only, the employer prioritizes based on age and comorbidities only, with similar cut-offs etc. It rations vaccines as any other distribution point in the state does.
- The employer enacts different criteria than the state does, in service of the state’s goals, considering the employer’s special conditions. For example, imagine that the state prioritizes any state resident thought to have a certain characteristic. Assume that an employer could have provided vaccination to any area resident thought to have that characteristic. Instead, the employer provides vaccination only to those area residents thought to have that characteristic who are its employees, because it has reliable and fine-grained data on its own employees with no need to rely on self-reports, or because it has its employees’ up-to-date contact details and so could reach those entitled to the vaccine faster than it could reach other area residents. The employer could therefore serve the state’s goal of targeting people with that characteristic faster and more accurately than other distribution points in the state if it focuses on those who are its own employees. In such cases, for the employer to offer vaccination to other eligible area residents would throw away that potential efficiency in serving the state’s own goals.
An altogether different approach holds that employers’ sub-prioritization decisions can permissibly serve additional goals. Which additional goals?
- Meeting societal obligations beyond what the state’s distribution system already does? For instance, given the well-known socioeconomic and racial/ethnic disparities in vaccine access, may an employer, frustrated with the state’s failure to achieve equity, go beyond state guidelines by prioritizing eligible employees who earn the lowest salaries, belong to underserved racial or ethnic groups, or reside in high social vulnerability areas?
- Meeting the employer’s own special obligations? For instance, may the employer prioritize employees at high risk of COVID infection when the employer is responsible for that risk (by requiring these employees to work in person) over employees at similar risk that is not due to the employer’s actions? May the employer prioritize its own employees and even its own retirees over other area residents (when the state would permit it to cover other residents) so as to discharge its own duties of reciprocity, which it incurred even before COVID?
- Furthering its own business objectives? For instance, so long as it meets any constraints specified by the state, may the employer prioritize employees whose return to work in person would facilitate reopening or most benefit productivity? May the employer judge that the state has given it the prerogative to prioritize employees whom the employer most wants to retain? Does the permissibility of these considerations vary between, for instance, a for-profit company and a public hospital? Does it depend on the state’s stated reasons for empowering the employer to make some allocation decisions?
Public justification and employer distribution of vaccines
Helen Frowe, Professor of Practical Philosophy and Knut and Alice Wallenberg Scholar, Stockholm University
The proposal that employers be (a) charged with administering vaccines to their employees and (b) permitted, within certain limits, to decide the pattern of vaccine distribution amongst those employees raises a range of moral questions.
Foremost amongst these questions, it seems to me, is why a state might be justified in outsourcing the allocation of vaccines to employers in this way. One candidate answer is that outsourcing to some types of employers is simply an efficient way to distribute vaccines to where they will do the most good. Here, ‘most good’ might be understood as, for example, (a) reaching those most at risk from serious harm if they catch COVID, or (b) reaching those who are already socially or economically disadvantaged, or (c) some combination of each of these, given that the data suggests correlations between suffering the worst effects of COVID and belonging to various socially disadvantaged groups. So, for example, if we take a university which has employees from a range of backgrounds, including from socially disadvantaged groups, then we might think that outsourcing vaccine allocation to the university will be an efficient means of getting vaccines to where they will do the most good.
If this is indeed what justifies the state’s outsourcing vaccine allocation to an employer, then it also provides a clear rationale that should guide the employer’s pattern of allocation. If, for example, our example university has been given this task because it is well-placed to meet the goal of getting the vaccine to members of disadvantaged groups, then its pattern of allocation should prioritise getting the vaccine to members of these groups. Indeed, this rationale suggests that the university should not merely prioritise getting the vaccine to members of these groups, but that it should only be distributing vaccines to employees who fall within those groups. If the university has surplus vaccines, it may not give those vaccines to employees who are not members of these groups. Rather, as far as possible, these vaccines should be made available for distribution to members of socially disadvantaged groups who are not employees.
Another candidate answer to the question of why a state might outsource the allocation of vaccines to employers invokes the importance of enabling certain organisations to function. The functioning of an organisation such as our example university protects and promotes morally important goods: it’s good for the economy, especially the local economy; it protects people from unemployment; it meets educational needs, and so on. If the importance of securing these goals is what justifies outsourcing vaccine allocation to large employers, then this suggests quite a different pattern of allocation. Vaccines should be allocated in a way that is most likely to enable the institution to (continue to) function and thereby protect these important goods.
So, on this model, the university should prioritise vaccines by asking (a) how likely a given employee is to contract COVID, and (b) what the effect would be on the university’s functioning if this employee were incapacitated. Note that the first question is not tied to how likely the employee is to contract COVID as a result of working for the university. On this model, there is no reason for the university to care about this role-related exposure rather than the employee’s general risk of exposure. Imagine that Anita, an administrator at our university, can do her job at home but cohabits with a partner who works in a public-facing role. Anita’s risk of contracting COVID is largely determined by the risk of her partner’s contracting COVID. If outsourcing vaccine allocation to Anita’s employer is justified by the importance of enabling it to function, the university should care about Anita’s absolute degree of risk of infection and not her role-related risk. It is the absolute degree of risk that is relevant to whether Anita will be able to perform her role.
Of course, we might think that each of these justifications—efficient distribution and organisational functioning—is likely to be instrumental in a state’s decision to outsource vaccine allocation to employers. But there is reason to be cautious of combining these justifications. This is partly because, as we’ve seen, they support quite different patterns of allocation. And it is partly because adopting a kind of middle path that gives weight to each might inadvertently undercut the functioning justification. The university’s ability to function presumably requires a critical mass of employees able to do their jobs. By diluting this justification with our reasons to reach disadvantaged groups, we risk failing to secure that critical mass. Balancing these justifications requires, at least, careful thought about what degree of functioning vindicates the outsourcing of vaccine allocation to an employer on the basis of functioning.
I suggested above that the functioning justification supports caring about employees’ absolute exposure to risk, rather than role-related exposure. Nevertheless, we might feel the pull of the view that the university has stronger reason to prioritise vaccinating those who are exposed to risk by their employment there. I am not arguing that we cannot or should not accommodate such an intuition. I am merely pointing out that this prioritisation is not derived from the justification of enabling the university to function. If we think that the university ought to give extra weight to risks that are incurred in the course of undertaking work for it, then this looks like an independent constraint on how the university may promote its functioning.
There is good reason to think that there are such constraints. In conversation, Brian Berkey suggests that we might justify ascribing extra weight to role-related risks by pointing to the fact that employees in roles involving in-person interaction are exposed to risks as a means of promoting the kinds of ends suggested above. The university is asking these employees to incur risks not only as a means of keeping their own jobs, but also as a means of helping other people keep their jobs, or secure an education, or help the (local) economy and so on. Whereas Anita is exposed to risk as a side-effect of her partner’s being usefully exposed, those whose roles involve in-person interaction are themselves usefully exposed to risks for the sake of benefits to others. Insofar as it is hard to justify requiring people to treat themselves as a means for the sake of others, particularly when this involves incurring risks of harm, we have reason to reduce those risks. This explains why an employer’s pursuit of its capacity to function is restricted by its obligations to limit the extent to which people are usefully exposed to risks for the sake of others.
Note, though, that this is not a claim that our example university has special obligations to mitigate the risks to which it exposes its employees that it may discharge through the use of a public good such as a vaccination. The obligation to reduce the risks to which individuals are exposed for the sake of others is not, in this instance, a special obligation attached to the university, because the goods at stake are broader public goods rather than goods for the university as such. It seems to me impermissible for the university to use public goods to discharge any special obligations that it might have (for example, to retired employees). Nor is such use supported by either the efficiency or functionality justifications considered here.
Employer vaccine prioritization must be consistent with legitimate public aims
Brian Berkey, Assistant Professor of Legal Studies and Business Ethics, Wharton School, University of Pennsylvania
When employers such as for-profit corporations, universities, or public hospitals are in charge of distributing limited supplies of COVID vaccines among employees and others associated in some way with the organization, there will be temptations for those involved in deciding how to prioritize among potential recipients to treat a range of factors as relevant. Executives at for-profit corporations may, for example, want to prioritize those employees whom they want to retain, and who are most likely to have appealing alternative options. And university administrators may want to prioritize students who pay full tuition, and are most likely to take a semester off if they’re not able to return to having a largely normal social life at the start of the fall semester.
These prioritization decisions may best serve the goals and interests of the relevant institutions, but they would also involve prioritizing employees who will tend to be higher up in the corporate hierarchy, and students from the most privileged backgrounds, respectively. If the state were making the relevant prioritization decisions directly, it would clearly be unacceptable to treat employee retention or preventing full tuition-paying students from taking a semester off as aims that justify providing priority access to vaccines to some over others in similar risk categories. In my view, it is no more acceptable for employers to prioritize on these grounds than it would be for the state to do so. This is because employers that are put in charge of distributing vaccines among employees and others associated with the organization should be understood as entrusted with the distribution of a public resource, and therefore must make decisions about how the resource is distributed that are justifiable in terms of legitimate public aims (as Helen Frowe suggests in the conclusion of her contribution to this Dilemma).
This principle rules out not only especially troubling grounds on which employers might want to prioritize, such as those that I noted above, but also others that we might initially find intuitively acceptable. For example, it rules out employers appealing to obligations that they have to employees in virtue of subjecting them to risks from COVID by requiring them to work in-person in order to justify prioritizing them over other employees who face similar overall risks, but have been permitted to work from home during the pandemic. It’s plausible that employers that have required certain employees to work in-person during the pandemic have special obligations to those employees that they don’t have to others. But, in my view, they can’t permissibly satisfy those obligations by using public resources with which they’ve been entrusted, such as a supply of vaccines that they’re charged with distributing.
Whether employees who have been required to work in-person can permissibly be prioritized over others at similar overall risk levels depends, instead, on whether there’s a legitimate public justification for prioritizing them. And it seems to me that in many cases there will in fact be such a justification. Some employees, for example, have performed (and continue to perform) work that’s genuinely essential and can’t be done remotely. In these cases, the state was or would have been justified in requiring their employers to continue to operate (at least largely) normally, at least with respect to their working conditions. The fact that some employees have put themselves at risk in the course of performing work that’s essential to the continued functioning of society during a pandemic is plausibly a legitimate basis on which the state might prioritize them for vaccine access over others at similar overall risk who don’t perform such essential work. If this is correct, then employers are permitted (or perhaps even required) to prioritize these employees – but importantly, this is only because the state could also legitimately (or perhaps would be required to) prioritize them if it was distributing access directly.
To see what my view implies for particular cases, consider a simple example: Firm F is a grocery store chain that employs A and B. A is a 65-year-old accountant who is in good health and has worked from home during the pandemic. B is a 40-year-old in-store worker with a minor health condition that increases her risk of hospitalization and death from COVID somewhat. Overall, they face roughly equal risks. My view implies that because the state has a legitimate interest in B’s work being performed, it may not be inconsistent with legitimate public aims for F to prioritize her over A for vaccine access. In addition, if B’s performing her work makes it the case that the state would be obligated to prioritize her over A if it were distributing access directly, then F is obligated to prioritize her as well (even if F’s interests would be better served by prioritizing A). This is because the public reasons that would require the state to prioritize B carry over to F when F is entrusted with the distribution of a public resource.
It’s worth noting that my view suggests that there isn’t any fundamental justification for employers distributing vaccines that they’re allocated only to employees and others associated with the organization. There may be general efficiency-based reasons for their doing so, and in some cases there may be legitimate public aims that would be served by distributing vaccines only to those within a particular organization. But this won’t be true in all cases, and when it’s not true my view implies that employers will have reasons to extend distribution beyond the institutions’ membership. If doing this would better serve the public aims that ought to guide the distribution of public resources, then it seems to me correct to think that employers ought to do it.
Private ends and the allocation of public vaccines
Bastian Steuwer, Postdoctoral Associate, Center for Population-Level Bioethics
Coronavirus vaccines are currently not available on the free market for private individuals to buy. Suppliers have entered into contracts with governments, which acted on behalf of their populations to ensure that vaccines are available. The purchased vaccines therefore belong to the government, and most governments choose to distribute the vaccines free at the point of delivery, either by relying on existing health insurance coverage to cover the costs or by paying for the vaccine for those who are uninsured.
In the United States, several states have nonetheless decided to involve private actors in the distribution of the vaccine. This is unlike many other countries in which governments are the sole distributor of vaccines, aided only by health care providers like hospitals or doctors that distribute in strict accordance with the government’s priorities. The special situation in the United States raises the following questions: how should private actors distribute the vaccines allocated to them? Can they use the vaccine to further their own ends, by which I mean either their own self-interest or their special obligations which are not shared by the state?
A possible resolution to this question lies in the contrast with other countries which rely entirely on public distribution. Why do states like New Jersey think it sensible to give large employers vaccines? Why do they give away a public resource, purchased for the general population, to private employers? We might think that the answer to this question also helps us understand how private employers should allocate vaccines when they receive them.
The why question is, however, importantly ambiguous. It can refer to one of two things. First, it may be thought to refer to the actual intentions of the government. Second, it can be understood as referring to the possible justifications the government has for using private agents in the distribution of vaccines. In practice, the first question is somewhat moot. There is often no clearly communicated intention by the state that explains why vaccines should be distributed privately. It seems to me that this does not constitute a particular problem, since the first interpretation of the question does not seem to be the relevant one. Imagine that a state gave away vaccines to appease big business. Big business liked having vaccines at its disposal due to both the additional power this entails over its employees and the positive PR that comes from being portrayed as a benefactor. In this scenario, we would not consider the state’s intention to be morally relevant. That’s because the state’s intentions display an unjustified attitude towards the prioritization of vaccines. If so, then we should take the second interpretation of the question to be the more important one.
So what are the possible justifications for using private employers to distribute vaccines? A first one is that private employers might be more efficient at giving vaccines to those with the highest priority. Employers have information that the state does not have. For example, the state cannot easily distinguish between different Walmart employees. Walmart has a better sense of who really is on the frontline in its stores. This improves the fairness of the vaccine distribution. Employers also have better information that allows them to reach out to employees and set up vaccination delivery in ways that are convenient for employees. Reducing missed appointments and improving communication increases the efficiency and speed of vaccination. A second, and distinct, rationale is that it is socially very important for certain employers to resume operations. For example, online education may be a poor substitute for the real experience. If so, then this provides a good reason for schools and universities to resume in-person classes as soon as possible to reduce setbacks in the education of children and young adults. Each of these justifications may, by itself, be sufficient to justify giving vaccines to employers.
A problem with this approach is that the two justifications can pull in opposite directions. Take the following example. Adam is a current frontline worker who has a moderate risk of harm from COVID should he get it. His current risk of exposure is high. Beatrice currently works from home because she has a higher risk of harm from COVID should she catch it. At home, her exposure risk is very low. The efficiency idea would favor Adam, who is currently at higher risk of adverse outcomes and at higher risk of infecting others. But the reopening idea would favor Beatrice. If employees like Beatrice are vaccinated, then the employer can restart operations more easily.

What should the private employer do? The reasons for which the employer is given the vaccine are overdetermined. Might the intention of the government serve as a guide? I am not convinced. Unless the government attaches specific strings to the vaccines, the democratic process has not precluded either of the two allocation plans. But perhaps the employer should nevertheless heed the democratic intention. I am more inclined to think, however, that in such a case the employer can further its own ends in a limited way. It can choose between the two justifications from its own perspective. That does not necessarily resolve the case. One can argue that Adam has a strong claim that the employer owes the vaccine to him, given that the employer put Adam on the frontline. One can argue that the employer has a self-interested reason to give the vaccine to Beatrice. Either way, the employer is using its own perspective in a limited manner. I don’t think that this conflicts with seeing the vaccine as a public good paid for by the government. Whichever option the employer chooses, there is a public justification for the allocation. The vaccine is always treated as a public, and not a private, good.
Employees, clients, and everyone else
PhD candidate, Rutgers University and Stockholm University
Large public and private firms are now aiding in the distribution of COVID vaccines. The rationale, roughly, is that at least some large firms are well positioned to efficiently distribute vaccines to their employees. Presumably, they are capable of such efficiency due to their preexisting infrastructure and the information they possess about many of their employees.
However, if the rationale is based on efficiency gains alone, it's difficult to see why employees are so special. Many large firms also have significant information about their clients. Universities, for example, often have at least as much information about their students as they do about their employees. Why should universities prioritize giving access to vaccines to their employees rather than their students? One might think that, in the case of COVID, many students do not face as significant a risk as many employees due to their younger age. However, this is true only as a rough generalization. There will, of course, be non-traditional students who are older than many employees, and some students with relevant comorbidities. And there will also be many employees who are relatively young and lack relevant comorbidities. It's thus unclear that the employee/client distinction can even serve as a proxy measure for where people fall among the various priorities.
From a rather abstract point of view, I find it difficult to see why the difference between employees and at least certain kinds of clients, like students, should make a difference to the moral obligations of employers. Both employees and some kinds of clients are in regular economic exchange with the employer. One simply exchanges labor for money while the other exchanges money for goods or services. Of course, we might think that the repeated, sustained exchange relationship of any employee with their employer often generates special duties between them. Employees, in some sense, have more contact with their employers than does any one client. However, again, at least in the case of students, this doesn't create an obvious difference between employee and client. The university has a close, and in some ways even more intimate, relationship with its present students. Many students live, eat, and spend their off-work hours in the institutions of the university. And this relationship does continue, to varying degrees, after the student graduates.
But suppose that what I say just can't be right. Suppose that we think employers in the university setting have special duties to their employees that they do not have to their students. These duties are not grounded in efficiency or ongoing exchange interactions or anything else like that. They're just grounded in something unique to the employee-employer relationship. Suppose that that's right, and that we think universities are permitted to, or even should, act on these special duties to their employees in distributing vaccines. If so, then we seem committed to the claim that large firms like universities may appeal to their special duties—duties that are not part of the public ethical justification, founded upon efficiency, for having large firms aid in vaccine distribution—in prioritizing some individuals rather than others.
This raises a host of further questions. And not just for universities. If we think universities can appeal to special obligations to prioritize some over others, then why can't other large firms do the same? And why should only special obligations to employees count? After all, many firms have special obligations to others as well. What about retirees? Or why can't a private firm appeal to special obligations they “just owe” to their stockholders? Yet giving stockholders priority seems like an objectionable use of a public good. Of course, this implication might be avoidable. My point isn't to dismiss the idea that there are special employee-employer duties out of hand, or to prove that large firms cannot appeal to such duties. Rather, my point is that considerable work must be done. We need an explanation of such duties and a justification for letting firms appeal to them which doesn't collapse into moral absurdity. After all, this is a domain of great public importance. Transparently acceptable justifications are required.
If we're unable to work out such a justification for allowing large firms to privilege their employees over others, including others to whom they may have special obligations, such as stockholders, then how these firms should go about distributing the vaccine may just be a matter of efficiency. This too may seem strange. After all, efficiency is hostage to local circumstances. If, for example, a university is in a rural or impoverished area, it might be the most efficient distributor of vaccines for a great many people who have neither an employee nor a client relationship with the university. And this would simply be because, despite a lack of personalized information, the university might have infrastructure appropriate to the task in a way no other institution within a reasonable distance does. In such cases, why shouldn't the firm be forced to ignore all of its employee and client relationships in the name of efficiency?
When employers act as vaccine distributors
Assistant Professor, Rutgers Center for Population-Level Bioethics
Some employers are well-placed to serve as vaccine distributors, given our societal goals of health, welfare, and equity. Many employers have greater know-how, capacity, and incentive than governments to vaccinate their employees quickly. And where these employers are large and have diverse workforces, allocating vaccine to them can be an efficient way of getting vaccine to individuals quickly and equitably.
For example, many state universities have a very large and diverse workforce, and have the knowledge, capacity, and incentive to vaccinate their employees quickly, given that they know who their employees are and where they will be, have sophisticated biomedical staff and facilities, and stand to lose millions of dollars if vaccination is delayed. Allocating some vaccine to such well-placed employers is part of the best feasible way that society can promote health, wellbeing, and equity, because it allows government to better achieve our societal goals (quickly getting vaccine to individuals in an equitable way) than if government insisted on directly distributing all vaccine to individuals itself. This explains why we should make some (but not all) employers vaccine distributors. Note that the claim here is merely that some vaccine should be allocated to employers to distribute to individuals, not that all vaccine should be allocated in this way. We should not allocate all vaccine to employers, because our societal goals also imply that some (perhaps most) vaccine should be allocated to public health agencies, pharmacies, and other entities to distribute directly to individuals.
Because some vaccine should be allocated to employers, an important question arises about how those employers should distribute their allotted vaccine. This question cannot be answered by vaccine allocation guidelines at the governmental level, because employers face additional unique questions that are not addressed by those guidelines. For example, a large employer faces the question of who belongs to the set of individuals to whom it should distribute its allocated vaccine. Should retirees be included? What about part-time vs. full-time employees? What about vendors such as cafeteria workers who are technically employees of a different company but are assigned to work full-time in the buildings run by the employer in question, and may be more exposed to COVID risk than its own employees in the same buildings? Does the employer have a special obligation, in virtue of the risk it imposes on some non-employees, to treat them as part of the set of individuals to whom it should distribute its allotted vaccine? Should any member of the general public have a right to demand vaccine from employers who are allocated it, whether they are an employee or not? And whatever one makes of these questions about the relevant set of individuals to whom an employer should distribute vaccine, there are further questions, such as whether employers should add allocation parameters within the allocation guidance provided by government, for example always allocating scarce vaccine to older individuals first in order to break ties within the allocation groups provided by government.
In answering these questions, it is important to recognize that employers have special obligations to two partially overlapping groups: those who are at higher risk from COVID because of the employer’s actions, and those who have a legitimate claim to membership in the employer’s community, which I suggest is a larger set than merely the employer’s current employees. To see the importance of these two types of obligations, an analogy can help. Suppose society were different, and most people lived in very large households, with one person as the head of each household. And suppose we faced a similar pandemic, with similar dynamics and a similar need for vaccination. In this situation, it could make sense to distribute vaccine to heads of households to distribute within their household community, given the special knowledge, capacity, and incentive of households to vaccinate their community. With that setup in mind, we can imagine individuals within these households analogous to the retirees, non-employee vendors, and others considered above. So, imagine that these large households tend to contain people who are old enough that they no longer do physical labor—they are ‘retirees within the household’—and tend to contain people who are simply paid to provide childcare and the like for those within the household, but are not family members or official members of the household in other ways—these are ‘contract laborers within the household’. And finally, suppose that many of these retirees and contract laborers are more vulnerable to the pandemic than others within the household.
Now suppose that you learned that when some heads of households are allocated vaccine to distribute, they treat retirees and contract laborers as if they should simply be ignored and are not eligible to receive any of the vaccine allocated to the household. Imagine that these heads of households tend to argue that retirees and contract laborers are not official working members of the household, and thus are not contributing to its profitable operations, and so have no claim to receive any of the vaccine allocated to the household. The correct response is that such an argument for excluding retirees and contract laborers is ethically mistaken. Similar remarks apply in our actual situation to employers who exclude retirees and contract workers put at elevated risk within the employer’s operations—and the point holds with even more force with respect to employers, given that employers lack the special familial bonds that might otherwise tell in favor of prioritizing family members in the households example.
Thus, the ethically correct analysis is that both retirees and contract laborers are members of the set of individuals to whom employers should distribute vaccine—they are members of the employer’s ‘relevant community’. The government allocates vaccine to the employer because of the employer’s special knowledge, capacity, and incentives to vaccinate individuals within its relevant community; this creates a social contract between the government and the employer to the effect that, when the vaccine is transferred to the employer, it must be allocated to all members of the relevant community in a way that promotes society’s goals, rather than in a way that merely maximizes the employer’s profits. If the employer does not allocate in this pro-social way, and instead merely prioritizes the most profitable employees and artificially excludes retirees and contract laborers, then it violates its obligation to society, and also violates special obligations to its retirees and contract laborers to treat them as valued members of its community. At the same time, employers have no obligation to provide vaccine to all members of society outside their relevant community, just as heads of households would have no obligation to provide vaccine to those with no connection to their household: once the vaccine is transferred to the employer, it is no longer a public resource.
The considerations above explain why vaccine should sometimes be allocated to employers, and begin to answer questions about how employers should distribute vaccine. Beyond the special obligations identified here, vaccine should presumably be distributed by employers so as to equitably mitigate risk within the relevant community.
In deciding between funding different health programs with limited resources, what number of deaths of newborns is as intrinsically important to avoid as the deaths of 100 young adults?
Overview of the dilemma
In people’s judgments about health resource prioritization, saving the life of a young adult is often assigned greater inherent priority than saving the life of a very old person, either on the assumption that saving young adults tends to preserve more life years or because young adults have had less chance for a full life. In that spirit, there is usually greater emphasis on preventing death from HIV/AIDS (a disease that is especially prevalent among young adults) than on preventing death from cardiovascular diseases (which typically affect the old), both because the death of a young person typically takes away more life years, and because it takes these years away from someone who has enjoyed fewer years. Other people insist that a death is a death (or a future life year is a future life year) and prioritize older people and young adults equally. Either way, what remains uncommon is to assign greater priority to saving older adults than to saving younger ones when they are at similar risk of dying (while priority for older adults was well accepted in COVID vaccination, this was because the risk of dying of COVID is far greater for the old).
However, when we compare newborns and fetuses to young adults, the pattern reverses, and saving the lives of young adults, who are older, is usually prioritized. One survey’s findings can be interpreted as showing that a lay population prioritizes saving the life of a 39-week fetus over that of a 10-week fetus; treats full-term fetuses and newborns alike; and prioritizes saving one-year-old children over saving fetuses and newborns, and almost as highly as saving adult women. What explains this pattern of prioritization? And what, if anything, might justify it?
The answer has real ramifications for health resource prioritization around the world. It directly affects prioritization between, for example, prevention of stillbirths, of neonatal death, and of death from HIV/AIDS. It may shed light on the abortion debate. It affects measurement of the burden of disease: should a stillbirth count as generating an extremely high disease burden (because it often imposes the loss of many expected life years), or an extremely low burden (because the stillborn’s death does not “count” for the purpose)? It is also of great philosophical interest.
Here are a few candidate explanations for the common tendency to prioritize young adults over embryos, fetuses, and even newborns for life-saving when other things (e.g. risk of short-term death) are equal. Each candidate explanation is followed by lines of questioning that may be raised against it, at least when that explanation purports also to justify that common tendency:
While infant mortality is common in many parts of the world, young adults have already escaped death in infancy, and many societal resources have been invested in them. Young adults are therefore more likely to become both productive and reproductive contributors to society than embryos, fetuses and newborns. We may have either economic or species preservation reasons, therefore, to prioritize saving the lives of young adults.
However, the deepest philosophical question, and the question relevant to measuring the burden of disease for each individual, concerns only the inherent importance of preventing deaths, not its instrumental importance to society or to the continuity of the species.
Young adults are personally more invested in continued living than fetuses and infants are.
However, what does being invested in continued living mean, and why should that drive priorities in resource allocation?
Young adults have a concept of a history which may be cut short, more than fetuses and infants do, so survival means more to the former.
However, we do not usually think that someone’s full comprehension that an evil is done to them is necessary for its designation as an evil. Why, then, condition the badness of a death on the dying person’s own sense of history?
Young adults harbor long-term goals, which dying soon would typically thwart, arguably unlike fetuses and infants.
But is the frustration of our goals inherently bad for us? And is it so bad that it can outweigh the typically longer future that a newborn might have had upon short-term survival, making young adult death worse overall?
Young adults resemble themselves psychologically as older adults more than embryos, or even fetuses and newborns, resemble their older selves. So on psychological-continuity accounts of personal identity, dying deprives young adults of many decades of a life that they would have, and not so for embryos, fetuses, and newborns.
But our tendency to discount deaths of young humans seems deeper than considerations of psychological continuity and, relatedly, of personal identity. Embryos, fetuses, and newborns arguably retain their personal identity at least for some weeks in which no transformative developments take place. Intuitively, however, one extra week of continued existence is not particularly beneficial to them. So the account in terms of continuity and identity at best captures only a part of the truth.
Young adults are full-blown persons, with higher moral status and firmer rights than fetuses and newborns possess.
However, setting priorities based on assigning different statuses seems wrong in other areas of health—for example, fetal pain and infant pain arguably should not command fewer resources than pain in young adults.
Young adults are already living their “life story” or “narrative”, which even a painless death would disrupt, unlike very early humans, who haven’t yet started “writing” their book of life.
However, why is a “life story” key to determining health-resource prioritization, as opposed to, for example, the stakes in terms of health, capability, and the like? And doesn’t our story start at conception or at birth, rather than only when we become full-blown persons?
This Dilemma asks which of these explanations, if any, can justify the common tendency to prioritize young adults over embryos, fetuses, and even newborns and the related decisions in health resource allocation.
Valuing mortality risks at different ages
Lisa A. Robinson, Deputy Director, Center for Health Decision Science, Harvard T.H. Chan School of Public Health
How should we trade off deaths of newborns versus deaths of young adults? Clearly this is a normative issue, one that philosophers are accustomed to addressing. But I am an economist or, more precisely, a policy analyst. Here I describe how the framework within which I most often work, benefit-cost analysis, would approach this dilemma.
Addressing this dilemma from the benefit-cost analysis perspective requires first clarifying that framework. Conventionally, benefit-cost analysis compares scenarios without the policy to scenarios with the policy over the time period when the policy would be implemented. It considers all impacts, positive or negative, related or unrelated to health. In this dilemma, presumably a decision-maker is faced with only two options, each of which has equivalent costs and only one outcome: averting the deaths of infants or averting the deaths of young adults. Such a choice is obviously an intentionally artificial construct. Choices are rarely (if ever) this stark, nor are they limited to so few options and outcomes. Accepting this artificial construct, however, how would a benefit-cost analysis answer the question?
In benefit-cost analysis, as conventionally conducted, value is derived from individual preferences for exchanging money for outcomes of concern. Money is not important per se. Rather, it is a convenient measure of exchange, representing the allocation of scarce resources (labor, materials, and so forth). If an individual spends money on a good or service, he or she cannot use that same money for other purposes; the expenditure has an opportunity cost. Presumably, the individual purchases a good or service if she or he values it more than the other things that money could buy. Equivalently, the amount an individual is willing to pay for a mortality risk reduction, such as a 1 in 10,000 decrease in the risk of death in a given year, indicates the extent to which he or she is willing to forego other consumption to achieve that improvement.
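The arithmetic implicit in this last step can be made explicit. Below is a minimal sketch; the function name and the $1,000 figure are hypothetical illustrations, not estimates from any study:

```python
# Hypothetical illustration of the "value per statistical life" (VSL)
# implied by willingness to pay (WTP) for a small mortality risk reduction.

def implied_vsl(wtp_per_person: float, risk_reduction: float) -> float:
    """VSL = individual WTP divided by the size of the risk reduction."""
    return wtp_per_person / risk_reduction

# If each person would pay $1,000 for a 1-in-10,000 reduction in annual
# mortality risk, the implied VSL is $10 million.
vsl = implied_vsl(1_000, 1 / 10_000)
print(f"${vsl:,.0f}")  # → $10,000,000
```

This is why valuation studies can elicit willingness to pay for tiny risk changes and still report a population-level "value per statistical life": the small payments, summed over the 10,000 people among whom one expected death is averted, total the VSL.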
Conventional benefit-cost analysis is also based on respect for individual preferences; it is not paternalistic. Each individual is assumed to be the best, or the most legitimate, judge of his or her own welfare. This means that the value of an outcome, such as mortality risk reductions, is derived from the preferences of the individuals affected. Because infants and young children lack the ability to develop thoughtful and well-informed preferences for these tradeoffs, researchers typically rely on parents to estimate their preferences. This means that, in benefit-cost analysis, the mortality risk reductions envisioned under this dilemma would be valued based on the willingness of the affected individuals to exchange money for the risk reductions they would experience. To compare the total benefits of these two policies, one affecting newborns and the second affecting young adults, a benefit-cost analysis would sum individual willingness to pay for the risk reductions across those affected in each case.
Within this framework, what does the available research tell us about these tradeoffs? In high-income countries, researchers often find that, on average, values for children exceed values for adults by a factor of 1.5 or more. The extent to which these values vary with the age of the child is uncertain. For working-age adults (e.g., between ages 18 and 65), values are often found to follow an inverse-U pattern, increasing throughout young adulthood, peaking in middle age (generally somewhere between ages 35 and 45), then declining. However, the slope of the curve and the age at which it peaks vary across studies. For older adults (generally above age 65), the pattern is less clear; values may increase, decrease, or remain stable.
These patterns suggest higher values for each newborn affected than for each young adult, at least in high income settings. For example, if we start with the 2019 population-average values recommended by the U.S. Department of Health and Human Services, the value of averting an expected death at age 40 would be $10.6 million. If we assume that the value for an average child is 1.5 times the value for an average adult (age 40), then the value of averting an expected death for a child would be $15.9 million. As an example of the values for younger adults, one analysis that applies an inverse-U function to those of working age finds a value per expected death averted of $5.4 million for ages 24 and under and $8.5 million for ages 25 to 34. These estimates are uncertain, however. The research findings are not entirely consistent across studies and many issues are unresolved. Thus while we can conclude that the policy affecting newborns would likely be preferred if both policies were to avert the same number of expected deaths (all else equal), we are uncertain about the size of the difference.
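As a quick arithmetic check on the figures just quoted (a sketch only: the dollar values are the point estimates from the text, while the 100-deaths-averted count is a hypothetical input):

```python
# Point estimates quoted in the text; the deaths-averted count is hypothetical.
ADULT_VALUE = 10.6e6        # 2019 HHS population-average value (age 40)
CHILD_MULTIPLIER = 1.5      # average child valued at ~1.5x the average adult
YOUNG_ADULT_VALUE = 8.5e6   # inverse-U estimate for ages 25 to 34

child_value = ADULT_VALUE * CHILD_MULTIPLIER  # $15.9 million per death averted

deaths_averted = 100  # hypothetical: same expected deaths under each policy
newborn_benefit = deaths_averted * child_value
young_adult_benefit = deaths_averted * YOUNG_ADULT_VALUE

print(f"newborn policy:     ${newborn_benefit:,.0f}")      # $1,590,000,000
print(f"young-adult policy: ${young_adult_benefit:,.0f}")  # $850,000,000
```

On these point estimates the newborn policy yields nearly twice the monetized benefit, but, as noted above, the true ratio is uncertain because the underlying studies disagree.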
If the policy would instead affect a low- or middle-income country, these patterns may not hold. The relationship between the value of mortality risk reductions and age has not been well studied in these settings and may differ for cultural and other reasons. More generally, regardless of location, values will likely vary due to other characteristics of the individuals affected, such as income and health status, and characteristics of the risks, such as the degree of morbidity prior to death and the extent to which the risk is viewed as voluntary and controllable.
Thus, within the benefit-cost analysis framework, we would address this dilemma by investigating the value that individuals place on their own risk reductions, asking parents to estimate values for risks to newborns and young adults to estimate values for their own risks.
Given space limitations, this essay ignores many other relevant concerns. These include confusion about the “value per statistical life” or “VSL” terminology that is often used to describe willingness to pay for small changes in mortality risk, as well as options for valuing changes in life expectancy (the value per statistical life year, VSLY) rather than changes in mortality risk. These and other concerns are explored in detail elsewhere and many government agencies and organizations have developed related guidance.
Benefit-cost analysis provides an incomplete basis for policy decisions, however. Like any form of analysis, it ignores pragmatic concerns, such as legal, financial, and political constraints. A perhaps more difficult challenge is the need to ensure that unquantified impacts are appropriately communicated and weighed, neither ignored nor exaggerated. Whether and how to incorporate preferences for others’ wellbeing within this framework raises many conceptual and empirical issues. Perhaps most importantly, the distribution of outcomes across advantaged and disadvantaged groups must be considered.
Avert the worst deaths or prioritize the worst off?
Joseph Millum, Bioethicist, Clinical Center, National Institutes of Health
Disclaimer: The views expressed are the author’s own. They do not represent the position or policies of the National Institutes of Health, the Department of Health and Human Services, or the U.S. Government.
In deciding between funding different health programs with limited resources, what number of newborn deaths is as intrinsically important to avoid as the deaths of 100 young adults?
Health systems are rarely faced with direct choices between the lives of newborns and the lives of adults. However, in deciding which interventions to fund or where to expand access first, policy-makers reveal the relative value they put on different sources of morbidity and mortality. There are some interventions whose major benefit to populations comes through preventing very young deaths, such as rotavirus vaccination. Others, such as interventions to prevent and treat HIV/AIDS, have a considerable effect on reducing young adult deaths. When decisions must be made about where to direct scarce health care resources, it can therefore make a difference how much value is placed on preventing a newborn or infant death versus preventing the death of a young adult. It will affect how much the system should be willing to pay per death averted.
How should we compare the value of averting deaths? One way to do so is to calculate the amount of healthy life that the decedent would live if they were saved. We can do this using summary measures of health such as disability-adjusted life-years (DALYs) or quality-adjusted life-years (QALYs). These combine length of life and health-related quality of life into one measure.
Assume, for present purposes, that we are considering young adults and newborns who would go on to live otherwise-average lives provided that their immediate risk of death is averted. (We are not, for example, considering newborns with congenital conditions that will have serious sequelae even if their lives are saved.) Even in countries where neonatal and child mortality is very high, the average newborn has more life years ahead than the average twenty-year-old. If our goal is to maximize benefits in terms of DALYs averted or QALYs gained, we should spend more to save newborns than to save young adults.
The appropriate goal for the health care system may not be to maximize benefits. Indeed, most people who think about the ethics of allocating scarce resources conclude that maximizing benefits should be at most only one of a health system’s goals. In the context of comparing neonatal and young adult deaths, two other ethical considerations are relevant. These considerations pull in different directions.
First, many philosophers think that how bad it is for an individual to die is not merely a function of how much life they miss out on by dying. The individual’s cognitive development matters too. For example, for someone who is self-aware and has a sense of themselves as having a past and a future, it matters a great deal if they miss out on future life. For someone less cognitively developed, such that they do not yet have a sense of self, it may seem to matter much less to them what they miss out on. So, though newborns miss out on more by dying than do young adults, a typical young adult is much more cognitively sophisticated than a newborn. It therefore matters much more to the young adult if they are deprived of future life.
The view that how bad it is for someone to die is a function of both what they miss out on by dying and their level of cognitive development leads to a form of gradualism about the badness of death. On a gradualist view, for the typically developing human, how bad it is to die rises with age for the first few years and then gradually declines.
Depending on one’s specific reasons for adopting gradualism, the exact function relating age to the disvalue of death can differ considerably. Some gradualists think that the worst time to die is as a toddler; others that death is worse as an adolescent or young adult. These differences will be important for situations in which policy-makers need to place a value on preventing the deaths of toddlers and older children, rather than just newborns.
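To make that disagreement concrete, here is a toy sketch of two gradualist weighting curves. Both the functional form and the numbers are purely illustrative assumptions, not any published gradualist's actual account:

```python
def gradualist_weight(age, peak, life_expectancy=86):
    """Toy disvalue-of-death curve: remaining life-years, discounted by a
    'connectedness' factor that ramps up linearly from 0 at birth to 1 at
    the peak age. Purely illustrative."""
    remaining = max(life_expectancy - age, 0)
    connectedness = min(age / peak, 1.0)
    return remaining * connectedness

# With a peak at age 3, the worst death is a toddler's (weight 83 at age 3);
# with a peak at 20, it is a young adult's (weight 66 at age 20).
for peak in (3, 20):
    print(peak, [gradualist_weight(a, peak) for a in (0, 3, 20, 40)])
```

Both curves agree that death at birth carries little disvalue and that disvalue declines in old age; they differ only in where the peak sits, which is exactly what matters when pricing interventions aimed at toddlers and older children.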
The second ethical consideration that I wish to flag in this context concerns the distribution of benefits. Most people think that when we are allocating a scarce resource, we should consider both the magnitude of the benefits that we can generate and the way that those benefits are distributed. It would be unfair to give all the resources to people who are already better off, even if that would maximize the benefits that the resources could generate. This concern about distribution can be captured in egalitarian or prioritarian terms. Either way, it implies that we should care more about providing benefits to those who are worse off.
Who is worse off—a newborn who dies or a young adult who dies? Since the newborn has less life, it seems very plausible that they are worse off. (Interestingly, this can be true even if the young adult’s death is worse for them than is the newborn’s for them. If we had no rationing dilemma, but were just asked whether to save a newborn who would then live twenty years, presumably we would assent. Twenty years is better than no life at all.)
Our two ethical considerations pull in different directions. Gradualism about the badness of death suggests that it is worse for young adults to die than for newborns. We should therefore be willing to spend more to avert young adult deaths. But newborns who die are worse off than young adults who die. Giving greater priority to the worst off supports spending more to save the newborns.
How we answer this dilemma will depend on how much we discount the badness of death for newborns versus how much additional priority we give on the basis of disadvantage. If health systems are to fairly allocate scarce resources they need to put numerical values on each. Currently, we are a long way from consensus on either prioritization decision.
Acquisition of human potential and the value of life
Julian C. Jamison, Professor of Economics, University of Exeter (UK)
Let me start by putting on the table that my default position is total utilitarianism, where a life-year is a life-year is a life-year. That implies perhaps 25% extra value on a newborn relative to a young adult, due to the newborn’s higher remaining life expectancy. However, there are two complications to that approach (in my personal view) which have bearing on the question at hand.
The first complication is that I also want to factor in prioritarianism, giving more weight to those who are less well-off. In this generic version of the question, naturally, we don’t know anything about incomes or other circumstances, nor do we suppose that these would differ across the two age groups. What we do know is that we are considering health interventions targeting distinct populations that differ (at least probabilistically) in their age of death. While dying as a young adult is ‘unfair’, it seems to me that dying as an infant is even more inequitable (i.e. even further from expectation), and hence the latter group should be given extra weight in a prioritarian calculus. Let’s call it 50% higher after aggregating with the life-years argument above.
The second complexity is that implicitly the basic utilitarian “life-year” was a human life-year, but what counts as a human? I reject a categorical definition or bright line between human and not-human (if nothing else, consider our continuous evolutionary history) and instead posit a gradual increase from nothing to fully human; see also a philosophical justification offered elsewhere. For the present purposes I will set aside issues regarding capability at any given moment (which would be relevant for animals and some disabled humans, neither of whom are on an individual path to full human capacity) and instead focus on what seems to be the relevant transition from being a potential but indeterminate human to a fully existing one. In other research, I have tried to conceptualize the continuum from a potential to an existing human in terms of how feasible it is for us (as, ideally, disinterested decision-makers) to understand and empathize with what it means for another individual’s existence to go well or poorly: how the life of an existing human might go well or poorly is more comprehensible than how the life of a merely potential human might, leading us to be more confident about being able to do good for the former. In the bullet points of the Overview of this dilemma, this way of justifying prioritizing young adults over newborns is closest to point #6 regarding “full-blown persons”, since unlike most of the other points, in #6 the justification is not from the perspective of the entity in question (partly exactly due to the various counter-arguments given to those approaches). Note that my framework regarding potential and existing humans does imply assigning lower weight to fetal and infant pain.
Next: when does this process of transition from potential to existing human start – at birth? At fetal viability? At conception? I would argue, even earlier: a cell that is a potential future human is already the slightest bit past the starting point in this transition. The ‘Procreation Asymmetry’ in population ethics says that (in Jan Narveson’s words) “we are in favor of making people happy, but neutral about making happy people”. I suspect that this intuition (which many share) is the same one that leads to the relative devaluation of newborns compared to adults (and teens or school children). Merely potential people get less moral weight than instantiated people. I agree with this view for the reasons mentioned, but I would give even purely potential people (i.e. pre-conception) some positive weight, contra Narveson. And then I would place continuously increasing weight as that potential is developed and the individual human is pinned down and becomes comprehended by (or at least comprehensible to) the policy decision-maker, reaching full determinedness by (say) five years old.
Where does this leave us in terms of the original question? Our tally was at 50% extra weight on the newborns, but now we need to downgrade them due to their status as not-yet-fully-individuated humans. Admittedly the actual numbers become somewhat arbitrary at this point, but for the sake of argument let us say that purely potential future humans receive one-third the weight of existing full humans. Let us further suppose that newborns are halfway on their journey from formlessness to full-blown-ness. This puts them at two-thirds total, after which we can add the 50% for more life-years and for prioritarianism… and we reach parity! Yes I cooked the books to make it come out exactly the same, but each individual step seems roughly right to me.
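The tally can be written out explicitly (using exact fractions; every weight here is the stipulation from the paragraph above, not an empirical quantity):

```python
from fractions import Fraction as F

combined_bonus = F(3, 2)    # 50% extra: ~25% more life-years plus prioritarianism
potential_weight = F(1, 3)  # stipulated weight of a purely potential human
newborn_progress = F(1, 2)  # newborns assumed halfway to full-blown-ness

# Linear interpolation from the potential-human weight up to full weight (1):
newborn_status = potential_weight + newborn_progress * (1 - potential_weight)
assert newborn_status == F(2, 3)

newborn_weight = combined_bonus * newborn_status
print(newborn_weight)  # → 1: parity with the young adult
```

The "halfway" interpolation is one possible reading of the stipulation; a different progress fraction or potential-human weight would break the parity in either direction, which is the sense in which the books were cooked.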
Newborns or young adults? What we have and what we could have
Carl Tollef Solberg, Professor of Philosophy, University of Bergen, Norway; Senior Researcher, Center for Medical Ethics, University of Oslo, Norway
Suppose a decision-maker must prioritize between life-saving interventions that will save either newborns or young adults. It is assumed that individuals in either group will go on to live a life worth living, or die in a few weeks if denied help. Should the decision-maker give the life-saving treatment to the newborns or the young adults? The key issue at stake here is the relative importance of preventing the deaths of groups of individuals at these two ages.
There is a menu of different approaches available, and, as we will see, a choice of either of these approaches may tip the scale in favor of either newborns or young adults. One approach is backward-looking. Here we can consult distributive justice theories like egalitarianism and prioritarianism. These distributive theories emphasize what individuals have had so far in life. By definition, newborns have had fewer life-years (or anything else that matters) than young adults, which favors prioritizing the former over the latter. In sum, a backward-looking approach will tend to favor newborns.
A second approach is present-looking. Such a perspective moves our attention to the current characteristics of the individuals in our dilemma. It involves a considerable degree of actualism (putting weight on current characteristics in making ethical judgments). As a matter of fact, young adults are in possession of characteristics that newborns lack. Young adults possess personhood, they have established long-term goals and projects, interests, and ambitions – in short, they have made various life-investments that would be lost if they die prematurely. Young adults are also more productive, and society has made various kinds of investments in them. They may also themselves have children, and they probably have parents and grandparents that depend on them. Young adults are temporal beings with narrative selves and deep social bonds to other people, and a more sophisticated capacity for well-being and suffering. The present-looking approach favors these actualist characteristics, and in sum they count in favor of saving young adults over newborns.
A third approach is forward-looking. Prioritizing a group of newborns is likely to gain support from those who either believe that we should maximize utility like quality-adjusted life years (QALYs) or who believe that we should minimize disutility like disability-adjusted life years (DALYs) when setting priorities in health. A measure like the DALY is, in its very construction, defined so that the death of newborns will come out as a larger tragedy (in the sense of generating more disease burden) than the death of young adults (even if stillbirths generate no disease burden). This raises a question of when individuals begin to accrue DALYs. If one believes that death is worse the earlier it occurs, then one seems committed to the view that death is worst right after the individual has begun to exist. And if we begin to exist at conception, this implies that policymakers aiming to maximize QALYs or minimize DALYs should prioritize saving embryos over fetuses and fetuses over newborns. But this is implausible.
The forward-looking approach operates on the value of possibilism (putting weight on possible future characteristics in making ethical judgments). In contrast to the actualism of the present-looking approach – which emphasizes current characteristics and thereby young adults – the possibilism of the future-looking approach tends to emphasize what can or will happen in the future if we prevent the deaths of young individuals like newborns. Although newborns have not yet developed enough psychologically to have made life-investments, and so lack a current stake in their future, they will eventually make the relevant life-investments, and they will eventually develop a stake in their life and future. If, instead of emphasizing what we currently have as grounds for prioritizing, we emphasize what we could have in the future, this may count in favor of saving newborns over young adults.
A newborn is a potential future young adult. To some extent, saving newborns involves saving future life-years, narrative selves, life-goals, life-projects, personhood – in short, everything that a group of young adults may have now. Thus, the crucial question is: to what extent should we now emphasize characteristics of individuals which they have yet to develop, but which they are likely to develop?
Both the present-looking and the future-looking approaches can be combined with badness of death approaches to priority setting in health care. The underlying assumption here is that how bad it is to die provides one kind of reason for why we should prevent an individual’s death. If newborns fail to receive life-saving treatment, one could claim that their death is particularly bad because the loss associated with their death is greater than the loss associated with the deaths of young adults. This intuition is captured by a Deprivation Account of the Badness of Death, according to which death is bad in virtue of what it deprives someone of. The death of newborns is bad for them because it deprives them of much good life.
On the other hand, some would say that the death of young adults is worse than the death of newborns, even if death deprives young adults of less good life compared with newborns. One reason for this is that young adults are psychologically more connected to their future than is the case for newborns. According to a Gradualist Account of the Badness of Death, both the future life lost and the extent to which the future matters to an individual are relevant to how bad an individual’s death is for that individual.
Gradualism will tend to favor actualism, whereas the Deprivation Account will tend to favor possibilism. Many neonatal deaths happen as a direct result of prematurity. Philosophically, we have not found strong reasons for arguing that premature newborn deaths should be measured as much larger tragedies than stillbirths. If an equal number of life years will be saved in both groups, then this will count in favor of young adults. On the other hand, if both groups will live until age 86, then the dilemma becomes a more difficult one. Proponents of Gradualism (of the badness of death) will typically argue that newborn life years should be discounted. Ultimately, the choice between newborns versus young adults will depend on the chosen discount rate.
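On illustrative numbers (assuming, as in the text, that members of both groups would otherwise live to 86, and taking the young adults to be 20), the break-even discount can be computed directly:

```python
LIFE_EXPECTANCY = 86
YOUNG_ADULT_AGE = 20  # illustrative assumption for "young adult"

newborn_years = LIFE_EXPECTANCY                         # 86 life-years at stake
young_adult_years = LIFE_EXPECTANCY - YOUNG_ADULT_AGE   # 66 life-years at stake

# A gradualist discount d on newborn life-years tips the scale at the point
# where d * 86 = 66. Below d ~ 0.77 the young adults win; above it, the
# newborns do.
parity_discount = young_adult_years / newborn_years
print(round(parity_discount, 3))  # → 0.767
```

This makes vivid how the dilemma's resolution hangs on a single normative parameter: any gradualist who discounts newborn life-years by more than about a quarter should favor the young adults.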
Young adults versus newborns, and cats versus kittens
Research Professor of Bioethics, Rutgers Center for Population-Level Bioethics
I have no idea how many newborn deaths are as intrinsically important to avoid as the deaths of 100 young adults, but like many others, I believe that it is many more than 100. What, if anything, justifies this belief?
I think that there are several considerations that speak in its favor, but I shall only discuss one. Suppose one were writing biographies of both an adult and a newborn who are facing death. One would obviously have a great deal more to say about the adult, but the difference would not just be quantitative. The adult is a fully developed human being with rational capacities, objectives, convictions, passions, a cultural identity, and a personality. His or her life is a story of a human life that is cut off by death. The newborn has none of these traits. Regardless of one’s view of its rights and the wrongness of killing newborns, the death of a newborn prevents a biography from beginning, rather than ending one.
During a seminar at CPLB, philosopher Shelly Kagan mentioned that he does not feel the same about whether the death of a young adult cat is worse than the death of a newborn kitten. I share Kagan’s intuition on this. Yet, of course, one can write a biography of a two-year-old cat, who has habits, likes and dislikes, memories, and abilities to plan that a newborn kitten does not have. Shelly’s point thus seems to undermine my account. Clearly, the view that there is something much worse about allowing a biography that is only half complete to end than allowing the death of a newborn must rest on the difference between the end of a human or “rational” biographical life and not beginning this kind of life.
If there is some special value attached to a life attainable by most adult human beings, then one can invoke the asymmetry between interrupting such a life and not beginning it to explain why the death of a newborn is not as bad as the death of a young adult. But the account would suggest that if there is value in a cat’s life, as there surely is, then there should also be a difference – albeit a smaller one – between the end of a cat’s developed life and a newborn kitten not starting one.
My cat—a true story
Director, Center for Population-Level Bioethics
In my second-floor apartment, my adult cat always seemed somewhat down, though with a life worth living. When I moved to the ground floor, I started to let him go outside daily. The cat became lively and playful. It was evident that daily roaming of the great outdoors increased his quality of life substantially.
The veterinarian disapproved because outdoors, cars and predators often inflict sudden death. (She also pointed out that my cat would terrorize birds—set that aside.) That perplexed me. My strong intuition remains that, for the cat’s own good, ongoing fun matters far more than having a long life. Running around, smelling grass and flowers, hopping from one surface to another, and behaving like a tiger are all sources of appreciable joy for a being with my cat’s mental faculties. Dragging out a dull life for longer matters less.
Death is not nothing to us; but there is an important way in which sudden death is nothing to a cat. A fairly sudden death with only a moment’s pain is almost nothing to a cat. Life quality is what matters in an entity with my normal adult cat’s mental faculties. My intuition, at least, is that to offer the cat a year of excitement is the pro-cat thing to do, more than to offer 10 years of tolerable boredom—even if the latter includes more hedons overall (“hedons,” from “hedonistic,” is philosophese for pleasurable experiences minus displeasurable ones.) Perhaps in assessing what would best promote a cat’s interests, we should not be aggregating hedons over different years at all, but instead consider the average utility per period. Crucially, these thoughts were about what would serve the cat’s own good—regardless of his moral status and rights and the tradeoffs with other beings’ rights.
A human newborn, fetus or embryo has many mental and other potentials for future development that an adult cat lacks. But the actualized mental faculties of all these human entities are on a par with, or lesser than, the adult cat’s.
If what I felt about my cat is sensible and applies to human newborns, fetuses, or early embryos, then other things matter more to these human entities’ good than continuing to exist. Most centrally, inasmuch as the newborn or fetus has concurrent experiences like pain and pleasure, then, at least inasmuch as that entity has full moral status and no other entity’s interests conflict, we should prioritize action on these experiential “hedons”. Accordingly, programs for the treatment and prevention of fetal pain matter greatly. Pain is not nothing to a fetus or a newborn. It matters here and now, even when it has no impact either on survival or on the health and flourishing of any person whom this human entity might one day become. Surprisingly, however, promoting the entity’s own good scarcely requires preventing its demise, by analogy with my cat story above. Not so for a young adult human, for whom death is an incontrovertible tragedy. Therefore, when preventing stillbirth conflicts with preventing the death of a young adult, as it does in our dilemma, the latter easily wins.
Thus, applying my intuition would tend to support funding interventions that avert young adult deaths when the opportunity cost is funding interventions that would avert similar numbers of stillbirths and neonatal deaths, other things being equal. While either life could be as important to preserve for the person once he or she is a person, fetuses and newborns are not there yet, and their demise would arguably be a smaller tragedy for them at this point than it would be for young adults, including the young adults whom these fetuses and newborns might one day become by surviving now.
It is true that the death of any human entity, tiny or grown up, usually means a lot to its near and dear. But it is unclear that impact on third parties should inform health policy. Do unpopular people get fewer rights to healthcare? What I say here pertains only to the inherent tragedy and corollary reasons to prevent death.
Applying my intuition to cats and early humans may affect additional matters:
1. “Deprivation” theories of death: The badness of death is regularly thought to include, at least among other things, the “deprivation” of the dying entity of all the good things that a longer life would have included. But in my intuition about my cat, that is not (a significant) part of what makes death bad. So deprivation of future hedons might not always make death (significantly) worse for the decedent.
2. The abortion debate: This revolves mainly around the (in)existence and force of the fetus’s rights compared to those of the pregnant woman. But it also matters how much continued existence, which is clearly good for adults with a life worth living, is at all important for a fetus likely to develop a life worth living. Abortion may remain a woman’s right, among other things on the ground that ceasing to exist before one turns into a person is not very bad for a fetus. This somewhat recalls the view that we have a duty to make people happy but not to make happy people. (Again: by contrast, pain in a fetus and in a woman who carries it may weigh similarly, suggesting that their statuses are in some respects similar, and supporting some pro-life moves.)
3. The “time-relative interest” theory: An alternate account of why a fetus lacks an interest in the existence of the adult person whom the fetus would become is that the fetus and the adult person are psychologically so different that they are different entities, who need not care much for each other’s survival. This alternate account may be redundant. An adult cat presumably possesses close psychological continuity with its own future self, and is definitely the same cat as that later cat; yet if my intuition about my own cat was right, painless death remains nothing to an adult cat. Whatever makes death nothing to an adult cat may obtain in the case of the fetus as well. That would account for my intuition, and potentially for why a fetus lacks an interest in the existence of the adult person whom the fetus would become if it survives, without invoking psychological discontinuity.
4. The importance of a sense of history: I am inclined to think that what makes painless death (nearly) nothing to a cat (and, by implication, to very early humans) is primarily their lack of thoughts about, and aspirations for, their own future selves, which death would thwart. (Inasmuch as that is psychological discontinuity with the future self, psychological continuity matters.) If that is correct, it lends some support to the idea that such thoughts and aspirations for the future are key to the badness of death.
5. Calculation of the global burden of disease: For calculations of disease burden, whether stillbirths from a disease contribute to the life years lost from it does not depend on how much death is “harmful” to fetuses. The global burden of disease purports to assess not the harmfulness of disease, but how much it detracts from health—by which is meant, from the health of the beings whose health comprises global health. A human disease that spreads to cats and kills them does not thereby increase in burden. So what matters to calculating disease burden is whether fetal health counts as part of global health—an independent question.