
Dilemmas

Should governments delay the second dose of COVID vaccines, administer lower dosages, or otherwise depart from protocol in order to vaccinate more people earlier?



Overview of the dilemma

Nir Eyal, Henry Rutgers Professor of Bioethics, Rutgers Center for Population-Level Bioethics

The three vaccines for COVID-19 authorized for emergency use in some Western countries as of mid-January 2021—by Moderna, Pfizer (and BioNTech), and AstraZeneca—were tested for two doses, 3-4 weeks apart. In recent weeks, however, several options for lower dosing, spaced vaccinations, or mixing and matching different vaccines have been discussed:

I. One and a half doses: By mistake, AstraZeneca tested on some participants a regimen of 1.5 doses instead of 2, and these participants did even better than other groups, constituting very limited evidence in support of an unconventional 1.5-dose regimen.

II. A single dose in a single jab: Some suggested experimenting with giving only one dose of the Pfizer and Moderna vaccines in order to save more doses for others, on grounds that in trials of both vaccines the sharpest drop in disease in the vaccinated group started before active-arm participants received the second dose. Others suggested that rolling out single doses is justified even without further experimentation, because of the expected public health windfall from vaccinating more people earlier, and because some unconventional evidence already exists that even a single dose probably works; in their view, this is sufficient evidence to warrant the “gamble”.

III. Spacing out: The UK’s Chief Medical Officers decided to lengthen to 12 weeks the interval between doses of the Pfizer and AstraZeneca vaccines, which had been authorized for use in the UK with 3-4 weeks between doses. The main goal was to reserve doses for vaccinating more people earlier. In the US, many opponents warned about the dearth of trial evidence in its support. But on an assumption of 52% efficacy after a single dose, modeling suggested that giving the first shot to more people, rather than keeping enough to ensure a second dose at 3-4 weeks, would prevent more cases of COVID-19 (a minimal sketch of this allocation arithmetic appears after this list). And President Biden announced that he would immediately deliver all doses available (that is, if any are), which would mean spacing out doses. The UK is now moving fast on immunizing the population and claims to have evidence of clinical benefit from even further spacing of the AstraZeneca vaccine. Israel, however, is reporting that the first dose of the Pfizer vaccine is less effective in the field than one might have hoped.

IV. A single dose divided into two half-dose jabs: Some American experts are excited about the option of distributing Moderna’s vaccine in two half doses instead of two full ones. There are some data to support it, according to senior US proponents, though others oppose this option as well. While a lower dose may or may not enable more vaccine to be distributed early on, it would definitely leave more vaccine for others.

V. A single dose of one vaccine followed by one of another vaccine: While this wouldn’t reduce by much the number of doses needed, it should resolve logistical issues for a second vaccination when the original vaccine is locally unavailable. Yet the vaccines were tested with two doses of a single vaccine.
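As referenced under option III, here is a minimal sketch, in Python, of the allocation arithmetic behind the modeling cited there. The 52% single-dose efficacy is the assumption from that debate; the dose stock and everything else are arbitrary illustrations, not trial estimates.

```python
# Minimal sketch of the dose-allocation comparison in option III.
# The 52% single-dose efficacy is the assumption cited in the UK debate;
# the dose stock is an arbitrary illustration, not a trial estimate.

def cases_prevented(doses: int, e1: float, e2: float, spaced: bool) -> float:
    """Expected cases prevented per unit of baseline risk, for a fixed dose stock.

    spaced=True  -> every dose goes to a new person (one dose each)
    spaced=False -> half as many people, each receiving the full two doses
    """
    return doses * e1 if spaced else (doses // 2) * e2

DOSES = 1_000_000
E1, E2 = 0.52, 0.95  # assumed one-dose and two-dose efficacy

print(f"space out: {cases_prevented(DOSES, E1, E2, spaced=True):,.0f}")   # -> 520,000
print(f"protocol:  {cases_prevented(DOSES, E1, E2, spaced=False):,.0f}")  # -> 475,000
# On these assumptions spacing out prevents more cases; the ranking flips
# whenever one-dose efficacy falls below half of two-dose efficacy (0.475 here).
```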

Many crucial factual questions remain open. What is the likeliest efficacy of the various dosing and spacing approaches? What are the not-very-unlikely worst-case scenarios for each (is undermining public trust, or creating vaccine resistance, likely? Does sheer discussion of these options already confuse the public and undermine trust?), and how bad would they be? There are some conflicting signals even in the evidence we have. For example, in the Moderna trial, there were fewer cases in the active arm than in the control arm a mere two weeks after the first vaccination; indeed, some commentators added, “Because [usually] we do not expect a protective immune response in the initial 14 days after immunization, this suggests that once immune response is more mature, the efficacy of a single dose may be higher”. However, in early testing, the vaccines had seemed quite inefficacious after a single dose—their promise of efficacy emerged only after two doses.

There are also questions in philosophy of science and epistemology. How should we understand the likelihood that a regimen that wasn’t tried as such will work, fail to work, or cause harm? Can we put a number on those chances?

Moral questions also surface.

1. Protecting population health vs. protecting individual health: In a pandemic, many accept that health authorities should generally prioritize population needs over those of individuals. But some may doubt that it is ever OK to give a patient less certainty of any effect whatsoever from an intervention in their body, pandemic notwithstanding.

2. Maximizing beneficiaries vs. maximizing fairness and protection of those at high priority: An egalitarian (pro-equality) argument for option II above was that “providing effective protection for as many people as soon as possible is more ethical because it distributes the scarce commodity more justly.” An argument for option III was fairness to priority groups, which would often serve equality as well: “In terms of protecting priority groups, a model where we can vaccinate twice the number of people in the next 2 to 3 months is obviously much more preferable in public health terms than one where we vaccinate half the number but with only slightly greater protection.” But what if a major effect of these diversions from maximal protection of vaccine recipients is to significantly lower the protection conferred on the initial recipients, who are usually in priority groups? Would that suboptimal protection of those with the strongest entitlement and, sometimes, the strongest moral claim to protection be sufficiently justified by the earlier vaccination of more members of lower-priority groups?

3. Maximizing human health vs. maximizing rich-country population health: In the US, the manufacturers, some serious experts, and the FDA remain skeptical of any departure from authorized protocols, while other serious experts support such departures, which they think are likelier to promote US public health. But options I and IV above, compared to the authorized regimen, would leave far more doses within manufacturers’ production capacity available for other nations. That alone could free up (for purchase or donation) enough doses to vaccinate Mexico, Central America, the Caribbean, and large swaths of Latin America, even without any COVAX or other incoming vaccines. Couldn’t this momentous humanitarian benefit break the tie on what is very best for US public health, and decide in favor of low-dose options?

FDA Emergency Use Authorization requires adherence to dose and dosing schedule
Eddy Bresnitz, Medical Advisor to the New Jersey Department of Health on the COVID-19 response

On January 31, 2020, the Secretary of the US Department of Health and Human Services declared a Public Health Emergency under section 319 of the Public Health Service Act in response to emerging COVID-19 infections in the US. This declaration allowed the US Food and Drug Administration (FDA) to issue an Emergency Use Authorization (EUA) to “…allow unapproved medical products or unapproved uses of approved medical products to be used in an emergency to diagnose, treat, or prevent serious or life-threatening diseases (…) when there are no adequate, approved, and available alternatives” and the benefits of the intervention outweigh the risks.

In December 2020, the FDA issued EUAs for two novel vaccines for the prevention of COVID-19. These EUAs were based on the agency’s thorough reviews of data from the pivotal trials, and on recommendations from its Vaccines and Related Biological Products Advisory Committee. The two vaccines were developed on a messenger RNA (mRNA) platform, a technology with no precedent among licensed vaccines. Both vaccines were tested as two-dose series, with doses separated by either 3 or 4 weeks. The pivotal trials indicated that, after the second dose, both vaccines had a vaccine efficacy (VE) of approximately 95%, with similar VE in various sub-groups (age, race, ethnicity, underlying medical conditions). The trials also showed that the vaccines caused significant reactions at the site of injection, such as pain or swelling, and systemic reactions such as fever, headache, muscle pain and fatigue; however, these effects were mild to moderate, lasting 1 to 3 days, and self-limited. Based on these findings, the FDA considered that the benefits of both vaccines outweighed the risks and issued the EUAs.
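For readers unfamiliar with how a figure like 95% is derived, the sketch below illustrates the standard vaccine-efficacy calculation (VE as the relative reduction in attack rate). The case counts are invented for illustration; they are not the actual data from either pivotal trial.

```python
# Illustration of the standard vaccine-efficacy calculation: VE = 1 - RR,
# the relative reduction in the attack rate. The counts below are invented,
# not the actual figures from either pivotal trial.

cases_vax, n_vax = 10, 20_000           # hypothetical active arm
cases_placebo, n_placebo = 200, 20_000  # hypothetical placebo arm

attack_vax = cases_vax / n_vax          # attack rate among the vaccinated
attack_placebo = cases_placebo / n_placebo  # attack rate on placebo

ve = 1 - attack_vax / attack_placebo
print(f"VE = {ve:.0%}")  # -> VE = 95%
```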

The EUAs require that healthcare providers use the vaccines as described in the authorizations. The salient requirement is that the vaccines be used in a two-dose regimen, with the second dose given 3 or 4 weeks after the first, depending on the vaccine. Health care providers are obligated to adhere to the requirements of the EUA. Following issuance of the EUAs, the Advisory Committee on Immunization Practices (ACIP) and the CDC issued guidance on interim clinical considerations for the use of mRNA vaccines based on the conditions of the EUAs.

A surge in the pandemic beginning in the fall of 2020, and increasing incidence of disease in 2021, motivated a discussion in the scientific literature and the media about changing the dosing regimen in order to more quickly vaccinate a larger share of the public with a single dose. This debate prompted the FDA to issue a statement expressing concern about changing vaccine regimens, given the available data: “Using a single dose regimen and/or administering less than the dose studied in the clinical trials without understanding the nature of the depth and duration of protection that it provides is concerning, as there is some indication that the depth of the immune response is associated with the duration of protection provided. If people do not truly know how protective a vaccine is, there is the potential for harm because they may assume that they are fully protected when they are not, and accordingly, alter their behavior to take unnecessary risks. (…) Until vaccine manufacturers have data and science supporting a change, we continue to strongly recommend that health care providers follow the FDA-authorized dosing schedule for each COVID-19 vaccine.”

At this point, neither the ACIP nor the CDC has issued recommendations or guidance on altering the dosing or schedule for administering these vaccines. Without FDA authorization, ACIP recommendations, and CDC guidance, states and health care providers are unlikely to recommend use of the vaccines outside of the requirements of the EUAs. Until vaccine manufacturers provide the FDA with additional data that would support altering the dose or schedule, the currently authorized emergency use of the vaccines is likely to remain unchanged.

Consistent messaging is key to our public health mission
Phyllis Tien, Professor of Medicine, UCSF

Phase 3 COVID-19 vaccine trials in the US are currently ongoing, and two vaccines have received an Emergency Use Authorization (EUA) from the FDA. The design of these Phase 3 trials was based upon careful review and analysis of data from Phase 1 and 2 trials that tested different vaccine doses for effectiveness as well as safety. We are now also in the midst of a COVID-19 surge that is worse than the one nearly a year ago, further compounded by increasing reports of mutated SARS-CoV-2 strains, possibly more infectious and transmissible, circulating in communities. As a result, distributing vaccines rapidly is of critical importance to public health, but dosing and dose scheduling should be based upon the available scientific data.

Consistent messaging regarding prevention efforts, including vaccine dosing, vaccine scheduling, mask-wearing and social distancing, is needed to maintain public trust. Mixed messaging in our national response to the pandemic has likely aggravated barriers to the COVID-19 vaccine roll-out, including vaccine hesitancy and fear of adverse effects from the vaccine among parts of the population. Still, many among us are eagerly awaiting vaccination in order to return to some normalcy. Until a significant proportion of our population is vaccinated, it remains critical to send a clear message that the benefits of the vaccine outweigh the risks and that, even for those vaccinated, precautions such as masking and social distancing must be adhered to.

On the bright side, with new vaccine candidates that could obtain an EUA by early spring, and the promise of consistent public health mandates to curb the US pandemic, we may be able both to accomplish our public health mission of distributing vaccines in a timely manner and to adhere to the available scientific data.

If changing the vaccination protocol stops the pandemic sooner, change it!
Dan Hausman, Research Professor of Bioethics, Rutgers Center for Population-Level Bioethics

Two goals govern policy for COVID-19 vaccination: saving lives and preventing other harms from COVID-19; and ending the economic fallout from the public health measures imposed to limit the spread of the virus. These goals are largely, but not perfectly, aligned. In an emergency such as the current pandemic, consequentialist reasoning comes to the fore. Although policies should (of course) avoid violating rights, the central ethical questions are factual questions: which vaccination policies stop the pandemic most rapidly without causing other untoward consequences of comparable importance?

If delaying the second dose or lowering dosages is ineffective at preventing disease, then clearly neither should be adopted. If these measures are just as effective at preventing disease as the current protocol, then the second dose should be delayed or the dosage lowered. This conclusion might be questioned, because the confusion and doubts caused by a change of protocol might wind up deterring people from being vaccinated and thereby prolonging the pandemic. This disastrous consequence is highly uncertain. In circumstances such as these, where the immediate positive effects of an action are certain and the harms are speculative, I think that one should proceed with the action.

The facts seem to be that a single dose or two half-doses of any of the three vaccines whose emergency use has been authorized provide less protection than the standard two doses; and it is unknown how quickly the protection provided by a single dose will fade or what effect a delayed second dose will have. Formal modeling can tell us the consequences of assumptions concerning relevant but unknown parameters, and sensitivity analysis can give us some confidence concerning the risks that changing the protocol will have bad effects. The wild card again is the damage that confusion and doubt may cause. I would make the same response: proceed with the action.
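To illustrate what such modeling and sensitivity analysis might look like in miniature, the toy Monte Carlo sketch below draws the unknown parameters from wide ranges and asks how often spacing out doses comes out ahead. The ranges are hypothetical placeholders chosen for illustration, not empirical estimates.

```python
# Toy sensitivity analysis: draw the unknown parameters from wide ranges and
# count how often a spaced-out policy prevents more cases than the protocol.
# All ranges are hypothetical placeholders, not empirical estimates.

import random

random.seed(0)
TWO_DOSE_EFFICACY = 0.95   # taken as known from the trials
DOSES = 1_000_000
N_SCENARIOS = 10_000

wins = 0
for _ in range(N_SCENARIOS):
    e1 = random.uniform(0.3, 0.9)       # unknown single-dose efficacy
    waning = random.uniform(0.0, 0.3)   # unknown fading while awaiting dose 2
    spaced = DOSES * e1 * (1 - waning)            # everyone gets one dose
    protocol = (DOSES // 2) * TWO_DOSE_EFFICACY   # half as many, fully dosed
    wins += spaced > protocol

print(f"spacing out wins in {wins / N_SCENARIOS:.0%} of simulated scenarios")
```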

Unless a single dose or two half doses turn out to provide poor and short-lived protection against disease and against transmission, those reluctant to be vaccinated will see their unvaccinated neighbors getting ill, unlike their vaccinated acquaintances. Will the qualms engendered by changes in protocols outweigh this persuasive experience?

If splitting doses and deferring the second dose have immediate effects in limiting disease and infection, then the change in protocol is warranted, even if there is a potential medium- or long-run risk of undermining confidence in vaccination. There is no issue of physicians violating their obligations to patients, because physicians are not making the dosing decisions; and no rights are being violated. So the policy question boils down to the empirical question of which policy stops the pandemic more rapidly. There is no way to know for sure; but, with thousands dying daily, let’s do whatever will help today and worry tomorrow about more speculative harms.

Can spacing out vaccination be justified to all?

Bastian Steuwer, Postdoctoral Associate, Rutgers Center for Population-Level Bioethics

Bottlenecks in distributing COVID-19 vaccines have led to a slower than hoped-for start of the vaccination program in many countries, including the United States. The United Kingdom has taken the unusual step of putting on hold the distribution of the second vaccine shot and using the available doses to vaccinate more people with a first dose.

The UK Chief Medical Officers’ rationale for taking this step was that doing so maximizes the number of people receiving vaccines, and thereby saves the most lives in the aggregate. In part, what underlies this thinking are contested scientific matters. Vaccine efficacy trials tested two-dose regimens. There are only preliminary and less reliable data from these trials showing efficacy from the first dose. The UK Chief Medical Officers estimate a level of protection of over 70 percent. Other writers have been more optimistic, citing 80 to 90 percent protection. Less optimistic data suggest that the Pfizer-BioNTech vaccine is 52 percent effective before the booster shot, although alternative statistical analyses may show it to be higher.
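Much of the gap between the 52 percent figure and the higher estimates comes down to when one starts counting cases. The sketch below is a toy calculation with invented daily case counts that merely mimic the reported pattern; it shows how the same trial can yield either kind of number depending on the counting window.

```python
# Why first-dose efficacy estimates diverge: it depends on when counting starts.
# The daily case counts below are invented to mimic the reported pattern; they
# are not data from any trial.

# (placebo cases, vaccine cases) per day for days 0-20 after the first dose;
# protection is assumed to kick in around day 10 in this toy example
daily = [(10, 10)] * 10 + [(10, 1)] * 11

def efficacy(start_day: int) -> float:
    placebo = sum(p for p, v in daily[start_day:])
    vaccine = sum(v for p, v in daily[start_day:])
    return 1 - vaccine / placebo

print(f"counting from day 0:  {efficacy(0):.0%}")   # -> 47%, diluted by early cases
print(f"counting from day 14: {efficacy(14):.0%}")  # -> 90%, post-onset window only
```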

The question is not, however, exclusively scientific. An optimistic answer to the scientific question raises an ethical question: is it ethically defensible to lower the prospects of some by failing to give them a booster shot in order to improve the prospects of others by giving them a first shot?

This is a question of population-level bioethics. We need to consider the health of everyone in society, instead of adopting the perspective of a clinician charged with the interest of their patient. One contrast in population-level bioethics that is helpful for reasoning about this dilemma is the contrast between aggregative and non-aggregative reasoning. Aggregative reasoning asks about the population-level effects, as in the rationale employed by the UK Chief Medical Officers: the overall number of lives saved would be higher, they reason, under a policy of spacing out vaccines. The overall amount of benefits in terms of lives saved justifies the lesser protection afforded to those who will not receive their second shot as planned. Non-aggregative reasoning, by contrast, asks whether a policy can be justified to each individual: instead of justifying a social decision by the aggregate effect, we need to ask whether any individual could object to the decision. One might think that from this perspective the UK’s decision is problematic. Could not a person who will not receive their second shot as planned object that they now have to live with less than optimal protection?

However, I want to suggest that there is a non-aggregative rationale for spacing out vaccine doses. Our current vaccine priority-setting is already following such an approach. It is informed largely by trying to identify individuals at highest risk to give the vaccine to them first. The idea is that those at high risk have a stronger claim to the vaccine than those at low risk.

Consider a simple model of distributing vaccines. We start by giving out doses, as they are produced, to persons at high risk. We continue this for three to four weeks, and then we face a choice: do we now give new vaccine doses to the originally vaccinated persons, thereby increasing their level of protection from the preliminary level to the full efficacy level? Or do we give the new doses to not-yet-vaccinated persons, thereby giving them some preliminary protection? The originally vaccinated persons should no longer be treated with the same initial priority. Their risk has already been reduced, and their claim to a further risk reduction will not be as strong as the initial claim they had when they were at higher risk. Perhaps we should treat a person aged 75 who has received one shot of the vaccine like a person aged 65 who has not been vaccinated yet.

Whether we should space out vaccine doses, then, depends in part on our overall speed of vaccination. Continuing with my simple model, if after three to four weeks everyone over the age of 65 is already vaccinated and the choice is between persons aged 65 and persons aged 75 who have been given the first vaccine shot, then spacing out achieves little. However, if even after three to four weeks there are still unvaccinated people left who are at higher risk, then spacing out appears more reasonable.
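To make the comparison of claims vivid, here is a minimal numerical rendering of this simple model. The risk figures are invented; only the structure of the comparison matters, not the particular values.

```python
# A toy rendering of the simple allocation model above. Risk figures are
# invented; the point is only to compare residual risks, not to estimate them.

FIRST_DOSE_PROTECTION = 0.7  # assumed preliminary protection after one dose

# hypothetical baseline risk of severe COVID, in arbitrary units
baseline_risk = {"75+": 10.0, "65-74": 4.0}

def residual_risk(group: str, has_first_dose: bool) -> float:
    protection = FIRST_DOSE_PROTECTION if has_first_dose else 0.0
    return baseline_risk[group] * (1 - protection)

print(f"75+ with one dose:   {residual_risk('75+', True):.1f}")     # -> 3.0
print(f"65-74 with no dose:  {residual_risk('65-74', False):.1f}")  # -> 4.0
# On these numbers, the unvaccinated 65-74-year-old now faces the higher
# residual risk, echoing the suggestion of treating a once-vaccinated
# 75-year-old like an unvaccinated 65-year-old.
```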

This simple model, like all simple models, leaves out many important considerations. It does not consider indirect effects on either vaccine hesitancy or vaccine resistance, and it depends on finding a reasonable estimate for the level of protection from the available data. What the model suggests, however, is that opposition to aggregative reasoning does not directly translate into opposing a policy of spacing out vaccine doses.

Now that some COVID vaccines have been authorized, can it be ethical to (continue to) test these and further COVID vaccines, and how?



Overview of the dilemma

Nir Eyal, Henry Rutgers Professor of Bioethics, Rutgers Center for Population-Level Bioethics

Several Western countries have now authorized use of the first few COVID-19 vaccines following placebo-controlled efficacy testing. More vaccines may be authorized in the next few weeks. But further COVID vaccine research, of the following types, remains necessary:

I. Continued/new studies of authorized or approved vaccines in the conventional regimen, e.g. to ascertain their impact on infection and infectiousness, the correlates and duration of vaccine protection, their success against new viral strains, their efficacy and safety in population groups excluded from the initial studies such as children and pregnant women, rates of rare complications, and impact outside the trial setting. The original trial results and the swabs collected from trial subjects before unblinding get at some of these questions, but there is room for more.

II. New studies of authorized or approved vaccines under new regimens (e.g. on spaced out dosing regimens, or half-doses—see our previous Dilemma).

III. New studies of new vaccines. New vaccines remain necessary should authorized vaccines turn out to have short-lived efficacy, or to protect recipients without reducing their infectiousness to others; and for areas of the world where authorized vaccines are impossible to store, deliver, or procure.

This necessary research could be a combination of (a) epidemiological observations, (b) collecting more samples in existing trials or even switching subjects between trial arms—a “blinded crossover”, (c) temporally controlled field trials (e.g. initiating a new trial that would compare people who receive the authorized vaccine early to ones who receive it later), (d) placebo-controlled field trials, (e) active-controlled field trials (e.g. comparing authorized vs. promising new vaccines), (f) immune-bridging studies, or (g) challenge trials.

Outcomes of interest could include (i) infection status and level; (ii) disease status and level; (iii) likely infectiousness status and level; (iv) immune response status and level (as in an immune-bridging study); (v) adverse events; (vi) any of the preceding outcomes among participants’ contacts.

Such studies could take place in (1) countries in which the approved vaccines are already being rolled out to some people (either among those people and/or their contacts, or in other people and/or their contacts), or in (2) global populations who will not have access to currently-approved vaccines anytime soon.

This Dilemma explores which combinations of object of study (I-III), research type (a-g), outcome type (i-vi), and study population type (1 and 2) might be ethically permissible.

Many bioethicists would consider it unethical to give placebo to a control group when a known safe and effective vaccine exists. That is worse for their prospects and, normally, for those of their contacts. These bioethicists would especially object when the vaccine being tested has already been approved or authorized. They would be furious if the participants put on placebo would otherwise have access to the tried and tested vaccine outside the trial. But not all bioethicists consider placebo control unethical under these circumstances. And there may be a way around some of the ethical complications here. In particular, for a limited period, some in rich nations where vaccines are rolling out will lack access (e.g. young and healthy people who are not considered frontline or essential workers), such that their immediate prospects will not be worsened by being in a trial. Can very short trials in these populations yield useful results?

Similarly, the prospects of those who would not have access to the proven vaccine outside of a trial (say, populations in less-developed countries who will not get the vaccines for some time) would not be worsened by placebo-controlled trial participation. These participants’ own nations may stand to benefit greatly from the development of vaccines that are easier for them to procure or deliver. In the past, a WHO group on placebo-controlled vaccine studies in developing countries noted a number of conditions that may justify use of placebo for vaccines already known to be safe and efficacious. Still, is it ethical to rely on these populations’ lack of access to the tested vaccines, when that lack of access results from rich, vaccine-producing nations’ having hoarded or outbid potential participants’ nations? Does it matter who does the testing, what nations are likely to use the product, and whether post-trial access is guaranteed to participants and their fellow citizens?

And is the ethical challenge limited to placebo control? Any controlled study compares different options, and once one option is authorized, some of its participants will have to be assigned to an unauthorized option—often, one that remains unauthorized for the person’s own protection.

We can probably get some useful data out of careful observations, and much useful data from immune-bridging studies and challenge trials, but this discussion will focus especially on the questions above.

Nir Eyal’s work on this issue was supported by an award from the National Science Foundation (NSF 2039320).

The ethics of continuing trials: does the data justify the risks?

David Wendler, Senior investigator, Department of Bioethics, NIH Clinical Center

Several vaccines for COVID-19 have been found safe and highly efficacious and are now being made available to select groups through emergency use authorizations (EUAs) and other mechanisms. At the same time, there is still significant value to continuing current trials and testing additional vaccine candidates, raising the question of whether and to what extent it is acceptable to give research participants unproven vaccines and placebos after identification of ones that are safe and efficacious.

Some commentators argue that clinical trials are ethically acceptable only as long as there is insufficient evidence that the intervention offered in one arm is superior to what is offered in another arm, or to what is available outside the trial. This view implies that it would be unethical to continue placebo-controlled trials given the findings of efficacy. It also implies it would be unethical to test other unproven vaccine candidates. This view fails to recognize that the obligations researchers have to their participants are distinct from the obligations that clinicians have to their patients.

Codes and guidelines around the world permit researchers to expose participants in clinical trials, including vaccine trials, to some risks to collect socially valuable data that cannot be obtained in a less risky way. These guidelines reveal that researchers are not obligated to provide placebo recipients with a safe and efficacious vaccine once one has been identified. Instead, researchers are obligated to ensure that any plans to conduct placebo-controlled trials remain ethically appropriate given current evidence.

Continuing a trial after the vaccine candidate has been found to be safe and efficacious can provide an opportunity to collect several types of socially valuable data. Of greatest importance, continuing trials can provide a more reliable and more precise point estimate of the vaccine’s efficacy and offer an opportunity to collect additional safety data, including data on any uncommon or delayed side effects. Continuing trials can also help to assess how long the vaccine’s protective effect lasts; offer insight into the vaccine’s impact in various subgroups, such as older individuals or those with comorbidities; and evaluate whether the vaccine candidate protects against infection itself.

Once a vaccine candidate is found to be efficacious, participants in the placebo arm of that trial are known to be at higher risk of symptomatic disease than the participants in the active arm of the trial. How much higher depends on the chances that participants in the placebo arm will become infected, the risks they face if they are, and how much protection the efficacious vaccine offers. The chances that participants in the placebo arm will be infected depend on the local transmission rate, preventive measures they adopt, and the amount of time they remain on placebo. When participants are on placebo for a short time, the chances of infection are correspondingly low. Remaining on placebo for a few weeks, rather than accessing an efficacious vaccine, poses a low chance of substantial harm. Continuing on placebo for even longer periods also poses a low chance of substantial harm to individuals at low risk for severe disease.
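This multiplicative dependence can be put into rough arithmetic. In the sketch below, every input is a hypothetical placeholder; the point is how the factors combine, not the particular values.

```python
# Rough arithmetic behind the risk of remaining on placebo: the excess chance
# of severe disease, relative to being vaccinated, under a crude independence
# model. Every input is a hypothetical placeholder, not an empirical estimate.

def excess_risk(weekly_infection_risk: float, weeks_on_placebo: int,
                p_severe_if_infected: float, vaccine_efficacy: float) -> float:
    p_infected = 1 - (1 - weekly_infection_risk) ** weeks_on_placebo
    return p_infected * p_severe_if_infected * vaccine_efficacy

# low-risk participant, short stay on placebo
print(f"{excess_risk(0.005, 4, 0.01, 0.95):.5f}")   # ~0.0002 (about 2 in 10,000)
# high-risk participant, extended stay on placebo
print(f"{excess_risk(0.005, 26, 0.20, 0.95):.5f}")  # ~0.0232 (about 2 in 100)
```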

Remaining on placebo for an extended period can pose considerable risks to individuals at high risk of severe disease. The extent of these risks depends critically on what options are available to them. In the setting of few effective treatments and potentially strained hospital systems, receiving placebo for an extended period rather than a safe and efficacious vaccine can pose substantial risks. However, if high risk individuals would not have access to a safe and efficacious vaccine outside of research— for example, when there is only enough supply for the trial or when they are not part of a prioritized group that will receive the vaccine during the time of the trial—receiving placebo in a clinical trial poses few additional risks to them.

There is no algorithm for determining how much social value a given clinical trial has and whether its social value justifies the risks participants face. As a result, IRBs tend to focus on ensuring that a trial has the potential to collect important data and that the risks of substantial harm are low. Trials with the potential to collect data helpful for addressing a global pandemic have considerable social value. Inviting competent adults to participate in such trials can be ethical when doing so poses a small increase in their risk of experiencing substantial harm. This suggests that it can be ethically acceptable to continue a placebo-controlled trial for a short period after the vaccine candidate has been found to be safe and efficacious, even when participants might be able to access the vaccine candidate outside the trial, for example, through an EUA.

By contrast, if continuing the trial does not offer the opportunity to collect socially valuable data, or comparable data can be obtained in less risky ways, continuing the trial with a placebo arm for any length of time would be ethically problematic. Inviting participants who are at low risk of severe disease to remain blinded and stay in the trial for a longer period can be acceptable when it offers the potential to collect data that might be helpful for addressing the pandemic. In most cases, continuing a blinded, placebo-controlled design with high-risk individuals for longer periods will not yield data of sufficient value to justify it. Exceptions might include when the individuals cannot access an efficacious vaccine outside the trial and their participation is needed to collect valuable data, or they are in a group for whom no efficacious vaccine candidate has been identified.

Otherwise, individuals at high risk of severe disease should be unblinded and those on the placebo arm offered the vaccine within a redesigned study or given the opportunity to seek the vaccine outside the trial. When the value of the data to be collected does not justify the risks of continuing the trial as designed, researchers have several options. They can unblind participants, offer placebo recipients the vaccine, possibly as part of an expanded access program, and follow them to collect additional data. Alternatively, researchers might redesign the trial, for example, to include a crossover in which the blind is maintained and those on the placebo arm receive the vaccine after they complete the placebo arm. Finally, in some cases, it may make sense to simply stop the trial and unblind participants, thus allowing those in the placebo arm to seek the vaccine elsewhere.

Let’s distribute the “standard of prevention” equitably before testing new vaccines

Rieke van der Graaf, Associate Professor, University Medical Center Utrecht, Julius Center for Health Sciences and Primary Care, Department of Medical Humanities, Netherlands

To answer the question whether it can be ethical to continue to test further COVID vaccines now that some COVID vaccines have been authorized, it may first be helpful to look at relevant international ethical guidance documents. For example, the CIOMS guidelines (2016) set out that

As a general rule, the research ethics committee must ensure that research participants in the control group of a trial of a diagnostic, therapeutic, or preventive intervention receive an established effective intervention. Placebo may be used as a comparator when there is no established effective intervention for the condition under study, or when placebo is added on to an established effective intervention.

When there is an established effective intervention, placebo may be used as a comparator without providing the established effective intervention to participants only if:

- there are compelling scientific reasons for using placebo; and

- delaying or withholding the established effective intervention will result in no more than a minor increase above minimal risk to the participant and risks are minimized, including through the use of effective mitigation procedures.

The CIOMS guidelines also explain that “established effective interventions may need further testing, especially when their merits are subject to reasonable disagreement among medical professionals and other knowledgeable persons” and that in some cases this may include testing against placebo. At the time of this writing, the Pfizer and Moderna vaccines are authorised for Emergency Use by the FDA in the United States and by the European Commission, following evaluation by the EMA, to prevent COVID in the United States and the European Union, respectively. In the EU, the AstraZeneca vaccine has also been authorised. The FDA found no specific safety concerns and determined, for one of them, that “the vaccine was 95% effective in preventing COVID occurring at least 7 days after the second dose”. CIOMS defines an established effective intervention as follows: “an established effective intervention for the condition under study exists when it is part of the medical professional standard. The professional standard includes, but is not limited to, the best proven intervention for treating, diagnosing or preventing the given condition.” Given the absence of safety concerns, the high effectiveness of these vaccines and the fact that they have been authorised in the EU and the US for prevention of COVID, these vaccines seem to fall into the category of an established effective preventive method.

At the same time, there can be legitimate reasons to do further testing despite this standard, because there are still many uncertainties, as set out in the overview of this dilemma. Whether this testing can be done in the form of randomization is a further question. In the short term there may be participants who are not yet eligible for vaccination outside the trial. But in the longer term, there will be a tipping point where vaccination through the regular national health program provides people with an established effective vaccine earlier than with the experimental vaccine. Researchers, sponsors and research ethics committees should be sensitive to that moment while approving new trials. Moreover, at some point, the world will regard the now-authorised vaccines in the EU and the US as part of the so-called standard of prevention package: a term used in discussions of HIV prevention methods, designating the comprehensive package of methods to prevent HIV, including condoms, and pre- and post-exposure prophylaxis, which are approved for clinical use (see UNAIDS, Van der Graaf et al. and Singh). In HIV prevention trials all participants (both in the experimental and control arm) must receive access to this package that is recommended by WHO. It is reasonable to assume that for COVID a similar package of preventive methods will come into existence that is recommended by an organization such as WHO. This package then may consist of a range of preventive methods running from hand hygiene and facial protection to vaccines. This package may provide participants with more protection, while making it more complex to start new trials for preventive methods, not only for vaccines, but also for other preventive methods such as monoclonal antibodies when used as a prevention strategy. This dilemma is well known within the field of HIV prevention.

Another question is whether it is ethical to develop and test new vaccines in low-resource settings that do not have access to the vaccines available in the US and the EU. The CIOMS guidelines recognize that there is a dilemma when placebo-controlled trials are proposed in a low-resource setting where an established effective intervention cannot be made available for economic or logistic reasons:

In some cases, an established effective intervention for the condition under study exists, but for economic or logistic reasons this intervention may not be possible to implement or made available in the country where the study is conducted. In this situation, a trial may seek to develop an intervention that could be made available, given the finances and infrastructure of the country (for example, a shorter or less complex course of treatment for a disease). This can involve testing an intervention that is expected or even known to be inferior to the established effective intervention, but may nonetheless be the only feasible or cost-effective and beneficial option in the circumstances. Considerable controversy exists in this situation regarding which trial design is both ethically acceptable and necessary to address the research question. Some argue that such studies should be conducted with a non-inferiority design that compares the study intervention with an established effective method. Others argue that a superiority design using a placebo can be acceptable.

The use of placebo controls in these situations is ethically controversial for several reasons:

1. Researchers and sponsors knowingly withhold an established effective intervention from participants in the control arm. However, when researchers and sponsors are in a position to provide an intervention that would prevent or treat a serious disease, it is difficult to see why they are under no obligation to provide it. They could design the trial as an equivalency trial to determine whether the experimental intervention is as good or almost as good as the established effective intervention.

2. Some argue that it is not necessary to conduct clinical trials in populations in low-resource settings in order to develop affordable interventions that are substandard compared to the available interventions in other countries. Instead, they argue that drug prices for established treatments should be negotiated and increased funding from international agencies should be sought.

When controversial placebo-controlled trials are planned, research ethics committees in the host country must:

1. seek expert opinion, if not available within the committee, as to whether use of placebo may lead to results that are responsive to the needs or priorities of the host country…; and

2. ascertain whether arrangements have been made for the transition to care after research for study participants…, including post-trial arrangements for implementing any positive trial results, taking into consideration the regulatory and health care policy framework in the country.

 
The particular dilemma for COVID may be that vaccines that are already known to be less safe and effective than those available in the EU and the US are, for logistical reasons, proposed to be tested against placebo. On the one hand, as long as it is reasonable to assume that these trials may lead to a vaccine that is easier to scale up than existing ones and so help to stop the pandemic in these settings, this may be an argument in favour of starting these trials. On the other hand, what currently seems to make the start of new vaccine trials for local production in low-resource settings impermissible is that 172 countries have made agreements by means of COVAX to secure “2 billion doses from five producers, with options on more than 1 billion more doses” (see WHO). These doses have not been delivered yet, but the aim is to make them available before the end of 2021. Before considering and approving further trials in resource-poor settings, research ethics committees, researchers, sponsors, manufacturers, national health authorities, regulators and others should consider whether more can be done first to ensure global equitable access to existing COVID vaccines through COVAX.

A more favorable view of (some) trials in developing countries

Brian Berkey, Assistant Professor of Legal Studies and Business Ethics, Wharton School, University of Pennsylvania

Despite the fact that we now have several authorized COVID vaccines, continued research remains necessary. Some of the trials that could provide valuable information require that at least some participants don’t receive one of the authorized vaccines during the trial period. Consider, for example, trials involving new vaccines. These trials are important because new vaccines could have important advantages in comparison with those already approved. They might, for example, provide greater protection against emerging variants of the virus, or be storable at temperatures that would make them easier to distribute in developing countries.

Other trials that could provide valuable information require that participants receive altered regimens of authorized vaccines (e.g. two half-doses instead of two full doses). If these trials were to show that an altered regimen involving less vaccine per person is roughly as effective as the approved regimens, this could allow for quicker vaccination of the global population.

Wealthy countries have procured most of the current supply of the authorized vaccines, and can be expected to control most of it for some time. In addition, these countries are already in the process of vaccinating their populations using the approved regimens. Because of these facts, the prospects for trials of new vaccines and altered regimens of approved ones to be effectively carried out may be greatest in developing countries, where citizens will likely not have access to the authorized vaccines and regimens for some time.

Since there are powerful reasons to think that it’s unjust that citizens of wealthy countries have access to the authorized vaccines long before those in poorer countries will, there are grounds to worry that if trials are conducted in poorer countries while those in richer countries are being vaccinated using the approved regimens, those conducting the trials would be wrongfully exploiting the participants. Some would claim that this is the case even if the participants give informed consent, face limited risks, and may benefit significantly from their participation. One argument for this conclusion relies on the claim that it’s objectionable to take advantage of those who are vulnerable, at least if their vulnerability is the result of injustice. Proponents of this argument hold that taking advantage of unjust vulnerabilities constitutes wrongful exploitation.

I think that the charge of wrongful exploitation would be correct in some cases. Perhaps the most obvious are cases in which trials in developing countries are run by, and stand to benefit, agents that are among those responsible for or benefitting from the injustice in access to authorized or approved vaccines and regimens that makes those countries especially suitable sites for further trials. Consider, for example, the governments of wealthy countries, which, in my view, have acted unjustly by procuring the bulk of current and near-future vaccine supplies for their own citizens, rather than allowing for a more equitable distribution. If these governments were to fund studies in poorer countries with the aim of using the knowledge gained primarily to further benefit their own citizens, this would constitute wrongful exploitation. But importantly, in my view this is primarily because the governments of wealthy countries have an independent obligation to contribute to ensuring an equitable distribution of approved vaccines globally. Instead of funding these trials, they should be funding greater provision of approved vaccines to more of the poor around the world, and seeking to promote further trials that distribute both the risks and potential benefits more justly among the global population. If wealthy country governments did not have these obligations, and the trials could reasonably be expected to benefit the participants and others in the countries in which they might take place, it is harder to see on what basis we might object to them.

Because of this, I think that we should have a more favorable view toward at least some trials that could be run in developing countries, despite the fact that it may only be because of unjust disparities between richer and poorer countries in access to the approved vaccines and regimens that they’re possible. Consider, for example, a pharmaceutical company that hasn’t yet produced an authorized or approved vaccine, but has a promising candidate that requires further testing. If developing countries had access to authorized or approved vaccines comparable to what wealthy countries enjoy, running trials in developing countries might not have the prospect of generating results that would be as informative. In a sense, then, if the company runs trials in developing countries, it would be taking advantage of the fact that participants unjustly lack equitable access to approved vaccines.

But since it’s at least plausible that companies that don’t yet have an authorized or approved vaccine aren’t obligated to contribute directly to the equitable distribution of other companies’ authorized or approved vaccines, there’s reason to think that their running trials for promising candidates wouldn’t be wrong. So long as familiar obligations such as securing informed consent and ensuring the safety of participants as much as possible are met, the fact that participants would improve their prospects by taking part, in comparison with the status quo, provides sufficient grounds for preferring that these trials take place.

A more difficult case to assess is one in which a company that’s produced an authorized or approved vaccine intends to test alternative regimens in developing countries that currently lack equitable access to supplies of that vaccine. If the company is obligated to contribute substantially to providing equitable access (by, for example, reserving some of the existing supply and selling it to poorer countries at a discount), but is failing to meet that obligation, then running such trials is wrongfully exploitative. Even if that is the correct conclusion, however, we likely still have reason to prefer, from a moral perspective, that the trials are run rather than not. After all, as long as participants would improve their prospects by taking part, failing to run them would leave their unjust disadvantages entirely unaddressed, rather than mitigated at least a bit. This means that if there’s nothing that can be done to get the relevant companies to satisfy their obligation to promote equitable access to approved vaccines, we shouldn’t attempt to stand in the way of their running trials that would benefit unjustly disadvantaged participants.
 

Unsatisfactory justifications for COVID trials in developing countries

Monica Magalhaes, Program Manager, Center for Population-Level Bioethics, Rutgers University

In developed countries that were able to buy up the first authorized COVID vaccines early, a complication is arising for continuing COVID vaccine research. As highly efficacious vaccines are rolled out to the general population, some vaccine trial participants and potential participants now have their health prospects lowered by participating in controlled studies. Participants are dropping out of studies to get vaccinated as they become eligible, at the expense of the quality of the data and the knowledge that would be gained from these studies.

One apparent solution for this complication is to conduct any further COVID vaccine trials in developing countries where vaccination prospects for the vast majority of the population will remain low for the foreseeable future. Where no-one has access to the vaccine outside a trial, no-one’s prospects of accessing a vaccine are worsened by participating in a trial. The concern about this option, as put in the overview of this dilemma, is that this justification relies on much of the world’s population lacking access to the same vaccines that are, or will soon be, widely available for the minority living in richer nations. That seems to be unjust, or at least exploitative of an injustice.

As with any disease, someone(s) will have to be in the studies that will continue to advance COVID prevention and treatment after the first line of prevention and therapy is found and made available. Studies that withhold or withdraw proven interventions to test experimental interventions raise particular ethical concerns, but they are not unique to COVID—as Rieke van der Graaf explains in this dilemma, this ethical territory has been trod before and we have guidelines and years of debate in research ethics to show for it. And yet, the fact that someone has to do it does not seem like a satisfactory justification for the fact that these someones will predictably be in developing countries where access to vaccine will be unjustly slower to arrive.

One possible way to justify this predictable outcome is to argue that, even though background inequalities are unjust, trials in developing countries are justified by the individual benefits to participants whose health prospects are increased by participating in the trial; and by the societal benefits of improved prospects for the participants’ compatriots’ access to vaccines, compared to what their prospects would be had the country not hosted trials. This seems unsatisfactory too, because these societal benefits will go only as far as the trial sponsors’ post-trial obligations or agreements extend (a simple obligation to provide the vaccine to all trial participants would not do much for the country), and only as far as these obligations or agreements are enforced or lived up to. Developing countries are rarely well-placed to demand or enforce strict obligations from large corporations based in developed countries, or from developed countries themselves; and hosting a trial has yet to catapult a poor country towards the front of the line.

Another way to soothe worries about relying on background inequities in access to vaccines is to appeal to the expectation that trial findings would benefit mainly poorer countries and their populations—for example, in trials seeking to establish safety and effectiveness of new vaccines that are cheaper or easier to store, transport, or administer. But this too is only persuasive up to a point: the benefit from discovering alternative vaccines will accrue to the entire world, as lower costs and easier logistics would help even the richest of countries to get their populations vaccinated sooner and faster. While it is true that developing countries need these benefits more, that rationale itself relies on developing countries’ lower level of resources for health, health personnel, and infrastructure. This line of thought should refocus, rather than appease, our equity concerns.

As vaccines start to roll out, we all watch as the gap between rich and poor countries predictably widens. A “catastrophic moral failure” results from institutions that enable vaccine nationalism by countries that can pay the highest prices, to the exclusion of much of the world. We ought to remain uneasy about relying on those without access to vaccines to participate in future COVID-related research. Globally fair distribution of the risks, burdens and benefits of the COVID research that remains to be done requires globally fair distribution of effective vaccines and interventions.
 

Vaccine trials in the developing world, exploitation, and post-trial responsibilities

Daniel Wang, Associate Professor, Fundação Getúlio Vargas School of Law, Brazil

By joining a placebo-controlled COVID vaccine trial in a vaccine-deprived developing nation, individual participants will not lose anything that they would have received had they not joined the trial. Nobody is made worse off by participating in such research. Indeed, some or all will gain. Those who participate will have at least the possibility of being vaccinated effectively against the disease. In addition, every participant (including those in the placebo arm) will usually benefit from additional tests (which are usually more beneficial than burdensome) and (if the trial is otherwise conducted ethically) from optimal care during and after the trial if they fall sick. In short, placebo-controlled trials are Pareto improvements (because they harm no one and benefit some), and perhaps strong Pareto improvements for the primary stakeholders (because they may benefit participants, certainly ex ante and by and large ex post).

Moreover, even for those in the placebo arm, the risk of having serious COVID may not be enormous if compared with the risks normally accepted in clinical trials. Certainly if the use of placebo is accepted in countries where vaccines have been approved, then it must be accepted in vaccine-deprived countries.

From the perspective of communities, the countries where access is currently limited or nonexistent are the main beneficiaries of more trials. They are far behind in the race for accessing approved vaccines and will benefit from more options. Research on vaccines that are cheaper and easier to administer is particularly responsive to the health needs of these populations. Even if this is not the case, more competition means more vaccines available in the global market, which would possibly facilitate access, reduce prices, and give countries some bargaining power in negotiations with pharmaceutical companies.

Any “exploitation” objection to placebo-controlled COVID vaccine trials in countries without vaccine access is far more plausible in situations where sponsors do research in vaccine-deprived countries but sell their products, once approved, exclusively (or mostly) in the developed world. It is then important that sponsors are committed to fulfilling their post-trial responsibilities. At a bare minimum, they need to guarantee vaccine availability in the country where the trial was conducted. Sponsors must be committed to applying as soon as possible for regulatory approval (including emergency/conditional approval) of their products in the countries where they conducted their trials (see CIOMS, Guideline 2) and to distributing their successful vaccine products there.

Availability, however, does not guarantee access. Availability refers to the presence of an intervention in an intended place and time, while access refers to the use of such treatment by an individual. Sponsors need to make reasonable efforts to promote access, for instance, through donation, price reduction, technology transfer, training, and support to build infrastructure. The more that is done to promote access, the less the concern about exploitation.

Conducting research in low-resource settings often raises difficult ethical questions. What constitutes exploitation? Should mutually beneficial exploitation be allowed? Can an intervention be tested against the local standard of care if this is inferior to the best current treatment available where sponsors and researchers come from? What is owed to research participants and their communities? There will be reasonable disagreement about these issues in general and in particular cases, so it is important to consider who will make the decision on whether a trial is ethical.

In many developing countries, there are institutions that apply international scientific and ethical standards to assess research protocols. For instance, in Brazil, where access to approved vaccines is still very limited, vaccine trials cannot take place without the approval of the drugs agency (ANVISA) and the National Ethics Committee (CONEP). The ethics committees/review boards of funding bodies, academic institutions, and companies in the developed world should avoid blocking trials that are Pareto improvements before local institutions are given the opportunity to make their own evaluation of these difficult ethical issues. Such institutions, particularly if they allow public involvement and participation, will probably have a much better understanding of the circumstances and social values in their own countries.

Finally, there is merit in the argument that the insufficiency of approved vaccines globally does not justify allowing in the developing world research that would be unacceptable in developed countries. The root of the problem, the argument goes, lies in global inequality, lack of international aid, and patent laws. However, those making micro-decisions about whether to give an ethical approval for a trial to go ahead will rarely have the power to address these large gaps in global justice.

Testing vaccines when an effective vaccine exists: if that’s all I can get…

Sarah Conly, Professor of Philosophy, Bowdoin College and vaccine trial participant

I was a participant in the Phase III Moderna COVID vaccine trial. I received my two injections four weeks apart in August and September, 2020, and on January 2, 2021 I was very pleased to learn that I had received the actual vaccine, not the placebo.

I was motivated to participate in the trial by three things: First and foremost, I hoped I would get the vaccine, and not the placebo. At that point, and even now (January, 2021) there would have been no other way for me to get access to the vaccine, and, since I am 68, I thought COVID could prove quite dangerous for me. Second, I wanted to contribute to research on the vaccine. Third, I thought it would make a good story, especially for my Bioethics students. I should note that we were also paid by Moderna for each visit to the clinic, but for me that was not a consideration: it was nice, but didn’t affect my decision. The hope of getting an effective vaccine was what led me to brave the two-hour drive from Maine through the hell of Boston traffic, and, of course, to accept the possibility of known or unknown side effects.

This makes me think that offering placebo-controlled trials in places where the vaccine is not available, or to those to whom it is not available even where it exists, is morally acceptable. Of course, there shouldn’t be inequality in healthcare around the world, but there is. Given this, I think participants could very rationally decide that the 50% chance of getting a possibly effective vaccine is much, much better than nothing, especially where rates of infection are currently high. Why not take a gamble with positive expected value? Of course, it would be better if there were a vaccine available to everyone everywhere; but since we can’t make that happen, for many people a placebo-controlled trial is the best chance for getting a vaccine. And, of course, it furthers the research that we still need. To me, this makes it a win-win proposition.

Once COVID-19 vaccines are widely available, under what conditions would it be permissible for governments to create “immunity passports” that facilitate conditioning of services on prior vaccination?



Overview of the dilemma

Nir Eyal and Monica Magalhaes, Rutgers Center for Population-Level Bioethics

Societies’ best ticket back to normalcy is, at this time, vaccinating enough of the population to reach or approach herd immunity, particularly if vaccines continue to be shown to reduce COVID-19 transmission. To increase vaccination rates, governments must procure and provide vaccines, remove access barriers, and make the case that the vaccines are safe and efficacious. In addition, governments and institutions can create incentives for becoming vaccinated or disincentives for staying unvaccinated.

One way for governments to achieve that is to institute some form of documentation (a paper card, a smart card, a phone app) to prove vaccination status, which government agencies or private businesses can then require before e.g. rendering services that involve sharing of public spaces. Such immunity passports, or “green passports,” could be required, for example, for boarding a plane, train, bus or taxi, attending a gym or dining in a restaurant, or continuing to work at a hospital, at least absent an up-to-date negative COVID test, evidence of natural antibodies from recent COVID, medical exemption from vaccination, and perhaps other narrowly defined exemptions.

The federal government is creating standards for such passports, and the government of the State of New York is backing a specific initiative. Rutgers University, CPLB’s own home institution, has announced that students will need proof of vaccination (or a medical or religious exemption) to return to campus in the fall of 2021. Such approaches may become trends across US states and US institutions of higher education.

This Dilemma asks what affects the permissibility of green passports. For example:

  • Does it matter whether only private sector services are conditioned on a green passport, or also government ones?

  • Letting private businesses be the ones conditioning service on passports (which may be in the interests of many businesses, and may save the government from some confrontations) raises a further question: Does it matter whether the government forces, encourages, or merely makes it legal for businesses to condition service on green passports? Leaving individual businesses or institutions free to make their own policy allows for different approaches to be tried (without randomization) and “compete” in the marketplace, but may reduce both the passports’ (perceived) coerciveness and their impact on vaccination rates.

  • Does it matter whether the government’s use of green passports aims to increase vaccination rates, or, alternatively, any such predictable increase is a mere side effect of their use to achieve other aims, such as protecting other users of shared public spaces, increasing public trust in the safety of public spaces, facilitating the reopening of many kinds of businesses and activities, and making those who voluntarily choose not to be vaccinated internalize the effects of their decisions on others instead of free-riding? What if the government welcomes these side effects of green passports and feels that they would provide ample justification, but is driven by the importance of increasing vaccination rates?

  • Would merely partial (or partially equitable) access to vaccines completely rule out use of green passports, since it would penalize those with deficient vaccine access? Or should these compounded disadvantages merely be added into the overall calculation of the benefits, costs, (in)equality, and other effects of green passports on equity, many of which will be positive? Could the correct approach lie in the middle, say, adding these compounded disadvantages to the calculus, but lending them extra weight, because they (allegedly) come from the government’s own actions?

  • Does it matter whether the goods and services conditioned on having such passports are “essential” (bus access, in-person school access), or only “elective” (cruise-ship access, dine-in restaurant access)? If conditioning essential services on vaccination (with appropriate exemptions) makes incentives for vaccination more efficacious, and if those deprived of these services could resume their access at any time by getting vaccinated, then what, if anything, is wrong with conditioning essential services on vaccination? Conversely, could conditioning of even elective services accumulate to a point where the resulting differences in access threaten what political philosophers call “relational equality” by forming a two-class society? And even if to some degree they do, is this merely an “expressive cost” that is worth paying to save more lives?

  • What will be the effects of green passports on global inequality? If international travel comes to be conditioned on vaccination, many citizens of rich countries and only a select few from elsewhere will probably be able to travel freely, at least for some time to come. Does that ethically rule out the use of these passports? Or should these bad effects be weighed against the potential economic benefits to (many) developing countries of reopening the tourism industry and invigorating rich countries’ purchases of goods and raw materials from developing countries?

  • Are there implementation issues that might threaten the entire scheme?

Immunity passports: what is the true dilemma?

Ruth W. Grant, Professor Emerita of Political Science, Duke University

The key condition that legitimizes limiting access to various services and public spaces to the vaccinated is that the unvaccinated have freely chosen their status. Practically speaking, for this condition to be met, the vaccine has to be readily available to all who want it. If this condition is met, or if COVID tests are readily available and negative results are accepted in lieu of proof of vaccination, it is hard to see an ethical dilemma here. It is an easy call. Governments have a responsibility to act to promote public health and safety. Private enterprises have a similar responsibility, but on different grounds. And individuals do not have absolute freedoms. Individual freedom is always limited by considerations of harm to others. In principle, then, governments may condition services on prior vaccination. And businesses may do the same. Moreover, governments could not legitimately prohibit businesses from doing so.

Stated in this way, it looks as if there is no ethical conflict. But to many people, immunity passports would undoubtedly appear to be an illegitimate government imposition on individuals who choose not to be vaccinated. What is the alternative? If the unvaccinated are not excluded, the vaccinated are disadvantaged. They cannot trust that an airplane or a sports stadium, for example, is a safe place to be. The result may well be further delay in the opening of public spaces. In other words, there is not a neutral policy option: either the unvaccinated or the vaccinated will have their options constrained. If this is the case, the choice is clear—it is the unvaccinated who are a threat to others and to the public good. And, as has been true since the start of the pandemic, the same policies that promote public health hasten the opening of the economy.

The really difficult dilemma here, I think, is not on the level of conflicting ethical principles. It is on the level of empirical and political realities. Establishing a vaccine passport system might be perfectly legitimate, and still be the wrong thing to do. In the United States right now, everything related to the pandemic is so politicized that it would be hard to predict whether immunity passports would encourage people to get vaccinated or cause a serious backlash. The details would matter a lot: would a state- and local-level policy be more effective than a national one? How would the limitations on the unvaccinated be enforced, especially where it is private businesses imposing the constraints? What sorts of public messaging could lead people to see the immunity passport as a welcome step forward in the fight against the pandemic?

How to permissibly distinguish the vaccinated and the unvaccinated

David Enoch and Netta Barak-Corren, Hebrew University of Jerusalem 

In a recent position paper, we (and colleagues) outlined the main justifications for policies that distinguish between the vaccinated and the unvaccinated (“green passport policies”), for instance in access to cultural events, leisure activities, indoor dining, and so on. We also discussed the main conditions under which such policies may be morally permissible.

Importantly, our paper is based on the factual situation in Israel in recent months, and it should be read in that context. Perhaps the most important feature of the Israeli context is that vaccines are widely available, free of charge and typically in easily accessible locations, to all within Israel proper. Vaccines should be available in this way everywhere else as well. Of course, if vaccines are expensive, unavailable, or not realistically accessible, this strongly affects the permissibility of green passport policies. We here assume a situation in which vaccines are widely and easily available. We also assume that, absent inoculation, high infection rates lead to (justifiable) severe restrictions with harsh economic, social, educational, and other consequences. This is the situation in Israel, in most of the United States, and in many parts of the world.

In such circumstances, for most people, getting a vaccine that has been shown to be both safe and effective is both rationally and morally called for. It is the main way in which one can play a role in the collective effort to battle the essentially collective phenomenon of the pandemic. Yes, some uncertainty about the long-term effects of the vaccines (and indeed of contracting COVID) remains, but given the certain harms, both direct and indirect, of the pandemic, vaccination is clearly called for. This does not mean, of course, that anyone refusing to get the vaccine is to blame, but it does mean that some green passport policies may be justified.

On what terms, though? We argue that green-passport-based distinctions may be justified, as long as they are effective at promoting compelling goals, and as long as they satisfy a proportionality requirement. The justifiable ends we point to include reducing the number of infections and controlling pandemic-related harms and derivative general health harms (e.g., lower quality of care due to hospital congestion); returning to economic and social normalcy; imposing the costs of the decision not to be vaccinated on those making it; and incentivizing inoculation.

The decision about each proposed use of green passports should be made with these ends and with the proportionality requirement in mind, and no general recipe for a decision can be supplied. Here, though, are some important guidelines:

How pandemics work, and how this challenges traditional categories: A pandemic is, by its very nature, a collective phenomenon. Given this collective nature—and perhaps especially, the exponential pattern of infection—pandemics challenge the traditional liberal protection of a private sphere of a person’s behavior, which is no one’s business but their own. During a pandemic, one person’s decision to not get vaccinated and nevertheless interact with others (often without their knowledge of that choice) imposes costs—often serious costs—on others. This does not mean that people should be forced to get the vaccine, nor does it warrant a vaccination mandate backed by a criminal sanction. It does mean, however, that at least in the paradigmatic case, there is no plausible objection to policies that impose the costs of the decision not to get the vaccine on those making it.

Thus, if the risk of opening up restaurants for indoor dining or campuses for in-person classes is too high in the absence of sufficiently high vaccination rates, there is no reason to make the vaccinated bear the cost of the unjustified decision by others not to get the vaccine. In such cases, then, it is justifiable to open up such activities under green passport restrictions.

Equality and discrimination: What this means, of course, is that policies that distinguish between the vaccinated and the unvaccinated are not discriminatory. There are relevant distinctions between the groups that justify (some) restrictions on the unvaccinated. Currently there is no similar justification for restrictions on the vaccinated.

Sensitivity to facts: Such green-passport-based distinctions and restrictions are not a penalty, but rather a form of risk regulation and cost allocation that should be fully sensitive to the ever-changing facts. If, for instance, the rates of vaccination are high enough to approximate herd immunity, so that an individual’s decision not to get the vaccine imposes no cost on others, there will no longer be any justification for restrictions.

Distinctions, distinctions: Because proportionality is crucial here, different cases must be treated differently. For instance: access to vital locations and services such as polling places and hospitals should remain available to all. Access to places like restaurants and movie theaters, while undoubtedly important, may be restricted. Decisions on specific cases should also be sensitive to the available—even if not quite as good—non-risky alternatives. So long as deliveries are an option, for instance, access to grocery stores may be restricted. Similarly for university campuses, at least as long as distance learning is a viable (if less than perfect) option.

Trust and incentives: Incentivizing vaccination is a legitimate government purpose at this time. Still, it should be pursued wisely. And some incentivizing measures may be counterproductive. Perhaps in some cases a more effective policy will be focused on attempts to foster trust (especially among populations in which mistrust of government agencies is both entrenched and arguably justified). Countries where it will take some time until vaccines are sufficiently widely available to make green passport policies a viable option, such as the US, should work on building trust in vaccination now. But the importance of building trust and of rationally convincing people to get the vaccine does not rule out the potential contribution of incentivizing vaccination, or the permissibility of green-passport-based distinctions.

Equality and impact: Vaccine refusal and vaccine hesitancy are unfortunately correlated with membership in marginalized groups and with low socioeconomic status. Arguably, then, green passport policies will disproportionately harm the most vulnerable. This is a valid concern, of course, which should affect permissible policies. It should also affect the effort and resources put into establishing trust and rationally persuading the population with regard to vaccines. Notably, however, the indirect effects of the pandemic—of lockdowns and closures, of economic slowdowns, of higher unemployment rates, and so on—also fall disproportionately on the vulnerable and marginalized. So considerations of impact on the most vulnerable cut both ways in this dilemma. Green passport policies, by incentivizing inoculation, protecting from health-related harms, and reducing the economic effects of the pandemic, may ultimately serve an important role in mitigating these negative effects.

For now, benefits are too uncertain to justify green passports

Nicole Hassoun, Professor of Philosophy, Binghamton University

To decide if immunity passports are a good idea, it is important to get clear on: 1) what objectives we are trying to achieve by implementing them; 2) whether passports will achieve those objectives; 3) any ethical problems that remain if they do; 4) whether there are better ways of securing the same benefits without the ethical costs; and 5) whether there are ways of limiting the costs these passports create or expanding access to the benefits they provide.

Some argue that immunity passports will limit health risks while letting economies return to normal—but what risk levels are acceptable? Will passports reduce the risk below that threshold? Can we lower risk levels sufficiently without implementing passports? And can we compensate people for, or limit, passports' ethical costs?

I do not believe the data supports implementing an immunity passport system, at least given the considerations outlined above. To date, there is limited data on how the vaccines affect transmission rates. Moreover, there is significant uncertainty in tests for natural immunity. Especially with a quickly evolving virus, it is difficult even to figure out how long immunity will last, never mind whether we can achieve any particular public health objective with a passport system. Economic benefits are likewise uncertain—immunity passports will keep some people from accessing some public spaces even as they may allow others to do so, such that the overall economic effect may be positive or negative. And social distancing and other policies may also help us lower health risks and secure economic benefits, which reduces the expected benefit from passports that could not be achieved by other, less burdensome means.

If we do implement immunity passports, I believe that (at a minimum) they should not constrain people’s access to the objects of their human rights and that we should try to limit and compensate for passports’ ethical costs. Global vaccine distribution is highly inequitable. Most people, even in rich countries, have not been vaccinated to date. Some cannot ever be vaccinated for health reasons, and most of those in poor countries may have to wait years for a vaccine. I believe that implementing an international passport system would give us even more reason to help everyone around the world access vaccines as quickly as possible. Rich countries might compensate for any negative economic effects of passport systems on poor countries, for instance by providing unconditional international aid. Passport systems should also include exceptions to allow people who are willing to take appropriate precautions to access public spaces when they need to do so for important (e.g. health or family) reasons.

Of course, it is possible that COVID transmission and health risks will be worse if we do not implement passport systems than if we do, but, at least insofar as they constrain individual freedom and will exacerbate existing inequity in vaccine distribution, the burden of proof for establishing that they are justified falls squarely on those who advocate for them. Proponents of passports, even if they defend this measure on philosophical grounds, cannot simply assume or make up facts to justify their preferred policies.

In a pandemic, governments can require vaccination

Mark Budolfson, Assistant Professor, Rutgers Center for Population-Level Bioethics

In an infectious disease pandemic, governments should require vaccination if the infectious disease is bad enough and the vaccine is good enough. Incentivizing vaccination is not enough in such a case; governments should require vaccination, as long as the vaccine has been confirmed sufficiently safe and effective and is freely available to all. Whether this applies to the actual COVID situation depends on several empirical questions about risk and other factors that I highlight below, which should be answered by empirical experts. Thus, a policy of merely incentivizing vaccination and requiring immunity passports for many activities may not go far enough—even setting aside worries about whether immunity passports are feasible, given that records are often not kept of who has been vaccinated, and given that vaccine cards are easy to forge.

Governments can require vaccination in some circumstances for the same reason we can require people not to drive drunk and engage in other unacceptably risky behaviors: if an antisocial behavior creates an unacceptable risk of death or other serious noncompensable harm to others, and if that behavior can be prohibited without significantly harming anyone or imposing unreasonable costs, then the correct response is to require people not to engage in that unacceptably risky antisocial behavior. This explains why the correct public policy is to require people not to drive drunk, and why it would not be enough merely to tax drunk drivers based on the expected monetized value of the lives that will be lost due to their behavior. In other words, given the magnitude of the noncompensable risks imposed on others, it is not enough merely to create incentives; we must require people not to drive drunk in order to protect the basic rights of others. Similarly, in a context in which an excellent vaccine is freely available to all, and the unvaccinated would run an unacceptable risk of killing and maiming others, we must require people to be vaccinated (except in a small number of cases where there is a clear medical reason why they cannot be vaccinated).

In a pandemic, those who do not get vaccinated may impose an unacceptable risk to others, and vaccination can easily remove that unacceptable risk. Under those conditions, the correct policy is to require vaccination to protect basic rights. Note that this argument remains silent on the question of whether vaccines should be required in other important public health contexts in which there is a less dramatic risk that an unvaccinated individual will harm others. The correct policy in those other contexts depends on more thorny questions about the ethics of collective action and public goods—namely, exactly when and how governments might be justified in providing public goods in the domain of health and beyond. In contrast, these contested questions need not be answered when a pandemic is bad enough and a vaccine is good enough, because requiring vaccination is then justified by a more urgent and uncontroversial need to protect people’s basic rights.

To illustrate the key factor regarding risk, note that in the first year of the COVID pandemic, on the order of 1 in 1,000 U.S. adults died from complications of novel coronavirus disease. It is therefore realistic to imagine a bad pandemic in which a person who chooses to remain unvaccinated in a population where many are not fully protected could impose an additional 1 in 1,000 risk of infecting and causing the death of another person, and an even higher risk of causing other serious harm (i.e. serious illness). Even if the risk were lowered by vaccinating, say, 50% of a population, the risk imposed on others by those who choose to remain unvaccinated could remain unacceptably high (as many others may remain vulnerable, vaccines do not fully protect against death and serious illness, and the unvaccinated may promote mutations that create new risks of death and serious illness even for the vaccinated). In a bad pandemic, the degree of risk imposed by an unvaccinated person could realistically be on the same order as the risks involved in drunk driving and other antisocial behaviors that uncontroversially must be prohibited based on a government’s most fundamental obligation to protect basic rights to life and bodily integrity.

In contrast, someone who opts out of a vaccine for measles in a developed nation may impose a risk of death on the order of 1 in 10 million. While one could think that vaccination should still be required in such a case, that would require an additional argument based on a more contested set of questions about the ethics of collective action and public goods, given that imposing a 1 in 10 million risk on others is not generally thought to be a degree of risk imposition that must be prohibited to protect basic rights. The case for requiring vaccination in a bad pandemic does not depend on contested questions, since it depends only on recognizing the obvious need for governments to prohibit actions that impose as unacceptable a risk to others as a 1 in 1,000 risk of death.
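To make these orders of magnitude concrete, here is a rough illustrative comparison using only the stylized figures assumed above (the 1-in-1,000 and 1-in-10-million rates are hypothetical assumptions for the argument, not measured quantities):

\[
\frac{\text{risk of death imposed by a pandemic opt-out}}{\text{risk of death imposed by a measles opt-out}} \approx \frac{10^{-3}}{10^{-7}} = 10{,}000.
\]

On these stylized numbers, remaining unvaccinated in a bad pandemic imposes roughly ten thousand times the risk of death on others that a measles opt-out does, which is why the former, but not the latter, can plausibly be prohibited on basic-rights grounds alone.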

Thus, when a pandemic is bad enough, the ethics of collective action and public goods takes a backseat to the more urgent ethics of protecting basic rights, as the decisive reason outlined above to require vaccination in order to protect basic rights provides an independently decisive case that trumps additional reasons we may have to promote welfare. Even libertarians who reject the idea that people should be required to contribute to public goods should still agree that vaccination can be required in a pandemic, as even those libertarians agree that government should outlaw behavior that imposes an unacceptable risk of noncompensable harm to others.

In sum, no one would think that the correct policy response to drunk driving is merely to charge a tax of $1,000 a year for those who want to drive drunk and do nothing further. For similar reasons, while creating incentives to vaccinate is sometimes better than nothing, it may not be enough. The correct policy may be to require vaccination, depending on empirical facts about risk and other factors highlighted above. Choosing to remain unvaccinated may impose an unacceptable risk of noncompensable harm to others, and if so it should be prohibited (except when there are medical reasons not to vaccinate a specific individual) given that requiring vaccination isn’t an overly costly intervention into people’s lives in light of what is at stake for others.

When employers vaccinate eligible employees against COVID-19, what kinds of sub-prioritization criteria are permissible?



Overview of the dilemma

Monica Magalhaes and Nir Eyal, Rutgers Center for Population-Level Bioethics

As part of the national effort to roll out COVID-19 vaccination, some states are allocating some of the vaccine doses at their disposal to large employers, so that the employer can distribute the vaccine to its employees. Early on in the vaccination effort, when health care workers were a priority, hospitals received vaccines to administer to their staff directly; now, as vaccine eligibility expands, employers may apply to set up their own vaccine dispensation points. CPLB’s own home institution, Rutgers University, has been approved by the state of New Jersey to administer vaccines on campus when vaccine supplies become available.

The number of people eligible for employer-provided vaccination according to state criteria will often exceed the number of doses available. Employers then need to sub-prioritize, or set rules to allocate vaccines among the eligible population, keeping in mind that that population would usually be entitled to get vaccinated through the state as well.

Early on in the vaccination campaign, some hospitals and nursing homes impermissibly vaccinated donors, trustees, board members, and relatives of executives in violation of state eligibility rules. But how should employers that honor those rules allocate vaccines?

On one possible approach, employers’ sub-prioritization ought to serve the same goals as the state’s criteria for allocating the vaccines across the state do. On that approach, the employer cannot permissibly bring additional goals into its sub-prioritization decisions. The state’s general goals can be served in two quite different ways:

  1. The employer replicates state allocation criteria. For instance, if the state prioritizes based on age and comorbidities only, the employer prioritizes based on age and comorbidities only, with similar cut-offs etc. It rations vaccines as any other distribution point in the state does.
  2. The employer enacts different criteria than the state does, in service of the state’s goals, considering the employer’s special conditions. For example, imagine that the state prioritizes any state resident thought to have a certain characteristic. Assume that an employer could have provided vaccination to any area resident thought to have that characteristic. Instead, the employer provides vaccination only to those area residents thought to have that characteristic who are its employees, either because it has reliable and fine-grained data on its own employees, with no need to rely on self-reports, or because it has its employees’ up-to-date contact details and so could reach those entitled to the vaccine faster than it could reach other area residents. The employer could therefore serve the state’s goal of targeting people with that characteristic faster and more accurately than other distribution points in the state if it focuses on those who are its own employees. In such cases, for the employer to offer vaccination to other eligible area residents would throw away that potential efficiency in serving the state’s own goals.

An altogether different approach holds that employers’ sub-prioritization decisions can permissibly serve additional goals. Which additional goals?

  1. Meeting societal obligations beyond what the state’s distribution system already does? For instance, given the well-known socioeconomic and racial/ethnic disparities in vaccine access, may an employer, frustrated with the state’s failure to achieve equity, go beyond state guidelines by prioritizing eligible employees who earn the lowest salaries, belong to underserved racial or ethnic groups, or reside in areas of high social vulnerability?
  2. Meeting the employer’s own special obligations? For instance, may the employer prioritize employees at high risk of COVID infection when the employer is responsible for that risk (by requiring these employees to work in person) over employees at similar risk that is not due to the employer’s actions? May the employer prioritize its own employees and even its own retirees over other area residents (when the state would permit it to cover other residents) so as to discharge its own duties of reciprocity, which it incurred even before COVID?
  3. Furthering its own business objectives? For instance, so long as it meets any constraints specified by the state, may the employer prioritize employees whose return to work in person would facilitate reopening or have the greatest benefits for productivity? May the employer judge that the state has given it the prerogative to prioritize employees whom it most wants to retain? Does the permissibility of these considerations vary between, for instance, a for-profit company and a public hospital? Does it depend on the reasons the state gave for empowering the employer to make some allocation decisions?

Public justification and employer distribution of vaccines

Helen Frowe, Professor of Practical Philosophy and Knut and Alice Wallenberg Scholar, Stockholm University

The proposal that employers be (a) charged with administering vaccines to their employees and (b) permitted, within certain limits, to decide the pattern of vaccine distribution amongst those employees raises a range of moral questions.

Foremost amongst these questions, it seems to me, is why a state might be justified in outsourcing the allocation of vaccines to employers in this way. One candidate answer is that outsourcing to some types of employers is simply an efficient way to distribute vaccines to where they will do the most good. Here, ‘most good’ might be understood as, for example, (a) reaching those most at risk from serious harm if they catch COVID, or (b) reaching those who are already socially or economically disadvantaged, or (c) some combination of each of these, given that the data suggests correlations between suffering the worst effects of COVID and belonging to various socially disadvantaged groups. So, for example, if we take a university which has employees from a range of backgrounds, including from socially disadvantaged groups, then we might think that outsourcing vaccine allocation to the university will be an efficient means of getting vaccines to where they will do the most good.

If this is indeed what justifies the state’s outsourcing vaccine allocation to an employer, then it also provides a clear rationale that should guide the employer’s pattern of allocation. If, for example, our example university has been given this task because it is well-placed to meet the goal of getting the vaccine to members of disadvantaged groups, then its pattern of allocation should prioritise getting the vaccine to members of these groups. Indeed, this rationale suggests that the university should not merely prioritise getting the vaccine to members of these groups, but that it should only be distributing vaccines to employees who fall within those groups. If the university has surplus vaccines, it may not give those vaccines to employees who are not members of these groups. Rather, as far as possible, these vaccines should be made available for distribution to members of socially disadvantaged groups who are not employees.

Another candidate answer to the question of why a state might outsource the allocation of vaccines to employers invokes the importance of enabling certain organisations to function. The functioning of an organisation such as our example university protects and promotes morally important goods: it’s good for the economy, especially the local economy; it protects people from unemployment; it meets educational needs, and so on. If the importance of securing these goals is what justifies outsourcing vaccine allocation to large employers, then this suggests quite a different pattern of allocation. Vaccines should be allocated in a way that is most likely to enable the institution to (continue to) function and thereby protect these important goods.

So, on this model, the university should prioritise vaccines by asking (a) how likely a given employee is to contract COVID, and (b) what the effect would be on the university’s functioning if this employee were incapacitated. Note that the first question is not tied to how likely the employee is to contract COVID as a result of working for the university. On this model, there is no reason for the university to care about this role-related exposure rather than the employee’s general risk of exposure. Imagine that Anita, an administrator at our university, can do her job at home but cohabits with a partner who works in a public-facing role. Anita’s risk of contracting COVID is largely determined by the risk of her partner’s contracting COVID. If outsourcing vaccine allocation to Anita’s employer is justified by the importance of enabling it to function, the university should care about Anita’s absolute degree of risk of infection and not her role-related risk. It is the absolute degree of risk that is relevant to whether Anita will be able to perform her role.

Of course, we might think that each of these justifications—efficient distribution and organisational functioning—is likely to be instrumental in a state’s decision to outsource vaccine allocation to employers. But there is reason to be cautious of combining these justifications. This is partly because, as we’ve seen, they support quite different patterns of allocation. And it is partly because adopting a kind of middle path that gives weight to each might inadvertently undercut the functioning justification. The university’s ability to function presumably requires a critical mass of employees able to do their jobs. By diluting this justification with our reasons to reach disadvantaged groups, we risk failing to secure that critical mass. Balancing these justifications requires, at least, careful thought about what degree of functioning vindicates the outsourcing of vaccine allocation to an employer on the basis of functioning.

I suggested above that the functioning justification supports caring about employees’ absolute exposure to risk, rather than role-related exposure. Nevertheless, we might feel the pull of the view that the university has stronger reason to prioritise vaccinating those who are exposed to risk by their employment there. I am not arguing that we cannot or should not accommodate such an intuition. I am merely pointing out that this prioritisation is not derived from the justification of enabling the university to function. If we think that the university ought to give extra weight to risks that are incurred in the course of undertaking work for it, then this looks like an independent constraint on how the university may promote its functioning.

There is good reason to think that there are such constraints. In conversation, Brian Berkey suggests that we might justify ascribing extra weight to role-related risks by pointing to the fact that employees in roles involving in-person interaction are exposed to risks as a means of promoting the kinds of ends suggested above. The university is asking these employees to incur risks not only as a means of keeping their own jobs, but also as a means of helping other people keep their jobs, or secure an education, or help the (local) economy and so on. Whereas Anita is exposed to risk as a side-effect of her partner’s being usefully exposed, those whose roles involve in-person interaction are themselves usefully exposed to risks for the sake of benefits to others. Insofar as it is hard to justify requiring people to treat themselves as a means for the sake of others, particularly when this involves incurring risks of harm, we have reason to reduce those risks. This explains why an employer’s pursuit of its capacity to function is restricted by its obligations to limit the extent to which people are usefully exposed to risks for the sake of others.

Note, though, that this is not a claim that our example university has special obligations to mitigate the risks to which it exposes its employees that it may discharge through the use of a public good such as a vaccine. The obligation to reduce the risks to which individuals are exposed for the sake of others is not, in this instance, a special obligation attached to the university, because the goods at stake are broader public goods rather than goods for the university as such. It seems to me impermissible for the university to use public goods to discharge any special obligations that it might have (for example, to retired employees). Nor is such use supported by either the efficiency or functionality justifications considered here.

Employer vaccine prioritization must be consistent with legitimate public aims

Brian Berkey, Assistant Professor of Legal Studies and Business Ethics, Wharton School, University of Pennsylvania 

When employers such as for-profit corporations, universities, or public hospitals are in charge of distributing limited supplies of COVID vaccines among employees and others associated in some way with the organization, there will be temptations for those involved in deciding how to prioritize among potential recipients to treat a range of factors as relevant. Executives at for-profit corporations may, for example, want to prioritize those employees whom they want to retain, and who are most likely to have appealing alternative options. And university administrators may want to prioritize students who pay full tuition, and are most likely to take a semester off if they’re not able to return to having a largely normal social life at the start of the fall semester. 

These prioritization decisions may best serve the goals and interests of the relevant institutions, but they would also involve prioritizing employees who will tend to be higher up in the corporate hierarchy, and students from the most privileged backgrounds, respectively. If the state were making the relevant prioritization decisions directly, it would clearly be unacceptable to treat employee retention or preventing full tuition-paying students from taking a semester off as aims that justify providing priority access to vaccines to some over others in similar risk categories. In my view, it is no more acceptable for employers to prioritize on these grounds than it would be for the state to do so. This is because employers that are put in charge of distributing vaccines among employees and others associated with the organization should be understood as entrusted with the distribution of a public resource, and therefore must make decisions about how the resource is distributed that are justifiable in terms of legitimate public aims (as Helen Frowe suggests in the conclusion of her contribution to this Dilemma).

This principle rules out not only especially troubling grounds on which employers might want to prioritize, such as those that I noted above, but also others that we might initially find intuitively acceptable. For example, it rules out employers appealing to obligations that they have to employees in virtue of subjecting them to risks from COVID by requiring them to work in-person in order to justify prioritizing them over other employees who face similar overall risks, but have been permitted to work from home during the pandemic. It’s plausible that employers that have required certain employees to work in-person during the pandemic have special obligations to those employees that they don’t have to others. But, in my view, they can’t permissibly satisfy those obligations by using public resources with which they’ve been entrusted, such as a supply of vaccines that they’re charged with distributing.

Whether employees who have been required to work in-person can permissibly be prioritized over others at similar overall risk levels depends, instead, on whether there’s a legitimate public justification for prioritizing them. And it seems to me that in many cases there will in fact be such a justification. Some employees, for example, have performed (and continue to perform) work that’s genuinely essential and can’t be done remotely. In these cases, the state was or would have been justified in requiring their employers to continue to operate (at least largely) normally, at least with respect to their working conditions. The fact that some employees have put themselves at risk in the course of performing work that’s essential to the continued functioning of society during a pandemic is plausibly a legitimate basis on which the state might prioritize them for vaccine access over others at similar overall risk who don’t perform such essential work. If this is correct, then employers are permitted (or perhaps even required) to prioritize these employees – but importantly, this is only because the state could also legitimately (or perhaps would be required to) prioritize them if it was distributing access directly.

To see what my view implies for particular cases, consider a simple example: Firm F is a grocery store chain that employs A and B. A is a 65-year-old accountant who is in good health and has worked from home during the pandemic. B is a 40-year-old in-store worker with a minor health condition that increases her risk of hospitalization and death from COVID somewhat. Overall, they face roughly equal risks. My view implies that because the state has a legitimate interest in B’s work being performed, it may not be inconsistent with legitimate public aims for F to prioritize her over A for vaccine access. In addition, if B’s performing her work makes it the case that the state would be obligated to prioritize her over A if it were distributing access directly, then F is obligated to prioritize her as well (even if F’s interests would be better served by prioritizing A). This is because the public reasons that would require the state to prioritize B carry over to F when F is entrusted with the distribution of a public resource.

It’s worth noting that my view suggests that there isn’t any fundamental justification for employers distributing vaccines that they’re allocated only to employees and others associated with the organization. There may be general efficiency-based reasons for their doing so, and in some cases there may be legitimate public aims that would be served by distributing vaccines only to those within a particular organization. But this won’t be true in all cases, and when it’s not true my view implies that employers will have reasons to extend distribution beyond the institutions’ membership. If doing this would better serve the public aims that ought to guide the distribution of public resources, then it seems to me correct to think that employers ought to do it.

Private ends and the allocation of public vaccines

Bastian Steuwer, Postdoctoral Associate, Center for Population-Level Bioethics

Coronavirus vaccines are currently not available on the free market for private individuals to buy. Suppliers have entered into contracts with governments, which acted on behalf of their populations to ensure that vaccines would be available. The purchased vaccines, therefore, belong to the government, and most governments choose to distribute the vaccines free at the point of delivery, either by relying on existing health insurance coverage to cover the costs or by paying for the vaccine for those who are uninsured.

In the United States, several states have nonetheless decided to involve private actors in the distribution of the vaccine. This is unlike many other countries, in which governments are the sole distributor of vaccines, aided only by health care providers like hospitals or doctors that distribute in strict accordance with the government’s priorities. The special situation in the United States raises the following questions: how should private actors distribute the vaccines allocated to them? Can they use the vaccine to further their own ends, by which I mean either their own self-interest or special obligations of theirs that are not shared by the state?

A possible resolution to this question lies in the contrast with other countries which rely entirely on public distribution. Why do states like New Jersey think it sensible to give large employers vaccines? Why do they give away a public resource, purchased for the general population, to private employers? We might think that the answer to this question also helps us understand how private employers should allocate vaccines when they receive them.

The why question is, however, importantly ambiguous. It can refer to one of two things. First, it may be thought to refer to the actual intentions of the government. Second, it can be understood as referring to the possible justifications the government has for using private agents in the distribution of vaccines. In practice, the first question is somewhat moot. There is often no clearly communicated intention by the state that explains why vaccines should be distributed privately. It seems to me that this does not constitute a particular problem, since the first interpretation of the question does not seem to be the relevant one. Imagine that a state gave away vaccines to appease big business. Big business liked having vaccines at its disposal because of both the additional power this conferred over its employees and the positive PR that came from being portrayed as a benefactor. In this scenario, we would not consider the state’s intention to be morally relevant. That is because the state’s intentions display an unjustified attitude towards the prioritization of vaccines. If so, then we should take the second interpretation of the question to be the more important one.

So what are the possible justifications for using private employers to distribute vaccines? A first justification is that private employers might be more efficient at getting vaccines to those with the highest priority. Employers have information that the state does not have. For example, the state cannot easily distinguish between different Walmart employees. Walmart has a better sense of who really is on the frontline in their stores. This improves the fairness of the vaccine distribution. Employers also have better information that allows them to reach out to employees and set up vaccination delivery in ways that are convenient to employees. Reducing missed appointments and improving communication improve the efficiency and speed of vaccination. A second, and distinct, rationale is that it is socially very important for certain employers to resume operations. For example, online education may be a poor substitute for the real experience. If so, then this provides a good reason for schools and universities to resume in-person classes as soon as possible to reduce setbacks in the education of children and young adults. Each of these justifications may, by itself, be sufficient to justify giving vaccines to employers.

A problem with this approach is that the two justifications can pull in opposite directions. Take the following example. Adam is a current frontline worker who has a moderate risk of harm from COVID should he get it. His current risk of exposure is high. Beatrice currently works from home because she has a higher risk of harm from COVID should she catch it. At home, her exposure risk is very low. The efficiency idea would favor Adam, who is currently at higher risk of adverse outcomes and at higher risk of infecting others. But the reopening idea would favor Beatrice. If employees like Beatrice are vaccinated, then the employer can restart operations more easily.

What should the private employer do? The reasons for which the employer is given the vaccine are overdetermined. Might the intention of the government serve as a guide? I am not convinced. Unless the government attaches specific strings to the vaccines, the democratic process has not precluded either of the two allocation plans. But perhaps the employer should nevertheless heed the democratic intention. I am more inclined to think, however, that in such a case the employer can further its own ends in a limited way. It can choose between the two justifications from its own perspective. That does not necessarily resolve the case. One can argue that Adam has a strong claim that the employer owes the vaccine to him, given that the employer put Adam on the frontline. One can argue that the employer has a self-interested reason to give the vaccine to Beatrice. Either way, the employer is using its own perspective in a limited manner. I don’t think that this conflicts with seeing the vaccine as a public good paid for by the government. Whichever option the employer chooses, there is a public justification for the allocation. The vaccine is always treated as a public, and not a private, good.

Employees, clients, and everyone else

James Goodrich, PhD candidate, Rutgers University and Stockholm University

Large public and private firms are now aiding in the distribution of COVID vaccines. The rationale, roughly, is that at least some large firms are well positioned to efficiently distribute vaccines to their employees. Presumably, they are capable of such efficiency due to their preexisting infrastructure and the information they possess about many of their employees.

However, if the rationale is based on efficiency gains alone, it's difficult to see why employees are so special. Many large firms also have significant information about their clients. Universities, for example, often have at least as much information about their students as they do about their employees. Why should universities prioritize giving access to vaccines to their employees rather than their students? One might think that, in the case of COVID, many students are not at as significant a risk as many employees due to their younger age. However, this is true only as a rough generalization. There will, of course, be non-traditional students who are older than many employees and some students who have comorbidities. And there will also be many employees who are relatively young and lack relevant comorbidities. It's thus unclear that the employee/client distinction can even serve as a proxy measure for where people fall among the various priorities.

From a rather abstract point of view, I find it difficult to see why the difference between employees and at least certain kinds of clients, like students, should make a difference to the moral obligations of employers. Both employees and some kinds of clients are in regular economic exchange with the employer. One simply exchanges labor for money while the other exchanges money for goods or services. Of course, we might think that the repeated, sustained exchange relationship of an employee with their employer often generates special duties between them. Employees, in some sense, have more contact with their employers than does any one client. However, again, at least in the case of students, this doesn't create an obvious difference between employee and client. The university has a close, and in some ways even more intimate, relationship with its present students. Many students live, eat, and spend their free hours within the institutions of the university. And this relationship continues, to varying degrees, after the student graduates.

But suppose that what I say just can't be right. Suppose that we think the employers in the university setting have special duties to their employees that they do not have to their students. These duties are not grounded in efficiency or ongoing exchange interactions or anything else like that. They're just grounded in something unique to the employee-employer relationship. Suppose that that's right and that we think the universities are permitted or even should act on these special duties to their employees in distributing vaccines. If so, then we seem committed to the claim that large firms like universities may appeal to their special duties—duties that are not part of the public ethical justification, founded upon efficiency, for having large firms aid in vaccine distribution—when prioritizing some individuals over others.

This raises a host of further questions. And not just for universities. If we think universities can appeal to special obligations to prioritize some over others, then why can't other large firms do the same? And why should only special obligations to employees count? After all, many firms have special obligations to others as well. What about retirees? Or why can't a private firm appeal to special obligations they “just owe” to their stockholders? Yet giving stockholders priority seems like an objectionable use of a public good. Of course, this implication might be avoidable. My point isn't to dismiss the idea that there are special employee-employer duties out of hand, or to prove that large firms cannot appeal to such duties. Rather, my point is that considerable work must be done. We need an explanation of such duties and a justification for letting firms appeal to them which doesn't collapse into moral absurdity. After all, this is a domain of great public importance. Transparently acceptable justifications are required.

If we're unable to work out such a justification for allowing large firms to privilege their employees over others, including some others to whom they may have special obligations, such as stockholders, then how these firms should go about distributing the vaccine may just be a matter of efficiency. This too may seem strange. After all, efficiency is hostage to local circumstances. If, for example, a university is in a rural or impoverished area, it might be the most efficient distributor of vaccines for a great many people who have neither an employee nor client relationship with the university. And this would simply be because, despite a lack of personalized information, the university might have infrastructure appropriate to the task in a way no other institution within a reasonable distance does. In such cases, why shouldn't the firm be forced to ignore all of its employee and client relationships in the name of efficiency?

When employers act as vaccine distributors

Mark Budolfson, Assistant Professor, Rutgers Center for Population-Level Bioethics

Some employers are well-placed to serve as vaccine distributors given our societal goals of health, welfare, and equity. Many employers have greater know-how, capacity, and incentive than governments to vaccinate their employees quickly. And in cases where these employers are large and have diverse workforces, allocating vaccine to them can therefore be an efficient means of getting vaccine to individuals quickly and equitably.

For example, many state universities have a very large and diverse workforce, and have the knowledge, capacity, and incentive to vaccinate their employees quickly, given that they know who their employees are and where they will be, have sophisticated biomedical staff and facilities, and stand to lose millions of dollars if vaccination is delayed. Allocating some vaccine to such well-placed employers is part of the best feasible way that society can promote health, wellbeing, and equity, because it allows government to better achieve our societal goals (quickly getting vaccine to individuals in an equitable way) than if government insisted on directly distributing all vaccine to individuals itself. This explains why we should make some (but not all) employers vaccine distributors. Note that the claim here is merely that some vaccine should be allocated to employers to distribute to individuals, not that all vaccine should be allocated in this way. We should not allocate all vaccine to employers, because our societal goals also imply that some (perhaps most) vaccine should be allocated to public health agencies, pharmacies, and other entities to distribute directly to individuals.

Because some vaccine should be allocated to employers, an important question arises about how those employers should distribute their allotted vaccine. This question cannot be answered by vaccine allocation guidelines at the governmental level, because employers face additional unique questions that are not addressed by those guidelines. For example, a large employer faces the question of who belongs to the set of individuals to whom it should distribute its allocated vaccine. Should retirees be included? What about part-time vs. full-time employees? What about vendors such as cafeteria workers who are technically employees of a different company but are assigned to work full-time in the buildings run by the employer in question, and may be more exposed to COVID risk than the employer's own employees in the same buildings? Does the employer have a special obligation, in virtue of the risk it imposes on some non-employees, to treat them as part of the set of individuals to whom it should distribute its allotted vaccine? Should any member of the general public have a right to demand vaccine from employers who are allocated it, whether or not they are an employee? And whatever one makes of these questions about the relevant set of individuals to whom an employer should distribute vaccine, there are further questions, such as whether employers should add allocation parameters of their own within the allocation guidance provided by government, for example always allocating scarce vaccine to older individuals first in order to break ties within the allocation groups provided by government.

In answering these questions, it is important to recognize that employers have special obligations to two partially overlapping groups: those who are at higher risk from COVID because of the employer's actions, and those who have a legitimate claim to membership in the employer's community, which I suggest is a larger set than merely those who are current employees of the employer. To see the importance of these two types of obligations, an analogy can help. Suppose society were different, and most people lived in very large households, with one person as the head of each household. And suppose we faced a similar pandemic, with similar dynamics, and a similar need for vaccination. In this situation, it could make sense to distribute vaccine to heads of households to distribute within their household community, given the special knowledge, capacity, and incentive of households to vaccinate their community. With that setup in mind, we can imagine individuals within these households analogous to the retirees, non-employee vendors, and others considered above. So, imagine that these large households tend to contain people who are old enough that they no longer do physical labor—they are 'retirees within the household'—and tend to contain people who are simply paid to provide childcare and the like for those within the household, but are not family members or official members of the household in other ways—these are 'contract laborers within the household'. And finally, suppose that many of these retirees and contract laborers are more vulnerable to the pandemic than others within the household.

Now suppose that you learned that when some heads of households are allocated vaccine to distribute, they treat retirees and contract laborers as if they should simply be ignored and not considered eligible to receive any of the vaccine allocated to the household. Imagine that these heads of households tend to argue that retirees and contract laborers are not official working members of the household, and thus are not contributing to its profitable operations, and so have no claim to receive any of the vaccine allocated to the household. The correct response is that this argument for excluding retirees and contract laborers is ethically indefensible. Similar remarks apply in our actual situation to employers who exclude retirees and contract workers put at elevated risk within the employer's operations—and the point holds with even more force for employers, given that no special familial bonds are at play that might otherwise tell in favor of prioritizing family members, as in the household example.

Thus, the ethically correct analysis is that both retirees and contract laborers are members of the set of individuals to whom employers should distribute vaccine—they are members of the employer's 'relevant community'. The government allocates vaccine to the employer because of the employer's special knowledge, capacity, and incentive to vaccinate individuals within its relevant community; this creates a social contract between the employer and society to the effect that, when the vaccine is transferred to the employer, it must be allocated to all members of the relevant community in a way that promotes society's goals, rather than in a way that merely maximizes the employer's profits. If the employer does not allocate in this pro-social way, and instead merely prioritizes the most profitable employees and artificially excludes retirees and contract laborers, then it violates its obligation to society, and also violates special obligations to its retirees and contract laborers to treat them as valued members of its community. At the same time, employers have no obligation to provide vaccine to all members of society outside their relevant community, just as heads of households would have no obligation to provide vaccine to those with no connection to their household: once the vaccine is transferred to the employer, it is no longer a public resource.

The considerations above explain why vaccine should sometimes be allocated to employers, and begin to answer questions about how employers should distribute vaccine. Beyond the special obligations identified here, vaccine should presumably be distributed by employers so as to equitably mitigate risk within the relevant community.

In deciding between funding different health programs with limited resources, what number of deaths of newborns is as intrinsically important to avoid as the deaths of 100 young adults?



Overview of the dilemma

Monica Magalhaes and Nir Eyal, Rutgers Center for Population-Level Bioethics

In people's judgments about health resource prioritization, saving the life of a young adult is often assigned greater inherent priority than saving the life of a very old person, either on the assumption that saving young adults tends to preserve more life years or because young adults have had less chance for a full life. In that spirit, there is usually greater emphasis on preventing death from HIV/AIDS (a disease that is especially prevalent among young adults) than on preventing death from cardiovascular diseases (which typically affect the old), both because the death of a young person typically takes away more life years, and because it takes these years away from someone who has enjoyed fewer years. Other people insist that a death is a death (or a future life year is a future life year) and prioritize older people and young adults equally. Either way, what remains uncommon is to assign greater priority to saving older adults than to saving younger ones, when they are at similar risk of dying (while priority for older adults was well accepted in COVID vaccination, this was because the risk of dying of COVID is far greater for the old).

However, when we compare newborns and fetuses to young adults, the pattern reverses, and saving the lives of young adults, who are older, is usually prioritized. One survey’s findings can be interpreted as showing that a lay population prioritizes saving the life of a 39-week fetus over that of a 10-week fetus; treats full-term fetuses and newborns alike; and prioritizes saving one-year-old children over saving fetuses and newborns, and almost as highly as saving adult women. What explains this pattern of prioritization? And what, if anything, might justify it?

The answer has real ramifications in health resource prioritization around the world. It directly affects prioritization between, for example, prevention of stillbirths, of neonatal death, and of death from HIV/AIDS. It may shed light on the abortion debate. It affects measurement of the burden of disease: should a stillbirth count as generating extremely high disease burden (because it often imposes the loss of many expected life years), or extremely low burden (because the stillborn's death does not "count" for the purpose)? It is also of great philosophical interest.

Here are a few candidate explanations for the common tendency to prioritize young adults over embryos, fetuses, and even newborns for life-saving when other things (e.g. risk of short-term death) are equal. Each candidate explanation is followed by lines of questioning that may be raised against it, at least when that explanation purports also to justify that common tendency:

  1. While infant mortality is common in many parts of the world, young adults have already escaped death in infancy, and many societal resources have been invested in them. Young adults are therefore more likely to become both productive and reproductive contributors to society than embryos, fetuses and newborns. We may have either economic or species preservation reasons, therefore, to prioritize saving the lives of young adults.

  • However, the deepest philosophical question, and the question relevant to measuring the burden of disease for each individual, concerns only the inherent importance of preventing deaths, not its instrumental importance to society or to the continuity of the species.

  2. Young adults are personally more invested in continued living than fetuses and infants are.

  • However, what does being invested in continued living mean, and why should that drive priorities in resource allocation?

  3. Young adults have a concept of a history which may be cut short, more than fetuses and infants do, so survival means more to the former.

  • However, we do not usually think that someone’s full comprehension that an evil is done to them is necessary for its designation as an evil. Why, then, condition the badness of a death on the dying person’s own sense of history?

  4. Young adults harbor long-term goals, which dying soon would typically thwart, arguably unlike fetuses and infants.

  • But is the frustration of our goals inherently bad for us? And is it so bad that it can outweigh the typically longer future that a newborn might have had upon short-term survival, making young adult death worse overall?

  5. Young adults resemble themselves psychologically as older adults more than embryos, or even fetuses and newborns, resemble their older selves. So on psychological-continuity accounts of personal identity, dying deprives young adults of many decades of a life that they would otherwise have had, and not so for embryos, fetuses, and newborns.

  • But our tendency to discount deaths of young humans seems deeper than considerations of psychological continuity and, relatedly, of personal identity. Embryos, fetuses, and newborns arguably retain their personal identity at least for some weeks in which no transformative developments take place. Intuitively, however, one extra week of continued existence is not particularly beneficial to them. So the account in terms of continuity and identity at best captures only a part of the truth.

  6. Young adults are full-blown persons, with higher moral status and firmer rights than fetuses and newborns possess.

  • However, setting priorities based on assigning different statuses seems wrong in other areas of health—for example, fetal pain and infant pain arguably should not command fewer resources than pain in young adults.

  7. Young adults are already living their "life story" or "narrative", which even a painless death would disrupt, unlike very early humans, who haven't yet started "writing" their book of life.

  • However, why is a “life story” key to determining health-resource prioritization, as opposed to, for example, the stakes in terms of health, capability, and the like? And doesn’t our story start at conception or at birth, rather than only when we become full-blown persons?

 

This Dilemma asks which of these explanations, if any, can justify the common tendency to prioritize young adults over embryos, fetuses, and even newborns and the related decisions in health resource allocation.

Valuing mortality risks at different ages

Lisa A. Robinson, Deputy Director, Center for Health Decision Science, Harvard T.H. Chan School of Public Health

How should we trade off deaths of newborns versus deaths of young adults? Clearly this is a normative issue, one that philosophers are accustomed to addressing. But I am an economist or, more precisely, a policy analyst. Here I describe how the framework within which I most often work, benefit-cost analysis, would approach this dilemma.

Addressing this dilemma from the benefit-cost analysis perspective requires first clarifying that framework. Conventionally, benefit-cost analysis compares scenarios without the policy to scenarios with the policy over the time period when the policy would be implemented. It considers all impacts, positive or negative, related or unrelated to health. In this dilemma, presumably a decision-maker is faced with only two options, each of which has equivalent costs and only one outcome: averting deaths of infants or averting deaths of young adults. Such a choice is obviously an intentionally artificial construct. Choices are rarely (if ever) this stark, nor are they limited to so few options and outcomes. Accepting this artificial construct, however, how would a benefit-cost analysis answer the question?

In benefit-cost analysis, as conventionally conducted, value is derived from individual preferences for exchanging money for outcomes of concern. Money is not important per se. Rather, it is a convenient measure of exchange, representing the allocation of scarce resources (labor, materials, and so forth). If an individual spends money on a good or service, he or she cannot use that same money for other purposes; the expenditure has an opportunity cost. Presumably, the individual purchases a good or service if she or he values it more than the other things that money could buy. Equivalently, the amount an individual is willing to pay for a mortality risk reduction, such as a 1 in 10,000 decrease in the risk of death in a given year, indicates the extent to which he or she is willing to forego other consumption to achieve that improvement.

Conventional benefit-cost analysis is also based on respect for individual preferences; it is not paternalistic. Each individual is assumed to be the best, or the most legitimate, judge of his or her own welfare. This means that the value of an outcome, such as mortality risk reductions, is derived from the preferences of the individuals affected. Because infants and young children lack the ability to develop thoughtful and well-informed preferences for these tradeoffs, researchers typically rely on parents to estimate their preferences. This means that, in benefit-cost analysis, the mortality risk reductions envisioned under this dilemma would be valued based on the willingness of the affected individuals to exchange money for the risk reductions they would experience. To compare the total benefits of these two policies, one affecting newborns and the second affecting young adults, a benefit-cost analysis would sum individual willingness to pay for the risk reductions across those affected in each case.

Within this framework, what does the available research tell us about these tradeoffs? In high income countries, researchers often find that, on average, values for children exceed values for adults by a factor of 1.5 or more. The extent to which these values vary with the age of the child is uncertain. For working age adults (e.g., between ages 18 and 65), values are often found to follow an inverse-U pattern, increasing throughout young adulthood, peaking in middle age (generally somewhere between ages 35 and 45), then declining. However, the slope of the curve and the age at which it peaks varies across studies. For older adults (generally above age 65), the pattern is less clear; values may increase, decrease, or remain stable.

These patterns suggest higher values for each newborn affected than for each young adult, at least in high income settings. For example, if we start with the 2019 population-average values recommended by the U.S. Department of Health and Human Services, the value of averting an expected death at age 40 would be $10.6 million. If we assume that the value for an average child is 1.5 times the value for an average adult (age 40), then the value of averting an expected death for a child would be $15.9 million. As an example of the values for younger adults, one analysis that applies an inverse-U function to those of working age finds a value per expected death averted of $5.4 million for ages 24 and under and $8.5 million for ages 25 to 34. These estimates are uncertain, however. The research findings are not entirely consistent across studies and many issues are unresolved. Thus while we can conclude that the policy affecting newborns would likely be preferred if both policies were to avert the same number of expected deaths (all else equal), we are uncertain about the size of the difference.
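To see how these numbers combine, here is a minimal sketch in Python of the benefit-cost comparison, using the dollar values cited above; the per-person willingness-to-pay figure and the implied population are hypothetical illustrations of how individual WTP aggregates into a value per expected death averted, not estimates from any study:

```python
# Minimal benefit-cost sketch. The $10.6M adult value, the 1.5 child
# multiplier, and the $5.4M/$8.5M young-adult values are the figures
# cited in the text; the per-person WTP is a hypothetical illustration.

def value_per_expected_death(wtp_per_person, risk_reduction):
    """Aggregate WTP per expected death averted: WTP / risk reduction."""
    return wtp_per_person / risk_reduction

# $1,060 per person for a 1-in-10,000 annual risk reduction aggregates
# to the $10.6M population-average adult (age 40) value cited above.
value_adult_40 = value_per_expected_death(1_060, 1 / 10_000)
value_child = 1.5 * value_adult_40        # $15.9M

value_age_under_25 = 5.4e6                # inverse-U estimates cited above
value_age_25_to_34 = 8.5e6

# Two policies averting the same number of expected deaths, all else equal:
deaths_averted = 100
print(f"Newborn policy:     ${value_child * deaths_averted:,.0f}")
print(f"Young-adult policy: ${value_age_25_to_34 * deaths_averted:,.0f}")
```

On these figures the newborn policy yields the larger total benefit, which is just the qualitative conclusion above restated in code; the size of the gap inherits all the uncertainty already noted.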

If the policy would instead affect a low- or middle-income country, these patterns may not hold. The relationship between the value of mortality risk reductions and age has not been well studied in these settings and may differ for cultural and other reasons. More generally, regardless of location, values will likely vary due to other characteristics of the individuals affected, such as income and health status, and characteristics of the risks, such as the degree of morbidity prior to death and the extent to which the risk is viewed as voluntary and controllable.

Thus, within the benefit-cost analysis framework, we would address this dilemma by investigating the value that individuals place on risk reductions: asking parents to estimate values for risks to newborns, and asking young adults to estimate values for their own risks.

Given space limitations, this essay ignores many other relevant concerns. These include confusion about the “value per statistical life” or “VSL” terminology that is often used to describe willingness to pay for small changes in mortality risk, as well as options for valuing changes in life expectancy (the value per statistical life year, VSLY) rather than changes in mortality risk. These and other concerns are explored in detail elsewhere and many government agencies and organizations have developed related guidance.

Benefit-cost analysis provides an incomplete basis for policy decisions, however. Like many forms of analysis, it ignores pragmatic concerns, such as legal, financial, and political constraints. A perhaps more difficult challenge is the need to ensure that unquantified impacts are appropriately communicated and weighed, neither ignored nor exaggerated. Whether and how to incorporate preferences for others’ wellbeing within this framework raises many conceptual and empirical issues. Perhaps most importantly, the distribution of outcomes across advantaged and disadvantaged groups must be considered.

Avert the worst deaths or prioritize the worst off?

Joseph Millum, Bioethicist, Clinical Center, National Institutes of Health 

Disclaimer: The views expressed are the author’s own. They do not represent the position or policies of the National Institutes of Health, the Department of Health and Human Services, or the U.S. Government.

In deciding between funding different health programs with limited resources, what number of newborn deaths is as intrinsically important to avoid as the deaths of 100 young adults?

Health systems are rarely faced with direct choices between lives of newborns and adult lives. However, in deciding which interventions to fund or where to expand access first, policy-makers reveal the relative value they put on different sources of morbidity and mortality. There are some interventions whose major benefit to populations is through preventing deaths at very young ages, such as rotavirus vaccination. Others, such as interventions to prevent and treat HIV/AIDS, also have a considerable effect on reducing young adult deaths. When decisions must be made about where to direct scarce health care resources, it can therefore make a difference how much value is placed on preventing a newborn or infant death versus preventing the death of a young adult. It will affect how much the system should be willing to pay per death averted.

How should we compare the value of averting deaths? One way to do so is to calculate the amount of healthy life that the decedent would live if they were saved. We can do this using summary measures of health such as disability-adjusted life-years (DALYs) or quality-adjusted life-years (QALYs). These combine length of life and health-related quality of life into one measure.

Assume, for present purposes, that we are considering young adults and newborns who would go on to live otherwise-average lives provided that their immediate risk of death is averted. (We are not, for example, considering newborns with congenital conditions that will have serious sequelae even if their lives are saved.) Even in countries where neonatal and child mortality is very high, the average newborn has more life years ahead than the average twenty-year-old. If our goal is to maximize benefits in terms of DALYs averted or QALYs gained, we should spend more to save newborns than to save young adults.
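As a minimal sketch of that maximizing logic, consider the following Python toy comparison; the remaining-life-expectancy figures and quality weight are invented for illustration, not drawn from any life table:

```python
# Toy QALY-maximization comparison; all numbers are hypothetical.

def qalys_gained(remaining_life_years, avg_quality_weight):
    """Healthy life gained if the immediate risk of death is averted."""
    return remaining_life_years * avg_quality_weight

# Even where early mortality is high, the average newborn has more life
# years ahead than the average twenty-year-old (illustrative figures):
newborn_gain = qalys_gained(remaining_life_years=65, avg_quality_weight=0.9)
young_adult_gain = qalys_gained(remaining_life_years=50, avg_quality_weight=0.9)

print(newborn_gain > young_adult_gain)  # True: pure maximization favors newborns
```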

The appropriate goal for the health care system may not be to maximize benefits. Indeed, most people who think about the ethics of allocating scarce resources conclude that maximizing benefits should be at most only one of a health system’s goals. In the context of comparing neonatal and young adult deaths, two other ethical considerations are relevant. These considerations pull in different directions.

First, many philosophers think that how bad it is for an individual to die is not merely a function of how much life they miss out on by dying. The individual’s cognitive development matters too. For example, for someone who is self-aware and has a sense of themselves as having a past and a future, it matters a great deal if they miss out on future life. For someone less cognitively developed, such that they do not yet have a sense of self, it may seem to matter much less to them what they miss out on. So, though newborns miss out on more by dying than do young adults, a typical young adult is much more cognitively sophisticated than a newborn. It therefore matters much more to the young adult if they are deprived of future life.

The view that how bad it is for someone to die is a function of both what they miss out on by dying and their level of cognitive development leads to a form of gradualism about the badness of death. On a gradualist view, for the typically developing human, how bad it is to die rises with age for the first few years and then gradually declines.

Depending on one’s specific reasons for adopting gradualism, the exact function relating age to the disvalue of death can differ considerably. Some gradualists think that the worst time to die is as a toddler; others that death is worse as an adolescent or young adult. These differences will be important for situations in which policy-makers need to place a value on preventing the deaths of toddlers and older children, rather than just newborns.
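To illustrate how gradualist views can diverge, here is a toy Python sketch of two hypothetical weighting curves, one peaking in early childhood and one in young adulthood; the functional form and every parameter are invented solely to display the structure of the disagreement:

```python
# Two hypothetical gradualist curves for the badness of death by age.
# Both rise from birth and later decline; they differ on where the peak is.

def badness_of_death(age, peak_age, lifespan=80.0):
    """Toy curve: a cognitive-development factor (rising until peak_age)
    multiplied by a crude remaining-life proxy (falling with age)."""
    development = min(age / peak_age, 1.0)
    remaining_life = max(lifespan - age, 0.0)
    return development * remaining_life

ages = [0, 1, 3, 20, 40]
print([(a, badness_of_death(a, peak_age=3)) for a in ages])
print([(a, badness_of_death(a, peak_age=20)) for a in ages])
# On the first curve the worst death is a toddler's; on the second,
# a young adult's. Policy rankings can differ accordingly.
```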

The second ethical consideration that I wish to flag in this context concerns the distribution of benefits. Most people think that when we are allocating a scarce resource, we should consider both the magnitude of the benefits that we can generate and the way that those benefits are distributed. It would be unfair to give all the resources to people who are already better off, even if that would maximize the benefits that the resources could generate. This concern about distribution can be captured in egalitarian or prioritarian terms. Either way, it implies that we should care more about providing benefits to those who are worse off.

Who is worse off—a newborn who dies or a young adult who dies? Since the newborn has less life, it seems very plausible that they are worse off. (Interestingly, this can be true even if the young adult's death is worse for the young adult than the newborn's death is for the newborn. If we had no rationing dilemma, but were just asked whether to save a newborn who would then live twenty years, presumably we would assent. Twenty years is better than no life at all.)

Our two ethical considerations pull in different directions. Gradualism about the badness of death suggests that it is worse for young adults to die than for newborns. We should therefore be willing to spend more to avert young adult deaths. But newborns who die are worse off than young adults who die. Giving greater priority to the worst off supports spending more to save the newborns.

How we answer this dilemma will depend on how much we discount the badness of death for newborns versus how much additional priority we give on the basis of disadvantage. If health systems are to fairly allocate scarce resources they need to put numerical values on each. Currently, we are a long way from consensus on either prioritization decision.

Acquisition of human potential and the value of life

Julian C. Jamison, Professor of Economics, University of Exeter (UK)

Let me start by putting on the table that my default position is total utilitarianism, where a life-year is a life-year is a life-year. That implies perhaps 25% extra value on a newborn relative to a young adult, due to the newborn's higher remaining life expectancy. However, there are two complications to that approach (in my personal view) which have bearing on the question at hand.

The first complication is that I also want to factor in prioritarianism, giving more weight to those who are less well-off. In this generic version of the question we naturally don't know anything about incomes or other circumstances, nor do we suppose that these would differ across the two age groups. What we do know is that we are considering health interventions targeting distinct populations that differ (at least probabilistically) in their age of death. While dying as a young adult is 'unfair', it seems to me that dying as an infant is even more inequitable (i.e. even further from expectation), and hence the latter group should be given extra weight in a prioritarian calculus. Let's call it 50% higher after aggregating with the life-years argument above.

The second complexity is that implicitly the basic utilitarian "life-year" was a human life-year, but what counts as a human? I reject a categorical definition or bright line between human and not-human (if nothing else, consider our continuous evolutionary history) and instead posit a gradual increase from nothing to fully human; see also a philosophical justification here. For present purposes I will set aside issues regarding capability at any given moment (which would be relevant for animals and some disabled humans, neither of whom are on an individual path to full human capacity) and instead focus on what seems to be the relevant transition from being a potential but undetermined human to a fully existing one. In other research, I have tried to conceptualize the continuum from a potential to an existing human in terms of how feasible it is for us (as, ideally, disinterested decision-makers) to understand and empathize with what it means for another individual's existence to go well or poorly: how the life of an existing human might go well or poorly is more comprehensible than how the life of a merely potential human might, leading us to be more confident about being able to do good for the former. Among the bullet points in the Overview of this dilemma, this way of justifying prioritizing young adults over newborns is closest to point #6 regarding "full-blown persons", since unlike most of the other points, in #6 the justification is not from the perspective of the entity in question (partly precisely because of the various counter-arguments given to those approaches). Note that my framework regarding potential and existing humans does imply assigning lower weight to fetal and infant pain.

Next: when does this process of transition from potential to existing human start – at birth? At fetal viability? At conception? I would argue, even earlier: a cell that is a potential future human is already the slightest bit past the starting point in this transition. The ‘Procreation Asymmetry’ in population ethics says that (in Jan Narveson’s words) “we are in favor of making people happy, but neutral about making happy people”. I suspect that this intuition (which many share) is the same one that leads to the relative devaluation of newborns compared to adults (and teens or school children). Merely potential people get less moral weight than instantiated people. I agree with this view for the reasons mentioned, but I would give even purely potential people (i.e. pre-conception) some positive weight, contra Narveson. And then I would place continuously increasing weight as that potential is developed and the individual human is pinned down and becomes comprehended by (or at least comprehensible to) the policy decision-maker, reaching full determinedness by (say) five years old.

Where does this leave us in terms of the original question? Our tally was at 50% extra weight on the newborns, but now we need to downgrade them due to their status as not-yet-fully-individuated humans. Admittedly the actual numbers become somewhat arbitrary at this point, but for the sake of argument let us say that purely potential future humans receive one-third the weight of existing full humans. Let us further suppose that newborns are halfway on their journey from formlessness to full-blown-ness. This puts them at two-thirds total, after which we can add the 50% for more life-years and for prioritarianism… and we reach parity! Yes, I cooked the books to make it come out exactly the same, but each individual step seems roughly right to me.
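For what it's worth, the arithmetic of this tally can be laid out explicitly in a few lines of Python, using only the weights stated above (nothing here is new; it is the same calculation, made checkable):

```python
# Jamison's tally, step by step, using only the weights given in the text.
from fractions import Fraction

life_year_bonus = Fraction(5, 4)                 # ~25% extra life-years for a newborn
prioritarian_bump = Fraction(3, 2) / life_year_bonus  # lifts the combined extra to 50%

# Potentiality discount: purely potential humans get 1/3 weight; a newborn
# is taken to be halfway from merely potential (1/3) to fully existing (1):
newborn_status = (Fraction(1, 3) + 1) / 2        # = 2/3

weight_vs_young_adult = life_year_bonus * prioritarian_bump * newborn_status
print(weight_vs_young_adult)                     # 1: parity, as the essay concludes
```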

Newborns or young adults? What we have and what we could have

Espen Gamlund, Professor of Philosophy, University of Bergen, Norway

Carl Tollef Solberg, Senior researcher, Center for Medical Ethics, University of Oslo, Norway

Suppose a decision-maker must prioritize between life-saving interventions that will save either newborns or young adults. It is assumed that individuals in either group will go on to live a life worth living, or die in a few weeks if denied help. Should the decision-maker give the life-saving treatment to the newborns or the young adults? The key issue at stake here is the relative importance of preventing the deaths of groups of individuals at these two ages.

There is a menu of different approaches available, and, as we will see, the choice among these approaches may tip the scale in favor of either newborns or young adults. One approach is backward-looking. Here we can consult distributive justice theories like egalitarianism and prioritarianism. These distributive theories emphasize what individuals have had so far in life. By definition, newborns have had fewer life-years (or anything else that matters) than young adults, which favors prioritizing the former over the latter. In sum, a backward-looking approach will tend to favor newborns.

A second approach is present-looking. Such a perspective moves our attention to the current characteristics of the individuals in our dilemma. It involves a considerable degree of actualism (putting weight on current characteristics in making ethical judgments). As a matter of fact, young adults are in possession of characteristics that newborns lack. Young adults possess personhood; they have established long-term goals and projects, interests, and ambitions – in short, they have made various life-investments that would be lost if they die prematurely. Young adults are also more productive, and society has made various kinds of investments in them. They may also themselves have children, and they probably have parents and grandparents who depend on them. Young adults are temporal beings with narrative selves and deep social bonds to other people, and a more sophisticated capacity for well-being and suffering. The present-looking approach favors these actualist characteristics, and in sum they count in favor of saving young adults over newborns.

A third approach is forward-looking. Prioritizing a group of newborns is likely to gain support from those who believe either that we should maximize utility, measured in quality-adjusted life years (QALYs), or that we should minimize disutility, measured in disability-adjusted life years (DALYs), when setting priorities in health. A measure like the DALY is, in its very construction, defined so that the death of newborns will come out as a larger tragedy (in the sense of generating more disease burden) than the death of young adults (even if stillbirths generate no disease burden). This raises the question of when individuals begin to accrue DALYs. If one believes that death is worse the earlier it occurs, then one seems committed to the view that death is worst right after the individual has begun to exist. And if we begin to exist at conception, this implies that policymakers aiming to maximize QALYs or minimize DALYs should prioritize saving embryos over fetuses and fetuses over newborns. But this is implausible.

The forward-looking approach operates on the value of possibilism (putting weight on possible future characteristics in making ethical judgments). In contrast to the actualism of the present-looking approach – which emphasizes current characteristics and thereby young adults – the possibilism of the forward-looking approach tends to emphasize what can or will happen in the future if we prevent the deaths of young individuals like newborns. Although newborns have not yet developed enough psychologically to have made life-investments, and so lack a current stake in their future, they will eventually make the relevant life-investments, and they will eventually develop a stake in their life and future. If, instead of emphasizing what we currently have as grounds for prioritizing, we emphasize what we could have in the future, this may count in favor of saving newborns over young adults.

A newborn is a potential future young adult. To some extent, saving newborns involves saving future life-years, narrative selves, life-goals, life-projects, personhood – in short, everything that a group of young adults may have now. Thus, the crucial question is to what extent we should now emphasize characteristics of individuals which they have yet to develop, but which they are likely to develop.

Both the present-looking and the forward-looking approaches can be combined with badness-of-death approaches to priority setting in health care. The underlying assumption here is that how bad it is to die provides one kind of reason for why we should prevent an individual's death. If newborns fail to receive life-saving treatment, one could claim that their death is particularly bad because the loss associated with their death is greater than the loss associated with the deaths of young adults. This intuition is captured by a Deprivation Account of the Badness of Death, according to which death is bad in virtue of what it deprives someone of. The death of newborns is bad for them because it deprives them of much good life.

On the other hand, some would say that the death of young adults is worse than the death of newborns, even if death deprives young adults of less good life compared with newborns. One reason for this is that young adults are psychologically more connected to their future than is the case for newborns. According to a Gradualist Account of the Badness of Death, both the future life lost and the extent to which the future matters to an individual are relevant to how bad an individual’s death is for that individual.

Gradualism will tend to favor actualism, whereas the Deprivation Account will tend to favor possibilism. Many neonatal deaths happen as a direct result of prematurity. Philosophically, we have not found strong reasons for arguing that premature newborn deaths should be measured as much larger tragedies than stillbirths. If an equal number of life years would be saved in both groups, then this will count in favor of young adults. On the other hand, if both groups will live until age 86, then the dilemma becomes a more difficult one. Proponents of gradualism about the badness of death will typically argue that newborn life years should be discounted. Ultimately, the choice between newborns and young adults will depend on the chosen discount rate.

 

Young adults versus newborns, and cats versus kittens

Dan Hausman, Research Professor of Bioethics, Rutgers Center for Population-Level Bioethics

I have no idea how many newborn deaths are as intrinsically important to avoid as the deaths of 100 young adults, but like many others, I believe that it is many more than 100. What, if anything, justifies this belief?

I think that there are several considerations that speak in its favor, but I shall only discuss one. Suppose one were writing biographies of both an adult and a newborn who are facing death. One would obviously have a great deal more to say about the adult, but the difference would not just be quantitative. The adult is a fully developed human being with rational capacities, objectives, convictions, passions, a cultural identity, and a personality. His or her life is the story of a human life that is cut off by death. The newborn has none of these traits. Regardless of one's view of its rights and the wrongness of killing newborns, the death of a newborn prevents a biography from beginning, rather than ending one.

During a seminar at CPLB, philosopher Shelly Kagan mentioned that he does not share the intuition that the death of a young adult cat is worse than the death of a newborn kitten. I share Kagan's intuition on this. Yet, of course, one can write a biography of a two-year-old cat, who has habits, likes and dislikes, memories, and abilities to plan that a newborn kitten does not have. Kagan's point thus seems to undermine my account. Clearly, the view that allowing a biography that is only half complete to end is much worse than allowing the death of a newborn must rest on the difference between ending a human or "rational" biographical life and not beginning this kind of life.

If there is some special value attached to a life attainable by most adult human beings, then one can invoke the asymmetry between interrupting such a life and not beginning it to explain why the death of a newborn is not as bad as the death of a young adult. But the account would suggest that if there is value in a cat’s life, as there surely is, then there should also be a difference – albeit a smaller one – between the end of a cat’s developed life and a newborn kitten not starting one.

 

My cat—a true story

Nir Eyal, Director, Center for Population-Level Bioethics

In my second-floor apartment, my adult cat always seemed somewhat down, though with a life worth living. When I moved to the ground floor, I started to let him go outside daily. The cat became lively and playful. It was evident that daily roaming of the great outdoors increased his quality of life substantially.

The veterinarian disapproved because outdoors, cars and predators often inflict sudden death. (She also pointed out that my cat would terrorize birds—set that aside.) That perplexed me. My strong intuition remains that, for the cat's own good, ongoing fun matters far more than having a long life. Running around, smelling grass and flowers, hopping from one surface to another, and behaving like a tiger are all sources of appreciable joy for a being with my cat's mental faculties. Dragging out a dull life for longer matters less.

Death is not nothing to us; but there is an important way in which sudden death is nothing to a cat. A fairly sudden death with only a moment's pain is almost nothing to a cat. Life quality is what matters in an entity with my normal adult cat's mental faculties. My intuition, at least, is that to offer the cat a year of excitement is the pro-cat thing to do, more than to offer 10 years of tolerable boredom—even if the latter includes more hedons overall ("hedons," from "hedonistic," is philosophese for pleasurable experiences minus displeasurable ones). Perhaps in assessing what would best promote a cat's interests, we should not be aggregating hedons over different years at all, but instead consider the average utility per period. Crucially, these thoughts were about what would serve the cat's own good—regardless of his moral status and rights and the tradeoffs with other beings' rights.

A human newborn, fetus or embryo has many mental and other potentials for future development that an adult cat lacks. But the actualized mental faculties of all these human entities are on a par with, or lesser than, the adult cat’s.

If what I felt about my cat is sensible and applies to human newborns, fetuses, or early embryos, then other things matter more to these human entities' good than continuing to exist. Most centrally, inasmuch as the newborn or fetus has concurrent experiences like pain and pleasure, then, at least inasmuch as that entity has full moral status and no other entity's interests conflict, we should prioritize action on these experiential "hedons". Accordingly, programs for treatment and prevention of fetal pain matter greatly. Pain is not nothing to a fetus or a newborn. It matters here and now, even when it has no impact either on survival or on the health and flourishing of any person whom this human entity might one day become. Surprisingly, however, promoting the entity's own good scarcely requires preventing its demise, by analogy with my cat story above. Not so for a young adult human, for whom death is an incontrovertible tragedy. Therefore, when preventing stillbirth conflicts with preventing the death of a young adult, as it does in our dilemma, the latter easily wins.

Thus, applying my intuition would tend to support funding interventions that avert young adult deaths when the opportunity cost is funding interventions that would avert similar numbers of stillbirths and neonatal deaths, other things being equal. While either life could be as important to preserve for the person once he or she is a person, fetuses and newborns are not there yet, and their demise would arguably be a smaller tragedy for them at this point than it would be for young adults, including the young adults whom these fetuses and newborns might one day become by surviving now.

It is true that the death of any human entity, tiny or grown up, usually means a lot to its near and dear. But it is unclear that impact on third parties should inform health policy. Do unpopular people get fewer rights to healthcare? What I say here pertains only to the inherent tragedy and corollary reasons to prevent death.

Applying my intuition to cats and early humans may affect additional matters:

1.      "Deprivation" theories of death: The badness of death is regularly thought to include, at least among other things, the "deprivation" of the dying entity of all the good things that a longer life would have included. But in my intuition about my cat, that is not (a significant) part of what makes death bad. So deprivation of future hedons might not always make death (significantly) worse for the decedent.

2.      The abortion debate: This revolves mainly around the (in)existence and force of the fetus's rights compared to those of the pregnant woman. But it also matters how much continued existence, which is clearly good for adults with a life worth living, is at all important for a fetus likely to develop a life worth living. Abortion may remain a woman's right, among other things on the ground that ceasing to exist before one turns into a person is not very bad for a fetus. This somewhat recalls the view that we have a duty to make people happy but not to make happy people. (Again: by contrast, pain in a fetus and in a woman who carries it may weigh similarly, suggesting that their statuses are in some respects similar, and supporting some pro-life moves.)

3.      The "time-relative interest" theory: An alternate account of why a fetus lacks an interest in the existence of the adult person which the fetus would become is that the fetus and the adult person are psychologically so different that they are different entities, who need not care much for each other's survival. This alternate account may be redundant. An adult cat presumably possesses close psychological continuity with its own future self, and is definitely the same cat as that later cat; yet if my intuition about my own cat was right, painless death remains nothing to an adult cat. Whatever makes death nothing to an adult cat may obtain in the case of the fetus as well, accounting for my intuition and potentially for why a fetus lacks an interest in the existence of the adult person which the fetus would become if the fetus survives, without invoking psychological discontinuity.

4.      The importance of a sense of history: I am inclined to think that what makes painless death (nearly) nothing to a cat (and, by implication, to very early humans) is primarily their absence of thoughts about and aspirations for their own future selves, which death would thwart. (Inasmuch as that’s psychological discontinuity with the future self, psychological continuity matters.) If that is correct, it lends some support to the idea that such thoughts and aspirations for the future are key to the badness of death.

5.      Calculation of the global burden of disease: For calculations of disease burden, whether stillbirths from that disease contribute to the life years lost from it does not depend on how much death is "harmful" to fetuses. The global burden of disease purports to assess, not the harmfulness of disease, but how much it detracts from health—by which is meant, from the health of the beings whose health comprises global health. A human disease that spreads to cats and kills them does not thereby add to the global burden of disease. So what matters to calculating disease burden is whether fetal health counts as part of global health—an independent question.

 

Should rich countries give “booster” COVID vaccines to their own citizens, given that they can send these doses to countries where the majority is still unvaccinated?



Overview of the dilemma

Nir Eyal and Monica Magalhaes, Rutgers Center for Population-Level Bioethics

As we write this, in late August 2021, in 18 countries, mainly in sub-Saharan Africa, fewer than 2 COVID vaccine doses have been delivered for every 100 people. That is doses delivered, not persons fully vaccinated. In Bangladesh, roughly 6.5 million people out of a target population of 118 million have been fully vaccinated—that's 4% of the population. The paucity of aid from rich nations is both cruel, for the suffering and death that it allows to grow, and contrary to the self-interest of rich nations, since it makes new strains and global economic recession likelier.

Meanwhile, in the US, 51% of the population has been fully vaccinated. Among older adults the share of fully vaccinated individuals is significantly higher. The US is now providing a third dose of mRNA vaccine to people who are moderately to severely immunocompromised. From fall 2021, anyone in America who got their two doses of an mRNA vaccine more than several months earlier will be eligible for a third dose. The rationale is that new strains like Delta can resist existing vaccination, and that protection is waning with time. Israel, Germany, France, and other developed countries are also rolling out a third dose.

The World Health Organization condemns the provision of third doses in an increasing number of rich countries, and calls for a halt to COVID-19 third-dose vaccination anywhere until at least the end of September 2021, to allow for the inoculation of at least 10% of the population of every country.

So what are the medical, epidemiological, and ethical considerations that should inform rich nations’ decisions about providing a third dose to their populations? Is it ethical for developed nations like the US to roll out “booster” shots to their own citizens, thereby better protecting the local old and immunocompromised, as well as other conationals, from the Delta strain? Or do the US and other rich countries have a duty to prioritize sending vaccine doses abroad—to countries where the majority of the population remains unvaccinated, or to COVAX? Might there be a way for rich countries to do both? And is it permissible for individuals in these rich countries to accept a third dose, even if they believe, and should believe, that these doses should have helped people abroad?

The "booster" dilemma: An actualist perspective

Frank Jackson, Emeritus Professor of Philosophy, The Australian National University

It isn't controversial that governments have special obligations in regard to their own citizens. Some explain this in consequentialist terms. Shepherds look after their own flocks. Police hunting for a suspect assign different officers to different search areas. We ask—should ask—our children to look after their own bedrooms. These examples remind us, if indeed a reminder is needed, that things go better when there are demarcated areas of responsibility, and, in the case of a government, its demarcated area of responsibility is the country and citizens it governs. Others may grant that there is a consequentialist story to be told, while insisting that governments' obligations to attend to the welfare of those they govern are special in a way that cannot be fully explained in consequentialist terms. What's important here is not the disagreement just noted, but the common ground: governments have special obligations to their own citizens.                    

Here is a second bit of common ground—or, should I say, something that should be common ground. Rich countries do not give enough aid to poor countries. We can reasonably disagree over the amount of additional aid that is called for, and the extent to which giving additional aid is a moral obligation, an example of enlightened self-interest, or a bit of both. But I argue, as do many others, that it is obvious that rich countries, including the country I come from, ought to give more aid to poor countries.

Putting these two bits of common ground together would seem to deliver a clear answer to our question. What we learn is that the special obligations governments have to their own citizens are consistent with acting in a way that transfers goods from their own citizens to the citizens of other countries. If that were not true, it would automatically be the case that rich countries give enough aid to poor countries—giving aid is transferring goods—and they don't. Increasing the number of vaccines sent to poor countries by reducing the number of booster vaccines available in rich countries is transferring goods from rich countries to poor countries, and that's what we should be doing. Of course, the extent to which we should be doing this is another question, but all the evidence is that the goods transferred in this case would not be excessive—having two shots offers a lot of protection, especially against serious illness—and we can be confident that rich countries will respond in ways that mean that the total number of vaccines available will go up over time. Booster vaccines in rich countries would be delayed, but not eliminated.

Why then do I say above 'seem to deliver a clear answer'? We will have to learn to live with COVID in somewhat the way we now live with influenza. We should not think of vaccinations against COVID as one-off events. They will have to be annual events. This means that what poor countries need most is help to ensure that their citizens receive a COVID vaccine on an annual or near-annual basis. I worry that diverting booster vaccines from rich countries to poor ones may delay what's of most importance in the long term. Diverting booster vaccines will be very unpopular and will come with substantial political costs. I worry that the backlash from citizens in rich countries may make it difficult for the rich nations—or their leaders, many of whom need to win elections—to do what is best in the long term.

Here is a way to think about my worry. What is best is: (A) In the near term, diverting a large number of booster vaccines from rich countries to poor ones—there would not be much point in diverting a small number—combined with rich nations taking steps to increase vaccine availability around the world in the years to come. In evaluating (A), it is important to bear in mind the point that there would be booster vaccines available in rich countries down the track. The loss to citizens in rich countries would not be that great. What is second best is: (B) Not diverting large numbers of booster vaccines from rich countries to poor ones in the near term, but rich nations taking steps to increase vaccine availability around the world in the years to come. What is worst is: (C) In the near term diverting large numbers of booster vaccines from rich countries to poor ones, with this having the consequence that rich nations fail to take steps, or unduly delay taking steps, to increase vaccine availability around the world in the years to come. In sum: (A) is possible and would be best, but is risky. Perhaps too risky—were large numbers of booster vaccines diverted, we might well end up in (C), which is the worst outcome. It may well be smartest to settle on (B).
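To see the shape of the worry in miniature, consider a toy expected-value comparison. The utilities and the success probability below are invented purely for illustration; the point is only that settling on (B) can beat aiming for (A) whenever the chance that diversion ends in (C) rather than (A) is high enough.

```python
# Illustrative only: invented utilities for outcomes (A) best, (B) second,
# (C) worst, and an invented probability that diversion actually yields (A).
u_A, u_B, u_C = 10, 7, 2
p_A = 0.4  # assumed chance that diverting boosters leads to (A) rather than (C)

ev_divert = p_A * u_A + (1 - p_A) * u_C  # aiming for (A): 0.4*10 + 0.6*2 = 5.2
ev_settle = u_B                          # settling on (B): 7

print(ev_divert, ev_settle)
# With these numbers, diverting beats settling only when
# p_A > (u_B - u_C) / (u_A - u_C) = 5/8.
```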

Calls to suspend boosters are empty sloganeering

Arthur Caplan, Mitty Professor of Bioethics, NYU Grossman School of Medicine 

At present, the US would appear to have a ‘stockpile’ of roughly 500 million doses in reserve. Some other estimates report that this surplus could grow to slightly over a billion doses by the end of the year at present manufacturing rates, presuming a minimum of two doses per recipient.

The Brookings Institution estimated the need for vaccines in the US for the rest of the year as follows: vaccinating the rest of American adults takes about 500 million doses. Then add kids over five, another 25 million people, requiring 50 million more doses. That amounts to 550 million doses. Booster demand starting in October is unknown, but let’s presume another 100 million. That makes 650 million doses in the USA by the end of the year. So, if this demand is to be met, 500 million ‘surplus’ doses might well be available to help 250 million unvaccinated persons outside the US.
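Taken at face value, the arithmetic runs as follows. This is a rough, illustrative sketch, assuming the "slightly over a billion" year-end figure means roughly 1.15 billion doses; that reading, like all the inputs, comes from the estimates quoted above, not from any official source.

```python
# Rough sketch of the dose arithmetic quoted above. All inputs are the
# author's estimates; the year-end total is an assumed reading of
# "slightly over a billion" doses.
year_end_supply = 1_150_000_000       # assumed total doses on hand by year's end
remaining_adults = 500_000_000        # doses to vaccinate the rest of US adults
children_over_five = 25_000_000 * 2   # 25M children x 2 doses = 50M doses
boosters_from_october = 100_000_000   # presumed US booster demand

us_demand = remaining_adults + children_over_five + boosters_from_october
surplus = year_end_supply - us_demand   # doses left for first-time use abroad
people_reachable = surplus // 2         # at two doses per person

print(f"{us_demand:,} US doses; {surplus:,} surplus; {people_reachable:,} people abroad")
# 650,000,000 US doses; 500,000,000 surplus; 250,000,000 people abroad
```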

Presuming the nation will not hold more supply for future ‘boosters’, the actual amount of vaccine available for first-time use outside the US is hardly sufficient to vaccinate the world, or anything remotely close to it, when billions of people worldwide are in need of two shots. Indeed, it might well be more effective to drive toward herd immunity in the USA than to give scattered supplies to a small number of nations. Still, the WHO and others say boosters ought to be suspended. This claim is not persuasive on either practical or moral grounds.

Is the term ‘booster’ even appropriate? Pfizer and Moderna appear to be three-shot vaccines, not two. My friend, the vaccinologist Dr. Stanley Plotkin, argues that, to achieve optimal immune memory, non-live virus vaccines like the inactivated polio and hepatitis B vaccines require two or three doses over a period of 4 to 6 months. After the first two doses of mRNA vaccines, antibodies and immunity also decline considerably. Thus, the mRNA vaccines require a third dose to achieve maximal protection. The talk of boosters is just wrong; we need to be talking about a three-dose vaccine. Getting nine months of declining protection from partial vaccination is not sufficient to control deaths and hospitalizations, or continuing viral spread, in the USA or anywhere else. If, as I maintain, it is moral to look out for the health needs of one’s community and nation, it is important to reach herd or community immunity to gain maximum impact from vaccination. And it is important to ensure adequate protection for vulnerable American citizens before turning to other vulnerable persons.

Globalist views that treat all lives as equal are overly simplistic. They do not reflect the duties owed to family, friends, neighbors, communities, allies, and one’s nation. Even the WHO, organized as it is around nation-state members, implicitly recognizes this moral fact. While these duties to the ‘near and dear’ have limits, they are hardly trivial, and ought not be ignored or dismissed out of hand.

Calls for crude globalism to guide American allocation are further confounded by the lack of infrastructure in many places to distribute hard-to-handle vaccines rapidly, and by the absence of any plans for how equitably other nations would distribute vaccines to those most likely to benefit. Corruption and ethnic strife further complicate what can be done in some parts of the world.

If the USA is going to share some surplus, which it could, it is also not clear where that surplus ought to go. To its neighbors and allies—Mexico, Canada, Haiti, Japan? To those nations most likely to handle distribution fairly? To those places whose people are most at risk of death from COVID? Or to those facing the widest and most rapid spread?

Those favoring global vaccine distribution prior to the completion of effective vaccination in rich nations need to argue for more than ‘help the world’. It would be irresponsible to try to maximize the worth of vaccination assistance without agreement on where, how, and why certain populations would receive aid first, once a reasonable effort has been made to fully protect all Americans, including through tough mandates to achieve that goal. Issuing calls to suspend boosters to help the world is empty sloganeering: more detail is needed at both ends. A vaccine surplus and a distribution strategy are needed to get any real political traction.

Boosters for Americans may be necessary for vaccinating the world

Nir Eyal, Director, Center for Population-Level Bioethics

The bulk of humanity has yet to receive its first COVID shots. Yet the US, following several other rich nations, is about to offer a third dose of COVID mRNA vaccine to any American whose full mRNA vaccination happened more than a few months earlier. That third dose would ultimately come from the same pot of money that would have provided early doses to the rest of the world. What should we make of that?

Initially, it may seem as though the decision to offer Americans a third dose, beneficial and safe to recipients as it may be, cannot be justified. Patients in other countries need these doses far more than Americans do, and have not had their chances to protect themselves yet. Some American patients are vulnerable, but there are vulnerable patients elsewhere too, and the US is not limiting third doses to its most vulnerable patients. Yes, the third dose should provide more lasting immunity than did the first two doses, but one could wait a bit longer to provide it while providing vaccines for an even needier world. Yes, America and other nations with the means to do so funded vaccine development, but trial participants came from many countries and we rarely think that being in a position of privilege should accord a person or group priority  in the allocation of life-saving resources. Yes, vaccine delivery can be tricky in poor countries, but it wasn’t easy convincing Americans to get vaccinated. Yes, vaccinating populations that are comparatively unvaccinated is not always agent-neutrally better than vaccinating populations that are comparatively vaccinated, but typically it is. And yes, decisions on prioritization and mechanics would need to be made once one became serious about vaccinating the entire world, but that should not be taken as an excuse but as a prompt to go ahead, make those decisions together, and act.

All in all, it is hard to imagine numbers working out such that humanity would gain more health or utility from Americans’ getting their third doses than from Africans getting their first doses. In addition, people who are less wealthy and less vaccine-protected than Americans ought to receive some priority, from an impartial standpoint, for reasons of equality or of priority to the worse off. Altogether, the impartial case for others to get their first shots before Americans get their third is so extreme that any plausible level of partiality to one’s compatriots cannot offset it morally. And like everyone else, Americans need almost everyone around the world to get vaccinated. Only global immunization would save the world economy of which America is part, and reduce the intensity with which new and more dangerous strains batter Americans.

But this picture assumes that the US’ third dose and the bulk of doses needed for the world must come at each other’s expense. That would be the case if an economically rational calculator presided over a shared pot of resources and computed how much is left for the world in light of what was expended locally. But that is not how real-world politics and economics work. Except in the short term, it is far from clear that if the US fails to give a third dose to hundreds of millions of Americans, the chance of its donating the many billions of doses that it must send overseas would increase: the former is small change compared to the latter. Nor is it clear that if the US gives that third dose to Americans, the chance of its also donating the billions of doses needed abroad would decline. Before a Democratic US administration dreading the midterm elections can ask a more conservative US public to embark on the necessary “mega-PEPFAR” or “Marshall Plan” overseas, it may need to satisfy Americans that they have already been offered all the vaccines money can buy them. Only then would American hearts and minds, which have so far shown indifference to the plight of the world, arguably open themselves to the acute humanitarian and pragmatic case for sending massive aid abroad. Inasmuch as this is how things are likely to work, the need to vaccinate the world against COVID, which is currently and perplexingly failing to receive the full attention it deserves, counts for, and not against, giving Americans the third dose.

Individual Americans may ask themselves whether it is ethical for them to get the third dose, when their government and other rich donors fail to address the bulk of acute global need. Large federal campaigns should invite Americans to “offset” any adverse impacts of their individual doses by donating generously to COVAX and to other schemes to vaccinate the world. Americans should also sign petitions calling on their government to donate more to such schemes. The cumulative impact could be substantial, and help ensure that the third doses they benefit from do not come at others’ expense.

An interesting theoretical lesson in this is that the way to allocate goods is sometimes more complex, and more attuned to realistic predictions, than priority-setting experts usually imagine. We should not simply rank options from best to worst (by their effectiveness, cost-effectiveness, and so forth) and distribute our limited funds in that order. We should also remember the potential impact of funded options on the overall amount of resources available for distribution. Even when giving the haves a third dose is ranked lower than giving the have-nots a first dose, directly doing the former may remain the recommended course of action—when it can be shown to be the likeliest route to achieving the latter, by enabling the necessary massive increase in the overall amount of resources available to all.

Planning, and failing to plan, for vaccine equity

Monica Magalhaes, Program Manager, Center for Population-Level Bioethics

National governments have special moral obligations (not to mention electoral and other incentives) to prioritize the interests of their own residents over the similar interests of people abroad. That is why any plan to vaccinate the world even slightly more equitably, against COVID and against future emerging infections, requires an institution committed to the interests of the world population, not to those of a particular country’s citizens.

COVAX aims to be something along these lines, but is so far failing to meet even its modest equity goal of vaccinating 20% of the population of the 92 low- and middle-income countries eligible to access vaccines through its donation-funded low-cost mechanism. Given that all vaccine developers and manufacturers are attached to one or to a set of governments, poor countries that lack developers and manufacturers in their territory will predictably be left behind, along with any supra-national initiatives that attempt to distribute vaccines (more) impartially.

Considering that national governments have effective veto power over equitable vaccine distribution initiatives, any such initiatives will find themselves subject to the form of hijacking described by Nir Eyal in this Dilemma: until the governments that control vaccine production and supply believe their populations are adequately protected, vaccines will not be made available to other countries in amounts sufficient to enable them to reach collective immunity. The standards for adequate protection may well become more demanding over time, as they already have in the case of COVID, increasing from 2 to 3 vaccine doses. Limited supplies of vaccines will, then, be primarily devoted to meeting the ever-increasing needs of a few nations, and remain out of reach for many. In the meantime, countries that have excess vaccines will use them (and the relative deprivation of other nations) to their own further advantage.

Given the limited supply of vaccines, third shots come at the expense of even the first shots of the most vulnerable in some of the poorest places in the world. Third doses—not to mention unused stocks belonging to countries that have more than they need to vaccinate their entire populations—could have gone to COVAX, or been sold at lower cost directly to nations desperate to buy them (not the very poorest nations, which must rely on COVAX alone, but still nations needier than the US).

It is true, as Arthur Caplan lays out in this Dilemma, that “vaccinate the world” is a slogan, not a plan, and that a plan towards equity requires a vaccine surplus and a distribution strategy. Making and implementing such a plan is challenging—especially in the face of existing incentives to the contrary—but not (as he may be suggesting) impossible or irresponsible. A vaccine surplus requires ramping up production, the only way to ensure that greater protection for some countries, in the form of a third dose, will not come at the expense of any protection for others. That in turn requires investment and action by governments on behalf of global interests, for example on relaxing intellectual property rules. In addition, a concrete plan for vaccine equity requires the willingness to help unvaccinated countries distribute the vaccine equitably to their populations—especially since storing and transporting some of the vaccines requires infrastructure that only a few countries have throughout their territory. Plans for equity in future pandemics require more ambitious supra-national thinking, such as moving towards compensating drug developers based on health impact, to change (at least some of) the incentives that lead to predictably inequitable access to vaccines.

Without a vaccine surplus, a distribution strategy, and changes to the current incentive structure, we are indeed failing to plan for global vaccine equity. We can still believe that current inequalities in access to vaccine, exemplified by the rollout of third doses in rich countries, are unjust, and that greater global equity should be our goal. This legitimate goal spurred the creation of COVAX, and should spur better planning and implementation of plans for equity in the future.

COVID-19: A global problem needs a global solution

Malabika Sarker, Professor & Director, Centre of Excellence for Science of Implementation and Scale-Up, BRAC James P Grant School of Public Health, BRAC University (Bangladesh)

Since COVID vaccines first became available, five billion doses have been administered, and about 25% of the world’s population has been vaccinated. Yet the number of cases and deaths continues to rise because of the failure of global vaccination. Access to vaccines is not just about receiving a jab; it depends on a mixture of political, social, economic, technical, and infrastructure factors. Fewer than 2% of adults in most low-income countries have been vaccinated, compared with almost 50% in high-income countries. Despite the World Health Organization’s suggestion that global policymakers halt third doses for the time being, high-income countries decided to offer the booster dose to their citizens. Such a decision increases global inequity, which is unethical and shows a lack of compassion and empathy.

There are several pragmatic reasons why high-income countries should halt the booster dose, most of them reasons of self-interest.

The first set of reasons is scientific. Foremost, the booster dose is unnecessary. Two doses already provide strong protection against both the Alpha (93%) and Delta (88%) variants. Therefore, only a small share of vaccinated populations will be infected despite vaccination. Other vaccines currently in use also seem to be more than 90% effective against hospitalization and death from COVID. According to the latest data presented by the US Centers for Disease Control and Prevention, 99.99% of people fully vaccinated against COVID in the USA did not require hospitalization or die. In addition, among those who have had an organ transplant and are receiving immune-suppression, almost half had no antibody response after two doses of vaccine.

Secondly, the nationalist argument is not only unwise but also short-sighted. Unlike in 1918, we now live in a globalized world where migration is a daily phenomenon. Therefore, unless the number of COVID cases is reduced worldwide, the pandemic will continue, new variants will very likely keep emerging, and another pandemic will be all but inevitable.

Thirdly, it is plainly unethical to throw away unused vaccines in high-income countries while low-income countries do not have enough vaccines for their populations.

Finally, if COVID continues, the production of many goods and materials will be disrupted in low-income countries, which will also have an economic toll on high-income countries.

The COVID pandemic is a global problem that cannot be solved at the national level alone. The decision to offer boosters repeats the attitude and approach towards climate change that we have seen for the last 20 years. If the consequences of climate change do not teach us the negative impact of nationalist policies that serve the profit-driven agenda of a few, we will not have to wait long for another catastrophe.

Are there ethical (as opposed to pragmatic) reasons not to mandate COVID vaccination?

Overview of the dilemma

Nir Eyal, Director, Center for Population-Level Bioethics

US-authorized COVID vaccines are safe and efficacious. Some are now fully approved. But in many parts of the US, vaccination rates are lower than they should be, risking population health and our struggle to thwart worse emerging strains. Non-coercive measures to increase vaccination rates, such as information campaigns, improving access, and offering prizes, have all had some impact, but many Americans remain unvaccinated. Among those who declare that they “probably or definitely will not” get vaccinated, the needle has hardly moved in months. About 3-6% of Americans have so far not gotten vaccinated and state that they would do so “only if required”.

The Federal government, employers, service providers, and schools are issuing more, and more expansive, vaccine mandates. Is it ethically permissible to require even more Americans to get vaccinated? Such a mandate can involve fines, elevated insurance premiums and deductibles, denial of essential services to those refusing vaccination (as well as frequent testing), exposure to tort liability if one is confirmed as the person whose nonvaccination injured another, lower priority for any necessary but scarce COVID care, or still other coercive measures far short of physically forced vaccination.

Vaccine mandates can be conditional on a person’s procuring a service, visiting a physical site, or working in a certain workplace, or, alternatively, unconditional. The former, conditional, mandates can be for procuring an essential service (e.g. clinical care, a bus ride) or an inessential one (e.g. a cruise, in which case calling vaccine requirements “mandates” may be misleading). The mandates can be laid down by local, state, or federal governments, or by (other) employers and service providers. Mandates can admit of medical, religious, and/or “philosophical” exemptions, or not. These exemptions can be defined strictly or loosely. The mandates’ goal can be stated as protecting the unvaccinated person, fellow clients and workers, or public health in general; forcing all to internalize their externalities, as is fair; or still other goals.

While there are differences between these many forms of mandating vaccination, would any of them be ethical, or is there a problem with all? Currently, the most respectable arguments against mandates tend to be pragmatic. For example, vaccination rates are now increasing in areas of the US with sparse vaccination (partly because the COVID risks are becoming plainer), perhaps making unpopular impositions redundant, or even counterproductive or otherwise harmful. But this argument is about PR or politics, and not, fundamentally, about ethics. It does not assume that there is anything inherently wrong with vaccine mandates.

The present Dilemma asks whether there is anything inherently wrong with vaccine mandates. Are they contrary to liberty, autonomy, ownership over one’s body, or still other putative ethical values? Should vaccination (against COVID or other diseases) be mandated whenever it serves public health and lowers health care costs, even when the (net) risks or burdens to the individual being vaccinated are high? (For COVID vaccine mandates with medical exemptions, the risks and burdens for the individual are very low and the net risks are typically negative.) Is it permissible to mandate vaccination that keeps infection and transmission rates unaltered (perhaps a future reality for authorized COVID vaccines against some future strains) because it could still prevent severe disease and save the person being vaccinated? Does the fact that refusal to get vaccinated rarely poses a serious risk to any identified other (and more often risks many others by a bit each, with only the cumulative risk being serious) make it a victimless crime? Does it matter how “active” we should consider a decision not to get vaccinated, which results in our actively spreading germs?

As additional food for thought, if “philosophical” exemption means in practice only seeking an exemption based on factual error of one kind or another, does such exemption never make moral sense? Does it matter how the needle’s penetration into a person’s body is culturally interpreted—what its “thick meaning” is in one’s community, in broader society, or in one’s personal realm of meanings (say, if one was a victim of earlier trauma)? If we had to propose a formula to calculate when and for which diseases and vaccine products legal and practical vaccine mandates are also ethically appropriate, what variables would populate that formula, and how would they mathematically relate to one another?

The case against mandatory vaccination

Simon Clarke, Associate Professor in Political Science, The British University in Egypt

Opposition to mandatory COVID vaccination may take many forms, from suspicion of government motives to suspicion of the vaccines themselves. But if these concerns are set aside, and we consider a sincerely benevolent government trying to protect and promote the health of people with a reasonably safe and effective COVID vaccine, are there any principled ethical reasons not to mandate it? A plausible answer is that individuals have the right to decide what happens to their own bodies and that compulsory vaccination would violate such a right.

The right to decide what happens to our bodies, sometimes called the right to bodily autonomy or the right to bodily integrity, is an important universal human right. It accounts for the wrongness of various crimes such as assault and rape, and also accounts for the wrongness of people being subjected to medical experimentation without their informed consent. The right can be thought of as part of what it means for us to be rational, self-determining creatures, and as a reflection of the Kantian principle that people must be treated as ends in themselves and never merely as means (as Nozick writes in chapter 3 of his Anarchy, State, and Utopia). The right can also be thought of as one of the basic capabilities that must be possessed by each person.

Does this right provide a compelling argument against mandatory COVID vaccination? It may be objected that people have a right not to be infected by others with potentially lethal diseases such as COVID. (In fact, it may be the same principle of bodily autonomy that explains why that is.) So we seem to have a clash of rights to bodily autonomy. On the one hand, individuals have a right not to be forced to be vaccinated, but others have a right not to be infected by the unvaccinated. Which right should have priority? There are several reasons why the first right might trump (sorry) the second.

First, there is a moral difference between intentionally harming and merely foreseeing harm without intending it. ‘I didn’t do it intentionally’ is sometimes a justification or an excuse for an action that would otherwise be wrong. Consider, for example, someone’s failure to meet their friend at an agreed-upon time and place: forgetting to is excusable, but deliberately failing to is a slight. The distinction, central to the doctrine of double effect, is considered crucial by some moral philosophers. Its relevance for vaccination is that non-vaccinators do not intend to infect others, even if they foreseeably do so. What they intend is to decide which vaccines to take into their own bodies. So even if infecting others violated those others’ rights, it is not an intentional action, whereas vaccinating people by mandate is an intentional act, and therefore (on the doctrine of double effect) worse than non-vaccinators infecting others.

Second, there is a distinction between acts that definitely have some undesired effect and those that carry merely a probability (possibly even a low probability) of an undesired effect. Most activities have some risk associated with them. Driving a vehicle or merely going for a walk outside could harm someone else, e.g. by accidentally tripping and falling against them. But these activities should not be prohibited, because the probability of the harm occurring is small. Non-vaccinators could argue that their harming others is only a probability, not a certainty. Of course, the reply is that the probability of infecting others with COVID, especially the Delta variant, is very high—much higher than the probability of falling against someone when taking a walk outside. But this probability may be reduced if non-vaccinators wear masks and practice social distancing. A different reply might be that the risk to the vaccine-resistant is also a low probability, since they are unlikely to suffer any ill effects from a vaccine mandate. But this is not how they will see it. It is not the risk of harm they would complain about, but the act of being vaccinated itself. They are being injected against their will, and it is that certain interference they have a right to refuse, as opposed to the mere probability of risk that they pose to others.

Third (and relatedly), the non-vaccinators may argue that those they put at risk bear some liability for that risk, because those others can do something that minimizes it, namely get vaccinated themselves. Non-vaccination poses a significant risk of infecting others, it could be argued, only if those others have not themselves been vaccinated, and if that is the case then they are partly to blame. Non-vaccinators, on the other hand, have done nothing that makes them liable for the rights-violation that a vaccine mandate imposes upon them.

Fourth and finally, it could be argued that rights-violations occur only when there is a clearly identifiable rights-violator and victim. It is in the nature of infectious diseases that we may not be sure who exactly infected a given person. Since the COVID pandemic began, contact tracing has developed and, with further developments, there may come a time when medical authorities will know precisely which non-vaccinated person caused which illness in others. But for the time being, there is an ambiguity in identifiability that weakens the case for saying a rights-violation has occurred when people are infected with COVID by others (which others?). The same ambiguity does not, of course, apply to the forcibly vaccinated; the state is the clearly identifiable rights-violator when vaccines are mandated.

I have tried to present the strongest argument I can think of against vaccine mandates (even though I am actually in favour of them). The reasons above together make up a cumulative case against mandatory vaccination: there is a clash of rights between non-vaccinators and those they endanger, but the former do not intend any harm—a harm which is only possible, not certain, for which others are partially liable, and for which no one is clearly identifiable as the rights-violator. If we take rights seriously (to paraphrase one prominent defender of moral rights), there is a case against mandates that those in favour of mandatory vaccination will have to address.

Government mandates might be OK, but there are better private solutions

Jessica Flanigan, Richard L. Morrill Chair in Ethics and Democratic Values, University of Richmond 

When is it permissible to require someone to receive a vaccine? Long before the COVID pandemic, I defended vaccine mandates on the broadly libertarian grounds that contagious transmission can violate people’s bodily rights, and public officials can permissibly enforce policies that protect citizens’ bodily rights. I think a vaccine mandate could be justified in principle, but I doubt that a COVID vaccine mandate is justified at this point in the pandemic.

A vaccine mandate is only justified if compulsory vaccination is an effective way to reduce rates of contagious transmission and if the risks of contagious transmission exceed the risks associated with enforcing a vaccine mandate. If public officials can promote vaccination and prevent contagious transmission in non-coercive ways, e.g. by using incentives for vaccination, then they should use those other strategies instead.

The case for compulsory vaccination is stronger to the extent that an illness is severe and highly contagious. Vaccine mandates are not justified for non-contagious illnesses, such as tetanus. Officials also lack the authority to enforce vaccine mandates to prevent illnesses that people can avoid by abstaining from activities that risk transmission, such as HPV.

Public officials can, in principle, enforce vaccine mandates as a last resort if mandates are necessary to prevent widespread mortality and if enforcing a mandate is likely to promote health and wellbeing, on balance. Given this standard, I’m not convinced that a universal COVID vaccine mandate is justified. I’m also wary of a weaker mandate, such as the Biden administration’s recent request that OSHA order large employers to mandate vaccination or testing for COVID. This skepticism is based on a normative judgment that the risks of a mandate, at this point, probably exceed the risks of not implementing a mandate.

Consider three risks associated with a government mandate. First, a mandate could be ineffective at promoting health on balance, e.g. by prompting people to become more suspicious of public health officials and vaccination in the long term. Second, a mandate could be enforced in an unjust and disproportionately harmful way. Third, even a justified mandate could set a precedent for unjustified restrictions on bodily autonomy and workplace freedom. These are empirical predictions, and I don’t have solid evidence that a mandate would backfire in these ways. But I do think that officials should take these risks more seriously before imposing a mandate on private businesses and citizens.

At the same time, many people can avoid the worst risks associated with contagious transmission by becoming vaccinated voluntarily. A vaccine mandate could theoretically be justified as a way of protecting unvaccinated children from the risks of transmission, but this justification is not available to public officials in the case of COVID, because vaccines are unavailable to children only because the CDC discourages providers from administering the vaccines to kids off-label. If COVID vaccines become less effective, if hospitalizations further increase, or if COVID becomes much deadlier in ways that people cannot avoid, then officials would have a stronger case for mandating vaccination as a way of achieving herd immunity and protecting people’s rights against contagious transmission.

So before public officials resort to vaccination requirements, they should use non-coercive incentives and encourage private citizens and businesses to voluntarily require vaccination in their businesses.

On the other hand, public officials can legitimately require that government employees become vaccinated, just as private businesses may legitimately impose other safety policies for their employees. Whether public officials can require vaccine passports for the receipt of public services depends on whether citizens have an entitlement to those services and whether they can waive that entitlement by refusing a vaccine.

Private companies should be permitted to voluntarily require vaccine passports for customers. And employers can ask unvaccinated employees to pay for increased healthcare costs associated with remaining unvaccinated too. Since a vaccine mandate is only justified if it is necessary to protect people’s rights against contagious transmission, people who are concerned that the government could abuse their authority to mandate vaccination should be especially supportive of these private solutions to public health problems, which could make governmental mandates unnecessary. 

Safety, certainty and vaccine mandates

Bridget Williams, Postdoctoral Associate, Center for Population-Level Bioethics

President Biden has just taken the step of mandating vaccination for large swathes of the American population. He introduced the mandate by saying: “Many of us are frustrated with the nearly 80 million Americans who are still not vaccinated, even though the vaccine is safe, effective, and free.” Although a mandate is unlikely to sway those most staunchly opposed to vaccines, mandates create an incentive structure that can motivate the hesitant and those for whom vaccination wasn’t a priority (often for legitimate reasons).

As has been outlined in the introduction to this Dilemma, many of the objections to mandates are practical—e.g. that mandates are difficult to enforce, ultimately not as effective as other options, or are likely to cause other harmful impacts. There are, however, clear ethical arguments in favor of mandatory vaccination—that liberty can be infringed to prevent harms to others, and that mandates create fairness in achieving a public good. Both arguments apply in the case of COVID vaccination. COVID vaccines prevent harms to others by reducing the chance that a person will contract and transmit the infection to others, including to those who aren’t able to be vaccinated. Some countries (e.g. Australia) have made vaccination coverage targets part of their requirements for releasing movement restrictions (to avoid overwhelming health system capacity), and reaching these targets is a collective effort.

However, these arguments presuppose “safe” vaccines, and determining when a vaccine is sufficiently safe to be mandated is itself an ethical question. It has been argued that making a COVID vaccine mandatory would be unethical due to the uncertainty in the safety profiles of these relatively new vaccines. Indeed, safety concerns are the most common reason cited for COVID vaccine hesitancy. According to Biden, COVID vaccines are “safe”. But how safe is safe enough for a mandate? And how certain do we need to be in our estimate of safety?

These are complicated questions, which I cannot answer comprehensively here. A reasonable starting point would be to require that the known and likely benefits of the vaccine outweigh the known and likely risks to the individual. This still leaves the question, though, of how much data is required for us to be able to assess ‘likely’ risks and benefits. And how should this requirement be influenced by the degree of harm that a person who chooses to remain unvaccinated poses to others? Should a greater risk of harm bring less stringent safety requirements?

Rather than attempting a general answer to these questions, it might be easier to consider the specifics of COVID, as they are at present.

Although COVID vaccines were not associated with serious side effects in clinical trials, rare side effects have emerged since the trials. The association between the AstraZeneca-Oxford vaccine and vaccine-induced immune thrombotic thrombocytopenia led many countries to restrict the use of this vaccine to older adults. Cases of myocarditis and pericarditis have also raised questions about whether vaccination with the Pfizer-BioNTech vaccine is in the interests of young men, although such cases have been rare and time-limited.

However, the appearance of these side-effects has generated concern about other side effects emerging, particularly potential longer-term unknown side effects of the vaccine. Should the possibility of further unknown side effects prevent a vaccine mandate?

COVID vaccines have now been administered to billions of people, and with many large national campaigns commencing in late 2020, we now have many months of data on many millions of people. It is possible that further adverse events will emerge as even more of the population is vaccinated. Yet if these do emerge, they will be even rarer than the already extremely rare side effects that weren’t detected in clinical trials, so they seem unlikely to change the overall risk-benefit estimate greatly. It is also possible that longer-term side effects will emerge at a later date. However, the uncertainty cuts both ways—COVID infection may also have currently unknown longer-term effects. It would also be unusual for vaccines to cause longer-term side effects. So, although some uncertainty persists, for the majority of adults, at this point in time, we can be fairly confident that contracting COVID carries far greater risks than vaccination does.

However, for children and adolescents, the situation is less clear. COVID poses smaller risks to children, and there are few data available regarding the safety of COVID vaccines for children. Unlike the US CDC, which encourages the use of the Pfizer-BioNTech vaccine in children over 12, the United Kingdom’s Joint Committee on Vaccination and Immunisation (JCVI) has recently advised against widespread vaccination of 12-15-year-olds. It suggests that COVID vaccination is likely to be marginally in the interests of healthy children of this age, but that this margin is too small to justify widespread vaccination.

At this point in time, it seems we should be confident that at least some COVID vaccines clearly bring greater benefits than risks to most adults. This seems like an acceptable bar for deeming a vaccine safe when considering a mandate (assuming there are reasons for the mandate). However, whether COVID vaccines provide greater benefits than risks to children and adolescents is less clear. If the risks to others were large enough then it seems likely that some uncertainty should be tolerated. Overall, whether the current level of risk posed by COVID is sufficient to relax this safety requirement is highly uncertain. It seems likely that a mandate would require a greater level of confidence of a “safe” vaccine for this group.

The case for mandatory vaccination

Nir Eyal, Director, Center for Population-Level Bioethics

A vaccine mandate that employs non-invasive means (e.g. increases in health insurance premiums, not physically forced jabs) and accommodates narrow exemptions does not violate any rights of the vaccinated person. If, in practice, such a mandate is effective and free from terrible societal side effects, it is ethical.

The basic indication that such a vaccine mandate violates no rights is the broad consensus that “your right to swing your arm leaves off where my right not to have my nose struck begins” (John B. Finch), accepted by philosophies, political ideologies, and religions of all stripes and creeds. Law enforcers’ intentional intrusion into your liberty to swing your arms would be permitted when likely to protect my nose. It would be permitted even if you are unaware whose nose you would hit (indeed, even if that is determined randomly) and even if you are unaware that you would hit a nose. An intention on your part to smack is unnecessary. The authorities may stop you from striking others intentionally or unintentionally. Surely, then, the law is permitted to stop you from spreading into other people’s noses a virus that may kill them or leave them disabled. That you do not know whom you would jeopardize or harm, or are too ignorant to understand that this is what nonvaccination does, and entertained no intention to kill anyone, is neither here nor there. 

There are some differences between smacking noses and spreading lethal viruses. First, a smack is sure to cause small pain, whereas spreading lethal viruses is only statistically related to harm. But the harm, when it materializes, is death, major trauma, or long COVID. So the expected disutility may well be greater for some individuals than that of being smacked, and the cumulative expected harm is certainly greater: you may end up causing multiple deaths.

Second, flinging one’s arms is an action, whereas failing to get vaccinated is an omission. But if the wind blew your arms and you could have stopped them from smacking my nose but failed to, it would be permissible for the law to stop your arms from smacking it.

Third, your arm is part of your body, whereas the virus you spread is not. But if you held a stick in your hand and the direct danger to my nose came only from the stick not from your arm, the law could still permissibly stop your arm from swinging, in order to protect my nose from the stick.

Fourth, your decisions can pretty much ensure that my nose will be smacked willy-nilly, whereas people who may get infected by an unvaccinated person can always reduce their own chance of infection by getting vaccinated and by maintaining social distance. But the mandate remains justified: even vaccination and masking do not guarantee protection from unvaccinated persons’ infectious variants, which can still cause COVID. It is also true that some of us could have forever stayed indoors, far from other people, but few can afford to do so. Smacking noses is enforceably stoppable even if the person being smacked could have forever stayed indoors. And it remains enforceably stoppable even if she has the excellent judo skills to stop such smacks.

Fifth, stopping you from swinging your arm does not penetrate your body, and cannot do you any significant long-term harm, whereas vaccines enter the body and, rarely, have serious side effects. But, with narrow medical exemptions granted, authorized or approved COVID vaccines are much likelier to save anyone vaccinated than to harm them. And, with further narrow exemptions for e.g. those Christian Scientists for whom letting healthcare into their bodies is generally forbidden, a forced choice between a needle poke and paying higher insurance premiums is no more invasive than forcibly grabbing or hitting an arm to stop it from swinging—something that we all feel is legitimate.

For libertarians, in particular, the problem with your arm reaching my nose is not only that this may do me harm, but that it is trespass. Your arm has no business there without my consent, and the libertarian night-watchman state will stop you from helping yourself to my (bodily) property. Indeed, the libertarian watchman’s two-barrel rifle would already threaten you with severe punishment even if all you tried were to nonconsensually approach and rub my sore nose to my benefit. Libertarians should therefore have been the first to endorse an enforceable obligation not to extend your germs into other people’s noses, on pain of more severe penalties than paying higher premiums, simply because spreading your viremia to others is trespass. It is therefore astonishing that many libertarians oppose vaccine mandates, out of either misunderstanding or hypocrisy. And it is doubly astonishing that libertarian governors violate business owners’ liberty to decide whom to admit or refuse on board their own cruise ships and other private businesses based on candidate visitors’ vaccination status. Governors Ron DeSantis and Greg Abbott are here being flagrantly anti-libertarian, presumably out of misunderstanding of their own political ideology, toeing the party line at the expense of both ideology and public health, or utter opportunism.

I would personally support vaccine mandates that did the equivalent of stopping your arms from hitting and cutting off your own nose, even without affecting others. In my home country, Israel, soldiers in mandatory service are forced to undergo anti-tetanus vaccination for their own protection only. That saves lives, without leading to wider tyranny. In the US, wearing seat belts is mandatory despite libertarians’ historic false warnings of a surge in traffic accidents and tyranny. However, to endorse COVID vaccine mandates, one need not endorse such paternalistic mandates, to which libertarians may consistently object. Everyone can and should endorse effective COVID vaccine mandates with narrow exemptions. Such mandates would substantially protect both the net health prospects of the vaccinated individual and those of many other people.

Should “last hope” drugs get priority approval and funding over other drugs even when they otherwise offer a less favorable balance of risks to prospective benefits?

Overview of the dilemma

Emma J. Curran, Postdoctoral Associate, Center for Population-Level Bioethics

In a recent Wall Street Journal op-ed (“The FDA Could Help Save My Son From a Rare Disease”) Judy Stecker shares the story of her son, who suffers from juvenile CLN3 – a rare, and currently incurable, neurodegenerative disease. Stecker describes how, as the development of experimental treatments for CLN3 stalled, so too did the hope that they afforded her and her family.

Stecker and her son are not alone. For many terminal patients, last hope can often be found in therapies and drugs aimed at increasing the patient’s chances of near-term survival. A burgeoning public debate surrounds the funding and approval of so-called “last-hope” drugs. Access to these treatments is often limited because, due to their experimental nature, they carry insufficient evidential support of their efficacy, or because they are far too expensive to meet typical cost-effectiveness thresholds. But there are many attempts to carve out exceptions for last-hope drugs. Some of the most expensive and least cost-effective drugs funded in recent decades were last-hope drugs. The US Food and Drug Administration (FDA) has expedited approval processes when it comes to last-hope drugs, whilst the UK’s National Institute for Health and Care Excellence (NICE) has recommended exceptions to its general cost-effectiveness threshold primarily for “life-extending treatments at the end of life”.

Debates over last-hope drugs and therapies tend to center on whether gambling on under-tested treatments, which may ultimately turn out to be inefficacious or even harmful, is rational for terminal patients without other hope or, rather, something from which these patients ought to be protected. In favour of approving such experimental treatments, some point to the fact that, when the alternative is certain death, any chance of survival – even a chance too small, or too poorly evidenced, to satisfy the usual evidentiary standards – tends to be a prospective benefit. Moreover, whilst a number of experimental treatments might fare no better, on average, than a placebo, if the outcomes feature high variance then it may still be rational for patients to take these treatments in the hope of being one of the ‘lucky’ ones. Finally, allowing patients to choose for themselves whether they wish to gamble on these drugs may be an important part of their autonomy rights. On the other side, critics point out that terminal patients’ desperation renders them vulnerable to being persuaded to use costly, exhausting, or painful treatments which offer meager chances. Further, critics say that if patients could take experimental drugs outside clinical trials, recruiting for those trials – needed to produce the necessary evidence – would become even harder.

Stepping beyond these traditional debates, in this Dilemma we take seriously the view that hope is good for the patient in itself – it has a value beyond the improvements that an experimental treatment might make to a patient’s prospects of near-term survival. After all, as a state of mind, hope typically beats despair.

Our Dilemma encompasses two sets of questions. The first concerns whether and how hope might be good for patients: is more hope always good for dying patients? What about the bitter disappointments which come along with excessive hope? If the hope is false, is it still good for them? Or worse, what if the hope is based on deceit? Might hope be very good for some of us and less good, or even bad, for others? Can a mental state of hopefulness arise without any increased belief in one’s prospects, such as by accentuating in one’s mind what is possible instead of what is impossible?

The second set of questions concerns the policy implications of this discussion. Assuming that giving patients hope is good for them, how would – or should – that assumption affect public health systems more broadly, including decisions about approval, funding, and otherwise facilitating access to non-life-saving treatments? And if hope is good for patients, how might that affect the way we calculate the effectiveness of a treatment – ought hope to be included among its outcomes? If so, how do we do so in a manner that recognizes that different patients may place different values on maintaining hope?

Risk aversion about remaining lifespan would militate against “last hope” treatments

Richard Cookson, Professor, Centre for Health Economics, University of York

Is the “value of hope” a good reason for governments and insurance companies to pay a price premium over and above what they would normally be willing to pay for a new life-extending treatment?  There are plenty of other potential justifications for a price premium, including (1) cost-effectiveness, (2) severity of illness (the size of the expected health loss due to this condition), (3) potential for benefit (the size of the average expected health gain), and (4) health equity impact (the impact of the new treatment on reducing social inequality in health).

To focus specifically on the value of hope, let us consider an example that holds these other four factors constant. So let us imagine two treatments that are equally cost-effective and offer the same life expectancy gain of 3 years to the same group of patients with the same severity of illness and the same social advantage characteristics. The only difference lies in the hope of a long remaining lifespan – one treatment is a “last hope” treatment that provides greater hope of surviving more than 5 years. This example is illustrated in the figure below.

[Figure: hypothetical survival prospects under the two treatments, each adding 3 years of life expectancy; the “last hope” treatment offers a greater chance of surviving more than 5 years, balanced by a greater chance of short survival.]

If people are risk-seeking, then they will prefer the “last hope” treatment. But if they are risk-averse, they will prefer the standard treatment. Because we are holding effectiveness constant – i.e., the same life expectancy gain of 3 years – there is an inescapable mathematical trade-off between greater hope of long survival and greater risk of short survival. The two must balance out to yield the same remaining life expectancy. People who are risk-averse will dislike risk of short survival more than they like hope of long survival. Whereas people who are risk-seeking will like hope of long survival more than they dislike risk of short survival.
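To see the arithmetic of that trade-off, here is a minimal numerical sketch. The two lotteries and the utility functions below are invented for illustration, not taken from the figure or from any study; both hypothetical treatments add exactly 3 years of life expectancy, but they split the probability mass differently.

```python
# Illustrative only: two hypothetical treatments as (years, probability) lotteries,
# both with the same life expectancy gain of 3 years.
standard  = [(3.0, 1.0)]                  # 3 extra years with certainty
last_hope = [(0.5, 0.75), (10.5, 0.25)]   # usually short survival, sometimes long

def expected(lottery, f=lambda t: t):
    """Expected value of f(survival time) under a (time, probability) lottery."""
    return sum(p * f(t) for t, p in lottery)

assert expected(standard) == expected(last_hope) == 3.0  # identical means

risk_averse  = lambda t: t ** 0.5  # concave utility: dislikes spread
risk_seeking = lambda t: t ** 2    # convex utility: likes spread

for name, u in [("risk-averse", risk_averse), ("risk-seeking", risk_seeking)]:
    s, l = expected(standard, u), expected(last_hope, u)
    choice = "standard" if s > l else "last hope"
    print(f"{name}: standard={s:.2f}, last hope={l:.2f} -> prefers {choice}")
```

Because the means are forced to be equal, any preference between the two lotteries comes entirely from the curvature of the utility function, which is the point of the trade-off described above.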

It is not clear whether patients generally are in fact risk-seeking or risk-averse about future remaining lifespan or – perhaps more to the point in the case of third party funding decisions made on their behalf – whether impartial social decision makers should be risk-seeking or risk-averse on their behalf. But if social decision makers do think that risk aversion about remaining lifespan is the more appropriate normative stance, then this would militate against giving special priority to the “value of hope” over and above other relevant considerations such as cost-effectiveness, severity of illness, potential for benefit, and health equity impact.

Risk aversion, downside risk aversion, and hope for a good outcome

James K. Hammitt, Harvard University

As Richard Cookson points out in his response to this Dilemma, treatments that offer the same increase in life expectancy (and are in all other ways equivalent) can differ in spread – one treatment can offer both larger and smaller probabilities of surviving for significantly longer and shorter periods than the average. Patients may prefer one treatment over the other, depending on whether they fear or favor a wider range of probable outcomes. Patients who prefer the treatment with a greater spread of survival times exhibit “risk-seeking” preferences, while those who prefer the treatment with less spread are said to be “risk-averse.” Patients who judge the treatments equally desirable are “risk-neutral.” (In all three cases, the preference is with respect to longevity; an individual can have different preferences for risks to longevity and to other outcomes, such as wealth.)

In addition to differences in spread, probability distributions on length of survival can differ in skewness. That is, the part of the probability distribution on one side or the other of the mean can be relatively flat and spread out, or relatively concentrated on a narrow range of survival times. Patients’ preferences for treatment can also depend on differences in skewness. Those who prefer a spread-out upper tail (implying a wider range of survival times if survival exceeds the mean) to a comparably spread-out lower tail are said to be “downside risk-averse”, while those with the opposite preference are “downside risk-seeking”. Downside risk aversion seems likely to be more common than downside risk seeking.
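A similar toy example can isolate skewness. In the sketch below the numbers are invented for exposition: the two lotteries share the same mean and variance and differ only in which tail is long, and a utility function with a positive third derivative, such as the square root, prefers the positively skewed lottery, which is one standard way of modeling downside risk aversion.

```python
# Illustrative only: survival lotteries (years, probability) with equal mean (8)
# and variance (16) but opposite skewness.
pos_skew = [(6.0, 0.8), (16.0, 0.2)]  # long upper tail: third central moment +96
neg_skew = [(10.0, 0.8), (0.0, 0.2)]  # long lower tail: third central moment -96

def moment(lottery, f):
    return sum(p * f(t) for t, p in lottery)

for lot in (pos_skew, neg_skew):
    mean = moment(lot, lambda t: t)
    var = moment(lot, lambda t: (t - mean) ** 2)
    assert abs(mean - 8.0) < 1e-9 and abs(var - 16.0) < 1e-9  # identical mean, variance

u = lambda t: t ** 0.5  # u''' > 0: a downside risk-averse utility
print(moment(pos_skew, u), moment(neg_skew, u))  # ~2.76 vs ~2.53: positive skew preferred
```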

Several studies have sought to elicit people’s preferences between different probability distributions on length of life. Those that have asked the general public about preferences over a normally healthy lifetime have found great diversity: similar fractions of people exhibit risk-seeking, risk-averse, or risk-neutral preferences. Studies that have asked about preferences in the context of a short remaining life expectancy (e.g., when suffering from cancer) have found that a majority of respondents express risk-seeking preferences. I am not aware of any studies that have attempted to identify downside risk aversion with respect to longevity.

It seems likely that patients have diverse preferences over the survival-risk profiles of alternative treatments. If so, some will be well served by the availability of treatments that offer some chance of a significantly greater than average survival time, albeit at the cost of a higher chance of a shorter than average survival. A challenge will be to help patients and their physicians make wise choices among the available options.

Yes, “last hope” drugs should sometimes get priority

Orri Stefánsson, Professor of Practical Philosophy and Wallenberg Academy Fellow, Stockholm University, Pro Futura Scientia Fellow, Swedish Collegium for Advanced Study, Researcher, Institute for Futures Studies

Let’s set aside the potential instrumental effects that hope may have on health. Might there nevertheless be a reason to take into account the value of hope when making decisions about approval and funding of drugs or treatments?

This is the question that stands out for me when reading the recent New York Times essay by critical care physician Dr. Daniela Lamas. She connects such hope to chances. Without access to some new and experimental drug or treatment, the patients she works with have no chance; they face certain death. However, new drugs or treatments, while risky and uncertain, may give these patients a chance of surviving and perhaps even of living a healthy life.

Having hope, however, is not the same as having a chance. To have hope, you also need to be aware of the chance. Philosophers of hope generally take there to be two necessary elements of hope that P: a desire that P, and a belief that P is possible but not guaranteed.

This way of understanding hope means that some hope may be unfitting, even on a Humean view according to which any desire is fitting. For instance, if all the available evidence suggests that there is no chance that Ann will survive the year, then it is not fitting for her to hope that she will do so. More generally, I will say that it is fitting, for person i, to hope that P only if (1) i in fact desires that P and (2) it is not the case that i should believe P to be impossible or guaranteed.

Hoping that P is mistaken when in fact the chance of P is either zero or one. Some fitting hope might thus be mistaken. Suppose that Ann has a lethal condition for which a new treatment seems to have shown positive results in trials, but these results were in fact due to measurement errors. Then the new treatment may offer fitting but mistaken hope for Ann.

Combining the above observations and clarifications with my prior work on the value of chances leads me to the following (undoubtedly controversial and somewhat radical) suggestion:

Hope in and of itself makes a person better off, over and above the extent to which it affects their health and hedonic state, just in case the hope is both fitting and not mistaken.

If this suggestion is right, then it provides a reason for funding and prioritising “last hope” drugs over other drugs that are expected to cause better health outcomes (on a narrow understanding of “health”). So, this would be a reason for a positive answer to the question posed by this Dilemma.

Interestingly, hope might not be equally good for everyone. This follows from my view that people can rationally have different attitudes to risk. Recall that non-mistaken hope partly consists in having a chance. Those who are risk-seeking value small increases in chances from low levels more than those who are risk-averse do; that is, they are willing to give up more in sure benefits in order to gain such chances. If one assumes, as I have done elsewhere, that this is because such chance-benefits are better for those who are risk-seeking than for those who are risk-averse, then it would follow—on the proposed conception of hope—that hope is better for those who are risk-seeking than for those who are risk-averse.

The above point has an interesting connection to a (perhaps somewhat surprising) trade-off between the value of hope and the value of ex post equality (first brought to my attention by Nir Eyal). Consider the following two treatment options for a group of terminally ill patients. Both options offer a median increase in life expectancy of six months and the same predicted quality of life. But here is how they differ: Option A, the ‘risky’ option, is predicted to cause a significant spread in life extension, with no or small effects for many patients but many additional years for some. Option B, the ‘safe’ option, is predicted to uniformly lengthen patients’ lives by about six months – a little longer for some, a little shorter for others, but the spread is predicted to be small.

Concerns for outcome (i.e., ex post) equality favour B over A. Assuming that all else is equal—for instance, there is no reason to believe that the risky option would be most beneficial to those whose lives are bad (or good) in other respects—ex post egalitarians (and prioritarians) would in this case take the predicted equality in the effects of the safe option, B, to be decisive in a comparison with A.

In contrast, those who place a high value on hope might find that the risky option, A, is better (this paragraph is inspired by remarks by Richard Cookson). Suppose that there is no way to tell in advance how beneficial (or harmful) option A will be for a particular patient; let’s say it’s indeterminate, even (I’ll make the same assumption for B). Then option A may offer all the patients non-mistaken hope. There is perhaps a sense in which B offers non-mistaken hope too, but that hope is surely qualitatively different from the hope of living many more years.

Recall from above that hope is better for those who are risk-seeking than for those who are risk-averse. Indeed, those who are risk-averse (w.r.t. years lived) would choose the safe option, B, while those who are risk-seeking (w.r.t. years lived) would choose the risky option, A. So, assuming that rational risk attitudes are part of (ex ante) betterness, it follows that option A would be better for those who are risk-seeking while option B would be better for those who are risk-averse.

The option favoured by those who prioritise ex post equality over the value of hope is thus better for those who are risk-averse but worse for those who are risk-seeking. That may not come as a surprise: it is generally assumed that those who are risk-averse would rationally prefer a more equal distribution of resources than those who are risk-seeking. This assumption is for instance typically made when discussing ‘veil of ignorance’ arguments in the spirit of Rawls and Harsanyi.
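
The tension can be made concrete with a toy calculation. All the numbers below are invented, and the evaluation rules are stylized stand-ins (the worst-off patient’s outcome as a proxy for ex post egalitarian concern, a convex utility for the risk-seeker), not anyone’s considered theory.

import statistics

# Per-patient gains in years for a stylized ten-patient cohort (invented data).
A = [0.0, 0.0, 0.0, 0.1, 0.5, 0.5, 0.5, 1.0, 3.0, 10.0]     # risky: wide spread
B = [0.4, 0.45, 0.45, 0.5, 0.5, 0.5, 0.5, 0.55, 0.55, 0.6]  # safe: narrow spread

print("median gain:", statistics.median(A), statistics.median(B))  # both 0.5

# Ex post egalitarian concern: how does the worst-off patient fare?
print("worst-off:", min(A), "vs", min(B))   # 0.0 vs 0.4 – favours the safe B

# Risk-seeking ex ante evaluation (convex utility, equal chance of each slot):
print("risk-seeking EU:",
      statistics.mean(x ** 2 for x in A),   # about 11.1 – favours the risky A
      statistics.mean(x ** 2 for x in B))   # about 0.25

Under these assumptions the two evaluations pull in opposite directions: the egalitarian measure ranks B above A, while the risk-seeking expected-utility measure ranks A above B, which is precisely the hope/equality trade-off described above.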

So, incorporating the value of hope into cost-benefit analysis may not be straightforward. It requires sensitivity to the fact that people benefit differentially from hope and that there is a tension between hope and equality. Moreover, the effects of prioritising hope have to be weighed against potential negative incentive effects on pharmaceutical companies and behavioural effects on patients. But these tensions and complexities are of course already familiar from cost-benefit analyses in health care.

Leave normative judgments to individual patients

Jessica Flanigan, Professor of Leadership Studies and Philosophy, Politics, Economics, and Law and the Richard L Morrill Chair in Ethics & Democratic Values at the University of Richmond

When public officials decide whether to invest in or approve a drug, their decisions are typically informed by judgments of the drug’s potential risks and benefits. For example, officials are less likely to invest in a drug that has serious side effects and less likely to approve a drug that doesn’t clearly improve people’s long-term well-being relative to the current standard of care. These risk-benefit judgments aren’t scientific judgments. Scientific research can tell public officials whether a drug puts some users at an increased risk of death or injury or whether using a new drug extends patients’ life expectancy with fewer side effects than available treatments. Scientific research cannot tell public officials whether a new drug investment or approval is worth it. For that, public officials must make normative judgments about the value of avoiding serious side effects and the value of using a treatment where the benefits are uncertain.

The problem with putting public officials in charge of drug development and approval is that they are not in a good position to make these kinds of normative judgments, even if they are qualified to evaluate scientific research. Public officials make decisions for an entire population, but every patient is different. So, public officials are bound to misjudge whether using a particular drug is worth it for at least some patients. This is why it would be better if public officials empowered each patient to decide what drugs they took and what they spent their money on rather than pre-emptively making that decision for everyone.

Against this proposal, some people worry that if everyone could decide what drugs to use, desperate patients would choose to use dangerous drugs out of a false sense of hope that the drugs could improve or extend their lives. There are two reasons to be worried about these ‘last hope’ cases. The first concern is that drugmakers could trick desperate patients into using drugs that actually make their lives worse. No one would defend fraudulent drug marketing that preys on vulnerable populations. In these cases, public officials should require drugmakers to disclose all the known information about a drug’s expected benefits, risks, and side effects.

The second concern about ‘last hope’ drugs is that people will knowingly choose to use drugs that are bad for their long-term health. In these cases, people may know something about their situation that public officials do not. For example, some people might value the chance of extending their lives more highly than the risk of ending their lives sooner. People may have different degrees of tolerance for painful and disabling side effects. Some people might just value the feeling of hope that comes with the opportunity to try a new drug, even if they don’t decide to take it. Public officials should not take these opportunities away from patients just because they think patients should make different tradeoffs regarding risks, benefits, and side effects.

I am not suggesting that patients who use ‘last hope’ drugs are always making the right choice. But even if some patients make choices that end up harming them in the long run, dying people also have defensive rights to access any drugs or therapies that could potentially save their lives, as long as they don’t harm anyone else along the way. And as a matter of policy, even if some patients make mistakes, each patient is likely to be in a better position than anyone else to judge whether it’s worth it to use a drug, even if it’s their last hope.

There is no sufficient reason to give “last hope” drugs approval and funding

Martijn Boot, Assistant Professor, University of Groningen

Approval

FDA approval usually means that it has been demonstrated – or at least made plausible – that a drug is effective and that the balance between risk and benefit is favorable. Does this apply to “last hope” drugs?

In a double-blind randomized controlled trial in which 189 ALS patients participated, the “last hope” drug NurOwn performed no better than a placebo.

It is true that a subgroup of treated patients who were in an early stage of ALS showed less decline than comparable patients in the placebo group. However, this difference was not statistically significant, leaving it uncertain whether the effect was due to NurOwn or merely to chance. Additionally, the outcomes were based on a post-hoc analysis – a method susceptible to false-positive results and generally considered less reliable.

The FDA is more flexible when it comes to approving a drug for a fatal disease for which no other therapy exists. Such a drug may be considered for approval even if it has a less favorable benefit-risk balance. However, it has not been demonstrated that NurOwn has any real benefit at all, so little can be said about the benefit-risk balance.

The upshot is that if FDA approval entails that it has been made plausible that the drug is effective and that the balance between risk and benefit is favorable, then this standard is not met by the “last hope” drugs under consideration. This forms a strong argument against granting FDA approval.

This was also what an FDA panel of 19 experts advised with respect to NurOwn. Similar negative advice was given by European neurologists from ENCALS – a network of European centres dedicated to ALS treatment.

Funding

There is still another reason not to give “last hope” drugs the label “FDA-approved”. FDA approval is often linked to insurance coverage. For instance, Medicaid must reimburse virtually all drugs approved by the FDA.

So, if “last hope” drugs were approved by the FDA, they would have to be paid for by all of us.

US health care spending is already very high – both as a share of the Gross Domestic Product and per capita.

Financial resources, however large, are always relatively scarce. Money spent on “last hope” drugs cannot be spent on other aspects of health care or on other important social goals, such as education, defense, infrastructure, fighting poverty, and social security.

Therefore, it is important to use relatively scarce financial resources as efficiently as possible. It is inefficient to spend them on health care that is not evidence-based.

Part of the efficient use of health care resources is cost-effectiveness, often expressed in QALYs (quality-adjusted life-years) gained. “Last hope” drugs are very expensive. Even if they were effective, this would not necessarily mean that they are cost-effective.
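
To see why effectiveness does not imply cost-effectiveness, here is a back-of-the-envelope sketch. The price, survival gain, and quality weight below are entirely hypothetical figures chosen for illustration, not data about NurOwn or any actual drug.

# Back-of-the-envelope cost per QALY (all figures hypothetical).
cost_per_course = 300_000     # assumed price of a course of treatment, in dollars
life_years_gained = 0.5       # assumed average survival gain, in years
quality_weight = 0.4          # assumed quality of life during the gained time

qalys = life_years_gained * quality_weight           # 0.2 QALYs
print(f"${cost_per_course / qalys:,.0f} per QALY")   # $1,500,000 per QALY

Commonly cited willingness-to-pay benchmarks are on the order of $50,000–$150,000 per QALY, so under these assumptions a modest benefit at a high price falls far outside the usual cost-effectiveness range, even when the drug does work.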

Considering access to “last hope” drugs 

From the perspective of questionable efficacy and cost-effectiveness, there are good reasons not to grant FDA approval to “last hope” drugs.

However, from the patient’s perspective, there may still be good reasons to try “last hope” drugs. As Daniela Lamas asks in The New York Times: “Why withhold a safe drug that might be beneficial, when there is no alternative?”

Is it possible to access “last hope” drugs without FDA approval?

Access without FDA approval

There are two possibilities:

(1)     So-called “Expanded Access” or “Compassionate Use”.

(2)     So-called “Right-to-Try,” based on a law passed in 2018.

Both procedures allow patients with a life-threatening disease access to experimental drugs when there is no FDA-approved drug to treat that disease. Insurance companies are not required to cover the costs of these drugs. Therefore, access to the drug often depends on the person's ability to pay or find sponsors. 

Last hope?

It is hard to live without a gleam of hope. Access to unapproved drugs gives hope to patients with a life-threatening disease; hence we speak of “last hope” drugs. Does this mean that patients without access to these drugs will lose all hope?

Even without treatment, spontaneous improvement may occur. In the NurOwn trial, more than 25% of patients who did not receive the drug nevertheless showed improvement or less decline.

It is true that the median survival time for ALS patients is short, but a median can easily conceal a wide range of differences. About 50% of ALS patients live three or more years after diagnosis, about 25% live five years or more, and up to 10% live more than 10 years. Young ALS patients in particular have a reasonable chance of longer-than-median survival. (Physicist Stephen Hawking was 21 years old when he developed ALS and survived 55 years after diagnosis – but this is very exceptional.)
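
As a small illustration of how a short median can coexist with a substantial upper tail, the made-up 20-patient cohort below is shaped to roughly reproduce the percentages just cited; it is not real patient data.

import statistics

# A hypothetical 20-patient cohort, loosely matching the figures above.
years = [1]*4 + [2]*4 + [2.5]*2 + [3]*3 + [4]*2 + [5]*2 + [6, 8, 12]

print("median:", statistics.median(years), "years")            # 2.75 years
print("3+ years:", sum(y >= 3 for y in years) / len(years))    # 0.50
print("5+ years:", sum(y >= 5 for y in years) / len(years))    # 0.25
print("10+ years:", sum(y >= 10 for y in years) / len(years))  # 0.05

The median here is under three years, yet a quarter of the cohort reaches five years and one patient reaches twelve: the single summary number hides the whole upper tail.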

For many patients, the possibility of spontaneous improvement or less decline may produce a gleam of hope that is not inferior to the hope produced by access to a possibly ineffective drug. That is why the designation “last hope” drug seems infelicitous.

Conclusion

From the above discussion, it follows that there appears to be insufficient reason to grant “last hope” drugs FDA approval and funding. (This does not mean that there cannot be good reasons for prioritizing and funding research into the development of drugs for the treatment of terminal illnesses, such as ALS. But that is another question.) The negative recommendations given by the FDA experts and the European neurologists seem justified.

This does not deny that patients with life-threatening diseases may have good reasons to desire access to these drugs. “Expanded Access” and “Right-to-Try” offer possibilities, although covering the costs may present an insurmountable challenge.