Medical research ethics is based on several principles that restrict the way researchers are allowed to treat research participants. Three of the most fundamental are respect for persons, justice, and beneficence. “Respect for persons” means that researchers cannot treat the welfare or autonomy of research participants as optional, to be disregarded if research requires it; they must treat people as intrinsically valuable.
Justice as understood in medical research requires that the benefits of research be distributed fairly, and makes it unacceptable to ask one person or group to accept the risks of research if another person or group will benefit, especially if the first group is vulnerable to exploitation – this is the principle most flagrantly violated in infamous cases of testing HIV drugs in Africa and then refusing to sell them at prices that African health care systems could afford. Beneficence requires that researchers never set out to harm research participants, and that any potential harms that might occur incidentally are explained to participants, giving them the opportunity to refuse to participate.
These principles are clearly intimately related to one another, and what we commonly identify as unethical research often violates all of them. This is partly because these principles are deeply grounded in a broader ethical framework, specifically a deontological one. This is a form of ethics which places rights above all else, and forbids potential benefits to some people being used as a justification for violating the rights of others.
This framework is the foundation of medical ethics; it explains, for example, why it is unthinkable to harvest organs from one healthy person to save five dying people. The potential benefits to the five are not in question, nor is the idea that their lives are worth saving; it is simply not acceptable, if one operates in a deontological system, to take the life of one person to save or improve the lives of others. No matter how great the potential benefits, some things are beyond the pale. This is doubly true in research, where harms may be immediate, but benefits are only theoretical. Nobody would question the incredible potential of a cure for HIV to alleviate human suffering, but no matter how many lives could be saved, there are some things researchers are not permitted to do in the search for such cures.
This attitude places medical and research ethics in opposition to the competing ethical system of utilitarianism, which says that, in fact, the potential benefits of an act can justify inflicting harm on individuals, if that is what is necessary. In this framework, it might very well be considered acceptable to experiment on the unconsenting or the vulnerable, if a new drug looked sufficiently promising that it might prevent more suffering than the experimentation would cause. Where the research would cause pain or death, this attitude would strike most people as unacceptable. In it, we hear echoes of some very dark eras in scientific history.
There is a profound conflict in medical research ethics, however, one which is rarely confronted head on: this is precisely the framework we use to justify our use of animals.
Research ethics does recognise that it isn’t acceptable to treat animals however we like. Experiments on animals require ethical approval, and experiments that are considered unnecessarily cruel will not be approved, nor will experiments thought to use an excessive number of animals. What the committee considers “unnecessary” cruelty, however, depends on the aim of the research. The justifications researchers are expected to offer for their use of animals are fundamentally utilitarian – you need to convince the committee that the potential benefits of your research outweigh the harm to animals, and part of how you do this is by demonstrating that you are causing the smallest number of animals the minimum suffering necessary to investigate your research question. These are utilitarian calculations.
It is exceedingly strange that animal ethics committees base their decision-making on an ethical framework that has been roundly and vehemently rejected in every other area of science and medicine. It would be absolutely unthinkable to propose research that deliberately harmed human participants, to argue that the harm was justified by the potential benefits to others, and to expect a human ethics committee to approve it on those grounds. So why do we do this for animals?
The most obvious response seems simply to be, “because they are animals, not humans”. And yet, this answer isn’t entirely satisfying – it’s clear that being human on its own doesn’t completely protect you from doctors. Many doctors see no problem with turning off life support to conduct organ transplantation, for instance, or with abortion. These are cases where one human life (that is, one living human body) is destroyed out of preference for the well-being of others. Our inclination is to say, of course, that foetuses and those who are brain dead don’t really “count” as human beings, and what we mean by this is that although they are physically human, they don’t count philosophically as persons. This is why it isn’t wrong to end such lives; not all living things have a “right” not to be killed, otherwise it would be murder to kill oysters or to weed one’s garden. In order to have rights, one must be a person, not merely a human.
The debate as to the precise requirements to be counted as a “philosophical person” has been raging for centuries, and I do not expect to settle it. But I would like to highlight a modern contradiction. Ethicist Tom Regan has tried to distill the requirements that we seem to apply in modern medicine down to a single criterion, and I think he’s done quite a good job. He points out, as I just did (borrowing from him), that being physically human clearly isn’t enough. One must be conscious enough to have some kind of subjective experience, which means that one can value one’s own life, regardless of how valuable others might think it is. As a minimum standard, this seems uncontentious. Others, most notably Kant, have argued that even being conscious is not enough, and one must be “rational” as well. This doesn’t gel with current medical practice, though; very young children are not capable of rational thought, but we definitely regard them as having rights, although perhaps not as many as we accord to adults. Like adults, children are certainly recognised by most people as having a right not to be killed.
One could argue that children have rights partly in virtue of the fact that they will one day become adults, since they will be rational at that point. If this were true, though, we would have to extend rights to foetuses as well – without intervention, they too will most likely become rational adults. Conversely, this criterion would not require us to extend rights to people with severe, permanent intellectual disabilities, and would imply that killing such people is not murder, since they will never be rational. This contention, too, would appall most people. So it’s clear that most people, including most doctors, do not consider rationality necessary to be accorded rights, at least not the right not to be killed. The line for personhood in modern medicine appears to be where Regan puts it – between consciousness and irreversible unconsciousness.
Note that this is not a claim about what really constitutes personhood; rather, it is a description of how people, especially people in medical professions, seem to accord it. We recognise young children and the intellectually disabled as persons because they are conscious and capable of subjective experiences, regardless of the fact that they may not be rational. This presents us with a serious problem, though. If these are the criteria that we use to accord personhood, how can we deny it to animals that satisfy those same criteria? Nobody who has ever interacted with a domestic mammal would deny that they have subjective experiences. They obviously experience not only pain but emotional distress, as well as pleasure and very probably joy. Many birds, as well as apes and many other non-domesticated mammals, seem to as well. What is it that makes them so different to children, then, in ethical terms?
This problem is rendered all the more extreme by certain special considerations in research ethics. I mentioned vulnerability before, as a factor in selecting acceptable research participants. Medical research ethics does not merely require researchers to respect the personhood of children and the intellectually disabled; it accords them special protection, on account of the fact that they are less in control of their own lives, and less able to protect themselves from those that would harm them. As such, experimentation that would be acceptable with neurotypical adults requires special consideration before it is approved in these groups – there must be a clear and pressing need to conduct the research with the vulnerable group, and it must be for that group’s own benefit, not for the benefit of others. It is precisely because children cannot protect themselves from the violation of their rights that we must take special measures to protect them. We do not consider children or the intellectually disabled worthy of protection in spite of the fact that they have diminished capacities, but because of it. This stands in complete opposition to the way we justify research on animals.
So if we refuse to accept Regan’s position, the argument for rights goes in circles, from humans to children to animals and back again. Unless we are willing to accord the right not to be killed on the basis of “humanity” alone, ruling out abortion and organ donation under almost any circumstances, the basis on which we could legitimately withhold that right from sentient animals is unclear. In the absence of an argument demonstrating another coherent, non-arbitrary criterion, the harms we impose on animals through medical research for our own benefit appear to be unjustifiable. To the best of my knowledge, no such criterion has ever been formulated.
Meanwhile, the argument for utilitarianism in research threatens to lead down dark paths. It seems as though if we abandon the concept of rights, we are lost in a world where nobody is safe, conscious or not. A utilitarian argument might be used to justify truly reprehensible things for sufficient benefit; if we give up on rights entirely, who or what mightn’t we sacrifice in the search for a cure for cancer? And yet, a cure for cancer is a real concern for only a minority of humanity. The West favours technological solutions because, outside the US, we have very nearly reached the limit of what health systems can provide; the majority of the world’s population is not so lucky. And so, on the ethical front at least, we are not quite so lost as we appear.
Of the top ten causes of premature death globally, the majority are already curable, partially preventable, or both. That AIDS-related illnesses remain the leading cause of death in young people in the era of Highly Active Antiretroviral Therapy is inexcusable. The leading killer of people with HIV is tuberculosis, the spread and fatality of which is largely attributable to poverty. TB has been curable since the 1950s, and the same drugs still work in the majority of cases. In spite of this, a third of the 9 million people who get TB every year are never even diagnosed, let alone treated. One and a half million lives are lost to TB each year, most needlessly. Most liver cirrhosis and cancer can be prevented with hepatitis B vaccination, but the first dose must be given on the day of birth, which is unfeasible in impoverished settings where many women give birth at home, without skilled birth attendants, and at high risk of devastating complications. Most lower respiratory infections can be prevented with antibiotics, if only the right ones are available, and administered correctly. The list goes on.
Most people would agree that doing something harmful is wrong if you could do something harmless to achieve the same beneficial end. There is no harm in delivering the vaccines and drugs we have already, which are still so far from universally accessible. A huge amount of human suffering is preventable not through finding miracle cures, but through the slow, painful, complex work of strengthening health systems and alleviating poverty. Animal experimentation is morally questionable at its best, and monstrous at its worst. If our ultimate goal is preventing needless suffering and death, we already have most of what we need. It’s not in laboratories. It’s in our wallets.