Institutional Review Boards, Syphilis, and You!

The United States government recently disclosed that in the 1940s, American public health doctors deliberately infected almost 700 Guatemalans, including some people institutionalized for mental illness, with syphilis. Although those infected were given antibiotics, it’s impossible to say how many died or experienced long-term health consequences from the infection. The research was performed by the United States Public Health Service, which means it took place entirely under government control and oversight and used government funding from the National Institutes of Health.

The worst part (well, there are lots of worst parts) is that nobody seems to have been covering this up. The main doctor involved was also involved with the Tuskegee experiments on black men, and a professor researching Tuskegee uncovered details of the Guatemalan experiments in the doctor’s personal papers. The researcher presented the information at a conference in January, but it didn’t go anywhere. It was only after she wrote it up for publication and sent a draft to a former director of the Centers for Disease Control that the government realized – or noticed – what had happened.

All of this strongly suggests that there should be some kind of rules and regulations – a policy – to prevent the government from conducting that kind of health research. To require that human beings be treated as humans, rather than as non-sentient objects to be literally experimented on. To make sure that if something like this were ever to happen again, someone would have to take official responsibility and do whatever possible to make restitution. And there are. After World War II, in response to the terrible human experimentation carried out by the Nazis, the Counsel for War Crimes developed the Nuremberg Code, which is the basis of the United States’ current policy. The ethical guidelines from the Code were later made into formal law in the United States – in direct response to atrocities like Tuskegee – so there are currently federal regulations (Title 45, Part 46, if you’re interested) governing research on any human subjects. The policy has six major principles:

  • the proposed research design is scientifically sound & will not unnecessarily expose subjects to risk;
  • risks to subjects are reasonable in relation to anticipated benefits to subjects and the importance of knowledge that may reasonably be expected to result;
  • subject selection is equitable;
  • additional safeguards are required for vulnerable subjects (pregnant women, children, and prisoners);
  • informed consent is obtained from research subjects; and
  • risks to subjects are minimized.

All proposed experiments involving human subjects must be written up and submitted to the National Institutes of Health, where they’re reviewed by the Human Research Subjects Advisory Committee. Most universities have their own internal board – an Institutional Review Board – to review research involving human subjects, and those boards are accountable to the NIH for their actions.

So. Is that policy good enough to prevent another horror like this from occurring? Well, if the Guatemalan experiments had been written up and submitted for review, they certainly would not have passed. The research would have unnecessarily exposed subjects to risk far out of proportion to any potential benefit to them (none that I can see). There was significant research benefit – we learned a lot about how penicillin acts on the syphilis infection and how syphilis tests work, things we had not been able to learn from previous experiments on non-human subjects. Subject selection certainly was not equitable – the study picked extremely vulnerable people, including prisoners and people with disabilities, who did not receive any additional safeguards. Risk to subjects was not minimized at all. And most horrifically, no informed consent was obtained, because the people infected did not get any information and definitely did not consent.

But it’s easy to imagine the possibility that some horrible things could still happen. Take the prohibition on “unnecessarily expos[ing] subjects to risk.” What would the NIH consider necessary risk? Or, a bigger question: what do they consider a risk in the first place? If they determine that some risk exists, is there an incentive to overlook it when the potential knowledge benefits of the research are really big? There are more detailed regulations that discuss some of these issues, but it’s hard to craft language that applies to every potential situation that could come up.

Beyond that, there are potential problems in implementation. You can imagine a review board made up of people who believe that the quest for greater human knowledge and the chance to save a whole lot of lives matter more than any single individual – they would make very different decisions than a board made up of people who believe that any individual matters more than those ends. There are complex rules governing the composition of review boards to ensure diversity, but it’s hard to guarantee a good balance – and the sheer number of different review boards makes it even harder.

Finally – there’s the question of enforcement. The NIH can’t follow up on every single scientist and experiment to make sure research is carried out exactly as described in the application. What prevents a scientist from falsifying the writeup of their research and carrying out unauthorized experiments? Enforcement is left mainly to the scientific community itself, which requires peer review before the results of research can be published. This saves the government money, but it’s an inexact method of enforcement.

There are clear reasons why, despite all these potential problems, we’ve created a complicated policy like this instead of simply banning all experiments on humans. I’ve been a subject in a whole host of experiments, including one measuring the effect of lab personnel’s personalities on desensitization. I contributed to the study and now have a fondness for, rather than a terror of, tarantulas.

So have we created the right policy? The best policy? A good policy? I don’t know. Our review of the policy gave us only a little bit of an answer and a whole lot of questions. It turns out that’s often how policy works, and the kinds of problems with this policy are the same kinds of problems we see in a lot of policies. There’s the problem of specificity – how do you write a policy that says exactly what you want? There’s the problem of implementation – how do you get people to do what the policy says? And then there’s the problem of enforcement – how do you make sure that people are really following the policy? It’s easy to make sure the rules apply to the most extreme examples – the Guatemalas and Tuskegees, the Milgrams and the Stanford Prison Experiments – but it’s harder to address the cases nearer the line between OK and not OK.

Further reading and reference:

Various National Policies on Research on Human Subjects
