“I will remember that I remain a member of society, with special obligations to all my fellow human beings, those sound of mind and body as well as the infirm.”
Throughout history it has proven increasingly difficult not to be a hypocrite to this modern version of the Hippocratic Oath. Medical professionals have not been spared the influence of the social and non-medical cultural characteristics of their patients in deciding on proper treatment and ethically sound medical practice. Some physicians and researchers remained members of society but forgot their special obligation to their fellow human beings, and in many instances did not even consider their patients human beings. From the mid nineteenth century to the mid twentieth century, society's beliefs and prejudices about race and ethnicity, social class, gender, and educational and occupational status governed the way medical professionals treated their patients. I will discuss the ways these cultural presumptions influenced the experiences of patients and research subjects, and the justifications and rationales that have been offered for the consideration of non-medical criteria in their treatment. I will also explore, given our innate ability as human beings to learn from our mistakes, the relevance of this history to today's medical and scientific professionals.
In the midst of the fight for social equality, the emancipation from slavery, and the overall expansion of the United States, there also arose the scientific world of bacteria, a fervor termed "bacteriomania." Ideas about the causes of disease had changed, and the new villain was the microscopic bacterium. Louis Pasteur and Robert Koch had paved the way for the new science of bacteriology. "Isolating and identifying a disease germ under the microscope was the first step. After growing a microorganism in a pure culture, the physician needed to use the germ to produce the disease in a healthy organism." The use of human beings to confirm these microbes as the causes of disease became a harsh reality. Unfortunately, the criteria for choosing whom to experiment on became a product of social class and of educational and occupational status. The blatant disregard for the mentally impaired and vulnerable is shockingly apparent in the following quote from Lederer:
"In 1895 the New York pediatrician Henry Heiman, for example, described the successful gonorrheal infections of a 4-year-old boy ('an idiot with chronic epilepsy'), a 16-year-old boy ('an idiot'), and a 26-year-old man in the final stages of tuberculosis."
These disparities of social class and education were not unique to the United States; they were a common thread that permeated the "modern" world. "In 1900 Walter Reed injects 22 Spanish immigrant workers in Cuba with the agent for yellow fever, paying them $100 if they survive and $200 if they contract the disease."
In the early 1900s, Lederer writes, the social organization of American medical care bolstered the experimental study of human health and disease. America saw a phenomenal increase in the number of hospitals and hospital beds. "In 1873 the first American hospital survey reported only 178 hospitals (including institutions for the mentally ill) and fewer than 50,000 beds. By 1909 the number of hospitals (excluding hospitals for the mentally ill) had expanded to 4,359 institutions with 421,065 beds."2 Hospitals shifted from custodial institutions caring for the indigent into scientific institutions drawing patients from the middle and upper classes. This shift in the social organization of the American medical care system by no means eliminated the non-medical influences on medical treatment. What it did was further alienate the mentally ill, the indigent, and people of different social and economic classes, ethnicities, races, and genders. These vulnerable populations became, in effect, guinea pigs for experimentation in the name of the betterment of "humanity."
Racism became the underlying driving force behind one of the great American tragedies:
“Since the contemporary reader begins by knowing what happened in the end in this quintessential American tragedy, the inexorable progress of the study, in its seeming banality and scientific neutrality, is painful to read. For the scientist, researcher, historian, or citizen, these documents are humbling reminders of how much medical research and treatment decisions are inextricably intertwined with assumptions about medical uncertainty, scientific progress, racial and gender stereotypes, and class power.”
In 1932, the Public Health Service, along with the Tuskegee Institute in Macon County, Alabama, began a study to record the natural course of untreated syphilis in black males, called the 'Tuskegee Study of Untreated Syphilis in the Negro Male.' The study was initiated without any of the patients' informed consent; furthermore, the researchers told the patients they were being treated for "bad blood," a term used locally to describe a slew of ailments ranging from fatigue to syphilis. Although the study was intended to last six months, the racism, medical uncertainty, and outright lies lasted for forty years.
I feel two things are needed to fully grasp the experience of the patients and research subjects lured into the Tuskegee Study. First, "If the infected person has not received treatment, he/she still has syphilis even though there are no symptoms. Syphilis remains in the body and begins to damage the internal organs including the brain, nerves, eyes, heart, blood vessels, liver, bones, and joints."5 An infected woman has about a 40% chance of having a stillbirth or a 40-70% chance of having a baby infected with syphilis. These infected babies can have very serious health problems, including jaundice, a swollen liver, anemia, and inflamed joints. A study initiated to record the natural course of syphilis among black males affected not only those men but entire families and communities.
Secondly, in the early twentieth century, Darwinism had provided a new rationale for American racism. Social Darwinists believed "the Negro race in America was in the throes of a degenerative evolutionary process." Medical professionals and physicians agreed with these points of view, and some physicians even openly anticipated the extinction of the entire race; Dr. Thomas W. Murrell was quoted as saying "disease will accomplish what man cannot do." These ideas and racial prejudices were commonplace among the medical professionals of the time, and they provided the backdrop for the indecencies of the Tuskegee Study.
Amidst social prejudice and experimental negligence, science continued to advance by leaps and bounds. New ethical questions arose with the advent of organ transplantation and hemodialysis. Who was to receive the scarce resource of an available organ? What criteria, medical or non-medical, would select the organ recipient or hemodialysis candidate?
"During the decade 1954-1964, more than 600 renal transplants between living persons were performed in the United States, Great Britain, and France, two-year survival rates hovering around fifty percent. These pioneering days of renal transplantation transformed the divine miracle of Cosmas and Damian into a human, clinical miracle: at first with halting success, then with increasing efficacy in prolonging life and restoring health." Renal transplantation was hailed as a miracle of modern medicine, but this miracle came with many ethical issues. In many renal transplant cases the donors were minors, who were not considered able to give consent. During judicial review of these cases, a court ruled that the surgery was allowed under four conditions: parental consent; informed and free consent of the donor; necessity to save the twin's life (transplants were initially successful only between twins); and psychiatric evidence that the donation promoted the welfare of the healthy twin. In 1969, a Kentucky court was forced to forgo consent when it turned out the donor was in a mental institution and his mental state did not allow him to consent. The court deemed that, as a donor, the "retarded" child would himself be benefited by the prolonged life of his brother. "The court upheld, though shakily, the principle that a person should not be put at risk unknowingly and unwillingly unless a benefit would accrue to that same person."7 This is a case in which the ethical boundaries protecting a vulnerable population were clearly overstepped because of the overzealousness of the scientific community.
In America, non-medical criteria began to be applied to medical decisions in a new way. In the case of hemodialysis, the Seattle Artificial Kidney Center's Admissions and Policy Committee convened in 1961 to "begin drawing up guidelines for the non-medical screening and selection of dialysis candidates." The power to decide who lived and who died earned the committee the name "God Committee." It consisted of three doctors, a lawyer, a minister, a housewife, and a labor leader, all of whom were upper middle class in education, occupation, income, and general social background. This led to tremendously biased selection criteria. In her Life magazine article, Shana Alexander reported that the committee members weighed factors such as "sex of patient; marital status and number of dependents; income; net worth; emotional stability, with regards to the patients' capacity to accept the treatment; educational background; nature of occupation; past performance and future potential; and names of people who could serve as references."7 One can clearly see that with non-medical criteria such as these, decisions about who is worthy of hemodialysis quickly become biased. Social worth cannot and should not be used as a criterion for life-and-death medical decisions.
Science and research offer rationalizations for almost all of their decisions and actions. "Bacteriomania" offered researchers no way at the time to test their hypotheses except on humans: in many cases the bacteria were not known to exist in animals, or at least in the animals available, and animal research was often entirely too expensive. Self-experimentation among researchers was also common at the time; the overall feeling was that if a researcher could experiment on himself, then experimenting on any group should be acceptable. In the Tuskegee Study, one proposed rationale was that, because of the supposed promiscuity and non-adherence of the black male, these men would never be treated for venereal disease, so it was only natural to design a study to record untreated syphilis in that cohort. In the case of organ transplantation and hemodialysis, social worth offered many ways to rationalize the medical profession's choice of organ recipients and dialysis candidates. We get a remarkable view into this process of rationalization in the following excerpt from the "God Committee's" selection process: "Summary: Overall, the present circumstances of this family offer little to recommend them. Their financial prospects are poor, the legal situation is murky, and the family history indicates much instability and acting out of problems. However, in spite of all this I was struck by the fact that neither of the As appeared at this moment to be overwhelmed by these problems."
Today's medical professionals and researchers have a set of guidelines, laws, and templates to follow and implement in the course of their work. It is no surprise that this history of research science, patient care, and animal research paved the way for today's practice and thought. In some ways, the carriage was placed before the horse. Science reinvented itself at an alarmingly fast pace: antibiotics cured previously deadly diseases, machines and transplants prolonged lives, and emergency medicine saved countless more. The 'horse' of ethics, morals, and understanding of differences trailed behind the carriage of new inventions and research. America had its own social dilemmas to address well before, and well after, these medical advances, and it was inevitable that these social biases would wrap their roots around every idea and practice. It is a lesson that unfortunately cannot be taught outside of experience; all too often, by the time we realize how much culture affects our judgment and practice, the biased decision or action has already been made. History stands alone as the crystal ball to the future. From this history came the strengthening of the AMA Code of Ethics, the world's first national code of professional ethics; the National Research Act of 1974 and the federal protections for human research subjects that followed, created directly in response to the atrocities of the Tuskegee Study; and the abandonment of social-worth criteria for life-and-death hemodialysis and transplant decisions. This history taught us to let the horse lead the carriage, to let culture and differences be questioned, and most of all it instilled a measure of empathy that continues to be bolstered. Dr. William Osler once said, "By far the most dangerous foe we have to fight is apathy – indifference from whatever cause, not from a lack of knowledge, but from carelessness, from absorption in other pursuits, from a contempt bred of self-satisfaction."