15.9: Conflicting ethics

Fundamental conflicts in ethical behavior are hidden just below the surface of the cold mathematics in these equations. Artificially increasing the virulence α of a disease to equal or exceed the infectivity η will therefore drive the disease extinct.

The enduring ethical conflict here is between the individual and the population. With all other things being equal, working to reduce virulence benefits the individual but may cause more individuals in the population to become infected. Working to increase virulence, in contrast, harms individuals but may reduce the number who become infected.

The ethics of modern medicine emphasizes individuals—working to cure disease and reduce virulence, ameliorating symptoms, reducing discomfort, and recognizing patient needs. Increasing the virulence of a disease in a human patient to reduce its spread is unthinkable, both in medicine and public health. The ethics of modern agriculture, however, are the diametric opposite. If a crop is infected with a destructive communicable disease, entire fields of the crop may be mowed, burned, or otherwise disposed of. Infected populations of poultry and livestock are treated similarly, killed en masse and buried or burned to contain the disease.

Artificially altering the infectivity η is also a possibility. In Equation 15.4, η appears in the denominator of a term having a minus sign—meaning that decreasing η will decrease the equilibrium level of the disease, p̂. Ethical conflicts also arise here, though they are not as stark as the conflicts connected with α. During the influenza epidemic of 1918–19, San Francisco leaders required citizens to wear breathing masks—for “conscience, patriotism, and self-protection,” wrote the mayor.

The masks contained respiratory droplets from infected individuals and lowered the chance of infected droplets entering the respiratory systems of susceptible individuals, in turn reducing the infectivity η. Some citizens, however, refused to wear the masks.
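Both claims can be checked numerically. As a hedged sketch (the chapter's actual Equation 15.4 is not reproduced in this excerpt, so the specific model below is an assumption chosen to match the text's description), take an SI-type prevalence model dp/dt = ηp(1−p) − αp. Its nonzero equilibrium is p̂ = 1 − α/η, which indeed has η in the denominator of a minus-signed term, and raising α to meet or exceed η leaves extinction (p = 0) as the only stable outcome:

```python
# Hedged sketch: a minimal SI-type prevalence model, dp/dt = eta*p*(1-p) - alpha*p.
# This is an illustrative stand-in for the chapter's Equation 15.4, not the book's
# exact equation; its equilibrium p_hat = 1 - alpha/eta reproduces the behavior
# the text describes for virulence (alpha) and infectivity (eta).

def equilibrium_prevalence(eta, alpha, p0=0.1, dt=0.01, steps=200_000):
    """Integrate dp/dt = eta*p*(1-p) - alpha*p by Euler's method; return final p."""
    p = p0
    for _ in range(steps):
        p += dt * (eta * p * (1.0 - p) - alpha * p)
    return p

if __name__ == "__main__":
    # Baseline: infectivity exceeds virulence; disease persists at p_hat = 1 - 1/3.
    print(round(equilibrium_prevalence(eta=3.0, alpha=1.0), 3))   # 0.667
    # Raising virulence to meet or exceed infectivity drives the disease extinct.
    print(round(equilibrium_prevalence(eta=3.0, alpha=3.5), 3))   # 0.0
    # Lowering infectivity (e.g., masks) lowers the equilibrium prevalence.
    print(round(equilibrium_prevalence(eta=2.0, alpha=1.0), 3))   # 0.5
```

The book's actual equations may carry extra terms (recovery, background mortality) and different coefficients, but the qualitative behavior described in the text is the same under this assumed form.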

During the Ebola outbreak of 2014–15, amidst fears and warnings of the disease becoming established around the world, some U.S. governors ordered temporary quarantine of returning medical workers who had been in direct proximity with Ebola, until it was clear that they were not infected. At least one refused the quarantine based on individual rights, and the courts upheld the refusal.

These ethical conflicts surrounding η are not as grim as those surrounding α, whose remedies are currently practiced for domestic plants and animals but are so extreme that they are never proposed for human populations.

It is curious that these high-level social dilemmas are perceptible within the most basic equations of ecology. Science can inform such ethical issues, but society must decide them.

Ethical issues involved with in vitro fertilization

There are three elements to consider with in vitro fertilization. First, the paramount concern needs to be the well-being and best interests of the child, even though he or she may be an embryo at the time. Second, some people think a child will solve their marital problems. Although couples seeking in vitro fertilization should not be subjected to more scrutiny than couples conceiving in the traditional way, the stresses and uncertainties of in vitro fertilization can further strain a marriage. Clinic staff members should be sensitive to this issue as a way of helping to avoid complications later. Third, how we resolve the status and fate of the frozen embryo, and who has disposition over it, surely will reflect how we consider abortion rights. For example, if the standards of Roe v. Wade were applied, one could argue that the woman should have total disposition over the frozen embryo. On the other hand, if the father receives a say in the matter, what impact would this have? Given the nature of our society and the tenuous state of marriage, the problem of disposing of frozen embryos is a critical one that has no satisfactory solution. Finally, there is the stress factor. Although this is not an issue of direct ethical concern, it is related to the necessity of the couple receiving accurate information. If the couple receives an incorrect impression of a clinic's success rates, they may be exposed unnecessarily to further stress and frustration.

AMA Journal of Ethics

This spring—and for the first time in 30 years—the U.S. Food and Drug Administration approved a medication for the treatment of nausea and vomiting associated with pregnancy (NVP). Though the condition occurs in an estimated 80 percent of pregnancies, up to this point women with NVP had to weigh two less-than-ideal options: either manage the condition with diet and alternative therapies or take a drug “off label” and with limited official guidance regarding safety and efficacy for use during pregnancy.

Such in fact remains the story for most medications used during pregnancy. Due to ethical concerns about exposing pregnant women and fetuses to the risks of research, many researchers and institutional review boards regard pregnancy as a near-automatic cause for exclusion from research studies, even when the risks are negligible and the study addresses a question of critical relevance to maternal or fetal health. Though deployed in the spirit of “protection,” decisions to exclude pregnant women and their interests in the research agenda come at a profound cost for women and children alike.

First, it is widely known that pregnancy is no “magic bullet” against illness. It is estimated that at least 10 percent of women face serious medical conditions that require treatment during pregnancy—hypertension and heart disease, diabetes, even cancer. Nearly 90 percent of women take medication at some point in their pregnancy; approximately 50 percent take at least one prescription medication, and use has generally increased over the last three decades [1]. Given dramatic increases in the proportion of births to women aged 35 and older and increasing rates of obesity and its associated morbidities, it is likely that the use of medications in pregnancy will only grow. Yet Diclegis (the newly approved NVP drug) is an exception to the rule: few drugs have been approved by the FDA for use in pregnancy (2 from 1962 to 1995) [2]—and all for gestation- or birth-related issues. Any medicine taken to treat a nonobstetric illness during pregnancy is used without adequate data about its safety or effective dosing.

This can be a serious problem because pregnancy often changes the ways that drugs act in the body—the drug’s pharmacokinetics and pharmacodynamics. Several recent studies have shown that using standard adult doses of drugs or vaccines in pregnant women can lead to undertreatment or overtreatment. For instance, in the wake of rates of morbidity and mortality among pregnant women that exceeded those of the general population in the recent H1N1 pandemic [3], researchers investigated the pharmacokinetics of the drug oseltamivir phosphate (Tamiflu) in pregnant women and found that the standard adult dose (which was recommended for pregnant women during the pandemic) may be inadequate for treatment or prevention of flu during pregnancy [4].

Further, there are few data to address worries about fetal safety. For 98 percent of the drugs approved between 2000 and 2010, the teratogenic risk is unknown [5]; for drugs approved in the previous 20 years, we still don’t know enough about nearly 9 out of 10 [5]. The average time it takes for a drug to be categorized in terms of risk is 27 years after market approval [5].

In the absence of clear data about the appropriate dosing or safety of medications, women (and their doctors) are often reticent to use (or prescribe) drugs during pregnancy. But excess precaution has serious downsides. Specifically, untreated illness can present far greater risks than those posed by medications. Untreated asthma is associated with preeclampsia, premature delivery, low birth weight, and hemorrhage, but women whose asthma is controlled have outcomes comparable to women without asthma [6]. Treatment delays possibly attributable to reticence had serious consequences for pregnant women during the H1N1 pandemic: women who received treatment more than 4 days after the onset of symptoms were more likely to be admitted to the intensive care unit and to receive mechanical ventilation, and more than 50 times as likely to die, than women who received timely treatment with antivirals [7].

How should we redress this state of affairs? Perhaps the most important lesson is that we can no longer hide behind claims that ethics precludes the inclusion of pregnant women and their interests in research. Rather, ethics—and to be more precise, justice—demands that we move forward with their responsible inclusion. Pregnant women have not benefitted fairly from the research enterprise. It is well past time that they do.

The first step is recognizing that there are many ways to gather data without having to sort out the ethical complexities of risk trade-offs between pregnant women and their fetuses. There is plenty of what might be called ethical low-hanging fruit—ethically unproblematic research that can help fill the evidence gap about health care for pregnant women. For instance, a wealth of critical information about the pharmacokinetics of drugs in pregnancy could be garnered by doing a simple series of blood tests on pregnant women who are already taking medications. The National Institutes of Health’s Obstetric-Fetal Pharmacology Research Units have funded several such “opportunistic” studies in the last several years [8], yet major gaps remain. For instance, HIV-related tuberculosis accounts for 10 percent of maternal deaths in some developing countries [9], yet there are no pharmacokinetic data on any TB medications and, of the 40 TB trials currently underway, all exclude pregnant women [10].

In addition to opportunistic pharmacokinetic studies, large cohort trials can be a rich source of information, but these golden opportunities are—all too often—overlooked. For instance, in 2009 the NIH launched the National Children’s Study: more than 100,000 women were to be followed during pregnancy, and their children would be followed for 20 years to understand the impact of the environment on children’s health. The problem is that pregnant women—consenting research participants—were understood not as subjects but as part of the environment to be studied, as the data collected pertained almost exclusively to children’s health [11].

Studies that involve more than minimal risks to fetuses tend to raise red flags among researchers, IRBs, and even patients themselves. It is important to remember, however, that participation in a research study—in which there are rigorous standards for informed consent and close monitoring—may well be a safer context for the use of medications in pregnancy than the clinical setting, where the evidence base is so profoundly lacking. In considering the ethics of trial participation, we cannot forget context: if women are excluded from research, their only option may be to take a medication in an uncontrolled clinical environment absent the data to inform dosing or safety considerations specific to pregnancy. Absent systematic research involving pregnant women, their only option will remain having their illnesses treated in this uncontrolled clinical environment in which the data needed to secure FDA approval remains elusive. Indeed, the American College of Obstetricians and Gynecologists endorsed—for nearly a decade before FDA approval—the use of the medications in Diclegis in pregnant women suffering from NVP [12].

Though approval by the FDA, and a pregnancy category A to boot [13], are both reassuring—and in the case of Diclegis, long-awaited by the many women who did take the drug years ago—what we need most are data, so that women can make informed decisions about whether or not to use a medication during pregnancy and so that doctors can prescribe such medicines at appropriate and effective doses. Still, with the FDA’s recent decision, it feels like a page has turned in the history of maternal health. Let’s hope the momentum continues.

2. Affordability

The rising cost of healthcare — and the cost of medications in particular — is a political hot potato and will remain so. No matter what the U.S. Food and Drug Administration might say or attempt, a large swath of the public, their federal representatives, and their governors do not seem to believe the pharmaceutical industry’s argument that research and development are funded by today’s prices and that price controls could retard R&D.

The ethical concerns are likely to get still more heated when the value of expensive biotech treatments for chronic illnesses is debated. After all, a needed pill for cholesterol might cost $3 daily, which amounts to nearly $1,100 per year. Compare that to a biologic that carries a $20,000-per-year price tag — or something even more costly.

The cost of defending the United States against bioterrorism raises a host of issues, says David Krause, MD, of Vicuron Pharmaceuticals. “If we fund this, what are we not funding?” he asks. “And can we ever predict all the possible terror threats?”

“Affordability is, arguably, an issue across the board,” says Jeff Kimmell, RPh, vice president of healthcare services and chief pharmacy officer, in Bellevue, Wash. “In the United States, we say we want the best [treatments]. But it’s also an ethical dilemma. At what point will people say ‘Enough is enough?’”

This may place payers and purchasers, who are already struggling with the question of how much cost sharing is appropriate, on the defensive. Insurers and employers juggle actuarial concerns with the risks of patient nonadherence and its potential for poor clinical outcomes when coverage decisions are made. The ethical questions do not fit neatly into this decision-making process but, rather, transcend it.

What happens when some patients can’t afford the out-of-pocket share of a given treatment? What if an insurer declines to add a biologic to its formulary because of its acquisition cost? What happens when a patient on an expensive chronic therapy maxes out his lifetime insurance benefit? Such instances may not be the norm, but their possibility disturbs some experts who see a pivotal clash between patients and profits.

“It’s certainly an economic issue if biologics are priced so high that some patients are priced out of the market,” says Sean Nicholson, PhD, assistant professor of policy analysis and management at Cornell University. “Perhaps an insurer may not cover a particular therapy. If there’s nothing else the patient could take to save his or her life, or to improve quality of life, that’s a dilemma.”

What Is Ethics in Research & Why Is It Important?

When most people think of ethics (or morals), they think of rules for distinguishing between right and wrong, such as the Golden Rule ("Do unto others as you would have them do unto you"), a code of professional conduct like the Hippocratic Oath ("First of all, do no harm"), a religious creed like the Ten Commandments ("Thou shalt not kill"), or wise aphorisms like the sayings of Confucius. This is the most common way of defining "ethics": norms for conduct that distinguish between acceptable and unacceptable behavior.

Most people learn ethical norms at home, at school, in church, or in other social settings. Although most people acquire their sense of right and wrong during childhood, moral development occurs throughout life and human beings pass through different stages of growth as they mature. Ethical norms are so ubiquitous that one might be tempted to regard them as simple common sense. On the other hand, if morality were nothing more than common sense, then why are there so many ethical disputes and issues in our society?

One plausible explanation of these disagreements is that all people recognize some common ethical norms but interpret, apply, and balance them in different ways in light of their own values and life experiences. For example, two people could agree that murder is wrong but disagree about the morality of abortion because they have different understandings of what it means to be a human being.

Most societies also have legal rules that govern behavior, but ethical norms tend to be broader and more informal than laws. Although most societies use laws to enforce widely accepted moral standards and ethical and legal rules use similar concepts, ethics and law are not the same. An action may be legal but unethical or illegal but ethical. We can also use ethical concepts and principles to criticize, evaluate, propose, or interpret laws. Indeed, in the last century, many social reformers have urged citizens to disobey laws they regarded as immoral or unjust. Peaceful civil disobedience is an ethical way of protesting laws or expressing political viewpoints.

Another way of defining 'ethics' focuses on the disciplines that study standards of conduct, such as philosophy, theology, law, psychology, or sociology. For example, a "medical ethicist" is someone who studies ethical standards in medicine. One may also define ethics as a method, procedure, or perspective for deciding how to act and for analyzing complex problems and issues. For instance, in considering a complex issue like global warming, one may take an economic, ecological, political, or ethical perspective on the problem. While an economist might examine the cost and benefits of various policies related to global warming, an environmental ethicist could examine the ethical values and principles at stake.

Many different disciplines, institutions, and professions have standards for behavior that suit their particular aims and goals. These standards also help members of the discipline to coordinate their actions or activities and to establish the public's trust of the discipline. For instance, ethical standards govern conduct in medicine, law, engineering, and business. Ethical norms also serve the aims or goals of research and apply to people who conduct scientific research or other scholarly or creative activities. There is even a specialized discipline, research ethics, which studies these norms. See Glossary of Commonly Used Terms in Research Ethics.

There are several reasons why it is important to adhere to ethical norms in research. First, norms promote the aims of research, such as knowledge, truth, and avoidance of error. For example, prohibitions against fabricating, falsifying, or misrepresenting research data promote the truth and minimize error.

Second, since research often involves a great deal of cooperation and coordination among many different people in different disciplines and institutions, ethical standards promote the values that are essential to collaborative work, such as trust, accountability, mutual respect, and fairness. For example, many ethical norms in research, such as guidelines for authorship, copyright and patenting policies, data sharing policies, and confidentiality rules in peer review, are designed to protect intellectual property interests while encouraging collaboration. Most researchers want to receive credit for their contributions and do not want to have their ideas stolen or disclosed prematurely.

Third, many of the ethical norms help to ensure that researchers can be held accountable to the public. For instance, federal policies on research misconduct, conflicts of interest, human subjects protections, and animal care and use are necessary in order to make sure that researchers who are funded by public money can be held accountable to the public.

Fourth, ethical norms in research also help to build public support for research. People are more likely to fund a research project if they can trust the quality and integrity of research.

Finally, many of the norms of research promote a variety of other important moral and social values, such as social responsibility, human rights, animal welfare, compliance with the law, and public health and safety. Ethical lapses in research can significantly harm human and animal subjects, students, and the public. For example, a researcher who fabricates data in a clinical trial may harm or even kill patients, and a researcher who fails to abide by regulations and guidelines relating to radiation or biological safety may jeopardize his health and safety or the health and safety of staff and students.

Codes and Policies for Research Ethics

Given the importance of ethics for the conduct of research, it should come as no surprise that many different professional associations, government agencies, and universities have adopted specific codes, rules, and policies relating to research ethics. Many government agencies have ethics rules for funded researchers.

  • National Institutes of Health (NIH)
  • National Science Foundation (NSF)
  • Food and Drug Administration (FDA)
  • Environmental Protection Agency (EPA)
  • US Department of Agriculture (USDA)
  • Singapore Statement on Research Integrity
  • American Chemical Society, The Chemist Professional’s Code of Conduct
  • Code of Ethics (American Society for Clinical Laboratory Science)
  • American Psychological Association, Ethical Principles of Psychologists and Code of Conduct
  • Statement on Professional Ethics (American Association of University Professors)
  • Nuremberg Code
  • World Medical Association's Declaration of Helsinki

Ethical Principles

The following is a rough and general summary of some ethical principles that various codes address*:


Honesty

Strive for honesty in all scientific communications. Honestly report data, results, methods and procedures, and publication status. Do not fabricate, falsify, or misrepresent data. Do not deceive colleagues, research sponsors, or the public.


Objectivity

Strive to avoid bias in experimental design, data analysis, data interpretation, peer review, personnel decisions, grant writing, expert testimony, and other aspects of research where objectivity is expected or required. Avoid or minimize bias or self-deception. Disclose personal or financial interests that may affect research.


Integrity

Keep your promises and agreements; act with sincerity; strive for consistency of thought and action.


Carefulness

Avoid careless errors and negligence; carefully and critically examine your own work and the work of your peers. Keep good records of research activities, such as data collection, research design, and correspondence with agencies or journals.


Openness

Share data, results, ideas, tools, and resources. Be open to criticism and new ideas.


Transparency

Disclose methods, materials, assumptions, analyses, and other information needed to evaluate your research.


Accountability

Take responsibility for your part in research and be prepared to give an account (i.e., an explanation or justification) of what you did on a research project and why.

Intellectual Property

Honor patents, copyrights, and other forms of intellectual property. Do not use unpublished data, methods, or results without permission. Give proper acknowledgement or credit for all contributions to research. Never plagiarize.


Confidentiality

Protect confidential communications, such as papers or grants submitted for publication, personnel records, trade or military secrets, and patient records.

Responsible Publication

Publish in order to advance research and scholarship, not to advance just your own career. Avoid wasteful and duplicative publication.

Responsible Mentoring

Help to educate, mentor, and advise students. Promote their welfare and allow them to make their own decisions.

Respect for Colleagues

Respect your colleagues and treat them fairly.

Social Responsibility

Strive to promote social good and prevent or mitigate social harms through research, public education, and advocacy.


Non-Discrimination

Avoid discrimination against colleagues or students on the basis of sex, race, ethnicity, or other factors not related to scientific competence and integrity.


Competence

Maintain and improve your own professional competence and expertise through lifelong education and learning; take steps to promote competence in science as a whole.


Legality

Know and obey relevant laws and institutional and governmental policies.

Animal Care

Show proper respect and care for animals when using them in research. Do not conduct unnecessary or poorly designed animal experiments.

Human Subjects Protection

When conducting research on human subjects, minimize harms and risks and maximize benefits; respect human dignity, privacy, and autonomy; take special precautions with vulnerable populations; and strive to distribute the benefits and burdens of research fairly.

* Adapted from Shamoo A and Resnik D. 2015. Responsible Conduct of Research, 3rd ed. (New York: Oxford University Press).

Ethical Decision Making in Research

Although codes, policies, and principles are very important and useful, like any set of rules, they do not cover every situation, they often conflict, and they require considerable interpretation. It is therefore important for researchers to learn how to interpret, assess, and apply various research rules and how to make decisions and act ethically in various situations. The vast majority of decisions involve the straightforward application of ethical rules. For example, consider the following case:

The research protocol for a study of a drug on hypertension requires the administration of the drug at different doses to 50 laboratory mice, with chemical and behavioral tests to determine toxic effects. Tom has almost finished the experiment for Dr. Q. He has only 5 mice left to test. However, he really wants to finish his work in time to go to Florida on spring break with his friends, who are leaving tonight. He has injected the drug in all 50 mice but has not completed all of the tests. He therefore decides to extrapolate from the 45 completed results to produce the 5 additional results.

Many different research ethics policies would hold that Tom has acted unethically by fabricating data. If this study were sponsored by a federal agency, such as the NIH, his actions would constitute a form of research misconduct, which the government defines as "fabrication, falsification, or plagiarism" (or FFP). Actions that nearly all researchers classify as unethical are viewed as misconduct. It is important to remember, however, that misconduct occurs only when researchers intend to deceive: honest errors related to sloppiness, poor record keeping, miscalculations, bias, self-deception, and even negligence do not constitute misconduct. Also, reasonable disagreements about research methods, procedures, and interpretations do not constitute research misconduct. Consider the following case:

Dr. T has just discovered a mathematical error in his paper that has been accepted for publication in a journal. The error does not affect the overall results of his research, but it is potentially misleading. The journal has just gone to press, so it is too late to catch the error before it appears in print. In order to avoid embarrassment, Dr. T decides to ignore the error.

Dr. T's error is not misconduct, nor is his decision to take no action to correct the error. Most researchers, as well as many different policies and codes, would say that Dr. T should tell the journal (and any coauthors) about the error and consider publishing a correction or erratum. Failing to publish a correction would be unethical because it would violate norms relating to honesty and objectivity in research.

There are many other activities that the government does not define as "misconduct" but which are still regarded by most researchers as unethical. These are sometimes referred to as "other deviations" from acceptable research practices and include:

  • Publishing the same paper in two different journals without telling the editors
  • Submitting the same paper to different journals without telling the editors
  • Not informing a collaborator of your intent to file a patent in order to make sure that you are the sole inventor
  • Including a colleague as an author on a paper in return for a favor even though the colleague did not make a serious contribution to the paper
  • Discussing with your colleagues confidential data from a paper that you are reviewing for a journal
  • Using data, ideas, or methods you learn about while reviewing a grant or a paper without permission
  • Trimming outliers from a data set without discussing your reasons in the paper
  • Using an inappropriate statistical technique in order to enhance the significance of your research
  • Bypassing the peer review process and announcing your results through a press conference without giving peers adequate information to review your work
  • Conducting a review of the literature that fails to acknowledge the contributions of other people in the field or relevant prior work
  • Stretching the truth on a grant application in order to convince reviewers that your project will make a significant contribution to the field
  • Stretching the truth on a job application or curriculum vita
  • Giving the same research project to two graduate students in order to see who can do it the fastest
  • Overworking, neglecting, or exploiting graduate or post-doctoral students
  • Failing to keep good research records
  • Failing to maintain research data for a reasonable period of time
  • Making derogatory comments and personal attacks in your review of an author's submission
  • Promising a student a better grade for sexual favors
  • Using a racist epithet in the laboratory
  • Making significant deviations from the research protocol approved by your institution's Animal Care and Use Committee or Institutional Review Board for Human Subjects Research without telling the committee or the board
  • Not reporting an adverse event in a human research experiment
  • Wasting animals in research
  • Exposing students and staff to biological risks in violation of your institution's biosafety rules
  • Sabotaging someone's work
  • Stealing supplies, books, or data
  • Rigging an experiment so you know how it will turn out
  • Making unauthorized copies of data, papers, or computer programs
  • Owning over $10,000 in stock in a company that sponsors your research and not disclosing this financial interest
  • Deliberately overestimating the clinical significance of a new drug in order to obtain economic benefits

These actions would be regarded as unethical by most scientists, and some may even be illegal. Most of these would also violate different professional ethics codes or institutional policies. However, they do not fall into the narrow category of actions that the government classifies as research misconduct. Indeed, there has been considerable debate about the definition of "research misconduct" and many researchers and policy makers are not satisfied with the government's narrow definition that focuses on FFP. However, given the huge list of potential offenses that might fall into the category "other serious deviations," and the practical problems with defining and policing these other deviations, it is understandable why government officials have chosen to limit their focus.

Finally, situations frequently arise in research in which different people disagree about the proper course of action and there is no broad consensus about what should be done. In these situations, there may be good arguments on both sides of the issue, and different ethical principles may conflict. These situations create difficult decisions for researchers, known as ethical or moral dilemmas. Consider the following case:

Dr. Wexford is the principal investigator of a large epidemiological study on the health of 10,000 agricultural workers. She has an impressive dataset that includes information on demographics, environmental exposures, diet, genetics, and various disease outcomes such as cancer, Parkinson's disease (PD), and ALS. She has just published a paper on the relationship between pesticide exposure and PD in a prestigious journal. She is planning to publish many other papers from her dataset. She receives a request from another research team that wants access to her complete dataset. They are interested in examining the relationship between pesticide exposures and skin cancer. Dr. Wexford was planning to conduct a study on this topic.

Dr. Wexford faces a difficult choice. On the one hand, the ethical norm of openness obliges her to share data with the other research team. Her funding agency may also have rules that obligate her to share data. On the other hand, if she shares data with the other team, they may publish results that she was planning to publish, thus depriving her (and her team) of recognition and priority. It seems that there are good arguments on both sides of this issue and Dr. Wexford needs to take some time to think about what she should do. One possible option is to share data, provided that the investigators sign a data use agreement. The agreement could define allowable uses of the data, publication plans, authorship, etc. Another option would be to offer to collaborate with the researchers.

The following are some steps that researchers, such as Dr. Wexford, can take to deal with ethical dilemmas in research:

What is the problem or issue?

It is always important to get a clear statement of the problem. In this case, the issue is whether to share information with the other research team.

What is the relevant information?

Many bad decisions are made as a result of poor information. To know what to do, Dr. Wexford needs more information concerning such matters as university, funding agency, or journal policies that may apply to this situation, the team's intellectual property interests, the possibility of negotiating some kind of agreement with the other team, whether the other team also has some information it is willing to share, the impact of the potential publications, etc.

What are the different options?

People may fail to see different options due to a limited imagination, bias, ignorance, or fear. In this case, there may be other choices besides 'share' or 'don't share,' such as 'negotiate an agreement' or 'offer to collaborate with the researchers.'

How do ethical codes or policies as well as legal rules apply to these different options?

The university or funding agency may have policies on data management that apply to this case. Broader ethical rules, such as openness and respect for credit and intellectual property, may also apply to this case. Laws relating to intellectual property may be relevant.

Are there any people who can offer ethical advice?

It may be useful to seek advice from a colleague, a senior researcher, your department chair, an ethics or compliance officer, or anyone else you can trust. In this case, Dr. Wexford might want to talk to her supervisor and research team before making a decision.

After considering these questions, a person facing an ethical dilemma may decide to ask more questions, gather more information, explore different options, or consider other ethical rules. However, at some point he or she will have to make a decision and then take action. Ideally, a person who makes a decision in an ethical dilemma should be able to justify it to himself or herself, as well as to colleagues, administrators, and other people who might be affected by the decision. He or she should be able to articulate reasons for the decision and should consider the following questions in explaining how it was reached:

  • Which choice will probably have the best overall consequences for science and society?
  • Which choice could stand up to further publicity and scrutiny?
  • Which choice could you not live with?
  • Think of the wisest person you know. What would he or she do in this situation?
  • Which choice would be the most just, fair, or responsible?

After considering all of these questions, one still might find it difficult to decide what to do. If this is the case, then it may be appropriate to consider other ways of making the decision, such as going with a gut feeling or intuition, seeking guidance through prayer or meditation, or even flipping a coin. Endorsing these methods in this context need not imply that ethical decisions are irrational, however. The main point is that human reasoning plays a pivotal role in ethical decision-making, but there are limits to its ability to solve all ethical dilemmas in a finite amount of time.

Promoting Ethical Conduct in Science

Most academic institutions in the US require undergraduate, graduate, or postgraduate students to have some education in the responsible conduct of research (RCR). The NIH and NSF have both mandated training in research ethics for students and trainees. Many academic institutions outside of the US have also developed educational curricula in research ethics.

Those of you who are taking or have taken courses in research ethics may be wondering why you are required to have education in research ethics. You may believe that you are highly ethical and know the difference between right and wrong. You would never fabricate or falsify data or plagiarize. Indeed, you may also believe that most of your colleagues are highly ethical and that there is no ethics problem in research.

If you feel this way, relax. No one is accusing you of acting unethically. Indeed, the evidence produced so far shows that misconduct is a very rare occurrence in research, although there is considerable variation among various estimates. The rate of misconduct has been estimated to be as low as 0.01% of researchers per year (based on confirmed cases of misconduct in federally funded research) to as high as 1% of researchers per year (based on self-reports of misconduct on anonymous surveys). See Shamoo and Resnik (2015), cited above.

Clearly, it would be useful to have more data on this topic, but so far there is no evidence that science has become ethically corrupt, despite some highly publicized scandals. Even if misconduct is only a rare occurrence, it can still have a tremendous impact on science and society because it can compromise the integrity of research, erode the public's trust in science, and waste time and resources. Will education in research ethics help reduce the rate of misconduct in science? It is too early to tell. The answer to this question depends, in part, on how one understands the causes of misconduct. There are two main theories about why researchers commit misconduct. According to the "bad apple" theory, most scientists are highly ethical. Only researchers who are morally corrupt, economically desperate, or psychologically disturbed commit misconduct. Moreover, only a fool would commit misconduct because science's peer review system and self-correcting mechanisms will eventually catch those who try to cheat the system. In any case, a course in research ethics will have little impact on "bad apples," one might argue.

According to the "stressful" or "imperfect" environment theory, misconduct occurs because various institutional pressures, incentives, and constraints encourage people to commit misconduct, such as pressures to publish or obtain grants or contracts, career ambitions, the pursuit of profit or fame, poor supervision of students and trainees, and poor oversight of researchers (see Shamoo and Resnik 2015). Moreover, defenders of the stressful environment theory point out that science's peer review system is far from perfect and that it is relatively easy to cheat the system. Erroneous or fraudulent research often enters the public record without being detected for years. Misconduct probably results from environmental and individual causes, i.e., when people who are morally weak, ignorant, or insensitive are placed in stressful or imperfect environments. In any case, a course in research ethics can be useful in helping to prevent deviations from norms even if it does not prevent misconduct. Education in research ethics can help people get a better understanding of ethical standards, policies, and issues and improve ethical judgment and decision making. Many of the deviations that occur in research may occur because researchers simply do not know or have never thought seriously about some of the ethical norms of research. For example, some unethical authorship practices probably reflect traditions and practices that have not been questioned seriously until recently. If the director of a lab is named as an author on every paper that comes from his lab, even if he does not make a significant contribution, what could be wrong with that? That's just the way it's done, one might argue. Another example where there may be some ignorance or mistaken traditions is conflicts of interest in research.
A researcher may think that a "normal" or "traditional" financial relationship, such as accepting stock or a consulting fee from a drug company that sponsors her research, raises no serious ethical issues. Or perhaps a university administrator sees no ethical problem in taking a large gift with strings attached from a pharmaceutical company. Maybe a physician thinks that it is perfectly appropriate to receive a $300 finder's fee for referring patients into a clinical trial.

If "deviations" from ethical conduct occur in research as a result of ignorance or a failure to reflect critically on problematic traditions, then a course in research ethics may help reduce the rate of serious deviations by improving the researcher's understanding of ethics and by sensitizing him or her to the issues.

Finally, education in research ethics should be able to help researchers grapple with the ethical dilemmas they are likely to encounter by introducing them to important concepts, tools, principles, and methods that can be useful in resolving these dilemmas. Scientists must deal with a number of different controversial topics, such as human embryonic stem cell research, cloning, genetic engineering, and research involving animal or human subjects, which require ethical reflection and deliberation.

Research Ethics Timeline

Note: This list is the author's own interpretation of some important events in the history of research ethics and does not include every event that some people might regard as important. I am open to suggestions for additions, revisions, etc.

Francis Bacon publishes the Novum Organum, in which he argues that scientific research should benefit humanity.

Galileo Galilei publishes his Dialogue Concerning the Two Chief World Systems, in which he defends a heliocentric theory of the solar system, a view that contradicted the Catholic Church's position that the Earth does not move but that the Sun moves around it. In 1633, Galileo appeared before an inquisitor of the Catholic Church. Under threat of torture, he recanted his views but was still sentenced to house arrest for the remainder of his life. The Church banned his book. In 1992, 359 years after Galileo's trial, Pope John Paul II formally apologized for the Church's treatment of Galileo.

The Royal Society of London institutes peer review procedures for articles submitted to The Philosophical Transactions of the Royal Society of London. The Philosophical Transactions, the world's first scientific journal, was first published in 1665.

Edward Jenner inoculates eight-year-old James Phipps with fluid from a cowpox pustule to immunize him against smallpox.

Charles Babbage publishes Reflections on the Decline of Science in England, And Some of Its Causes, in which he argues that many of his colleagues were engaging in dishonest research practices, including fabricating, cooking, trimming, and fudging data.

Charles Darwin publishes The Origin of Species, which proposes a theory of the evolution of living things by natural selection. The book generates a great deal of controversy because it implies that human beings were not created by God (as most religions claimed) but descended from ape-like ancestors. Darwin collected most of the data for the theory while serving as the ship's naturalist on the voyage of the HMS Beagle (1831-1836). He waited over twenty years to publish his ideas because he knew they would meet with strong opposition and he wanted to ensure that he could back up his claims with evidence and arguments. Charles Lyell urged Darwin to publish his theory after reading a paper by Alfred Wallace that proposed a similar theory, so that Darwin could establish priority. Instead, Darwin shared credit with Wallace.

Louis Pasteur administers an experimental rabies vaccine to nine-year-old Joseph Meister without testing it on animals first.

Roberts Bartholow inserts electrodes into a hole in the skull of Mary Rafferty caused by a tumor. He notes that small amounts of electric current caused bodily movements and that larger amounts caused pain. Rafferty, who was mentally ill, fell into a coma and died a few days after the experiment.

Giuseppe Sanarelli injects the yellow fever bacteria into five patients without their consent. All the patients developed the disease and three died.

Walter Reed experiments to determine the cause of yellow fever. Thirty-three participants, including eighteen Americans and six Cubans, were exposed to mosquitoes infected with yellow fever or injected with blood from yellow fever patients. Six participants died, including two researcher-volunteers. The participants all signed consent forms, some of which were translated into Spanish.

Robert Millikan performs oil drop experiments to determine the charge of an electron. Millikan received a Nobel Prize for this research in 1923. Historians and journalists who studied Millikan's notebooks discovered that he did not report 33 out of 149 oil drop observations that he had marked as "fair" or "poor." Millikan also did not name his student, Harvey Fletcher, as an author on the paper that reported the results of these experiments, even though Fletcher made important contributions to the design of these experiments, such as suggesting that Millikan use oil droplets instead of water droplets.

Amateur fossil collector Charles Dawson discovers a skull in a gravel bed at Piltdown, Sussex, U.K. It was thought to be the fossilized remains of a species intermediate between humans and apes (i.e., a "missing link"). A controversy surrounded the skull for decades, and many scientists believed it to be fake. Chemical analyses performed in 1953 confirmed these suspicions by showing that the skull was a combination of a human skull and an orangutan jaw, which had been treated with chemicals to make them appear old. The identity of the forger is still unknown, though most historians suspect Dawson.

The University of Wisconsin establishes the Wisconsin Alumni Research Foundation (WARF), an independent organization that manages intellectual property (e.g. patents) and investments owned by the university and supports scientific innovation and discovery on campus. At that time, few universities owned or managed patents that were awarded to their researchers. WARF helps Harry Steenbock develop his invention for fortifying fats with vitamin D.

The Tuskegee Syphilis Study, sponsored by the U.S. Department of Health, Education and Welfare, begins in 1932. The study investigated the effects of untreated syphilis in 400 African American men from the Tuskegee, Alabama area. The researchers did not tell the subjects that they were in an experiment. Most subjects who attended the Tuskegee clinic thought they were getting treatment for "bad blood." Researchers withheld treatment for the disease from participants even when penicillin, an effective form of treatment, became widely available in the 1950s. The study ended in 1972, after a news story from the Associated Press alerted the public and Congress to the ethical problems with the research. The U.S. government settled a lawsuit brought by the participants and their families.

Japanese scientists working at Unit 731 performed morally abominable experiments on thousands of Chinese prisoners of war, including biological and chemical weapons experiments, vaccination experiments, and wound-healing and surgical studies, including vivisections. The U.S. government agreed not to prosecute the scientists for war crimes in exchange for data from the biological and chemical weapons research. Unit 731 of the Imperial Japanese Army also conducted research on Korean prisoners and civilians, such as Dong Ju Yoon (arguably the most famous modern-era Korean poet) and Chung-Chun Lee (a Korean national hero and freedom fighter), as well as Mongolians, Manchurians (separate from Chinese), and Russians.

German scientists conducted morally abominable research on concentration camp prisoners, including experiments that exposed subjects to freezing temperatures, low air pressures, ionizing radiation and electricity, and infectious diseases, as well as wound-healing and surgical studies. The Allies prosecuted the German scientists for war crimes in the Nuremberg Trials, from which the Nuremberg Code for research on human subjects emerged.

Two refugee scientists, Otto Frisch and Rudolf Peierls, warn the British government that an atomic bomb is feasible. Albert Einstein sends a letter to President Roosevelt warning him about the threat posed by Germany. The letter, written by Leó Szilárd in consultation with Edward Teller and Eugene Wigner and signed by Einstein, suggested that the U.S. should develop a nuclear weapons program.

The U.S. conducts the $2 billion Manhattan Project to develop an atomic bomb. General Leslie Groves directs the Project and physicist J. Robert Oppenheimer oversees the scientific work.

The U.S. Department of Energy sponsors secret research on the effects of radiation on human beings. Subjects were not told that they were participating in the experiments. Experiments were conducted on cancer patients, pregnant women, and military personnel.

The U.S. drops atomic bombs on Hiroshima and Nagasaki, Japan, killing an estimated 200,000 civilians.

Led by President Eisenhower and atomic bomb scientist Robert Oppenheimer, the "atoms for peace" movement begins.

Vannevar Bush writes the report Science: The Endless Frontier for President Roosevelt. The report argues for a major increase in government spending on science and defends the ideal of a self-governing scientific community free from significant public oversight. It advocates for investment in science and technology as a means of promoting national security and economic development.

The Nuremberg Code, the first international code of ethics for research on human subjects, is adopted.

Norbert Wiener, the founder of cybernetics, publishes an article in the Atlantic Monthly titled "A Scientist Rebels," in which he declares that he will no longer conduct research for the military.

Alfred Kinsey publishes Sexual Behavior in the Human Male. Five years later, he publishes Sexual Behavior in the Human Female. These books were very controversial, because they examined topics which were regarded as taboo at the time, such as masturbation, orgasm, intercourse, promiscuity, and sexual fantasies. Kinsey could not obtain public funding for the research, so he funded it privately through the Kinsey Institute.

The Soviet Union tests an atomic bomb, and the Cold War begins.

James Watson and Francis Crick propose a model for the structure of DNA, for which they would eventually share the Nobel Prize in 1962. They secretly obtained key X-ray diffraction data from Rosalind Franklin without her permission. Franklin was not named as an author on Watson and Crick's paper. She was not awarded a Nobel Prize because she died of ovarian cancer in 1958 (at age 37), and the prize is not awarded posthumously.

Saul Krugman, Joan Giles and other researchers conduct hepatitis experiments on mentally disabled children at The Willowbrook State School. They intentionally infected subjects with the disease and observed its natural progression. The experiments were approved by the New York Department of Health.

The CIA begins a mind control research program, which includes administering LSD and other drugs to unwitting subjects.

The Soviets launch Sputnik, the first artificial satellite, which prompts the U.S. government to increase its investments in science and technology to avoid falling behind in the space race.

In 1957, thalidomide is marketed in West Germany as a medication to treat morning sickness during pregnancy. About 10,000 infants, mostly in West Germany, are born with severe birth defects as a result of exposure to this drug, and about 2,000 children die from thalidomide exposure. In 1960, Frances Kathleen Oldham Kelsey, a drug reviewer for the FDA, refuses to approve the drug. Soon, countries around the world ban the drug. Kelsey is awarded the President's Award for Distinguished Federal Civilian Service in 1962.

President John F. Kennedy commits the U.S. to the goal of putting a man on the moon by the end of the decade.

Rachel Carson publishes Silent Spring, which alerts people to the harmful effects on the environment of various toxins and pollutants, including DDT. Her book launches the environmentalist movement.

Stanley Milgram conducts his "electric shock" experiments, which showed that people are willing to do things that they consider to be morally wrong when following the orders of an authority. The experiments, which had several variations, included a learner, a teacher, and a researcher. The learner was connected to electrodes. If the learner gave an incorrect response to a question, the researcher would instruct the teacher to push a button on a machine to give the learner an electric shock. Teachers were willing to do this even when the dial on the machine was turned up to "dangerous" levels and the learner was crying out in pain and asking for the experiments to stop. In reality, no shocks were given. The purpose of the experiments was to test subjects' willingness to obey an authority figure. Since then, other researchers who have repeated these experiments have obtained similar results.

The World Medical Association publishes the Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. The Helsinki Declaration has been revised numerous times, most recently in 2013.

The U.S. Surgeon General's office issues its first of several reports on health problems related to smoking.

Henry Beecher publishes an article in the New England Journal of Medicine alerting scientists and doctors to 22 unethical studies, including the Tuskegee syphilis study and the Willowbrook hepatitis study.

The animal rights movement impacts scientific research. The U.S. Public Health Service publishes its Guide for the Humane Care and Use of Laboratory Animals in 1963. The Guide requires research institutions to form Institutional Animal Care and Use Committees (IACUCs) to review and oversee animal experiments. The U.S. Congress adopts the Animal Welfare Act in 1966, which protects animals used in research, excluding rodents and birds. Various states adopt or revise animal cruelty laws, which also protect animals used in research. In 1975, Peter Singer publishes Animal Liberation, which provides a philosophical defense of the animal rights movement. Singer argues that most animal research is immoral.

The U.S. lands the first man on the moon.

After conducting hearings on unethical research involving human subjects, including the Tuskegee study, Congress passes the National Research Act in 1973, which President Nixon signs in 1974. The Act authorizes federal agencies (e.g. the NIH and FDA) to develop human research regulations. The regulations require institutions to form Institutional Review Boards (IRBs) to review and oversee research with human subjects.

William Summerlin admits to fabricating data by using a marker to make black spots on white mice at Sloan Kettering Cancer Institute. He was developing a technique for transplanting skin grafts.

Monsanto and Harvard reach a deal for the first major corporate investment in a university.

Scientists gather at Asilomar, California to discuss the benefits and risks of recombinant DNA experiments and agree upon a temporary moratorium for this research until they can develop biosafety standards. The NIH forms the Recombinant DNA Advisory Committee to provide guidance for researchers and institutions. Research institutions form Institutional Biosafety Committees (IBCs) to review and oversee research involving hazardous biological materials.

E.O. Wilson publishes Sociobiology, which reignites the centuries-old "nature vs. nurture" debate. His book proposes biological and evolutionary explanations of human behavior and culture.

Louise Brown, the world's first baby conceived by in vitro fertilization, is born in the U.K. She is currently alive and healthy.

The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research publishes the Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. The Report provides the conceptual foundation for a major revision of the U.S. research regulations in 1981.

Congress passes the Bayh-Dole Act, which allows researchers to patent inventions developed with government funds; the Act is amended by the Technology Transfer Act in 1986.

In Diamond v. Chakrabarty, the U.S. Supreme Court rules that a genetically modified bacterium can be patented because it is the product of human ingenuity. This sets a precedent for patents on other life forms and helps to establish solid intellectual property protection for the new biotechnology industry.

The Whitehead Institute is established at MIT, another major private investment in a university.

The Department of Health and Human Services (formerly the Department of Health, Education, and Welfare) conducts major revisions of the federal regulations for human subjects research.

John Darsee, a postdoctoral fellow at Harvard, is accused of fabricating data. 17 of his papers were retracted.

William Broad and Nicholas Wade publish Betrayers of Truth. The book claims that there is more misconduct in science than researchers want to admit and suggests that famous scientists, including Isaac Newton, Gregor Mendel, and Robert Millikan were not completely honest with their data. Their book helps to launch an era of "fraud busting" in science.

Luc Montagnier accuses Robert Gallo of misappropriating an HIV strain. Gallo is found innocent of misconduct. Gallo and Montagnier also have a dispute about who should be credited with discovering HIV and who can patent a test for the virus. The U.S. and French governments reach an agreement to settle the controversy.

Roger Boisjoly warns NASA about possible O-ring failure, due to cold weather, in the space shuttle Challenger. NASA decides to go ahead with the launch, and the Challenger explodes, killing the entire crew.

A NIMH panel concludes that Steven Breuning fabricated and falsified data in 24 papers. Breuning is convicted of defrauding the federal government in 1988.

Martin Luther King, Jr. is accused of plagiarizing his Ph.D. dissertation.

Margot O'Toole, a post-doctoral student at the Whitehead Institute, has some questions about data in a paper authored by six of her colleagues and published in the journal Cell in 1986. She asks to examine Thereza Imanishi-Kari's lab notebooks, which seem to be inconsistent with published results. She accuses Imanishi-Kari of fabricating and falsifying data. The ensuing investigation leads to inquiries by MIT and Tufts as well as the NIH and a Congressional committee chaired by Rep. John Dingell. Nobel Prize winner David Baltimore is one of the co-authors on the disputed paper. Although he was not accused of misconduct, Baltimore resigned as President of Rockefeller University. He described the investigation, which was covered by the New York Times, as a "witch hunt." An appeals board at the DHHS eventually exonerated Imanishi-Kari, who admitted only to poor record keeping.

Harvard and Dow Chemical patent a genetically engineered mouse used to study cancer.

The PHS forms two agencies, the Office of Scientific Integrity and the Office of Scientific Integrity Review to investigate scientific misconduct and provide information and support for universities. It also amends its definition of misconduct. The two agencies are reorganized in 1992 as the Office of Research Integrity (ORI).

The NIH requires that all graduate students on training grants receive education in responsible conduct of research.

Stanley Pons and Martin Fleischmann hold a press conference at the University of Utah to announce that they have discovered a way to produce nuclear fusion at room temperatures. Dozens of labs across the world fail to reproduce their results. They are accused of fraud, sloppiness, and self-deception.

The NAS publishes On Being A Scientist (revised in 1994 and 2009), which is a free, short book on research ethics for scientists in training.

The U.S. launches the Human Genome Project, a $3 billion effort to map and sequence the human genome.

W. French Anderson begins the first human gene therapy clinical trial on patients with ADA deficiency, a genetic disease that affects the immune system.

In Moore v. Regents of the University of California, the California Supreme Court rules that researchers have intellectual property rights in a cell line derived from Moore's tissue, but that Moore did not have any property rights in his own tissue. The Court also rules that the researchers violated Moore's right to informed consent by not disclosing their commercial interests in his tissue sample to him. Most courts have followed this ruling, holding that patients do not retain rights to tissue left over from surgeries or procedures or donated to researchers.

Congress investigates conflicts of interest involving Pharmatec and the University of Florida.

Europeans oppose the introduction of genetically modified foods and crops. Consumers in the U.S. are more receptive to GM plants and animals. Europeans eventually allow GM foods but require them to be labeled as such. The U.S. does not mandate labeling of GM foods, but many manufacturers and suppliers voluntarily label foods as "no GMOs" or "not GM."

U.S. federal agencies revise their human research regulations. All U.S. government agencies, except the EPA, now accept one basic regulatory framework, known as "the Common Rule" (45 CFR 46).

NAS publishes Responsible Science: Ensuring the Integrity of the Research Process. The book estimates the incidence of misconduct, discusses some of the causes of misconduct, proposes a definition of misconduct, and recommends some strategies for preventing misconduct.

In Daubert v. Merrell Dow Pharmaceuticals the U.S. Supreme Court rules that judges serve as the gatekeepers for admitting scientific testimony in court and that they can use a variety of criteria, including testability, reliability, peer review, and general acceptance for determining whether testimony is scientific.

Fertility researchers successfully clone human embryos.

Harvard psychologist Richard Herrnstein and Charles Murray publish The Bell Curve, a controversial book that reignites the centuries-old debate about biology, race, and intelligence.

Roger Poisson admits to fabricating and falsifying patient data in NIH-funded breast cancer clinical trials in order to allow his patients to qualify for enrollment and have access to experimental treatments.

The NIH applies for patents on thousands of gene fragments in order to undercut private efforts to patent gene fragments. The Patent Office rejects the NIH's applications.

The Ryan Commission, convened by NIH, holds meetings on scientific misconduct.

The Clinton Administration declassifies information about secret human radiation experiments conducted from the 1940s to the 1980s and issues an apology.

Two scientists who worked at Philip Morris, Victor DeNoble and Paul Mele, testify before Congress about secret research on the addictive properties of nicotine. Had the research been made public, the FDA or Congress might have taken additional steps to regulate tobacco as a drug. Many states and individuals bring litigation against tobacco companies, leading to a $206 billion settlement between tobacco companies and 46 states. The scientific community also publishes more data on the dangers of second-hand smoke.

Boots Pharmaceuticals pressures Betty Dong to withdraw a paper from publication in JAMA showing that its drug, Synthroid, is not more effective than generic equivalents at treating hypothyroidism.

Dozens of studies published in biomedical journals provide data on the relationship between the source of research funding and the outcomes of research studies, the financial interests of researchers in the biomedical sciences, and the close relationship between academic researchers and the pharmaceutical and biotechnology industries.

The NIH and NSF revise their conflict of interest policies.

Scientists and defense analysts become concerned about the use of chemical or biological weapons by a terrorist group after Aum Shinrikyo, a Japanese doomsday cult, releases sarin gas in a Tokyo subway, killing 12 people and sending 5,500 to hospitals. The group also attempted (unsuccessfully) to spray anthrax spores over Tokyo. In 1998, terrorism experts warn about the use of biological or chemical weapons by Osama bin Laden and Saddam Hussein.

Over 200 religious leaders, led by biotechnology critic Jeremy Rifkin, protest the patenting of plants, animals, and human body parts in Washington, D.C.

Dolly, the world's first cloned sheep, is born; her birth is announced in 1997. Several European nations ban human cloning. Congress considers a bill to ban all human cloning but decides not to pass it after scientists argue that the bill would undermine biomedical research.

The ICMJE, representing over 400 biomedical journals, revises its authorship guidelines.

In an article published in the New England Journal of Medicine, Peter Lurie and Sidney Wolfe accuse the NIH, WHO, UN, and CDC of designing and conducting unethical studies on the prevention of mother-to-child transmission of HIV in developing countries. The dispute spurs a closer examination of international research ethics codes and guidelines.

Scientists perfect methods for growing human embryonic stem cells. Some countries ban the research; others promote it.

Craig Venter forms Celera Genomics and begins a private effort to sequence the human genome, using dozens of automated sequencing machines.

Apotex forces Nancy Olivieri, a clinical researcher at the University of Toronto, to withdraw a paper that exposes safety concerns about its drug deferiprone, which is used to treat thalassemia. The company tries to discredit Olivieri and have her fired.

Jesse Gelsinger dies in a human gene therapy experiment at the University of Pennsylvania. The event triggers heightened scrutiny of conflicts of interest in human subjects research, including institutional conflicts of interest. Penn settles a lawsuit brought by the Gelsinger family for an undisclosed amount of money.

Human research lawsuits increase dramatically. Alan Milstein, of the law firm Sherman, Silverstein, Kohl, Rose & Podolsky, P.A., files 13 lawsuits against researchers, universities, pharmaceutical companies, and Institutional Review Board members.

The U.S. NIH and OHRP require all people conducting or overseeing human subjects research to have training in research ethics.

The U.S. Office of Science and Technology Policy finalizes a federal definition of misconduct as "fabrication, falsification or plagiarism" but not "honest error or differences of opinion." Misconduct must be committed knowingly, intentionally, or recklessly.

ORI proposes mandatory training in responsible conduct of research (RCR) for all researchers on PHS grants, including junior and senior investigators, students, and technicians. Several scientific associations and universities oppose the policy as an unnecessary and unfunded mandate. The Bush Administration suspends the ORI proposal in 2001 on the grounds that the agency failed to follow proper procedures for proposing new government regulations. Many research institutions voluntarily expand their RCR training programs.

Celera and the Human Genome Project each complete drafts of about 99% of the human genome and publish their results in Science and Nature.

Congress debates legislation on human cloning.

Several journals, including Nature and JAMA, experiment with requiring authors to describe their individual contributions when publishing research.

The Bush Administration announces that the NIH will only fund human embryonic stem cell research on approximately 64 cell lines created from leftover human embryos.

Terrorists hijack four airplanes on September 11 and kill nearly 3,000 people. Several weeks later, someone sends letters containing anthrax through the mail, killing 5 people and infecting 17 others. U.S. Army medical researcher Bruce Ivins, who committed suicide in 2008, is the prime suspect.

Bell Labs determines that Jan Hendrik Schön, a rising star in condensed matter physics and nanotechnology who published dozens of articles in prestigious journals over a short period of time, had fabricated and falsified data. Twenty-eight papers authored by Schön are retracted.

The President's Council on Bioethics recommends that the U.S. ban reproductive cloning and enact a moratorium on research cloning.

Historian Stephen Ambrose is accused of plagiarism.

The NAS publishes Integrity in Scientific Research, which recommends that universities develop programs for education in responsible conduct of research (RCR) as well as policies and procedures to deal with research ethics.

North Korea declares that it has a secret nuclear weapons program and warns that it has other "more powerful" weapons.

Scientists publish several papers in prominent journals with direct implications for bioterrorism. A paper in the Journal of Virology describes a method for genetically engineering a form of mousepox virus that is much deadlier than the naturally occurring strain. A paper in Science shows how to synthesize poliovirus from supplies obtained from a mail-order company. A paper in PNAS develops a mathematical model showing how many people could be killed by infecting the U.S. milk supply with botulinum toxin. In 2003, the American Society for Microbiology (ASM), the National Academy of Sciences, and the Center for Strategic and International Studies hold a meeting to discuss the censorship of biological research that poses security risks. Journals agree to self-censor some research.

The U.S. invades Iraq with the stated purpose of eliminating its chemical, biological, and nuclear weapons programs. The U.S. finds evidence of weapons programs but no actual weapons.

The EPA suspends the CHEERS study due to criticism from advocacy groups and members of Congress, who claim that the study intentionally exposed children to pesticides. The EPA revises its human subjects rules in response to a Congressional mandate to strengthen protections for children and pregnant or nursing women.

Ronald Reagan, Jr. makes a presentation in support of federal funding for embryonic stem cell research to the Democratic Convention. Stem cell research (and therapeutic cloning) become hot issues in the 2004 Presidential election.

Merck withdraws its drug Vioxx from the market due to safety and liability issues. As many as 50,000 people had a heart attack or stroke while taking the drug, and thousands sued the company. As early as 2001, Merck scientists suspected that Vioxx could increase cardiovascular risks, but researchers funded by Merck did not publish some of the data supporting these suspicions, even though they reported it to the FDA. In 2001, the FDA warned Merck that it had misrepresented Vioxx's safety profile to the public, and in 2002 it issued a black box warning for the drug. A systematic review of antidepressant medications known as selective serotonin reuptake inhibitors (SSRIs) found that some of these drugs increase the risk of suicide in adolescents and children. The review included data from the U.K.'s Committee on Safety of Medicines that had not been previously published. Patients, parents, researchers, and policymakers accused companies of intentionally hiding this data from the public, and New York Attorney General Eliot Spitzer sued Glaxo for fraud. As a result of these data-suppression problems, government agencies (including the FDA) and journals now require clinical trials to be registered on a publicly available website. Registration includes important information about the studies, including research design, interventions, and methods; research sites and personnel contact information; and research results (but not raw data).

In response to criticism from Congress, the NIH revises its conflict of interest rules for intramural research. NIH researchers cannot hold stock in pharmaceutical or biotech companies or consult with these companies for pay.

Seoul National University researcher Woo Suk Hwang admits to fabricating data in two papers published in the journal Science. In the papers, Hwang claimed that he had used nuclear transfer techniques to derive patient-specific human embryonic stem cells.

University of Vermont researcher Eric Poehlman admits to fabricating or falsifying data in 15 federal grant applications and 17 publications. Poehlman served a year and a day in federal prison and agreed to pay the U.S. government $180,000 in fines.

In response to recommendations from a National Research Council report titled "Biotechnology Research in an Age of Terrorism," the Department of Health and Human Services establishes the National Science Advisory Board for Biosecurity (NSABB) to provide advice and guidance to federal agencies, scientists, and journals concerning the oversight and publication of research in biotechnology or biomedicine that can be readily applied to cause significant harm to public health, agriculture, the economy, or national security (i.e. "dual use" research).

Someone hacks into the email server at the University of East Anglia Climatic Research Unit (CRU) and posts on the internet thousands of emails exchanged between climate change researchers at the CRU and researchers around the world. The emails show that the researchers refused to share data and computer code with climate change skeptics, who call the incident "climategate." The Intergovernmental Panel on Climate Change (IPCC), which relies heavily on data and models from CRU researchers, vows to promote greater openness in climate research.

The Obama Administration announces it will significantly expand NIH funding of human embryonic stem cell research, which had been restricted under the Bush Administration.

The National Science Foundation (NSF) announces RCR training requirements for funded investigators, students, and trainees. The NIH expands and strengthens its RCR training requirements.

While doing research on the Tuskegee Syphilis Study, Susan Reverby, Professor of Women's Studies at Wellesley College, uncovers documents concerning unethical experiments on human subjects conducted by the U.S. government in Guatemala from 1946 to 1948. The research involved intentionally infecting over 1,300 subjects with syphilis to test the effectiveness of penicillin in preventing the disease. Only 700 subjects were given penicillin, and 83 died as a result of the study. The subjects were not informed that they were participating in an experiment.

The Lancet retracts a fraudulent 1998 paper by Andrew Wakefield and colleagues linking autism to the vaccine for measles, mumps, and rubella (MMR). Members of the anti-vaccine movement had cited the paper as proof that childhood immunizations are dangerous, and vaccination rates in the U.K., Europe, and the U.S. declined after Wakefield's study was published. An investigation by journalist Brian Deer found that Wakefield had not disclosed a significant financial interest and had not obtained ethics board approval for the study. Wakefield's research had been supported by a law firm that was suing vaccine manufacturers, and a lawyer for the firm had helped Wakefield recruit patients; Wakefield did not disclose his relationship to the law firm in the 1998 paper. In 2010, the U.K.'s General Medical Council (GMC) revoked Wakefield's license to practice medicine following an investigation which concluded that he had not disclosed a significant financial interest and had performed risky procedures, such as colonoscopies and lumbar punctures, without appropriate pediatric qualifications or ethics committee approval.

Jeffrey Beall publishes a list of what he calls "predatory journals": profit-driven journals that charge high fees for open access publication, promise rapid publication, and have poor (or nonexistent) standards of peer review. Beall later withdraws his list due to pressure from journals.

Ivan Oransky and Adam Marcus launch Retraction Watch, a blog that posts retractions of scientific papers and articles related to research integrity.

The World Conference on Research Integrity releases the Singapore Statement on Research Integrity, a code of ethics for scientists in various disciplines.

The NIH and NSF revise their conflict of interest rules for funded research.

The Office of Human Research Protections announces proposed changes to the Common Rule to enhance human subject protections and reduce investigator burden. The Common Rule has not been changed significantly since 1981.

Journalist Rebecca Skloot publishes a widely acclaimed book about Henrietta Lacks, an African American woman who provided the tissue for a widely used cell line known as HeLa (an abbreviation of her name). In 1951, Lacks underwent treatment for cervical cancer at Johns Hopkins Hospital and died later that year. Researchers discovered that they could culture the cells from Lacks' tumor and keep them alive, the first time scientists had been able to grow a human cell line. HeLa cells have since been used in thousands of laboratories around the world in various biomedical experiments. Skloot traced the origin of the cell line to Lacks, interviewed Lacks' family, and learned that researchers had grown the tumor cells without Lacks' consent and without providing the family any compensation, a common practice at that time. Skloot decided to share profits from her book with the family. In 2013, the NIH reached an agreement with Lacks' family concerning access to genomic data from the cell line. The agreement gives the family control over access to the data and acknowledgment in scientific papers.

Several authors publish papers documenting a dramatic increase in the number of retracted papers since 2001. A majority of the retractions are due to research misconduct.

Two papers embroiled in controversy are published in Science and Nature after several months of debate about their implications for bioterrorism. The papers report results of NIH-sponsored research conducted by a team working in the Netherlands, led by Ron Fouchier, and a team working at the University of Wisconsin, led by Yoshihiro Kawaoka. The researchers were able to genetically modify an H5N1 avian flu virus so that it can be transmitted through the air between mammals, including humans; naturally occurring avian flu can only be contracted through direct contact with birds. The virus is highly lethal, with a mortality rate of over 50%, and over 300 people have died from it since 1997. The National Science Advisory Board for Biosecurity (NSABB) initially recommended that the papers be published in redacted form, with key details removed and made available only to responsible scientists, so that terrorists or others could not use the information to make deadly bioweapons. However, the NSABB changed its mind and recommended full publication of both papers after learning more about the value of the research for public health (e.g. monitoring of bird populations, vaccine development), biosafety measures, how difficult it would be for terrorists to replicate the work, and problems with redacted publication.

The NIH launches the reproducibility initiative in response to problems with the reproducibility of scientific research.

In Association for Molecular Pathology et al. v. Myriad Genetics, the U.S. Supreme Court rules that isolated and purified DNA cannot be patented. Only DNA that has been modified by human beings can be patented. The ruling invalidates Myriad's patents on BRCA1 and BRCA2 genes and creates uncertainty concerning the legal validity of other types of patents on isolated and purified chemicals.

Haruko Obokata, a biochemist at the RIKEN Center for Developmental Biology in Kobe, Japan, and coauthors publish two high-profile papers in Nature describing a method for converting adult mouse spleen cells into pluripotent stem cells by means of chemical stimulation and physical stress. Several weeks after the papers were published, researchers at the RIKEN Center were unable to reproduce the results and accused Obokata, the lead author on the papers, of misconduct. The journal retracted both papers in July after an investigation by the RIKEN Center found that Obokata had fabricated and falsified data. Later that year, Obokata's advisor, Yoshiki Sasai, committed suicide.

Various funding agencies and journals, including the NIH, Science, and Nature, take steps to promote reproducibility in science in response to reports that many published studies in the biomedical, behavioral, and physical sciences are not reproducible.

17 federal agencies publish a Notice of Proposed Rule-Making (NPRM) for revisions to the Common Rule. The changes would increase oversight of human biological samples, expand the categories of research exempted from the rule, enhance informed consent requirements, require a single IRB for multisite research, and reduce some regulatory burdens on researchers and institutions.

The NIH places a temporary moratorium on funding for experiments involving human-animal chimeras while it revises existing rules that govern this research.

17 federal agencies publish the Final Rule for revisions to the Common Rule. The Final Rule eliminates controversial provisions that would have required prior consent for all research involving human biological samples. The rule became effective in 2019.

In October, He Jiankui, a scientist at the Southern University of Science and Technology in Shenzhen, China, announces the birth of the world's first gene-edited babies, both girls. He claims that he used CRISPR-Cas9 technology to modify the CCR5 gene to give the girls immunity to HIV. The announcement generates outrage around the world, and many scientists and policymakers call for a ban on human germline genome editing.

Is compromise the best way forward?

Let me take stock of what I have said thus far. In the previous section, I have shown how the arguments of beneficence and technical feasibility in favour of embryo research and of extending the 14-day limit are less straightforward than their proponents seem to suggest. I have also suggested, using the slippery slope argument as an example, that extending the limit for embryo research might undermine public trust in scientists, regulators, and overseeing bodies. In order to show the importance of compromise and the value of respecting pluralism in the context of embryo research, I will not juxtapose the arguments of the beneficence of research and of technical feasibility with arguments pertaining to the sanctity of human life and human dignity. These arguments arise in the context of fundamental disagreements concerning the beginning of human life, the value of personhood, and what respect for human dignity ought to entail. They are portrayed as factual questions by both advocates and critics of research (i.e. research beyond the 14-day limit should not be allowed/should be allowed because human embryos are/are not persons and doing research on them would/would not violate their dignity); however, they are not merely a matter of fact, but are informed and shaped by values, feelings, and beliefs. Regardless of one's opinion regarding the values and beliefs of those defending the sanctity of life view, the burden of justifying one's claim should rest both on those defending this view and on those advocating technological progress, contrary to what seems to be normally believed [48].

What I intend to argue in this last section is that even if the question of the moral status of the embryo cannot be easily settled, there are two arguments in favour of reaching a compromise and respecting value pluralism in the context of embryo research: the argument of trust and the argument of respect. I argue that the argument of trust in favour of compromise, albeit sound and widely used, could, in certain instances, assume instrumental and paternalistic forms. I then argue that in the context of embryo research, and more generally in the governance of scientific and technical breakthroughs, it would be helpful to employ what I call the argument of respect.

The argument of trust and the argument of respect

The first argument in favour of reaching a compromise that, other things being equal, respects value pluralism is what I define as “the argument of trust”. It is structured as follows:

a) Scientific research is important because it improves people's lives, and it should be allowed to carry on.

b) Public trust is necessary to carry on scientific research.

c) Therefore, public trust in scientific research ought to be preserved.

Given competing views concerning the moral status of the embryo, this argument provides a reason in favour of finding a solution of compromise that accommodates these views as much as possible and avoids the risk of overriding those of one camp with those of the other. The argument of trust relies on premise a) to show that people's lives are improved by scientific research [76]. It relies on premise b) to show that public trust is a necessary condition for scientific research to be carried on [77, 78]. Trust is needed to ensure public acceptance of concrete applications of research, to preserve public confidence in policies informed by scientific research, and to allow the investment of public resources in scientific research [77, 78]. In the context of embryo research, the argument shows that, given the potential benefits of embryo research (premise a) and the importance of public trust to carrying on this type of research (premise b), there are good reasons to preserve public trust (conclusion c). Following this argument, it is possible to draw two conclusions: on the one hand, if the extension of the 14-day limit for embryo research is strongly opposed by the public, Footnote 11 then there are good reasons not to extend the limit. On the other, if opposing views coexist in the public understanding of embryo research, then there are good reasons to find a solution that strikes a compromise between these views.

The 14-day limit was a solution of compromise between conflicting moral views designed to maintain public trust whilst allowing research to go forward [12, 24, 79]. Today, two questions need to be addressed: an empirical question and a normative-theoretical one. The empirical question is whether the public (or at least a vast majority of it) is against the extension of the 14-day limit for embryo research. The normative-theoretical question is whether public opinion should influence the decision to change or retain the current 14-day rule, and if so, to what extent. An implication of taking the empirical question into account is that, if the public view of embryo research has become more favourable, then there is at least one good reason in favour of revisiting the 14-day rule. Footnote 12 In January 2017, a YouGov poll commissioned by the BBC in the United Kingdom asked respondents their views on an extension of the limit up to the 28th day. Interestingly, 48% of the 1740 respondents said that they would be in favour of extending the limit, while 19% wanted to keep the current limit. Of the remaining respondents, 10% maintained that they would want embryo research to be banned altogether, while 23% did not express any of the aforementioned preferences [80]. In addition to the empirical question regarding public attitudes towards the extension of the 14-day limit, one may wonder what public attitudes would be towards therapies and scientific results obtained thanks to research on embryos beyond this limit in countries that may extend it. Currently, the 14-day limit is either enshrined in law (for instance in the United Kingdom, Canada and Spain) or specified in scientific guidelines (for instance in Singapore, China and the United States). However, these regulatory frameworks may change in the future.
Hence, if this becomes the case, it would be interesting to investigate public attitudes towards those therapies and other advances of basic research that are made possible by research in countries that allow embryo research beyond day 14. Footnote 13

I will not provide an answer to these empirical questions here, if only because of the dearth of empirical data on public attitudes towards the extension of the limit, and towards embryo research more generally. Regarding, instead, the normative-theoretical question (i.e. whether public opinion should influence the decision to change or retain the current 14-day rule), the argument of trust indicates that the answer is yes: public opposition to extending the 14-day rule should prevent its extension, while public agreement to a proposed change (i.e. the 28-day limit or other future proposals) should facilitate its extension. The risk of proceeding regardless of public attitudes towards an extension of the limit is that policies derived from embryo research will not be backed by public consensus, and applications of embryo research (e.g. therapies developed thanks to the knowledge yielded by embryo research) will not be accepted. If the importance of maintaining public trust in scientific research (premise b) is motivated by these considerations, then it seems that public trust is valued only for instrumental and extrinsic reasons. In other words, this understanding of the importance of maintaining public trust in scientific research does not value public trust for its own sake, but only for its role in allowing research to go forward. What is problematic about this approach to public trust is that it offers a consequentialist reason in favour of respecting value pluralism, a reason that pertains to the better tangible outcomes of respecting value pluralism over other strategies of governance. In addition, when the instrumental justification of maintaining public trust is associated with a representation of the public as ill-informed, with little or no understanding of the potential benefits of research, it can be motivated by paternalistic considerations.
Scientists and ethicists may risk misinterpreting public concerns and views over embryo research as the result of a lack of expertise or evidence-based information rather than a matter of legitimate and genuine disagreement over values [81, 82].

The second premise of the argument of trust, however, could also be motivated by a concern for a deliberative conception of democracy. This conception of democratic governance requires both citizens and their representatives to provide public justifications of their views and to engage in deliberative processes. Public trust then becomes fundamental to allowing these deliberative processes to take place and to fostering better strategies for policy-making [82, 83]. These deliberative processes of mutual exchange between experts and the public, together with a commitment to respecting conflicting moral views (i.e. respect for value pluralism), provide a reason in favour of finding a solution of compromise that, given competing views concerning the moral status of the embryo, respects this plurality of views and values regarding embryo research. These considerations concerning the importance of maintaining public trust echo other considerations employed to defend democracy as a political system and as a valuable form of governance. These include, for instance, equality: given the existence of conflicting views, values and beliefs, a good reason to respect them is that people or groups holding these different views will be respected by being granted an equal say on matters of common concern [84, 85]. Mertens and Pennings [8] have argued in favour of the benefit of compromise in the context of different policies regulating embryonic stem cell research and have concluded that there is a moral obligation to respect conflicting moral views [8]. Similarly, Devolder argued that in spite of the epistemic costs of compromise, middle-ground positions can still be defended in the context of policy-making [6]. What I suggest here is that the commitment to a democratic decision-making process entails a fundamental respect for value pluralism [86]. In Warnock's and the IVF Inquiry's time, this respect for value pluralism translated into a deliberation resulting in the 14-day rule.
Today it translates into favouring an assessment of the rule, and of the potential reasons to change it, that once again takes into account the conflicting moral views held in society; an assessment that cannot rest on the arguments of the beneficence of research and of scientific feasibility alone.


Given the documented difficulty of communicating with vaccine-hesitant and vaccine-opposing families in a way that addresses their concerns and respects their autonomy, coupled with challenges in communicating the greater good of vaccinations in typical face-to-face clinical encounters, it is time to rethink how health care practitioners, policymakers, and communicators approach vaccine education and communication. From a policy and clinical ethics perspective, this might mean making the informed-consent process more educationally intensive and applicable not only to parents choosing to immunize their children but also, and especially, to those refusing or declining immunizations or requesting a modified schedule. Although findings regarding the impact of educational and messaging efforts on vaccine attitudes and intentions are mixed, one approach worth investigating might be an informed opt-out process in which parents are presented with information regarding what it is like to see one’s child suffer from a vaccine-preventable illness such as measles.75

From a policy perspective, it may mean reevaluating the ease with which nonmedical exemptions are handled, with increased attention toward ensuring that parents are making informed decisions, especially when they opt out of vaccination. The state of California recently passed legislation that removes the option of personal belief exemptions.5 This has led to much public deliberation as to whether the state has overstepped its authority by encroaching on individual parental rights in the name of promoting public health, with some arguing that mandatory vaccinations also violate the Nuremberg Code.76 We disagree with both of these claims. Regarding the former, it is precisely the business of state actors to make these decisions, and the acceptability of such decisions will be adjudicated at the ballot box. Regarding the latter, we fail to see how a 6-decade-old statement crafted after a military tribunal for unethical human experiments applies to the present case.

Given the reality of limited clinical encounter time and the challenges of tailoring large-scale public health media campaigns, it might make sense to illustrate concepts through other means of information transmission. For example, parents of pediatric patients could be directed to online video narratives of individuals describing their experiences with vaccine-preventable illnesses, or to decision-support instruments and educational Web sites that can present information that is targeted or, ideally, tailored to parents’ specific concerns. Researchers are developing and refining such tools.77,78 The timing of information provision could also be fine-tuned, adding prenatal visits as an opportunity for families and providers to discuss childhood immunizations as well as to identify opportunities and resources for vaccine education well before an infant’s first vaccines.

Striking a balance between respecting parental rights and autonomy and maximizing the greater good of herd immunity may seem an intractable problem, especially in the current climate of heated vaccine debates. It undoubtedly calls for a multifaceted set of interventions; however, deliberate efforts must be made now. The alternative, permitting opinions and attitudes alone (which may be based on erroneous information or misperceptions) to drive behavior, is as great a threat to public health as the unvaccinated population itself. Although this most recent measles outbreak has largely subsided, it is likely that another, potentially worse outbreak will occur. Developing sound policy now will help to reduce the severity of, or altogether stop, future outbreaks. Thus, as media attention to this subject waxes and wanes, we implore readers to keep the topic of vaccine policy and ethics at the forefront.


Ethical dilemmas are situations in which an agent stands under two (or more) conflicting ethical requirements, none of which overrides the other. Two ethical requirements are conflicting if the agent can do one or the other but not both: the agent has to choose one over the other. Two conflicting ethical requirements do not override each other if they have the same strength or if there is no sufficient ethical reason to choose one over the other. [1] [2] [3] Only this type of situation constitutes an ethical dilemma in the strict philosophical sense, often referred to as a genuine ethical dilemma. [4] [5] Other cases of ethical conflicts are resolvable and are therefore not ethical dilemmas strictly speaking. This applies to many instances of conflict of interest as well. [2] For example, a businessman hurrying along the shore of a lake to a meeting is in an ethical conflict when he spots a drowning child close to the shore. But this conflict is not a genuine ethical dilemma since it has a clear resolution: jumping into the water to save the child significantly outweighs the importance of making it to the meeting on time. Also excluded from this definition are cases in which it is merely psychologically difficult for the agent to make a choice, for example, because of personal attachments or because the knowledge of the consequences of the different alternatives is lacking. [4] [1]

Ethical dilemmas are sometimes defined not in terms of conflicting obligations but in terms of not having a right course of action, of all alternatives being wrong. [1] The two definitions are equivalent for many but not all purposes. For example, it is possible to hold that in cases of ethical dilemmas, the agent is free to choose either course of action, that either alternative is right. Such a situation still constitutes an ethical dilemma according to the first definition, since the conflicting requirements are unresolved, but not according to the second definition, since there is a right course of action. [1]

Various examples of ethical dilemmas have been proposed but there is disagreement as to whether these constitute genuine or merely apparent ethical dilemmas. One of the oldest examples is due to Plato, who sketches a situation in which the agent has promised to return a weapon to a friend, who is likely to use it to harm someone since he is not in his right mind. [6] In this example, the duty to keep a promise stands in conflict with the duty to prevent harm to others. It is questionable whether this case constitutes a genuine ethical dilemma since the duty to prevent harm seems to clearly outweigh the promise. [4] [1] Another well-known example comes from Jean-Paul Sartre, who describes the situation of one of his students during the German occupation of France. This student faced the choice of either fighting to liberate his country from the Germans or staying with and caring for his mother, for whom he was the only consolation left after the death of her other son. The conflict, in this case, is between a personal duty to his mother and the duty to his country. [7] [4] The novel Sophie's Choice by William Styron presents one more widely discussed example. [8] In it, a Nazi guard forces Sophie to choose one of her children to be executed, adding that both will be executed if she refuses to choose. This case is different from the other examples, in which the conflicting duties are of different types. Cases of this sort have been labeled symmetrical since the two duties are of the same type. [4] [1]

The problem of the existence of ethical dilemmas concerns the question of whether there are any genuine ethical dilemmas, as opposed to, for example, merely apparent dilemmas or resolvable conflicts. [1] [5] The traditional position denies their existence but there are various defenders of their existence in contemporary philosophy. There are various arguments for and against both sides. Defenders of ethical dilemmas often point to apparent examples of dilemmas while their opponents usually aim to show their existence contradicts very fundamental ethical principles. Both sides face the challenge of reconciling these contradictory intuitions. [4]

Arguments in favor

Examples of ethical dilemmas are quite common: in everyday life, in stories or thought experiments. [9] On close inspection, it may become apparent in some of these examples that our initial intuitions misled us and that the case in question is not a genuine dilemma after all. For example, it may turn out that the proposed situation is impossible, that one choice is objectively better than the other or that there is an additional choice that was not mentioned in the description of the example. But for the argument of the defenders to succeed, it is sufficient to have at least one genuine case. [4] This constitutes a considerable difficulty for the opponents since they would have to show that our intuitions are mistaken not just about some of these cases but about all of them. One way to argue for this claim is to categorize them as epistemic ethical dilemmas, i.e. that the conflict merely seems unresolvable because of the agent's lack of knowledge. [10] [9] This position can be made somewhat plausible because the consequences of even simple actions are often too vast for us to properly anticipate. According to this interpretation, we mistake our uncertainty about which course of action outweighs the other for the idea that this conflict is not resolvable on the ontological level. [4]

The argument from moral residue is another argument in favor of ethical dilemmas. Moral residue, in this context, refers to backward-looking emotions like guilt or remorse. [4] [11] These emotions are due to the impression of having done something wrong, of having failed to live up to one's obligations. [5] In some cases of moral residue, the agent is responsible herself because she made a bad choice which she regrets afterward. But in the case of an ethical dilemma, moral residue is forced on the agent no matter how she decides. Going through the experience of moral residue is not just something that happens to the agent; it even seems to be the appropriate emotional response. The argument from moral residue uses this line of thought to argue in favor of ethical dilemmas by holding that the existence of ethical dilemmas is the best explanation for why moral residue in these cases is the appropriate response. [5] [12] Opponents can respond by arguing that the appropriate response is not guilt but regret, the difference being that regret is not dependent on the agent's previous choices. By cutting the link to the possibly dilemmatic choice, the initial argument loses its force. [4] [11] Another counter-argument allows that guilt is the appropriate emotional response but denies that this indicates the existence of an underlying ethical dilemma. This line of argument can be made plausible by pointing to other examples, e.g. cases in which guilt is appropriate even though no choice whatsoever was involved. [4]

Arguments against

Some of the strongest arguments against ethical dilemmas start from very general ethical principles and try to show that these principles are incompatible with the existence of ethical dilemmas, that their existence would therefore involve a contradiction. [5]

One such argument proceeds from the agglomeration principle and the principle that ought implies can. [11] [1] [5] According to the agglomeration principle, if an agent ought to do one thing and ought to do another thing then this agent ought to do both things. According to ought implies can, if an agent ought to do both things then the agent can do both things. But if the agent can do both things, there is no conflict between the two courses of action and therefore no dilemma. It may be necessary for defenders to deny either the agglomeration principle or the principle that ought implies can. Either choice is problematic since these principles are quite fundamental. [4] [1]
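The logical structure of this argument can be made explicit. The following is a sketch in standard deontic notation (a reconstruction for illustration, writing $O$ for "ought" and $\Diamond$ for "can"):

```latex
\begin{align*}
&(1)\ OA \ \text{and}\ OB
  && \text{the dilemma: each action is obligatory} \\
&(2)\ (OA \land OB) \rightarrow O(A \land B)
  && \text{agglomeration principle} \\
&(3)\ O(A \land B) \rightarrow \Diamond(A \land B)
  && \text{ought implies can} \\
&(4)\ \Diamond(A \land B)
  && \text{from (1)--(3)} \\
&(5)\ \neg\Diamond(A \land B)
  && \text{definition of a dilemma: the agent cannot do both}
\end{align*}
```

Lines (4) and (5) contradict each other, which is why a defender of genuine dilemmas must reject either premise (2) or premise (3).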

Another line of argumentation denies that there are unresolvable ethical conflicts. [5] Such a view may accept that we have various duties, which may conflict with each other at times. But this is not problematic as long as there is always one duty that outweighs the others. It has been proposed that the different types of duties can be ordered into a hierarchy. [4] So in cases of conflict, the higher duty would always take precedence over the lower one, for example, that telling the truth is always more important than keeping a promise. One problem with this approach is that it fails to solve symmetric cases: when two duties of the same type stand in conflict with each other. [4] Another problem for such a position is that the weight of the different types of duties seems to be situation-specific: in some cases of conflict we should tell the truth rather than keep a promise, but in other cases the reverse is true. [4] This is, for example, W. D. Ross's position, according to which we stand under a number of different duties and have to decide on their relative weight based on the specific situation. [13] But without a further argument, this line of thought just begs the question against the defender of ethical dilemmas, who may simply deny the claim that all conflicts can be resolved this way. [5]

A different type of argument proceeds from the nature of moral theories. According to various authors, it is a requirement for good moral theories that they should be action-guiding by being able to recommend what should be done in any situation. [14] But this is not possible when ethical dilemmas are involved. So these intuitions about the nature of good moral theories indirectly support the claim that there are no ethical dilemmas. [4] [1]

Ethical dilemmas come in different types. The distinctions between these types are often important for disagreements about whether there are ethical dilemmas or not. Certain arguments for or against their existence may apply only to some types but not to other types. And only some types, if any, may constitute genuine ethical dilemmas.

Epistemic vs ontological

In epistemic ethical dilemmas, it is not clear to the agent what should be done because the agent is unable to discern which moral requirement takes precedence. [4] [10] [9] Many decisions in everyday life, from a trivial choice between differently packaged cans of beans in the supermarket to life-altering career-choices, involve this form of uncertainty. But unresolvable conflicts on the epistemic level can exist without there actually being unresolvable conflicts and vice versa. [11]

The main interest in ethical dilemmas is concerned with the ontological level: whether there actually are unresolvable conflicts between moral requirements, not just whether the agent believes so. [11] The ontological level is also where most of the theoretical disagreements happen since both proponents and opponents of ethical dilemmas usually agree that there are epistemic ethical dilemmas. [4] This distinction is sometimes used to argue against the existence of ethical dilemmas by claiming that all apparent examples are in truth epistemic in nature. In some cases, this can be shown by how the conflict is resolved once the relevant information is obtained. But there may be other cases in which the agent is unable to acquire information that would settle the issue, sometimes referred to as stable epistemic ethical dilemmas. [10] [4]

Self-imposed vs world-imposed

The difference between self-imposed and world-imposed ethical dilemmas concerns the source of the conflicting requirements. In the self-imposed case, the agent is herself responsible for the conflict. [4] [2] A common example in this category is making two incompatible promises, [15] for example, to attend two events happening at distant places at the same time. In the world-imposed case, on the other hand, the agent is thrown into the dilemma without being responsible for it occurring. [4] The difference between these two types is relevant for moral theories. Traditionally, most philosophers held that ethical theories should be free from ethical dilemmas, that moral theories that allow or entail the existence of ethical dilemmas are somehow flawed. [4] In the weak sense, this prohibition is only directed at the world-imposed dilemmas. This means that all dilemmas are avoided by agents who strictly follow the moral theory in question. Only agents who diverge from the theory's recommendations may find themselves in ethical dilemmas. But some philosophers have argued that this requirement is too weak, that the moral theory should be able to provide guidance in any situation. [15] This line of thought follows the intuition that how the situation came about is not relevant for how to respond to it. [4] So, for example, if the agent finds herself in the self-imposed ethical dilemma of having to choose which promise to break, there should be some consideration of why it is right to break one promise rather than the other. [15] Utilitarians, for example, could argue that this depends on which broken promise results in the least harm to all concerned.

Obligation vs prohibition

An obligation is an ethical requirement to act in a certain way while a prohibition is an ethical requirement to not act in a certain way. Most discussions of ethical dilemmas focus on obligation dilemmas: they involve two conflicting actions that the agent is ethically required to perform. Prohibition dilemmas, on the other hand, are situations in which no course of action is allowed. It has been argued that many arguments against ethical dilemmas are only successful in regard to obligation dilemmas but not against prohibition dilemmas. [4] [16] [17]

Single-agent vs multi-agent

Ethical dilemmas involve two courses of action that are both obligatory but stand in conflict with each other: it is not possible to perform both actions. In regular single-agent cases, a single agent has both conflicting obligations. [18] In multi-agent cases, the actions are still incompatible but the obligations concern different people. [4] For example, two contestants engaged in a competition may each have a duty to win if that is what they promised their families. These two obligations belonging to different people are conflicting since there can be only one winner.

Other types

Ethical dilemmas can be divided according to the types of obligations that are in conflict with each other. For example, Rushworth Kidder suggests that four patterns of conflict can be discerned: "truth versus loyalty, individual versus community, short term versus long term, and justice versus virtue". [2] [19] These cases of conflicts between different types of duties can be contrasted with conflicts in which one type of duty conflicts with itself, for example, if there is a conflict between two long-term obligations. Such cases are often called symmetric cases. [1] The term "problem of dirty hands" refers to another form of ethical dilemmas, which specifically concerns political leaders who find themselves faced with the choice of violating commonly accepted morality in order to bring about some greater overall good. [4] [20]

Towards a New Bioengineering Ethics

Advances in stem cell science and bioengineering have given rise to many types of synthetic living models of human biology. These lab entities do not neatly fit into any of the existing approaches to bioethics guidelines and institutional review, or into the areas covered by engineering ethics. Traditionally, bioethics and engineering ethics have stood as separate spheres of scholarship and practice. These fields should be combined to form a new hybrid approach – “bioengineering ethics” – to address research that unites the biological capacity of human cells with engineered platforms.


Historically, research ethics arose out of a concern for the treatment and research uses of “natural kinds” [1] (entities found in the natural world) that are believed to have some degree of moral status or level of moral considerability, be they human beings, fetuses in utero, embryos, animals, genes, gametes, etc. Ethical standards for review committees have developed over the past forty plus years and are now considered to provide robust guidance and appropriate oversight for studies involving typical kinds of research entities. However, “non-natural kinds” of entities have emerged rapidly over the past few years – particularly, synthetic human biological constructs – and existing ethical canons and oversight infrastructure are not well suited to address these new creations. As discussed below, there exists much uncertainty over even basic research ethics issues, such as which institutional committee should be responsible for reviewing this form of research and what ethical standards ought to be employed for determining approval.

Traditional research ethics is not sufficient to capture all important ethical aspects of this cutting-edge research, for these traditional standards tend to focus on informed consent requirements for original cell line donors and their genetic privacy interests. But this typical bioethics approach says little about where the ethical limits lie for the myriad ways in which stem cells can be radically bioengineered in the lab after they have been ethically procured. A new form of bioengineering ethics should be developed by way of a novel combination of bioethics and engineering ethics to promote the responsible development and use of synthetic entities built from human cells.

New Research Entities

Engineered entities are now being generated in laboratories from human stem cells to form biologically dynamic, living models of human biology. These models can be used to study various aspects of human development and to test new drugs and therapeutics. Prominent among these models are organoids (small stem cell-derived 3D structures that self-organize into functional cell types and recapitulate basic organ functions) and embryo models (stem cell-derived simulations of post-implantation embryos). Organoids and embryo models are just the tip of the iceberg of what is possible, however. George Church’s lab at Harvard Medical School is actively interested in bioengineering various aspects of human biology to confer even more advanced specific capabilities and traits to lab entities for research. In a 2017 article, Church and his colleagues, John Aach, Jeantine Lunshof, and Eswar Iyer coined the term “SHEEFs” to refer to these and other new lab creations, which stands for “synthetic human entities with embryo-like features.” [2] Around the same time, a broader term, “M-CELS,” was coined by a research consortium based at the Massachusetts Institute of Technology (MIT) called EBICS (Emergent Behaviors of Integrated Cellular Systems), a National Science Foundation Science and Technology Center. The term M-CELS stands for “multi-cellular engineered living systems,” and in comparison to SHEEFs, M-CELS encompass an even broader array of synthetic entities that can be made to model human biology. [3] Essentially, M-CELS are lab entities built from human stem cells (or their direct derivatives) paired with engineered non-biological components. Some M-CELS research is aimed at forming functional living machines that might become capable of sensing and information processing. An early example of this was described in 2014 in the Proceedings of the National Academy of Sciences of the United States of America (PNAS). [4] Here EBICS researchers developed a 3D printed hydrogel “bio-bot” made of mammalian skeletal muscle that could be controlled with electrical stimulation. Technological capabilities have evolved in the past few years to the point that it is now conceivable that research might expand upon this 2014 experiment. For example, by attempting to link human brain organoids to 3D printed artificial bone-like scaffolds seeded with muscle and nerve cells, EBICS researchers might investigate whether these types of M-CELS can be programmed to model the neural-muscular interface of real human beings.

This growing trend toward creating complex living models of human biology (i.e. organoids, embryo models, SHEEFs, and M-CELS) is a natural progression for stem cell research and bioengineering. (For convenience, the umbrella term M-CELS will be used throughout this essay to refer to all of these living models of human biology.) Many scientists have come to the realization that working with stem cells in flat, two-dimensional culture systems is limited for understanding how real tissues are formed in the body. Real tissue systems, organs, and embryos arise through self-organizing cell behaviors in three-dimensional environments with necessary mechanical and chemical stimuli. M-CELS of various types offer significant biomedical research benefits because they enable scientists to study accurate representations of human biology at the benchside without having to utilize human subjects or animals to study tissue and organ formation, the developmental effects of genetic diseases, or new drug targets.

Insufficiency of Current Oversight to Address M-CELS

Alongside these great scientific promises, however, the quasi-human/artificial ontological ambiguity of these synthetic models complicates how one might think about their moral status as research objects and the ethical limits surrounding their creation and use. Researchers working with M-CELS have intimated that they are uncertain about how far ethically they can push their experiments in the lab. [5] Researchers do not want to hear, after spending significant amounts of time and energy on a project, that they have ventured into an ethical red zone. And institutional regulators do not want to see the publication of contentious research that did not first go through rigorous ethical review. Yet, despite these wishes, no one really knows at this point exactly what kinds of ethical issues will be raised that will be important to consider and manage as this new form of research develops.

Attempts thus far to identify and address ethical uncertainties at this early stage of research have been inadequate. As researchers active in this nascent area, the Church lab leadership has maintained that they have a social responsibility to think proactively about the ethics of this research and its future directions. A few years ago, they sought guidance from their institutional oversight body, the Harvard Embryonic Stem Cell Research Oversight (ESCRO) Committee. The Harvard ESCRO decided not to define bright-line limits at that time in the case of SHEEFs and asked to be updated as the science progresses toward synthetic entities that may have “morally troubling” features. [6] The Church lab also sought direction from guidelines issued by the International Society for Stem Cell Research (ISSCR), which I helped develop. However, the 2016 ISSCR guidelines relating to the creation and use of synthetic biological entities were too vague to be of much help on the applied and specific questions raised by the Church lab. (I should note that the ISSCR guidelines were left vague on this issue because the ISSCR realizes this is a novel subfield of stem cell science and thus should be closely followed before more specific guidelines can be offered). [7]

Perhaps one chief reason why we are currently in this state of regulatory and ethical uncertainty arises from an underappreciated historical quirk. Ever since guidelines for stem cell research were first formulated over a decade ago, all studies using well-established stem cell lines confined only to in vitro studies (i.e. no human or animal subject involvement) have been routinely categorized as the “least controversial” form of stem cell research. As such, they normally did not require close ethical monitoring by research institutions, much less full review by stem cell oversight boards. But now, with the rise of complex biological models of human biology, all of which are confined to the sphere of benchside research, the traditional fast-track approvals process for many forms of in vitro stem cell research may no longer be warranted. It is also unclear whether stem cell oversight committees, as they are currently constituted, are the best institutional bodies to review this type of research, since the human cells are usually differentiated by the time they are paired with heavily engineered non-biological components. In short, this is not the typical picture of “stem cell research” to which stem cell oversight committees have grown accustomed.

A Productive Path Forward: Bioengineering Ethics

At roughly the same time that the Church lab and ESCRO Committee conversations at Harvard were occurring, I independently conceived of and published a bioethics article in Cell Stem Cell calling for an expansion of the scope of bioethics and its approach to dynamic models of human development by bringing components of contemporary engineering ethics into the mix. [8] My article was prompted by advances in organoid technology as well as the embryo modeling work of Dr. Jianping Fu’s lab at the University of Michigan, which raised intriguing ethical and legal questions about the status of embryo models and the limits of their use. I argued in this article that a productive path forward would be to bring bioethicists and scientists together at the benchside to discuss collaboratively the ethical choices and value tradeoffs informing early research decisions during the design phase of their experiments, and to remain deeply engaged through the development and implementation of experiments, helping to navigate ethics throughout. One important benefit of this new approach is that it would avoid presenting bioethics as always a reaction to radical developments after the fact, which largely focuses people’s bioethical attention on a new technology’s ethical, legal, and social implications. Instead, my proactive collaborative approach calls for bioethicists to be co-designers of research trajectories and choices made by scientists at the benchside, thus helping to infuse ethical reflection far more upstream during the development phases of new biotechnologies.

Often-used methods for defining new ethical practices and standards in cutting-edge science, such as stem cell research and genome editing, do not seem optimal for addressing the extraordinary variety of synthetic biological constructs possible in human modeling research, which can change configurations very quickly based on relatively unconstrained decisions at the bench. For example, multidisciplinary working groups and expert workshops may be too slow, infrequent, top-down, and removed from the action at the benchside to adequately identify and address emerging ethical issues in this area. (Recall that the ISSCR guidelines are too vague to guide labs like the Church lab.) We must take an alternative, much more nimble path, directly through the trenches.

It should be acknowledged that some institutions have research ethics consultants who can provide advice to those institutions’ own research teams. However, the bioengineering ethics approach I outline here is distinct from these types of consult services in two important ways.

First, research ethics consultation services almost always focus on improving informed consent processes at some pre-Institutional Review Board (IRB) stage of the investigators’ research. Research ethics consultants are also often asked to help find ways to reduce risk to study subjects in the design of the research protocol. In contrast, bioengineering ethics should explore ethical areas that lie far outside human subjects protections considerations. Furthermore, the type of cutting-edge research referred to above does not qualify as human subjects research as such. Thus, there may be important emerging ethical issues for this field that would not be captured appropriately, or at all, by existing research ethics consultation services.

Second, institutional research ethics consultation services are summoned at the request of the research team only after they have identified a concern regarding their research project (again, usually around human subjects protections). Bioengineering ethics, on the other hand, should be structured in a proactive way so that ethicists are well positioned to identify potential issues at the earliest stages of the research that the teams otherwise might have overlooked, and that can then be considered in the protocol. The ethicists and research teams can also help inform each other by collaboratively thinking through any ethical issues that may appear during the lifecycle of a research project.

Bioengineering ethics can offer a fresh new way to approach research ethics aimed specifically at the creation and use of complex bioengineered constructs. Traditionally, bioethics and engineering ethics have represented different fields of scholarship and practice. These two fields must be combined in order to adequately address M-CELS research, which unites the biological capacity of human cells with engineered platforms and artificial support systems. The collaborative nature of contemporary engineering ethics provides a valuable reference point for scientists in the lab to understand the need to explicitly discuss the trade-off decisions that inevitably drive design choices. Given that new biological models are being generated through a dynamic blend of engineered components and autonomous cell behaviors, what would it mean to bring engineering ethics into the mix? To answer this question, one must consider important advances in contemporary engineering ethics.

New Conceptual Ground

Given the strong engineering aspects involved in creating dynamic models of human biology, it is tempting to assume that their associated ethical issues will be limited to their uses and societal impacts, as is the case with many other artifacts or tools created for biomedical research. But this view merely resurrects an old-fashioned version of engineering ethics. Engineering ethics has traditionally focused on issues that are already familiar to many scientists, such as anticipating the negative social implications of engineered products, assigning blame for adverse outcomes, or promoting the professional virtues of whistleblowing or public education. But contemporary engineering ethics goes far beyond these considerations and is rooted in the belief that engineering itself is a value-driven activity and that, as such, there exists a range of possible values, including ethical values, that can inform the choices engineers make during the design process. [9] There will of course always exist some restrictions on the range of engineering choices available based, for example, on regulatory requirements, intended uses and goals, safety, cost, and, with respect to the present issue, biological relevance. Nevertheless, there will also typically exist more design choices than can be simultaneously fulfilled, and an engineer’s decision of which trade-offs are acceptable will not be a value-neutral one. Doing engineering ethics in this contemporary sense involves actively contemplating which values ought to guide the engineering task at hand and why, thereby creating awareness of how trade-offs between design choices are being framed. Contemporary engineering ethics is therefore collaborative: values from varying perspectives need to be weighed explicitly during the design phases of engineering projects.

For M-CELS research to proceed responsibly, the field of bioethics can help by incorporating the principle (borrowed from engineering ethics) that stem cell scientists and their collaborating bioengineers must actively deliberate about their guiding values during the design stages of their experiments, with bioethicists assisting in these deliberations. Although many different biomedical researchers could benefit from taking a multi-perspectival approach to experimental design, one chief advantage of bringing contemporary engineering ethics into M-CELS research is that researchers at the bench will be encouraged to reflect openly on the implicit ethical choices they make in designing their models while still satisfying their research aims. This reorientation would require a shift in emphasis for both engineers and bioethicists to arrive at a new form of bioengineering ethics. As I have argued elsewhere, what is needed is an approach grounded in contemporary engineering ethics, which accepts that engineering itself is a value-laden activity and that the values driving design decisions are often themselves ethical in nature. [8] Regulators, scientific organizations like the ISSCR, and other scientists and trainees may benefit from viewing the ethics of M-CELS research as arising out of a proactive collaborative endeavor between scientists and bioethicists.

It is my hope that bioengineering ethics will normalize regular lab interactions between bioethicists and scientists. It may also revive fundamental debates within engineering ethics itself that have pitted competing definitions of engineers' professional norms of conduct against one another. For example, some scholars have argued that the primary orientation of engineers' responsibilities is to the public good (i.e. engineers are important conduits for promoting social well-being), while others have questioned this claim, arguing that engineers are not trained to make ethical and policy decisions themselves. [10] Part of the task of defining what bioengineering ethics is will require careful reflection about what the bioengineer's socially responsible roles may be, if there are any.

In debates about emerging biotechnologies, it is well known that tensions exist between technological pessimists and technological optimists. The former believe that, although not all technological advancements ought to be opposed, developers should internalize a critical attitude toward technology and its promise of producing social good. Some may argue, for instance, that M-CELS developers should be technological pessimists in this sense. If this is the case, then one would first need a clear idea of what the real scientific possibilities are. Otherwise all one is left with are vague admonitions about “drawing lines” in order to avoid going “too far.” Thus there is a need for ethicists to work closely with scientists at all stages of the research process.

Technological optimists, on the other hand, may risk not being sufficiently reflective about the development of new biotechnologies. M-CELS researchers (who tend to side with technological optimists) should be prompted to acknowledge that technology can have undesirable aspects. Carving out bioengineering ethics as a productive space for both technological pessimists and optimists will first require communicative capacity-building on the floor of the lab; without it, bioengineering ethics cannot take shape as a new approach to research ethics.

Challenges Ahead

Above I have sketched the bare outlines of what bioengineering ethics would look like. There still remains much that needs to be worked out. I conclude this essay by highlighting four areas that warrant further development.

First, complex multi-cellular constructs like M-CELS pose a unique problem for bioengineers that other engineered constructs made from non-living matter do not entail, namely the potentially unpredictable nature of biologically autonomous, self-organizing human cells. While M-CELS designers might intend for their constructs to behave in certain desired ways, biology may offer surprises that upset their best-laid plans. However, the possibility that biology might surprise us is no reason to dismiss the bioengineering ethics approach I have outlined here. As knowledge grows of how to harness the autonomous capabilities of human cells, I am confident that bioengineers will become better positioned to design model systems that are more controllable and reproducible. Because early attempts at incorporating bioengineering ethics into design choices will themselves be experimental, bioengineers and their collaborators will have to watch iterations of these early attempts closely and learn from them.

Second, in thinking about the role that values must play in making design trade-off decisions during the course of M-CELS development, it is easy to gloss over a fundamental philosophical difficulty. I call this challenge the "incommensurability problem." Saying that decision makers ought to make explicit trade-off decisions at the benchside implies that the design goals that must be balanced against one another are, in principle, commensurable. That is, it might be presumed that there exists a common unit of measurement upon which to justify how this balancing act can be decided. But is there in fact such a common unit of measurement available to bioengineers?

Incommensurability results when the values of different aims or goods cannot be reduced to a common measure for comparison and choice. In cases of true incommensurability (as opposed to the practical incompatibility of not being able to have one's cake and eat it, too), the value of each option cannot be placed on the same scale of measurement without grossly distorting our understanding of what it means for each option to be valuable in its distinct way. Many, perhaps even most, conflicts among design goals that cannot be simultaneously met or maximized may stem from the incommensurability problem, not just from practical or budgetary constraints.

Maybe one way to avoid the incommensurability problem is to argue that it is not really incommensurability we should worry about but rather incomparability. Incommensurability occurs when there is no common unit of measurement against which we can rationally base our comparisons and decisions. But this lack need not preclude our ability to make comparisons among options. One leading philosophical explanation is that comparisons between two incommensurable goods can be achieved if these comparisons are made in terms of some "covering value" that holds between them. [11] Such comparisons take the form 'X is better than Y with respect to covering value V.' For example, one might argue that a legal career is better than an artistic career with respect to income, even though the values of law and art are incommensurable, or that doing philosophy is better than bowling with respect to the engagement of one's higher-order mental faculties. In the case of M-CELS, one might imagine that the relevant covering values used to make comparisons across conflicting design goals could boil down to fundamental design principles that bioengineers would agree upon from the outset, such as cost-effectiveness, scalability, controllability, and reproducibility, asking which of these is best served by the various design options for the M-CELS in question. Alternatively, the appropriate covering values might be determined through consultation with other stakeholders, such as the various publics (e.g. patients) who will likely benefit from a particular M-CELS technology.
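The 'X is better than Y with respect to covering value V' schema can be made concrete in a short sketch. Everything here is hypothetical illustration: the option names, the covering values, and the scores are invented for this example and come from nowhere in the essay.

```python
# Hypothetical sketch of covering-value comparison. Option names,
# covering values, and scores are invented for illustration only.

design_options = {
    "scaffold_A": {"controllability": 0.8, "scalability": 0.4, "reproducibility": 0.7},
    "scaffold_B": {"controllability": 0.5, "scalability": 0.9, "reproducibility": 0.6},
}

def better_with_respect_to(x, y, covering_value, options=design_options):
    """Return whichever option scores higher on ONE agreed covering value.

    The function deliberately refuses to aggregate across values:
    incommensurable goods are compared only relative to a single
    covering value V, mirroring 'X is better than Y with respect to V'.
    """
    return x if options[x][covering_value] > options[y][covering_value] else y

# 'Which scaffold is better?' stays ill-posed until a covering value is fixed:
print(better_with_respect_to("scaffold_A", "scaffold_B", "controllability"))  # scaffold_A
print(better_with_respect_to("scaffold_A", "scaffold_B", "scalability"))      # scaffold_B
```

Note that the sketch never sums or weights the scores into a single index; doing so would presuppose exactly the common unit of measurement that the incommensurability problem denies.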

Of course, determining these basic bioengineering design principles or other possible covering values is yet another area in need of development for bioengineering ethics. This does not appear to be a straightforward task to me. Nevertheless, the articulation of determinative covering values could aid, indeed might even constitute, the standards of evaluation for M-CELS protocols by research review committees. Before we consider what M-CELS institutional review and oversight would look like, however, and who would be responsible for conducting it, we need to spell out the covering values that are necessary to make trade-off decisions about design choices at the benchside.

Finally, fostering a new bioengineering ethics as I have sketched above will require high-level attention and dedicated resources from departments, universities, and funding agencies. [12] It is not enough to say that ethicists and bioengineers ought to work together during the life cycle of an M-CELS research project. There also need to be multi-level institutional mechanisms and incentives to make these collaborations possible. For example, universities and funding agencies should allocate time and resources to enable ethicists, researchers, and trainees to collaborate in the manner that bioengineering ethics demands, and should reward their efforts by establishing norms and financial structures that recognize such work.

In closing, I believe these challenges, both philosophical and organizational, are tractable. Attention to these issues will aid the responsible development of M-CELS research. And the further development of bioengineering ethics could provide an appropriate model for the ethical conduct of other areas of science and engineering that depend on technologically and ontologically novel research entities for their advancement.

Insoo Hyun, PhD, Director of Research Ethics at the Center for Bioethics, can be reached at insoo_hyun (at)


[1] Beebee, Helen and Nigel Sabbarton-Leary (eds.). The Semantics and Metaphysics of Natural Kinds. (Abingdon: Routledge, 2010).

[2] Aach, John, Jeantine Lunshof, Eswar Iyer, and George M. Church. "Addressing the ethical issues raised by synthetic human entities with embryo-like features." eLife 6 (2017).

[3] Kamm, Roger D., Rashid Bashir, Natasha Arora, Roy D. Dar, Martha U. Gillette, Linda G. Griffith, Melissa L. Kemp, et al. "The promise of multi-cellular engineered living systems." APL Bioengineering 2 (2018).

[4] Cvetkovic, Caroline, Ritu Raman, Vincent Chan, Brian J. Williams, Madeline Tolish, Piyush Bajaj, Mahmut Selman Sakar, et al. "Three-dimensionally printed biological machines powered by skeletal muscle." PNAS 111, no. 28 (2014).

[5] Personal communications, Insoo Hyun and the scientific participants of the EBICS Meeting, "Workshop on Multi-Cellular Engineered Living Systems" M-CELS Workshop 2021, Q Center, St. Charles, IL, August 2-4, 2018.

[6] Harvard University Embryonic Stem Cell Research Oversight ("ESCRO") Committee. Ethical issues related to the creation of synthetic human embryos. (Cambridge, MA: Petrie-Flom Center, 2018).

[7] Kimmelman, Jonathan, Insoo Hyun, Nissim Benvenisty, Timothy Caulfield, Helen E. Heslop, Charles E. Murray, Douglas Sipp, et al. "Global standards for stem-cell research." Nature 533, no. 7603 (2016): 311-313.

[8] Hyun, Insoo. "Engineering ethics and self-organizing models of human development: opportunities and challenges." Cell Stem Cell 21, no. 2 (2017): 718-720.

[9] van de Poel, Ibo and A.C. van Gorp. "The need for ethical reflection in engineering design." Science, Technology, & Human Values 31, no. 3 (2006): 333–360.

[10] Harris, Charles E., Jr., Michael S. Pritchard, Ray James, Elaine Englehardt, and Michael J. Rabins (eds.). Engineering Ethics: Concepts and Cases. 6th ed. (Boston: Cengage, 2014).

[11] Chang, Ruth. Introduction to Incommensurability, Incomparability, and Practical Reason. Edited by Ruth Chang. (Cambridge: Harvard University Press, 1997).
