Report of the NIRT Workshop on Research Ethics

Cornell University, Monday and Tuesday, June 2-3, 2003

I. Introduction

A two-day NSF-funded Workshop on Research Ethics was held at Cornell University on June 2-3, 2003.

Organized by Cornell’s Nanoscale Interdisciplinary Research Team (NIRT) investigating Nanoscale Engineering of Inorganic-Organic Interfaces, the workshop brought together twenty participants to discuss ethical and social issues in scientific and engineering research. Associate Professor James Engstrom, Chemical Engineering, and Ronald Kline, Bovay Professor of History and Ethics of Engineering, co-chaired the workshop.

Attendees included faculty, post-docs, and graduate students in Chemistry, Chemical Engineering, Physics, Philosophy, and Science and Technology Studies, an editor from Science, an industrial researcher, and an expert in intellectual property from the Cornell Research Foundation.

Intense discussions of case studies from these disciplinary perspectives and an insider’s view of a recent case of data fabrication at Bell Laboratories set the workshop apart from previous endeavors at other universities.

Research integrity, credit and authorship, proposal preparation, refereeing journal articles, and intellectual property topped the list of issues discussed.

The frank exchange of opinions resulted in a surprising degree of consensus about methods to promote ethical behavior in scientific and engineering research. The workshop concentrated on common-sense ways to improve the ethical climate within research teams.

II. Introductory Session (Monday morning)

Michele Moody-Adams, Director of the Ethics and Public Life Program at Cornell and Hutchinson Professor of Ethics in the Philosophy Department, set the stage for the workshop by placing research ethics in the context of moral philosophy. Noting that the subject was an aspect of “practical ethics,” an applied branch of “normative ethics,” Moody-Adams stressed the importance of case studies for developing a sense of moral autonomy in research settings. In such studies, we should ask: who is affected, what is at stake, and what values are in conflict?

Co-chair Ronald Kline related the main issues in research ethics (integrity of research; credit, authorship, and conflict of interest; welfare of subjects, experimenters and the environment; and social implications of research) to the history of public scandals about “fraud” in science in the United States from the early 1980s to the present. Kline pointed out the importance of understanding such social relations as peer review, the referee system, replication, the phenomenon of “golden hands,” gender, other power relations, and trust in analyzing ethical issues.

The highlight of this session was an hour-long discussion of the recent, widely-publicized case of data falsification at Lucent’s Bell Laboratories by Jan Hendrik Schön. He had risen to be a star by publishing path-breaking research on organic semiconductors and molecular-scale transistors. In May 2002, Professor Paul McEuen of the Physics Department at Cornell helped bring the case to light when he and a colleague at Princeton noticed the similarity of noise patterns in graphs of organic semiconductors having dissimilar properties.

Calling the conditions for the case a “perfect storm,” McEuen narrated its events from his vantage point and led a discussion about why the fabrication occurred and why it was not caught earlier. The workshop debated Schön’s motives, the decline of Lucent’s informal but strict internal peer review system and its practices for keeping research notes, the dependence on referee reports at scientific journals, and the responsibilities of group leaders in the research community. One participant noted that researchers are reluctant to accuse someone of fabrication because the charge, if substantiated, is equivalent to handing down a “death sentence” in science. Thus, as Park Doing has observed, there is a high, all-or-nothing social barrier to openly distrusting collaborators. Philip Szuromi, an editor at Science, related how the case was handled by his magazine. He also noted that innovative papers have a difficult time going through the referee process.

III. Research Case Studies (Monday afternoon and Tuesday morning)

The workshop broke up into four groups of about five people to discuss four hypothetical cases, or scenarios. These were selected from Research Ethics Cases and Commentaries, ed. Brian Schrag, vols. 1-6 (Association for Practical and Professional Ethics, 1997-2002). See the Appendix for full case descriptions and questions.

Integrity of Research

Scenario 1: “Truth or Consequences.”

Peter, a graduate student, is experimenting on cellular function and development in “knockout mice” (mice genetically engineered not to have a certain gene) in Dr. Larson’s laboratory. Excited about the results, Larson assigns another student, Sally, whose work is not progressing, to the project to assist Peter. Larson promises to place Sally as second author to Peter if she gets good results. Sally produces graphs for two sets of experiments showing an unexpected, promising pattern. Peter questions Sally’s work when she at first suggests that the mice were mixed up in the repeated set of experiments. He looks through her notebook and finds that it documents her first set of experiments, but not the second set.


A general consensus was reached about this case. One group remarked that “Peter should not ignore his misgivings. He needs to confront both Sally and Dr. Larson concerning his doubts.” As first author, Peter “is responsible for all of the data and procedures to be printed in the paper.” Sally is responsible for the validity of her data and the evidence for it. “As the corresponding author, Larson shares these responsibilities, but as is often the case must trust in his students to provide this information to him.” Thus it is important to have full documentation. One participant said, “It doesn’t count if you can’t account for it.”

Some groups thought Larson put too much pressure on the researchers and created tension and resentment in the group by assigning Sally to Peter’s project.

Credit in Research and Publication

Scenario 2: “Ownership of Knowledge and Graduate Education.”

Susan Moss, a graduate student in Dr. Abrams’s laboratory, talks to Jim Reynolds, one of the post-docs in the lab whom Abrams relies on to train and assist grad students, about problems she has synthesizing her data. Together, they come up with a model and Moss presents a successful report at the group’s weekly meeting. Later, while reviewing a grant proposal from the lab, she discovers that it contains several of the experiments she mentioned to Reynolds as the next steps in her thesis research. He says the ideas were his. Moss informs Abrams, who says the two must work it out alone. Moss pushes her claims. Abrams responds, “Ideas are a dime a dozen; it’s the execution of the experiments that receives credit, and this you certainly can do.”


There was general consensus about this case, with some differences. All groups thought that Abrams had an obligation to help sort out the conflict, that the three should work it out together, and that Abrams should make clear policies about research credit within the team. While two groups leaned toward Susan documenting her ideas better and thereby claiming some credit for them, another group thought she should “chill out” and understand the proposal process. “Drawing on the experience of the members in this group, the ideas generated belonging to the laboratory as a whole can work in this case” because the lab contains only 8 people.

Three groups disagreed about the importance of research ideas and, consequently, how to allocate credit for them. The first group said “The nature of the idea determines when it becomes part of the experimental process.” More specific ideas directed toward experimentation are valued above general ideas. The second group said “Ideas become concrete only when we have a working experimental protocol tested over a period of time which ascertains [sic] the validity of the ideas.” The third group commented that “Reynolds and Moss use different ideas of the status of ideas in the lab in (as is often the case) ways that bolster their own positions” of power.

Proposal Review

Scenario 3: “Protection from Idea Scooping.”

Dr. Susan Ness is a post-doc in the laboratory of Dr. Black working on superconductivity. They decide to submit a joint proposal to NSF to continue her funding and finish conclusive tests on a new material. Ness has not published or presented anything on her results as yet. Concerned that reviewers of the proposal might steal her ideas, Ness decides to introduce an error into the methods section of the proposal. She thinks the error will be hard to identify and will prevent replication of the new material. They receive the grant, the proposals are destroyed per NSF policy, and she publishes an error-free paper to acclaim. A graduate student at another university, who has read the NSF proposal but is unaware of the publication, emails Ness about difficulties replicating her results. If Ness tells the student about the publication, she is concerned that the error in the proposal will be revealed.


This case prompted the most debate at the workshop. All groups said it was wrong for Ness to introduce the error, but some participants thought it was a defensible second-best option. Proposals do not have the status of publications and, regrettably, reviewers often appropriate ideas, advertently or inadvertently, in the competitive environment of hot research areas. Since some data are left out of proposals anyway, why is omission a worse sin than commission? Other participants thought lying was lying: it betrays trust in a proposal as well as in a publication, and it helps foster a bad habit of falsification and fabrication.

One group concluded that “Susan’s doubts in the confidentiality of the NSF peer [review] process are not unfounded. She is justified in acting on her beliefs although the methods she chose to protect herself are unethical.” She could have presented the methodology vaguely, “in effect stating to the peer review process that she is protecting her methodology.” She also should have discussed her problem with Dr. Black.

Journal Refereeing

Scenario 4: “To Review or Not: Reviewing the Competition.”

Prof. John Slater supervises graduate student Alice Parker. She is troubleshooting a protein purification protocol necessary for her research. He receives a manuscript to review from a competitor’s lab with a title indicating that it is very close to the work he and Parker intend to publish. He decides he can do so objectively and asks for her comments on it. They decide the data are not convincing and he recommends to the journal that the paper should not be published. Slater likes a novel technique described in the manuscript and suggests that Parker use it in her purification protocol. It works and she completes her experiment.


This case generated a good deal of debate. One participant strongly defended the minority position that Slater should, on principle, refuse to review a competitor’s paper, thus avoiding the slippery slope outlined in the case. The editor from Science said he tried to avoid situations like this in assigning referees. One group suggested contacting the editor about the situation. The majority thought Slater should review the paper if he honestly thought he could be fair, but also recognized the ethical labyrinth he was entering by doing so. Most groups agreed that it was permissible to let a graduate student help with reviews. But one participant thought it was solely Slater’s responsibility. One group said it was wrong in this case because of how close the research was to Parker’s.

Most groups thought Slater and Parker could only use the protocol if they obtained permission from the rejected authors. Collaboration might be possible. One participant argued that it was okay to use the technique because the authors had not protected their intellectual property rights on it. Another disagreed, calling it “theft, pure and simple.”

That debate revealed opposing ideals regarding credit in basic scientific and industrially-oriented research, which became a theme in the next two sessions.

IV. Intellectual Property (Tuesday afternoon)

Scenario 5: “Managing Intellectual Property” (written by Ernie Davis).

Three graduate students–Juan Silvia, Philip Dunston, and Tonya Wilkes–work on a MEMS fabrication process in the laboratory of Dr. Samuel Baker. They have executed his plans to perfection, and the results look promising. Baker assigns another graduate student, Renee Armstrong, to help the overworked students write the article. She discovers some small inconsistencies in the data, which leads to modifications of the new process. Before sending the article to a journal, Baker lists himself and the original three graduate students as inventors on an invention disclosure statement. In preparing a patent application, Baker discovers that an obscure journal at a small, unaccredited college has published similar findings. He decides not to acknowledge this in the patent application. The patent is invalidated when the Patent Office discovers the prior art. Baker continues developing his project, resigns from the university, and starts his own company that utilizes the MEMS process improvement.


Because few people at the workshop knew enough intellectual property law to comment knowledgeably, Davis led the group through his interpretation of the law in the case. All inventors were properly identified–an important issue because a patent can be overturned if inventors are improperly identified. The onus is on Baker to disclose all prior work, no matter where it is published. Baker does not have the right to start a company with technology he developed at a university without obtaining permission from its technology transfer office.

One participant strongly disagreed with Davis’s wish that all scientific and engineering research results be screened for IP by a tech-transfer office, arguing that the policy would tend to make intellectual property, rather than basic science, the goal of university research. Another participant countered that too little IP screening takes place as it is. Others said the university was moving in this direction in any event, and that the Cornell Research Foundation served the interests of the research community by using part of its proceeds to fund research at the university.

Several graduate students commented that this session opened their eyes to the potential commercial value of their research and the necessity for keeping careful records on it.

V. Industrial Research (Tuesday afternoon)

Dr. Kathy Vaeth, Research Scientist at Eastman Kodak, briefly compared ethical issues in university and industrial research. She observed that many of the issues brought up at the workshop would not have even come up where she worked. Her company enforces strict policies on dealing with outside researchers, refereeing competitors, and so forth, in order to protect its intellectual property rights and avoid lawsuits.

VI. Recommendations and Comments (Tuesday afternoon)

The frank exchange of opinions at the workshop led to general recommendations on how to promote an ethical climate in research groups. Most deal with improving the culture within groups.

  1. The P.I. should strive to encourage a culture of trust, openness, full disclosure, and clear guidelines within the group in regard to research responsibilities, resolving conflicts, data handling and interpretation, credit and authorship, confidentiality of proposals and refereed papers, and funding decisions.
  2. Post-docs and students should be aware of lab management issues, power relations within the group, and the need for openness while protecting oneself and the group. They should know about available mediation forums at the university, such as committee members, the Graduate School, and the university’s ombudsman.
  3. Some mechanisms for creating this climate are regular group meetings, an annual group meeting on ethics, and keeping detailed lab notebooks on experiments and ideas. Individual styles should not be lost in the process.
  4. The department or college could institute an annual workshop in research ethics, to be taken before the A-exam, and add material to introductory graduate courses on the big picture in research and the difference between IP and basic science.
  5. Funding agencies should investigate the confidentiality problem in proposal review.
  6. Faculty, post-docs, and students should recognize the vital importance of trust and its context-dependent relationships in all parts of the research enterprise.

One graduate student summed up the workshop by saying:

  • “Case studies are useful since one is able to think about these sorts of situations before having to deal with them in real life.”
  • “We learned how best to handle conflicts and maintain values and integrity, but at the same time how to protect yourself from abuse.”
  • “[I] gained a better understanding of how the scientific community functions.”