QCist

Encouragement for good scientists

  • Qian-Chen Yong
  • May 13, 2017

We need good scientists. We need a lot of good scientists to really drive science forward. Beyond punishing scientists who commit scientific fraud, we should also do our best to encourage good scientists to continue making reliable scientific discoveries.

Scientific misconduct is not new. It has existed for a long time, probably ever since fame and a sense of glory became associated with scientific achievement. However, the issue has become far more prevalent, according to several reports (New York Times; PNAS; Nature; Boston Globe; and many more). With several high-impact studies found to be fraudulent, the problems of scientific dishonesty and misconduct have finally received media exposure and attracted public attention. Consequently, many scientific organizations are trying to develop ideas to tackle this issue, and one high-impact journal has promised to do its best to avoid publishing unreliable data (Nature). These improvements would not be possible without help (or pressure) from the media and from websites such as Retraction Watch and PubPeer, which raise red flags about potential scientific misconduct and poor-quality data published in high-impact journals.

To be fair, data may still be difficult to reproduce even when scientists have done everything right, so we cannot jump to the conclusion that irreproducible data have been falsified (although one study showed that scientific misconduct is the major issue). But certainly, something is wrong. Unreliable, poor-quality, and irreproducible data come from different sources (I will talk about these briefly here and share more personal experience in my next blog… stay tuned):
(1) Poor or inconsistent quality reagents or chemicals were used in the experiment. For example, many reagents have batch-to-batch variations that can lead to variations in results.
(2) Mutations in cell lines can lead to variations in experimental results.
(3) Different experimental methods are used in different labs. For example, fixatives used to fix the tissue section may lead to different results in immunohistochemistry.
(4) Inadequate descriptions of experimental methods in publications.
(5) Poor research recordkeeping and a lack of standard operating procedures (SOPs). This can lead to variations in experimental conditions when the experiment is replicated.
(6) Poor maintenance of laboratory instruments and equipment. For instance, pipettes, weighing balances, and pH meters should be calibrated regularly; otherwise, the measured value may be quite different from the true value.
(7) Poor mentorship regarding good scientific methods and statistics. For example, I personally encountered a researcher who thought that running the same sample in quadruplicate (i.e., 4 times in a single assay) would count as n = 4 and be adequate for publication.
(8) Poor scientific integrity, including cherry-picking data. No further explanation is needed to understand why this leads to irreproducible data.
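The misconception in point (7) can be made concrete with a short sketch. This is a hypothetical illustration with made-up numbers (not from any real assay): four technical replicates of one biological sample measure the same specimen repeatedly, so they should be collapsed into a single value that contributes n = 1, not n = 4.

```python
# Hypothetical illustration of point (7): technical replicates
# (repeated measurements of the SAME sample) are not independent
# biological replicates, so they do not increase the sample size.
import statistics

# Four technical replicate readings of ONE biological sample
# (assumed example values, not real data).
technical_reads = [10.2, 10.4, 10.1, 10.3]

# Wrong: treating the four readings as n = 4 independent samples,
# which inflates the apparent sample size and precision.
n_wrong = len(technical_reads)  # 4

# Right: average the technical replicates into one value, so this
# biological sample contributes n = 1 to the experiment.
sample_mean = statistics.mean(technical_reads)
n_right = 1

print(n_wrong, round(sample_mean, 2), n_right)
```

The statistical unit is the biological sample (animal, patient, independent cell culture), and a publishable n requires that many independent samples, each summarized as above.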


In my opinion, the top three points are currently beyond our control. We can only try to buy the same chemical from the same reputable company when carrying out our experiments (though that can be expensive). To avoid genetic mutations in cell lines, we can only maintain good master and working cell bank systems so that the cells are not passaged for too many generations (and hope that the cell lines do not already carry serious mutations when we start working on them, and that there is enough space in the liquid nitrogen tank for large enough master and working cell banks). Differences in experimental methods between labs are even harder to resolve, because everyone is more willing to use the method established in their own lab than a method from another lab. On top of that, many scientists do not share detailed experimental methods in their publications, for various reasons (I will discuss this topic in the future). The remaining issues (5-8 in the list above) require a scientist to spend more time and money to fix.

As we all know, time is money, so we can assume that everything comes down to money. Nowadays, funding is scarce, and the pressure to get money is high. Even scientists who have secured 4 to 5 years of funding (e.g., an R01) cannot relax; if they do, they might not produce enough publications to stay competitive for the next grant before their funding runs out. Scientists will probably spend most, if not all, of their time producing new data instead of working on quality control (QC) of the data and good recordkeeping. Scientists with less patience (and integrity) might start cherry-picking data when findings are inconsistent; scientists with no integrity will start falsifying data. These scientists will publish faster (and likely in higher-impact journals) than those who devote more time to troubleshooting experiments or confirming data quality to make their experiments and data more consistent. Scientists who really want to produce high-quality, reliable, and significant discoveries will need to spend a lot of personal time in the lab or office to keep up with the demands of the current academic system: publish or perish. As a result, scientists with high scientific integrity will likely burn out and become discouraged. Our current academic system selects for scientists who publish more, and in journals with higher impact factors, so we can guess who will survive better in this system. If this vicious cycle goes on, the scientific world will soon be occupied by scientists who care not about scientific advancement but about their personal academic status and fame. If no measure is taken, this could become a Dark Age in scientific history (for the life sciences and medicine) and a disaster for humanity.

So how do we solve this problem? The answers are simple: punish the scientists who have low integrity AND encourage the scientists who have high integrity. Here is the tough part: execution. How do we identify the bad scientists with confidence, knowing that an investigation into scientific misconduct usually takes years and often remains inconclusive? In fact, one of the ultimate goals of QCist.com is to differentiate bad scientists from good ones. However, it is hard to do so with the resources we currently have (we are actively looking for more funding and other resources; please contact us if you want to join us and help us QC science); for now, we may not be able to distinguish a real whistleblower from an angry person making up false statements. Only by combining the reviews of scientists on QCist.com with the evidence of scientific misconduct on PubPeer and/or Retraction Watch will we have the confidence to tell whether a scientist's work is questionable.

In the meantime, we need to identify responsible scientists with high integrity and start encouraging their survival in this scientific world. Here are a few things that can be done (again, I will discuss these points further in my next blog… stay tuned):
1) High-impact journals should embrace manuscripts that aim to replicate others’ work.
2) For high-impact journals to accept replications of others' work, scientists should try to add new information to the manuscript (e.g., reach the same conclusion using different methods, and/or include details of the troubleshooting process if a different result and conclusion are obtained).
3) Create a new journal dedicated to replication studies; this can facilitate the publication of high-quality replications, especially when few existing journals are willing to accept this kind of work.
4) When reviewing grant applications, funding agencies should value the replication of others’ work and give certain levels of credit to scientists who spend time replicating others’ work.
5) Encourage scientists with our kind words. Personal verbal encouragement is important, but it may be more useful to leave a testimonial for the scientist through QCist.com, which can serve as a common place for good scientists to receive and save testimonials and recommendations.


Leaving testimonials may seem useless and impractical. However, since QCist.com aims to be an information source for prospective trainees choosing their postgraduate or postdoctoral training, it may sway a prospective trainee's decision toward a lab with better research transparency and mentorship. With better training, these trainees will become responsible and transparent scientists. In the long run, we can reduce the number of irresponsible scientists and promote scientists with greater integrity.

In summary, we all know how hard it is to have a successful career as a scientist, especially for those doing their very best to produce reliable, reproducible, and even robust data. We also know that science will only advance through the accumulation of truly robust experiments and data. It is important for us to do whatever we can to encourage good scientists to stay in research. Let's start by writing testimonials and good reviews on QCist.com for the good scientists around us!

Many thanks to Dr. Kaputna for editorial and other comments.

