Top 3 bizarre (and wrong) examples of scientific misconduct

  • Qian-Chen Yong
  • August 26, 2017

In my previous posts, I discussed some examples of wrong ways to carry out an experiment or analyze data. Disclaimer: the examples below are all from my personal experience, meaning I have talked to people who genuinely thought that what they were doing was correct, or that what they were told to do was correct or not a big deal. I will not disclose the identity of anyone who was involved in those wrongdoings.


The most common form of data manipulation is probably cherry picking. I once encountered a researcher who thought it was okay to disregard some of the original raw data, without legitimate justification, to make the reported results statistically significant. This researcher ran an experiment with 12 mice in each of the control and test groups but, as instructed by his mentor, reported only the 3-5 animals that showed the data they wanted to see.

I believe everyone involved in biomedical research will agree that you will see outliers in almost any kind of experiment. To me, data may only be discarded with proper justification, such as knowing that something definitely went wrong in that particular experiment. For example, say the positive control did not respond as expected; the data from the subjects tested alongside that positive control in the same experimental batch should probably be discarded. Another kind of outlier that can be cautiously removed is one lying in the same direction as the remaining data; such a point inflates the standard deviation and causes the statistical analysis to lose significance. Still, when deleting data you should make sure you have at least another 4-6 data points remaining. All in all, you need a legitimate reason to remove an outlier. If you remove data simply to make the results statistically significant, you are committing scientific misconduct. You are cherry-picking the data you want to see, and this is WRONG! For a little more information, check out this website.
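To see how strongly a single same-direction outlier can inflate the spread, here is a minimal sketch with made-up numbers (Python standard library only; the values are hypothetical, not from any real experiment):

```python
import statistics

# Hypothetical treatment responses; the last value is a same-direction outlier.
data = [2.1, 2.3, 1.9, 2.2, 2.0, 8.5]

sd_with = statistics.stdev(data)         # outlier dominates the spread
sd_without = statistics.stdev(data[:-1])

# The single outlier inflates the standard deviation more than fivefold...
assert sd_without < sd_with / 5
# ...but removing it still requires a documented, legitimate justification.
```

Even in a clear-cut case like this, the removal should be reported, not done silently.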
Here I’d like to talk about the top 3 bizarre scientific misconducts I have encountered:


1) First of all, I have heard people say, “for in vitro experiments, we run qPCR in quadruplicate (each sample is run across 4 wells); thus, n = 4.” Usually, researchers run the same qPCR sample in duplicate or triplicate to control for variation between reactions (personally, I prefer duplicate to save reagent, but I recommend triplicate). Nonetheless, a postdoc I know was told that the same single sample could be counted as 4 different samples; all you have to do is run that one sample in quadruplicate. So they just kept repeating the cell experiments and running the qPCR in quadruplicate until they got the desired results. Once they obtained the “expected” qPCR data, they stopped and reported it. I cannot emphasize enough how wrong this is. The same single sample from one individual experiment can only be counted as a sample size of n = 1. Period. No matter how many replicates you run. For in vitro experiments, most peers seem to agree that 3 individual experiments are sufficient, assuming you were able to repeat the experiment without much failure. If you need to run 6 experiments to see 3 sets of data you want to see, then you probably need to keep optimizing your experiment instead of stopping there and reporting 3 experiments out of 6, because that is cherry picking!
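To make the distinction concrete, here is a minimal sketch (hypothetical Ct values, Python standard library only): technical replicates of one sample collapse into a single measurement, and only independent experiments increase n.

```python
import statistics

# ONE sample run across four qPCR wells (technical replicates) -- hypothetical Ct values.
wells = [21.3, 21.5, 21.2, 21.4]

# Technical replicates average into ONE measurement: this is n = 1, not n = 4.
sample_ct = statistics.mean(wells)

# Only independent experiments (e.g. separate cell cultures) add to the sample size.
experiment_cts = [sample_ct, 21.9, 20.8]   # three independent experiments
n = len(experiment_cts)
assert n == 3   # n counts experiments, never wells
```

Running more wells per sample tightens the measurement of that one sample; it never increases the sample size.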


2) Another misconduct is when someone creates a fake email account and serves as a reviewer for their own papers. I believe this is pretty well known by now. The first time I heard about it, from one of my colleagues, was before the large-scale retractions were reported, and I was shocked; I did not know the system had been abused so badly. The good thing, however, is that most journals now choose reviewers based on their own research rather than relying on the reviewers suggested by the authors. A similar and pretty common situation is when the reviewer is a good friend of yours who is too busy (or too kind) to review the paper, so they pass the ball back and ask you to review it yourself. Of course the manuscript then stands a good chance of receiving a minor revision and being published quickly, because you are one of the reviewers. This is definitely wrong. Definitely a kind of scientific fraud.


3) Thirdly, I have seen that when a group of data is close to, but not actually, statistically significant, some believe it is okay to add the mean value into the data set to make the data statistically significant. For example:
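A sketch with hypothetical fold-change numbers shows why this trick seems to “work”: appending the group mean leaves the average untouched, but it shrinks the sample standard deviation and inflates the t-statistic (Python standard library only; all values are made up for illustration).

```python
import statistics

def t_stat(a, b):
    # Welch t-statistic (standard library only, no p-value lookup).
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(b) - statistics.mean(a)) / (va / len(a) + vb / len(b)) ** 0.5

control = [1.00, 0.95, 1.05, 0.90, 1.10]   # hypothetical fold changes
treated = [1.30, 1.60, 1.05, 1.50, 1.00]

# Fabrication: append the group mean as a fake sixth "experiment".
padded = treated + [statistics.mean(treated)]

assert abs(statistics.mean(padded) - statistics.mean(treated)) < 1e-9  # average unchanged
assert statistics.stdev(padded) < statistics.stdev(treated)            # spread shrinks
assert t_stat(control, padded) > t_stat(control, treated)              # t inflates
```

The fake point contributes zero to the sum of squared deviations while raising n, so the variance and standard error mechanically drop, and a borderline comparison can cross the significance threshold without any new experiment being done.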


Notably, this kind of manipulation does not change the average, and that is exactly why the student involved thought it was okay to do. This is again NO NO NO NO! This is data manipulation, because the added experiment is imaginary; it was never done! This is falsification! Fabrication!


In conclusion, the ways of manipulating data described above are among the most memorable and surprising I have seen. I am sure there are many other manipulations that have surprised you before; please share them with us in the comments.


Some of the trainees who committed these misconducts were told by their mentors that such actions were common and acceptable. This is crazy! I hope their mentors were lying when they said these practices are common, but maybe this is exactly why so many published works are not reproducible. I still remember a PhD student I once met who told me, “do not trust any data coming out of my lab.” That showed me how easily a PhD student can be pressured by a mentor into doing something they are uncomfortable with. Luckily, a few years later, scientific misconduct from that particular lab was exposed by Retraction Watch and the mentor resigned. Good for us!


All this said, scientific discoveries will definitely be a lot more reliable if we all do our part to review our own mentors on QCist.com and warn prospective trainees about mentors who have problems with scientific integrity. On the other hand, we should also promote our mentors if they are good scientists! Let’s start reviewing our mentors today!
