Let’s Not Wait for a Panacea to Ensure Good Scientific Practice

The perfect is the enemy of the good.
Voltaire

I watched a panel discussion yesterday titled “‘Publish or perish’: Good Scientific Practice and the Battle for Attention” [German: “‘Publish or perish’: Gute wissenschaftliche Praxis und der Kampf um Aufmerksamkeit”].

It touched on a lot of subjects, from scientific malpractice like plagiarism and data manipulation/fabrication to publication biases, open access, the role of libraries and publishers, social media, and much more. The panel and the audience consisted of scientists from different disciplines and at different stages of their careers. Perhaps surprisingly, there were some interesting contributions despite the number of topics and the short time.

There is no one-size-fits-all solution

One thing in particular stood out to me: it's completely ludicrous to wait for a panacea to ensure good scientific practice.

Judging from now nearly eight years of scientific work, there are a lot of ailments that afflict science. The panel discussion touched on a few of the more obvious ones (see above). Some are present in almost all disciplines (e.g., the little value many publishers add), others are more specific (e.g., data fabrication, publication bias).

I don't think there is one solution that will address them all perfectly in all disciplines. There is just too much variance in science: in what it is, how it is done, and what it produces. And while interdisciplinary exchange is good, efforts must be made within the individual disciplines. Or rather, within subsets of the individual disciplines.

It's enough to work well for some issues within some sub-disciplines/types of research

Take, for example, the pre-registration of all studies prior to conducting them. This is something medicine is working towards. But someone from another discipline mentioned the difficulty of pre-registering exploratory studies. Yup, it's difficult to pre-register these studies, but not impossible. You know you are conducting a study. You're not collecting data at random. What comes out of it, or even whether anything interesting comes out at all, is another question. But let the community know what you are doing.
Similarly, pre-registration of studies will not solve the problem of post-hoc changes to hypotheses or outright data manipulation. But it does not have to. It's enough that it will shed some light on the file drawer problem/publication bias.

When it comes to sharing data by default with a journal submission: yes, you cannot simply attach 2 TB of simulation data to a journal paper. But you can attach the 2 MB or even 20 MB that constitute all the data in some experiments. Just because you cannot do it with all experimental data is no reason to avoid doing it with some. And if you do not attach your data as supplemental material to the article, you should have good reasons.
Speaking as a psychologist dealing with data from human study participants, I'd say even privacy concerns are a non-issue in many studies. It is usually very hard or even impossible to trace study data back to individual participants. Many studies are done with students on subjects that pose no threat to privacy. Even with demographic data attached, tracing is hard or impossible. So let's share data whenever it is possible.
Sure, participants must be informed beforehand, but once this becomes standard, it would bring a lot of benefits for science (including for you). And with data attached as supplemental material to an article, you would even have a check on whether you have understood the data file: just compare the results of your analyses with the ones described in the paper. That's something most data archives cannot provide.

Yes, this means different "standards" between and within disciplines, and incremental improvements

It might seem strange to apply different requirements to different kinds of studies or data, but I think it is the best approach. For example, the information provided in pre-registrations will stimulate discussion. And it will point to research findings that cannot be replicated. Technical progress in data storage capacities will continuously push toward sharing more data. And sharing will make the loss of data visible when the programs used to read it become unusable.

And this discussion, this continuous improvement à la Kaizen, will be better for good scientific practice than waiting until we have agreed on everything.

Because we cannot wait for an agreement that might never happen.

Not if we want to do science with integrity. Not if we are after more than "bottle washing and button counting", even if that brings us the necessary publications for tenure. Not if we want to keep those individuals in science who are not only smart but also care about integrity. And without them, what will become of science?

The upside: Change can happen

I'm not sure how many people think that incremental improvements in sub-disciplines are the way to go. I think for many scientists there comes a time when they decide either to quit or to pursue a career by playing the game by its rules, whether they like these rules or not. And if they decide to play the game, they play it as hard as they can. The competition is just too strong to do anything else if they want to survive.

But I hope there are scientists who say "enough is enough". Who are fed up with seeing scientists spend their valuable time and waste part of their careers trying to replicate effects that aren't really there. Who get angry when they learn that other groups already tried to find the same effect but couldn't, yet there was no outlet to let other researchers know that the original study cannot be replicated. Especially researchers who are still outside the community and who do not know from informal conversations that they are chasing a ghost. Or who don't want to see their work vanish behind paywalls or get buried under DRM restrictions that prevent colleagues from actively working with the content.

Who simply did not go into science to work under these conditions. But who also don't leave; who stay and try to improve the situation.

And here, one upside of the panel discussion was the mention of social media use by scientists.

Today, a PhD student or post-doc is not an apprentice slaving away in an ivory tower somewhere in the swamps, with no contact other than their supervisor and his or her direct colleagues. Today, social media allows rapid and continuous conversation between different departments, institutes, and universities. They can talk to others and find out that science does not have to be this way. That there are better, more meaningful, and more honest ways to do science. They can learn from others who ran into the same pitfalls before them and learn to avoid the next ones. They can learn better methods. Better ways to communicate their research findings.

Sure, they will still be limited by not playing the game regardless of the consequences. But they will see what is possible in their disciplines.

And perhaps the interdisciplinary and overall reputation of science and its practitioners will play a beneficial role here. Perhaps some scientists in some disciplines will succeed and come up with groundbreaking findings despite "hampering" themselves by researching with integrity. Scientists who go not only after publishable work but who adhere to best practices. That might just inspire (or shame) other disciplines into adopting these methods as well.

Wherever they can be applied.