Plagiarism #5 — Preventing Plagiarism

[Image: Claymore Mine — by Sgt. Scott Biscuiti [Public domain], via Wikimedia Commons (http://commons.wikimedia.org/wiki/File%3AClaymore_Recon_placement.jpg)]

To end this series about plagiarism, let’s look at the possible ways to prevent plagiarism. After all, when plagiarism is discovered, it’s already too late to take the best actions. It’s going to be painful, embarrassing, excruciating — so what can be done in advance to prevent it?

Looking at the literature and at my own experiences with plagiarism (by students or by authors of papers I had to review!), there are — at least — the following suggestions:

  • Make it explicit
    Especially in teaching, but also for journals, make it explicit that you check for plagiarism, what you consider plagiarism, and how you will punish it. Yes, students and journal authors should already know about it, but the explicit reminder serves at least two purposes. First, it reminds everyone of the possible consequences. It’s a “Here’s a minefield. Do not stray from the correct path or you get hurt.” kind of message. Second, once you discover plagiarism, there is no excuse of “not knowing” or of “having different standards in my country/field”. Ah, and when dealing with students — if you ask them whether they know how to cite correctly and they say “Yes.”, give them a piece of text and ask them to cite it. Don’t take their word for it. People usually do not know what they do not know — or are embarrassed to admit a lack of knowledge.
  • Evaluation criteria should focus on originality and quality — not on quantity
    That’s a tough one and easier said than done. We have the “publish or perish” pressure in science — and unless this rule is actually lived, e.g., by making public cases where people followed it and got rewarded, it’s a waste of space to write it down. But it’s important if you want to prevent plagiarism and encourage good scientific practice — not just the survival of your department or institution.
  • Supervisors need to be content experts on the topics their PhD students work on
    It’s easy to do a peer review in psychology on a topic you know nothing about. Sure, you cannot judge whether the work is original, or whether the relevant literature was really covered, but you can judge whether the arguments make sense (within the limits of what was said) and you can easily judge the quality of the method section. But unless you stumble upon discrepancies in style, you will not be able to detect plagiarism, because you are not familiar with the original literature. So, besides doing your PhD students no favors by “supervising” a thesis you know nothing about, and being unable to help them get into the community (because you are not a part of it), you make it very likely that you will miss deliberate or accidental plagiarism.
  • Peer-reviewers need to be content experts
    Similar to the previous point — the same is true for doing reviews on subjects you are not really familiar with. As much as I see peer review as part of being a scientist, I have given back (and will give back) to the editor those papers which cover topics I am not really familiar with. It’s not something I like doing, but still … I could not do a good job. Not only regarding possible cases of plagiarism, but also regarding the merit or possible improvements of the paper.
  • Make it a topic in the department/journal
    One positive consequence of the discussion about plagiarism is that it entered public consciousness (of scientists). However, it should go beyond “Don’t be that researcher.” The discussion should include examples of correct and incorrect citations — even if many people think they know the difference. It should include difficult questions. One reason why I like the “Ask Retraction Watch” series — it shows that it’s not all black and white and some issues are difficult to solve.
  • Make explicit who is responsible for what and who can answer questions — confidentially
    I think that some researchers are hesitant to ask questions about best practice not only because they do not know whom to ask, but also because they think they will embarrass themselves by asking. After all, you work in science, you did study your discipline, it should be known to you, shouldn’t it? In many cases, no. So there should be people in place who answer questions confidentially. Sometimes it’s astonishing how large the gaps in knowledge can be — even of ‘established scientists’.
  • Take supervision seriously, don’t delegate your responsibility to programs, make sure you know the people you are responsible for and if you cannot do it, do not supervise
    A last point regarding supervision. I get the impression that, at least in some cases, supervision is neglected because there are PhD programs at work which are supposed to convey everything a PhD student needs to know. In my opinion — and it is an opinion — this does not let the supervisor off the hook. Yes, there are PhD programs and workshops and lectures, but the supervisor should get to know the people for whose scientific qualification s/he is at least partly responsible and find out where the gaps are. Formalized PhD programs can only do so much; there will be gaps in knowledge, and the supervisor should give differentiated and helpful feedback to this person. If the supervisor cannot do this — e.g., because there is “no time”, or “other issues are more important” (some supervisors are managers or lobbyists, not scientists) — then, in my opinion, the best thing this person can do is to stop supervising. BTW, taking on PhD students and handing them to PostDocs for the actual supervision works only if these PostDocs have the same knowledge as you have.

I admit, the possible ways of preventing plagiarism are skewed by my own experience. Personally, I think that plagiarism is both one indicator of bad supervision and an indicator of the problems with the current scientific system. Publish or perish — it doesn’t work. I understand the reason behind it — science that is not communicated, science that is not published and that does not pass the peer review of other scientists — it might as well not have happened, because it has no consequence.

But the focus is wrong. It looks at the outcome, the product, while badly neglecting the process. Who cares how the results were found, as long as there are results to publish? It’s short-sighted, it’s damaging to scientific progress, but it’s the way the ‘game’ is played.

Thing is — science is no longer alone.

For decades, centuries even, scientists were a group on their own. With their own societies, their own means of communication (journals, conferences), and their own means of controlling themselves (peer review by scientists for scientists).

But with the rise of the internet, other people are entering the ‘game’. I think that websites like VroniPlag — which so far analyze primarily the dissertation theses of politicians for plagiarism — are only the beginning. Science will face public scrutiny as it has never encountered before.

Word of misconduct spread before as well — just visit any conference and talk to the participants. If you listen carefully enough, you find out who employs questionable research practices — people talk. But with the Internet, that discussion will go beyond the group of scientists. The public is becoming more and more aware of cases of misconduct in science — and as scientists, we risk losing our reputation.

If we were a political party, I would say: “So be it.” But I still believe in something Richard Dawkins said beautifully (although I have problems with his style):

Scientific and technological progress themselves are value-neutral. They are just very good at doing what they do. If you want to do selfish, greedy, intolerant and violent things, scientific technology will provide you with by far the most efficient way of doing so. But if you want to do good, to solve the world’s problems, to progress in the best value-laden sense, once again, there is no better means to those ends than the scientific way.
Richard Dawkins

In a world where people are driven by preconceptions and ideology, science has the potential to look at the data and influence the world to act based not on how the world should work but on how it actually works. It can lead to better decisions, it can make a difference. It can be a voice of reason — something science was supposed to be.

But this requires that science is able to deal with scientific misconduct. And yes, it starts this small. It starts with the misattribution of an idea. If scientists do not deal with tiny mistakes, how can we know that they will deal with major ones?

All processes are prone to mistakes. All groups have to deal with people who play the system for whatever reason. The scientific community should be able to deal with plagiarism, based on the empirical evidence in each case (it’s plagiarism — there is written proof!), and openly. And it needs to do so continuously, because that’s just something that happens in any system where humans work.

If it does so openly, it has the chance of proving its value — by dealing with cases of misconduct. If it tries to downplay them and pretend that this does not happen in science — or in this particular discipline of science — it will be found out as a fraud.

And we all suffer for that.

 

That concludes the series on plagiarism. One positive thing I can say about the person who submitted a badly plagiarized paper to a journal where I was a reviewer — you got me thinking. Still, I would like to … anyway.

If you want to read the rest of the series, I’ve added the links between the postings:

Categories: Community Aspects, Doing Science, Improving your Creativity, Learning to do Science, Realizing Creative Projects, Science, Writing

