Ethics between Medicine, Psychology, and Media and Computer Science

I don’t trust a man who talks about ethics when he is picking my pocket. But if he is acting in his own self-interest and says so, I have usually been able to work out some way to do business with him.
Excerpt from the Notebooks of Lazarus Long in «Time Enough For Love» by Robert A. Heinlein

I’m working at a university with a strong medical history. Well, it looks like it started as a clinic which sorta grew into a university with other disciplines as well. I have a background in psychology, and now work in media and computer science.

The fun part of this situation is seeing how ethics are treated differently by different disciplines. And they should be. And yeah, even though there is criticism of ethics committees (e.g., “Why Ethics Committees Are Unethical”), with the usual problem of the fox guarding the hen-house and a couple of other issues, there is a need for them.

But that need differs between disciplines.

Currently, my view on ethics committees — also known as institutional review boards (IRBs), independent ethics committees (IECs), ethical review boards (ERBs), or research ethics boards (REBs) — is that they should take into account what is actually measured.

When it comes to medicine, it’s often the human body. The will of the patient is secondary here: once a treatment has been administered, it’s damn hard to undo. It’s an actual nightmare scenario — no matter what your conscious mind wants, you’re in for a ride you cannot control. At all. So, the level of oversight should be high, because no matter what you want, you cannot change it later.

When it comes to psychology, it’s often the human mind. Sure, there’s an overlap with medicine when it comes to psychotherapy. But psychologists usually don’t influence the human body; it’s the mind, cognition, they are after. And this means a participant retains some control. And yeah, there were experiments or studies that played with that control, like the Milgram experiments (if you take the whole array of experiments), or the Stanford Prison Experiment. But usually, people can stop “delivering data” and leave. The ability to stop participation without any negative consequences is one of the cornerstones of the ethical code for psychologists.

When it comes to media and computer science, it’s an artifact. One thing I usually tell my students when they evaluate their apps is that they should make damn sure the participants of their evaluation know that they themselves are not being evaluated — the artifacts, i.e. the apps, are. Not only because you get more accurate results this way, but because it’s true. When users fail, it’s usually because the apps aren’t good enough. The apps are evaluated, not the users’ intelligence (mind), or (the effects of a drug on) the human body. Heck, not even a mixture, like the effect of an intervention on the human mind (which includes non-clinical effects like general cognition).

So, provided the participants in each of the three domains are correctly informed, I think the standards should vary … strongly. And yeah, sometimes there’s an overlap. For example, when an app for depressive patients (media and computer science = app) uses psychological theories (psychology = mind) to influence the well-being of patients.

But overall, I think, different standards should exist. And a good ethics committee will notice these differences, and act accordingly.