Possible blind spots of IRBs (ethics commissions in psychology or medicine) regarding HCI

Resources exist to be consumed. And consumed they will be, if not by this generation then by some future. By what right does this forgotten future seek to deny us our birthright? None I say! Let us take what is ours, chew and eat our fill.
CEO Nwabudike Morgan, «The Ethics of Greed» in Alpha Centauri

Ethics commissions at universities aim to prevent ethical misconduct, esp. in research. Sometimes the ethics commission is rooted in medicine or psychology. These roots have far-reaching consequences when research is evaluated by standards that do not really apply to it.

For example, in medicine it's usually the body that is in focus (even mental health is usually approached via biological or pharmaceutical means). In psychology, it's experience and behavior.

But when it comes to human-computer interaction, the focus is a product. Yes, there is human-centered design, which is, as the name correctly implies, human-centered. There is the evaluation of usability and user experience. Perhaps of learning. Or of fun. But as in studies in psychology, the participants are the basis for generalizations: in psychology, about the population; in HCI, about how the product will fare in everyday use.

And even though studies in psychology and HCI can look similar, there is a crucial difference between the two, especially regarding the course of action afterwards. If you do not like the results, in HCI you can redesign the product. In psychology, well, hopefully you can revise your theories, because redesigning humans is … harder.

But the different focus (body-mind-product) is not the only problem when it comes to ethics commissions evaluating research in HCI. Looking through a few papers and referring back to my lectures on statistics and methods in human-centered design, I think that a «normal» ethics commission can be blind to the following aspects:

  • commercial projects with conflicts of interest
    This can also be a topic with medical products, esp. drugs, but cf. Alice Dreger and her criticism of the corporatization of universities.
  • co-creation conflicts with anonymity
    Participants contribute their creativity, which deserves public recognition (and some participants see it that way, too). This conflicts with the anonymity usually expected in research.
  • casual data/covert observations
    Lots of data are available online (cf. the ethics of Internet research): is it possible to use them?
  • legal issues regarding terms of service
    Not checked by the IRB, but perhaps by the legal department? Cf. the cases of OkCupid profile crawling or Cambridge Analytica.
  • values embedded in artifacts
    Products are used deliberately for social change (e.g., regarding climate change or eating meat), but *any* product has affordances, even without an explicit behavior change intervention. To crystallize the problem: informed consent addresses the study itself, but what if the participant does not agree with the overall goal of the product? Their participation has made it better.
  • nudging, gamification, persuasive technology
    Values are not only embedded, behavior change is the explicit goal; cf. the ethics of persuasive technology. Worst example: dark patterns (see the sketch after this list).
  • societal effects of products/changing socio-technical system
    In general, side effects and long-term effects, unintended uses (cf. the security mindset), and safety-critical products. This includes prediction and social influence (individualized advertising, election interference, etc.), changes in information flow (filter bubbles, censorship, paternalism), control of attention (the user is the product for social media; if the user is offline, the company makes no money), influence via changed reward structures, and mob behavior (e.g., Twitter storms).
  • wasting resources
    Cf. a single-use pregnancy test with the hardware to run «Doom» (IRBs can see «research waste», but can they assess waste in products?).
  • artificial intelligence
    Topic with lots of new ethical issues in itself (e.g., fairness, autonomy/control, transparency, dependability, safety, security) and long-term issues due to model drift, skill degradation, etc.
  • ethical regulations of the profession
    For example, the German Informatics Society (Gesellschaft für Informatik) has its own criteria for professional ethics (different kinds of competence, good judgment, working conditions, courage, social responsibility, etc.), incl. when it comes to surveillance technology.
  • community rules (developer/hacker ethics)
    More bottom-up, community-driven rules of good behavior; they cover the use of technology, freedom of information, rules for writing code, etc.
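
To make the dark-pattern point above a bit more concrete, here is a minimal, purely hypothetical sketch in TypeScript (all names and wording invented for illustration, not taken from any real product): a newsletter checkbox that is pre-ticked and labelled with a confusing double negation, so the path of least resistance is consent.

```typescript
// Hypothetical sketch of a classic dark pattern: the checkbox is pre-ticked and
// the label uses a double negation, so both doing nothing and skimming the text
// end in «consent» to marketing mail.
function buildNewsletterOptIn(): HTMLLabelElement {
  const label = document.createElement("label");
  const checkbox = document.createElement("input");
  checkbox.type = "checkbox";
  checkbox.checked = true; // consent is the default, not a deliberate choice
  label.appendChild(checkbox);
  label.append(
    " Do not uncheck this box if you do not wish to stop receiving offers."
  );
  return label;
}

// An honest alternative would default to unchecked and use plain wording,
// e.g. "Send me the newsletter (optional)."
document.body.appendChild(buildNewsletterOptIn());
```

An ethics commission focused on the study protocol alone would likely never see this kind of design decision, because it lives in the product, not in the consent form.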

And yeah, I wonder how many of these aspects a good ethics commission rooted in medicine or psychology would detect, or whether it would be blind to these (and likely many other) issues.