Privacy, Security, Ethics and Science (esp. Psychology)

“All knowledge is worth having.”
From the “Kushiel’s Legacy” series by Jacqueline Carey

Given all the talk about Snowden and the NSA, and (the lack of) privacy — it’s no wonder that discussions about privacy crop up in different settings. Recently, we had a discussion about our own attitudes toward privacy in the workgroup I’m in. We are (more or less) media psychologists, so we research the potentials and “challenges” of media. I’m not going to discuss what was said — this blog is not about my work. However, I would like to make a few points about privacy from my perspective as a (social) scientist. And yup, scientists love data and knowledge — and are very critical of limitations to research. These are positions I held both before and after the discussion.

  1. (Social) Scientists are not able to gauge the privacy/security implications of the tools they research/develop
    I think one of the easiest and frequently most devastating mistakes is to assume that you understand the implications of a technology. I think that psychologists are especially prone to falling prey to a Dunning-Kruger effect here — they use computers, they (help) develop tools, they do research on them, yet they are not computer scientists or (even better) security experts. They do not possess a “security mindset” (“looking at the world in terms of attacks and defenses, in particular of looking for ways that things could go wrong”, to quote Halderman’s “Securing Digital Democracy” MOOC). They often do not know about (or understand) the lower-level processes involved in running the tools they develop. For example, while they might assume that a tool is anonymous because it does not save any logfiles, the server it runs on usually does save logfiles (see the first sketch after this list). And given that the IT department knows which computer has which IP address, they can easily find out who did what (which can be very useful if a colleague is a mobbing [censored] who orders things on the Internet in your name).
  2. Forget Kant (or the Golden Rule)
    There is this nice “feel good” attitude of “Act only in accordance with that maxim through which you can at the same time will that it become a universal law.”, or “Do not impose on others what you do not wish for yourself.”, or “Therefore all things whatsoever ye would that men should do to you, do ye even so to them.” However, I think this is an extremely arrogant and egocentric rule. Why? Because tastes differ. There are people who do things for fun that others would never do in their darkest nightmares — from jumping off bridges (granted, with a bungee cord) to experiencing excruciating pain (or pleasure) as a “recreational activity” (to use one example, many people hate needles — they even fear inoculations that are good for them — now look at this Wikipedia entry, or better yet, [NSFW] start a Google Image search on the topic). The thought that all people want the same things is — in my opinion — a very narrow one. Many (more cognitively oriented) researchers would never do things that the majority of people love to do — e.g., put personal information on publicly accessible networks. And after all, nobody has elected scientists as guardians (or censors) of what should be allowed for others. We have no public legitimization. So don’t impose your moral standards on others — don’t be a goody two-shoes — let others come to their own conclusions.
  3. Technology is value-neutral
    As much as I have a problem with Dawkins’ grandfather-like attitude that takes anything but pure rational logic as a personal affront, he was right on target when it comes to science: “Scientific and technological progress themselves are value-neutral. They are just very good at doing what they do. If you want to do selfish, greedy, intolerant and violent things, scientific technology will provide you with by far the most efficient way of doing so. But if you want to do good, to solve the world’s problems, to progress in the best value-laden sense, once again, there is no better means to those ends than the scientific way.” (Richard Dawkins)
    Thing is, technology is value-neutral. It’s a tool that can be used in different ways, from very beneficial to very ugly. It’s absurd to expect a clear conclusion about a tool’s ethical status in itself — it depends on the way the tool is used. And if you fear misuse of something you develop/research, you had better stop developing/researching anything, because everything can be misused.
  4. You have no idea how humans will game your tool — or the system behind it
    A bit like “If you make something idiot-proof, someone will just make a better idiot.” — human ingenuity is remarkable. If you put a structure in place, there will be people who game the system — and they are very creative about it. Sure, when you deploy your tool you can state the overall goal behind it and warn everyone that whoever finds a shortcut that reaches the objective while missing the overall goal will be disqualified, but you cannot anticipate everyone. So there will always be a risk that tools get used in ways you have not foreseen.
  5. If scientists cannot foresee the gems in the data, how could they warn users about them?
    One of the most frightening and ‘interesting’ moments in the (social) sciences is when you learn about a correlation in the data that allows you to (more or less accurately) predict a very interesting behavior based on seemingly innocuous information. Sure, correlation is not causation, you need to find these relationships again in an independent sample, and a probabilistic science like psychology will — given the way research is done at the moment — not be able to deterministically predict the behavior of an individual. But still, it’s interesting to see certain correlations. Point being, it’s hard to know which kinds of predictions are possible with a given set of data points (see the second sketch after this list). So how can social scientists foresee what evil can be done with the data they assess?
    With all these things said … there are other issues …
  6. Data comes with risks, but also with opportunities
    There is a lot of talk about the risks that data capturing brings, positions like “only save the data that is absolutely necessary for that service/intervention”. However, I think this is much too one-sided. Sure, data has risks: it can be very embarrassing for some religious types if it becomes known that subscriptions to the adult channel skyrocketed after they took over the hotel for a convention (yup), or that readers who bought a book by a very conservative writer also bought books about “the transformation of kinetic energy into pain” (to quote the title of an old MinD-Mag article). But there is also knowledge in that data — granted, not exactly in these examples, but in gathering data in general. You often cannot foresee which kinds of relationships are in the data and how they can be used. Without doing the research on it publicly (in the sense that the aggregated results are published, not the individuals exposed), this research will happen in secret, done by people who do not care about laws (e.g., the NSA and the like, but also private individuals and companies). By examining the potential — in terms of both risks and opportunities — it’s possible to educate the public and implement security mechanisms.
  7. Looking at and reporting what people really do allows us to make better decisions — and improve our lives
    There is a lot of hypocrisy going on — in science, but also in everyday life (of which, come to think of it, science is a part). Looking at what people really do, at what really happens, gives us an opportunity to create better mental models of the people we live with. We can act more in accordance with what we really want and how we really think. This is a problem for all ideologies — which think in terms of “people should” or “people must”, but a boon for pragmatists who first want to know where we are, what the consequences of our cognitions, emotions, and actions are, and how to deal with them efficiently. And personally, I’m tired of ideology — in all its forms.
  8. Science has responsibility — and you cannot suspend your own moral judgment
    The previous points might make it sound as if there were nothing to be done, but science does have responsibility for its creations. Indeed, “in dreams begin responsibilities” (W. B. Yeats) — and as countless fictional and (unfortunately) real examples show, you cannot escape that responsibility (e.g., “The Physicists”; or the inability to escape responsibility by “suspending one’s moral judgment”). Scientists need to point out possible risks, or actually occurring damages, when they notice them. At the very least, they should blow the whistle.
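
To make the logfile point from item 1 concrete, here is a minimal sketch in Python (a hypothetical example, not taken from the discussion): even Python’s built-in HTTP server records each client’s IP address for every request by default, although the application code never asks for it.

    # Minimal sketch (hypothetical example): even Python's built-in HTTP
    # server logs every client's IP address by default. The application
    # code never asks for it, yet the server layer records it anyway.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # BaseHTTPRequestHandler.log_message() writes a line like
    #   192.0.2.10 - - [12/Mar/2014 10:15:03] "GET /survey.html HTTP/1.1" 200 -
    # to stderr for each request. "Our tool does not save any logfiles"
    # therefore says nothing about what the layers below the tool record.
    server = HTTPServer(("", 8000), SimpleHTTPRequestHandler)
    server.serve_forever()

Combine a log like that with the IT department’s mapping of IP addresses to machines, and the “anonymous” tool is anonymous no more.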
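
And to make the point from item 5 concrete, here is a toy sketch with fabricated data (the variables are invented purely for illustration): a seemingly innocuous usage metric is constructed to correlate with a sensitive one, and a simple correlation already yields a rough predictor that nobody planned for when the metric was logged.

    # Toy sketch with fabricated data: a seemingly innocuous metric can
    # turn out to correlate with, and thus partly predict, a sensitive one.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    late_night_logins = rng.poisson(3.0, n)                     # innocuous metric
    stress = 0.6 * late_night_logins + rng.normal(0.0, 1.0, n)  # hidden link

    # Pearson correlation: r comes out around 0.7 here, so the login count
    # alone is already a rough predictor of the sensitive variable.
    r = np.corrcoef(late_night_logins, stress)[0, 1]
    print(f"r = {r:.2f}")

Whoever decided to log the login counts could hardly have foreseen this use of them.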

But these are — a few of — my attitudes toward privacy, security, ethics and science. Personally, I would like to continue the discussion. It’s a shame that there was no greater push toward examining these questions — it would have been nice to develop a general statement. But that would have required interdisciplinary cooperation: at the very least with computer scientists who have a background in computer security, with legal scholars, and perhaps even with disciplines like philosophy.

Hmm … it would have been nice, and useful, but … anyway … I’m still interested in discussing it. With all the ways technology is permeating our lives, esp. mobile media, which is my area of research, these are really interesting times. But the statements above are my attitudes — what are yours?

Drop me a line or write a comment (click on the posting header — the comments are only available when you access the posting itself).

4 Comments

  1. Hello,

    Kant’s “Kategorischer Imperativ” is not the same as the “Golden Rule”.

    Your criticism applies to the “Golden Rule” but not to Kant’s “Kategorischer Imperativ” because:

    “Act only in accordance with that maxim through which you can at the same time will that it become a universal law.”

    is meant to be a universal law based on pure reason. For example: if I am a passionate liar, it is not reasonable for me to lie, because if everybody lies (lying as a universal law), the difference between speaking the truth and lying collapses – and lying becomes meaningless. So even for a passionate liar it is not reasonable to lie.

    Another example is that Kant states that you should help other people because it is reasonable to do so in accordance with the “Kategorischer Imperativ”, and your motivation should not be friendship or pity.

    However, all this is still problematic (because it is based on the assumption that there is something like a universal concept of reason, and because it implies that you are not allowed to tell even a white lie), but it is not “extremely arrogant and egocentric”.

    Best regards!
    Carsten

  2. Hoi Carsten,

    thank you for the clarification — and yup, I treated them as the same, as this frequently happens in discussions (among non-philosophy-inclined people ;-)). Hmm, so what about someone who always wants to tell the truth? That would collapse the difference between speaking the truth and lying as well … hmm, I see a problem with that law in almost all cases — you need to differentiate in your actions, nothing is always good or always bad. Hmmm …

    But still, I stand by my assertion that deciding for others what is good/should be done — either because the person him/herself likes it or because this person thinks it should be made a universal law — is “extremely arrogant and egocentric”. It’s the person him/herself who claims that something should be made a universal law, or (in the case of the Golden Rule): I want this, so you want it too. It smacks of paternalism.

    Best regards

    Daniel

    “I stand by my assertion that deciding for others what is good/should be done — either because the person him/herself likes it or because this person thinks it should be made a universal law — is ‘extremely arrogant and egocentric’.”

    Ok, I think I’ve got your point. I think you could make peace with Kant, because all that Kant gives you via his “Kategorischer Imperativ” is a way to figure out by *yourself* what is right and what is wrong. He doesn’t cast a universal law upon everybody which everybody has to follow. In moral questions there are no experts – there is only your own reasoning.

    And let’s get the liar case clear: if a person A is successfully lying to a person B, B is not meant to know that A is lying. But if it were a universal law that everybody should lie, then B would expect a lie and A would not be able to lie successfully. So A could not reasonably want to lie. This is not a problem for: everybody should tell the truth.

    Ah, Kant’s “Kategorischer Imperativ” is for the person him-/herself … okay, I didn’t know that. Hmmm, which is probably an issue with these rules — how they were originally meant versus how they are applied. I stepped into that trap myself 😉
