You are right on target when you say that mad scientists have a total disregard for the well-being of others. We don’t want to spread evil; we just see no point in bothering to spread good.
Richard M. Mathews
In an online discussion, one person characterized the Milgram Experiment and the Stanford Prison Experiment as “abusive in the worst way, causing lifelong physical and psychological trauma”, adding that they “were conducted without informed consent, & the reason that Institutional Review Boards were established in Universities and Hospitals.”
If you don’t know the two studies, you can find descriptions of both on Wikipedia (Milgram Experiment and Stanford Prison Experiment). I like the caption of the Wikipedia illustration of the Milgram Experiment: “The experimenter convinces the subject (“Teacher”) to give what he believes are painful electric shocks to another subject, who is actually an actor (“Learner”). Many subjects continued to give shocks despite pleas of mercy from the actors.”
Both studies are quite famous in psychology — or rather quite infamous. When I read ethical guidelines for psychologists, I sometimes get the impression they were written with Milgram in mind.
But I also think that the two studies were severely misunderstood. And given their infamy, it’s one of these cases where you cannot say that the emperor has no clothes. If you rationally consider the issue, people look at you like you eat babies for breakfast. But still, either you take rationality and scientific thinking seriously, or you don’t.
So I wrote a reply — and I quite liked my comment. Given that I was a member of the ethics commission at the institute I worked for, I had rather good notes on the Milgram study. After all, it was rather influential when it comes to ethics, and I wanted to develop my own perspective on it, even though a differentiated perspective (one that also considers the benefits and contributions) would likely get you kicked off an institutional review board.
I’ll reprint the comment with small edits here (so much for my forum anonymity, ah well …):
Did the Milgram Experiment and the Stanford Prison Experiment cause lifelong physical and psychological trauma? What are your sources, esp. regarding the physical trauma (first time I’ve ever heard about it).
I did not follow up that much on the Stanford Prison “experiment” (study, really — and don’t get me started on the abysmal “movie based on a true story”). But as for Milgram, I concur with his 1964 article, in which he addresses Baumrind’s objections from the same year. Of the 92% of the participants (“subjects” at the time) who returned the questionnaire, 84% said they were glad to have participated, 15% were neutral, and only 1.3% were sorry (0.8%) or very sorry (0.5%).
In general, I think both studies were very important in recognizing the situational factors that make people obey (incl. giving likely lethal electric shocks in Milgram) and “forget” their freedom to leave (Stanford). Considering the background of the Second World War and the view that there was something like an authoritarian personality susceptible to blindly following orders (supposedly dominant in the German population), both studies provided much-needed perspective: “normal” people can do “inhumane” things. And Milgram, in his 1963 paper, goes into some of the factors that explain it:
- “The experiment is sponsored by and takes place on the grounds of an institution of unimpeachable reputation, Yale University”
- “The experiment is, on the face of it, designed to attain a worthy purpose”
- “The subject perceives that the victim has voluntarily submitted to the authority system of the experimenter”
- “The subject, too, has entered the experiment voluntarily, and perceives himself under obligation to aid the experimenter”
- “From the subject’s standpoint, the fact that he is the teacher and the other man the learner is purely a chance consequence” (draw was fixed, but participant did not know that)
- “ambiguity with regard to the prerogatives of a psychologist and the corresponding rights of his subject”
- “The subjects are assured that the shocks administered to the subject are “painful but not dangerous.””
- “Through Shock Level 20 the victim continues to provide answers on the signal box. The subject may construe this as a sign that the victim is still willing to “play the game.””
BTW, Milgram did many things right. After all, he asked experts prior to the study what they expected to happen, and he did extensive debriefings. The replies (“Fourteen Yale seniors, all psychology majors, were provided with a detailed description of the experimental situation. They were asked to reflect carefully on it, and to predict the behavior of 100 hypothetical subjects.”) show the need for the study:
“There was considerable agreement among the respondents on the expected behavior of hypothetical subjects. All respondents predicted that only an insignificant minority would go through to the end of the shock series. (The estimates ranged from 0 to 3%; i.e., the most “pessimistic” member of the class predicted that of 100 persons, 3 would continue through to the most potent shock available on the shock generator — 450 volts.)”
Compare this to the actual results of that study. The first participants to stop were the 5 who quit at 300 volts (the point when the “victim kicks on the wall and no longer provides answers to the teacher’s multiple-choice questions”), then 4 at 315, 2 at 330, and 1 each at 345, 360, and 375. The remaining 26(!) of the 40 participants went all the way to 450 volts.
The power of the situational factors at work, previously (IMO) quite underestimated.
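The breakoff numbers are easy to sanity-check. A minimal sketch (the distribution below is my reading of Table 2 in the 1963 paper):

```python
# Breakoff points from Milgram (1963), Table 2: how many of the 40
# participants stopped at each shock level (my reading of the table).
breakoffs = {300: 5, 315: 4, 330: 2, 345: 1, 360: 1, 375: 1, 450: 26}

total = sum(breakoffs.values())
obedient = breakoffs[450]       # went all the way to 450 volts
defiant = total - obedient      # broke off somewhere before the end

print(f"participants: {total}")                                # 40
print(f"fully obedient: {obedient} ({obedient / total:.0%})")  # 26 (65%)
print(f"defiant: {defiant} ({defiant / total:.0%})")           # 14 (35%)
```

The 14 defiant participants out of 40 are the “minority of about 1/3” I mention below — everyone else obeyed to the end.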
I can understand that Milgram is used as the bogeyman in psychological research. But I think much of it is hyperbole. The studies were an important contribution to science and (esp. considering the lack of an IRB) done quite well regarding ethical aspects. My main objection would be that the experimenter told the participant he had no choice, thereby denying the participant the (now always available) option to quit the study. But as Milgram correctly noted, the experimenter had no way to enforce this. That’s what made the study so questionable — it exploited the very freedom you must guarantee today. Yet I think it was worth it. And it’s important to note that some participants (albeit a minority of about 1/3) did refuse to carry on (but went up to at least 300 volts!).
Sure, given how a distorted retelling of Milgram is used as “everything you shouldn’t do” in ethics, it’s highly unlikely that a study like it will ever be done again. (One reason why people should always read the original papers: much, much is lost in books and memes. And the 1963 and 1964 papers are really worth reading. Milgram writes really well. Even Baumrind is interesting, albeit misguided in my view.)
And it’s unfortunate that they are not repeated (with new implementations), because, as written, Milgram highlighted how easily good men can obey “bad” orders, even amid an intense inner conflict. I can understand the participants who learned a lot about themselves and — hopefully — won’t underestimate such situations in the future. In fact, I think they might be glad to have dodged a bullet here. In the study, the “victim” did not get any electric shocks — it was all fake. But the actions of the participants would have had serious consequences if they had been real. How’s that for a learning experience? In this view, I consider the Milgram experiment (considering the whole series, in which he compared different conditions) a litmus test of a society. One whose results we seem too afraid to learn, so we condemn the study instead.
My guess is that today we would see pretty much the same results (provided the paradigm wasn’t recognizable to the participants). But people, most likely those who did not participate, would start a hashtag afterwards, complaining about it. 😉
Like I said, I don’t know that much about Stanford, but no-one was physically hurt if I remember correctly, and Zimbardo stopped the study when a participant said he really wanted to leave but couldn’t. I think the more relevant issue here would be one mentioned by a colleague during the study: “What’s the control group?”
Anyway, I don’t think they were abusive, on the contrary. I think they were really enlightening and useful.
Sources (you should find them for free on the web):
- Milgram, S. (1963). Behavioral Study of Obedience. Journal of Abnormal and Social Psychology, 67, 371–378.
- Baumrind, D. (1964). Some Thoughts on Ethics of Research After Reading Milgram’s ‘Behavioral Study of Obedience’. American Psychologist, 19(6), 421–423. doi:10.1037/h0040128
- Milgram, S. (1964). Issues in the study of obedience: A reply to Baumrind. American Psychologist, 19(11), 848–852. doi:10.1037/h0044954
Looking back on the comment, I think that people have become very distress-averse.
Sure, I like to avoid distress like any other rational being. But I wonder whether we overemphasize avoiding anything that might be potentially problematic. Some things are very stressful, but also very rewarding for a person, especially under the right guidance. Take a PhD thesis, for example … usually extremely stressful. But the stress is part of the learning experience. BTW, stress, not utter despair — see this interesting presentation.
Just because the participants of Milgram’s study showed signs of extreme distress does not make the study bad. Especially given that they did get an extensive debriefing. Of course, they did not volunteer to experience this amount of distress, which is a huge ethical issue. And in today’s university context of trigger warnings … fat chance of doing a similar study.
I also think that Milgram did something very right — he did the study with “everyday people”, not students. He wanted to be able to generalize his findings. That’s something that’s also rather rare with today’s studies. No wonder, students do many things ‘for free’ (read: for course credit).
Anyway, what do you think about the two studies? Leave a comment below.