Laypeople’s access to scientific research when participating in research

“Personal questions don’t bother me. I just lie.”
Richard Fish in “Ally McBeal”

I work as a psychologist (researcher, not therapist), and psychology is a discipline that relies heavily on questionnaires. Questionnaires are easy and cheap to administer, the ideal research tool in terms of efficiency: send out a link and you (can) get a lot of respondents.

But questionnaires have a major, well-established and well-ignored weakness: reactivity.

If you give people a questionnaire, they know that you want to measure something, and that knowledge might change what is measured. People might want to give a good impression, which can mean, among other things, that they downplay “bad” behaviors, embellish “good” ones, or agree with a statement even though they actually disagree with it.

Unfortunately, many questionnaires make it fairly easy to “fake” responses. What is worse, the Internet allows participants to easily find out what is measured.

I noticed this when I was filling out a questionnaire on the relationship between certain “life-style behaviors” and mental illness. To be fair, the questionnaire was fairly open about the fact that it dealt with mental illness. But it did not give the names of the scales, nor how the items related to them.

While answering the questions I had a certain impression of what the items on that page were measuring, and I wanted to find out whether I was right. So I copied the first question from the questionnaire into Google.

“I have saved up so many things that they get in the way.”

I used quotation marks to search for this exact sentence, not just for pages where these words appear somewhere. I also made sure to remove the leading question number.

And the first hits were for the revised version of the Obsessive-Compulsive Inventory.

[Image: Google search results for the first question in an online questionnaire.]

The first hit (http://www.caleblack.com/psy5960_files/OCI-R.pdf) also thankfully provided me with the scale and a short description, including how to calculate the results and the cut-off values.

Administration & Scoring

The OCI-R is a short version of the OCI (Foa, Kozak, Salkovskis, Coles, & Amir, 1998) and is a self-report scale for assessing symptoms of Obsessive-Compulsive Disorder (OCD). It consists of 18 questions that a person endorses on a 5-point Likert scale.

Scores are generated by adding the item scores. The possible range of scores is 0-72. Mean score for persons with OCD is 28.0 (SD = 13.53). Recommended cutoff score is 21, with scores at or above this level indicating the likely presence of OCD.

Reference:

Foa, E.B., Huppert, J.D., Leiberg, S., Hajcak, G., Langner, R., et al. (2002). The Obsessive-Compulsive Inventory: Development and validation of a short version. Psychological Assessment, 14, 485-496.

Information about the scale from http://www.caleblack.com/psy5960_files/OCI-R.pdf
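
Just to make concrete how little room for interpretation is left once the scoring rules are public: the quoted description translates directly into a few lines of code. This is a minimal sketch in TypeScript; the function and variable names are my own, not part of any official scoring tool, and it only assumes the 0-4 Likert coding and the cutoff of 21 described above.

```typescript
// Minimal sketch of the OCI-R scoring rule quoted above.
// Names are illustrative, not taken from an official scoring tool.

const OCI_R_ITEM_COUNT = 18;
const OCI_R_CUTOFF = 21; // recommended cutoff from the quoted description

// Each response is coded 0-4 on the 5-point Likert scale,
// so the total ranges from 0 to 72.
function scoreOciR(responses: number[]): { total: number; aboveCutoff: boolean } {
  if (responses.length !== OCI_R_ITEM_COUNT) {
    throw new Error(`Expected ${OCI_R_ITEM_COUNT} responses, got ${responses.length}`);
  }
  const total = responses.reduce((sum, item) => sum + item, 0);
  return { total, aboveCutoff: total >= OCI_R_CUTOFF };
}

// Example: mostly low endorsements, total 18, below the cutoff of 21.
console.log(scoreOciR([1, 0, 2, 1, 1, 0, 3, 1, 0, 2, 1, 1, 0, 1, 2, 1, 0, 1]));
```

In other words, a respondent who has seen this PDF knows exactly which answers keep the total below (or push it above) the cutoff.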

Very useful but also very scary when you think about the validity of the research results.

There is already a discussion in psychology about laypeople’s access to original scientific research. Use Google or Google Scholar and you can access many preprint versions of original research. This raises a lot of questions, for example about laypeople’s ability to deal with these findings and interpret them correctly.

Note that this is not academic elitism. I am a psychologist and I regularly read original research, and I still have difficulties understanding research in areas that are, for me, more remote. I know a lot of the standard methods in psychology, but sometimes I stumble upon unfamiliar ways to analyze data that take me a while to work through. Science, actual science, is confusing, rarely without contradictions, rarely without flaws that can be criticized. It is not pretty and easy (this TwistedDoodles strip captures it nicely). That is simply because science is a creative endeavor, and the social sciences deal with the most complicated subject we know of: a human being (interacting with other human beings).

But access to published research is only one consequence of our information society. Another issue is access to the measures and scales themselves, sometimes published within this research, sometimes available as user-friendly two-page PDFs.

Psychology (and other disciplines) relies too much on questionnaires, and I think this will become more and more of a problem. And while some researchers might be angry about this posting, I would like to point out that security by obscurity is a bad way to deal with this issue, because it simply does not work anymore. Laypeople can easily find out which questionnaires are used and what they measure. Even if they do not want to use that information to fake the results, knowing about the scales will likely influence them. I think better ways to deal with this problem are, for example, to log whether a person leaves the browser window, the amount of time needed between questions, and so on (a rough sketch of what such logging could look like follows below).
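
As a rough sketch of what such logging could look like in a browser-based questionnaire: the event names below (visibilitychange, performance.now) are standard browser APIs, but the data structure and function names are purely illustrative assumptions, not an existing library.

```typescript
// Minimal sketch of paradata logging for a browser-based questionnaire.
// visibilitychange and performance.now() are standard browser APIs;
// everything else (names, event shape) is illustrative.

type ParadataEvent = {
  type: 'focus_lost' | 'focus_regained' | 'item_answered';
  itemId?: string;
  elapsedMs?: number;
  timestamp: number;
};

const paradata: ParadataEvent[] = [];
let lastItemShownAt = performance.now();

// Did the respondent switch tabs or minimize the window,
// e.g. to look up an item in a search engine?
document.addEventListener('visibilitychange', () => {
  paradata.push({
    type: document.hidden ? 'focus_lost' : 'focus_regained',
    timestamp: Date.now(),
  });
});

// Call this when a respondent answers an item, to record the response latency.
function recordAnswer(itemId: string): void {
  const now = performance.now();
  paradata.push({
    type: 'item_answered',
    itemId,
    elapsedMs: Math.round(now - lastItemShownAt),
    timestamp: Date.now(),
  });
  lastItemShownAt = now;
}

// The paradata array could later be submitted together with the responses,
// e.g. to flag implausibly fast answers or focus losses mid-questionnaire.
```

Such paradata does not prove that someone looked up the scale, but it at least gives researchers a way to flag suspicious response patterns instead of pretending the problem does not exist.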

And whenever possible, use behavioral data that is harder to influence.