By now you've read, possibly several times, about "the Facebook study." You're probably familiar with the details: 689,003 people whose news feeds had been manipulated to feature more negative or more positive emotional content were monitored for shifts in the emotional content of their own posts. At the time (mid-January 2012, in case you've been wondering why your friends all seemed happier/sadder for that one particular week), nobody knew it was happening. The results of the study were published in PNAS last week, and the authors found an interesting result: manipulating the positive or negative content of the news feed did indeed have an effect on the emotional content of users' own posts (Kramer et al. 2014).

Most of us had an instinctive response to the details of this study: it felt wrong. My own Facebook news feed positively exploded with complaints: how dare they! What gives them the right?! And then, more incisively: how could this have met ethical standards? The short answer is that it almost certainly didn't, for a couple of very specific reasons, but much of the commentary and coverage flooding in over the last few days glosses over the major ethical failure at hand: the failure to procure informed consent.

Let me back up for a minute: research projects involving human subjects are subject to approval by an ethics committee known as an Institutional Review Board (IRB); institutions that receive federal funding (e.g. universities) are required to have IRBs.* IRBs are governed by the principles of the Belmont Report of 1979, which sought to clarify the basic ethical principles of human subjects research, as well as the rights of study participants and the responsibilities of researchers. The Belmont Report covers both the principles themselves and guidance on applying them, including three key requirements: appropriate selection of subjects, assessment of risks and benefits, and the informed consent of study subjects.


The requirement of informed consent is justified as follows: "Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them." The report outlines the logistics of obtaining informed consent, including: 1) that participants be given adequate information about the study, including its risks and benefits; 2) that this information be presented in a comprehensible manner; and 3) that subjects consent voluntarily, without coercion. Importantly, subjects must also be informed that they can withdraw from the study at any time; otherwise participation becomes coercive rather than voluntary.

The ethical failure of the Facebook experiment (or Kramer et al. 2014, if you're into being technical) is not in the design of the study itself. It is in the failure of the researchers to secure the informed consent of participants. While the argument can be (and has been) made that agreeing to Facebook's terms essentially allows the company to manipulate you any which way it wants, this agreement does not (and could not possibly) provide all the information required for informed consent. Because of this, I am frankly shocked that an IRB approved this research (which, according to the editor of PNAS, did happen) without stipulating that subjects provide informed consent and be given the ability to withdraw. While Facebook users could technically deactivate their profiles if they experienced emotional consequences from the study, it is hard to argue that this constitutes a true "ability to withdraw." For one thing, if you don't know you've been enrolled in a study, you can't make the conscious choice to leave it; for another, withdrawing from a study should never significantly disrupt your ordinary life outside the study, or else your participation is, again, essentially coerced. In short, it is glaringly obvious that the 689,003 people enrolled in this study in no way provided informed consent as standard requirements define it, nor were they given any realistic option to withdraw.

This seems to be a strikingly large liability gap, and a major oversight. As a commenter pointed out yesterday, if this study was even partly federally funded (as it may have been), the failure to secure informed consent would stand in clear violation of federal law. And even setting aside this study's particular funding: while Facebook itself receives no federal monies, researchers at institutions that do are bound by the Common Rule. Clearly, this project should in no way have been exempted from the informed consent requirement. Then again, IRBs are not always the incorruptible ethical regulators they are intended to be.


So why does it matter? Well, as a scientist, it matters to me that people understand that the fault here is not with the study design. Creating different emotional conditions in study subjects is hardly a new tactic — it's quite common, and there is nothing inherently wrong with it, as long as subjects consent and understand that they have the ability to withdraw. In fact, this particular study is very interesting, which is part of the reason I find this ethical misstep upsetting — there is great value in studying how our emotions and behavior are affected by virtual networks. We're spending an increasing amount of time using social media, and it is important to study how that affects us.**

But as a study participant, you always have rights, and it is the duty of the researchers enrolling you to inform you of those rights and to secure your consent to be involved in their study. I hope that the scrutiny around this particular study encourages researchers to tread more carefully in their use and manipulation of people in online settings; after all, the very results of this controversial experiment suggest that such manipulation can have real consequences.


*Research involving animal subjects is governed by separate committees called Institutional Animal Care and Use Committees (IACUCs).


**Please note that it is not the intent of this article to critique the specifics of this study's findings, but to examine the ethical issues around study methods.

ETA: Important update. Cornell and Susan Fiske have confirmed that this study received IRB approval as a "pre-existing dataset." I won't go into the full nuance of what that supposedly means here, but suffice it to say that because the researchers were apparently involved in the design of the study itself, this almost certainly constitutes an unethical end-run around IRB approval. Thanks to MountainMomma for linking me to this update.