SciTech

Pugwash: Facebook users become test subjects

Credit: Eunice Mok

In 2012, Facebook conducted a study to determine how the contents of users’ news feeds affect their behavior. Some users were shown a more positive news feed and others a more negative one; the study found that users’ posts became more positive in the positive condition and more negative in the negative condition.

When this study was published in the Proceedings of the National Academy of Sciences of the United States of America earlier this year, it caused a major outcry among Facebook users. In response, Pugwash sought to determine whether the experiment was ethical, and whether Facebook or the government should do anything about it.

Members first explored the consequences of an experiment like this one. They determined that the Facebook users were affected in an apparently minor way: their emotions were nudged to be somewhat more positive or negative. While a person with depression might have been pushed toward lasting harm by the added negativity, by and large this kind of emotional impact isn’t outside the scope of daily life. More important, however, is that this experiment set a precedent that it is acceptable for a corporation like Facebook to perform experiments on its users and publish the results.

It’s easy to imagine research that Facebook might conduct — secretly or openly — in its own interest that would be considerably more harmful, such as lying to users to see how they respond. Should corporations be able to conduct this type of research on their own? And should public funds be allocated to research associated with such corporate experimentation?

The public funding question is somewhat easier to answer, since there are already established ethical guidelines in the famous Belmont Report, which stipulates the conditions under which the government can fund human subjects research. To a certain extent, the public’s trust in scientific experimentation is already founded on this document, and therefore it makes sense to hold it as sacred unless something is found to be seriously wrong with it.

For an experiment like this one — which involves actually manipulating a human being — the Belmont Report requires ‘informed consent’: that the user should know that he or she will undergo experimentation, and that he or she can opt out if desired. In this case, the researchers almost had informed consent, since Facebook’s terms of service do say that users agree to participate in research.

However, the terms of service also say that the research will only be used internally, and the vast majority of Facebook users never read the terms of service anyway. Under the Belmont Report, it is the researcher’s responsibility to make sure the participant understands what he or she is consenting to; a statement buried in a huge document that the participant signs is not enough. In this sense, Facebook and these researchers likely overstepped their bounds. However, the majority opinion in Pugwash seemed to be that the infraction was minor, especially since the government contributed little funding to the work (the actual experiment was funded by Facebook).

Whether Facebook and other Internet giants should be allowed to carry out such research is another matter. This kind of experimentation is largely unprecedented in corporate circles.

Fifty years ago, corporations had no way to precisely monitor users’ interactions with their products, so they had little to gain by altering the experience for some users. Of course, companies did experiment with human subjects — for instance, the drug industry couldn’t exist without human drug trials — but in those experiments there was never any ambiguity about whether a given person was participating, and the trials themselves are heavily regulated. Those regulations, however, don’t apply to Internet-based research.

For Internet experimentation, the prevailing interpretation of the law seems to be that the news feed is Facebook’s product, and so it’s up to the company what goes in it. The assumption here is that text and images are not dangerous or unpredictable, at least not to the extent that untested drugs are. If people feel Facebook is degrading their experience, they can simply leave.

Thus, it can be argued that it’s in Facebook’s best interest not to do things that may harm or anger its users. But is this really true?

In an experiment like this one, it wasn’t clear whether the negativity came from Facebook or from the users’ friends. Facebook can easily be deceptive in other ways in such experiments, and this undermines a basic assumption of a free market: that consumers make informed choices. Furthermore, it isn’t clear that users can simply walk away from Facebook anymore. Much of what human society calls networking has moved to Facebook, to the extent that it’s a serious competitive disadvantage not to use it — for example, during job searches.

Overall, though, regulating Facebook seems like a heavy-handed solution to a problem that is currently minor. Ultimately, Pugwash did not come to a firm conclusion about how such regulation could be implemented.