Why Facebook Wants to Control Your Mind

By now, if you have a Facebook account, or if you’ve just been on the internet in the past week, you are probably aware that Facebook conducted a study to see if it could construct users’ news feeds in order to affect their emotions. You’ve probably also seen the range of responses, from shocked—shocked, to outraged, to apologetic (and more apologetic, and even more apologetic), and everything in between.

But for all of the explaining and hand-wringing, there has been little discussion of why Facebook conducted this research—or where this work is going. No matter how much furor this incident kicks up, it won’t be the last time an experiment like this happens. For Facebook to survive, it has to continue to find more and more ways to do what it has always done: change how we behave, and make money when we do. And Facebook isn’t alone. Google, data aggregators, and a zillion other tech and not-so-tech companies need to do it, too. Non-profits and governments may even need to get on the bandwagon.

Why?

First off, why does Facebook do anything? It is worth repeating, since there seems to be recurring collective amnesia, or willful denial, about the fact: Facebook is a business. It isn’t a charity, or a co-op, or a public utility, or some other entity that has a fiduciary or otherwise chartered interest in putting something other than profit first in its decision-making. It’s a publicly traded business that answers to investors. It’s a business that makes its money selling ads—and it keeps its investors happy when it does.

Facebook is getting better at increasing its ad revenue, and better still at increasing revenue from ads on mobile devices, where more and more users are spending more and more of their time. In fact, Facebook is starting to erode Google’s mobile market share. But these are top-level numbers that only tell part of the story. Facebook’s ad revenues—really, any online ad delivery system’s ad revenues—are roughly based on three factors: how many users it has, the demographics of those users, and how successful it is at getting those users to click on ads.
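To make the arithmetic concrete, here is a back-of-the-envelope sketch of how those three factors combine. Demographics show up as the average price advertisers will pay for a click from a given audience. Every number below is a made-up illustration, not Facebook’s actual data.

```python
# Back-of-the-envelope model of the three revenue factors described above.
# All figures are hypothetical illustrations, not Facebook's actuals.

def estimated_ad_revenue(active_users, ads_seen_per_user,
                         click_through_rate, avg_price_per_click):
    """Revenue ~ audience size x exposure x how often users click x what a click is worth."""
    impressions = active_users * ads_seen_per_user
    clicks = impressions * click_through_rate
    return clicks * avg_price_per_click

# At Facebook's scale, even a tiny improvement in click-through rate moves real money:
baseline = estimated_ad_revenue(1_230_000_000, 20, 0.0010, 0.50)
improved = estimated_ad_revenue(1_230_000_000, 20, 0.0011, 0.50)
print(f"baseline: ${baseline:,.0f}   with 10% higher CTR: ${improved:,.0f}")
```

With a billion-plus users, the first factor is close to maxed out, which is exactly why the click-through term is where the leverage is.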

In terms of how many users it has, it’s a good news, bad news situation for Facebook: Facebook has more than 1.23 billion active users, but there are some indications that Facebook is seeing challenges attracting new users. That, coupled with challenges it has attracting and retaining users in some key demographics, means that Facebook must continue to focus on the third piece: getting users to click on ads. So Facebook is always trying to develop new ways to make ads more effective—and to extract additional value out of every user to do so.

To this point, it is also worth remembering the idea that “when something online is free, you’re not the customer, you’re the product.” As such, every single part of the Facebook experience is designed to get you to do something that is valuable for Facebook. This is part of the trade we made when we stopped paying $9.99 a year for those magazine subscriptions. Instead of being able to read those magazines—or newspapers, or watch those TV shows and movies—in the privacy of our own homes, free from prying eyes, we are now observed every microsecond we spend reading, watching, clicking, or not clicking. Even if you just turned on your computer or your phone, logged in to Facebook, and walked away, that is an action that is being observed. And these actions—or at least the data or metadata collected about them—are not ours, by virtue of the fact that we perform them in a system where we are essentially borrowing time and resources.

But all that said, what does Facebook get out of influencing our emotions?

Better advertising is targeted advertising. What makes me buy may be different from what makes you buy. And not just in general, but product by product. Maybe we both buy more in general when we’re happy, maybe we don’t, but there are certainly specific emotions that trigger specific purchasing actions. Had a hard day at work? Maybe you go to the bar, maybe you go to the gym—and maybe you log on to Zappos and buy a pair of shoes or two. If you’re that shoe-shopper, and Facebook can detect that you’re having that hard day, it’s going to make more money if it “wins” the “bet” it makes with Zappos: “we bet that this user is more likely to click through an ad from you right now, and that this click is going to result in a sale, and if we’re right, you’re going to pay us a little more for that click.” And really, that’s a win-win-win for everybody: Facebook gets more ad revenue, Zappos sells more products, and you get more shoes.
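Here is a minimal sketch of that bet, assuming a hypothetical “hard day” signal and made-up probabilities and prices. Real ad auctions are far more complicated, but the expected-value logic is the same.

```python
# Minimal sketch of the "bet" described above. The mood signal, probabilities,
# and dollar figures are all hypothetical illustrations.

def impression_value(p_click, p_sale_given_click, profit_per_sale):
    """What showing the ad is worth to the advertiser, in expectation."""
    return p_click * p_sale_given_click * profit_per_sale

# The same shoe-shopper, on an ordinary day vs. a detected "hard day":
ordinary = impression_value(p_click=0.002, p_sale_given_click=0.05, profit_per_sale=30.00)
hard_day = impression_value(p_click=0.006, p_sale_given_click=0.10, profit_per_sale=30.00)

print(f"ordinary day: ${ordinary:.4f} per impression")
print(f"hard day:     ${hard_day:.4f} per impression")
# If Facebook can detect the hard day, it can charge more for that click and
# Zappos still comes out ahead: the win-win-win described above.
```

The point of the sketch is that the ad slot is worth several times more when the emotional state is known, so both sides of the bet have an incentive to detect it. Or, following the logic of the study, to create it.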

This is all really just an advanced version of the current state of the art. But where is it going? To see some of the possibilities, you need to step out of the typical consumer frame that most of us use when thinking about, well, consumption. We tend to think of consumption as an activity that exists to satisfy our needs, and as an activity in which we have agency in our choices. I am hungry, I have a need for food, and I can choose among different food options to satisfy that hunger. Advertisers work for food producers to try to influence my choices, but the point of the system is to get me fed. But this isn’t necessarily the frame for what the food producer—or any producer—is doing. Producers are making predictions—otherwise known as bets—on what you and I might want, and producing based on those predictions. When they’re wrong—and sometimes even when they’re right—they may find that they have too much or too little of what we want, when we want it. If so, they can buy ads to try to convince us that we really do want what they have. If that doesn’t work, they can change prices or create other incentives, and then make ads that tell us about those changes and try again to get us to want what they have.

But what if they also know that making us happy or sad or angry or envious would make us more likely to want what they have? From the producers’ and advertisers’ points of view, this would mean that if a producer found it had made a bad prediction about how much of a thing we might want, it could ask advertising platforms—like Facebook—to try to create conditions under which demand is more likely.

So what, you may be saying. That’s just advertising on steroids. No big deal. And anyway, I don’t even go on Facebook. Maybe I do, but I never click on the ads. So what?

But what if we aren’t talking about Facebook, or about shoes, or about an In-N-Out burger? What if we were talking about a purchase that takes bargaining, like buying a car? Since our phones and our apps know where we are, why couldn’t a car dealer buy a feed on our phone calibrated to put us in a better mood to make a deal?

What if we were talking about something that wasn’t a purchase at all? We spend so much time thinking about the internet as a sealed box where we buy and sell things that we often forget it can shape how we think about things that happen in the real world. If a polling firm could influence our emotions before calling us up to participate in a survey, could it artificially skew the results of the poll? As we saw in the last presidential election, skewed polls can lead people to make big mistakes.

What if we were talking about something like your health care and a health care intervention? Let’s say your health care provider has decided that it’s cheaper to care for you in the long run if you watch a series of free videos about your diet, attend a series of free diet seminars, or even make a series of paid appointments with a dietician. And it finds out that you’re more likely to respond to suggestions to do these things when you’re happy or sad or angry or envious. How does it know? Because you’ve downloaded a free health care app, you haven’t read the TOS, and your activity has been monitored. Or because you signed up for that app with your Facebook or Google account and your activity in those systems has been analyzed. Is this OK? Isn’t the health care provider just looking out for your health while trying to keep costs down?

This kind of influence is something that Richard Thaler and Cass Sunstein talk about in their 2008 book Nudge: Improving Decisions About Health, Wealth, and Happiness—a book you see in a lot of startup, and not-so-startup, technology offices. Some see it as a methodology for helping people overcome their own inability to make choices in their best interest; others see it as flat-out paternalism.

Any way you see it, it isn’t going away.
