Economic Scene: The Facebook Fallacy: Privacy Is Up to You


Photo: Tourists at Facebook’s headquarters on April 10, the day its chief executive, Mark Zuckerberg, testified at a Senate hearing. Researchers say his pledge to give users more say over the use of their data is undercut by a phenomenon known as the “control paradox.” Credit: Jim Wilson/The New York Times

Was Mark Zuckerberg pulling their leg?

As Facebook’s co-founder and chief executive parried questions from members of Congress about how the social network would protect its users’ privacy, he returned time and again to what probably sounded like an unimpeachable proposition.

By providing its users with greater and more transparent controls over the personal data they share and how it is used for targeted advertising, he insisted, Facebook could empower them to make their own call and decide how much privacy they were willing to put on the block.

He must be kidding.

As Mr. Zuckerberg surely knows, providing a greater sense of control over their personal data won’t make Facebook users more cautious. It will instead encourage them to share more. This, of course, will produce more data for Facebook to mine to its own financial advantage.

“Disingenuous is the adjective I had in my mind,” said Alessandro Acquisti, a leading expert on privacy-related behavior at Carnegie Mellon University, when I called to chat about how Mr. Zuckerberg’s testimony meshed with his research.

“Fifteen years ago it would have been legitimate to propose this argument,” he added. “But it is no longer legitimate to ignore the behavioral problems and propose simply more transparency and controls.”

Professor Acquisti and two colleagues, Laura Brandimarte and the behavioral economist George Loewenstein, published research on this behavior nearly six years ago. “Providing users of modern information-sharing technologies with more granular privacy controls may lead them to share more sensitive information with larger, and possibly riskier, audiences,” they concluded.

The phenomenon even has a name: the “control paradox.”

“Privacy control settings give people more rope to hang themselves,” Professor Loewenstein told me. “Facebook has figured this out, so they give you incredibly granular controls.”

This paradox is hardly the only psychological quirk for the social network to exploit. Consider default settings. Tons of research in behavioral economics has found that people tend to stick to the default setting of whatever is offered to them, even when they could change it easily. This applies to the share of their paycheck that will be deposited every month in a 401(k) retirement plan or to the amount of personal information they will share online.

“Facebook is acutely aware of this,” Professor Loewenstein told me. In 2005, its default settings shared most profile fields with, at most, friends of friends. Nothing was shared by default with the full internet. By 2010, however, likes, name, gender, picture and a lot of other things were shared with everybody online. “Facebook changed the defaults because it appreciated their power,” Professor Loewenstein added.

It is time Congress appreciated that power, too.

The question for members of Congress goes beyond how much Facebook and others may be exploiting our inability to rationally assess the pros and cons of sharing information — profiting from the difficulty we have measuring the immediate reward of the cute puppy video against the more distant risk of having our data sloshing around the internet for years.

As we devote more of our lives to online experiences, while offering data about ourselves in exchange for information, entertainment or whatever, the critical question is whether, given the tools, we can be trusted to manage the experience. The increasing body of research into how we behave online suggests not.

An experiment by Susan Athey of Stanford University’s Graduate School of Business, along with Christian Catalini and Catherine Tucker of the Sloan School of Management at the Massachusetts Institute of Technology, found that people who profess concern about privacy will nonetheless hand over their friends’ email addresses in exchange for some pizza. They also found that giving consumers reassuring, though irrelevant, information about their ability to protect their privacy makes them less likely to avoid surveillance.

Another experiment revealed that people are more willing to come clean about their engagement in illicit or questionable behavior when they believe others have done so, too. Warning consumers about possible privacy risks can encourage them to be more careful, as we might expect. But people can react counterintuitively to perceived security risks.

When people were exposed to one of three websites that asked embarrassing questions like “Have you ever tried to peek at someone else’s email without them knowing?” the most dangerous-looking site, decorated with a horned devil and the words “How BAD Are U?”, drew by far the highest rate of admissions.

Those in the industry often argue that people don’t really care about their privacy — that they may seem concerned when they answer surveys, but still routinely accept cookies and consent to have their data harvested in exchange for cool online experiences.

Professor Acquisti thinks this is a fallacy. The cognitive hurdles to managing our privacy online are simply too steep.

This is all pretty novel. The idea of online privacy didn’t exist a generation ago. While we are good at handling our privacy in the offline world, lowering our voices or closing the curtains as the occasion may warrant, there are no cues online to alert us to a potential privacy invasion. It seems foolhardy to think we could determine the boundaries of data collection on our own.

Even if we were to know precisely what information companies like Facebook have about us and how it will be used, which we don’t, it would be hard for us to assess potential harms. Could we face higher prices online because Amazon has a precise grasp of our price sensitivities? Might our online identity discourage banks from giving us a loan? What else could happen? How does the risk stack up against the value of a targeted ad, or a friend’s birthday reminder?

Members of Congress have mostly let market forces prevail online, unfettered by government meddling. Privacy protection in the internet economy has relied on the belief that consumers will make rational choices. Give them a simple dial to manage their preferences — and accurate information, in a timely fashion and a big font, about what they are sharing online — and they will make the right calls to avoid discrimination, protect themselves from predatory behavior and thrive.

Europe’s stringent new privacy protection law, which Facebook has promised to apply in the United States, may do better than the American system of disclosure and consent. Data collectors in Europe will have to get explicit consent from users before harvesting their data, rather than simply offering them a chance to opt out. Still, the European system also relies mostly on faith that consumers will make rational choices.

The more that psychologists and behavioral economists study psychological biases and quirks, the clearer it seems that rational choices alone won’t work. “I don’t think any kind of disclosure or opt in or opt out is going to protect us from our worst instincts,” Professor Loewenstein argued.

What to do? Professor Acquisti suggests flipping the burden of proof. Today, the case for privacy regulation rests on consumers proving that data collection is harmful. Why not instead ask the big online platforms, like Facebook, to prove they can’t work without it? If reducing data collection imposes a cost, we could figure out who bears it: consumers, advertisers or Facebook’s bottom line. That could help set societywide boundaries about what data to collect and what to leave alone.

It would no longer be up to confused users to protect themselves.

Email: eporter@nytimes.com; Twitter: @portereduardo
