The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think
Eli Pariser

Ended: Feb. 19, 2013

With Google personalized for everyone, the query “stem cells” might produce diametrically opposed results for scientists who support stem cell research and activists who oppose it. “Proof of climate change” might turn up different results for an environmental activist and an oil company executive. In polls, a huge majority of us assume search engines are unbiased. But that may be just because they’re increasingly biased to share our own views. More and more, your computer monitor is a kind of one-way mirror, reflecting your own interests while algorithmic observers watch what you click.
Democracy requires citizens to see things from one another’s point of view, but instead we’re more and more enclosed in our own bubbles. Democracy requires a reliance on shared facts; instead we’re being offered parallel but separate universes.
It would be one thing if all this customization was just about targeted advertising. But personalization isn’t just shaping what we buy. For a quickly rising percentage of us, personalized news feeds like Facebook are becoming a primary news source—36 percent of Americans under thirty get their news through social networking sites. And Facebook’s popularity is skyrocketing worldwide, with nearly a million more people joining each day. As founder Mark Zuckerberg likes to brag, Facebook may be the biggest source of news in the world (at least for some definitions of “news”).
Left to their own devices, personalization filters serve up a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.
We are predisposed to respond to a pretty narrow set of stimuli—if a piece of news is about sex, power, gossip, violence, celebrity, or humor, we are likely to read it first. This is the content that most easily makes it into the filter bubble. It’s easy to push “Like” and increase the visibility of a friend’s post about finishing a marathon or an instructional article about how to make onion soup. It’s harder to push the “Like” button on an article titled “Darfur sees bloodiest month in two years.” In a personalized world, important but complex or unpleasant issues—the rising prison population, for example, or homelessness—are less likely to come to our attention at all.
If you’re not paying for something, you’re not the customer; you’re the product being sold.
In 1990, a team of researchers at the Xerox Palo Alto Research Center (PARC) applied cybernetic thinking to a new problem. PARC was known for coming up with ideas that were broadly adopted and commercialized by others—the graphical user interface and the mouse, to mention two. And like many cutting-edge technologists at the time, the PARC researchers were early power users of e-mail—they sent and received hundreds of them. E-mail was great, but the downside was quickly obvious. When it costs nothing to send a message to as many people as you like, you can quickly get buried in a flood of useless information. To keep up with the flow, the PARC team started tinkering with a process they called collaborative filtering, which ran in a program called Tapestry. Tapestry tracked how people reacted to the mass e-mails they received—which items they opened, which ones they responded to, and which they deleted—and then used this information to help order the inbox. E-mails that people had engaged with a lot would move to the top of the list; e-mails that were frequently deleted or unopened would go to the bottom. In essence, collaborative filtering was a time saver: Instead of having to sift through the pile of e-mail yourself, you could rely on others to help presift the items you’d received.
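A minimal sketch of the engagement-weighted ranking the passage describes, in Python. The action names, weights, and function are illustrative assumptions, not Tapestry’s actual design:

```python
from collections import defaultdict

# Hypothetical engagement weights: replying signals strong interest,
# opening signals mild interest, deleting signals disinterest.
WEIGHTS = {"opened": 1.0, "replied": 3.0, "deleted": -2.0}

def rank_inbox(messages, reactions):
    """Order messages by how other recipients have reacted to them.

    messages  -- list of message ids in the inbox
    reactions -- (message_id, action) pairs pooled from all users
    """
    scores = defaultdict(float)
    for message_id, action in reactions:
        scores[message_id] += WEIGHTS.get(action, 0.0)
    # Messages people engaged with float to the top; ignored or deleted
    # ones sink to the bottom.
    return sorted(messages, key=lambda m: scores[m], reverse=True)

inbox = ["budget-memo", "all-hands-notes", "printer-joke"]
logged = [("all-hands-notes", "replied"), ("all-hands-notes", "opened"),
          ("budget-memo", "opened"), ("printer-joke", "deleted")]
print(rank_inbox(inbox, logged))  # ['all-hands-notes', 'budget-memo', 'printer-joke']
```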
On one end of the spectrum, he said, is sycophantic personalization—“You’re so great and wonderful, and I’m going to tell you exactly what you want to hear.” On the other end is the parental approach: “I’m going to tell you this whether you want to hear this or not, because you need to know.” Currently, we’re headed in the sycophantic direction. “There will be a long period of adjustment,” says Professor Michael Schudson, “as the separation of church and state is breaking down, so to speak. In moderation, that seems okay, but Gawker’s Big Board is a scary extreme, it’s surrender.”
Google News pays more attention to political news than many of the creators of the filter bubble. After all, it draws in large part on the decisions of professional editors. But even in Google News, stories about Apple trump stories about the war in Afghanistan. I enjoy my iPhone and iPad, but it’s hard to argue that these things are of similar importance to developments in Afghanistan. But this Apple-centric ranking is indicative of what the combination of popular lists and the filter bubble will leave out: Things that are important but complicated. “If traffic ends up guiding coverage,” the Washington Post’s ombudsman writes, “will The Post choose not to pursue some important stories because they’re ‘dull’?”
But at the moment, we’re trading a system with a defined and well-debated sense of its civic responsibilities and roles for one with no sense of ethics. The Big Board is tearing down the wall between editorial decision-making and the business side of the operation. While Google and others are beginning to grapple with the consequences, most personalized filters have no way of prioritizing what really matters but gets fewer clicks. And in the end, “Give the people what they want” is a brittle and shallow civic philosophy.
“To achieve the clearest possible image” of the world, Heuer writes, “analysts need more than information.... They also need to understand the lenses through which this information passes.” Some of these distorting lenses are outside of our heads. Like a biased sample in an experiment, a lopsided selection of data can create the wrong impression: For a number of structural and historical reasons, the CIA record on Nosenko was woefully inadequate when it came to the man’s personal history. And some of them are cognitive processes: We tend to convert “lots of pages of data” into “likely to be true,” for example. When several of them are at work at the same time, it becomes quite difficult to see what’s actually going on—a funhouse mirror reflecting a funhouse mirror reflecting reality. This distorting effect is one of the challenges posed by personalized filters. Like a lens, the filter bubble invisibly transforms the world we experience by controlling what we see and don’t see. It interferes with the interplay between our mental processes and our external environment. In some ways, it can act like a magnifying glass, helpfully expanding our view of a niche area of knowledge. But at the same time, personalized filters limit what we are exposed to and therefore affect the way we think and learn. They can upset the delicate cognitive balance that helps us make good decisions and come up with new ideas. And because creativity is also a result of this interplay between mind and environment, they can get in the way of innovation. If we want to know what the world really looks like, we have to understand how filters shape and skew our view of it.
Human beings may be a walking bundle of miscalculations, contradictions, and irrationalities, but we’re built that way for a reason: The same cognitive processes that lead us down the road to error and tragedy are the root of our intelligence and our ability to cope with and survive in a changing world. We pay attention to our mental processes when they fail, but that distracts us from the fact that most of the time, our brains do amazingly well. The mechanism for this is a cognitive balancing act. Without our ever thinking about it, our brains tread a tightrope between learning too much from the past and incorporating too much new information from the present. The ability to walk this line—to adjust to the demands of different environments and modalities—is one of human cognition’s most astonishing traits. Artificial intelligence has yet to come anywhere close. In two important ways, personalized filters can upset this cognitive balance between strengthening our existing ideas and acquiring new ones. First, the filter bubble surrounds us with ideas with which we’re already familiar (and already agree), making us overconfident in our mental frameworks. Second, it removes from our environment some of the key prompts that make us want to learn. To understand how, we have to look at what’s being balanced in the first place, starting with how we acquire and store information.
The filter bubble tends to dramatically amplify confirmation bias—in a way, it’s designed to. Consuming information that conforms to our ideas of the world is easy and pleasurable; consuming information that challenges us to think in new ways or question our assumptions is frustrating and difficult. This is why partisans of one political stripe tend not to consume the media of another. As a result, an information environment built on click signals will favor content that supports our existing notions about the world over content that challenges them. During the 2008 presidential campaign, for example, rumors swirled persistently that Barack Obama, a practicing Christian, was a follower of Islam. E-mails circulated to millions, offering “proof” of Obama’s “real” religion and reminding voters that Obama spent time in Indonesia and had the middle name Hussein. The Obama campaign fought back on television and encouraged its supporters to set the facts straight. But even a front-page scandal about his Christian pastor, Rev. Jeremiah Wright, was unable to puncture the mythology. Fifteen percent of Americans stubbornly held on to the idea that Obama was a Muslim.
That’s not so surprising—Americans have never been very well informed about our politicians. What’s perplexing is that since the election, the percentage of Americans who hold that belief has nearly doubled, and the increase, according to data collected by the Pew Research Center, has been greatest among people who are college educated. People with some college education were more likely in some cases to believe the story than people with none—a strange state of affairs. Why? According to the New Republic’s Jon Chait, the answer lies with the media: “Partisans are more likely to consume news sources that confirm their ideological beliefs. People with more education are more likely to follow political news. Therefore, people with more education can actually become mis-educated.” And while this phenomenon has always been true, the filter bubble automates it. In the bubble, the proportion of content that validates what you know goes way up.
Personalization is about building an environment that consists entirely of the adjacent unknown—the sports trivia or political punctuation marks that don’t really shake our schemata but feel like new information. The personalized environment is very good at answering the questions we have but not at suggesting questions or problems that are out of our sight altogether. It brings to mind the famous Pablo Picasso quotation: “Computers are useless. They can only give you answers.”
Personalization can get in the way of creativity and innovation in three ways. First, the filter bubble artificially limits the size of our “solution horizon”—the mental space in which we search for solutions to problems. Second, the information environment inside the filter bubble will tend to lack some of the key traits that spur creativity. Creativity is a context-dependent trait: We’re more likely to come up with new ideas in some environments than in others; the contexts that filtering creates aren’t the ones best suited to creative thinking. Finally, the filter bubble encourages a more passive approach to acquiring information, which is at odds with the kind of exploration that leads to discovery. When your doorstep is crowded with salient content, there’s little reason to travel any farther.
Researcher Hans Eysenck has found evidence that the individual differences in how people do this mapping—how they connect concepts together—are the key to creative thought. In Eysenck’s model, creativity is a search for the right set of ideas to combine. At the center of the mental search space are the concepts most directly related to the problem at hand, and as you move outward, you reach ideas that are more tangentially connected. The solution horizon delimits where we stop searching. When we’re instructed to “think outside the box,” the box represents the solution horizon, the limit of the conceptual area that we’re operating in. (Of course, solution horizons that are too wide are a problem, too, because more ideas means exponentially more combinations.)
As it turns out, being around people and ideas unlike oneself is one of the best ways to cultivate this sense of open-mindedness and wide categories. Psychologists Charlan Nemeth and Julianne Kwan discovered that bilinguals are more creative than monolinguals—perhaps because they have to get used to the proposition that things can be viewed in several different ways. Even forty-five minutes of exposure to a different culture can boost creativity: When a group of American students was shown a slideshow about China as opposed to one about the United States, their scores on several creativity tests went up. In companies, the people who interface with multiple units tend to be greater sources of innovation than people who interface only with their own. While nobody knows for certain what causes this effect, it’s likely that foreign ideas help us break open our categories. But the filter bubble isn’t tuned for a diversity of ideas or of people. It’s not designed to introduce us to new cultures. As a result, living inside it, we may miss some of the mental flexibility and openness that contact with difference creates. But perhaps the biggest problem is that the personalized Web encourages us to spend less time in discovery mode in the first place.
This is one other way that personalized filters can interfere with our ability to properly understand the world: They alter our sense of the map. More unsettling, they often remove its blank spots, transforming known unknowns into unknown ones. Traditional, unpersonalized media often offer the promise of representativeness. A newspaper editor isn’t doing his or her job properly unless to some degree the paper is representative of the news of the day. This is one of the ways one can convert an unknown unknown into a known unknown. If you leaf through the paper, dipping into some articles and skipping over most of them, you at least know there are stories, perhaps whole sections, that you passed over. Even if you don’t read the article, you notice the headline about a flood in Pakistan—or maybe you’re just reminded that, yes, there is a Pakistan. In the filter bubble, things look different. You don’t see the things that don’t interest you at all. You’re not even latently aware that there are major events and ideas you’re missing. Nor can you take the links you do see and assess how representative they are without an understanding of what the broader environment from which they were selected looks like. As any statistician will tell you, you can’t tell how biased the sample is from looking at the sample alone: You need something to compare it to. As a last resort, you might look at your selection and ask yourself if it looks like a representative sample. Are there conflicting views? Are there different takes, and different kinds of people reflecting? Even this is a blind alley, however, because with an information set the size of the Internet, you get a kind of fractal diversity: at any level, even within a very narrow information spectrum (atheist goth bowlers, say) there are lots of voices and lots of different takes. We’re never able to experience the whole world at once. But the best information tools give us a sense of where we stand in it—literally, in the case of a library, and figuratively in the case of a newspaper front page.
As law and commerce have caught up with technology, however, the space for anonymity online is shrinking. You can’t hold an anonymous person responsible for his or her actions: Anonymous customers commit fraud, anonymous commenters start flame wars, and anonymous hackers cause trouble. To establish the trust that community and capitalism are built on, you need to know whom you’re dealing with. As a result, there are dozens of companies working on deanonymizing the Web. PeekYou, a firm founded by the creator of RateMyProfessors.com, is patenting ways of connecting online activities done under a pseudonym with the real name of the person involved. Another company, Phorm, helps Internet service providers use a method called “deep packet inspection” to analyze the traffic that flows through their servers; Phorm aims to build nearly comprehensive profiles of each customer to use for advertising and personalized services. And even if ISPs are leery, BlueCava is compiling a database of every computer, smartphone, and online-enabled gadget in the world, which can be tied to the individual people who use them. Even if you’re using the highest privacy settings in your Web browser, in other words, your hardware may soon give you away. These technological developments pave the way for a more persistent kind of personalization than anything we’ve experienced to date. They also mean that we’ll increasingly be forced to trust the companies at the center of this process to properly express and synthesize who we really are. When you meet someone in a bar or a park, you look at how they behave and act and form an impression accordingly. Facebook and the other identity services aim to mediate that process online; if they don’t do it right, things can get fuzzy and distorted. To personalize well, you have to have the right idea of what represents a person.
The logic of the filter bubble today is still fairly rudimentary: People who bought the Iron Man DVD are likely to buy Iron Man II; people who enjoy cookbooks will probably be interested in cookware. But for Dean Eckles, a doctoral student at Stanford and an adviser to Facebook, these simple recommendations are just the beginning. Eckles is interested in means, not ends: He cares less about what types of products you like than which kinds of arguments might cause you to choose one over another. Eckles noticed that when buying products—say, a digital camera—different people respond to different pitches. Some people feel comforted by the fact that an expert or product review site will vouch for the camera. Others prefer to go with the product that’s most popular, or a money-saving deal, or a brand that they know and trust. Some people prefer what Eckles calls “high cognition” arguments—smart, subtle points that require some thinking to get. Others respond better to being hit over the head with a simple message. And while most of us have preferred styles of argument and validation, there are also types of arguments that really turn us off. Some people rush for a deal; others think that the deal means the merchandise is subpar. Just by eliminating the persuasion styles that rub people the wrong way, Eckles found he could increase the effectiveness of marketing materials by 30 to 40 percent. While it’s hard to “jump categories” in products—what clothing you prefer is only slightly related to what books you enjoy—“persuasion profiling” suggests that the kinds of arguments you respond to are highly transferrable from one domain to another. A person who responds to a “get 20% off if you buy NOW” deal for a trip to Bermuda is much more likely than someone who doesn’t to respond to a similar deal for, say, a new laptop. If Eckles is right—and research so far appears to be validating his theory—your “persuasion profile” would have a pretty significant financial value. It’s one thing to know how to pitch products to you in a specific domain; it’s another to be able to improve the hit rate anywhere you go. And once a company like Amazon has figured out your profile by offering you different kinds of deals over time and seeing which ones you responded to, there’s no reason it couldn’t then sell that information to other companies. (The field is so new that it’s not clear if there’s a correlation between persuasion styles and demographic traits, but obviously that could be a shortcut as well.)
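To make the mechanism concrete, here is a small sketch of how a persuasion profile might be learned from observed responses and used to suppress the pitch styles that turn a particular shopper off. The style names, threshold, and class are hypothetical, not Eckles’s actual method:

```python
from collections import defaultdict

# Hypothetical pitch styles of the kind the passage describes.
STYLES = ["expert_review", "popularity", "discount", "brand_trust"]

class PersuasionProfile:
    """Track how often each pitch style leads to a purchase for one user."""

    def __init__(self):
        self.shown = defaultdict(int)
        self.bought = defaultdict(int)

    def record(self, style, purchased):
        self.shown[style] += 1
        if purchased:
            self.bought[style] += 1

    def response_rate(self, style):
        return self.bought[style] / self.shown[style] if self.shown[style] else 0.0

    def best_pitch(self, min_rate=0.05):
        # Drop styles this user has been shown but actively ignores,
        # then pick the strongest of what remains.
        viable = [s for s in STYLES
                  if self.shown[s] == 0 or self.response_rate(s) >= min_rate]
        return max(viable, key=self.response_rate) if viable else None

profile = PersuasionProfile()
profile.record("discount", purchased=True)
profile.record("expert_review", purchased=False)
print(profile.best_pitch())  # 'discount'
```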
In a small town or an apartment building with paper-thin walls, what I know about you is roughly the same as what you know about me. That’s a basis for a social contract, in which we’ll deliberately ignore some of what we know. The new privacyless world does away with that contract. I can know a lot about you without your knowing I know. “There’s an implicit bargain in our behavior,” search expert John Battelle told me, “that we haven’t done the math on.” If Sir Francis Bacon is right that “knowledge is power,” privacy proponent Viktor Mayer-Schonberger writes that what we’re witnessing now is nothing less than a “redistribution of information power from the powerless to the powerful.” It’d be one thing if we all knew everything about each other. It’s another when centralized entities know a lot more about us than we know about each other—and sometimes, more than we know about ourselves. If knowledge is power, then asymmetries in knowledge are asymmetries in power. Google’s famous “Don’t be evil” motto is presumably intended to allay some of these concerns. I once explained to a Google search engineer that while I didn’t think the company was currently evil, it seemed to have at its fingertips everything it needed to do evil if it wished. He smiled broadly. “Right,” he said. “We’re not evil. We try really hard not to be evil. But if we wanted to, man, could we ever!”
But Gerbner, a World War II veteran–turned–communications theorist who became dean of the Annenberg School for Communication, took these shows seriously. Starting in 1969, he began a systematic study of the way TV programming affects how we think about the world. As it turned out, the Starsky and Hutch effect was significant. When you asked TV watchers to estimate the percentage of the adult workforce that was made up of cops, they vastly overestimated it relative to non–TV watchers with the same education and demographic background. Even more troubling, kids who saw a lot of TV violence were much more likely to be worried about real-world violence. Gerbner called this the mean world syndrome: If you grow up in a home where there’s more than, say, three hours of television per day, for all practical purposes, you live in a meaner world—and act accordingly—than your next-door neighbor who lives in the same place but watches less television. “You know, who tells the stories of a culture really governs human behavior,” Gerbner later said.
That Facebook chose Like instead of, say, Important is a small design decision with far-reaching consequences: The stories that get the most attention on Facebook are the stories that get the most Likes, and the stories that get the most Likes are, well, more likable. Facebook is hardly the only filtering service that will tend toward an antiseptically friendly world. As Eckles pointed out to me, even Twitter, which has a reputation for putting filtering in the hands of its users, has this tendency. Twitter users see most of the tweets of the folks they follow, but if my friend is having an exchange with someone I don’t follow, it doesn’t show up. The intent is entirely innocuous: Twitter is trying not to inundate me with conversations I’m not interested in. But the result is that conversations between my friends (who will tend to be like me) are overrepresented, while conversations that could introduce me to new ideas are obscured.
The good news about postmaterial politics is that as countries become wealthier, they’ll likely become more tolerant, and their citizens will be more self-expressive. But there’s a dark side to it too. Ted Nordhaus, a student of Inglehart’s who focuses on postmaterialism in the environmental movement, told me that “the shadow that comes with postmaterialism is profound self-involvement.... We lose all perspective on the collective endeavors that have made the extraordinary lives we live possible.” In a postmaterial world where your highest task is to express yourself, the public infrastructure that supports this kind of expression falls out of the picture. But while we can lose sight of our shared problems, they don’t lose sight of us.
Ultimately, democracy works only if we citizens are capable of thinking beyond our narrow self-interest. But to do so, we need a shared view of the world we cohabit. We need to come into contact with other people’s lives and needs and desires. The filter bubble pushes us in the opposite direction—it creates the impression that our narrow self-interest is all that exists. And while this is great for getting people to shop online, it’s not great for getting people to make better decisions together.
When I first called Google’s PR department, I explained that I wanted to know how Google thought about its enormous curatorial power. What was the code of ethics, I asked, that Google uses to determine what to show to whom? The public affairs manager on the other end of the phone sounded confused. “You mean privacy?” No, I said, I wanted to know how Google thought about its editorial power. “Oh,” he replied, “we’re just trying to give people the most relevant information.” Indeed, he seemed to imply, no ethics were involved or required. I persisted: If a 9/11 conspiracy theorist searches for “9/11,” was it Google’s job to show him the Popular Mechanics article that debunks his theory or the movie that supports it? Which was more relevant? “I see what you’re getting at,” he said. “It’s an interesting question.” But I never got a clear answer.
Facebook’s flaws and its founder’s ill-conceived views about identity aren’t the result of an antisocial, vindictive mind-set. More likely, they’re a natural consequence of the odd situation successful start-ups like Facebook create, in which a twenty-something guy finds himself, in a matter of five years, in a position of great authority over the doings of 500 million human beings. One day you’re making sand castles; the next, your sand castle is worth $50 billion and everyone in the world wants a piece of it. Of course, there are far worse business-world personality types with whom to entrust the fabric of our social lives. With a reverence for rules, geeks tend to be principled—to carefully consider and then follow the rules they set for themselves and to stick to them under social pressure. “They have a somewhat skeptical view of authority,” Stanford professor Terry Winograd said of his former students Page and Brin. “If they see the world going one way and they believe it should be going the other way, they are more likely to say ‘the rest of the world is wrong’ rather than ‘maybe we should reconsider.’”
Thiel has penthouse apartments in San Francisco and New York and a silver gullwing McLaren, the fastest car in the world. He also owns about 5 percent of Facebook. Despite his boyish, handsome features, Thiel often looks as though he’s brooding. Or maybe he’s just lost in thought. In his teenage years, he was a high-ranking chess player but stopped short of becoming a grandmaster. “Taken too far, chess can become an alternate reality in which one loses sight of the real world,” he told an interviewer for Fortune. “My chess ability was roughly at the limit. Had I become any stronger, there would have been some massive tradeoffs with success in other domains in life.” In high school, he read Solzhenitsyn’s Gulag Archipelago and J. R. R. Tolkien’s Lord of the Rings series, visions of corrupt and totalitarian power. At Stanford, he started a libertarian newspaper, the Stanford Review, to preach the gospel of freedom.
In 1998, Thiel cofounded the company that would become PayPal, which he sold to eBay for $1.5 billion in 2002. Today Thiel runs a multi-billion-dollar hedge fund, Clarium, and a venture capital firm, Founders Fund, which invests in software companies throughout Silicon Valley. Thiel has made some legendarily good picks—among them, Facebook, in which he was the first outside investor. (He’s also made some bad ones—Clarium has lost billions in the last few years.) But for Thiel, investing is more than a day job. It’s an avocation. “By starting a new Internet business, an entrepreneur may create a new world,” Thiel says. “The hope of the Internet is that these new worlds will impact and force change on the existing social and political order.”
“What I learned being in the ad business,” he says, “is that people can just go a long time without asking themselves what they should put their talent towards. You’re playing a game, and you know the point of the game is to win. But what game are you playing? What are you optimizing for? If you’re playing the game of trying to get the maximum downloads of your app, you’ll make the better farting app.” “We don’t need more things,” he says. “People are more magical than iPads! Your relationships are not media. Your friendships are not media. Love is not media.” In his low-key way, Heiferman is getting worked up. Evangelizing this view of technology—that it ought to do something meaningful to make our lives more fulfilling and to solve the big problems we face—isn’t as easy as it might seem. In addition to MeetUp itself, Scott founded the New York Tech MeetUp, a group of ten thousand software engineers who meet every month to preview new Web sites. At a recent meeting, Scott made an impassioned plea for the assembled group to focus on solving the problems that matter—education, health care, the environment. It didn’t get a very good reception—in fact, he was just about booed off the stage. “‘We just want to do cool stuff,’ was the attitude,” Scott told me later. “‘Don’t bother me with this politics stuff.’”
For better or worse, programmers and engineers are in a position of remarkable power to shape the future of our society. They can use this power to help solve the big problems of our age—poverty, education, disease—or they can, as Heiferman says, make a better farting app. They’re entitled to do either, of course. But it’s disingenuous to have it both ways—to claim your enterprise is great and good when it suits you and claim you’re a mere sugar-water salesman when it doesn’t.
Actually, building an informed and engaged citizenry—in which people have the tools to help manage not only their own lives but their own communities and societies—is one of the most fascinating and important engineering challenges. Solving it will take a great deal of technical skill mixed with humanistic understanding—a real feat. We need more programmers to go beyond Google’s famous slogan, “Don’t be evil.” We need engineers who will do good.
The challenge, Calo says, is that it’s hard to remember that humanlike software and hardware aren’t human at all. Advertars or robotic assistants may have access to the whole set of personal data that exists online—they may know more about you, more precisely, than your best friend. And as persuasion and personality profiling get better, they’ll develop an increasingly nuanced sense of how to shift your behaviors. Which brings us back to the advertar. In an attention-limited world, lifelike, and especially humanlike, signals stand out—we’re hardwired to pay attention to them. It’s far easier to ignore a billboard than an attractive person calling your name. And as a result, advertisers may well decide to invest in technology that allows them to insert human advertisements into social spaces. The next attractive man or woman who friends you on Facebook could turn out to be an ad for a bag of chips. As Calo puts it, “people are not evolved to twentieth-century technology. The human brain evolved in a world in which only humans exhibited rich social behaviors, and a world in which all perceived objects were real physical objects.” Now all that’s shifting.
Tag a few pictures with Picasa, Google’s photo-management tool, and the software can already pick out who’s who in a collection of photos. And according to Eric Schmidt, the same is true of Google’s cache of images from the entire Web. “Give us 14 images of you,” he told a crowd of technologists at the Techonomy Conference in 2010, “and we can find other images of you with ninety-five percent accuracy.” As of the end of 2010, however, this feature isn’t available in Google Image Search. Face.com, an Israeli start-up, may offer the service before the search giant does. It’s not every day that a company develops a highly useful and world-changing technology and then waits for a competitor to launch it first. But Google has good reason to be concerned: The ability to search by face will shatter many of our cultural illusions about privacy and anonymity. Many of us will be caught in flagrante delicto. It’s not just that your friends (and enemies) will be able to easily find pictures other people have taken of you—as if the whole Internet has been tagged on Facebook. They will also be able to find pictures other people took of other people, in which you happen to be walking by or smoking a cigarette in the background.
If the lyrics aren’t exactly subtle about the dangers of crossing the border, that’s the point. Migra Corridos was produced by a contractor working for the U.S. Border Patrol, as part of a campaign to stem the tide of immigrants along the border. The song is a prime example of a growing trend in what marketers delicately call “advertiser-funded media,” or AFM. Product placement has been in vogue for decades, and AFM is its natural next step. Advertisers love product placement because in a media environment in which it’s harder and harder to get people to pay attention to anything—especially ads—it provides a kind of loophole. You can’t fast-forward past product placement. You can’t miss it without missing some of the actual content. AFM is just a natural extension of the same logic: Media have always been vehicles for selling products, the argument goes, so why not just cut out the middleman and have product makers produce the content themselves?
In 2010, Walmart and Procter & Gamble announced a partnership to produce Secrets of the Mountain and The Jensen Project, family movies that will feature characters using the companies’ products throughout. Michael Bay, the director of Transformers, has started a new company called the Institute, whose tagline is “Where Brand Science Meets Great Storytelling.” Hansel and Gretel in 3-D, its first feature production, will be specially crafted to provide product-placement hooks throughout. Now that the video-game industry is far more profitable than the movie industry, it provides a huge opportunity for in-game advertising and product placement as well. Massive Incorporated, a game advertising platform acquired by Microsoft for $200 million to $400 million, has placed ads on in-game billboards and city walls for companies like Cingular and McDonald’s, and has the capacity to track which individual users saw which advertisements for how long. Splinter Cell, a game by Ubisoft, works placement for products like Axe deodorant into the architecture of the cityscape that characters travel through.
If the product placement and advertiser-funded media industries continue to grow, personalization will offer whole new vistas of possibility. Why name-drop Lipslicks when your reader is more likely to buy Cover Girl? Why have a video-game chase scene through Macy’s when the guy holding the controller is more of an Old Navy type? When software engineers talk about architecture, they’re usually talking metaphorically. But as people spend more of their time in virtual, personalizable…
Why should Web sites look the same to every viewer or customer? Different people don’t respond only to different products—they respond to different design sensibilities, different colors, even different types of product descriptions. It’s easy enough to imagine a Walmart Web site with softened edges and warm pastels for some customers and a hard-edged, minimalist design for others. And once that capacity exists, why stick with just one design per customer? Maybe it’s best to show me one side of the Walmart brand when I’m angry and another when I’m happy. This kind of approach isn’t a futuristic fantasy. A team led by John Hauser at MIT’s business school has developed the basic techniques for what they call Web site morphing, in which a shopping site analyzes users’ clicks to figure out what kinds of information and styles of presentation are most effective and then adjusts the layout to suit a particular user’s cognitive style. Hauser estimates that Web sites…
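A rough sketch of the click-driven adjustment the passage describes, written as a simple explore-and-exploit loop. Hauser’s actual morphing system is more statistically sophisticated; the layout names and parameters here are invented for illustration:

```python
import random

# Candidate "morphs": hypothetical presentation styles a site might serve.
MORPHS = ["warm_pastel", "minimalist", "data_dense"]

class MorphingSite:
    """Serve the layout with the best click-through rate so far,
    while occasionally trying the alternatives."""

    def __init__(self, explore_rate=0.1):
        self.explore_rate = explore_rate
        self.visits = {m: 0 for m in MORPHS}
        self.clicks = {m: 0 for m in MORPHS}

    def choose_morph(self):
        if random.random() < self.explore_rate:
            return random.choice(MORPHS)          # explore a random layout
        return max(MORPHS, key=self._click_rate)  # exploit the best so far

    def record(self, morph, clicked):
        self.visits[morph] += 1
        if clicked:
            self.clicks[morph] += 1

    def _click_rate(self, morph):
        return self.clicks[morph] / self.visits[morph] if self.visits[morph] else 0.0

site = MorphingSite()
layout = site.choose_morph()   # pick a layout for this visitor
site.record(layout, clicked=True)
```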
As personal data become more and more valuable, the behavioral data market described in chapter 1 is likely to explode. When a clothing company determines that knowing your favorite color produces a $5 increase in sales, it has an economic basis for pricing that data point—and for other Web sites to find reasons to ask you. (While OkCupid is mum about its business model, it likely rests on offering advertisers the ability to target its users based on the hundreds of personal questions they answer.) While many of these data acquisitions will be legitimate, some won’t be. Data are uniquely suited to gray-market activities, because they need not carry any trace of where they have come from or where they have been along the way. Wright calls this data laundering, and it’s already well under way: Spyware and spam companies sell questionably derived data to middlemen, who then add it to the databases powering the marketing campaigns of major corporations.
It’s fair to guess that the technology of the future will work about as well as the technology of the past—which is to say, well enough, but not perfectly. There will be bugs. There will be dislocations and annoyances. There will be breakdowns that cause us to question whether the whole system was worth it in the first place. And we’ll live with the threat that systems made to support us will be turned against us—that a clever hacker who cracks the baby monitor now has a surveillance device, that someone who can interfere with what we see can expose us to danger. The more power we have over our own environments, the more power someone who assumes the controls has over us. That is why it’s worth keeping the basic logic of these systems in mind: You don’t get to create your world on your own. You live in an equilibrium between your own desires and what the market will bear. And while in many cases this provides for healthier, happier lives, it also provides for the commercialization of everything—even of our sensory apparatus itself. There are few things uglier to contemplate than AugCog-enabled ads that escalate until they seize control of your attention. We’re compelled to return to Jaron Lanier’s question: For whom do these technologies work? If history is any guide, we may not be the primary customer. And as technology gets better and better at directing our attention, we need to watch closely what it is directing our attention toward.
For some of the “identity cascade” problems discussed in chapter 5, regularly erasing the cookies your Internet browser uses to identify who you are is a partial cure. Most browsers these days make erasing cookies pretty simple—you just select Options or Preferences and then choose Erase cookies. And many personalized ad networks are offering consumers the option to opt out. I’m posting an updated and more detailed list of places to opt out on the Web site for this book, www.thefilterbubble.com.
And as I discussed in chapter 5, personalization algorithms can cause identity loops, in which what the code knows about you constructs your media environment, and your media environment helps to shape your future preferences. This is an avoidable problem, but it requires crafting an algorithm that prioritizes “falsifiability,” that is, an algorithm that aims to disprove its idea of who you are. (If Amazon harbors a hunch that you’re a crime novel reader, for example, it could actively present you with choices from other genres to fill out its sense of who you are.)
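A minimal sketch of such a falsifiability-minded recommender, assuming a fixed share of each batch is reserved for probing genres outside the inferred profile. The function and catalog below are hypothetical, not Amazon’s actual system:

```python
import random

def falsifiable_recommendations(inferred_genre, catalog, n=10, probe_share=0.2):
    """Recommend mostly from the genre the system believes the user likes,
    but reserve a few slots for other genres to test (and possibly
    disprove) that belief.

    catalog -- dict mapping genre -> list of titles
    """
    probes = max(1, int(n * probe_share))
    favorites = catalog[inferred_genre]
    core = random.sample(favorites, min(n - probes, len(favorites)))
    others = [t for g, titles in catalog.items() if g != inferred_genre for t in titles]
    return core + random.sample(others, min(probes, len(others)))

catalog = {
    "crime": ["The Big Sleep", "Gone Girl", "In Cold Blood", "The Maltese Falcon"],
    "history": ["SPQR", "The Guns of August"],
    "science": ["The Selfish Gene", "Cosmos"],
}
print(falsifiable_recommendations("crime", catalog, n=5))
```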
Imagine for a moment that next to each Like button on Facebook was an Important button. You could tag items with one or the other or both. And Facebook could draw on a mix of both signals—what people like, and what they think really matters—to populate and personalize your news feed. You’d have to bet that news about Pakistan would be seen more often—even accounting for everyone’s quite subjective definition of what really matters. Collaborative filtering doesn’t have to lead to compulsive media: The whole game is in what values the filters seek to pull out. Alternately, Google or Facebook could place a slider bar running from “only stuff I like” to “stuff other people like that I’ll probably hate” at the top of search results and the News Feed, allowing users to set their own balance between tight personalization and a more diverse information flow. This approach would have two benefits: It would make clear that there’s personalization going on, and it would place it more firmly in the user’s control.
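A small sketch of what that blended, slider-controlled ranking could look like; the field names and weighting scheme are assumptions for illustration, not Facebook’s or Google’s actual code:

```python
def rank_feed(posts, slider=0.5):
    """Rank posts by blending likability with importance.

    posts  -- dicts with 'title', 'likes', and 'importants' counts
    slider -- 0.0 ranks purely by Likes, 1.0 purely by Important votes
    """
    def score(post):
        return (1 - slider) * post["likes"] + slider * post["importants"]
    return sorted(posts, key=score, reverse=True)

feed = [
    {"title": "Friend finishes marathon", "likes": 240, "importants": 12},
    {"title": "Flooding displaces millions in Pakistan", "likes": 30, "importants": 180},
]
# Slide toward "what matters" and the Pakistan story rises to the top.
print([p["title"] for p in rank_feed(feed, slider=0.8)])
```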
There’s one more thing the engineers of the filter bubble can do. They can solve for serendipity, by designing filtering systems to expose people to topics outside their normal experience. This will often be in tension with pure optimization in the short term, because a personalization system with an element of randomness will (by definition) get fewer clicks. But as the problems of personalization become better known, it may be a good move in the long run—consumers may choose systems that are good at introducing them to new topics. Perhaps what we need is a kind of anti-Netflix Prize—a Serendipity Prize for systems that are the best at holding readers’ attention while introducing them to new topics and ideas.
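One minimal way a feed could solve for serendipity, assuming a fixed share of slots is set aside for topics the reader has never engaged with; the function below is purely illustrative:

```python
import random

def serendipitous_feed(personalized, unfamiliar_pool, n=10, serendipity=0.2):
    """Fill most of the feed from the personalized ranking, but reserve a
    few slots for stories on topics the user has never clicked on."""
    wildcards = int(n * serendipity)
    feed = personalized[: n - wildcards]
    feed += random.sample(unfamiliar_pool, min(wildcards, len(unfamiliar_pool)))
    random.shuffle(feed)  # don't bury the unfamiliar items at the bottom
    return feed
```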