The following is excerpted from BROKEN CODE: Inside Facebook and the Fight to Expose Its Harmful Secrets by Jeff Horwitz. Reprinted by permission of Doubleday, an imprint of The Knopf Doubleday Publishing Group, a division of Penguin Random House LLC. Copyright © 2023 by Jeff Horwitz.

In 2006, the U.S. patent office received a filing for “an automatically generated display that contains information relevant to a user about another user of a social network.” Rather than forcing people to search through “disparate and disorganized” content for items of interest, the system would seek to generate a list of “relevant” information in a “preferred order.”

The listed authors were “Zuckerberg et al.” and the product was the News Feed.

The idea of showing users streams of activity wasn’t entirely new—photo-sharing website Flickr and others had been experimenting with it—but the change was massive. Before, Facebook users would interact with the site mainly via notifications, pokes, or looking up friends’ profiles. With the launch of the News Feed, users got a constantly updating stream of posts and status changes. The shift came as a shock to Facebook’s then 10 million users, who did not appreciate their activities being monitored and their once-static profiles mined for updated content. In the face of widespread complaints, Zuckerberg wrote a post reassuring users, “Nothing you do is being broadcast; rather, it is being shared with people who care about what you do—your friends.” He titled it: “Calm down. Breathe. We hear you.”

Hearing user complaints wasn’t the same thing as listening to them. As Chris Cox would later note at a press event, News Feed was an instant success at boosting activity on the platform and connecting users. Engagement quickly doubled, and within two weeks of launch more than a million members had affiliated themselves with a single interest for the first time. The cause that had united so many people? A petition to eradicate the “stalkeresque” News Feed.

The opaque system that users revolted against was, in hindsight, remarkably simple. Content mostly appeared in reverse chronological order, with manual adjustments made to ensure that people saw both popular posts and a range of material. “In the beginning, News Feed ranking was turning knobs,” Cox said.

Fiddling with dials worked well enough for a little while, but everyone’s friend lists were growing and Facebook was introducing new features such as ads, pages, and interest groups. As entertainment, memes, and commerce began to compete with posts from friends in News Feed, Facebook needed to ensure that a user who had just logged on would see their best friend’s engagement photos ahead of a cooking page’s popular enchilada recipe.

The first effort at sorting, eventually branded “EdgeRank,” was a simple formula that prioritized content according to three principal factors: a post’s age, the amount of engagement it got, and the interconnection between user and poster. As an algorithm, it wasn’t much—just a rough attempt to translate the questions “Is it new, popular, or from someone you care about?” into math.
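Translated into code, an EdgeRank-style score is only a few lines. A minimal sketch, assuming invented weights and decay constants (Facebook never published the real ones):

```python
import math
import time

def edgerank_score(post, viewer, now=None):
    """Rough sketch of an EdgeRank-style score: affinity x engagement x freshness.

    The specific weights and the decay curve here are invented for
    illustration, not Facebook's actual constants.
    """
    now = now or time.time()

    # Interconnection: how often the viewer interacts with the poster.
    affinity = viewer["interaction_counts"].get(post["author"], 0) + 1

    # Engagement: how much attention the post itself has attracted.
    engagement = 1 + post["likes"] + 2 * post["comments"]

    # Age: older posts decay exponentially.
    age_hours = (now - post["created_at"]) / 3600
    freshness = math.exp(-age_hours / 24)

    return affinity * engagement * freshness

viewer = {"interaction_counts": {"alice": 25, "bob": 2}}
post = {"author": "alice", "likes": 3, "comments": 1,
        "created_at": time.time() - 7200}
print(edgerank_score(post, viewer))

# The feed is then just the candidate posts sorted by score, highest first:
# feed = sorted(posts, key=lambda p: edgerank_score(p, viewer), reverse=True)
```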

There was no dark magic at play, but users again revolted against the idea of Facebook putting its thumb on the scale of what they saw. And, again, Facebook usage metrics jumped across the board.

The platform’s recommendation systems were still in their infancy, but the dissonance between users’ vocal disapproval and avid usage led to an inescapable conclusion inside the company: regular people’s opinions about Facebook’s mechanics were best ignored. Users screamed “stop,” Facebook kept going, and everything would work out dandy.

By 2010, the company was looking to move beyond EdgeRank’s crude formula to recommend content based on machine learning, a branch of artificial intelligence focused on training computers to design their own decision-making algorithms. Rather than programming Facebook’s computers to rank content according to simple math, engineers would program them to analyze user behavior and design their own ranking formulas. What people saw would be the result of constant experimentation, the platform serving up whatever it predicted was most likely to generate a like from a user and evaluating its own results in real time.
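In machine learning terms, “most likely to generate a like” is a supervised prediction problem: train a model on past posts a user did or did not like, then rank new posts by predicted probability. A bare-bones sketch with an off-the-shelf model, using invented feature names and toy data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: features of a (user, post) pair from historical feed data,
# e.g. [poster_affinity, post_age_hours, like_count, is_photo].
X_train = np.array([
    [0.9,  1.0, 120, 1],
    [0.1, 30.0,   3, 0],
    [0.7,  5.0,  40, 1],
    [0.2, 48.0,   1, 0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = the user liked the post

model = LogisticRegression().fit(X_train, y_train)

# At serving time, rank candidate posts by predicted like probability.
candidates = np.array([
    [0.8,  2.0, 80, 1],
    [0.3, 12.0, 10, 0],
])
p_like = model.predict_proba(candidates)[:, 1]
ranking = np.argsort(-p_like)  # highest predicted probability first
print(p_like, ranking)
```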


Despite the growing complexity of its product and the collection of user data at a scale the world had never seen, Facebook still didn’t know enough about its users to show them relevant ads. Brands loved the attention and buzz they could get from creating content on Facebook, but they hadn’t found the company’s paid offerings compelling. In May 2012, General Motors killed its entire Facebook advertising budget. A prominent digital advertising executive declared Facebook ads “fundamentally some of the worst performing ad units on the Web.”

Fixing the problem would fall to a team run by Joaquin Quiñonero Candela. A Spaniard who grew up in Morocco, Quiñonero was living in the UK and working on artificial intelligence at Microsoft in 2011 when friends scattered across Northern Africa began talking excitedly about social media–driven protests. The machine learning techniques he was using to optimize Bing search ads had clear applications to the social networks that people had used to overthrow four autocratic states and nearly topple several more. “I joined Facebook because of the Arab Spring,” Quiñonero said.

Quiñonero found that the way Facebook built its products was nearly as revolutionary as their results. Invited by a friend to tour the Menlo Park campus, he was shocked to look over the shoulder of an engineer making a significant but unsupervised update to Facebook’s code. Confirming how much faster the company moved than Microsoft, Quiñonero received a Facebook job offer a week later. Quiñonero began working on ads, and his timing could hardly have been better. Advances in machine learning and raw computing speed allowed the platform not only to pigeonhole users into demographic niches (“single heterosexual woman in San Francisco, late twenties, interested in camping and salsa dancing”) but to spot correlations in what they clicked on and then use that information to guess which ads they would find relevant. After beginning with near-random guesses on how to maximize the odds of a click, the system would learn from its hits and misses, refining its model for predicting which ads had the best shot at success. It was hardly omniscient—recommended ads were regularly inexplicable. But the bar for success in digital advertising was low: if 2 percent of users clicked on an ad, that was a triumph. With billions of ads served each day, algorithm tweaks that produced even modest gains could bring in tens or hundreds of millions of dollars in revenue. And Quiñonero’s team found that it could churn out those alterations. “I told my team to go fast, to ship every week,” he said.
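The economics are easy to reproduce with round numbers. A back-of-the-envelope sketch, where the ad volume and click price are illustrative assumptions rather than Facebook’s actual figures:

```python
ads_per_day = 1_000_000_000      # assumed: a billion ads served daily
baseline_ctr = 0.020             # 2 percent, the "triumph" threshold
revenue_per_click = 0.30         # assumed average price per click, dollars

def annual_click_revenue(ctr):
    return ads_per_day * ctr * revenue_per_click * 365

baseline = annual_click_revenue(baseline_ctr)
# A "modest" model tweak: click-through rises from 2.0% to 2.1%.
improved = annual_click_revenue(0.021)

print(f"baseline: ${baseline:,.0f}/year")
print(f"improved: ${improved:,.0f}/year")
print(f"gain:     ${improved - baseline:,.0f}/year")
# Under these assumptions, a 0.1-point CTR gain is worth
# roughly $110 million a year.
```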

The rapid pace made sense. The team’s AI was improving not just revenue but how people felt about the platform. Better-targeted ads meant Facebook could make more money per user without increasing the ad load, and there wasn’t all that much that could go wrong. When Facebook pitched denture cream to teenagers, nobody died.

Advertising was the beachhead for machine learning at Facebook, and soon everyone wanted a piece of the action. For product executives tasked with increasing the number of Facebook groups joined, friends added, and posts made, the appeal was obvious. If Quiñonero’s techniques could increase how often users engaged with ads, they could increase how often users engaged with everything else on the platform.

Every team responsible for ranking or recommending content rushed to overhaul their systems as fast as they could, setting off an explosion in the complexity of Facebook’s product. Employees found that the biggest gains often came not from deliberate initiatives but from simple futzing around. Rather than redesigning algorithms, which was slow, engineers were scoring big with quick and dirty machine learning experiments that amounted to throwing hundreds of variants of existing algorithms at the wall and seeing which versions stuck—which performed best with users. They wouldn’t necessarily know why a variable mattered or how one algorithm outperformed another at, say, predicting the likelihood of commenting. But they could keep fiddling until the machine learning model produced an algorithm that statistically outperformed the existing one, and that was good enough.
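What “statistically outperformed” means in practice: expose each variant to a slice of traffic, then check whether the winner’s lift is bigger than chance would explain. A bare-bones two-proportion test of that kind, with invented traffic numbers:

```python
import math

def z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is variant B's rate genuinely above variant A's?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Existing ranking model vs. one of hundreds of machine-learned variants.
z, p = z_test(clicks_a=19_800, views_a=1_000_000,
              clicks_b=20_600, views_b=1_000_000)
if p < 0.05:
    print(f"variant wins (z={z:.2f}, p={p:.5f}) -- ship it")
```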


It would be hard to conceive of an approach to building systems that more embodied the slogan “Move Fast and Break Things.” Facebook wanted only more. Zuckerberg wooed Yann LeCun, a French computer scientist specializing in deep learning, meaning the construction of computer systems capable of processing information in ways inspired by human thinking. Already renowned for creating the foundational AI techniques that made facial recognition possible, LeCun was put in charge of a division that aimed to put Facebook at the vanguard of fundamental research into artificial intelligence.

Following his success with ads, Quiñonero was given an equally formidable task: pushing machine learning into the company’s bloodstream as fast as possible. His initial staff of two dozen—the team responsible for building new core machine learning tools and making them available to other parts of the company—had grown in the three years since he’d been hired. But it was still nowhere near large enough to assist every product team that wanted machine learning help. The skills to build a model from scratch were too specialized for engineers to readily pick up, and you couldn’t increase the supply of machine learning PhDs by throwing money around.

The solution was to build FB Learner, a sort of “paint by numbers” version of machine learning. It packaged techniques into a template that could be used by engineers who quite literally did not understand what they were doing. FB Learner did for machine learning inside Facebook what services like WordPress had once done for building websites, eliminating the need to muck around with HTML or configure a server. Rather than setting up a blog, however, the engineers in question were messing with the guts of what was rapidly becoming a preeminent global communications platform.
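FB Learner’s real interface was internal and isn’t reproduced here, but the “paint by numbers” idea (pick a prebuilt model from a menu, and let shared infrastructure handle the splitting, training, and evaluation) can be sketched generically; every name and template below is hypothetical:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A menu of prebuilt model "templates": the engineer picks one by name
# without needing to know how it works inside.
TEMPLATES = {
    "logistic_regression": LogisticRegression,
    "gradient_boosting": GradientBoostingClassifier,
}

def train_from_template(template_name, X, y):
    """Hypothetical template runner: split, fit, report. No ML expertise needed."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = TEMPLATES[template_name]()
    model.fit(X_train, y_train)
    print(f"{template_name}: held-out accuracy {model.score(X_test, y_test):.3f}")
    return model

# Usage (feature matrix and labels supplied by the product team):
# model = train_from_template("gradient_boosting", X=user_features, y=did_click)
```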

Many at Facebook were aware of the increasing concerns around AI outside the company’s walls. Poorly designed algorithms meant to reward good healthcare penalized hospitals that treated sicker patients, and models purporting to quantify a parole candidate’s risk of reoffending turned out to be biased in favor of keeping Black people in jail. But these issues seemed remote on a social network.

An avid user of FB Learner would later describe machine learning’s mass diffusion inside Facebook as “giving rocket launchers to twenty-five-year-old engineers.” But at the time, Quiñonero and the company spoke of it as a triumph.

“Engineers and teams, even with little expertise, can build and run experiments with ease and deploy AI-powered products to production faster than ever,” Facebook announced in 2016, boasting that FB Learner was ingesting trillions of data points on user behavior every day and that engineers were running 500,000 experiments on them a month.

The sheer amount of data that Facebook collected—and ad-targeting results so good that users regularly suspected (wrongly) the company of eavesdropping on their offline conversations—gave rise to the claim that “Facebook knows everything about you.”

That wasn’t quite correct. The wonders of machine learning had obscured its limits. Facebook’s recommendation systems worked by raw correlations in user behavior, not by identifying a user’s tastes and interests and then serving content based on them. News Feed couldn’t tell you whether you liked ice skating or dirt biking, hip-hop or K-pop, and it couldn’t explain in human terms why one post appeared in your feed above another. Although this inexplicability was an obvious drawback, machine learning–based recommendation systems spoke to Zuckerberg’s deep faith in data, code, and personalization. Freed from human limitation, error, and bias, Facebook’s algorithms were capable, he believed, of unparalleled objectivity—and, perhaps more important, efficiency.

A separate strain of machine learning work was devoted to figuring out what content was actually in the posts Facebook recommended. Known as classifiers, these were AI systems trained to perform pattern recognition on vast data sets. Years before Facebook’s creation, classifiers had proven themselves indispensable in the fight against spam, allowing email providers to move beyond simple keyword filters that sought to block mass emails about, say, “Vi@gra.” By ingesting and comparing a huge collection of emails—some labeled as spam, some as not spam—a machine learning system could develop its own rubric for distinguishing between them. Once this classifier was “trained,” it would be set loose, analyzing incoming email and predicting the probability that each message should be sent to an inbox, a junk folder, or straight to hell.
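That email workflow translates almost directly into a few lines of modern tooling. A toy version of such a spam classifier, with a deliberately tiny labeled set (real systems train on millions of messages):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labeled training data: 1 = spam, 0 = not spam.
emails = [
    "cheap vi@gra buy now limited offer",
    "win a free prize claim your money today",
    "meeting moved to 3pm, see agenda attached",
    "dinner at our place on saturday?",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a naive Bayes classifier: the model builds
# its own rubric from the examples rather than relying on keyword rules.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(emails, labels)

# For new mail, the classifier emits a spam probability, not a verdict.
print(clf.predict_proba(["claim your free money now"])[:, 1])    # high
print(clf.predict_proba(["agenda for saturday meeting"])[:, 1])  # low
```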


By the time machine learning experts began to arrive at Facebook, the list of questions that classifiers sought to answer had grown well past “Is it spam?,” thanks in large part to people like LeCun. Zuckerberg was bullish on its future progress and its applications for Facebook. By 2016, he was predicting that classifiers would surpass human capacities of perception, recognition, and comprehension within the next five to ten years, allowing the company to shut down misbehavior and make huge leaps in connecting the world. That prediction would prove more than a little optimistic.

Even as techniques improved, data sets grew, and processing sped up, one drawback of machine learning persisted. The algorithms that the company produced stubbornly refused to explain themselves. Engineers could evaluate a classifier’s success by testing it to see what percentage of the content it flagged was correctly identified (its “precision”) and what portion of the targeted content it managed to catch (its “recall”). But because the system was teaching itself how to identify something based on a logic of its own design, when it erred, there was no human-cognizable reason why.
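Concretely, both metrics fall out of the classifier’s error counts. A worked toy example:

```python
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)  # of everything it flagged, how much was right
    recall = tp / (tp + fn)     # of everything it should have caught, how much it did
    return precision, recall

# Suppose a porn classifier flags 100 images: 90 truly are porn (true
# positives), 10 are innocent bed photos (false positives), and it misses
# 60 real cases entirely (false negatives).
p, r = precision_recall(tp=90, fp=10, fn=60)
print(f"precision: {p:.0%}, recall: {r:.0%}")  # precision: 90%, recall: 60%
```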

Sometimes mistakes would seem nonsensical. Other times they would be systematic in ways that reflected human error. Early in Facebook’s efforts to deploy a classifier to detect pornography, Arturo Bejar recalled, the system routinely tried to cull images of beds. Rather than learning to identify people screwing, the model had instead taught itself to recognize the furniture on which they most often did.

The problem had an easy fix: engineers simply needed to train the model with more PG-rated mattress scenes. It made for a good joke—as long as you didn’t consider that the form of machine learning that the engineers had just screwed up was one of the most basic that Facebook was using. Similarly fundamental errors kept occurring, even as the company came to rely on far more advanced AI techniques to make far weightier and complex decisions than “porn/not porn.” The company was going all in on AI, both to determine what people should see and to solve any problems that might arise.

There was no question that the computer science was dazzling and the gains concrete. But the speed, breadth, and scale of Facebook’s adoption of machine learning came at the cost of comprehensibility. Why did Facebook’s “Pages You Might Like” algorithm seem so focused on recommending certain topics? How had a video snippet from a computer animation about dental implants ended up being seen a hundred million times? And why did some news publishers consistently achieve virality when they just rewrote other outlets’ stories?

Faced with these questions, Facebook’s Communications team would note that the company’s systems responded to people’s behavior and that there was no accounting for taste. These were difficult points to refute. They also obscured an uncomfortable fact: Facebook was achieving its growth in ways it didn’t fully understand.

Within five years of announcing that it was beginning to use machine learning to recommend content and target ads, Facebook’s systems would rely so heavily on AI capable of training itself that, without the technology, Yann LeCun proudly declared, all that would be left of the company’s products would be “dust.”
