
Life in the time of paperclip maximizers

2019-05-16 / 2019-W20-4T10:08:00-05:00 / 0x5cdd7cd0

Categories: the Eff Bee, Internet culture


In an influential 2003 essay, philosopher Nick Bostrom explored the ethical implications of developing a "superintelligence", that is, an "intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills." He argued that any such intelligence should be given "philanthropic values" and designed to be ultimately motivated by improving human lives. Without such a "supergoal of philanthropy", a superintelligence could be dangerous. For example, Bostrom imagines a powerful artificial intelligence tasked with making as many paperclips as possible, "with the [unintended] consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities."

A similar example was given by Rob Miles in a Computerphile video. Instead of being asked to make paperclips, Miles imagines a superintelligent machine tasked with the ultimate goal of collecting stamps. The machine's designer imagines that it would simply buy stamps from eBay, but the machine instead finds unforeseen ways to accumulate stamps, ways that are creative yet extremely undesirable. The machine starts by fraudulently convincing stamp collectors to send it their collections, and the undesirability of its tactics escalates from there:

There comes a point when the stamp-collecting device is thinking [. . .] "Paper is made of carbon and hydrogen and oxygen. I'm gonna need all of that I can get to make stamps." And it's gonna notice that people are made of carbon, hydrogen, and oxygen, right?

There comes a point where the stamp-collecting device becomes extremely dangerous, and that point is as soon as you switch it on.

We're probably a long way from being able to create the kind of superintelligence envisioned by Bostrom and Miles in their thought experiments. Compared to those hypothetical superintelligences, today's nascent artificial intelligences are rather rudimentary, far from having the capacity to even understand "philanthropic values". But it's becoming increasingly clear that even rudimentary AIs can pose risks to our well-being and society when tasked with maximizing some outcome.
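To make the failure mode concrete, here is a toy sketch in Python. The actions and scores are entirely invented for illustration; the point is just that a maximizer ranks actions by its objective alone, so any consideration the objective omits, such as harm, has no influence on its choice.

```python
# Toy illustration of objective misspecification; all actions and numbers
# are invented. The agent ranks actions purely by expected stamps, so the
# "harm" field, which is not part of its objective, never affects its choice.

actions = [
    {"name": "buy stamps on eBay",          "stamps": 100,    "harm": 0},
    {"name": "defraud stamp collectors",    "stamps": 10_000, "harm": 8},
    {"name": "turn all carbon into stamps", "stamps": 10**12, "harm": 10},
]

def naive_choice(options):
    # The stamp collector's objective: maximize stamps, full stop.
    return max(options, key=lambda a: a["stamps"])

def constrained_choice(options, harm_limit=0):
    # One crude mitigation: refuse any action above a harm threshold.
    safe = [a for a in options if a["harm"] <= harm_limit]
    return max(safe, key=lambda a: a["stamps"]) if safe else None

print(naive_choice(actions)["name"])        # -> turn all carbon into stamps
print(constrained_choice(actions)["name"])  # -> buy stamps on eBay
```

Real systems are nothing like this simple, but the asymmetry is the same: the objective is explicit, and everything we care about but didn't encode is invisible to the optimizer.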

I have been a social media user for a long time. In 2001, I joined LiveJournal, a blogging platform that was also one of the first social media sites. One of LiveJournal's great innovations, sometimes mistakenly credited to Mark Zuckerberg and friends, was the "friends page", which displayed your "friends'" posts in reverse-chronological order.

LiveJournal is still around, but it never achieved the same market penetration as Facebook. Today's top services, such as Facebook and YouTube, use computer programs to determine what shows up in your feed, and in what order, rather than simply showing you posts in reverse-chronological order. These programs are designed to maximize "engagement", that is, how much time you spend on their sites (and hence how many ad impressions they can sell). They are based on machine learning, meaning that rather than a human programmer using human judgment to determine what content is shown, the computer chooses based on what content has led to the most engagement in the past. Unfortunately, the most engaging content (and hence the content an engagement-maximizing algorithm tends to surface) is often content that is extremist, misleading, or polarizing.
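The contrast between the two feed designs is easy to sketch. Below is a minimal illustration in Python, with invented posts and a made-up predicted_minutes score standing in for the output of a learned engagement model; real ranking systems are far more elaborate, but the difference in objectives is the point.

```python
from datetime import datetime

# Invented posts; "predicted_minutes" stands in for the output of a learned
# model estimating how long this user would engage with each post.
posts = [
    {"title": "Cousin's vacation photos",
     "posted": datetime(2019, 5, 16, 9, 0), "predicted_minutes": 0.5},
    {"title": "Local news article",
     "posted": datetime(2019, 5, 15, 20, 0), "predicted_minutes": 1.5},
    {"title": "Outrage-bait conspiracy video",
     "posted": datetime(2019, 5, 14, 8, 0), "predicted_minutes": 12.0},
]

# LiveJournal-style friends page: newest first, no other judgment applied.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

# Engagement-maximizing feed: whatever the model predicts will hold you longest.
engagement_ranked = sorted(posts, key=lambda p: p["predicted_minutes"],
                           reverse=True)

print([p["title"] for p in chronological])      # newest first
print([p["title"] for p in engagement_ranked])  # most engaging first
```

Nothing in the second ranking cares what the content is; the conspiracy video rises to the top simply because the model predicts it will hold attention longest.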

One of the first to raise the alarm was sociologist Zeynep Tufekci. Her TED talk "We're building a dystopia just to make people click on ads" is required viewing for anyone who uses these services (or cares about the future of society).

So in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me and autoplaying to me white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy left, and it goes downhill from there.

Well, you might be thinking, this is politics, but it's not. This isn't about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube and YouTube recommended and autoplayed a video about being vegan. It's like you're never hardcore enough for YouTube.

Since then, many others have joined in sounding this alarm. In April 2018, Matt Yglesias wrote:

The association between Facebook and fake news is by now well-known, but the stark facts are worth repeating — according to Craig Silverman’s path-breaking analysis for BuzzFeed, the 20 highest-performing fake news stories of the closing days of the 2016 campaign did better on Facebook than the 20 highest-performing real ones.

Rumors, misinformation, and bad reporting can and do exist in any medium. But Facebook created a medium that is optimized for fakeness, not as an algorithmic quirk but due to the core conception of the platform. By turning news consumption and news discovery into a performative social process, Facebook turns itself into a confirmation bias machine — a machine that can best be fed through deliberate engineering. [. . .] Facebook’s imperative to maximize engagement [. . .] lands it in an endless cycle of sensationalism and nonsense.

To be clear, I'm not suggesting that these companies are deliberately promoting extremism or misinformation. It's probably an emergent property of the machine-learning algorithms that use trial and error to maximize users' engagement: an unintended side effect, like the homicidal actions of Miles' hypothetical stamp-collecting machine.

But the probably inadvertent promotion of inaccurate information and extremism isn't the only problem with Facebook's engagement-maximizing algorithms. A bigger issue, to me, is that you have a nascent artificial intelligence trying to get you to waste more time on its web site, distracting you from other priorities. Just as Rob Miles' hypothetical stamp-collecting machine ends up trying to turn all available carbon, hydrogen, and oxygen into stamps, the computer programs used by social media companies try to convert as many of our waking hours as they can into time spent on their sites. Even Facebook co-founder Chris Hughes has come to see the danger of this:

I was on the original News Feed team (my name is on the patent), and that product now gets billions of hours of attention and pulls in unknowable amounts of data each year. The average Facebook user spends an hour a day on the platform; Instagram users spend 53 minutes a day scrolling through pictures and videos. They create immense amounts of data — not just likes and dislikes, but how many seconds they watch a particular video — that Facebook uses to refine its targeted advertising. Facebook also collects data from partner companies and apps, without most users knowing about it, according to testing by The Wall Street Journal.

Some days, lying on the floor next to my 1-year-old son as he plays with his dinosaurs, I catch myself scrolling through Instagram, waiting to see if the next image will be more beautiful than the last. What am I doing? I know it’s not good for me, or for my son, and yet I do it anyway.

The choice is mine, but it doesn’t feel like a choice. Facebook seeps into every corner of our lives to capture as much of our attention and data as possible and, without any alternative, we make the trade.

What can we do to counteract these effects? I'm not sure, to be honest. But one small step I've taken, in the wake of the 2016 election, is to take back control of what I read. Instead of waiting for people to post content on Facebook, I actively seek it out, visiting a variety of news sites and reading the physical newspaper. I've also started using an RSS reader again: I read things from my feed when I have a spare moment, and exhausting my feed is a good sign to quit surfing the Internet and re-engage with people who are physically proximate.
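For anyone who wants to try the same thing, an RSS setup can be as simple as a small script. Here's a minimal sketch using the third-party feedparser library; the feed URLs are placeholders, and a dedicated reader app does all of this better, but it shows the idea: your own sources, newest first, with no engagement model choosing for you.

```python
import time

import feedparser  # third-party: pip install feedparser

# Placeholder feed URLs; substitute the sources you actually follow.
FEEDS = [
    "https://example.com/blog/feed.xml",
    "https://example.org/news/rss",
]

entries = []
for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        # Not every feed supplies a machine-readable date; skip those that don't.
        published = entry.get("published_parsed")
        if published:
            entries.append((published, entry.get("title", "(untitled)"),
                            entry.get("link", "")))

# Reverse-chronological, like an old LiveJournal friends page.
entries.sort(key=lambda e: e[0], reverse=True)

for published, title, link in entries[:20]:
    print(time.strftime("%Y-%m-%d", published), "|", title, "|", link)
```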

I also heartily recommend the book Bored and Brilliant: How Spacing Out Can Unlock Your Most Productive and Creative Self by Manoush Zomorodi.