The danger of advertising in ChatGPT
This spring, Sam Altman, the CEO of OpenAI, delivered a sermon against advertising from the chancel of Harvard's Memorial Church: "I'll reveal a bias of mine: I hate ads." Ads, he said, "mostly cause the company providing the service to be out of alignment with the incentives of users," and he added that he found the idea of mixing advertising with artificial intelligence "particularly disturbing."
This comment immediately reminded me of something I'd read before. It appeared in a highly influential paper that Sergey Brin and Larry Page wrote in 1998, when they were developing Google at Stanford. They argued that advertising often made search engines less useful and that companies relying on it "would by definition tend to prioritize advertisers over the needs of consumers."
I arrived at Stanford as a freshman in 2000, shortly after Brin and Page accepted about $25 million in venture capital to turn their academic project into a company. My best friend from college convinced me to try Google, which he considered more ethical than earlier search engines. But what we didn't know was that, in the midst of the dot-com bust, Google's investors were pressuring the cofounders to hire a more experienced CEO.
Brin and Page hired Eric Schmidt, who in turn hired Sheryl Sandberg to design an advertising program. A couple of years later, while filing for Google's IPO, Brin and Page justified their shift from an ad-averse stance by telling shareholders that ads made Google more useful because they provided what the founders called "excellent business information."
When I was a senior, news leaked that Facebook—which some of us had heard about from friends at Harvard, where it started—was coming to our campus. As its co-founder Mark Zuckerberg said, "I know it sounds corny, but I would love to make people's lives better, especially in their social lives. In the future, we might run ads to recoup our money, but since it's so cheap to offer, we'll probably wait."
Back in 2007, when I was covering Facebook for The Wall Street Journal, I got the scoop that the social network—which already carried ads—would begin using data about users and their "friends" to target those ads more accurately. Like Google before it, Facebook hailed the move as good for users. Zuckerberg even brought in Sheryl Sandberg from Google to help him. Later, under pressure first from an economic downturn and then from an IPO, Facebook followed Google's lead and doubled down on advertising, in its case by collecting and monetizing even more of users' personal information.
And this brings me back to Altman and OpenAI, the company behind ChatGPT. It began as a nonprofit with the stated mission of building AI that would benefit humanity. After several interim restructurings, OpenAI has announced that it will create a public benefit corporation (though still controlled by the nonprofit), which will serve the public good as well as the interests of shareholders. At the same time, it will remove the cap on investor returns, a change that CFO Sarah Friar says "gets us to a potential IPO... if and when we want." An IPO on the horizon and rumors of an economic downturn: these are the same conditions that preceded Google's and Facebook's pivots into advertising.
The stage is set, then, for the next phase of Big Tech's exploitation of the ever-growing human hunger for information, connection, and well-being. Against this backdrop, it's no surprise that Altman and other OpenAI executives have quietly floated trial balloons about eventually turning to advertising. In December, Friar told the Financial Times that OpenAI is considering it, though she clarified that the company has "no active plans" in that regard. Altman later raised the possibility of an affiliate revenue model, in which the company would collect a percentage of sales when users buy a product discovered through an OpenAI feature.
Altman emphasized that OpenAI wouldn't accept money to change the placement of product mentions. Still, it's not hard to imagine how an ad-driven OpenAI would work: it would combine all the personal information we already share with ChatGPT—marital problems, work conflicts—with the billions of words OpenAI ingests in building its products, all in the service of ever more precise recommendations about what we should do with our time, money, and attention.
I don't think we'd have to wait long for Altman to tell us that this, too, was for our benefit. And once ChatGPT took the plunge, everyone else would surely follow. Google, meanwhile, already places ads next to its AI-generated search results.
The problem isn't just that this strategy would turn digital tools originally designed to serve users into tools designed to serve advertisers, two groups whose interests are hardly identical. It's also that these tools would dramatically amplify what the writer and psychologist Shoshana Zuboff calls "surveillance capitalism": a vast system in which companies treat our experiences and identities as commodities to be used to manipulate us through advertising. The effect, she says, is "fundamentally epistemic and anti-democratic."
Roger McNamee, a former mentor to both Sheryl Sandberg and Mark Zuckerberg, as well as an early investor in Facebook, had, as he puts it, "a front-row seat" during the early years of Google and Facebook. He has since criticized those companies for their control over users, and politicians for what he sees as their failure to demand the necessary safeguards. McNamee recently told me he doesn't know OpenAI as closely as he knew those other companies, "but my skills at recognizing business patterns are much better now." He believes the danger is greater this time, compounded by all the data AI companies are exploiting, not to mention their massive consumption of natural resources and the threat they pose to the livelihoods of millions of workers, all for products he considers of minimal value. If OpenAI leans into the hype and embraces surveillance capitalism, he believes, the consequences will be dire.
Copyright The New York Times