FTC’s Khan warns tech industry that agency will strictly enforce AI data privacy

Artificial intelligence tools will be vigorously regulated by the Federal Trade Commission (FTC) with an eye on consumer privacy, its Chair Lina Khan told an audience of tech executives and startup founders at a conference Tuesday.

“We’re crafting easily administrable remedies with bright-line rules on the development, use and management of AI inputs,” Khan said during a speech. “That means making clear that some data, particularly people’s sensitive health data, geolocation data and browsing data, is simply off limits for model training.”

Khan said the agency’s enforcement actions will make clear that “there is no AI exemption” from existing laws.

“Firms cannot use claims of innovation as cover for law breaking,” Khan added.

Referring to how behavioral advertising has sometimes fueled companies’ abuse of private data, Khan said AI model training is “emerging as another feature that could further incentivize surveillance.”

The agency is closely watching how companies notify consumers if they want to use data they have already collected in another capacity to train AI, Khan said, emphasizing that firms must “actively notify users of this change rather than quietly and retroactively rewriting their terms of service.”

The FTC recently announced a string of settlements with companies it alleges improperly used consumers’ health, geolocation and browsing data.

Last week, the agency announced that Avast Limited will pay $16.5 million and will no longer be permitted to sell or license web browsing data to advertisers, after an investigation found that the company and its subsidiaries allegedly peddled aggregated, re-identifiable browsing data to third parties on a massive scale, usually without alerting consumers.

Last month, the FTC penalized two data brokers for selling geolocation data to advertisers, alleging that the companies amassed and sold the data without adequately informing consumers or obtaining their consent. The settlements were the agency’s first to crack down on the sale of geolocation data.

Last March, in one of a string of health privacy settlements, the online counseling service BetterHelp, Inc. agreed to pay $7.8 million for allegedly sharing information about consumers’ mental health difficulties with third parties, including Facebook and Snapchat, for advertising purposes.

Khan’s speech Tuesday reflected the increasingly aggressive FTC approach to regulating technology companies’ treatment of consumer privacy.

“We’re looking at how business models can create and drive incentives,” Khan said. “Our remedies are making clear that the drive to refine algorithms cannot come at the expense of people’s privacy or security.”

AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive

You may have heard that “data is the new oil”—in other words, data is the critical raw material that drives innovation in tech and business, and like oil, it must be collected at a massive scale and then refined in order to be useful. And there is perhaps no data refinery as large-capacity and as data-hungry as AI. Companies developing AI products, as we have noted, possess a continuous appetite for more and newer data, and they may find that the readiest source of crude data is their own user base. But many of these companies also have privacy and data security policies in place to protect users’ information. These companies now face a potential conflict of interest: they have powerful business incentives to turn the abundant flow of user data into more fuel for their AI products, but they also have existing commitments to protect their users’ privacy.

Companies might be tempted to resolve this conflict by simply changing the terms of their privacy policy so that they are no longer restricted in the ways they can use their customers’ data. And to avoid backlash from users who are concerned about their privacy, companies may try to make these changes surreptitiously. But market participants should be on notice that any firm that reneges on its user privacy commitments risks running afoul of the law.

It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.

When it comes to unlawful conduct, the FTC has a long history of challenging deceptive and unfair practices in connection with a company’s privacy policy that undermine the promises the company made to consumers. Nearly two decades ago, the FTC charged Gateway Learning Corporation, known for its “Hooked on Phonics” products, with violating the FTC Act after it changed its privacy policy to allow it to share consumer data with third parties without notifying consumers or getting their consent.

Similarly, this past summer, the FTC alleged that a genetic testing company violated the law when the company changed its privacy policy to retroactively expand the kinds of third parties with which it could share consumers’ personal data. The company did that without notifying consumers who had previously shared personal data or obtaining their consent, said the FTC.

Even though the technological landscape has changed between 2004 and today, particularly with the advent of consumer-facing AI products, the facts remain the same: A business that collects user data based on one set of privacy commitments cannot then unilaterally renege on those commitments after collecting users’ data. Especially given that certain features of digital markets can make it more difficult for users to easily switch between services, users may lack recourse once a firm has used attractive privacy commitments to lure them to the product only to turn around and then back out of those commitments.

The FTC will continue to bring actions against companies that engage in unfair or deceptive practices—including those that try to switch up the “rules of the game” on consumers by surreptitiously re-writing their privacy policies or terms of service to allow themselves free rein to use consumer data for product development. Ultimately, there’s nothing intelligent about obtaining artificial consent.

Thank you to staff from across the Office of Technology and the Division of Privacy and Identity Protection in the Bureau of Consumer Protection who collaborated on this post (in alphabetical order): Crystal Grant, Julia Horwitz, Amritha Jayanti, Stephanie Nguyen, Madeleine Varner, Ben Wiseman.

FTC is ‘just getting started’ as it takes on Amazon, Meta and more.

Chair Lina Khan says: “We’re absolutely focused on enforcing the law without fear.”

“In what way is Amazon a monopoly?” Karl asked. “Because I can buy on Amazon, but I can also go down the street and I can buy at Walmart or I can buy at Target.”

Khan said that Amazon has taken actions that ultimately mistreat customers.

“The fees that it charges to small businesses have dramatically increased over the last few years, so that now some small businesses have to pay $1 out of every $2 to Amazon. It’s basically a 50% Amazon tax,” she explained.

“Our lawsuit also alleges that Amazon has been using a whole set of secret algorithms to quietly raise prices for consumers,” she continued. “Our lawsuit alleges that if Amazon had not engaged in these illegal tactics, that would have allowed more rival online superstores to emerge, and that would be better for consumers who would face more price competition.”

Amazon disputed the lawsuit’s claims after it was filed last year.

“The practices the FTC is challenging have helped to spur competition and innovation across the retail industry, and have produced greater selection, lower prices, and faster delivery speeds for Amazon customers and greater opportunity for the many businesses that sell in Amazon’s store,” Amazon said at the time.

Both Meta (formerly Facebook) and Amazon separately filed motions with the FTC requesting that Khan be recused from any decision related to their companies. She did not recuse herself.

“Why did you not do that?” Karl pressed.

“The ethics laws primarily require recusal when you have some kind of conflict of interest,” she said, like having stock in a company or a close family member who works for a company.

“But not a clear bias?” Karl pushed back. “I mean, you obviously had an opinion about Amazon before you took this job.”

“I had done academic work. A lot of people that have the fortune of being appointed to these jobs come in through having done policy work,” Khan said. “We’re absolutely focused on enforcing the law without fear or favor. Well, we also don’t want to kind of indulge, you know, requests that are not serious and that are being made in a frivolous way if there’s no real conflict of interest.”

For all the predictable critics, Khan also has fans spanning the political spectrum, from progressives like Vermont Sen. Bernie Sanders to supporters of former President Donald Trump like Ohio Sen. JD Vance and Florida Rep. Matt Gaetz.

Her Republican supporters have been dubbed the “Khanservatives.”

Their support, however, doesn’t surprise Khan.

“Antitrust and anti-monopoly has a long bipartisan history in our country,” she said, “and it’s because there’s long been a recognition that in the same ways that if you concentrate power in our political sphere, that can undermine people’s liberties and freedoms; if you concentrate power in the economic sphere, that can also be a major threat.”

Khan said that monopoly power attracts bipartisan scrutiny, so regardless of who wins the presidential election in November and who chairs the FTC next, she believes the agency’s approach under her leadership will continue.

“The bipartisan concern that we see about monopoly power, the way we see concern about people’s data being harvested and surveilled, the way we see concern about financialization of health care — these are all issues that I think will continue to attract concern no matter who’s in this job,” she said.