FTC Mulls New Artificial Intelligence Regulation

The Federal Trade Commission (FTC) is considering a wide range of options, including new rules and guidelines, to tackle data privacy concerns and algorithmic discrimination.

FTC Chair Lina Khan, in a letter to Senator Richard Blumenthal (D-CT), outlined her goal to “protect Americans from unfair or deceptive practices online.” In particular, Khan said that the FTC is considering rulemaking to address “lax security practices, data privacy abuses and algorithmic decision-making that may result in unlawful discrimination.”

The FTC’s letter comes in response to a letter from several lawmakers, including Senator Blumenthal, who urged the FTC to start a rulemaking process that would “protect consumer privacy, promote civil rights and set clear safeguards on the collection and use of data in the digital economy.”

“Rulemaking may prove a useful tool to address the breadth of challenges that can result from commercial surveillance and other data practices […] and could establish clear market-wide requirements,” Khan wrote.

The FTC can resort to its rulemaking authority to address unfair or deceptive practices that occur commonly, instead of relying on actions against individual companies. Section 18 of the FTC Act enables the FTC to issue such trade regulation rules as long as the regulator provides an opportunity for interested parties to be heard and the Commission has reason to believe that the practices to be addressed by the rulemaking are “prevalent.”

This last requirement may not be easy for the FTC to meet: algorithmic discrimination may exist, but the practice may not be widespread enough to be considered “prevalent.” Even if the practice is, unfortunately, spread throughout the country, it may be difficult for the FTC to gather evidence supporting the adoption of such a rule, since companies are not very keen on sharing information about their algorithms.

In the event that the FTC promulgates a trade regulation rule, anyone who violates it, whether knowingly or not, will be liable for civil penalties and for the injury caused to consumers.

The FTC hasn’t used its rulemaking powers very often in the past, but they are an appealing tool for any regulator who wants to tackle industry-wide problems with little supervision or accountability. Lina Khan, who has been very critical of big tech firms, may be drawn to this type of rule, having seen first-hand the difficulty of eradicating certain data practices through a case-by-case approach.

While it is difficult to predict the scope of any possible trade rule on algorithmic discrimination, facial recognition software and biometrics-related data are likely targets. The FTC has previously warned about the potential harm that the use of biometric data combined with artificial intelligence could cause to consumers. Earlier this year, the FTC ordered the company Everalbum to delete photos and videos collected from users without their explicit consent, as well as algorithms developed using users’ biometric information. The order came with additional recordkeeping and compliance requirements to ensure that the wrongdoing is not repeated.

This type of individual action will remain available to the FTC regardless of whether new rules are adopted. It may also be a necessary step on the way to adopting new rules, since these probes may provide the FTC with enough evidence to support a finding that certain data practices are “prevalent” in the country, a requirement for adopting a trade rule under Section 18 of the FTC Act.

What at least seems clear from Khan’s letter is that any rulemaking on artificial intelligence may address unfair or deceptive acts that result in discrimination and affect civil rights, but it may not go as far as the European Commission has.

The EU regulator, in its latest proposal to improve the conditions of people working through digital platforms, included new obligations on algorithmic management that could have a negative effect on companies like Uber or Deliveroo. The proposed directive would require digital labor platforms to ensure human monitoring of automated systems and to establish appropriate channels for discussing and requesting review of decisions made by those systems. This is the second EU legislative proposal this year that regulates or imposes limitations on the establishment and operation of automated systems based on algorithms and artificial intelligence.

In conclusion, Khan’s letter may signal a change in the FTC’s approach to deal with big (and small) tech companies: using consumer protection tools and rulemaking, rather than relying only on antitrust enforcement tools, to tackle data-related concerns.
