When people talk about artificial intelligence (AI), some imagine robots taking over humanity, but the reality is that AI is more about companies deploying algorithms that repeat a task, improving with each iteration and producing better outcomes. AI has many applications; some of the most popular uses right now are digital platforms ranking and displaying content, recruitment processes, and financial services companies offering products to clients.
There are concerns about the use of AI for military purposes, including hardware and drones, Pace University Professor of Law Roy Girasa told PYMNTS, but there is no question that the biggest issue right now is facial recognition.
“What you hear over and over and over again is facial recognition. It’s the idea that we have lost our privacy,” said Girasa. In fact, the only solution he sees right now is accepting that there is almost nothing we can do about it.
The issue of facial recognition and how companies collect data, with or without user consent, was back in the headlines this week as Texas sued Facebook over its facial recognition practices. Other companies, like Clearview, have also been fined for these practices and ordered to delete millions of photos used to train facial recognition algorithms.
Read more: Meta’s Facial Recognition Lawsuit May Intensify Privacy Law Discussion
Yet Girasa said litigation or regulation may not always be the solution. In his opinion, Europe has gone overboard with regulation. While he supports Europe’s effort to fight some Big Tech practices with strong antitrust enforcement and tight control of mergers and acquisitions, regulating AI may not be the right approach, and it may be detrimental to innovation. “They seem to be more interested in regulation rather than in innovation.”
In the U.S., without a federal law regulating privacy or facial recognition, it is up to the 50 states to regulate. “Illinois and California are very aggressive, copying other regulations, and companies are facing a tremendous number of lawsuits, and I don’t know if this is good or bad,” Girasa said.
He said the problem is that companies working hard to bring new products to market may technically violate the law, but not deliberately; they do not intend to invade privacy.
A different situation would be if facial recognition were allowed to the level seen in China, where AI can determine whether you get into a decent university or obtain a visa. There needs to be a balance, and this is what everybody is trying to figure out.
When asked about the need for new regulations to limit the use of AI, Girasa said businesses should be left to operate, and if they grow big and behave anticompetitively, there should be strong enforcement of antitrust laws, “not simply fining 2% of their profits, but 10%” — but only when there is a clear benefit to applying the law.
“You can’t blame the companies for being aggressive and trying to expand, because they either expand or they could literally dive overnight. I could understand that. On the other hand, the role of government has to be to exercise some degree of control over them. Now where’s the balance? I can’t answer that.”
He suggests that the approach taken by the OECD is better than the “over-aggressiveness of the EU or the under-aggressiveness of the federal government.” The OECD wants to encourage innovation, and the EU and the U.S. should be working as partners on these issues.