Our most technical people are down on AI … and that’s a good thing

Commentary: The IEEE crowd is skeptical about AI’s most bullish claims, which turns out to be exactly what we need to push it forward.



According to a recent McKinsey survey, a majority of enterprises of all sizes are actively embracing AI. Hurray! The areas seeing the biggest boost from AI adoption include service-operations optimization, AI-based enhancement of products and contact-center automation. Again, hurray! When the general American populace is asked about AI, most have a positive view of its potential. Hurrays all around.

But if you ask the more engineering-centric, IEEE Spectrum crowd, AI has a long, long way to go before they’re willing to stand and applaud. IEEE Spectrum “members are involved with hard-to-penetrate vendor decision teams, usually in management capacity,” according to the 2020 media kit. In other words, this is a senior, highly technical crowd that isn’t overly impressed by puff pieces on the wonders of AI (though they may well believe AI has a bright future). No, when the IEEE Spectrum editors looked back on the 10 most popular articles of 2021, a clear trend emerged: “what’s wrong with machine learning today.”


All aboard the AI hype train

No one needs to be reminded that we’re still in the hype phase of AI. As tweeted by Michael McDonough, global director of economic research and chief economist, Bloomberg Intelligence, public mentions of artificial intelligence on earnings calls have ballooned since mid-2014:


Nor has this trend slowed since McDonough tweeted that in 2017. If anything, it has increased. 

Yet even as C-level executives keep finding it advantageous to oversell how AI is impacting their businesses, the folks actually charged with making AI work have been less sanguine. As revealed in Anaconda’s State of Data Science 2021 report, the biggest concern data scientists have with AI today is the possibility, even likelihood, of bias in the algorithms. There also remains a significant shortage of personnel capable of helping organizations maximize the value they derive from data. And even when companies do have the right talent on staff, getting value from AI investments can remain elusive, as I’ve detailed. Small wonder, then, that some suggest “The promise of true artificial general intelligence … remains elusive. Artificial stupidity reigns supreme.” (Disclosure: my IP law professor brother, Clark Asay, wrote that and, yes, I sort of like him.)

So AI has a ways to go. We knew this, right? But what are the specific concerns of the technical folks closest to AI deployments?


What could go wrong?

The most popular article is eminently practical in its focus: money. Or, rather, the diminishing returns associated with paying for AI improvement. The tl;dr? The computational and energy costs required to train deep learning systems may be higher than the benefits derived from them. Much higher. Here’s the money quote: “to halve the error rate, you can expect to need more than 500 times the computational resources.” And the longer version: “the good news is that deep learning provides enormous flexibility. The bad news is that this flexibility comes at an enormous computational cost.”

Seems bad. Is bad.
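To get a feel for where a figure like 500× can come from, note that empirical scaling studies model training compute as growing with a power of the inverse error rate. The sketch below is purely illustrative: the exponent of 9 is a hypothetical value chosen so the arithmetic matches the article’s “more than 500 times” claim, not a number taken from the article itself.

```python
# Illustrative power-law scaling sketch: assume compute ~ (1/error)**k.
# The exponent k = 9 is a hypothetical assumption for illustration only;
# it's picked so that halving the error lands near the quoted "500x".

def compute_multiplier(error_reduction_factor: float, k: int = 9) -> float:
    """Factor by which training compute grows when the error rate
    shrinks by the given factor, under the assumed power law."""
    return error_reduction_factor ** k

# Halving the error rate means an error reduction factor of 2:
print(compute_multiplier(2))  # 2**9 = 512, i.e. "more than 500 times"
```

The point of the exercise isn’t the exact exponent; it’s that under any steep power law, modest accuracy gains translate into wildly disproportionate compute (and energy) bills.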

Of the other nine most popular AI-related articles on IEEE Spectrum for the year, three were positive (about, for example, how Instacart uses AI to drive its business), one was neutral (a series of charts that offer a view into the current state of AI) and five more were negative:

  • On the uncertain future of AI (“Today, even as AI is revolutionizing industries and threatening to upend the global labor market, many experts are wondering if today’s AI is reaching its limits”).

  • Renowned machine learning pioneer Andrew Ng on the difference between test and production (“Those of us in machine learning are really good at doing well on a test set but unfortunately deploying a system takes more than doing well on a test set”).

  • An article on the exciting potential and “deeply troubling” reality of GPT-3, detailing “the potential danger that companies face as they work with this new and largely untamed technology, and as they deploy commercial products and services powered by GPT-3.”

  • An interview with Jeff Hawkins, inventor of the Palm Pilot, on why “AI needs much more neuroscience” to be useful.

  • A listicle of sorts, one that captures seven ways that AI fails (“Neural networks can be disastrously brittle, forgetful, and surprisingly bad at math”).

If anything, these curmudgeonly views on AI realities should make us all hopeful, not despondent. If you read through the articles, there’s a strong belief in the promise of AI, tempered by an understanding of the limitations that need to be overcome. This is precisely what we should want, rather than an overly optimistic stance that overlooks these roadblocks. The fact that these articles were most popular with the people most likely to be deploying AI within the enterprise is a sign of a rational approach to AI, rather than irrational exuberance.

Disclosure: I work for MongoDB but the views expressed herein are mine.
