Thoughts On Artificial Intelligence Conferences For Business Audiences

I’ve attended a few conferences on artificial intelligence (AI), but one last week drove home some concepts worth discussing. While it might seem I’m picking on The University of Texas at Austin’s McCombs School of Business, I’m not; their CATT 2021 Global Analytics Summit simply crystallized the ideas. The key problem is that conference organizers don’t seem to be clearly differentiating between two different business audiences.

There are two very different business audiences interested in AI. The first is the IT organization, focused on implementing AI. They require more technical information and are more likely to be interested in concepts coming from academia that can be adapted to AI implementations. On the other side are the businesspeople working to see how AI can solve business problems. However, it’s not just about how AI can improve accounts payable, sales, and other departments; they also need to understand the legal, compliance, and market impacts of AI. Theory doesn’t have as much impact; business needs a practical understanding of adapting the technology. As my interest is in helping the latter group, that will be my focus in this discussion.

That brings us to the CATT conference. The topic was the explainability of artificial intelligence, the idea of telling people what AI systems are doing. Unfortunately, it wasn’t clear which group was the target audience for the conference, and the lineup of presenters showed that. It was a mix of academics and people working in businesses, and the messages coming from the two were very different.

The first speakers were almost all academics, and I repeatedly heard the phrase that “explainability is neither necessary nor sufficient.” The first person to say differently was, no surprise, the first person who works for a living. Alice Xiang, Sr. Research Scientist and Head of AI Ethics Office, Sony Group, pointed out that while explainability is not sufficient, it is certainly necessary. Ms. Xiang was also the first to differentiate explainability per the two groups mentioned above. The first kind is understanding the model at a technical level, something IT needs in order to present the systems as something business can use. The second is getting business users to trust the results of the system, which requires a different type of explainability.

Alice Xiang also had a great comparison to help non-technical people understand levels of explainability. There’s an old study in behavioral science around Clever Hans, a horse that was thought to be able to count. Then there are drug-sniffing dogs. People were eventually able to explain exactly how Clever Hans worked: he didn’t count but was responding to physical cues from his handlers, and that explanation showed he didn’t do what was claimed. On the other hand, we still can’t explain exactly how the dogs differentiate the scents of drugs, but repeated testing shows their results are highly accurate. We don’t know exactly how they do it, yet we can explain the training, testing, and performance of the dogs well enough to provide trust in their performance.

The next interesting speaker was Anand Rao, Principal and Global AI Lead, PricewaterhouseCoopers. One of the first things he said was a reminder that AI is only part of a complete business solution. He also had a real-world example of the complexity of stakeholders in the use of AI for auto damage estimation. Rather than spend a lot of lines here, I’ll point you to an article I published earlier this year on the same subject.

There were some other good presentations, and what I noticed was that the people who spoke best at the business level were the people who work in business. That shouldn’t be a surprise, but it’s something conference organizers should consider.

Decide on Your Audience

The upshot of the series of conferences, culminating in last week’s, is that organizers need to better define the audiences they wish to reach. For a webinar or short conference, a single audience must be chosen. The McCombs CATT conference was two half days, far too short to reach both IT and business audiences. The organizers would have better served attendees by focusing on one or the other. A larger conference can address both, but in separate tracks.

The other key issue is how to address those audiences. As said up top, IT needs more technical information, and early thoughts on concepts can help. For explainability, that means academics have a place: they can provide the theory and early practice that lead programmers and technical analysts to a better understanding of explainability. However, they do need to be kept in check. Their theories should be vetted by people with real-world knowledge, so that they aren’t sending their audience in a different direction than the one the business audience is given. The nonsense about explainability not being necessary is only one example.

The business audience needs to hear from other businesses. Very few businesses, especially mid- to large-sized companies, want to be early adopters. Academic institutions might know that in theory but often forget it in the excitement of new things. The speakers mentioned above were very good on the business side of explainability. So, even though their comments won’t be discussed here for brevity, were Christoforos Anagnostopoulos, Senior Principal Data Scientist, McKinsey & Co., and Nazneen Rajani, Research Scientist, Salesforce Research. Ms. Rajani is a great example: she sounds like a recent graduate, but her short time working in an R&D group at a public company made her message far more interesting to business than that of her academic adviser, also a presenter.

For Those Interested in Attending

So, you’re interested in knowing more about AI for business? The same advice I gave the organizers applies to you.

If you are in IT, looking at how to create your own AI systems or incorporate those of others, you want a mix of presenters that leans more towards academia and R&D groups. You can get into the technical aspects and see whether what is coming out of both types of research organizations can be leveraged. At the same time, there still need to be some strong, real-world examples of implementation.

If you are a business line manager or in the CxO suite, you want to hear what is actually being accomplished with artificial intelligence, with strong, relatable examples. You also want to hear about the non-technical implications of the technology: the legal, ethical, and other impacts on the company driven by market and regulatory reactions. You’ll want to attend something far more heavily weighted toward people from business. Theories from academia might be interesting to hear, but you might also spend too much time shaking your head, thinking that what’s being said doesn’t match what’s going on in business.

The McCombs conference was well intentioned, but it’s an example of what can go wrong with a short conference that doesn’t focus on an audience. It tried to be too many things for too many people. They are not the only ones, and hopefully all will improve going forward. It will help if business attendees take a look before, and provide clear feedback after, each conference to help organizers improve.

Steve Liem
