Picked up from the podium
The O’Reilly AI Conference in the fall served up an AI smorgasbord in Manhattan. In this column, I highlight some of the people, ideas and themes that emerged from the podium at that event, as both speakers and attendees engaged with the topics and technologies that surround “AI” today.
We continue to struggle with the resurgence of the term “AI” throughout the mainstream press. As we have pointed out before in this column, simplistic definitions erect big hurdles to useful understanding of the field. Artificial intelligence is a large umbrella term that includes machine learning of all types, digital assistants, conversational systems, the Internet of Things, image and speech recognition, emotion and sentiment detection, cognitive computing, robotics and more. Within the many points of view represented across this breadth of AI topics, several primary themes bubbled to the surface in the conference presentations I heard. Perhaps the most interesting is the growing sense that we are turning from dreams to reality. What works? What doesn’t? How much manual effort is involved in automating work? Can we eliminate the man behind the AI curtain?
Here are a few clips of interesting points from podium speakers.
Peter Norvig, Google. Machine learning is a sort of black box. It’s fast, but it makes spectacular mistakes, and it’s hard to figure out why. Because it’s not modular, it’s hard to isolate one part and debug it. Often, it’s not the code that’s the problem but inaccurate data. To make matters worse, the data keeps changing. Norvig called for new tools for this non-traditional kind of programming. This is relatively uncharted territory: how do you conceive of the tools you will need to build a system when you’re not sure what you want it to do in the future? Bottom line, however: “Machine learning is the worst possible system … except for all the others.”
Tim O’Reilly, O’Reilly Media. He was upbeat about AI and its potential to “change the game” of business. He cited complex problems that will always arise when we change the nature of the relationship between people and their computer tools, and he expects that we will never run out of jobs. O’Reilly concluded by asking: “Whose black box do you trust?” This issue of trust is one of the primary themes that kept bubbling up across presentations. Technology is complex and the popular media tend not to understand why things do or do not work. We need to do a better job of explaining how things work, together with engaging in a more open discussion of how we get the results that we do. Technology must stop being the mysterious bogeyman in the closet.
Genevieve Bell, Intel. “AI is just another manifestation of what it is to be human.” Bell is an anthropologist and struck another of the emerging themes of the conference: that we need to take into account cultural expectations when we design products that people will want to use. AI may have started as an all-knowing black box that could run the world. It’s becoming apparent as AI elements are embedded increasingly in devices and software that some things work and some don’t. Context and culture differ from one person to the next and from one moment to the next. They govern usefulness and therefore adoption of technology.
Intel, as a major sponsor of the conference, was very much in evidence. The company is investing heavily in AI, both by acquiring companies like Saffron, Movidius and Nervana, and by developing computational architectures better suited to AI workloads. It has released its Data Analytics Acceleration Library and Math Kernel Library to open source with a view to helping scale and optimize AI processes and products.
Other featured speakers at the conference were Lili Cheng of Microsoft Research and Oren Etzioni, director of the Allen Institute for Artificial Intelligence. Their views offer contrasting perspectives: AI product design and development lessons from Microsoft and some hard data about expectations for market development from the AI think tank.
Lili Cheng, Microsoft Research. Microsoft’s bot, Xiaoice, now with 40 million users, was rolled out first in China and then Japan. Like Amazon’s Echo, it has a suite of engaging, amusing and useful characteristics that keep people coming back. Unlike the Echo, it is stateful and conversational. In addition to just chatting for fun, it can translate, schedule meetings, answer math problems (and joke about them), recognize dogs and count sheep for you when you can’t go to sleep. Microsoft’s attempt to bring the technology to the United States as Tay.ai, however, was an instant and embarrassing disaster. Cultural hurdles, as well as back doors in the software, can send a bot off the culturally permissible rails. Microsoft has since released the underlying technology as an open source bot framework (dev.botframework.com). A high hurdle remains: building a cross-channel bot that works across walled gardens. This has been one of the major challenges for cognitive computing: automating the ingestion of data from multiple sources in different formats, with differing terminology.