Pushing artificial intelligence ahead: the practical business value of cognitive visualizations
Extended reality—the overarching term encompassing augmented reality, virtual reality and mixed reality—is the cognitive visualization branch of artificial intelligence (AI). The relationship between cognitive visualizations and AI is symbiotic. Almost any variety of smart glasses leverages AI-powered speech or gesture recognition, while extended reality (XR) technologies offer the most complete way to visualize intricate neural networks and their prescriptive analytics results. Globally, the smart glasses market is projected to reach $20 billion by 2020.
Visualizations enable organizations to view data in motion or at rest, interact with it dynamically and discover relationships within it through varying levels of immersive, self-service experiences. They drive a growing number of use cases that yield tangible business value to the enterprise today.
Recurring use cases for cognitive visualizations rest largely on two low-latency advantages: they let organizations gather all relevant data in a single place regardless of distance or time, and they help avert conventional blind spots and biases during analysis. How they do so differs according to the particular technology involved and its degree of situational awareness:
♦ Virtual reality—When applied to data management and visualizations, VR provides a fully immersive digital experience that surrounds users with the relationships between data.
♦ Augmented reality—AR overlays digital imagery (including data) on the real world, supplementing reality with germane facts for workflows. According to Jason Haggar, VP of global partner & developer programs at DAQRI, AR has multiple immersion levels. The apex is “a fully immersive augmented reality experience that provides info in a digital display and also context-sensitive information that’s aware of your location.”
♦ Mixed reality—MR delivers a hybrid environment between the actual and digital world with “slightly less awareness of your surroundings and blending that with whatever experience you’re trying to put in front of the viewer,” according to Geof Wheelwright, director of marketing communications at Atheer. Many consider AR a subset of MR.
♦ Extended reality—Although XR is the general reference term for all visualizations, it’s distinct from MR because it creates “a hybrid real-world digital experience where the digital content is aware of and can physically interact with the real world,” says Eric Mizufuka, product manager of augmented reality eyewear at Epson America. According to Mizufuka, MR only provides visual interactions.
Cognitive visualizations can radically transform any business process related to remote access, which frequently includes aspects of videoconferencing, training, fieldwork and workflow monitoring. By removing the distance and temporal issues involved, they significantly reduce expenditures for use cases involving maintenance and repairs. Their deployment alongside other AI techniques can deliver equal value for fundamental data management staples such as data visualization, search and data discovery, decreasing costs and time for any number of products, services and business functions.
Artificial intelligence technologies
Although use cases for cognitive visualizations abound horizontally in industries such as finance, healthcare, life sciences and law enforcement, they’re gaining the most credence in the industrial and manufacturing sectors. Of particular note within this vertical is the hands-free nature of sophisticated XR options, which don’t require users to constantly glance at handheld devices such as smartphones for access. AI is crucial because speech, voice and gesture recognition hinge on advanced neural networks rapidly analyzing data and responding to real-time signals as input modes. Voice and speech capabilities also involve natural language processing (NLP), which Paul Boris, COO of Vuzix, describes as a “bridge” between XR and AI. “The NLP engine translates the request to something the AI engine can understand; the AI engine analyzes the options and returns recommendations in the form of data, suggested videos or activities,” Boris explains.
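To make the “bridge” Boris describes more concrete, the following is a minimal Python sketch of that pattern under stated assumptions: a free-form request is reduced to a structured intent (the NLP step), and an analytics engine ranks candidate recommendations against it (the AI step). Every function, intent name and catalog entry here is a hypothetical illustration, not Vuzix’s or any other vendor’s actual API.

# Hypothetical sketch of the NLP-to-AI "bridge" pattern described above.
# A spoken request is reduced to an intent, handed to an analytics engine,
# and the engine returns ranked recommendations (data, videos or activities).
from dataclasses import dataclass

@dataclass
class Recommendation:
    kind: str      # e.g., "data", "video", "activity"
    target: str
    score: float

def parse_intent(utterance: str) -> dict:
    """NLP step: translate a free-form request into a structured intent."""
    text = utterance.lower()
    if "repair" in text or "fix" in text:
        return {"intent": "repair_guidance", "subject": text}
    if "history" in text or "log" in text:
        return {"intent": "asset_history", "subject": text}
    return {"intent": "general_search", "subject": text}

def recommend(intent: dict) -> list:
    """AI step: analyze the options and return ranked recommendations."""
    catalog = {
        "repair_guidance": [
            Recommendation("video", "bearing-replacement-walkthrough", 0.92),
            Recommendation("activity", "open-work-order", 0.71),
        ],
        "asset_history": [
            Recommendation("data", "sensor-readings-last-24h", 0.88),
        ],
        "general_search": [
            Recommendation("data", "knowledge-base-search", 0.50),
        ],
    }
    return sorted(catalog[intent["intent"]], key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    for rec in recommend(parse_intent("How do I fix the conveyor bearing?")):
        print(rec)

In practice, the parsing step would be handled by a trained language model rather than keyword rules, but the division of labor is the same: NLP produces a machine-readable request, and a separate engine decides what to surface in the wearer’s display.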
In such advanced settings, the prescriptive capabilities of AI are largely attributed to deep learning. Gary Oliver, CEO of Razorthink, says, “There’s a whole set of deep learning intelligence that can be plugged into business processes to augment human beings by looking at patterns in data and being able to recommend actions.”
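As a rough illustration of the pattern-to-action loop Oliver describes, the sketch below trains a small neural network on invented operational data and asks it to recommend an action for a new reading. The features, labels and decision rule are assumptions for demonstration only, not Razorthink’s implementation.

# Illustrative sketch: a small neural network learns patterns in operational
# data and recommends an action for new cases. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic history: columns are [vibration, temperature, hours_since_service]
X = rng.random((500, 3))
# Invented rule standing in for historical outcomes recorded by technicians.
y = np.where(X[:, 0] + X[:, 1] > 1.2, "schedule_maintenance", "continue_monitoring")

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

new_reading = np.array([[0.9, 0.8, 0.4]])
print("Recommended action:", model.predict(new_reading)[0])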
The hands-free nature of cognitive visualizations assists manual processes in several ways, especially for remote applications. Porsche Cars North America reduced service resolution time by 40 percent by enabling local dealerships to instantly communicate with remote experts via AR. “The remote experts get the benefits of saying I’ve now got a much better way of capturing the issues the dealers are facing and providing immediate feedback to them by using the see-what-I-see and videoconferencing within the glasses,” Wheelwright explains.
Field service work for valuable industrial assets benefits from the remote capabilities of AR, allowing distant experts to see situations while communicating with local personnel about them. According to Haggar, whether the use case involves a broken microscope in life sciences or a failure halting production on an oil drilling platform, “being able to do repairs with a fully immersive experience in which the support tech can see what a field service person is looking at, give them instructions and it’s all hands free,” is more cost-efficient and requires less downtime than flying support personnel onsite. The technologies deliver similar quick wins for training and product development meetings, accelerating ROI.