

COGNITIVE COMPUTING: Is neuromorphic AI the next big thing?


There may be a lot of hype about cognitive computing and artificial intelligence (AI), but the numbers back up the optimism. IDC reported that spending in that market reached $12 billion in 2017, an increase of nearly 60 percent over 2016. Predictions for the next four years are equally strong, with a slightly lower annual growth rate over that period but larger total market sizes. IDC predicts that by 2021, worldwide spending will reach $57.6 billion, reflecting an annual growth rate of about 50 percent. Retail and banking will lead the market in dollars spent, with manufacturing and healthcare following.

IDC expects automated customer service agents and diagnostic and treatment systems to be the largest application areas. Intelligent processing automation is forecast to grow rapidly to become the third largest use case by 2021. Expert shopping advisors and product recommendations are predicted to nearly double each year, as are public safety and emergency response applications. About half the expenditures will be for software, including cognitive platforms, with services and hardware comprising the remainder. Notably, even hardware, forecast to grow more slowly than the other two categories, is expected to increase 40 percent per year.

Despite the many successes that cognitive computing has had, researchers see potential for improvement and are pushing forward to develop more capabilities. Among the cutting-edge technologies of interest are neuromorphic systems, which are modeled after the human brain. They have the potential to be more generalizable than existing cognitive systems, which are often limited to a narrow domain. That could allow for more use cases to be developed from the same system.

In addition, current cognitive computing systems often require a very large body of information to be ingested before they have a knowledge base large enough to develop a model against which new data can be compared and classified. For many potential applications, the data may simply not be available. Training time is often extensive. The human brain is much more efficient: after just a few examples, a toddler can recognize a dog; the same cannot be said for image recognition systems. Finally, no existing computer system can match the human brain in efficiency of power consumption.

So, the motivation is there, but can a computer ever simulate the effects of the billions of neurons and trillions of synaptic connections found in the brain? Neural networks were supposed to do that, at least to some extent, but as it turns out, they are not an ideal model of the brain. For those of us who never quite got a grip on neural network technology, that may come as something of a relief. Numerous other approaches are under development, and although they too are complex, they hold promise for supporting cognitive computing in new ways that more closely reflect the functioning of the human brain.

Synaptic model provides advice

Nara Logics has developed technology for creating neural network connections that emulate the brain more closely than traditional neural networks do. Its technology uses models and mathematics similar to those that brain circuits employ to make their calculations. Jana Eggers, CEO of Nara Logics, says, “Whereas deep learning neural nets are used for optimization and pattern recognition, our technology is designed for decisions: The car recognizes that there’s a ball rolling into the street. Does it swerve or brake? The answer depends on many conditions. The system takes in data of various types, makes connections (both explicit and inferred) and provides output in the form of a scored list of answers for a particular context or use case. A machine can then decide based on these answers, or a human can.”

In traditional neural networks, an input such as a digital picture is provided and processed through multiple layers of nodes. An output layer provides the answer, which could be that the image is a house or an animal. At each layer in between, each node performs a calculation that further refines the neural network’s interpretation of the image; these intermediate layers are referred to as hidden layers. Training involves processing many examples, indicating which outputs are correct, and adjusting the weights at the nodes so that the probability of a correct interpretation is maximized.
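
To make that structure concrete, here is a minimal sketch in Python (assuming NumPy is available) of the traditional approach just described: a tiny network with one hidden layer learns XOR by repeatedly adjusting its weights. It illustrates the general technique only, not any vendor’s code.

```python
# A minimal feedforward network: an input layer, one hidden layer, and
# an output layer, trained by adjusting weights until correct outputs
# become more likely. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR, the classic problem that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(hidden @ W2 + b2)  # output-layer interpretation

    # Backpropagation: nudge every weight to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0, keepdims=True)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```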

In Nara Logics’ synaptic networks, each node represents an object or a feature; a node is not just a weighted step on a path to an outcome developed by training on a large number of examples. The connections between the synaptic nodes indicate whether the objects or features at those nodes are related. When an unknown case is presented, certain nodes are activated and others are inhibited, as they are in the human brain. Other similar nodes can then be activated to form inferences. Learning rules provided by humans set the weights on the connections.
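
Below is a conceptual sketch in Python of how such a network could be expressed. It is our illustration of the ideas described here (named nodes, signed connection weights, activation, and inhibition), not Nara Logics’ implementation; the class name, products, and weights are all hypothetical.

```python
# Conceptual sketch: nodes are named objects/features, and signed edge
# weights encode how related they are. Positive weights excite,
# negative weights inhibit. Not Nara Logics' actual code.
from collections import defaultdict

class SynapticNet:
    def __init__(self):
        # edges[node] -> list of (neighbor, weight). Weights come from
        # human-provided learning rules, not from mass training runs.
        self.edges = defaultdict(list)

    def connect(self, a, b, weight):
        self.edges[a].append((b, weight))
        self.edges[b].append((a, weight))

    def score(self, evidence):
        # Activate the evidence nodes, then let one step of spreading
        # activation excite or inhibit their neighbors.
        activation = defaultdict(float)
        for node in evidence:
            activation[node] += 1.0
        for node in evidence:
            for neighbor, weight in self.edges[node]:
                activation[neighbor] += weight
        # The output is a scored list of answers for this context.
        return sorted(activation.items(), key=lambda kv: -kv[1])

net = SynapticNet()
# Hypothetical skin care wiring, loosely echoing the Olay example below.
net.connect("dry skin", "hydrating cream", +0.9)
net.connect("dry skin", "exfoliating scrub", -0.6)  # inhibitory link
net.connect("hydrating cream", "night moisturizer", +0.5)

for node, points in net.score(["dry skin"]):
    print(f"{node}: {points:+.2f}")
```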

“This structure lends itself very well to discovering options in data,” Eggers continues. “For example, we work with Procter & Gamble on their Olay Skin Advisor to provide consumers with a personalized skin care regimen. It’s like a beauty consultant in your hand, as you take a photo, answer a few questions and get a skin care routine that fits your skin age, needs and habits.” The synaptic network has a wiring diagram of how the products relate to each other, skin care needs and skin features. “An important aspect of Nara Logics’ technology is that we did not have to program the expertise to be able to personalize the results; it was already in the product data,” Eggers adds. “Nor does it need to be trained on large quantities of data to produce the optimal outcome.”

Another distinction from traditional neural networks is that the path to the conclusion is transparent. “It is possible to see each node that is activated and each one that is inhibited, so the developer can provide the user with the ‘why’ behind an answer,” Eggers explains. “Unlike the black boxes of most deep learning systems, our synaptic network provides a rationale about the logic driving each possible answer. So, for example, the person using Olay Skin Advisor can see which product is based on discovered skin age or focus area and which is for the biggest concern or to be used at night. Each product in the regimen is shown with the reasons why it fits her specific skin care needs.” The program produces the results and the developer chooses how to display the information.
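
In the sketch’s terms, producing that “why” amounts to recording which links excited or inhibited an answer. Here is a stripped-down, standalone version of the same idea; the trace format is hypothetical, not the Olay Skin Advisor’s actual output.

```python
# Transparency in the sketch's terms: list the links that excited or
# inhibited a given answer node. (Hypothetical trace format.)
edges = {
    "dry skin": [("hydrating cream", +0.9), ("exfoliating scrub", -0.6)],
}

def explain(evidence, answer):
    reasons = []
    for node in evidence:
        for neighbor, weight in edges.get(node, []):
            if neighbor == answer:
                verb = "excited" if weight > 0 else "inhibited"
                reasons.append(f"'{node}' {verb} '{answer}' ({weight:+.2f})")
    return reasons

print(explain(["dry skin"], "hydrating cream"))
# -> ["'dry skin' excited 'hydrating cream' (+0.90)"]
```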

“Our brains deal with options every day. We know that there is rarely ‘one answer,’” says Eggers. “Our platform returns the different options based on data that is built over time.” That is another similarity the Nara Logics synaptic network has with the brain. Traditional neural networks must be retrained when new data is presented; when a new type of data becomes available, the model has to be rebuilt from the ground up. The synaptic network instead continues to add data as it becomes available, constantly adjusting the weights between nodes and building new inferences.
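
Continuing the SynapticNet sketch from earlier (again our illustration, with hypothetical nodes and weights), incorporating new knowledge amounts to wiring in nodes and connections as data arrives, with no retraining pass over the old examples.

```python
# New data arrives: wire in a node and its connections in place,
# rather than rebuilding the model from the ground up. (Illustrative;
# reuses the `net` object from the SynapticNet sketch above.)
net.connect("sensitive skin", "fragrance-free serum", +0.8)
net.connect("sensitive skin", "exfoliating scrub", -0.7)  # inhibitory

print(net.score(["dry skin", "sensitive skin"])[:3])
```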

Nara Logics’ technology is based on work conducted at the Department of Brain and Cognitive Sciences at MIT and the MIT Computer Science and Artificial Intelligence Laboratory. “There has been a lot of cross-pollination in these fields, but more is needed to get the best of the worlds of neuroscience, computer science and mathematics feeding the AI possibilities,” Eggers says.
