
Smarter software is coming … just slowly


Microsoft showed off its Project Adam in July 2014. Harry Shum, Microsoft’s executive VP of technology and research, demonstrated the progress in machine learning and artificial intelligence. At the Microsoft 2014 Faculty Summit, he said: “If you’re a dog lover, you become adept at identifying dogs. But what if your smartphone could identify them faster than you? Imagine this: You point your phone at a dog and ask it, ‘What kind of dog is this?’ and it identifies the exact breed.”

The goal of Microsoft’s Project Adam is to use software to recognize “any object.” (See blogs.microsoft.com/next/2014/07/14/microsoft-research-shows-advances-artificial-intelligence-project-adam.) The Microsoft approach relies on high-performance computing and large-scale distributed systems. Project Adam recognizes images after the Adam neural network has been trained. Adam’s neural network consists of more than two billion connections. Microsoft asserts, “This scalable infrastructure is twice more accurate in its object recognition and 50 times faster than other systems.”
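Microsoft has not published Project Adam's code, but the general pattern described here (train a large neural network once, then use it to recognize objects in new images) can be sketched with off-the-shelf tools. The Python fragment below is a minimal, hypothetical illustration using a pretrained torchvision ResNet-50, not Adam's distributed architecture; the file name dog.jpg is a stand-in for any photo.

# Minimal sketch of "train once, then recognize": classify one image
# with an off-the-shelf pretrained convolutional network. This is NOT
# Project Adam's system; it only illustrates the pattern the article
# describes.
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical input file; substitute any photo of a dog.
image = Image.open("dog.jpg").convert("RGB")

# Standard ImageNet preprocessing for ResNet models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    probs = torch.softmax(logits, dim=1)

# Report the most likely ImageNet category and its probability.
top_prob, top_class = probs.max(dim=1)
labels = weights.meta["categories"]
print(f"Predicted: {labels[top_class.item()]} ({top_prob.item():.1%})")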

New types of computing sought

With brilliant young mathematicians pushing the boundaries of mathematical methods and large companies investing in fundamental research for intelligent systems, the future has arrived. Or has it? The answer is, “Maybe.”

The reality is that sophisticated mathematical procedures require appropriate machine resources. Today's computers may not be up to the task of running an advanced series of mathematical processes against datasets measured in terabytes and petabytes. Processing the hundreds of thousands of messages flowing through Twitter each day consumes significant computing resources. Few organizations have the funds to provide unlimited computing horsepower to grind through flows of tweets, e-mails and voice telephone calls.

The new types of computing that Google, IBM and Microsoft hope to move from the research lab to a production environment are easy to envision. It will take time, however, to create the programming tools, debugging routines and programs that use advanced mathematical methods to wring meaning from text or a stream of social media postings.

From the earliest days of enterprise content processing, many vendors integrated smart software into their commercial systems. Fulcrum Technologies was able to categorize content, not just index keywords. TeraText included analytics routines that, when properly resourced and configured, would identify trends within the documents in its index. Both of those systems date from the late 1970s and early 1980s. By 1996, Autonomy was using declassified mathematical methods based on Bayesian, Laplacian and Monte Carlo techniques to identify related content and flag emerging themes in the content processed.
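Autonomy's actual implementation was proprietary, but the flavor of Bayesian content categorization is easy to sketch. The Python fragment below is a generic, hypothetical example: a multinomial naive Bayes classifier trained on a toy corpus with scikit-learn, not Autonomy's method. The documents and category labels are invented for illustration.

# Generic naive Bayes text categorization sketch, in the spirit of the
# probabilistic methods the article attributes to early vendors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data: documents and their (hypothetical) category labels.
docs = [
    "quarterly earnings revenue forecast",
    "merger acquisition shareholder vote",
    "server outage network latency patch",
    "database index query optimization",
]
labels = ["finance", "finance", "it", "it"]

# Turn each document into a bag-of-words vector.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Fit the Bayesian classifier on the labeled examples.
classifier = MultinomialNB()
classifier.fit(X, labels)

# Classify a new document and report the posterior probabilities.
new_doc = ["shareholder revenue outlook"]
probs = classifier.predict_proba(vectorizer.transform(new_doc))[0]
for category, p in zip(classifier.classes_, probs):
    print(f"{category}: {p:.2f}")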

Ninety-nine percent accuracy?

If we flash forward to today, dozens of vendors emphasize that their systems understand the content they process. Other vendors point to their systems' ability to predict likely events from real-time content acquired and processed by sophisticated chains of computations.

The accuracy of content processing and knowledge extraction systems has improved. A properly configured system can index content with 80 to 90 percent accuracy. With a breakthrough in computing architecture, the hope is that accuracy can move to 95 to 99 percent.
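Accuracy figures of this kind are typically computed by comparing the categories a system assigns against a human-curated gold standard. The Python sketch below shows the arithmetic on a hypothetical five-document sample; the documents and labels are invented for illustration.

# Sketch of how an indexing accuracy figure such as "80 percent" is
# commonly measured: system-assigned categories vs. a human gold standard.
gold = {
    "doc1": "finance",
    "doc2": "health",
    "doc3": "energy",
    "doc4": "finance",
    "doc5": "health",
}
system = {
    "doc1": "finance",
    "doc2": "health",
    "doc3": "finance",   # misclassified
    "doc4": "finance",
    "doc5": "health",
}

correct = sum(1 for doc, label in gold.items() if system.get(doc) == label)
accuracy = correct / len(gold)
print(f"Indexing accuracy: {accuracy:.0%}")   # 80%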

Much work must be done to achieve that level of quality in a consistent manner.

In practical terms, I have experimented with the voice-to-text and automatic search functions on Apple, Android and Microsoft phones. The systems work well when simple requests are fed into one of those consumer products. However, when a more complex query involving an unfamiliar name or an obscure location is passed to one of those systems, today's smartest mobile search systems struggle and often fail.

Bold demonstrations of smart software work well in controlled circumstances. One excellent example is IBM Watson's performance on general knowledge questions on the Jeopardy television show. However, when Watson is put to commercial, real-world use, IBM relies on partners to refine specific applications. Smart systems in use by government agencies work brilliantly in controlled tests. However, when deployed, those systems missed the Ebola epidemic and the rise of the Islamic State of Iraq and the Levant. Those are but two examples of how predictive content processing systems have missed fairly significant developments.

Three years after IBM demonstrated Watson, a “freemium” Watson app is expected to be available in November 2014. (See computerweekly.com/news/2240230659/IBM-unveils-natural-language-based-cognitive-service-Watson-Analytics.) Bob Picciano, an IBM senior VP, is quoted by Computer Weekly as having said: “Watson Analytics is designed to help all businesspeople—from sales reps on the road to company CEOs—see patterns, pursue ideas and improve all types of decisions. We have eliminated the barrier between the answers they seek, the analytics they want and the data in the form they need. The combination of Watson-fueled analytics to magnify human cognition, the vast potential of big data and cloud-scale delivery to PCs, smartphones and other devices is transformational.”

Based on my experience, incremental improvements in computing methods, hardware and ancillary systems are continually occurring. Cheerleading is important, but cheerleading is not software that works. Better search, knowledge access and content processing are coming—just slowly. Very sloooowly. 
