The flip side of generative AI: Extractive AI

No doubt about it, generative AI (GenAI) has been the tech story of the year. A session at the AI Governance Alliance of the World Economic Forum 2024 referred to it as “The Steam Engine of the Fourth Industrial Revolution.” Since its release in November 2022, ChatGPT has grown to more than 1.3 billion visits per month. It’s a perfect example of the dizzying speed at which today’s technology is evolving.

You could say it’s akin to the historic moment when the sound barrier was broken. GenAI’s performance on standardized tests, measured against human performance benchmarks, continues to improve, with GPT-4 reaching the 95th percentile on the SAT and the 90th percentile on the LSAT and bar exams. Yet as smart as these systems are, they can still act rather dumb. Some well-known, and laughable, assertions include that there are 24.3 billion tulips in the world and that Abraham Lincoln designed the Lincoln model car. (The Lincoln Motor Company was actually founded by Henry Leland, an admirer of the president.) These are just a few of the many “hallucinations” that GenAI produces on a regular basis.

If GenAI is so smart, how can it produce so many wrong answers? Part of the answer lies in the term itself: “generative” means exactly that, generating entirely new output from scratch. A simple analogy is taking a closed-book exam. All you have is a blank sheet of paper and the knowledge you’ve accumulated in your head about the subject. If you don’t know the correct answer, you take a guess. If you’re really stumped, you might even make something up out of whole cloth rather than leave the answer blank. That, in essence, is the hallucination we’ve been observing.
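
To make that closed-book guessing concrete, here is a minimal sketch of purely statistical generation. It is a toy, not any production model: it picks each next word from bigram frequencies in a tiny made-up corpus, with no notion of whether the result is true.

```python
# A toy bigram generator (hypothetical corpus): each next word is a
# statistical guess, chosen with no regard for whether it is a fact.
import random
from collections import defaultdict

corpus = ("lincoln designed the lincoln car . "
          "leland designed the lincoln car .").split()

# Count which words follow which in the corpus.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

random.seed(1)
word, output = "lincoln", ["lincoln"]
while word != "." and len(output) < 8:
    word = random.choice(bigrams[word])  # likelihood, not truth
    output.append(word)

print(" ".join(output))  # fluent-sounding, but ungrounded in fact
```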

There’s also the nagging problem of plagiarism: repeating the correct answer to a question verbatim without stating where it came from. Notable examples include the recent copyright infringement lawsuits filed by The New York Times and popular authors such as John Grisham and David Baldacci against OpenAI for using their published material to train its large language models (LLMs). What, if anything, can we do to correct these shortfalls? The answer lies in an overlooked technology that has been around for decades. I’m talking about GenAI’s forgotten twin … extractive AI.

Filling in the gaps

Extractive AI takes a more comprehensive and transparent approach to machine intelligence. On the input side, instead of relying on the statistical likelihood of one word preceding or following another, it performs a more sophisticated derivation of meaning and intent, often using layered reference ontologies. This results in a more descriptive, as opposed to predictive, enumeration of the problem space.
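
To ground that description, here is a minimal sketch of intent derivation over a layered reference ontology. The tiny ontology, the query terms, and the function names are all hypothetical, not any particular vendor’s implementation.

```python
# A layered reference ontology (made up for illustration):
# each concept points to the more general layer above it.
ONTOLOGY = {
    "sedan": {"is_a": "automobile"},
    "automobile": {"is_a": "vehicle"},
    "vehicle": {"is_a": "artifact"},
    "tulip": {"is_a": "flower"},
    "flower": {"is_a": "plant"},
}

def ancestors(concept: str) -> list[str]:
    """Walk up the layers to enumerate what a term means."""
    chain = []
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]["is_a"]
        chain.append(concept)
    return chain

def describe(query_terms: list[str]) -> dict[str, list[str]]:
    """Descriptive enumeration of the problem space: map each
    recognized term to its full conceptual lineage, rather than
    predicting which word is likely to come next."""
    return {t: ancestors(t) for t in query_terms if t in ONTOLOGY}

print(describe(["sedan", "tulip"]))
# {'sedan': ['automobile', 'vehicle', 'artifact'], 'tulip': ['flower', 'plant']}
```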

The inference mechanisms available to operate on those problem spaces are even more numerous (case-based reasoning, recommender systems, and critiquing systems, to name just a few). Depending upon the nature of the problem, the system chooses the most appropriate matching algorithms, selects and ranks specific fragments of relevant knowledge, and applies rules of aggregation to assemble a response. Hallucinations are replaced with a simple “I don’t know” or “Please provide more information.” And because AI is mostly about supporting decision making, extractive AI adjusts its recommendations based on whether the decision environment is characterized by risk or uncertainty.
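
A toy sketch of that pipeline, with an illustrative corpus, a deliberately naive lexical matching function, and an assumed confidence threshold, might look like this:

```python
# Extractive answering sketch: score knowledge fragments against a
# query, rank them, aggregate the best, and refuse to answer below a
# confidence threshold. Corpus, scorer, and threshold are illustrative.

def score(query: str, fragment: str) -> float:
    """Naive word-overlap match; real systems use richer algorithms."""
    q, f = set(query.lower().split()), set(fragment.lower().split())
    return len(q & f) / len(q) if q else 0.0

def answer(query: str, corpus: list[str], threshold: float = 0.5) -> str:
    ranked = sorted(corpus, key=lambda frag: score(query, frag), reverse=True)
    top = [frag for frag in ranked if score(query, frag) >= threshold]
    if not top:
        # Nothing clears the bar: admit it instead of hallucinating.
        return "I don't know. Please provide more information."
    # Rule of aggregation (here, simply join the best fragments).
    return " ".join(top[:2])

corpus = [
    "The Lincoln Motor Company was founded by Henry Leland.",
    "Tulips are cultivated worldwide.",
]
print(answer("who founded the lincoln motor company", corpus))
# -> "The Lincoln Motor Company was founded by Henry Leland."
print(answer("how many tulips exist in the world", corpus))
# -> "I don't know. Please provide more information."
```

Swap in richer matching and aggregation rules and the shape stays the same: the system either assembles an answer from ranked, traceable fragments or admits it doesn’t know.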
