Will AGI be intelligent?

“AGI” should be relatively easy to define since all it does is insert “general” in the middle of a two-word phrase that’s very familiar to us. But, of course, it’s harder than that. For starters, Wikipedia says AGI is AI “that can perform as well or better than humans on a wide range of cognitive tasks. As opposed to narrow AI, which is designed for specific tasks.”

Aside from the bad grammar of that final sentence fragment, the entry seems unobjectionable but incomplete. Since just about every AI model we use is better at some cognitive task than we are—that’s why we use it—what’s so important about bundling them all together into one model? Maybe two things.

First, imagine a scenario in which AGI functions merely as a synthesizer of the results of querying today’s specialized AI models—a metasearch engine. That falls far short of an AGI that generates outputs from an integrated model. For example, we humans don’t pause to separately consult our understanding of animals, driving, physics, causality, biology, and ethics when making a split-second decision to swerve to avoid a deer in the road. Our response arises from a deeply integrated model of the world that applies concepts such as causality across numerous fields. In fact, our separate “models” of those multiple fields all depend on one another, and it’s taken thousands of years of history and culture for us to separate them. AGI’s holistic approach could not only enhance the accuracy and reliability of its decisions but also mirror the interconnectedness of the real world. We may have many models, but there is only one real world.
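To make the contrast concrete, here is a minimal sketch of the metasearch pattern in Python. Everything in it, the specialist registry, the domain stubs, the metasearch_agi function, is a hypothetical illustration rather than any real system’s API: each specialist answers in isolation, and the “synthesis” is nothing more than pasting the answers together.

```python
# A minimal sketch of the "metasearch" pattern: route one question to
# several specialized models, then merely concatenate their answers.
# All names and functions here are hypothetical illustrations.

from typing import Callable

# Each "specialist" knows one domain and nothing else.
SPECIALISTS: dict[str, Callable[[str], str]] = {
    "physics": lambda q: f"[physics view of: {q}]",
    "biology": lambda q: f"[biology view of: {q}]",
    "ethics":  lambda q: f"[ethics view of: {q}]",
}

def metasearch_agi(question: str) -> str:
    """Query every specialist separately and stitch the results together.

    Nothing here lets the physics answer inform the ethics answer;
    the "integration" is just string concatenation after the fact.
    """
    partial_answers = [model(question) for model in SPECIALISTS.values()]
    return " | ".join(partial_answers)

print(metasearch_agi("Should I swerve to avoid the deer?"))
```

The point of the sketch: an integrated model would have to share concepts such as causality across domains while forming its answer, not merge finished outputs after the fact.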

Second, we seem to think that AGI will begin a new epoch marked by the emergence of an intelligence that is greater than ours. I have no species pride left, so I’m fine with admitting that AI and even pocket calculators and sticky notes are better at cognitive tasks than I am. But I’m unconvinced that “intelligence” is the word we should be using for AI or for AGI, especially when conversations about AGI treat it as not just a new intelligence but as a new and superior type of consciousness.

Three questions about AGI

First, we seem to think of AGI as a discrete event that we could mark on our calendars as AGI Day. It might happen that way, especially given the power of marketing, but that’s likely to be a fiction. There are too many conceivable criteria for AGI. For example, if AGI should be judged by its cognitive abilities, what exactly are they? Doing your taxes and estimating the number of butterflies in the world are clearly cognitive acts because they require reasoning, math, and facts. But is deciding whether someone has a crush on you a cognitive task? How about putting someone else’s interests ahead of your own? How about smiling at all babies, even the ugly ones? (That’s a trick question. There are no ugly babies.)

Second, does AGI have to cognize the way we do to be intelligent? If its answers are correct but it can’t tell us why, is that intelligence? If it can tell us why, but that’s not how the AI actually came up with its responses, is that intelligence? That’s the case for chat AI: If you ask it why it says that the earth isn’t flat, it will give you good reasons, such as “Things vanish over the horizon,” but it didn’t produce that output because of those reasons. It arrived at its response by analyzing word usage, without any conception of what the words mean; it knows only that those are statistically likely words to use in the context of your prompt.
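To see what “statistically likely words” means in miniature, here is a toy bigram model in Python. It is a deliberately crude illustration, not how production chat models work (they use neural networks trained on enormous corpora), but the principle is visible: the next word is chosen by frequency of co-occurrence, with no representation of what any word means.

```python
# Toy illustration of choosing words by statistical likelihood alone.
# Real chat models are far more sophisticated, but the principle is
# the same: pick likely next words, with no model of their meaning.

import random
from collections import Counter, defaultdict

corpus = (
    "ships vanish over the horizon the earth is round "
    "the earth is not flat photos from space show the earth is round"
).split()

# Count which word tends to follow which (a bigram table).
following: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word: str, length: int = 6) -> str:
    """Extend the text by repeatedly sampling a statistically likely next word."""
    out = [word]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_text("the"))  # e.g., "the earth is round the earth is"
```

The toy model can complete “the earth is…” plausibly without having any notion of an earth, which is exactly the distinction the question turns on.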
