First, we seem to think about AGI as a discrete event that we could mark on our calendars as AGI Day. It might happen that way, especially given the power of marketing, but it’s likely to be fiction. There are too many conceivable AGI criteria. For example, if AGI should be judged by its cognitive abilities, what exactly are they? Doing your taxes and estimating the number of butterflies in the world are clearly cognitive acts because they require reasoning, math, and facts. But is deciding if someone has a crush on you a cognitive task? How about putting someone else’s interests ahead of yours? How about smiling at all babies, even the ugly ones? (That’s a trick question. There are no ugly babies.)

Second, does AGI have to cognize the way we do to be intelligent? If its answers are correct but it can’t tell us why, is that intelligence? If it can tell us why, but that’s not how the AI actually came up with its responses, is that intelligence? That’s the case for chat AI: if you ask it why it says that the earth isn’t flat, it will give you good reasons, such as “Things vanish over the horizon,” but it didn’t produce that output because of those reasons. It came up with its response through an analysis of word usage, without any conception of what the words mean, just that those are statistically likely words to use in the context of your prompt.
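To make that last point concrete, here is a toy sketch of “statistically likely next words.” Real chat models use neural networks trained on vast corpora, not bigram counts, and the tiny corpus below is invented for illustration; but the core idea is the same: the model predicts plausible continuations from word-usage statistics, with no model of what the words mean.

```python
from collections import Counter, defaultdict

# A tiny hand-made corpus (purely illustrative).
corpus = (
    "the earth is round . things vanish over the horizon . "
    "the earth is round because things vanish over the horizon ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most frequently follows `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Generate a continuation word by word, starting from a prompt word.
# The program "answers" fluently without any conception of earth,
# roundness, or horizons -- only co-occurrence counts.
word = "the"
out = [word]
for _ in range(4):
    word = most_likely_next(word)
    out.append(word)
print(" ".join(out))
```

The sketch produces a fluent-looking phrase, yet nothing in it represents the reasons a person would give for the same claim, which is the gap the paragraph above is pointing at.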