Furthermore, about 30 years ago, in his book Descartes’ Error: Emotion, Reason, and the Human Brain, the neuroscientist Antonio Damasio showed that people who lack emotional awareness often make worse decisions than people who are in touch with their emotions. So can we count an emotionless entity as truly intelligent?
And doesn’t intelligence require having values that help determine what we believe and how we make decisions? So far, we have to explicitly tell a machine learning model which values to support. Our own values are often complicated, unstable, and contradictory; they can vary from situation to situation, if only because they’re inevitably interdependent. And while we have to hand an AI the values we want it to support, part of our human intelligence is the ability to shift the balance of our values as we learn and think more.
It’s not yet clear whether enabling an AGI to change its values on its own is feasible, much less desirable. On the one hand, that might lead it to values that aren’t aligned with our human values. On the other hand, if it has no values of its own, then it’s merely a tool, lacking the autonomy (or at least the semblance thereof) that for us is a requirement for intelligence.
This raises the third question: Why do we care? Computers have been smarter than us for a long time. AI is a big step forward because it can address more complex, higher-dimensional problems. AGI presumably would take that to the next level, becoming an even more important tool for extending our own intelligence. But would that count as the beginning of a new revolution in our history? I’m not convinced. I’m also not not convinced.
I am convinced, however, that I am very likely wrong about this, as I’m sure AGI will let me know in the very near future.
P.S.: I’m sure you’ll all be relieved to know that I fixed the sentence fragment in the Wikipedia article on AGI. You’re welcome!