Charles Simon, BSEE, MSCS, is the founder and CEO of FutureAI.
What’s behind you?
It’s a simple enough question, but it says a great deal about how your mind works and, therefore, how artificial general intelligence (AGI) must be made to work if it is ever to reach the goal of human-like intelligence. That’s a necessity, given that AGI is the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can.
To answer this question, humans rely on knowledge of objects that are outside their field of view. For example, you know that as you walk toward a road sign, you will first see the front of the sign. If you don’t turn your head as you continue walking, you will know that the sign is beside you. You will also know that the sign is still there, and that its appearance would be significantly different if you chose to look at it, because you would now be seeing its side. If you continue walking, you will know that the sign is now behind you. If you turn to look back, you will see the back of the sign, which might look similar to the front or be completely different.
All of this means that your mind possesses an internal model of your surroundings. This model contains representations of objects, not images. This internal mental model pervades your perception of the world and is fundamental to your understanding that physical objects exist in a physical world and have some degree of permanence. Without this mental model and an understanding of objects, you could have neither an imagination nor the ability to plan for the future.
It also suggests that such a model is essential if AGI is ever to display human-like intelligence. Without it, learning in the same way that a human does would be impossible. There would be no understanding that words and images represent physical things that exist and interact in a physical universe of which mankind is an important part. The ability to interpret everything in the context of everything else previously learned would simply not exist.
To put this another way, the objects in this internal model of your surroundings are positioned relative to yourself. You are at the center of this model. This gives you a fundamental, singular point of view, which is likely a necessary precursor to consciousness and is certainly necessary for the appearance of human-like intelligence.
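The egocentric model described above can be sketched in code. What follows is a minimal toy illustration, not anything from FutureAI; the class and method names are hypothetical, and the coordinate convention (agent at the origin, +y straight ahead) is an assumption made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class EgocentricModel:
    """World model with the agent fixed at the origin.

    Objects are stored as representations positioned relative to the
    agent, giving every query a single point of view. Hypothetical
    sketch only -- names and conventions are illustrative assumptions.
    """
    objects: dict = field(default_factory=dict)  # name -> (x, y), +y = ahead

    def add(self, name, x, y):
        self.objects[name] = (x, y)

    def walk_forward(self, distance):
        # Moving the agent forward shifts every known object backward
        # relative to the agent; nothing is deleted from the model,
        # which is what gives objects their permanence here.
        self.objects = {n: (x, y - distance)
                        for n, (x, y) in self.objects.items()}

    def behind_me(self):
        # "What's behind you?" is answered from the model, not the eyes.
        return [n for n, (x, y) in self.objects.items() if y < 0]

model = EgocentricModel()
model.add("road sign", 0, 10)   # ten units ahead
model.walk_forward(15)          # walk past it without turning around
print(model.behind_me())        # -> ['road sign']
```

The key design point is that the sign stays in the model after it leaves the agent's view; only its position relative to the agent changes.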
You don’t perceive objects changing abruptly when they enter or leave your field of view. In fact, as your eyes dart about, objects enter and leave your field of view all the time, and you don’t even notice. This means that your perception of the objects around you comes from the mental model, not directly from your eyes: the eyes continuously update the model, but what you perceive are the objects in it. Any successful AGI would have to work in much the same way.
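The division of labor in that paragraph, with the senses refreshing the model while perception reads from it, can be sketched as follows. This is a hedged toy illustration: the field-of-view angle and all function names are assumptions, not part of any real system.

```python
import math

FOV_DEG = 60  # assumed horizontal field of view for the sketch

def in_view(x, y, fov_deg=FOV_DEG):
    # An object is visible if it is ahead of the agent (+y) and within
    # fov_deg/2 degrees of the straight-ahead direction.
    return y > 0 and abs(math.degrees(math.atan2(x, y))) <= fov_deg / 2

def update_from_eyes(model, observations):
    # The senses can only refresh entries currently in the field of view...
    for name, (x, y) in observations.items():
        if in_view(x, y):
            model[name] = (x, y)

def perceive(model):
    # ...but perception reports everything in the model, visible or not,
    # so objects don't pop in and out as the gaze moves.
    return dict(model)

model = {"sign": (0.0, -5.0)}                   # already behind the agent
update_from_eyes(model, {"tree": (1.0, 8.0)})   # the tree enters view
print(sorted(perceive(model)))                  # -> ['sign', 'tree']
```

Note that the sign, though invisible, is still perceived, which mirrors the article's claim that perception is of the model rather than of the raw sensory input.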
Even though your eyes sense images with the biological equivalent of pixels, your perception is entirely based on objects. You can’t see the individual pixels of your vision because your perception of the world is based on the content of your mental model, which in turn is built from objects after the pixel-level information has been processed and discarded by your brain.
While you certainly perceive objects behind you, when you start to describe them in detail, you will quickly conclude that your brain’s capacity is fairly limited, both in the number of objects you know and in the amount of detail about each object. In contrast, a computer performing a similar task could easily be programmed to handle an arbitrarily large number of objects at any desired level of detail.
So back to the initial question: What is behind you? A system that does not have the capacity to answer this and similar types of questions will not display human-like general intelligence. Its perceptions of objects and positions would be fundamentally different.
Being able to answer such questions and having the mental model that enables it is necessary for AGI, but this is not the same as saying it is sufficient for AGI. An organism or AI without a mental model cannot have human-like intelligence, but just having an internal mental model doesn’t necessarily make an AI intelligent.
What is behind you? Any AI that can’t answer a question like this is not an AGI, and if it doesn’t have the internal mechanism to answer this type of question, it can never become one. While the Turing test invites a questioner to interview a potential AGI for 15 to 20 minutes, this single question might reveal even more. Of course, we can easily write a program to answer this particular question, but it is an exemplar of a broad class of questions that go to the root of the capabilities that are necessary precursors to general intelligence. When our machines can answer most of these, they will be on the road to becoming AGIs.