Deep learning systems pick out statistical patterns in data – that's how they interpret the world. But statistical learning requires lots of data, and it struggles to apply what it has learned to new situations. That's unlike symbolic AI, which records the chain of reasoning steps taken to reach a decision and can generalize from less data than traditional methods.
A new study by a team of researchers at MIT, the MIT-IBM Watson AI Lab, and DeepMind demonstrates the potential of symbolic AI applied to an image comprehension task. They say that in tests, their hybrid model managed to learn object-related concepts like color and shape, using that knowledge to suss out objects in a scene with minimal training data and "no explicit programming".
“…learn concepts by connecting words with images,” said study lead author Jiayuan Mao in a statement. “A machine that can learn the same way needs much less data, and is better able to transfer its knowledge to new scenarios.”
The team's model comprises a perception component that translates images into an object-based representation, and a language layer that extracts meanings from words and sentences and creates "symbolic programs" (i.e., instructions) that tell the AI how to answer the question. A third module runs the symbolic programs on the scene and outputs an answer, updating the model when it makes mistakes.
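To make the pipeline concrete, here is a minimal sketch of the idea of running a symbolic program over an object-based scene representation. The scene, attribute names, and program format are all illustrative assumptions for this example, not the authors' actual code or data structures.

```python
# Illustrative sketch: a scene as an object-based representation (as a
# perception module might output), and a tiny "symbolic program" that a
# language module might produce for a question like "How many blue objects?"

# Hypothetical scene: each object is a dict of attributes (assumed format).
scene = [
    {"shape": "cube", "color": "red", "material": "rubber"},
    {"shape": "sphere", "color": "blue", "material": "metal"},
    {"shape": "cylinder", "color": "blue", "material": "rubber"},
]

def filter_attr(objects, attr, value):
    """Keep only the objects whose attribute matches the given value."""
    return [o for o in objects if o[attr] == value]

# The question becomes a sequence of instructions (assumed op names).
program = [("filter", "color", "blue"), ("count",)]

def execute(program, objects):
    """Run each instruction in order over the current set of objects."""
    result = objects
    for op in program:
        if op[0] == "filter":
            result = filter_attr(result, op[1], op[2])
        elif op[0] == "count":
            result = len(result)
    return result

print(execute(program, scene))  # → 2
```

A third module in the real system compares such answers against ground truth and uses the mistakes to update both the perception and language components.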
The researchers trained it on images paired with related questions and answers from Stanford University's CLEVR image comprehension test set. (For example: "What's the color of the object?" and "How many objects are both right of the green cylinder and have the same material as the small blue ball?") The questions grew progressively harder as the model learned, and once it had mastered object-level concepts, it advanced to learning how to relate objects and their properties to each other.
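The easy-to-hard ordering described above is a form of curriculum learning. A minimal sketch of the idea, assuming (as an illustration, not the published training recipe) that question difficulty can be proxied by the length of its symbolic program:

```python
# Illustrative curriculum sketch: order question/answer pairs from simple
# (object-level) to complex (relational) before training on them.
# "program_len" is an assumed difficulty proxy, not a CLEVR field.

questions = [
    {"q": "How many objects are both right of the green cylinder and have "
          "the same material as the small blue ball?", "program_len": 5},
    {"q": "What's the color of the object?", "program_len": 2},
    {"q": "How many blue objects are there?", "program_len": 3},
]

# Shorter programs stand in for easier, object-level questions.
curriculum = sorted(questions, key=lambda item: item["program_len"])

for item in curriculum:
    print(item["q"])  # simplest question is printed first
```

In the study's setup, the model would train on the early, simple questions to ground concepts like color and shape before seeing the relational ones.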
In experiments, it was able to interpret new scenes and concepts "almost perfectly," the researchers report, handily outperforming other bleeding-edge AI systems with just 5,000 images and 1