Higher-level Inductive Biases

By LI Haoyang 2020.12.23

Content

- Inductive Biases for Deep Learning of Higher-Level Cognition - 2020
  - Global Workspace Theory
  - Inductive Biases Proposed
  - Omitted Parts
  - Inspirations

Inductive Biases for Deep Learning of Higher-Level Cognition - 2020

Anirudh Goyal, Yoshua Bengio. Inductive Biases for Deep Learning of Higher-Level Cognition. arXiv preprint 2020. arXiv:2011.15091

Inductive biases, broadly speaking, encourage the learning algorithm to prioritise solutions with certain properties.

A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopedic list of heuristics).

This hypothesis would suggest that studying the kind of inductive biases that humans and animals exploit could help both clarify these principles and provide inspiration for AI research and neuroscience theories.

This paper presents its perspectives in a declarative way; the main claim is that it is time to incorporate higher-level inductive biases into deep learning in order to build stronger AI.

Basically, inductive biases are assumptions about the data.
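To make this concrete, here is a tiny sketch (my own toy example, not from the paper): a convolution bakes the assumptions of locality and translation equivariance into the architecture through weight sharing, whereas a fully-connected layer assumes no spatial structure and pays for that in parameters. All layer sizes here are arbitrary.

```python
# Toy illustration (not from the paper): a convolution encodes the assumptions of
# locality and translation equivariance; a dense layer assumes nothing about space.
import torch
import torch.nn as nn

x = torch.randn(1, 1, 28, 28)              # a dummy 28x28 "image"
shifted = torch.roll(x, shifts=2, dims=3)  # the same image, circularly shifted by 2 pixels

conv = nn.Conv2d(1, 8, kernel_size=3, padding=1, padding_mode="circular", bias=False)
dense = nn.Linear(28 * 28, 8 * 28 * 28, bias=False)

# The convolution's output shifts along with its input (translation equivariance).
same = torch.allclose(torch.roll(conv(x), shifts=2, dims=3), conv(shifted), atol=1e-5)
print("conv equivariant to circular shifts:", same)

# Weight sharing is exactly the "assumption about the data" baked into the layer,
# and it is also why the convolution needs far fewer parameters.
print("conv params :", sum(p.numel() for p in conv.parameters()))
print("dense params:", sum(p.numel() for p in dense.parameters()))
```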

Global Workspace Theory

In cognitive science, the Global Workspace Theory (GWT) (Baars, 1993) suggests an architecture allowing specialist components to interact.

The key claim of GWT is the existence of a shared representation—sometimes called a blackboard, sometimes a workspace—that can be modified by any selected specialist and that is broadcast to all specialists.

This theory states that the brain works modularly: there is a common information bottleneck, so that when a task is performed only some specific modules are activated, and they communicate through a global workspace. In this sense the theory is loosely reminiscent of the von Neumann architecture of a computer.
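Below is a minimal sketch of this bottleneck, a toy rendering of my own rather than code from the paper: several specialist modules bid for access via attention scores, only the top-k winners write into a small shared workspace, and the workspace content is broadcast back to every module. The class and parameter names (SharedWorkspace, score, write, read) and all dimensions are made up for illustration.

```python
import torch
import torch.nn as nn

class SharedWorkspace(nn.Module):
    def __init__(self, d_state=32, d_workspace=16, k=2):
        super().__init__()
        self.k = k
        self.score = nn.Linear(d_state, 1)            # how strongly a module bids for access
        self.write = nn.Linear(d_state, d_workspace)  # projection into the workspace
        self.read = nn.Linear(d_workspace, d_state)   # broadcast back into module space

    def forward(self, states):                        # states: (n_specialists, d_state)
        bids = self.score(states).squeeze(-1)         # (n_specialists,)
        topk = torch.topk(bids, self.k).indices       # only k specialists win access
        weights = torch.softmax(bids[topk], dim=0)    # soft competition among the winners
        workspace = (weights.unsqueeze(-1) * self.write(states[topk])).sum(dim=0)
        broadcast = self.read(workspace)              # the same message goes to everyone
        return states + broadcast                     # every specialist conditions on it

specialists = torch.randn(8, 32)                      # dummy per-module states
updated = SharedWorkspace()(specialists)
print(updated.shape)                                  # torch.Size([8, 32])
```

The top-k selection plays the role of the bottleneck: only a few modules get to modify the shared representation, but all of them receive the broadcast.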

Besides GWT:

Regarding the topology of the communication channels between modules, modules in the brain are known to be arranged with a spatial topology, so computation is not all-to-all between all modules.

Combining both:

Low-level modules can communicate both with their low-level neighbors and with a high-level "headquarters", i.e. the global workspace.
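A toy way to picture this combined connectivity (my own construction, not from the paper): a banded adjacency matrix for local neighbor-to-neighbor communication, plus one extra "hub" node, standing in for the global workspace, that every module reads from and writes to.

```python
import torch

n_modules, radius = 8, 1
idx = torch.arange(n_modules)
# Local topology: module i is connected only to modules within `radius` of it.
local = (idx.unsqueeze(0) - idx.unsqueeze(1)).abs() <= radius
# Global workspace: one extra node that every module can read from and write to.
adjacency = torch.zeros(n_modules + 1, n_modules + 1, dtype=torch.bool)
adjacency[:n_modules, :n_modules] = local
adjacency[:n_modules, -1] = True   # every module writes to the hub
adjacency[-1, :n_modules] = True   # the hub broadcasts to every module
print(adjacency.int())
```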

Inductive Biases Proposed

Based on the functioning of human brains:

Our brain seems to thus harbour two very different types of knowledge: the implicit, intuitive knowledge exploited by system 1, and the explicit, verbalizable knowledge manipulated by system 2.

It looks like current deep learning systems are fairly good at perception and system 1 tasks.

Humans enjoy system 2 abilities which permit fast learning (I can tell you a new rule in one sentence and you do not have to practice it in order to be able to apply it, albeit awkwardly and slowly) and systematic generalization, both of which should be important characteristics of the next generation of deep learning systems.

They propose a set of higher-level inductive biases aimed at capturing these system 2 abilities in deep learning architectures.

Omitted Parts

In the original paper, they also discuss the link with computer programming, causal models, and the synergy between the AI and cognitive science communities.

Inspirations

This is a nice perspective paper, although most of its viewpoints are stated declaratively without much supporting evidence.