Santoro et al, NeurIPS 2017, [link]
tags: architectures - neurips - 2017
CNN architectures with a notion of relational reasoning, particularly useful for tasks such as visual question answering, dynamics understanding, etc.
The main idea of Relation Networks (RN) is to constrain the functional form of convolutional neural networks so as to explicitly learn relations between entities, rather than hoping for this property to emerge in the representation during training. Formally, let $O = \{o_1, \dots, o_n\}$ be a set of objects of interest; the Relation Network is trained to learn a representation that considers all pairwise relations across the objects:

$$\mathrm{RN}(O) = f_\phi\Big(\sum_{i,j} g_\theta(o_i, o_j)\Big)$$

$f_\phi$ and $g_\theta$ are defined as Multi-Layer Perceptrons. By definition, the Relation Network (i) has to consider all pairs of objects, (ii) operates directly on the set of objects, hence is not constrained to a specific organization of the data, and (iii) is data-efficient in the sense that only one function, $g_\theta$, is learned to capture all the possible relations: $f_\phi$ and $g_\theta$ are typically light modules and most of the overhead comes from the sum over the $O(n^2)$ pairwise components.
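As a minimal numpy sketch of this computation (the object count, the dimensions, and the single-linear-layer stand-ins for $g_\theta$ and $f_\phi$ are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper).
n_objects, d_obj, d_hidden, d_out = 5, 8, 16, 4

# Random weights standing in for the learned MLPs; each module is
# reduced to a single linear layer (+ ReLU) for brevity.
W_g = rng.normal(size=(2 * d_obj, d_hidden))
W_f = rng.normal(size=(d_hidden, d_out))

def g_theta(o_i, o_j):
    """Pairwise relation module: one function shared across all pairs."""
    return np.maximum(np.concatenate([o_i, o_j]) @ W_g, 0.0)

def f_phi(x):
    """Readout module applied to the aggregated relations."""
    return x @ W_f

def relation_network(objects):
    """RN(O) = f_phi( sum_{i,j} g_theta(o_i, o_j) )."""
    total = sum(g_theta(o_i, o_j) for o_i in objects for o_j in objects)
    return f_phi(total)

objects = rng.normal(size=(n_objects, d_obj))
out = relation_network(objects)
```

Because the sum runs over all ordered pairs, the output is invariant to permutations of the object set, which is what makes the module operate on sets rather than on a fixed layout.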
The objects are the basic elements of the relational process we want to model. They are defined with regard to the task at hand, for instance:
Attending relations between objects in an image: The image is first processed through a fully-convolutional network. Each cell of the resulting feature map is taken as an object: a feature vector (one entry per channel), additionally tagged with its position in the feature map.
Sequence of images. In that case, each image is first fed through a feature extractor and the resulting embedding is used as an object. The goal is to model relations between images across the sequence.
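A quick sketch of the first construction (the feature-map shape is made up for illustration): each cell of the final conv layer becomes one object, tagged with its coordinates.

```python
import numpy as np

# Pretend output of a fully-convolutional network: H x W grid, C channels.
H, W, C = 4, 4, 8
feature_map = np.random.default_rng(1).normal(size=(H, W, C))

objects = []
for y in range(H):
    for x in range(W):
        # Each cell becomes one object: its C features plus its (x, y) position tag.
        objects.append(np.concatenate([feature_map[y, x], [x, y]]))
objects = np.stack(objects)  # shape (H * W, C + 2)
```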
Figure: Example of applying the Relation Network to Visual Question Answering. Questions are processed with an LSTM to produce a question embedding, and images are processed with a CNN to produce a set of objects for the RN.
The main evaluation is done on the CLEVR dataset [2]. The main message seems to be that the proposed module is very simple and yet often improves the model accuracy when added to various architectures (CNN, CNN + LSTM, etc.) introduced in [1]. The main baseline they compare to (and outperform) is Spatial Attention (SA), which is another simple method to integrate some form of relational reasoning into a neural architecture.
Palm et al, [link]
This paper builds on the Relation Network architecture and proposes to explore more complex relational structures, defined as a graph, using a message-passing approach. Formally, we are given a graph $G = (V, E)$ with vertices $v_i \in V$ and edges $e_{ij} \in E$. By abuse of notation, $x_i$ also denotes the embedding for vertex $v_i$ (e.g. obtained via a CNN) and $e_{ij}$ is 1 where $v_i$ and $v_j$ are linked, 0 otherwise. To each node we associate a hidden state $h_i^t$ at iteration $t$, which will be updated via message passing. After a few iterations, the resulting state is passed through an MLP $r$ to output the result (either for each node or for the whole graph):

$$m_{ij}^t = f\left(h_i^{t-1}, h_j^{t-1}\right), \qquad h_i^t = g\Big(h_i^{t-1},\, x_i,\, \sum_{j \,:\, e_{ij} = 1} m_{ji}^t\Big), \qquad o_i = r\left(h_i^T\right)$$
Compared to the original Relation Network:
- Each update rule is a Relation Network that only looks at pairwise relations between linked vertices. The message passing scheme additionally introduces the notion of recurrence, and the dependency on the previous hidden state.
- The dependence of the update on $h_i^{t-1}$ could in theory be avoided by adding self-edges from $v_i$ to $v_i$, to make it closer to the Relation Network formulation.
- Adding $x_i$ as an input of $g$ looks like a simple trick to avoid long-term memory problems.
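One round of this message passing can be sketched as follows; the toy graph, the dimensions, and the single-linear-layer stand-ins for the message and update modules are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fixed graph on 4 nodes (symmetric adjacency, no self-edges).
n, d = 4, 6
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])

x = rng.normal(size=(n, d))   # per-node inputs (e.g. CNN embeddings)
h = x.copy()                  # hidden states, initialised from the inputs
W_msg = rng.normal(size=(2 * d, d))
W_upd = rng.normal(size=(3 * d, d))

def rrn_step(h):
    """One message-passing iteration over the graph."""
    # Messages only flow along existing edges (adj[i, j] == 1).
    msgs = np.zeros((n, d))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                msgs[i] += np.maximum(np.concatenate([h[i], h[j]]) @ W_msg, 0.0)
    # The node update also sees the original input x_i at every step.
    return np.stack([
        np.maximum(np.concatenate([h[i], x[i], msgs[i]]) @ W_upd, 0.0)
        for i in range(n)
    ])

for _ in range(3):            # a few iterations of recurrence
    h = rrn_step(h)
```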
The experiments essentially compare the proposed Recurrent Relational Network (RRN) model to the Relation Network and classical recurrent architectures such as LSTM. They consider three datasets:
- bAbI. An NLP question-answering task with some reasoning involved. The model solves 19.7 (out of 20) tasks on average, while the simple RN solved around 18 of them reliably.
- Pretty CLEVR. A CLEVR-like dataset (only with simple 2D shapes) with questions involving various steps of reasoning, e.g. "which is the shape $n$ steps from the red circle?"
- Sudoku. The graph contains 81 nodes (one for each cell in the sudoku), with edges between cells belonging to the same row, column or block.
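The sudoku graph construction is easy to make concrete; a short sketch (cells indexed 0 to 80 in row-major order):

```python
import numpy as np

def sudoku_adjacency():
    """Boolean 81x81 adjacency: edge iff two distinct cells share a
    row, a column, or a 3x3 block."""
    adj = np.zeros((81, 81), dtype=bool)
    for a in range(81):
        ra, ca = divmod(a, 9)
        for b in range(81):
            if a == b:
                continue
            rb, cb = divmod(b, 9)
            same_row = ra == rb
            same_col = ca == cb
            same_block = (ra // 3 == rb // 3) and (ca // 3 == cb // 3)
            if same_row or same_col or same_block:
                adj[a, b] = True
    return adj

adj = sudoku_adjacency()
```

Each cell ends up linked to exactly 20 others: 8 in its row, 8 in its column, and the 4 remaining cells of its block.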
Jahrens and Martinetz, [link]
This paper presents a very simple trick to make the Relation Network consider higher-order relations than pairwise ones, while retaining some efficiency. Essentially, the model stacks several relation layers, re-aggregating the pairwise terms at each layer; it can be written roughly as:

$$h_i^{(0)} = o_i, \qquad h_i^{(\ell)} = \sum_j g_{\theta_\ell}\big(h_i^{(\ell-1)}, h_j^{(\ell-1)}\big), \qquad \mathrm{MLRN}(O) = f_\phi\Big(\sum_i h_i^{(L)}\Big)$$
It is not clear why this model would be equivalent to explicitly considering higher-order relations (it rather combines pairwise terms for a finite number of steps). According to the experiments, this architecture does seem better suited for the studied tasks (compared e.g. to the Relation Network or the Recurrent Relational Network), but it also makes the model even harder to interpret.
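One plausible reading of this stacking, as a sketch (the number of layers, the dimensions, and the single-linear-layer stand-ins for the learned modules are all illustrative assumptions, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, L = 5, 6, 2                     # objects, feature size, relation layers
W_g = [rng.normal(size=(2 * d, d)) for _ in range(L)]
W_f = rng.normal(size=(d, 3))

def mlrn(objects):
    """Stacked relation layers: each layer re-aggregates pairwise terms
    computed from the previous layer's per-object summaries."""
    h = objects
    for layer in range(L):
        h = np.stack([
            sum(np.maximum(np.concatenate([h[i], h[j]]) @ W_g[layer], 0.0)
                for j in range(len(h)))
            for i in range(len(h))
        ])
    # Final readout over the aggregated objects.
    return np.sum(h, axis=0) @ W_f

out = mlrn(rng.normal(size=(n, d)))
```

With $L = 1$ this reduces to (a per-object-aggregated variant of) the plain Relation Network; each extra layer mixes in information from terms that were themselves pairwise aggregates, which is the sense in which higher-order interactions are approximated.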