
Differentiable Nonparametric Belief Propagation

Anthony Opipari, Chao Chen, Shoutian Wang, Jana Pavlasek, Karthik Desingh, Odest Chadwicke Jenkins

We present a differentiable approach to learning the probabilistic factors used for inference by a nonparametric belief propagation algorithm. Existing nonparametric belief propagation methods rely on domain-specific features encoded in the probabilistic factors of a graphical model. In this work, we replace each hand-crafted factor with a differentiable neural network, enabling the factors to be learned from labeled data with an efficient optimization routine. By combining differentiable neural networks with an efficient belief propagation algorithm, our method learns to maintain a set of marginal posterior samples using end-to-end training. We evaluate our differentiable nonparametric belief propagation (DNBP) method on a set of articulated pose tracking tasks and compare its performance with a recurrent neural network. Results from this comparison demonstrate the effectiveness of using learned factors for tracking and suggest a practical advantage over hand-crafted approaches.


Experiments

DNBP tracks the belief over each node of a graphical model across a sequence of input data. The edge (pairwise) potentials and the observation likelihood are learned. We show selected results on two different graphical models.
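To make the structure of the method concrete, the following is a minimal PyTorch-style sketch of a learned unary (observation likelihood) factor, a learned pairwise (edge) factor, and one differentiable particle-reweighting step. This is not the released DNBP implementation: the network architectures, the per-particle feature inputs, and the single-step message approximation are simplifying assumptions made for illustration.

import torch
import torch.nn as nn

class UnaryFactor(nn.Module):
    """Learned observation likelihood: scores each particle against local image features."""
    def __init__(self, feat_dim=64, state_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, feats, particles):
        # feats: (N, feat_dim) features around each particle; particles: (N, state_dim)
        return self.net(torch.cat([feats, particles], dim=-1)).squeeze(-1)  # (N,) log scores

class PairwiseFactor(nn.Module):
    """Learned edge potential: scores compatibility between particles of neighboring nodes."""
    def __init__(self, state_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, x_src, x_dst):
        return self.net(torch.cat([x_src, x_dst], dim=-1)).squeeze(-1)  # (K,) log scores

def reweight(particles, feats, neighbor_particle_sets, unary, pairwise):
    """One differentiable reweighting step for a single node: combine the unary score
    with a Monte Carlo message from each neighbor, then normalize with a softmax so
    that gradients flow back into both factor networks."""
    log_w = unary(feats, particles)                       # (N,)
    N, D = particles.shape
    for nbr in neighbor_particle_sets:                    # nbr: (M, D) neighbor particles
        M = nbr.size(0)
        src = nbr.unsqueeze(1).expand(M, N, D).reshape(M * N, D)
        dst = particles.unsqueeze(0).expand(M, N, D).reshape(M * N, D)
        pair = pairwise(src, dst).view(M, N)
        # Sum over neighbor particles in log space; the constant log M offset of a
        # proper average is absorbed by the softmax normalization below.
        log_w = log_w + torch.logsumexp(pair, dim=0)
    return torch.softmax(log_w, dim=0)                    # normalized belief weights, (N,)

Because every operation above is differentiable, a loss on the resulting belief (for example, its density at the ground-truth keypoint location) can be backpropagated into both factor networks, which is the sense in which the factors are learned end-to-end from labeled data.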

Double Pendulum

The double pendulum is composed of three nodes connected by two rigid links and has two revolute joints. The model is shown below. Our algorithm tracks the marginal belief over the 2D location of each of the three nodes (the yellow circles and the end effector); a minimal code sketch of this graph follows the figure.

(a) The double pendulum.
(b) The pendulum graph.
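
As a concrete illustration of this graph and of the sample-based belief that DNBP maintains at each node, the sketch below builds the node list, the edge list, and a uniformly initialized particle set per node. The node names, particle count, and image size are illustrative assumptions, not values from the paper.

import torch

# Node names are illustrative; the two edges correspond to the two rigid links.
nodes = ["joint_1", "joint_2", "end_effector"]
edges = [("joint_1", "joint_2"), ("joint_2", "end_effector")]

# DNBP represents each node's belief nonparametrically; here, as an assumed set of
# 100 weighted 2D particles per node, initialized uniformly over a 128x128 image.
num_particles, image_size = 100, 128.0
belief = {n: {"particles": torch.rand(num_particles, 2) * image_size,
              "weights": torch.full((num_particles,), 1.0 / num_particles)}
          for n in nodes}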

The following are randomly selected examples from the double pendulum task. For each example, we compare our method (DNBP) to an LSTM baseline.

Each example shows four panels: the input observation, the ground truth, the DNBP belief, and the LSTM prediction.


Spider Experiment

The spider is composed of a root node connected to three legs by revolute joints. Each leg has two telescoping links connected by a revolute joint. The goal is to track the 2D location of the root node, the three leg joints, and the three end effectors (seven nodes in total); a minimal code sketch of this graph follows the figure.

(a) The spider.
(b) The spider graph.
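
As with the pendulum, the spider's graph can be written down directly from the description above; the node names in this sketch are illustrative.

# Root node, three leg joints, three end effectors: seven nodes in a tree.
nodes = ["root"] + [f"leg{i}_joint" for i in range(3)] + [f"leg{i}_end" for i in range(3)]
edges = ([("root", f"leg{i}_joint") for i in range(3)]            # root to each leg joint
         + [(f"leg{i}_joint", f"leg{i}_end") for i in range(3)])  # leg joint to end effector
assert len(nodes) == 7 and len(edges) == 6                        # a tree over 7 nodes has 6 edges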

The following are randomly selected examples from the spider task. For each example, we compare our method (DNBP) to an LSTM baseline.

As above, each example shows four panels: the input observation, the ground truth, the DNBP belief, and the LSTM prediction.


Entropy

A key benefit of DNBP is that it tracks a full belief distribution over iterations rather than only a point estimate. The entropy of this belief quantifies the algorithm's uncertainty, which indicates how much the estimate should be trusted. Here is an illustration of the measured entropy of the estimate over iterations for a scene without occlusion:

DNBP Entropy over Iterations
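
As a rough illustration of how an entropy value can be obtained from a weighted particle belief, the sketch below evaluates a Gaussian kernel density estimate of the belief on an image grid and computes the discrete entropy of the normalized grid probabilities. The grid size, extent, and bandwidth are assumptions, and this is not necessarily the exact procedure used in the paper.

import torch

def belief_entropy(particles, weights, grid_size=64, extent=128.0, bandwidth=2.0):
    """Approximate the entropy (in nats) of a weighted 2D particle belief."""
    xs = torch.linspace(0.0, extent, grid_size)
    grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)
    # Squared distance from every grid cell to every particle: (grid_size**2, N)
    d2 = ((grid.unsqueeze(1) - particles.unsqueeze(0)) ** 2).sum(-1)
    kernel = torch.exp(-0.5 * d2 / bandwidth ** 2)   # unnormalized Gaussian kernels
    density = kernel @ weights                       # weighted mixture over particles
    p = density / density.sum()                      # normalize over the grid
    return -(p * torch.log(p + 1e-12)).sum()

A diffuse belief, such as the one produced when the end effector is occluded, spreads probability mass over many grid cells and therefore yields a higher entropy than a sharply peaked belief.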

Below, we add a significant occlusion to the observation. When the pendulum end effector is totally occluded, the entropy is high, demonstrating uncertainty in the estimate.

DNBP Entropy over Iterations

Citation

@article{opipari2021dnbp,
  author = {Opipari, Anthony and Chen, Chao and Wang, Shoutian and Pavlasek, Jana and Desingh, Karthik and Jenkins, Odest Chadwicke},
  title = {Differentiable Nonparametric Belief Propagation},
  year = {2021},
  journal = {arXiv preprint arXiv:2101.05948}
}