DNBP: Differentiable Nonparametric Belief Propagation

Anthony Opipari, University of Michigan
Jana Pavlasek, University of Michigan
Chao Chen, University of Michigan
Shoutian Wang, University of Michigan
Karthik Desingh, University of Washington
Odest Chadwicke Jenkins, University of Michigan

We present a differentiable approach to learn the probabilistic factors used for inference by a nonparametric belief propagation algorithm. Existing nonparametric belief propagation methods rely on domain-specific features encoded in the probabilistic factors of a graphical model. In this work, we replace each crafted factor with a differentiable neural network, enabling the factors to be learned from labeled data using an efficient optimization routine. By combining differentiable neural networks with an efficient belief propagation algorithm, our method learns to maintain a set of marginal posterior samples using end-to-end training. We evaluate our differentiable nonparametric belief propagation (DNBP) method on a set of articulated pose tracking tasks and compare performance with learned baselines. Results from these experiments demonstrate the effectiveness of using learned factors for tracking and suggest a practical advantage over hand-crafted approaches.

Experiments

DNBP tracks the belief over each node of a graphical model across a sequence of input observations. The edge parameters and observation likelihood are learned. We show selected results on two different graphical models.
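To make this concrete, the sketch below shows one way such learned factors could be parameterized as small neural networks. This is an illustrative PyTorch sketch under our own naming assumptions (PairwiseDensity, ObservationLikelihood, and their arguments are hypothetical), not the project's actual implementation.

import torch
import torch.nn as nn

class PairwiseDensity(nn.Module):
    """Hypothetical learned edge factor: scores how compatible the 2D
    poses of two connected nodes are, based on their relative offset."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())  # non-negative potential

    def forward(self, pose_a, pose_b):
        return self.net(pose_b - pose_a).squeeze(-1)

class ObservationLikelihood(nn.Module):
    """Hypothetical learned unary factor: scores a candidate 2D pose
    against features extracted from the input observation."""
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, obs_feat, pose):
        return self.net(torch.cat([obs_feat, pose], dim=-1)).squeeze(-1)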

Double Pendulum

The double pendulum is composed of three nodes connected by two rigid links and has two revolute joints. The model is shown below. Our algorithm tracks the marginal belief over the 2D location of each of the three nodes (the yellow circles and the end effector).

(a) The double pendulum.
(b) The pendulum graph.
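
As a concrete reference, the pendulum's graph can be written as a simple node and edge list; the node ordering below is our own and only illustrative, not taken from the project code.

# Hypothetical node ordering for the 3-node pendulum chain.
PENDULUM_NODES = ["base", "middle_joint", "end_effector"]
# One edge per rigid link; each edge carries its own learned pairwise factor.
PENDULUM_EDGES = [(0, 1), (1, 2)]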

The following are random examples for the double pendulum. We compare our method (DNBP) to an LSTM baseline for each example; each video shows the input observation, the ground truth, the DNBP belief, and the LSTM prediction.


Spider Experiment

The spider is composed of a root node connected to three legs by revolute joints. Each leg has two telescoping links connected by a revolute joint. The goal is to track the 2D location of the root node, three leg joints, and three end effectors (7 nodes total).

(a) The spider.
(b) The spider graph.
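
Likewise, the spider's 7-node graph can be sketched as a node and edge list (again, the ordering is illustrative rather than taken from the project code).

# Hypothetical node ordering: root, three leg joints, three end effectors.
SPIDER_NODES = ["root",
                "leg1_joint", "leg2_joint", "leg3_joint",
                "leg1_end", "leg2_end", "leg3_end"]
SPIDER_EDGES = [(0, 1), (0, 2), (0, 3),   # root to each leg joint
                (1, 4), (2, 5), (3, 6)]   # each leg joint to its end effector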

The following are random examples for the spider. We compare our method (DNBP) to an LSTM baseline for each example; each video shows the input observation, the ground truth, the DNBP belief, and the LSTM prediction.


Entropy

A key benefit of DNBP is that it maintains a belief over iterations. This lets us quantify the algorithm's uncertainty, which indicates how much the estimate should be trusted. Below is the measured entropy of the estimate over iterations for an easy scene:

DNBP Entropy over Iterations

Below, we add a significant occlusion to the observation. When the pendulum's end effector is fully occluded, the entropy is high, indicating increased uncertainty in the estimate.

DNBP Entropy over Iterations
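
One simple way to compute such an entropy from a particle-based marginal belief is sketched below: histogram the weighted particles on a 2D grid and take the Shannon entropy of the normalized histogram. This is only a sketch (the paper's exact entropy estimator may differ), and the function and argument names are hypothetical.

import numpy as np

def belief_entropy(particles, weights, bins=32, extent=(0.0, 1.0)):
    """particles: (N, 2) x/y samples; weights: (N,), normalized.
    Approximates the Shannon entropy of the particle-based belief."""
    hist, _, _ = np.histogram2d(
        particles[:, 0], particles[:, 1],
        bins=bins, range=[extent, extent], weights=weights)
    p = hist.ravel()
    p = p / p.sum()
    p = p[p > 0]                       # drop empty cells to avoid log(0)
    return float(-(p * np.log(p)).sum())

Higher values indicate a more spread-out belief, as in the occluded case above.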

Hand Pose Tracking

Below, DNBP is evaluated on randomly sampled sequences of the hand tracking task, using the First-Person Hand Action Benchmark [Garcia-Hernando et al., CVPR 2018]. The point-wise estimate produced by DNBP, as well as the marginal uncertainty associated with the estimate of each finger joint, is visualized. Marginal uncertainty is calculated from DNBP's marginal belief estimates as 1 standard deviation in the x and y dimensions, respectively. Uncertainty in the depth dimension is not visualized.

Successful Estimations

In this and the following section, we include additional qualitative output from DNBP on the hand tracking task. The example sequences that follow were randomly sampled, but have been grouped into cases of subjective 'success' and 'failure' for clarity. In both success and failure cases, we inspect the uncertainty reported by DNBP. In the middle row of each video, DNBP's uncertainty is illustrated qualitatively for each finger joint as a colored ellipse corresponding to 1 standard deviation in the x and y dimensions, calculated from the covariance of the corresponding marginal belief particles. As a quantitative measure of uncertainty, the bottom row of each video displays the entropy associated with DNBP's marginal belief estimate for each finger joint.
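
For reference, the sketch below shows how such 1-standard-deviation widths could be computed from a joint's weighted marginal belief particles; the function and variable names are hypothetical, and the project's plotting code may differ.

import numpy as np

def xy_uncertainty(particles, weights):
    """particles: (N, 2) x/y samples; weights: (N,), normalized.
    Returns the belief mean and the 1-standard-deviation half-widths
    in x and y, as used for the colored ellipses described above."""
    mean = np.average(particles, axis=0, weights=weights)
    var = np.average((particles - mean) ** 2, axis=0, weights=weights)
    return mean, np.sqrt(var)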

Failure Cases

Citation


@article{opipari2022dnbp,
  author  = {Opipari, Anthony and Pavlasek, Jana and Chen, Chao and Wang, Shoutian and Desingh, Karthik and Jenkins, Odest Chadwicke},
  title   = {Differentiable Nonparametric Belief Propagation},
  journal = {ICRA Workshop: Robotic Perception and Mapping: Emerging Techniques},
  year    = {2022}
}

@article{opipari2021dnbp,
  author  = {Opipari, Anthony and Chen, Chao and Wang, Shoutian and Pavlasek, Jana and Desingh, Karthik and Jenkins, Odest Chadwicke},
  title   = {Differentiable Nonparametric Belief Propagation},
  journal = {arXiv preprint arXiv:2101.05948},
  year    = {2021}
}