Material Detail

MAP Estimation with Perfect Graphs

This video was recorded at the Machine Learning Summer School (MLSS), Chicago 2009.

Efficiently finding the maximum a posteriori (MAP) configuration of a graphical model is an important problem which is often addressed using message passing algorithms and linear programming. The optimality of such algorithms is only well established for singly connected graphs such as trees. Recently, along with others, we have shown that matching and b-matching also admit exact MAP estimation under max-product belief propagation. This leads us to consider a generalization of trees, matchings and b-matchings: the fascinating family of so-called perfect graphs. While MAP estimation in general loopy graphical models is NP-hard, for perfect graphs of a particular form the problem is in P. This result leverages recent progress in characterizing perfect graphs (the strong perfect graph theorem, resolved after four decades), linear programming relaxations of MAP estimation, and recent convergent message passing schemes. In particular, we convert any graphical model into a so-called NAND Markov random field. This model is straightforward to relax into a linear program whose integrality can be established in general by testing for graph perfection. This perfection test can be performed with a polynomial-time algorithm. Alternatively, known decomposition tools from perfect graph theory may be used to prove perfection for certain graphs. Thus, a general graph-theoretic framework is provided for determining when MAP estimation in any graphical model is in P, has an integral linear programming relaxation, and is exactly recoverable by message passing.
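To make the link between LP integrality and graph perfection concrete, here is a minimal sketch (not the talk's implementation) of the kind of relaxation involved: the combinatorial core of a NAND Markov random field is a maximum-weight stable set problem, and its clique-constrained LP relaxation is integral exactly when the underlying graph is perfect. The helper name map_via_stable_set_lp, the example graph, and the unit weights are illustrative assumptions; the talk's actual NAND MRF construction and message passing schemes are more general.

```python
# Sketch: clique-constrained LP relaxation of maximum-weight stable set,
# the LP whose integrality is certified by graph perfection.
# Assumes networkx and scipy are available; graph and weights are illustrative.
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def map_via_stable_set_lp(G, weights):
    """Solve  max sum_i w_i x_i
       s.t.   sum_{i in C} x_i <= 1  for every maximal clique C,
              0 <= x_i <= 1.
       When G is perfect, every vertex of this polytope is 0/1, so a
       basic optimal solution is an exact maximum-weight stable set."""
    nodes = list(G.nodes())
    idx = {v: k for k, v in enumerate(nodes)}
    c = -np.array([weights[v] for v in nodes])     # linprog minimizes
    cliques = list(nx.find_cliques(G))             # maximal cliques
    A = np.zeros((len(cliques), len(nodes)))
    for r, clique in enumerate(cliques):
        for v in clique:
            A[r, idx[v]] = 1.0
    # Dual simplex returns a vertex of the polytope, so perfection of G
    # guarantees an integral answer.
    res = linprog(c, A_ub=A, b_ub=np.ones(len(cliques)),
                  bounds=[(0, 1)] * len(nodes), method="highs-ds")
    return {v: res.x[idx[v]] for v in nodes}

# A 5-cycle with one chord has no odd holes or antiholes, hence is perfect;
# the LP solution is therefore 0/1 (an exact stable set of size 2).
G = nx.cycle_graph(5)
G.add_edge(0, 2)
print(map_via_stable_set_lp(G, {v: 1.0 for v in G}))
```

By contrast, dropping the chord leaves an odd hole (the graph is no longer perfect) and the same relaxation admits a fractional optimum, which is the failure mode the perfection test is designed to rule out.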

