Grammatical Inference as a Principal Component Analysis Problem
This video was recorded at the 26th International Conference on Machine Learning (ICML), Montreal, 2009.

One of the main problems in probabilistic grammatical inference consists in inferring a stochastic language, i.e. a probability distribution, in some class of probabilistic models, from a sample of words independently drawn according to a fixed unknown target distribution p. Here we consider the class of rational stochastic languages, composed of the stochastic languages that can be computed by multiplicity automata, which can be viewed as a generalization of probabilistic automata. Rational stochastic languages p have a useful algebraic characterization: all the mappings u_p : v ↦ p(uv) lie in a finite-dimensional vector subspace V_p of the vector space R(E) composed of all real-valued functions defined over E. Hence, a first step in the grammatical inference process can consist in identifying the subspace V_p. In this paper, we study the possibility of using principal component analysis to achieve this task. We provide an inference algorithm which computes an estimate of the target distribution. We prove some theoretical properties of this algorithm and provide results from numerical simulations that confirm the relevance of our approach.
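To make the abstract's key idea concrete, here is a minimal Python sketch of the subspace-identification step: estimate values p(uv) from a sample by relative frequencies, arrange them in a Hankel-style matrix over chosen prefix and suffix sets, and use a singular value decomposition as the PCA step to estimate the dimension of V_p and a basis of the corresponding subspace. The prefix/suffix sets, the toy sample, and the threshold rel_tol are hypothetical choices made for illustration; this is a sketch of the general idea, not the authors' algorithm.

```python
import numpy as np
from collections import Counter

def empirical_hankel(sample, prefixes, suffixes):
    """Hankel-style matrix: H[i, j] estimates p(prefixes[i] + suffixes[j])
    by the relative frequency of the concatenated word in the sample."""
    counts = Counter(sample)
    n = len(sample)
    H = np.zeros((len(prefixes), len(suffixes)))
    for i, u in enumerate(prefixes):
        for j, v in enumerate(suffixes):
            H[i, j] = counts[u + v] / n  # words absent from the sample count as 0
    return H

def estimate_subspace(H, rel_tol=1e-2):
    """PCA step: SVD of the empirical matrix. Singular values above a
    relative threshold give an estimate of dim(V_p); the corresponding
    right singular vectors span the estimated subspace."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    dim = int(np.sum(s > rel_tol * s[0]))
    return dim, Vt[:dim]

# Toy usage with a hand-made sample over the alphabet {a, b} (illustrative only).
sample = ["ab", "a", "aab", "ab", "b", "aab", "a", "ab", "aab", "b"]
prefixes = ["", "a", "b", "aa", "ab"]
suffixes = ["", "a", "b", "ab", "aab"]
H = empirical_hankel(sample, prefixes, suffixes)
dim, basis = estimate_subspace(H)
print("estimated dimension of V_p:", dim)
```

In practice the quality of the estimate depends on the sample size and on how the prefix and suffix sets are chosen; the threshold on singular values trades off between underestimating and overestimating the dimension of V_p.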

