Material Detail

Time-series information and unsupervised representation learning

This video was recorded at Video Journal of Machine Learning Abstracts - Volume 4. Numerous control and learning problems face the situation where sequences of high-dimensional, highly dependent data are available, but little or no feedback is provided to the learner. To address this issue, we formulate the following problem. Given a series of observations $X_1,\dots,X_n$ coming from a large (high-dimensional) space $\mathcal{X}$, find a representation function $f$ mapping $\mathcal{X}$ to a finite space $\mathcal{Y}$ such that the series $f(X_1),\dots,f(X_n)$ preserves as much information as possible about the original time-series dependence in $X_1,\dots,X_n$. We show that, for stationary time series, the function $f$ can be selected as the one maximizing the time-series information $h_0(f(X)) - h_\infty(f(X))$, where $h_0(f(X))$ is the Shannon entropy of $f(X_1)$ and $h_\infty(f(X))$ is the entropy rate of the time series $f(X_1),\dots,f(X_n),\dots$. Implications for the problem of optimal control are presented.
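As a rough illustration of the criterion, the sketch below computes a plug-in estimate of the time-series information $h_0(f(X)) - h_\infty(f(X))$ for a finite-alphabet representation. The entropy rate is approximated here by a conditional block entropy of order k, and the synthetic regime-switching series, the threshold representation, and the block length are illustrative assumptions, not part of the original work.

import numpy as np
from collections import Counter

def shannon_entropy(counts):
    # Shannon entropy (in bits) of the empirical distribution given by counts.
    p = np.asarray(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def block_entropy(symbols, k):
    # Entropy (in bits) of the empirical distribution of length-k blocks.
    blocks = [tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1)]
    return shannon_entropy(Counter(blocks))

def time_series_information(symbols, k=4):
    # Plug-in estimate of h_0(Y) - h_inf(Y) for a finite-alphabet sequence Y.
    # h_0 is the entropy of the marginal of Y_t; the entropy rate h_inf is
    # approximated (an assumption of this sketch) by the conditional block
    # entropy H(Y_k | Y_1..Y_{k-1}) = H(Y_1..Y_k) - H(Y_1..Y_{k-1}).
    h0 = block_entropy(symbols, 1)
    h_rate = block_entropy(symbols, k) - block_entropy(symbols, k - 1)
    return h0 - h_rate

rng = np.random.default_rng(0)

# Hypothetical example: a "sticky" two-regime Markov chain observed through
# additive noise; the regime carries the temporal structure of the raw series x.
n, stay = 20_000, 0.95
regime = np.zeros(n, dtype=int)
for t in range(1, n):
    regime[t] = regime[t - 1] if rng.random() < stay else 1 - regime[t - 1]
x = regime + 0.3 * rng.standard_normal(n)

f_regime = (x > 0.5).astype(int)        # recovers the regime: dependence is kept
f_random = rng.integers(0, 2, size=n)   # ignores x entirely: dependence is lost

print(time_series_information(list(f_regime)))   # well above 0: rate is much smaller than h_0
print(time_series_information(list(f_random)))   # near 0: i.i.d. symbols, rate equals h_0

Under these assumptions, a representation that discards the temporal dependence scores near zero, while one that tracks the underlying regime retains most of its marginal entropy as time-series information.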
