On Optimal Estimators in Learning Theory
This video was recorded at the Machine Learning Summer School (MLSS), Chicago 2005. The talk addresses problems of supervised learning in the setting formulated by Cucker and Smale. Supervised learning, or learning from examples, refers to the process of building, from available data of inputs x_i and outputs y_i, i = 1, ..., m, a function that best represents the relation between the inputs x in X and the corresponding outputs y in Y. The goal is to find an estimator f_z, based on the given data z := ((x_1, y_1), ..., (x_m, y_m)), that approximates well the regression function f_p of the measure p defined on Z = X x Y. We assume that the pairs (x_i, y_i), i = 1, ..., m, are independent and distributed according to p.

There are several important ingredients in the mathematical formulation of this problem. We follow the approach that has become standard in approximation theory and has been used in recent papers: we first choose a function class W (a hypothesis space H) to work with. After selecting a class W, there are two ways to proceed. The first is based on studying the approximation of the L2(p_x) projection f_W := (f_p)_W of f_p onto W, where p_x is the marginal probability measure on X. This setting is known as the improper function learning problem, or the projection learning problem; in this case we do not assume that the regression function f_p belongs to a specific (say, smoothness) class of functions. The second way is based on the assumption that f_p is in W. This setting is known as the proper function learning problem; for instance, we may assume that f_p has some smoothness. We give upper and lower estimates in both settings.
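To make the setup concrete, here is a minimal numerical sketch of the estimation problem described above: data z = ((x_1, y_1), ..., (x_m, y_m)) is drawn i.i.d. from a joint measure p on Z = X x Y, a hypothesis class W is fixed, and an estimator f_z is built by least squares over W and compared to the regression function f_p in the L2(p_x) norm. The particular distribution, the polynomial class W, the noise level, and the sample size are illustrative assumptions made here, not choices from the talk, and least squares over W is only one of many possible estimators.

```python
# A minimal sketch of the supervised learning setup: all concrete choices
# (the measure p, the class W, the noise level, m, n) are illustrative
# assumptions, not taken from the talk.
import numpy as np

rng = np.random.default_rng(0)

# Joint measure p on Z = X x Y: x ~ Uniform[0, 1] and y = f_p(x) + noise,
# so the regression function is f_p(x) = E[y | x].
def f_p(x):
    return np.sin(2 * np.pi * x)

def sample_z(m):
    """Draw m independent pairs (x_i, y_i) distributed according to p."""
    x = rng.uniform(0.0, 1.0, m)
    y = f_p(x) + 0.1 * rng.standard_normal(m)
    return x, y

# Hypothesis class W: polynomials of degree <= n, a finite-dimensional
# stand-in for a smoothness class.
def design(x, n):
    return np.vander(x, n + 1, increasing=True)

def estimator_f_z(x, y, n):
    """Least-squares estimator f_z built from the data z."""
    coef, *_ = np.linalg.lstsq(design(x, n), y, rcond=None)
    return lambda t: design(t, n) @ coef

# Build f_z from m samples and estimate ||f_z - f_p|| in L2(p_x),
# approximating the norm by a Monte Carlo average over fresh draws from p_x.
m, n = 200, 5
x, y = sample_z(m)
f_z = estimator_f_z(x, y, n)

x_test = rng.uniform(0.0, 1.0, 10_000)
l2_error = np.sqrt(np.mean((f_z(x_test) - f_p(x_test)) ** 2))
print(f"||f_z - f_p||_L2(p_x) approx {l2_error:.4f}")
```

In this toy example f_p does lie in a smoothness class, so it illustrates the proper learning setting; for the improper (projection) setting one would instead compare f_z to the L2(p_x) projection f_W of f_p onto W.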