
Using Unlabeled Data in Generalization Error Bounds

This video was recorded at the (Ab)Use of Bounds Workshop, Whistler, 2004. It has been over 30 years since the foundations of sample-complexity-based learning theory were laid, and now seems a good time to assess the program: has this branch of learning theory been useful? The purpose of the workshop is not merely progress assessment. The sample-complexity-bounds community has internal disagreements about what is (and is not) a useful bound, what is (and is not) a tight bound, how (and where) bounds might reasonably be used, and which bounds-related questions should be answered. One goal of the workshop is to debate the merits of these different positions in order to foster better understanding, both internally and externally. The aim is not to converge on the one right way to assess sample complexity or learning performance; rather, we seek to understand the relative merits of diverse approaches and how they relate, recognising that it is very unlikely there is one true and best solution.

The workshop is broadly focused on answers to the questions above. Some specific topics include:

  • Quantitatively tight bounds (what they are, how they are useful, etc.)
  • Position statements and arguments about what bounds should deliver
  • Bounds for clustering and other "non-standard" learning problems
  • The relationship between bounds and algorithms
  • When bounds are useless
  • Issues in bound use (computational and informational complexities)
  • What quantities bounds should depend on (a priori knowledge of the task? unlabeled training data? all training data?)

