MERLOT Search - materialType=Presentation&category=2513&createdSince=2012-11-08&sort.property=dateCreated
http://www.merlot.org:80/merlot/
A search of MERLOT materials. Copyright 1997-2015 MERLOT. All rights reserved.
Mon, 29 Jun 2015 22:49:10 PDT
http://www.merlot.org:80/merlot/images/merlot.gif
4434 results

Pi Day
http://www.merlot.org/merlot/viewMaterial.htm?id=999301
This is a librarian-designed research tool to help people learn about what Pi is and why we should celebrate Pi Day (3.14.15). Includes a video with everything you wanted to know about Pi in 3.14 minutes.

Lecture 11: Statistical Estimation
http://www.merlot.org/merlot/viewMaterial.htm?id=981439
This video was recorded at Stanford Engineering Everywhere EE364A - Convex Optimization I. So in pure statistics there's just parameterized probability distributions, and we have a parameter X. Your job: you get one or more samples from one of these distributions, and your charge is to say something intelligent about which distribution, which is to say which parameter value, generated the sample. So that's statistics. A standard technique is maximum likelihood estimation. In maximum likelihood estimation you do the following. You have an observation Y and you look at the density at Y, or at the probability at Y if it's a distribution with atomic points. ... See the whole transcript at Convex Optimization I - Lecture 11

An Introduction to Instrumental Variables
http://www.merlot.org/merlot/viewMaterial.htm?id=966496
This video was recorded at Workshop on Inverse Problems: Econometry, Numerical Analysis and Optimization, Statistics, Toulouse 2005. What statisticians, numericians, engineers or econometricians mean by "inverse problem" often differs. For a statistician, an inverse problem is an estimation problem for a function which is not directly observed. The data are finite in number and contain errors, whose variance decreases with the number of observations, as in classical inference problems, while the unknown is typically infinite-dimensional, as in nonparametric regression. For numericians, the noise is rather an error induced by the fact that the real data are not directly observed. But the asymptotics differ, as do the regularity conditions imposed on the solution. Finally, in econometrics the structural approach combines data observation with an economic model. The parameter of interest is defined as the solution of a functional equation depending on the data distribution; hence the operator in the underlying inverse problem is in general unknown. Many questions arise naturally in all these fields and are of great interest, both applied and theoretical: identifiability, consistency and optimality in various forms, and iterative methods. There have been great advances in the study of inverse problems within these three communities, and we think it is time for a workshop where the different points of view can be confronted, leading to exchanges of methodologies and several improvements. For instance, nonlinear inverse problems have been studied in numerical analysis, while the statistical literature on this topic is scarce. Unknown inverse operators are common in econometrics, but the problem is not well studied in statistics. On the other hand, adaptive estimation and optimal rates of convergence are common in statistics but not in the other fields.

Regularization: Quadratic Versus Sparsity-enforcing and Deterministic Versus Stochastic Methods
http://www.merlot.org/merlot/viewMaterial.htm?id=966508
This video was recorded at Workshop on Inverse Problems: Econometry, Numerical Analysis and Optimization, Statistics, Toulouse 2005 (the workshop description is the same as in the previous entry).

Slowly but surely, Bayesian ideas revolutionize medical research
http://www.merlot.org/merlot/viewMaterial.htm?id=966851
This video was recorded at International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. Bayesian theory is elegant and intuitive. But elegance may have little value in practical settings. The "Bayesian Revolution" of the last half of the 20th century was irrelevant for biostatisticians. They were busy changing the world in another way, and they neither needed nor wanted more methodology than they already had. The randomized controlled trial (RCT) came into existence in the 1940s and changed medical research from an art into a science, with biostatisticians guiding the process. To make matters worse for the reputation of Bayesians, we seemed to be anti-randomization, and medical researchers feared we wanted to return them to the dark ages. The standard approach to clinical experimentation is frequentist, which has advantages and disadvantages. One disadvantage is that the unit of statistical inference is the entire experiment. As a consequence, the RCT has remained largely unchanged. It is still the gold standard of medical research, but it can make research ponderously slow. And it is not ideally suited for the "personalized medicine" approach of today: identifying which types of patients benefit from which therapies. In this presentation I'll chronicle the increased use of the Bayesian perspective in medical research over this period. An important niche regards adaptive design. I'll describe a variety of approaches, most of which employ randomization, and all of which employ Bayesian updating. Accumulating trial results are analyzed frequently, with the possibility of modifying the trial's future course based on the overall theme of the trial. It is possible to have many treatment arms. Including combination therapies enables learning how treatments interact with each other as well as the way they interact with biomarkers of disease that are specific to individual patients. I will give an example (called I-SPY 2) of a Bayesian adaptive biomarker-driven trial in neoadjuvant breast cancer. The goal is to efficiently identify biomarker signatures for a variety of agents and combinations being considered simultaneously. Longitudinal modeling plays a vital role. Although the Bayesian approach supplies important tools for designing informative and efficient clinical trials, I've learned not to try to change things too abruptly. In particular, we can stay rooted in the well-established frequentist tradition by evaluating false-positive rates and statistical power using simulation. The most exciting aspect of this story is the potential for utilizing Bayesian ideas in the future to build ever more efficient study designs and associated processes for developing therapies, based on the existing solid foundation.

Approximate Bayesian computation (ABC): advances and questions
http://www.merlot.org/merlot/viewMaterial.htm?id=966854
This video was recorded at International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. The lack of closed-form likelihoods has been the bane of Bayesian computation for many years and, prior to the introduction of MCMC methods, a strong impediment to the propagation of the Bayesian paradigm. We are now facing models where an MCMC completion of the model towards closed-form likelihoods seems unachievable and where a further degree of approximation appears unavoidable. In this tutorial, I will present the motivation for approximate Bayesian computation (ABC) methods, the various implementations found in the current literature, as well as the inferential, rather than computational, challenges set by these methods.

Confidence in nonparametric credible sets?
http://www.merlot.org/merlot/viewMaterial.htm?id=966857
This video was recorded at International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. In nonparametric statistics the posterior distribution is used in exactly the same way as in any Bayesian analysis. It supposedly gives us the likelihood of various parameter values given the data. One difference from parametric analysis is that it is often difficult to have an intuitive understanding of the prior, which affects the believability of the posterior distribution as a quantification of uncertainty. A second difference is that the posterior distribution is much more sensitive to the prior: its "fine properties" matter. This is true even in the asymptotic situation when the informativeness of the data increases indefinitely. In this talk we start by reviewing frequentist asymptotic results and insights on posterior distributions in the semi- and nonparametric setting obtained in the last decade. These results show that posterior distributions can be effective in recovering a true parameter, provided some care is taken when choosing a prior. We next go on to ask whether posterior distributions are also capable of giving a correct idea of the error in the reconstructions. Are credible sets in any way comparable to confidence regions? We shall not present an answer to this question, but show by example that it is delicate.

Bayesian dynamic modelling
http://www.merlot.org/merlot/viewMaterial.htm?id=966860
This video was recorded at International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. Since the 1970s, applications of Bayesian time series models and forecasting methods have represented major success stories for our discipline. Dynamic modelling is a very broad field, so this ISBA Lecture on Bayesian Foundations will rather selectively note key concepts and some core model contexts, leavened with extracts from a few time series analysis and forecasting examples from various application fields. The Lecture will then link to and briefly discuss a range of recent developments in exciting and challenging areas of Bayesian time series analysis.

Simbolni zapis (Symbolic Notation)
http://www.merlot.org/merlot/viewMaterial.htm?id=969392
This video was recorded at Logika in množice. We cover the basic logical operations, their symbolic notation, and the translation between symbolic notation and its meaning in Slovenian.

Predavanje 11 (Lecture 11)
http://www.merlot.org/merlot/viewMaterial.htm?id=969394
This video was recorded at Logika in množice. The course Logika in množice (Logic and Sets) is taught in the first year of the first-cycle mathematics programme at the Faculty of Mathematics and Physics, University of Ljubljana. The course is taught in two tracks: the first-cycle programme Matematika and the single-cycle master's programme Pedagoška matematika. The aim of the course is to teach students the basic concepts of logic and set theory. The most important concept encountered in the course is the mathematical proof: we learn what a proof is, and how to write and read one. In addition, we study set theory, since it is the foundation and universal language of modern mathematics. See the official page of the course Logika in množice. Note: VideoLectures.NET points out that the recording of Logika in množice was a trial of the open-source automated recording system Matterhorn; the audio and video quality of the recordings reflect this.