MERLOT Search - category=2595&createdSince=2012-12-29&sort.property=dateCreated
http://www.merlot.org:80/merlot/
A search of MERLOT materials
Copyright 1997-2015 MERLOT. All rights reserved.
Mon, 2 Mar 2015 00:01:19 PST
http://www.merlot.org:80/merlot/images/merlot.gif
4434 results

Introduction to Probability - Probability Examples c-1
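The textbook above covers, per its abstract, elementary probability, density functions and stochastic processes. As a generic worked example in that spirit (not taken from the book), here is the probability that an exponential random variable falls in an interval, computed both from the density and by Monte Carlo:

```python
import math
import random

# P(1 < X < 2) for an exponential density f(x) = lam * exp(-lam * x), lam = 1.
lam = 1.0
exact = math.exp(-lam * 1.0) - math.exp(-lam * 2.0)  # integral of f over (1, 2)

# Monte Carlo check: draw samples and count how many land in the interval.
random.seed(0)
n = 100_000
hits = sum(1 for _ in range(n) if 1.0 < random.expovariate(lam) < 2.0)
estimate = hits / n

print(round(exact, 4))  # 0.2325
print(abs(exact - estimate) < 0.01)
```

The agreement between the closed-form integral and the sampling estimate is the book's "learn by doing" point in miniature.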
http://www.merlot.org/merlot/viewMaterial.htm?id=992042
In this book you will find the basic mathematics needed by engineers and university students. The author helps you understand the meaning and function of mathematical concepts. The best way to learn mathematics is by doing it, and the exercises in this book will help you do just that. Topics such as elementary probability calculus, density functions and stochastic processes are illustrated. This book requires knowledge of Calculus 1 and Calculus 2.

RegressIt -- free Excel add-in for regression and data analysis
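RegressIt itself runs inside Excel; as a language-agnostic sketch of the core computation any linear-regression tool performs (not the add-in's own code), here is an ordinary least squares fit for a single predictor:

```python
# Minimal ordinary least squares fit for y = a + b*x.
def ols_fit(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx    # slope
    a = my - b * mx  # intercept
    return a, b

# Noise-free data on the line y = 2 + 3x recovers the coefficients exactly.
a, b = ols_fit([0, 1, 2, 3], [2, 5, 8, 11])
print(a, b)  # 2.0 3.0
```

Real tools layer diagnostics (residual plots, assumption tests) on top of this arithmetic, which is what the entry's emphasis on model visualization refers to.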
http://www.merlot.org/merlot/viewMaterial.htm?id=988548
Free Excel add-in for linear regression and multivariate data analysis. It offers presentation-quality graphics and supports good analytical practices: data and model visualization, tests of model assumptions, appropriate use of transformed variables in linear models, intelligent formatting of tables and charts, a detailed and well-organized audit trail, and unique identification of the user who performed the analysis. It is a good complement, if not a substitute, for commercial statistical software as far as linear regression modeling and descriptive analysis are concerned. It was developed in a university teaching environment but is also intended for professional use.

An Introduction to Instrumental Variables
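As a toy simulation of the instrumental-variables idea named in the title above (an illustrative sketch, not material from the talk): when a regressor is correlated with the error, OLS is biased, but an instrument recovers the true slope via cov(z, y) / cov(z, x).

```python
import random

# x is endogenous (the confounder u enters both x and y); z is an instrument:
# correlated with x, independent of u.
def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

random.seed(4)
n = 50_000
z = [random.gauss(0, 1) for _ in range(n)]                  # instrument
u = [random.gauss(0, 1) for _ in range(n)]                  # unobserved confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]  # endogenous regressor
y = [2.0 * xi + ui for xi, ui in zip(x, u)]                 # true slope is 2

beta_ols = cov(x, y) / cov(x, x)  # biased upward: u enters both x and y
beta_iv = cov(z, y) / cov(z, x)   # consistent
print(round(beta_ols, 1), round(beta_iv, 1))  # 2.3 2.0
```

The gap between the two estimates is exactly the endogeneity bias the IV construction removes.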
http://www.merlot.org/merlot/viewMaterial.htm?id=966496
This video was recorded at the Workshop on Inverse Problems: Econometrics, Numerical Analysis and Optimization, Statistics, Toulouse 2005. What statisticians, numerical analysts, engineers and econometricians mean by "inverse problem" often differs. For a statistician, an inverse problem is the problem of estimating a function that is not directly observed. The data are finite in number and contain errors whose variance decreases with the number of observations, as in classical inference problems, while the unknown is typically infinite-dimensional, as in nonparametric regression. For numerical analysts, the noise is more an error induced by the fact that the real data are not directly observed, but the asymptotics differ, as do the regularity conditions imposed on the solution. Finally, in econometrics the structural approach combines data observation with an economic model: the parameter of interest is defined as the solution of a functional equation depending on the data distribution, so the operator in the underlying inverse problem is in general unknown. Many questions of great applied and theoretical interest arise naturally in all these fields: identifiability, consistency and optimality in various forms, and iterative methods. There have been great advances in the study of inverse problems within these three communities, and we think it is time for a workshop where the different points of view can be confronted, leading to exchanges of methodologies and several improvements. For instance, nonlinear inverse problems have been studied in numerical analysis, while the statistical literature on this topic is scarce. Unknown inverse operators are common in econometrics, but the problem is not well studied in statistics. On the other hand, adaptive estimation and optimal rates of convergence are common in statistics but not in the other fields.

Regularization: Quadratic Versus Sparsity-enforcing and Deterministic Versus Stochastic Methods
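The title above contrasts quadratic and sparsity-enforcing regularization. For the scalar problem min_x (1/2)(x - y)^2 + penalty(x), the two classic penalties give closed-form shrinkage rules (a textbook illustration, not code from the talk):

```python
def ridge(y, lam):
    # quadratic penalty (lam / 2) * x**2: shrinks toward 0, never exactly 0
    return y / (1.0 + lam)

def soft_threshold(y, lam):
    # l1 penalty lam * abs(x): sets small values exactly to 0 (sparsity)
    if y > lam:
        return y - lam
    if y < -lam:
        return y + lam
    return 0.0

print(ridge(0.3, 1.0))           # 0.15 -- shrunk, but nonzero
print(soft_threshold(0.3, 1.0))  # 0.0  -- zeroed out entirely
print(soft_threshold(2.0, 1.0))  # 1.0
```

Applied coordinate-wise, these two rules are the scalar cores of ridge regression and of iterative soft-thresholding for sparse recovery, respectively.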
http://www.merlot.org/merlot/viewMaterial.htm?id=966508
This video was also recorded at the Workshop on Inverse Problems: Econometrics, Numerical Analysis and Optimization, Statistics, Toulouse 2005; see the workshop description in the previous entry.

Slowly but surely, Bayesian ideas revolutionize medical research
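A basic building block of the Bayesian adaptive trial designs discussed in the talk titled above is conjugate updating of each arm's response rate (a generic Beta-Binomial sketch, not the I-SPY 2 model):

```python
# With a Beta(a, b) prior on a response rate and s responses in n patients,
# the posterior is Beta(a + s, b + n - s); its mean is (a + s) / (a + b + n).
def posterior_mean(responses, n, a=1.0, b=1.0):
    return (a + responses) / (a + b + n)

# After 7 responses in 10 patients on arm A versus 2 in 10 on arm B,
# adaptive randomization can tilt new patients toward the better arm.
arm_a = posterior_mean(7, 10)  # 8/12 = 0.666...
arm_b = posterior_mean(2, 10)  # 3/12 = 0.25
print(arm_a > arm_b)  # True
```

Because the posterior updates after every patient, the trial's course can be modified continuously, which is precisely what a fixed-design RCT cannot do.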
http://www.merlot.org/merlot/viewMaterial.htm?id=966851
This video was recorded at International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. Bayesian theory is elegant and intuitive. But elegance may have little value in practical settings. The "Bayesian Revolution" of the last half of the 20th century was irrelevant for biostatisticians. They were busy changing the world in another way, and they neither needed nor wanted more methodology than they already had. The randomized controlled trial (RCT) came into existence in the 1940s and changed medical research from an art into a science, with biostatisticians guiding the process. To make matters worse for the reputation of Bayesians, we seemed to be anti-randomization, and medical researchers feared we wanted to return them to the dark ages. The standard approach to clinical experimentation is frequentist, which has advantages and disadvantages. One disadvantage is that the unit of statistical inference is the entire experiment. As a consequence, the RCT has remained largely unchanged. It is still the gold standard of medical research, but it can make research ponderously slow. And it is not ideally suited for the "personalized medicine" approach of today, which identifies which types of patients benefit from which therapies. In this presentation I'll chronicle the increased use of the Bayesian perspective in medical research over this period. An important niche is adaptive design. I'll describe a variety of approaches, most of which employ randomization, and all of which employ Bayesian updating. Accumulating trial results are analyzed frequently, with the possibility of modifying the trial's future course based on the overall theme of the trial. It is possible to have many treatment arms. Including combination therapies enables learning how treatments interact with each other as well as how they interact with biomarkers of disease that are specific to individual patients.
I will give an example (called I-SPY 2) of a Bayesian adaptive biomarker-driven trial in neoadjuvant breast cancer. The goal is to efficiently identify biomarker signatures for a variety of agents and combinations being considered simultaneously. Longitudinal modeling plays a vital role. Although the Bayesian approach supplies important tools for designing informative and efficient clinical trials, I've learned not to try to change things too abruptly. In particular, we can stay rooted in the well-established frequentist tradition by evaluating false-positive rates and statistical power using simulation. The most exciting aspect of this story is the potential for utilizing Bayesian ideas in the future to build ever more efficient study designs and associated processes for developing therapies, based on the existing solid foundation.

Approximate Bayesian computation (ABC): advances and questions
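The ABC method named in the title above has a very short core: when the likelihood is unavailable in closed form, draw a parameter from the prior, simulate data from the model, and keep the draw only if the simulated data fall close to the observed data. A minimal rejection-sampler sketch (generic, not code from the tutorial; the observed summary statistic here is assumed for illustration):

```python
import random

random.seed(1)

observed_mean = 3.0  # summary statistic of the "observed" data (assumed)
tolerance = 0.1
accepted = []

for _ in range(20_000):
    theta = random.uniform(0.0, 10.0)                    # draw from the prior
    sim = [random.gauss(theta, 1.0) for _ in range(30)]  # simulate the model
    if abs(sum(sim) / len(sim) - observed_mean) < tolerance:
        accepted.append(theta)                           # close enough: keep

# The accepted draws approximate the posterior for theta.
post_mean = sum(accepted) / len(accepted)
print(len(accepted), round(post_mean, 2))  # a few hundred draws, mean near 3
```

The "inferential rather than computational" questions the tutorial raises concern exactly the choices hidden in this sketch: the summary statistic, the distance, and the tolerance.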
http://www.merlot.org/merlot/viewMaterial.htm?id=966854
This video was recorded at International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. The lack of closed-form likelihoods has been the bane of Bayesian computation for many years and, prior to the introduction of MCMC methods, a strong impediment to the propagation of the Bayesian paradigm. We are now facing models where an MCMC completion of the model towards closed-form likelihoods seems unachievable and where a further degree of approximation appears unavoidable. In this tutorial, I will present the motivation for approximate Bayesian computation (ABC) methods, the various implementations found in the current literature, and the inferential, rather than computational, challenges set by these methods.

Confidence in nonparametric credible sets?
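The question in the title above can at least be checked in the simplest parametric case (a toy sketch, far from the talk's nonparametric setting): for one observation x ~ N(theta, 1) under a flat prior, the 95% credible interval x ± 1.96 coincides with the classical confidence interval, so its frequentist coverage is 95%.

```python
import random

# Simulate coverage: how often does the 95% credible interval built from one
# draw x ~ N(theta, 1) contain the true theta?
random.seed(2)
theta = 1.7
trials = 20_000
covered = sum(
    1 for _ in range(trials)
    if abs(random.gauss(theta, 1.0) - theta) <= 1.96
)
coverage = covered / trials
print(round(coverage, 2))  # close to 0.95
```

The talk's point is that in nonparametric models this happy coincidence can fail, and the prior's "fine properties" decide whether credible sets have honest coverage.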
http://www.merlot.org/merlot/viewMaterial.htm?id=966857
This video was recorded at International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. In nonparametric statistics the posterior distribution is used in exactly the same way as in any Bayesian analysis: it supposedly gives us the likelihood of various parameter values given the data. One difference from parametric analysis is that it is often difficult to have an intuitive understanding of the prior, which affects the believability of the posterior distribution as a quantification of uncertainty. A second difference is that the posterior distribution is much more sensitive to the prior: its "fine properties" matter. This is true even in the asymptotic situation where the informativeness of the data increases indefinitely. In this talk we start by reviewing frequentist asymptotic results and insights on posterior distributions in the semi- and nonparametric setting obtained in the last decade. These results show that posterior distributions can be effective in recovering a true parameter provided some care is taken when choosing a prior. We then go on to ask whether posterior distributions are also capable of giving a correct idea of the error in the reconstructions. Are credible sets in any way comparable to confidence regions? We shall not present an answer to this question, but show by example that it is a delicate one.

Bayesian dynamic modelling
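The canonical member of the family of Bayesian dynamic models named in the title above is the local-level dynamic linear model; a minimal filtering sketch (generic, not the lecture's code):

```python
# state:       theta_t = theta_{t-1} + w_t,  w_t ~ N(0, W)
# observation: y_t     = theta_t + v_t,      v_t ~ N(0, V)
def local_level_filter(ys, m0=0.0, c0=1e6, V=1.0, W=0.5):
    m, c = m0, c0            # current posterior mean and variance of the state
    means = []
    for y in ys:
        r = c + W            # predictive variance after the state evolves
        k = r / (r + V)      # gain: weight given to the new observation
        m = m + k * (y - m)  # updated posterior mean
        c = k * V            # updated posterior variance
        means.append(m)
    return means

ys = [1.0, 1.2, 0.9, 1.1, 1.05]
print([round(m, 3) for m in local_level_filter(ys)])
```

Each step is a full Bayesian update, which is why such models support sequential forecasting: the posterior after observation t is the prior for observation t+1.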
http://www.merlot.org/merlot/viewMaterial.htm?id=966860
This video was recorded at International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. Since the 1970s, applications of Bayesian time series models and forecasting methods have represented major success stories for our discipline. Dynamic modelling is a very broad field, so this ISBA Lecture on Bayesian Foundations will rather selectively note key concepts and some core model contexts, leavened with extracts from a few time series analysis and forecasting examples from various application fields. The Lecture will then link to and briefly discuss a range of recent developments in exciting and challenging areas of Bayesian time series analysis.

Nonparametric Tests between Distributions
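One standard instance of an RKHS criterion for testing whether two samples come from the same distribution, as in the title above, is the maximum mean discrepancy (MMD); a biased-estimator sketch with a Gaussian kernel (a generic illustration, not necessarily the talk's exact statistic):

```python
import math
import random

def gaussian_kernel(a, b, sigma=1.0):
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    # Biased estimate of MMD^2: 0 iff the kernel mean embeddings coincide.
    k = lambda a, b: gaussian_kernel(a, b, sigma)
    m, n = len(xs), len(ys)
    xx = sum(k(a, b) for a in xs for b in xs) / (m * m)
    yy = sum(k(a, b) for a in ys for b in ys) / (n * n)
    xy = sum(k(a, b) for a in xs for b in ys) / (m * n)
    return xx + yy - 2 * xy

random.seed(3)
same = [random.gauss(0, 1) for _ in range(200)]
also_same = [random.gauss(0, 1) for _ in range(200)]
shifted = [random.gauss(2, 1) for _ in range(200)]

# Near 0 for matched samples; clearly positive for the shifted sample.
print(round(mmd2(same, also_same), 3), round(mmd2(same, shifted), 3))
```

The concentration bounds mentioned in the abstract are what turn this easy-to-compute statistic into a test with a calibrated rejection threshold.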
http://www.merlot.org/merlot/viewMaterial.htm?id=969780
This video was recorded at the Workshop on Modelling in Classification and Statistical Learning, Eindhoven 2004. Reproducing kernel Hilbert spaces have mainly been used for estimation; distributional tests in this area were mainly concerned with tests for independence of random variables. We give concentration-of-measure bounds for the latter using an easy-to-compute criterion between spaces of observations. In addition, we show that a similar criterion can easily be used to test the identity of two distributions. In both cases, we prove necessary and sufficient conditions for the tests.

Prequential Statistics
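The prequential (predictive sequential) principle behind the title above evaluates a model only on its one-step-ahead forecasts of each observation given the past. A generic sketch (not the lecture's code) comparing two forecasters of a binary sequence by cumulative log loss:

```python
import math

def prequential_log_loss(forecast, ys):
    total = 0.0
    for t, y in enumerate(ys):
        p = forecast(ys[:t])  # P(y_t = 1), computed from past data only
        total += -math.log(p if y == 1 else 1.0 - p)
    return total

ys = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]

fair_coin = lambda past: 0.5
# Laplace's rule of succession: (ones + 1) / (trials + 2), learned online.
laplace = lambda past: (sum(past) + 1) / (len(past) + 2)

print(prequential_log_loss(fair_coin, ys))  # 10 * ln 2 ≈ 6.93
print(prequential_log_loss(laplace, ys) < prequential_log_loss(fair_coin, ys))  # True
```

Because every forecast is made before its observation arrives, the comparison is honest: no model ever gets to "peek" at the data it is scored on.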
http://www.merlot.org/merlot/viewMaterial.htm?id=973560
This video was recorded at the Machine Learning seminars at the Cambridge University Engineering Department. Machine learning is a multidisciplinary field which aims to understand and design algorithms that automatically extract useful information from data. Since real-world data are typically noisy, ambiguous and occasionally erroneous, a central requirement of a learning system is that it must be able to handle uncertainty. Probability theory provides an ideal basis for representing and manipulating uncertain knowledge, so many successful algorithms in machine learning are based on probabilistic, i.e. Bayesian, inference. Bayesian inference provides a principled framework for machine learning, but exact inference is often intractable, so most algorithms rely on approximations such as variational methods or Markov chain Monte Carlo. More > http://talks.cam.ac.uk/show/archive/9091