MERLOT Search - category=2595&sort.property=dateCreated
http://www.merlot.org:80/merlot/
A search of MERLOT materials
Copyright 1997-2015 MERLOT. All rights reserved.
Thu, 28 May 2015 01:06:09 PDT
4434 results

Elementary Stats
http://www.merlot.org/merlot/viewMaterial.htm?id=1021852
This course is the study of descriptive statistics; probability; discrete and continuous (including binomial, normal and t) distributions; sampling distributions; interval estimation; hypothesis testing; and linear regression and correlation. It is recommended for majors in biology, mathematics, social sciences, education and business.

Elementary Statistics
http://www.merlot.org/merlot/viewMaterial.htm?id=1020983
This course is the study of descriptive statistics; probability; discrete and continuous (including binomial, normal and t) distributions; sampling distributions; interval estimation; hypothesis testing; and linear regression and correlation. It is recommended for majors in biology, mathematics, social sciences, education and business.

Exercises in Statistical Inference with detailed solutions
http://www.merlot.org/merlot/viewMaterial.htm?id=1016990
'Statistical inference is the process of drawing general conclusions from data in a specific sample. Typical inferential problems are: Does alternative A give a higher return than alternative B? Is drug A more effective than drug B? In both cases, solutions are based on observations in a single sample. To solve inferential problems one has to deal with three questions: (i) how to find the best estimate of an unknown quantity, (ii) how to find an interval that covers the true unknown value, and (iii) how to test hypotheses about the value of an unknown quantity. The treatment of these issues can be found in many statistical textbooks. The present book differs from these in that it focuses on problem solving, and only the minimum of theory needed is presented.'

Introduction to Probability - Probability Examples c-1
http://www.merlot.org/merlot/viewMaterial.htm?id=992042
'In this book you find the basic mathematics that is needed by engineers and university students. The author will help you to understand the meaning and function of mathematical concepts. The best way to learn is by doing, and the exercises in this book will help you do just that. Topics such as elementary probability calculus, density functions and stochastic processes are illustrated. This book requires knowledge of Calculus 1 and Calculus 2.'

RegressIt -- free Excel add-in for regression and data analysis
http://www.merlot.org/merlot/viewMaterial.htm?id=988548
A free Excel add-in for linear regression and multivariate data analysis. It offers presentation-quality graphics and supports good analytical practices: data and model visualization, tests of model assumptions, appropriate use of transformed variables in linear models, intelligent formatting of tables and charts, a detailed and well-organized audit trail, and unique identification of the user who performed the analysis. It is a good complement to, if not a substitute for, commercial statistical software as far as linear regression modeling and descriptive analysis are concerned. It was developed in a university teaching environment but is also intended for professional use.

An Introduction to Instrumental Variables
http://www.merlot.org/merlot/viewMaterial.htm?id=966496
This video was recorded at the Workshop on Inverse Problems: Econometry, Numerical Analysis and Optimization, Statistics, Toulouse 2005. What statisticians, numerical analysts, engineers or econometricians mean by "inverse problem" often differs. For a statistician, an inverse problem is an estimation problem for a function which is not directly observed. The data are finite in number and contain errors, whose variance decreases with the number of observations, as they do in classical inference problems, while the unknown is typically infinite-dimensional, as it is in nonparametric regression. For numerical analysts, the noise is rather an error induced by the fact that the real data are not directly observed. But the asymptotics differ, as do the regularity conditions imposed on the solution. Finally, in econometrics the structural approach combines data observation and an economic model. The parameter of interest is defined as the solution of a functional equation depending on the data distribution. Hence the operator in the underlying inverse problem is in general unknown. Many questions of great applied and theoretical interest arise naturally in all these fields: identifiability, consistency and optimality in various forms, and iterative methods. There have been great advances in the study of inverse problems within these three communities, and we think that it is time for a workshop where the different points of view can be confronted, leading to exchanges of methodologies and to several improvements. For instance, nonlinear inverse problems have been studied in numerical analysis, while the statistical literature on this topic is scarce. Unknown inverse operators are common in econometrics, but the problem is not well studied in statistics. On the other hand, adaptive estimation and optimal rates of convergence are common in statistics but not in the other fields.

Regularization: Quadratic Versus Sparsity-enforcing and Deterministic Versus Stochastic Methods
http://www.merlot.org/merlot/viewMaterial.htm?id=966508
This video was recorded at the Workshop on Inverse Problems: Econometry, Numerical Analysis and Optimization, Statistics, Toulouse 2005. What statisticians, numerical analysts, engineers or econometricians mean by "inverse problem" often differs. For a statistician, an inverse problem is an estimation problem for a function which is not directly observed. The data are finite in number and contain errors, whose variance decreases with the number of observations, as they do in classical inference problems, while the unknown is typically infinite-dimensional, as it is in nonparametric regression. For numerical analysts, the noise is rather an error induced by the fact that the real data are not directly observed. But the asymptotics differ, as do the regularity conditions imposed on the solution. Finally, in econometrics the structural approach combines data observation and an economic model. The parameter of interest is defined as the solution of a functional equation depending on the data distribution. Hence the operator in the underlying inverse problem is in general unknown. Many questions of great applied and theoretical interest arise naturally in all these fields: identifiability, consistency and optimality in various forms, and iterative methods. There have been great advances in the study of inverse problems within these three communities, and we think that it is time for a workshop where the different points of view can be confronted, leading to exchanges of methodologies and to several improvements. For instance, nonlinear inverse problems have been studied in numerical analysis, while the statistical literature on this topic is scarce. Unknown inverse operators are common in econometrics, but the problem is not well studied in statistics. On the other hand, adaptive estimation and optimal rates of convergence are common in statistics but not in the other fields.

Slowly but surely, Bayesian ideas revolutionize medical research
http://www.merlot.org/merlot/viewMaterial.htm?id=966851
This video was recorded at the International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. Bayesian theory is elegant and intuitive. But elegance may have little value in practical settings. The "Bayesian Revolution" of the last half of the 20th century was irrelevant for biostatisticians. They were busy changing the world in another way, and they neither needed nor wanted more methodology than they already had. The randomized controlled trial (RCT) came into existence in the 1940s, and it changed medical research from an art into a science, with biostatisticians guiding the process. To make matters worse for the reputation of Bayesians, we seemed to be anti-randomization, and medical researchers feared we wanted to return them to the dark ages. The standard approach to clinical experimentation is frequentist, which has advantages and disadvantages. One disadvantage is that the unit of statistical inference is the entire experiment. As a consequence, the RCT has remained largely unchanged. It is still the gold standard of medical research, but it can make research ponderously slow. And it is not ideally suited for the "personalized medicine" approach of today: identifying which types of patients benefit from which therapies. In this presentation I'll chronicle the increased use of the Bayesian perspective in medical research over this period. An important niche concerns adaptive design. I'll describe a variety of approaches, most of which employ randomization, and all of which employ Bayesian updating. Accumulating trial results are analyzed frequently, with the possibility of modifying the trial's future course based on the overall theme of the trial. It is possible to have many treatment arms. Including combination therapies enables learning how treatments interact with each other as well as how they interact with biomarkers of disease that are specific to individual patients.
I will give an example (called I-SPY 2) of a Bayesian adaptive biomarker-driven trial in neoadjuvant breast cancer. The goal is to efficiently identify biomarker signatures for a variety of agents and combinations being considered simultaneously. Longitudinal modeling plays a vital role. Although the Bayesian approach supplies important tools for designing informative and efficient clinical trials, I've learned not to try to change things too abruptly. In particular, we can stay rooted in the well-established frequentist tradition by evaluating false-positive rates and statistical power using simulation. The most exciting aspect of this story is the potential for utilizing Bayesian ideas in the future to build ever more efficient study designs and associated processes for developing therapies, based on the existing solid foundation.

Approximate Bayesian computation (ABC): advances and questions
http://www.merlot.org/merlot/viewMaterial.htm?id=966854
This video was recorded at the International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. The lack of closed-form likelihoods has been the bane of Bayesian computation for many years and, prior to the introduction of MCMC methods, a strong impediment to the propagation of the Bayesian paradigm. We are now facing models where an MCMC completion of the model towards closed-form likelihoods seems unachievable and where a further degree of approximation appears unavoidable. In this tutorial, I will present the motivation for approximate Bayesian computation (ABC) methods, the various implementations found in the current literature, as well as the inferential, rather than computational, challenges set by these methods.

Confidence in nonparametric credible sets?
http://www.merlot.org/merlot/viewMaterial.htm?id=966857
This video was recorded at the International Society for Bayesian Analysis (ISBA) Lectures on Bayesian Foundations, Kyoto 2012. In nonparametric statistics the posterior distribution is used in exactly the same way as in any Bayesian analysis. It supposedly gives us the likelihood of various parameter values given the data. One difference from parametric analysis is that it is often difficult to have an intuitive understanding of the prior, which affects the believability of the posterior distribution as a quantification of uncertainty. A second difference is that the posterior distribution is much more sensitive to the prior: its "fine properties" matter. This is true even in the asymptotic situation where the informativeness of the data increases indefinitely. In this talk we start by reviewing frequentist asymptotic results and insights on posterior distributions in the semi- and nonparametric setting obtained in the last decade. These results show that posterior distributions can be effective in recovering a true parameter provided some care is taken when choosing a prior. We next go on to ask whether posterior distributions are also capable of giving a correct idea of the error in the reconstructions. Are credible sets in any way comparable to confidence regions? We shall not present a full answer to this question, but we show by example that it is a delicate matter.
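The question closing the last abstract (are credible sets comparable to confidence regions?) can at least be illustrated in the simplest parametric case, where the two do agree. Below is a minimal sketch, not taken from the talk: a normal mean with known variance and a conjugate normal prior, where the 95% credible interval and the 95% confidence interval nearly coincide for moderate sample sizes. The talk's point is that in nonparametric settings this reassuring agreement can fail in delicate ways.

```python
# Sketch: Bayesian credible interval vs. frequentist confidence interval
# for a normal mean with known variance (conjugate normal prior).
# All numbers below (prior, sigma, n) are illustrative assumptions.
import math
import random

random.seed(0)
sigma = 2.0            # known standard deviation of the data
mu0, tau = 0.0, 10.0   # N(mu0, tau^2) prior on the unknown mean theta
theta_true = 1.5       # "true" parameter used to simulate data

n = 200
data = [random.gauss(theta_true, sigma) for _ in range(n)]
xbar = sum(data) / n

# Conjugate update: posterior for theta is N(post_mean, post_var)
post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau**2 + n * xbar / sigma**2)

z = 1.96  # two-sided 95% standard-normal quantile
credible = (post_mean - z * math.sqrt(post_var),
            post_mean + z * math.sqrt(post_var))
confidence = (xbar - z * sigma / math.sqrt(n),
              xbar + z * sigma / math.sqrt(n))

print("95%% credible interval:   (%.3f, %.3f)" % credible)
print("95%% confidence interval: (%.3f, %.3f)" % confidence)
```

With a diffuse prior and n = 200 the two intervals differ only in the third decimal place; the credible interval is always slightly narrower here, since the prior adds a little information. The nonparametric case discussed in the talk has no such automatic agreement.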