Statistics and Probability Seminar Series -- Fall 2009
Thursday 4:00-5:00pm, Room MCS 149 (Tea served from 3:30-4:00pm, Room MCS 153)
September 3, 4-5pm, Room MCS 149 (Thursday)
Pritam Ranjan
Department of Mathematics and Statistics,
Acadia University
Gaussian Process Model as an Interpolator for a Deterministic Computer
Simulator
Abstract
For many expensive computer simulators the outputs are deterministic, and thus
the desired surrogate is an interpolator of the observed data. A Gaussian
spatial process (GP) model is commonly used to model such simulator outputs.
Fitting a GP model to n data points requires repeated inversions of an n x n
correlation matrix R. This computation becomes unstable due to
near-singularity of R, which occurs whenever any pair of design points lies close
together in the input space. The popular approach to overcoming near-singularity
introduces over-smoothing of the data. In this talk, I will present an
iterative approach to constructing a new predictor that gives higher accuracy.
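The instability is easy to reproduce. In the sketch below (the kernel, design points, and nugget size are illustrative choices, not Ranjan's method), two nearly coincident design points drive the Gaussian correlation matrix toward singularity, and the common nugget fix restores conditioning at the price of exact interpolation:

```python
import numpy as np

def gauss_corr(x, theta=10.0):
    """Gaussian correlation matrix R_ij = exp(-theta * (x_i - x_j)^2)."""
    d = x[:, None] - x[None, :]
    return np.exp(-theta * d**2)

# Two design points nearly coincide, driving R toward singularity.
x = np.array([0.0, 0.25, 0.5, 0.5 + 1e-7, 1.0])
R = gauss_corr(x)
print(np.linalg.cond(R))          # enormous condition number

# The popular fix: add a small nugget delta to the diagonal. This restores
# numerical stability, but the fitted GP no longer interpolates the data
# exactly, which is the over-smoothing mentioned above.
delta = 1e-6
R_nugget = R + delta * np.eye(len(x))
print(np.linalg.cond(R_nugget))   # far smaller
```

Since adding delta*I shifts every eigenvalue of the symmetric matrix R up by delta, the smallest eigenvalue is bounded away from zero, which is exactly why the nugget stabilizes the inversion.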
September 10, 4-5pm, Room MCS 149 (Thursday)
Samuel Kou
Department of Statistics,
Harvard University
Multi-resolution Inference of Stochastic Models from Partially Observed Data
Abstract
Stochastic models, diffusion models in particular, are widely used in
science, engineering and economics. Inferring the parameter values
from data is often complicated by the fact that the underlying
stochastic processes are only partially observed. Examples include
inference of discretely observed diffusion processes, stochastic
volatility models, and double stochastic Poisson (Cox) processes.
Likelihood-based inference faces the difficulty that the likelihood is
usually not available, even numerically. The conventional approach
discretizes the stochastic model to approximate the likelihood. To
achieve the desired accuracy, one has to use a very dense
discretization. However, dense discretization usually imposes an
unbearable computational burden. In this talk we will introduce the
framework of Bayesian multi-resolution inference to address this
difficulty. By working on different resolution (discretization) levels
simultaneously and by letting the resolutions talk to each other, we
substantially improve not only the computational efficiency, but also
the estimation accuracy. We will illustrate the strength of the
multi-resolution approach by examples.
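The discretization trade-off is concrete even in a toy model. The sketch below (an Ornstein-Uhlenbeck process with invented parameters, not an example from the talk) compares the exact transition moments over an observation gap with those of composed Euler sub-steps: the approximation error shrinks only as the grid, and hence the computational cost, grows.

```python
import numpy as np

# Toy Ornstein-Uhlenbeck diffusion dX = -theta*X dt + sigma dW, observed
# over a gap of length Delta with no data in between (parameters invented).
theta, sigma, Delta, x0 = 1.0, 0.5, 1.0, 2.0

# Exact transition moments, available here only because OU is Gaussian.
mean_exact = x0 * np.exp(-theta * Delta)
var_exact = sigma**2 * (1 - np.exp(-2 * theta * Delta)) / (2 * theta)

def euler_moments(m):
    """Compose m Euler sub-steps of size h = Delta/m; the law stays Gaussian."""
    h = Delta / m
    a = 1 - theta * h
    mean = x0 * a**m
    var = sigma**2 * h * sum(a**(2 * k) for k in range(m))
    return mean, var

# The discretized transition density improves as the grid is refined,
# but each refinement multiplies the work.
for m in (1, 4, 16, 64):
    mean_m, var_m = euler_moments(m)
    print(m, abs(mean_m - mean_exact), abs(var_m - var_exact))
```

For a general diffusion no exact moments are available, which is why inference must juggle several discretization levels at once, as in the multi-resolution framework of the talk.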
September 17, 4-5pm, Room MCS 149 (Thursday)
Lie Wang
Department of Mathematics,
Massachusetts Institute of Technology
Recovery of Sparse Signals and Shifting Inequality
Abstract
We present a concise and coherent analysis of the constrained l1 minimization
method for stable recovery of high-dimensional sparse signals, in both the
noiseless and noisy cases. The analysis is surprisingly simple and
elementary, yet leads to strong results. In particular, it is shown that the
sparse recovery problem can be solved via l1 minimization under weaker
conditions than previously known in the literature. An oracle inequality is also
derived.
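As a generic illustration of the method (this is the standard noiseless basis-pursuit linear program, not the paper's analysis; the dimensions, random matrix, and solver are illustrative choices), a sparse vector can be recovered from a few linear measurements by minimizing the l1 norm:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 10, 20, 2

# Sparse ground truth and a Gaussian measurement matrix (illustrative).
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n))
y = A @ x_true

# Basis pursuit: min ||x||_1 subject to Ax = y, written as a linear
# program over the split x = x_plus - x_minus with x_plus, x_minus >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.round(x_hat, 3))
```

Because the true signal is itself feasible, the minimizer's l1 norm can never exceed that of the truth; whether it actually coincides with the truth is exactly what recovery conditions of the kind analyzed in the talk guarantee.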
September 24, 4-5pm, Room MCS 149 (Thursday)
Helena Kauppila
Department of Mathematics,
Columbia University
Optimal Consumption with Investment in Incomplete Semimartingale Markets
Abstract
We study consumption and investment decisions in incomplete semimartingale
markets. The agent's utility is modeled by a time-inhomogeneous felicity
function and is of the type introduced by Hindy, Huang, and Kreps. These
utilities are chosen because they satisfy important economic considerations
such as intertemporal substitution. We use a stochastic representation result
to arrive at an appropriate set of dual variables and apply minimax arguments
to establish a relationship between the utility optimization problem and a
related dual problem. The dual variables are no longer adapted processes, but
rather processes with adapted densities.
October 1, 4-5pm, Room MCS 149 (Thursday)
Scott Sheffield
Courant Institute,
New York University
Quantum Gravity: KPZ, SLE, and Conformal Welding
Abstract
What is the most natural notion of a "uniformly random" two-dimensional Riemannian manifold? To what extent can such an object be defined and investigated mathematically? Discrete and continuum versions of these questions have been studied for decades in the mathematical physics literature (where they are closely related to string theory and conformal field theory). I will describe some recent mathematical progress in this area, including a rigorous proof of the KPZ formula (joint with B. Duplantier) and a rigorous connection to certain random fractal curves called Schramm–Loewner evolutions.
October 22, 4-5pm, Room MCS 149 (Thursday)
Paul Dupuis
Department of Applied Mathematics,
Brown University
Analysis and design of splitting schemes for the estimation of rare events
Abstract
Two of the main Monte Carlo techniques for the estimation of probabilities and expectations connected with rare events are those based on importance sampling and those which simulate a (hopefully) cleverly constructed branching process. The first part of this talk will describe the simplest branching type scheme and some of its variants. The second part will outline an approach to the problems of algorithm design and analysis that is applicable to some forms of splitting. It assumes that the process model satisfies a large deviation principle, and obtains necessary and sufficient conditions for good performance. If time permits a few open problems will be mentioned.
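The simplest fixed-splitting scheme is easy to sketch. In the toy example below (the walk, thresholds, and split factor are invented for illustration, not taken from the talk), a random walk with downward drift must climb to a high level before hitting 0; particles that reach an intermediate threshold are branched into several copies, and each final hit is down-weighted by the total branching factor:

```python
import random

random.seed(7)
P_UP = 0.4                 # downward drift makes climbing high a rare event
LEVELS = [4, 8, 12, 15]    # splitting thresholds; 15 is the rare target
SPLIT = 3                  # particles reaching a threshold branch into 3
N0 = 1000                  # initial particles

def run_to(start, level):
    """Walk from `start` until absorbed at 0 (fail) or `level` (success)."""
    s = start
    while 0 < s < level:
        s += 1 if random.random() < P_UP else -1
    return s == level

particles = [1] * N0
for i, level in enumerate(LEVELS):
    survivors = [level for s in particles if run_to(s, level)]
    if i < len(LEVELS) - 1:
        particles = survivors * SPLIT   # branch each survivor
    else:
        particles = survivors

# Each final hit carries weight SPLIT**-(number of splitting stages).
p_hat = len(particles) / (N0 * SPLIT ** (len(LEVELS) - 1))
print(p_hat)   # gambler's-ruin formula gives roughly 1.1e-3 here
```

Crude Monte Carlo with the same budget would see only a handful of hits; splitting spends its effort on trajectories that have already made partial progress toward the rare set, which is the design question the talk's large deviation analysis addresses.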
October 29, 4-5pm, Room MCS 149 (Thursday)
Richard Ellis
Department of Mathematics and Statistics
University of Massachusetts, Amherst
What Is the Most Likely Way for an Unlikely Event To Happen?
Abstract
This talk is an introduction to the theory of large deviations, which studies the asymptotic behavior of probabilities of rare events. The theory has its roots in the work of Ludwig Boltzmann. In 1877 he carried out the first large deviation calculation in science when he showed that large deviation probabilities of the empirical vector can be expressed in terms of the relative entropy function. We apply this insight of Boltzmann to prove a conditional limit theorem that addresses a basic question arising in mathematics as well as in applications: what is the most likely way for an unlikely event to happen? We answer this question in the context of $n$ tosses of a cubic die and other random experiments involving finitely many outcomes. Let $X_i$ denote the outcome of the $i$th toss and define $S_n = X_1 + \cdots + X_n$. If the die were fair, then we would expect that for large $n$, $S_n/n$ should be close to the theoretical mean of 3.5. Given that $n$ is large but that $S_n/n$ is close to a number $z$ not equal to 3.5, the problem is to compute, in the limit $n \to \infty$, the probability of obtaining $k = 1, 2, \ldots, 6$ on a single toss. Interestingly, this conditional limit theorem is intimately related to statistical mechanics because it gives a rigorous derivation, for a random ideal gas, of a basic construction due to Gibbs: namely, the form of the canonical ensemble is derived from the microcanonical ensemble.
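The limiting single-toss distribution can be computed directly. In the sketch below (the bisection solver and its tolerance are my own illustrative choices), for a target average z the exponentially tilted die law p_k proportional to exp(beta*k) is found; this law minimizes relative entropy to the fair die subject to mean z, and by the conditional limit theorem it is the limiting distribution of a single toss:

```python
import math

def tilted(z, lo=-10.0, hi=10.0, tol=1e-12):
    """Tilted die distribution p_k proportional to exp(beta*k) with mean z,
    found by bisection (the map beta -> mean is strictly increasing)."""
    def mean(beta):
        w = [math.exp(beta * k) for k in range(1, 7)]
        return sum(k * wk for k, wk in zip(range(1, 7), w)) / sum(w)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < z:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2
    w = [math.exp(beta * k) for k in range(1, 7)]
    s = sum(w)
    return [wk / s for wk in w]

# Conditioning on an average of 4.5 skews mass toward the high faces.
p = tilted(4.5)
print([round(pk, 4) for pk in p])
```

For z = 3.5 the computation returns the uniform law, recovering the unconditioned fair die as a sanity check.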
November 5, 4-5pm, Room MCS 149 (Thursday)
Daniel Rudoy
School of Engineering and Applied Sciences,
Harvard University
Testing for Local Stationarity in Acoustic Signals: Parametric and Nonparametric Approaches
Abstract
This talk treats nonstationarity detection in the context of speech and audio time series, with broad application to stochastic processes exhibiting locally stationary behavior. Many such processes, in particular information-carrying natural sound signals, exhibit a degree of controlled nonstationarity that varies slowly over time. The talk first describes the basic concepts of these signals and their analysis via local Fourier methods. Parametric approaches appropriate for speech are then introduced by way of time-varying autoregressions, followed by nonparametric approaches based on time-localized power spectral density estimates and an efficient offline bootstrap procedure based on the Wold representation. The talk includes asymptotic results as well as practical examples and applications in speech forensics and audio waveform segmentation.
November 10, 4-5pm, Room MCS 135 (Tuesday -- note the unusual day and room)
Didier Sornette
Department of Management, Technology and Economics
ETH Zurich
Multifractal Earthquakes and Crashes
Abstract
Parisi and Frisch [1985] and Halsey et al. [1986] introduced the concept of multifractality, a generalization of scale invariance, motivated by hydrodynamic turbulence and fractal growth processes, respectively. The multifractal spectrum is now routinely used in many fields as a metric to characterize hierarchical structures in space and time. However, the origin of multifractality is rarely identified. This is certainly true for earthquakes and rupture, for which the possible existence of multifractality is still debated. After a general introduction to the fundamental concepts and their applications, we will discuss a physically based "multifractal stress activation" model of earthquake interaction and triggering based on two simple ingredients: (i) a seismic rupture results from thermally activated processes giving an exponential dependence on the local stress; (ii) the stress relaxation has a long memory. The interplay between these two physical processes is shown to lead to a multifractal organization of seismicity, in the form of a remarkable magnitude dependence of the exponent p of the Omori law for aftershocks, which we observe quantitatively in real catalogs. The generalization of this research to other systems finds that multifractal scaling is a robust property of a large class of continuous stochastic processes constructed as exponentials of long-memory processes. The general mechanism for multifractality found here will also be highlighted in finance by asking: are large market events caused by exogenous shocks, or can they occur endogenously? We ask this question for large stock market events and conclude that endogenous crashes do exist, by testing a remarkable parameter-free prediction of the MRW (multifractal random walk) model of volatility, in which multifractality reveals itself in the time domain rather than in the statistical moments.
November 12, 4-5pm, Room MCS 149 (Thursday)
Evarist Giné
Department of Mathematics
University of Connecticut
Uniform limit theorems for wavelet density estimators with application to sup-norm adaptive estimation of densities
Abstract
The almost sure rate of convergence for linear wavelet density estimators is obtained, as well as a central limit theorem for the distribution functions based on these estimators. These results are applied to show that the hard thresholding wavelet estimator of Donoho, Johnstone, Kerkyacharian and Picard (1995) is adaptive in sup norm to the smoothness of a density. Other new sup-norm adaptive estimators are also considered.
(This is joint work with Richard Nickl.)
November 19, 4-5pm, Room MCS 149 (Thursday)
Harold Kushner
Department of Applied Mathematics,
Brown University
Numerical Methods for Optimal Controls for Nonlinear Stochastic Systems With Delays
Abstract
We are concerned with general nonlinear controlled stochastic dynamical systems with delays. There might be reflecting boundaries, which occur frequently in applications to communications systems. The Markov chain approximation numerical methods are widely used to compute optimal value functions and controls for stochastic as well as deterministic systems.
For the no-delay case, the method covers virtually all models of current interest. The method is robust, and the approximations have physical interpretations as control problems closely related to the original one. These advantages carry over to the delay problem.
The path, control, and reflection terms (if any) might all be delayed. When the control and reflection terms are delayed, current algorithms normally lead to impossible demands on memory. We will discuss an alternative dual approach, based on associating systems with delays with forms of a stochastic wave or transportation equation. This leads to algorithms with much reduced memory and computational requirements. The classical Markov chain method will be reviewed and adapted to the approximation of the optimal value functions and controls for the system with delays. The approach is nonstandard, but the results of numerical computations that will be presented show that it has considerable promise. The convergence theorems are based on weak-convergence martingale methods and require relatively weak conditions.
December 3, 4-5pm, Room MCS 149 (Thursday)
Carl Morris
Department of Statistics,
Harvard University
Probability and Statistical Modeling in Sports
Abstract
Mathematical models have been used to understand sports and to spur fan interest in them, from the perspectives of the media, sports management, and fans themselves. The theory behind such models was understood, and applications discussed, long ago by the mathematical community. In recent years, empowered by the widespread availability of data and personal computers, by popular works such
as those of Bill James (baseball) and Michael Lewis ("Moneyball"), and by the growth of fantasy leagues and sports analytics on many websites, interest in sports data and models has spread to millions.
Examples of sports models that may be discussed include: Bernoulli trials (tennis); Markov chains, martingales, and hierarchical models (baseball); Brownian motion and logistic regression (basketball); and random walks (football). Those of us who enjoy thinking about sports analyses may be able to translate that knowledge into models that apply to other fields. Also, because public interpretations (and misinterpretations) of statistics in sports are commonly revealed, sports analysis offers a glimpse into how such data are regarded in other settings.
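The first model named above, Bernoulli trials for tennis, fits in a few lines: treating each point as an independent win for the server with probability p, the chance of winning a game (including the deuce cycle) has a closed form. The sketch below is a standard textbook computation, not material from the talk:

```python
def p_game(p):
    """Probability the server wins a game when each point is an independent
    Bernoulli(p) win: wins to love/15/30, plus reaching deuce (3 points
    each) and then winning two in a row before the opponent does."""
    q = 1 - p
    before_deuce = p**4 * (1 + 4 * q + 10 * q**2)
    from_deuce = 20 * p**3 * q**3 * p**2 / (1 - 2 * p * q)
    return before_deuce + from_deuce

print(p_game(0.5))   # symmetric points give 0.5 by symmetry
print(p_game(0.6))   # a modest per-point edge is amplified at game level
```

The amplification of a small per-point edge into a large per-game edge is the kind of structural insight such models deliver.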
December 10, 4-5pm, Room MCS 149 (Thursday)
Yoonjung Lee
Department of Statistics
Harvard University
Network Models with Applications to Modeling Financial Data
Abstract
Researchers in finance, risk management, and insurance deal with increasingly complex financial derivatives and massive data sets, which may have been beyond imagination just a few decades ago. I demonstrate through applications that network models can provide useful tools for modeling the complex dynamics of financial data. In the first part of the talk, I explore an alternative way of characterizing the risk of hedge fund returns, whose nonlinear dependence dynamics are difficult to capture within a standard econometric time series model, by embedding a social network model in a hierarchical Bayesian modeling framework. The proposed approach allows us to map hedge fund returns onto an easy-to-interpret structure, to identify a few funds that may be more influential than others, and to reduce a high-dimensional volatility estimation problem to a low-dimensional one. In the second part of the talk, I introduce a pairwise copula construction with applications to modeling credit default swap data. While adopting a D-vine structure, I employ a hidden graphical structure to induce association parameters in t-copulas and to test which pairs can be regarded as conditionally independent. Within this framework, the model selection problem is treated systematically and the model parameters are estimated through a Bayesian procedure.
Information on seminars from previous semesters may be found here: Fall 2005 | Spring 2006 | Fall 2006 | Spring 2007 | Fall 2007 | Spring 2008 | Fall 2008 | Spring 2009