October 1 Gareth E. Roberts
(College of the Holy Cross)
Title: Classifying Four-Body Convex Central Configurations
Abstract: A central configuration is a special arrangement of masses
in the N-body problem where the gravitational force on each body
points toward the center of mass. Central configurations lead to
homothetic and homographic periodic solutions, and play a crucial
role in understanding the topological structure of the integral
manifolds. Here we strive to classify all four-body convex solutions
(i.e., the bodies form a convex quadrilateral), with an eye toward
configurations possessing some type of symmetry or special geometric
property. Special cases considered include co-circular,
trapezoidal, kite, tangential, orthodiagonal, equidiagonal, and
bisecting-diagonal configurations. We find simple coordinates to describe the space and show that the set of all four-body convex central configurations is three-dimensional, a graph over three position variables.
October 8 No Seminar - Columbus Day
October 15 Jean-Pierre Eckmann
(Université de Genève)
Title: Protein: the physics of amorphous learning matter
Abstract: This is work with Jacques Rougemont (Geneva) and Tsvi Tlusty (Ulsan).
The greater context is an attempt to bring the language of mathematics
closer to that of biology, where there exist lots of data but few
conceptual methods. Considering the evolution of functional proteins (those
that do something useful), we view them as amorphous solids. It turns out
that, by describing a protein as a (finite) random network of
connections which change through evolution, a
Green's function approach can explain many properties of proteins
that biologists have observed. The talk requires no prior knowledge of
biological concepts.
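As a toy illustration of the Green's function idea (my sketch, with assumed parameters, not the speakers' construction): model the protein as a random elastic network, take the Green's function to be the Moore-Penrose pseudoinverse of the graph Laplacian, and read off the network's response to a localized force.

```python
import numpy as np

# Toy model (assumed, for illustration): a protein as a random network of
# connections. The Green's function G = pinv(L) of the graph Laplacian L
# gives the linear response u = G f to a localized force f; "mutations"
# would correspond to rewiring edges of A and recomputing G.
rng = np.random.default_rng(2)
n, p = 60, 0.1                                   # sites, connection probability
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                      # symmetric adjacency matrix
L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian
G = np.linalg.pinv(L)                            # Green's function

f = np.zeros(n)
f[0], f[1] = 1.0, -1.0                           # localized force dipole
u = G @ f                                        # displacement response
```

The response u is delocalized even though the force is local, which is one way such network models connect local changes (mutations) to collective mechanical function.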
October 22 Philip Pearce
(MIT)
Title: Learning dynamics from data in biophysical systems
Abstract: In many biological and physical systems, recent experimental advances have led to a vast increase in the quantity and quality of data. Maximizing the understanding gained from these datasets requires a parallel development of sophisticated modeling and data-analysis frameworks. In this talk I will present two examples of learning dynamics from high-dimensional data in biophysical systems. The first example concerns bacterial biofilms: dense, surface-associated, three-dimensional structures populated by cells embedded in a matrix. Using single-cell live-imaging data to constrain the parameters in cell-based simulations, we characterize the cell-cell interactions that generate biofilm morphologies. In the second example, a general framework is presented for learning the dynamics of a system stochastically exploring a high-dimensional effective energy landscape. The method is applicable whenever a system can be approximated by Markovian dynamics of Fokker-Planck type, and is found to accurately reconstruct the folding networks of several proteins.
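A minimal one-dimensional sketch of the second theme (my own toy example, not the speaker's method): simulate overdamped Langevin dynamics on a double-well energy landscape, then recover the drift from trajectory data via the first Kramers-Moyal coefficient, i.e. the mean displacement per unit time binned by position.

```python
import numpy as np

# Assumed toy landscape U(x) = (x^2 - 1)^2 with overdamped Langevin dynamics
# dx = -U'(x) dt + sqrt(2 D) dW; we then "learn" the drift back from data.
rng = np.random.default_rng(0)
dt, n_steps, D = 1e-3, 400_000, 0.5

def drift(x):                       # true drift -dU/dx
    return -4.0 * x * (x**2 - 1.0)

x = np.empty(n_steps)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(2.0 * D * dt), n_steps - 1)
for i in range(n_steps - 1):        # Euler-Maruyama integration
    x[i + 1] = x[i] + drift(x[i]) * dt + noise[i]

# Estimate the drift on a grid: average displacement per unit time per bin.
bins = np.linspace(-1.5, 1.5, 31)
centers = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(x[:-1], bins) - 1
dx = np.diff(x) / dt
est = np.array([dx[idx == k].mean() if np.any(idx == k) else np.nan
                for k in range(len(centers))])
```

The estimated drift should push toward the well at x = 1 for 0 < x < 1 and toward x = -1 for -1 < x < 0, reconstructing the landscape's gradient from data alone.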
October 29 Wen Feng
(College of the Holy Cross)
Title: Spectral stability of vortices for the NLS in higher dimensions
Abstract: PDF
November 5 Christopher Chong
(Bowdoin College)
Title: Emergence of Dispersive Shock Waves in Nonlinear Lattices
Abstract: Dispersive shock waves (DSWs), which connect states of different amplitude via an expanding wave train, are known to form in nonlinear dispersive media subjected to sharp changes in state. In this talk, DSWs in lattices of the Fermi-Pasta-Ulam-Tsingou type are explored. Various long-wavelength approximations are used to describe the formation and structure of the DSWs. The analytical results are complemented by systematic numerical simulations and experiments.
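The classic setup in which such waves develop can be sketched in a few lines (my illustration, with assumed parameters, not taken from the talk): an FPUT-alpha chain with a step ("Riemann") initial condition in velocity, integrated with the velocity Verlet scheme.

```python
import numpy as np

# Assumed toy setup: FPUT-alpha chain u''_n = V'(u_{n+1}-u_n) - V'(u_n-u_{n-1})
# with V(r) = r^2/2 + alpha r^3/3, fixed ends, and a velocity step at midchain;
# this is the kind of sharp change in state from which a DSW emerges.
N, alpha, dt, n_steps = 400, 0.25, 0.05, 4000

def accel(u):
    r = np.diff(u)                    # bond strains r_n = u_{n+1} - u_n
    f = r + alpha * r**2              # V'(r) per bond
    a = np.zeros_like(u)
    a[1:-1] = f[1:] - f[:-1]          # ends held fixed
    return a

def energy(u, v):
    r = np.diff(u)
    return 0.5 * np.sum(v**2) + np.sum(0.5 * r**2 + alpha * r**3 / 3)

u = np.zeros(N)
v = np.where(np.arange(N) < N // 2, 0.2, 0.0)   # velocity step at midchain
v[0] = v[-1] = 0.0
E0 = energy(u, v)

a = accel(u)
for _ in range(n_steps):              # velocity Verlet (symplectic)
    v += 0.5 * dt * a
    u += dt * v
    a = accel(u)
    v += 0.5 * dt * a
```

Plotting the strains np.diff(u) at successive times shows the expanding oscillatory wave train connecting the two states; the symplectic integrator keeps the total energy essentially constant throughout.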
November 12 David Campbell
(Boston University)
Title: The Subtle Road to Equilibrium: Intermittent Many-Body Dynamics at Equilibrium
Abstract: PDF
November 19 Salvatore Pace
(Boston University)
Title: Behavior and Breakdown of Higher-Order Fermi-Pasta-Ulam-Tsingou Recurrences
Abstract: The existence and stability of higher-order recurrences (HoRs), including super-recurrences, super-super-recurrences, etc., have been numerically investigated in the alpha and beta Fermi-Pasta-Ulam-Tsingou (FPUT) lattices for initial conditions in the fundamental normal mode. The results represent a considerable extension of the pioneering work of Tuck and Menzel on super-recurrences. For fixed lattice sizes, apparent singularities in the periods of these HoRs have been studied and are speculated to be caused by nonlinear resonances; these singularities depend very sensitively on the initial energy. Furthermore, the mechanisms by which the super-recurrences in the two models break down as the initial energy and respective nonlinear parameters are increased are compared. The breakdown of super-recurrences in the beta-FPUT lattice is associated with the destruction of the so-called metastable state and hence with relaxation toward equilibrium. For the alpha-FPUT lattice this is not the case: instead, the super-recurrences break down while the lattice is still metastable.
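The underlying (first-order) FPUT recurrence can be reproduced with the parameters of the original FPUT experiment (N = 32, alpha = 1/4, unit amplitude in the fundamental mode; this sketch is mine, not the speaker's code): track the harmonic normal-mode energies and watch energy leave, and eventually return to, mode 1.

```python
import numpy as np

# Original FPUT setup (assumed here for illustration): alpha-lattice with
# fixed ends, all energy initially in the fundamental normal mode. Harmonic
# mode energies E_k = (P_k^2 + omega_k^2 Q_k^2)/2 are sampled over time.
N, alpha, dt = 32, 0.25, 0.1
n_steps, sample_every = 100_000, 100            # total time 10^4

n_idx = np.arange(1, N)                         # interior sites (fixed ends)
k_idx = np.arange(1, N)                         # mode numbers
S = np.sqrt(2.0 / N) * np.sin(np.pi * np.outer(k_idx, n_idx) / N)
omega = 2.0 * np.sin(np.pi * k_idx / (2.0 * N))

def accel(u):
    r = np.diff(u)
    f = r + alpha * r**2                        # V'(r), V(r) = r^2/2 + alpha r^3/3
    a = np.zeros_like(u)
    a[1:-1] = f[1:] - f[:-1]
    return a

def mode_energies(u, v):
    Q, P = S @ u[1:-1], S @ v[1:-1]             # normal-mode coordinates
    return 0.5 * (P**2 + (omega * Q) ** 2)

u = np.sin(np.pi * np.arange(N + 1) / N)        # unit amplitude in mode 1
v = np.zeros(N + 1)
E1_0 = mode_energies(u, v)[0]

shares = []                                     # fraction of energy in mode 1
a = accel(u)
for step in range(n_steps):                     # velocity Verlet
    v += 0.5 * dt * a
    u += dt * v
    a = accel(u)
    v += 0.5 * dt * a
    if step % sample_every == 0:
        shares.append(mode_energies(u, v)[0] / E1_0)
```

Over this run, a substantial fraction of the energy flows into the first few higher modes and then largely returns to mode 1 near t of order 10^4; the higher-order recurrences of the talk live on far longer time scales of this same dynamics.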
November 26 Eric Vanden-Eijnden
(Courant Institute, NYU)
Title: Machine learning, particle systems, and scientific computing
Abstract: The methods and models of machine learning (ML) are rapidly becoming de facto tools for the analysis and interpretation of large data sets. Complex classification tasks such as speech and image recognition, automatic translation, and decision making that were out of reach a decade ago are now routinely performed by computers with a high degree of reliability using (deep) neural networks. These successes suggest that it may be possible to represent high-dimensional functions with controllably small errors, potentially outperforming standard interpolation methods based on Galerkin truncation or finite elements: these have been the workhorses of scientific computing but suffer from the curse of dimensionality. By beating this curse, ML techniques could change the way we perform quantum physics calculations, molecular dynamics simulations, the numerical solution of PDEs, etc. In support of this prospect, in this talk I will present results about the representation error and trainability of neural networks, obtained by mapping the parameters of the neural network to a system of interacting particles. I will also discuss what these results imply for applications in scientific computing.
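The particle picture can be made concrete in a toy example (my sketch of the general mean-field viewpoint, not the speaker's construction): a two-layer ReLU network in the mean-field scaling f(x) = (1/n) sum_i c_i relu(a_i x + b_i), where each parameter triple (a_i, b_i, c_i) is a "particle" coupled to the others only through the shared prediction error.

```python
import numpy as np

# Toy mean-field two-layer network fit to a 1D target. Gradients are scaled
# by n relative to the raw mean-field gradient (the usual time rescaling),
# so each particle takes O(1) steps; all parameters here are assumed.
rng = np.random.default_rng(1)
n, lr, n_iters = 200, 0.1, 2000
xs = np.linspace(-1.0, 1.0, 64)
ys = np.sin(np.pi * xs)                       # target function
m = len(xs)

a = rng.normal(size=n)                        # particle coordinates
b = rng.normal(size=n)
c = rng.normal(size=n)

def loss():
    act = np.maximum(np.outer(xs, a) + b, 0.0)
    return np.mean((act @ c / n - ys) ** 2)

loss0 = loss()
for _ in range(n_iters):
    pre = np.outer(xs, a) + b                 # (m, n)
    act = np.maximum(pre, 0.0)
    err = act @ c / n - ys                    # shared error couples all particles
    grad_c = act.T @ err / m
    grad_pre = np.outer(err, c) * (pre > 0) / m
    a -= lr * (xs @ grad_pre)
    b -= lr * grad_pre.sum(axis=0)
    c -= lr * grad_c
```

In the mean-field analysis, the empirical distribution of the particles (a_i, b_i, c_i) evolves under a deterministic PDE as n grows, which is one route to the representation and trainability results the abstract mentions.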