The one-dimensional KPP equation driven by space-time white noise, \[ \partial_t u = \partial_{xx} u + \theta u - u^2 + u^{1/2}\, dW, \qquad t>0,\ x \in \mathbb{R},\ \theta>0, \qquad \qquad u(0,x) = u_0(x) \geq 0, \] is a stochastic partial differential equation (SPDE) that exhibits a phase transition for non-negative initial conditions of finite mass. Solutions to this SPDE arise, for instance, as (weak) limits of approximate densities of occupied sites in rescaled one-dimensional long-range contact processes. If $\theta$ lies below a critical value $\theta_c$, solutions with finite initial mass die out to $0$ in finite time, almost surely. Above this critical value, the probability of (global) survival is strictly positive. For $\theta>\theta_c$, there exist stochastic wave-like solutions which travel with non-negative linear speed. For initial conditions that are "uniformly distributed in space", the corresponding solutions all lie in the domain of attraction of a unique non-zero stationary distribution. For the (parameter-dependent) nearest-neighbor contact process on $\mathbb{Z}$, an interacting particle system, more is known: a complete convergence theorem holds, that is, a full description of the limiting law of a solution is available, starting from any initial condition. Its proof relies in essence on the progression of so-called edge processes; in these models, edge speeds characterize critical values. In my talk, I will introduce the two models in question (the nearest-neighbor contact process and the KPP equation with noise). I will then explain to what extent the concepts and techniques of the first model can be used to obtain new insights into the second. In particular, I will highlight the problems one encounters when passing from the discrete to the continuous (in space) setting and discuss approaches to resolve them.
High-dimensional and large data sets often impose a significant burden on applications, both in terms of computational complexity and storage cost. To reduce these burdens, it is often necessary to obtain lower-dimensional, preferably binarized, representations of such data that simultaneously preserve important geometric properties, or even permit accurate reconstruction. Thinking of a set $X \subset \mathbb{R}^N$ as either a high-dimensional data set or as a class of signals (such as natural images), in this talk we present fast methods to replace points from $X$ with points in a lower-dimensional cube $\{\pm 1\}^m$. That is, we embed $X$ into the binary cube, and we endow the binary cube with a function (a pseudo-metric) that preserves Euclidean distances in the original space. We discuss applications of these ideas to compressed sensing, which deals with efficient data acquisition, as well as ongoing work related to machine learning.
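As a concrete illustration, here is a minimal sketch of the simplest construction of this kind: signs of Gaussian random projections, with the normalized Hamming distance as the pseudo-metric. This need not coincide with the talk's fast methods; all parameter values below are arbitrary choices for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    N, m = 1000, 256                       # ambient dimension, number of bits

    A = rng.standard_normal((m, N))        # Gaussian random projection

    def embed(x):
        """Map x in R^N to the binary cube {-1, +1}^m."""
        return np.sign(A @ x)

    def pseudo_metric(b1, b2):
        """Normalized Hamming distance between two binary codes; for unit
        vectors x, y its expectation equals angle(x, y) / pi."""
        return np.mean(b1 != b2)

    x = rng.standard_normal(N); x /= np.linalg.norm(x)
    y = x + 0.5 * rng.standard_normal(N); y /= np.linalg.norm(y)
    print(pseudo_metric(embed(x), embed(y)),
          np.arccos(np.clip(x @ y, -1.0, 1.0)) / np.pi)  # these should be close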
Large-scale variational data assimilation problems arise in many applications, such as numerical weather prediction and oceanography. We give a brief introduction to common data assimilation methods and then focus on weak-constraint four-dimensional variational data assimilation (4D-Var), an important method for incorporating observations into a model. The system arising within the minimisation process can be formulated as a saddle point problem. In this talk we present a low-rank approach which exploits the structure of this system using techniques and theory from the solution of matrix equations. Numerical experiments with the linear advection-diffusion equation and the nonlinear Lorenz-95 model demonstrate the effectiveness of a low-rank Krylov subspace solver.
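Schematically (this is the standard saddle-point formulation of weak-constraint 4D-Var, in generic notation that may differ from the talk), the linearised optimality system has the block form \[ \begin{pmatrix} \mathbf{D} & 0 & \mathbf{L} \\ 0 & \mathbf{R} & \mathbf{H} \\ \mathbf{L}^\top & \mathbf{H}^\top & 0 \end{pmatrix} \begin{pmatrix} \lambda \\ \mu \\ \delta x \end{pmatrix} = \begin{pmatrix} b \\ d \\ 0 \end{pmatrix}, \] where $\mathbf{D}$ collects the background and model error covariances, $\mathbf{R}$ the observation error covariances, $\mathbf{L}$ encodes the linearised model dynamics over the assimilation window, and $\mathbf{H}$ the observation operators. A low-rank approach stores the Krylov iterates for this system as products of low-rank factors instead of full space-time vectors.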
We are going to discuss the key role played by propagation of chaos in the derivation of the Boltzmann equation from particle systems. In the case of deterministic (reversible) hard-sphere dynamics, we will see that the emergence of irreversibility in the kinetic limit is related to singular correlations created by the Hamiltonian dynamics. The fluctuations around the Boltzmann equation can be analyzed thanks to the control of the dynamical correlations.
Everyone is invited! Join us for coffee and tea in the common room (B448) half an hour before the talk.
In this talk, I will discuss how machine learning can be used to improve the design of market mechanisms. I will present one particular case study in detail: a machine learning-powered iterative combinatorial auction (CA). The main goal of integrating machine learning (ML) into the auction is to improve preference elicitation, which is a major challenge in large CAs. In contrast to prior work, our auction design uses "value queries" instead of prices to drive the auction. The ML algorithm is used to help the auction mechanism decide which value queries to ask in every iteration. While using ML inside an auction introduces new challenges, we demonstrate how we obtain an auction design that is individually rational, has good incentives, and is computationally tractable. Via simulations, we benchmark our new auction against the well-known combinatorial clock auction (CCA). Our results demonstrate that the ML-powered auction achieves higher allocative efficiency than the CCA, even with only a small number of value queries.
*Joint work with Gianluca Brero (University of Zurich) and Benjamin Lubin (Boston University).
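To make the value-query loop concrete, here is a minimal, self-contained toy version. Everything in it (two bidders, four items, toy synergy valuations, a deliberately simple least-squares learner, brute-force allocation) is a stand-in for illustration; it is not the actual auction design, incentive handling or payment rule from the talk.

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n_items, bidders = 4, [0, 1]

    # hidden true valuations: per-item values plus pairwise synergies
    item_val = rng.uniform(0.0, 1.0, (2, n_items))
    synergy = rng.uniform(0.0, 0.3, (2, n_items, n_items))

    def true_value(b, S):
        v = item_val[b, list(S)].sum()
        return v + sum(synergy[b, i, j] for i in S for j in S if i < j)

    def fit_model(reports):
        # least-squares linear model over item indicators (intentionally simple)
        X = np.array([[float(i in S) for i in range(n_items)] for S, _ in reports])
        y = np.array([v for _, v in reports])
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    def best_allocation(value_fn):
        # brute force over all assignments of items to bidder 0, 1, or nobody
        best, best_v = None, -np.inf
        for assign in itertools.product([0, 1, None], repeat=n_items):
            bundles = {b: tuple(i for i, a in enumerate(assign) if a == b)
                       for b in bidders}
            v = sum(value_fn(b, bundles[b]) for b in bidders)
            if v > best_v:
                best, best_v = bundles, v
        return best

    # seed each bidder with a few random value queries
    reports = {b: [(S, true_value(b, S))
                   for S in [tuple(i for i in range(n_items) if rng.random() < 0.5)
                             for _ in range(3)]] for b in bidders}

    for _ in range(10):                            # learn -> allocate -> query
        models = {b: fit_model(reports[b]) for b in bidders}
        alloc = best_allocation(lambda b, S: models[b][list(S)].sum())
        for b in bidders:
            reports[b].append((alloc[b], true_value(b, alloc[b])))  # value query

    # the final allocation is computed from elicited values only
    elicited = {b: dict(reports[b]) for b in bidders}
    print(best_allocation(lambda b, S: elicited[b].get(S, 0.0)))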
Bio Sketch:
Sven Seuken is a tenured Associate Professor at the Department of Informatics of the University of Zurich and head of the Computation and Economics Research Group. His research lies at the intersection of Computer Science and Game Theory, with a focus on market design. He received his Ph.D. in Computer Science from Harvard University in 2011 under the guidance of David C. Parkes. Since joining the University of Zurich, he has received several awards and research grants, including a Google Faculty Research Award and an ERC Starting Grant. In 2017, he was ranked as one of the "Top 40 under 40" in the category "society and science" by the German business magazine Capital. He is also interested in practical applications of market design: he is the Chief Economist at BandwidthX, Inc. (USA) and he is a senior market design advisor for Tremor Technologies, Inc. (USA).
Rebalancing and relocation in carsharing systems with respect to optimal routing has received some attention in the recent literature. Most research assumes that workers relocate one vehicle and then use a bike to continue to the next location, but other modes, including public transport, second drivers, and trucks, exist as well. We study the economic viability of these modes in different environments. We consider a model in which employees of the operators drive vehicles from cold spots to hot spots, and then use bike, car, or public transit to continue to the next cold spot. Alternatively, vehicles can be loaded onto a truck. We formulate the problem as a hybridization of those different modes. The Multi-Mode Carsharing Relocation Problem (M-CRP) is modeled as a flow-based Vehicle Routing Problem (VRP) with multiple synchronization constraints and a heterogeneous fleet. Synchronization occurs among vehicles (every node is visited by at most one vehicle), but also between workers and vehicles (workers require vehicles to reach another location and vice versa). For this model, we present a branch-and-cut algorithm with Variable MIP Neighborhood Descent. Managerially, we observe that selecting the proper mode, and in particular integrating different modes, allows operators to save substantial costs. We support operators in deciding which mode(s) to choose for their fleet in their given environment and give hints as to which features of the environment (fleet, city ...) drive this choice.
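For orientation, the generic flow-based VRP core of such formulations (in hypothetical notation; the synchronization and fleet-heterogeneity constraints that make up the actual M-CRP contribution are omitted here) minimises routing cost over binary arc variables: \[ \min \sum_{(i,j)\in A} c_{ij}\, x_{ij} \quad \text{s.t.} \quad \sum_{j:(j,i)\in A} x_{ji} = \sum_{j:(i,j)\in A} x_{ij} \;\;\forall i \in V, \qquad \sum_{j:(j,i)\in A} x_{ji} \le 1 \;\;\forall i \in V \setminus \{0\}, \qquad x_{ij}\in\{0,1\}, \] with flow conservation at every node and each relocation node visited at most once; the M-CRP then adds coupling constraints expressing that a worker must be present to move a vehicle and needs a vehicle (or bike, public transport, truck) to reach the next relocation task.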
The concept of splitting tessellation processes in spherical spaces is introduced using Markov process theory. Expectations, variances and covariances of spherical curvature measures induced by a splitting tessellation are studied using tools from spherical integral geometry. The spherical pair-correlation function of the surface measure is also computed explicitly.
TBA
We will discuss several symmetrization procedures (Steiner, Ehrhard, spherical, and circular symmetrization) that are known not to increase the perimeter. First, we will show how it is possible to characterize those sets whose perimeter remains unchanged under symmetrization. After that, we will also characterize rigidity of equality cases. By rigidity, we mean the situation in which every set whose perimeter remains unchanged under symmetrization is trivially obtained through a rigid motion of the (Steiner, Ehrhard or spherical) symmetral. We will achieve this through the introduction of a suitable measure-theoretic notion of connectedness, and through a fine analysis of the barycenter function for a special class of sets. These results are obtained together with several collaborators (Maria Colombo, Guido De Philippis, Francesco Maggi, Matteo Perugini, Dominik Stöger).
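To fix ideas, recall the standard definition (in notation not necessarily that of the talk): the Steiner symmetral of a set $E \subset \mathbb{R}^n$ with respect to the hyperplane $\{x_n = 0\}$ is \[ E^s = \left\{ (x', t) \in \mathbb{R}^{n-1} \times \mathbb{R} \,:\, |t| < \tfrac12\, \mathcal{H}^1\!\left(\{ s \in \mathbb{R} : (x', s) \in E \}\right) \right\}, \] obtained by replacing each vertical slice of $E$ with a centred segment of the same length. Steiner's inequality states that $P(E^s) \le P(E)$, and the talk concerns when equality holds and when equality forces $E$ to agree with $E^s$ up to rigid motions.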
Lattice models and non-discrete models for self-attracting reinforced stochastic motion and their limiting behavior are analyzed. The limiting models are non-linear partial differential equations, whose qualitative behavior is discussed and set into context with the related particle systems. Applications arise, among other fields, in biology: self-organizing microorganisms aggregate for survival and do so by means of self-produced attractive signals.
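A prototypical example of such a limiting equation (stated here for orientation; the talk's limit PDEs may take a different form) is the classical Keller-Segel chemotaxis system, \[ \partial_t \rho = \Delta \rho - \nabla \cdot (\rho\, \nabla c), \qquad \partial_t c = \Delta c + \rho - c, \] in which the population density $\rho$ drifts up the gradient of a chemical signal $c$ produced by the population itself; this is exactly the mechanism of aggregation by self-produced attractive signals.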
Index pairings resulting from pairings of K-groups with Fredholm modules are intrinsically objects of infinite-dimensional analysis, but with the spectral localizer a new player has appeared on the scene and opens the door to numerical K-theory. I will illustrate these ideas by means of an elementary example, namely Fritz Noether's index theorem for the winding number of a continuous and invertible complex function.
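Concretely, Fritz Noether's theorem states that for a continuous, nowhere-vanishing function $a$ on the unit circle, the associated Toeplitz operator $T_a$ is Fredholm with \[ \operatorname{Ind}(T_a) = -\operatorname{wind}(a), \] so an analytic index of an infinite-dimensional operator is computed by a winding number. The spectral localizer replaces this, roughly speaking, by the signature of a finite-dimensional matrix assembled from $a$ and a position operator, which is accessible numerically.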
In this talk we discuss pointwise error estimates for the biharmonic equation on a convex polygonal domain. Finite element discretization of this problem is not straightforward, and various approaches have been proposed over the years; however, they all have some drawbacks. The $C^0$ interior penalty method is a sound alternative. This method is attractive since the finite elements consist of usual Lagrange elements of arbitrary order, and it is straightforward to implement. Pointwise error estimation is a well-developed area for second-order problems; however, there are few such results for fourth-order problems. Many such pointwise error estimates are obtained via Sobolev embedding. This is not satisfactory, since such results are usually not optimal, and often the discrepancy between norms makes them hard to use in applications, for example in optimal control problems. In addition, they are hard to localize. In this work we take an approach that uses a dyadic decomposition and obtain best-approximation results of global and local type for the second derivative. In this talk we discuss the main ideas of the proof and highlight the major differences between the analysis of second- and fourth-order problems.
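For orientation, the standard $C^0$ interior penalty bilinear form (up to notational choices) replaces the $H^2$ form by \[ a_h(u,v) = \sum_{T\in\mathcal{T}_h} \int_T D^2 u : D^2 v \,dx + \sum_{e\in\mathcal{E}_h} \int_e \Big( \{\partial^2_{nn} u\}\,[\partial_n v] + \{\partial^2_{nn} v\}\,[\partial_n u] \Big)\, ds + \sum_{e\in\mathcal{E}_h} \frac{\sigma}{h_e} \int_e [\partial_n u]\,[\partial_n v]\, ds, \] where $\{\cdot\}$ and $[\cdot]$ denote averages and jumps of normal derivatives across edges and $\sigma>0$ is a penalty parameter. Lagrange elements are admissible precisely because only $C^0$ continuity is enforced, with the jump terms controlling the missing $C^1$ continuity.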
Risk measures and benchmark distributions
In this talk we discuss how the commonly used risk measures fail to control the probability of exceeding a given level of losses. In order to address this aspect of tail risk, we propose new classes of law-invariant risk measures that generalize Value at Risk and Expected Shortfall. The key ingredient is a benchmark function which allows one to specify admissibility constraints based on the whole tail of the distribution. As an interesting particular case, stochastic dominance constraints can be used to define acceptability. We present the main finance-theoretical and statistical properties of this new class of risk measures, and for the convex case we show its dual representation. Merits and drawbacks are discussed, and applications to capital adequacy, portfolio risk management and catastrophic risk are presented. Based on joint works with Cosimo Munari, Valeria Bignozzi and Ruodu Wang.
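To fix notation (these are the standard definitions; the generalization is only sketched here and may differ from the talk's), for a loss $L$ and level $\alpha\in(0,1)$, \[ \mathrm{VaR}_\alpha(L) = \inf\{ m \in \mathbb{R} : \mathbb{P}(L > m) \le \alpha \}, \qquad \mathrm{ES}_\alpha(L) = \frac{1}{\alpha} \int_0^\alpha \mathrm{VaR}_u(L)\, du. \] A benchmark function $\alpha(\cdot)$ then allows one to impose a constraint at every loss level simultaneously, e.g. requiring $\mathbb{P}(L - m > x) \le \alpha(x)$ for all $x \ge 0$ and taking the smallest such capital amount $m$; $\mathrm{VaR}$ is recovered when the constraint is imposed at a single level only.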
This talk presents our recent work on hedging derivatives under market frictions by deep learning techniques and on approximation error bounds for random recurrent neural networks. We first consider the problem of optimally hedging a portfolio of derivatives in a scenario-based discrete-time market with transaction costs. Risk preferences are specified in terms of a convex risk measure. Such a framework suffered from numerical intractability until recently, but this has changed thanks to technological advances: using hedging strategies built from neural networks and machine learning optimization techniques, optimal hedging strategies can be approximated efficiently. In order to optimize the choice of the network architecture, we then study approximation properties of reservoir computing systems, putting particular emphasis on random recurrent neural networks. We prove that only a small proportion of the parameters need to be trained and obtain high-probability bounds on the generalization error in terms of the network parameters.
The talk is based on joint work with Hans Bühler, Josef Teichmann and Ben Wood as well as joint work in progress with Lyudmila Grigoryeva and Juan-Pablo Ortega.
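As a minimal illustration of the first part, here is a toy sketch under strong simplifying assumptions (a Black-Scholes path generator, a single call option, proportional costs, an expected-shortfall objective via the Rockafellar-Uryasev representation, and arbitrary parameter values); the framework in the talk is considerably more general. PyTorch is assumed to be available.

    import torch

    torch.manual_seed(0)
    n_paths, n_steps, s0, sigma, dt = 10_000, 30, 1.0, 0.2, 1 / 30
    strike, cost, alpha = 1.0, 1e-3, 0.5

    net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 1))
    w = torch.nn.Parameter(torch.zeros(1))       # expected-shortfall threshold
    opt = torch.optim.Adam(list(net.parameters()) + [w], lr=1e-2)

    for step in range(200):
        z = torch.randn(n_paths, n_steps)        # simulate Black-Scholes paths
        log_s = torch.cumsum(sigma * dt**0.5 * z - 0.5 * sigma**2 * dt, dim=1)
        s = s0 * torch.exp(torch.cat([torch.zeros(n_paths, 1), log_s], dim=1))
        pnl = torch.zeros(n_paths)
        delta_prev = torch.zeros(n_paths)
        for t in range(n_steps):
            feat = torch.stack([s[:, t], torch.full((n_paths,), t * dt)], dim=1)
            delta = net(feat).squeeze(1)                 # NN hedge ratio
            pnl = pnl + delta * (s[:, t + 1] - s[:, t])  # trading gains
            pnl = pnl - cost * s[:, t] * (delta - delta_prev).abs()  # frictions
            delta_prev = delta
        shortfall = torch.relu(s[:, -1] - strike) - pnl  # hedging shortfall
        es = w + torch.relu(shortfall - w).mean() / alpha  # convex risk objective
        opt.zero_grad(); es.backward(); opt.step()
    print(float(es))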
Dereich, Mailler and Mörters (2017) recently analysed a multi-type branching process with reinforcement in continuous time with type space [0,1]. An individual's reproduction depends on its type: the closer the type is to one, the higher the rate of reproduction. Under certain conditions, a positive fraction of the limiting population consists of individuals having maximal fitness, a phenomenon known as condensation. We will give an intuitive introduction to the model and present a necessary and sufficient condition for condensation to occur, namely the non-existence of a Malthusian parameter.
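For context (the standard definition from branching process theory; the precise setting of the talk may differ), a Malthusian parameter is a value $\lambda^* > 0$ solving \[ \int_0^\infty e^{-\lambda^* t}\, \mu(dt) = 1, \] where $\mu$ is the mean reproduction measure of the process; when it exists, the population grows at exponential rate $\lambda^*$, and the condensation criterion above is precisely the failure of this equation to have a solution.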
Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of randomized NC (RNC) matching algorithms. Over the last five years, the TCS community has launched a relentless attack on this question, leading to the discovery of numerous powerful ideas. We give what appears to be the culmination of this line of work: an NC algorithm for finding a minimum weight perfect matching in a general graph with polynomially bounded edge weights, provided it is given an oracle for the decision problem. Consequently, for settling the main open problem, it suffices to obtain an NC algorithm for the decision problem. We believe this new fact has qualitatively changed the nature of this open problem.
All known efficient matching algorithms for general graphs follow one of two approaches: those initiated by Edmonds (1965) or Lovasz (1979). Our algorithm is based on a new approach which was inspired by the multitude of ideas discovered in the last five years. The difficulty of obtaining an NC perfect matching algorithm led researchers to study matching vis-a-vis clever relaxations of the class NC. In this vein, Goldwasser and Grossman (2015) gave a pseudo-deterministic RNC algorithm for finding a perfect matching in a bipartite graph, i.e., an RNC algorithm with the additional requirement that on the same graph, it should return the same (i.e., unique) perfect matching for almost all choices of random bits. A corollary of our reduction is an analogous algorithm for general graphs. This talk is fully self-contained.
Based on the following joint paper with Nima Anari: https://arxiv.org/pdf/1901.10387.pdf
Bio: Vijay Vazirani is a Distinguished Professor at UC Irvine. https://en.wikipedia.org/wiki/Vijay_Vazirani https://www.ics.uci.edu/~vazirani/
The dream of high fidelity has endured for more than 100 years. In recent decades, signal processing has contributed many new solutions and a vast amount of additional knowledge to this field. These include
- simple solutions like matrix multichannel systems,
- audio coding, which changed the world of music distribution and listening habits,
- active noise control (a small sketch follows this list),
- active modification of room acoustics,
- search and recommendation technologies to find your favourite music,
- and many more …
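To make one item on this list concrete, here is a minimal least-mean-squares (LMS) noise canceller in numpy. This is only the textbook core of the idea; practical active noise control uses, for example, the filtered-x LMS variant with a model of the secondary acoustic path, and all parameters below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    noise = rng.standard_normal(n)            # reference noise at the sensor
    h = np.array([0.6, 0.3, 0.1])             # unknown acoustic path to the listener
    d = np.convolve(noise, h)[:n]             # noise as heard at the listener

    w, mu = np.zeros(8), 0.01                 # adaptive filter taps and step size
    x_buf, err = np.zeros(8), np.empty(n)
    for t in range(n):
        x_buf = np.roll(x_buf, 1); x_buf[0] = noise[t]
        y = w @ x_buf                         # anti-noise estimate
        e = d[t] - y                          # residual heard after cancellation
        w += mu * e * x_buf                   # LMS update
        err[t] = e
    print(np.mean(err[:500] ** 2), np.mean(err[-500:] ** 2))  # residual power drops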
The talk will cover topics of audio signal processing technologies in current use and in new research, as well as future applications like personalized auditory realities (PARty).
A spectrahedron is the solution set of a linear matrix inequality (LMI), or equivalently, an affine-linear section of the cone of positive semidefinite matrices of some size. The linear images of spectrahedra are called spectrahedral shadows or projected spectrahedra. They are the feasible sets of semidefinite programming (SDP) and are convex semi-algebraic sets. We will discuss examples and constructions of spectrahedral shadows, and will also present examples of sets that fail to be a spectrahedral shadow. For optimizing over a concretely given spectrahedral shadow, the matrix size in the lifted LMI has a strong influence on performance parameters. We will discuss some recent results on how large these matrix sizes have to be.
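In symbols, a spectrahedron and a spectrahedral shadow take the form \[ S = \{ x \in \mathbb{R}^n : A_0 + x_1 A_1 + \cdots + x_n A_n \succeq 0 \}, \qquad P = \Big\{ x \in \mathbb{R}^n : \exists\, y \in \mathbb{R}^k,\; A_0 + \textstyle\sum_i x_i A_i + \sum_j y_j B_j \succeq 0 \Big\}, \] with real symmetric matrices $A_i, B_j$; the number of auxiliary variables $y$ and the common matrix size of the $A_i, B_j$ are exactly the quantities whose necessary growth the talk addresses.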
A conductance graph on $\mathbb{Z}^d$ is a nearest-neighbor graph in which every edge is assigned a positive weight. We first consider a point process of particles on the nearest-neighbor graph $(\mathbb{Z}^d,E)$ and present some known results about the spread of infection between particles performing continuous-time simple random walks. Next, we consider the more general case of uniformly elliptic random graphs on $\mathbb{Z}^d$ and show that the infection spreads with positive speed in this setting as well. We prove this by developing a general multi-scale percolation argument using a two-sided Lipschitz surface, which can also be used to answer other questions of this nature. Joint work with Alexandre Stauffer.
Sampling from high-dimensional probability distributions is an important challenge in computational statistics, molecular dynamics and machine learning. In this talk, I will present two approaches based on interacting particle systems. The first part will exhibit a framework for constructing an ensemble of coupled Markov processes with fixed marginals. As it turns out, many natural design criteria (including the asymptotic variance of estimators) lead to multi-marginal optimal transport problems that can be analysed in terms of infinitesimal generators and solved approximately. In the second part, I will discuss some geometrical ideas in the theory of gradient flows, aimed towards analysing the recently introduced Stein variational gradient descent algorithm. This is joint work with A. Duncan, G. Pavliotis and L. Szpruch.
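For reference, the standard Stein variational gradient descent update (the published algorithm that the geometric analysis concerns, not the talk's new results) admits a very short numpy implementation; a standard Gaussian target and an RBF kernel with median-heuristic bandwidth are assumed here for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def grad_log_p(x):
        """Score of the target, here a standard Gaussian: grad log p(x) = -x."""
        return -x

    def svgd_step(x, eps=0.1):
        """One SVGD update on the particle array x of shape (n, d)."""
        n = x.shape[0]
        diff = x[:, None, :] - x[None, :, :]          # diff[i, j] = x_i - x_j
        sq = (diff ** 2).sum(-1)                      # pairwise squared distances
        h = np.median(sq) / np.log(n + 1) + 1e-8      # median-heuristic bandwidth
        k = np.exp(-sq / h)                           # RBF kernel matrix
        # repulsion term: sum_j grad_{x_j} k(x_j, x_i) = sum_j (2/h)(x_i - x_j) k_ij
        grad_k = (2.0 / h) * (k[:, :, None] * diff).sum(1)
        phi = (k @ grad_log_p(x) + grad_k) / n        # kernelized Stein direction
        return x + eps * phi

    x = rng.uniform(-5, 5, size=(200, 2))             # badly initialized particles
    for _ in range(500):
        x = svgd_step(x)
    print(x.mean(0), x.var(0))    # roughly [0, 0] and [1, 1] for the Gaussian target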
The Muskat model, which describes the filtration of an incompressible inhomogeneous fluid in a porous medium, has played a crucial role in both optimal transport and convex integration theories. The fast development of microstructures in the unstable regime, where the heavy component of the fluid is above the light one, is a challenge at both the computational and the analytic level. I will first explain how a very simple and computationally effective time-discrete version of the Muskat model can be designed in terms of polar factorization of maps. Then I will discuss various possible formulations (multiphasic, dissipative, ...) of this model. In particular I will explain the local well-posedness of the multiphasic version, in contrast with what happens for the Euler equations with a similar formulation.
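For reference (the incompressible porous media formulation commonly used for this problem, in normalized units; the discretization in the talk starts from an equivalent description), the evolution couples transport of the density with Darcy's law: \[ \partial_t \rho + u \cdot \nabla \rho = 0, \qquad u = -\nabla p - \rho\, e_2, \qquad \nabla \cdot u = 0, \] where $\rho$ is the fluid density, $p$ the pressure and $e_2$ the vertical direction; the unstable regime is the one in which $\rho$ is larger on top.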