We are interested in the fast solution of nonlinear ODE/DAE-constrained mixed-integer optimal control problems. Such problems frequently arise in industrial process control and typically show significant potential for optimization. The hybrid and nonlinear nature of these problems is, however, challenging to deal with. We present a theoretical framework based on a direct and simultaneous method for optimal control and on a partial outer convexification reformulation of the problem, which results in a complementarity programming formulation. We show that this framework enjoys an approximation property in function spaces that yields feasibility and optimality certificates. Our framework also allows for efficient computation of solutions with known approximation quality after discretization in time.
Given a universe U of n elements and a weighted collection {S_1, S_2, ..., S_m} of m subsets of U, the universal set cover problem is to a priori map each element u ∈ U to a set f(u) ∈ {S_1, S_2, ..., S_m} containing u, such that any subset X ⊆ U is covered by ∪_{u ∈ X} f(u). The aim is to find a mapping such that the cost of ∪_{u ∈ X} f(u) is as close as possible to the optimal set cover cost for X. Such problems are also called oblivious or a priori optimization problems.
Unfortunately, for every universal mapping, the cost of ∪_{u ∈ X} f(u) can be Ω(√n) times larger than the optimal cost OPT(X) if the set X is chosen adversarially. One can improve this result if X is chosen randomly (sampling elements u independently) and we look at the average performance. In this setting it can be shown that one can find a mapping such that the average solution cost, i.e., E_X[∪_{u ∈ X} f(u)], is only O(log(nm)) times worse than E[OPT(X)]. In some settings, however, m can be far larger than n, and this yields a worse bound than the O(log n)-approximation desirable for any set cover generalization.
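As a toy illustration of these definitions (the instance, the weights, and the mapping below are hypothetical, chosen only to show a universal mapping overpaying on a realized X), one can compare the cost of the preassigned cover with the brute-force optimum:

```python
from itertools import combinations

# Hypothetical instance: universe of 4 elements, three weighted subsets.
U = {1, 2, 3, 4}
sets = {                       # name -> (members, weight)
    "S1": ({1, 2}, 1.0),
    "S2": ({3, 4}, 1.0),
    "S3": ({1, 2, 3, 4}, 1.5),
}

# A universal mapping f: each element is assigned, a priori, to one set containing it.
f = {1: "S1", 2: "S1", 3: "S2", 4: "S2"}

def universal_cost(X):
    """Cost of the union of the preassigned sets f(u) over u in X."""
    return sum(sets[name][1] for name in {f[u] for u in X})

def opt_cost(X):
    """Brute-force optimal set cover cost for the realized subset X."""
    best = float("inf")
    names = list(sets)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            covered = set().union(*(sets[n][0] for n in combo))
            if X <= covered:
                best = min(best, sum(sets[n][1] for n in combo))
    return best

X = {1, 3}
print(universal_cost(X), opt_cost(X))  # 2.0 1.5: the a priori mapping pays for S1 and S2,
                                       # while OPT covers X with S3 alone
```

On this instance the universal mapping pays 2.0 for X = {1, 3}, whereas the clairvoyant optimum pays 1.5; the Ω(√n) lower bound above says such gaps can be made much larger by an adversarial X.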
In the talk we give a different take on this problem. Note that the benchmark E[OPT(X)] used above is clairvoyant: it looks at an optimum cover that knows the realization of X. That is similar to the benchmark used, for example, in online algorithms. However, even though the problem has a strong uncertainty flavor, the solution is in fact static, i.e., it is just a fixed mapping between elements and subsets. This stands in contrast to a solution for an online problem, where the online algorithm makes decisions step by step as the input arrives.
Therefore, we shall look at an optimum such mapping f_{OPT}: U → {S_1, S_2, ..., S_m} and compare our solution against E[∪_{u ∈ X} f_{OPT}(u)]. In the talk we will show how one can obtain an O(log n)-approximation in this setting, which matches the hardness for the deterministic variant of set cover. This result holds under a more demanding model than the previous one, because we allow X to be sampled from a black-box distribution, allowing for arbitrary correlations between elements (in the previous work elements were sampled independently).
Time permitting, we will also show that one can consider metric facility location in this setting and obtain a constant-factor approximation.
Joint work with Fabrizio Grandoni, Stefano Leonardi, Michał Włodarczyk.
The log-modulus of the characteristic polynomial of GUE matrices is an example of a regularized, log-correlated and asymptotically Gaussian process. Exploiting the methods of statistical mechanics, we aim to compute explicitly the distribution of the value and position of the global maximum of the modulus of the GUE characteristic polynomial. To that end we have employed a conjecture on the nature of the freezing phenomena seen in the statistical mechanics of log-correlated random energy models. An important role in our approach is played by a conjectured high-temperature duality. We provide numerical evidence supporting our conjectures. The talk will be based on joint works with Nick Simm (arXiv:1503.07110) and with Pierre Le Doussal (arXiv:1511.04258).
Diffusion is the tendency of a substance to evenly spread into an available space, and is one of the most common physical processes. The classical models of diffusion lead to well-known equations. However, in recent times, it has become evident that many of the assumptions involved in these models are not always satisfactory or not even realistic at all. Consequently, different models of diffusion have been proposed, fractional diffusion being one of them. The latter has received a great deal of attention recently, mostly fueled by applications in diverse areas such as finance, turbulence and quasi-geostrophic flow models, image processing, peridynamics, biophysics, and many others.
This talk will serve as an introduction to fractional diffusion equations, with fractional derivatives in both space and time. A novel PDE result by Caffarelli and Silvestre '07 has led to innovative schemes to realize fractional order operators. We will discuss these numerical methods and their application to PDE-constrained optimization problems.
In astrophysics, large stellar systems such as galaxies or globular clusters are often modeled by a nonlinear system of partial differential equations known in mathematics as the Vlasov-Poisson system. In my talk I will first introduce this system and some of its mathematical properties. I will then address some recent developments and results that relate to current questions in astrophysics (the "missing mass problem").
Spectrahedra, the feasible regions of semidefinite programs, form a rich class of convex bodies that properly contains the class of polyhedra. While the class of polyhedra is closed under linear projections, the class of spectrahedra is not. A containment problem is the task of deciding the set-theoretic inclusion of two given sets. We study containment of (projections of) polyhedra and spectrahedra, starting from a geometric point of view. Using connections to the theory of (completely) positive maps and real algebraic geometry allows us to state hierarchies of semidefinite programs that decide the containment problem. We prove several convergence results.
The famous Braess paradox describes the counter-intuitive phenomenon that, in certain settings, adding resources, such as building a new road in a congested network, may in fact lead to larger equilibrium costs for the players. In this talk, we consider general nonatomic congestion games and give a characterization of the combinatorial property of strategy spaces for which the Braess paradox does not occur. In short, matroid bases are precisely the required structure. We prove this characterization via two novel sensitivity results for convex separable optimization problems over polymatroid base polyhedra, which may be of independent interest.
This is joint work with S. Fujishige, M. Goemans, B. Peis and R. Zenklusen.
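The paradox itself can be checked on the classic four-node textbook instance (a standard example, not taken from the talk): one unit of flow from s to t over two routes, each with one variable-latency leg l(x) = x and one constant leg l(x) = 1, plus an added zero-latency shortcut.

```python
# Classic Braess example: one unit of flow from s to t.
def route_cost(legs):
    """Total latency along one route; legs = [(flow_on_leg, latency_fn), ...]."""
    return sum(latency(flow) for flow, latency in legs)

# Equilibrium before the new road: flow splits evenly over the two routes,
# so each player travels a leg with latency x at x = 0.5 and a leg with latency 1.
cost_before = route_cost([(0.5, lambda x: x), (0.5, lambda x: 1.0)])

# After adding a zero-latency shortcut, the unique equilibrium sends all flow
# through both variable-latency legs, each now carrying the full unit of flow.
cost_after = route_cost([(1.0, lambda x: x),
                         (1.0, lambda x: 0.0),
                         (1.0, lambda x: x)])

print(cost_before, cost_after)  # 1.5 2.0: every player is worse off after the addition
```

The matroid characterization in the talk explains exactly which strategy-space structures rule out such instances.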
We consider existence and uniqueness of subclasses of stationary hard-core particle systems arising as thinnings of stationary particle processes. These subclasses are defined by natural maximality criteria. We investigate two specific criteria, one related to the intensity of the hard-core particle process, the other one being a local optimality criterion. In fact, the criteria are equivalent under suitable moment conditions. We show that stationary hard-core thinnings satisfying such criteria exist and are frequently unique. More precisely, uniqueness holds in subcritical and barely supercritical regimes of continuum percolation. Additionally, based on the analysis of a specific example, we argue that fluctuations in grain sizes can play an important role for establishing uniqueness at high intensities. This talk is based on joint work with Günter Last.
I discuss strong convergence with optimal rates for a spatial discretization of the backward stochastic heat equation, and the forward-backward stochastic heat equation from linear-quadratic stochastic optimal control. A full discretization based on the implicit Euler method for a temporal discretization, a least squares Monte-Carlo method, and the new stochastic gradient descent method are then proposed, and simulation results are reported. This is joint work with T. Dunst (Universität Tübingen).
Conley theory studies qualitative features of dynamical systems by means of topological invariants of isolated invariant sets. Since the invariants are stable under perturbations and computable from a finite sample, they provide a tool for rigorous, qualitative numerical analysis of dynamical systems.
In this talk, after a brief introduction to Conley theory, I will present its recent extension to combinatorial multivector fields, a generalization of the concept introduced by R. Forman in his discrete (combinatorial) Morse theory. I will also show some numerical examples indicating potential applications in the study of sampled dynamical systems.
Persistent homology assigns a topological descriptor to a real-valued function defined on a topological space. This descriptor is called a persistence diagram and comprises a collection of intervals summarizing the dimension and 'size' of the homological features of the associated sublevel filtration. Two immediate questions arise: 1) is the descriptor stable with respect to perturbation of the function? 2) can we approximate the descriptor? By introducing the language of interleavings we shall see how positive answers have been given to both these questions. From here we shall discuss similar questions related to other, albeit related, topological descriptors.
We present a general framework and two related inference methods for regression of (correlated, non-Gaussian) functional responses on multiple scalar and functional covariates. By representing the associated inference in terms of a generalized additive mixed model for scalar responses, we are able to make use of established algorithms to fit complex and versatile models for functional responses under a large variety of distributional assumptions. We discuss approaches to model spatial, temporal, spatio-temporal and hierarchical correlation of functional responses in this framework and describe some applied results, as well as related applications of these ideas to scalar responses in distributed lag models and cumulative, time-varying effects of time-varying covariates in time-to-event models.
Vector-valued measures satisfying a PDE constraint appear in various areas of nonlinear PDE theory and the calculus of variations. Often, the "shape" of singularities that may be contained in these measures, such as jumps or fractal parts, is of particular interest. In this talk, I will first motivate how variational problems in crystal plasticity naturally lead to such PDE-constrained measures and how their shape is physically relevant. Then, I will present a recent general structure theorem for the singular part of any vector-valued measure satisfying a linear PDE constraint. As applications, we obtain a simple new proof of Alberti's seminal Rank-One Theorem on the shape of derivatives of functions with bounded variation (BV), an extension of this theorem to functions of bounded deformation (BD), and a structure theorem for families of normal currents. Further, our structure result for currents implies the solution to the conjecture that if every Lipschitz function is differentiable almost everywhere with respect to some positive measure (i.e. the Rademacher theorem holds with respect to that measure), then this measure has to be absolutely continuous relative to Lebesgue measure. This is joint work with Guido De Philippis (SISSA Trieste).
In this talk, we investigate a model for lipid raft formation and dynamics in biological membranes. The model describes the lipid composition of the membrane and an interaction with cholesterol. To account for cholesterol exchange between cytosol and cell membrane we couple a bulk-diffusion to a surface PDE on the membrane. The latter describes a relaxation dynamics for an energy taking lipid-phase separation and lipid-cholesterol interaction energy into account. It takes the form of an (extended) Cahn–Hilliard equation. We choose diffuse-interface methods, in order to simulate the coupled bulk-surface system. Moreover, we consider the limit of infinitely fast bulk-diffusion, where one obtains a reduced non-local surface PDE model. This reduced model is numerically treated by surface finite elements and its stationary points are analyzed.
A temporal graph is a graph in which the edge set can change from step to step. The temporal graph exploration problem TEMPEX is the problem of computing a foremost exploration schedule for a temporal graph, i.e., a temporal walk that starts at a given start node, visits all nodes of the graph, and has the smallest arrival time. We consider only temporal graphs that are connected at each step. For such temporal graphs with \(n\) nodes, we show that it is NP-hard to approximate TEMPEX with ratio \(O(n^{1-\epsilon})\) for any \(\epsilon>0\). We also provide an explicit construction of temporal graphs that require \(\Theta(n^2)\) steps to be explored. We then consider TEMPEX under the assumption that the underlying graph (i.e. the graph that contains all edges that are present in the temporal graph in at least one step) belongs to a specific class of graphs. Among other results, we show that temporal graphs can be explored in \(O(n^{1.5}k^2\log n)\) steps if the underlying graph has treewidth \(k\) and in \(O(n\log^3 n)\) steps if the underlying graph is a \(2\times n\) grid. We also show that sparse temporal graphs with regularly present edges can always be explored in \(O(n)\) steps.
Joint work with Michael Hoffmann and Frank Kammer.
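A minimal sketch of the model may help fix ideas (the representation and code below are illustrative assumptions, not the talk's algorithm): a temporal graph stored as one edge set per step, with a temporal walk allowed to traverse at most one present edge per step. The flooding routine computes earliest-arrival times from a start node, which is the basic reachability primitive; the foremost exploration schedule itself, a single walk visiting all nodes, is the NP-hard object discussed above.

```python
def earliest_arrivals(edge_sets, start):
    """Earliest time each node is reachable by a temporal walk from start.

    edge_sets[t-1] is the set of undirected edges present at step t (t = 1, 2, ...).
    A walk that reaches a node at step s may only use edges of steps t > s.
    """
    arrival = {start: 0}
    for t, edges in enumerate(edge_sets, start=1):
        reached_now = set()
        for u, v in edges:
            if u in arrival and arrival[u] < t and v not in arrival:
                reached_now.add(v)
            if v in arrival and arrival[v] < t and u not in arrival:
                reached_now.add(u)
        for w in reached_now:       # update after the step: one edge per step,
            arrival[w] = t          # so no chaining of two edges within step t
    return arrival

# Edges present at steps 1, 2, 3 of a 4-node temporal graph on {0, 1, 2, 3}:
steps = [{(0, 1), (2, 3)}, {(1, 2)}, {(0, 3), (1, 3)}]
print(earliest_arrivals(steps, 0))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

Note that node 3 is adjacent to reached nodes only at step 3, even though the edge (2, 3) exists at step 1: presence and reachability must line up in time, which is what makes exploration of temporal graphs so much harder than of static ones.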
In this talk we discuss the analysis of a cross-diffusion PDE system for a mixture of hard spheres, which was derived by Bruna and Chapman from a stochastic system of N interacting Brownian particles using the method of matched asymptotics. Gradient-flow techniques have become a well established tool to study this kind of nonlinear PDE; hence expressing a nonlinear diffusion equation as a gradient flow of an entropy is a very desirable feature. The PDE system under consideration satisfies a gradient flow structure if the particles have the same size. For particles of different sizes we can interpret the equations as an asymptotic gradient flow structure (which results from the asymptotic expansion in the derivation). We shall use this asymptotic gradient flow structure to establish existence of stationary solutions and stability close to equilibrium. Furthermore we discuss global-in-time existence for the full gradient flow system and illustrate the behavior of the model with various numerical simulations.
Joint work with M. Bruna (University of Oxford), M. Burger (WWU Münster) and Helene Ranetbauer (RICAM)
We consider the two-dimensional Hubbard model on the honeycomb lattice as a model of single layer graphene with local interactions. We are concerned with the cases of a half-filled energy band and weak interaction strength at low temperatures. The effective action, generating the connected, amputated correlation functions, is studied by means of renormalisation group methods where scale decomposition is carried out with respect to the spatial part of momentum only. For this purpose, we investigate the decay properties and special determinant bounds of the scale dependent covariance, applying results by Pedra, Salmhofer and Wieczerkowski. In this context we derive appropriate power counting bounds. Neglecting possible self-energy corrections, analyticity of the effective action is proven for sufficiently weak coupling. For an infinite volume, the analyticity is preserved in the limit of zero temperature.
In this talk I will present a novel convex relaxation technique for a particular class of energy functionals consisting of a pointwise nonconvex data term and a total variation regularization as frequently used in image processing and computer vision problems. The method is based on the technique of functional lifting in which the minimization problem is reformulated in a higher dimensional space in order to obtain a tighter approximation of the original nonconvex energy. Several numerical experiments demonstrate the effectiveness of the proposed approach.
Computational chemistry has attracted much attention since this year's choice of the Nobel prize winners in chemistry for their contributions to multi-scale models such as the QM/MM method. The large majority of chemically interesting phenomena take place in the condensed phase, where the environment (e.g., solvent) can play a crucial role in determining the structure, the properties and the dynamics of the system to be studied. In practice, accounting for all solvent molecules in either a Molecular Dynamics (MD) or a Quantum Mechanical (QM) computation is infeasible due to the complexity of the underlying equations. One particular multi-scale model treats the solvent environment as a conducting continuum medium (the COSMO model); the resulting electrostatic contribution to the solvation energy can then be computed by solving a Poisson equation in the Van der Waals cavity of the solute molecule.
The mathematical problem is therefore set (and well-posed) and we illustrate how to approximate its solution using Schwarz's domain decomposition method adapted to integral equations. In this manner, the problem can be solved iteratively and the coupling of the local problems is determined by the connectivity of the molecule (in contrast to fast multipole-based boundary element solvers). The resulting numerical scheme is extremely fast and a linear scaling of the computing resources with respect to the number of atoms of the solute molecule can be achieved. In numerical examples we show how this approach outperforms existing methods such as the fast multipole method-based boundary element method.
Computing the unstable and stable homotopy groups of spheres has been a central problem and driving force in homotopy theory since its inception. The situation in motivic unstable and stable homotopy theory is even more complicated and mysterious; in spite of this, serious progress has been made in understanding the homotopy groups of motivic spheres, starting with the groundbreaking work of Morel. We will give an overview of recent progress in our understanding of the stable motivic homotopy groups of spheres and the consequences this has for our understanding of the motivic stable homotopy category.
Compressive sensing enables accurate recovery of approximately sparse vectors from incomplete information. The first part of the talk gives a (short) introduction to this area. In the second part of the talk we consider the numerical solution of parametric operator equations where the parameter domain is high-dimensional. In fact, one can show that the solution of certain parametric operator equations (parametric PDEs) is analytic in the parameters which can be exploited to show convergence rates for nonlinear (sparse) approximation. Building on this fact, we show that methods from compressive sensing can be used to compute approximations from samples (snapshots) of the parametric operator equation for randomly chosen parameters, which in turn can be computed by standard techniques including Petrov-Galerkin methods. We provide theoretical approximation rates for this scheme. Moreover, we then pass to a multilevel version of this scheme, similarly to multilevel Monte Carlo methods, and show that the overall complexity of computing parametric solutions can be further reduced.
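To make the first part of the abstract concrete, here is a small sketch (a standard textbook decoder, not the talk's method) of sparse recovery from incomplete information via orthogonal matching pursuit; the instance sizes and seed below are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily build a k-sparse estimate of x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
A = rng.standard_normal((25, 50)) / np.sqrt(25)  # 25 random measurements of a 50-dim vector
x = np.zeros(50)
x[[3, 17, 41]] = [1.0, -2.0, 0.5]                # 3-sparse ground truth
x_hat = omp(A, A @ x, k=3)
print(np.count_nonzero(x_hat), np.linalg.norm(x_hat - x))
```

With Gaussian measurement matrices of this aspect ratio, such greedy decoders recover the sparse vector with high probability; in the parametric-PDE setting of the talk, the "measurements" are snapshots of the solution at randomly chosen parameters.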