We consider the dynamic version of a major discrete tomography task: reconstruct the trajectories of moving particles from a small number of projections. The most relevant case in recent plasma physics applications involves projections from only two directions. It turns out that the computational complexity of these reconstruction tasks depends strongly on the type of prior information that is incorporated in the reconstruction. In this talk we present results that provide a particularly sharp line between tractable (polynomial-time solvable) and intractable (NP-hard) problem instances. This is joint work with Peter Gritzmann (TU München).
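For context, the static single-snapshot version of the two-direction case is the classical problem of reconstructing a binary matrix from its row and column sums, which is polynomial-time solvable by the Gale-Ryser greedy. A minimal sketch (illustrative only; the dynamic trajectory problem of the talk is much harder):

```python
def reconstruct_from_two_projections(row_sums, col_sums):
    """Gale-Ryser greedy: build a 0/1 matrix with the given row and column
    sums (two orthogonal projections), or return None if none exists."""
    if sum(row_sums) != sum(col_sums):
        return None
    n = len(col_sums)
    remaining = list(col_sums)               # column demand still to satisfy
    rows = [None] * len(row_sums)
    # fill rows in order of decreasing row sum, always using the columns
    # with the largest remaining demand
    for i in sorted(range(len(row_sums)), key=lambda i: -row_sums[i]):
        cols = sorted(range(n), key=lambda j: -remaining[j])[:row_sums[i]]
        row = [0] * n
        for j in cols:
            if remaining[j] == 0:            # infeasible projection data
                return None
            row[j] = 1
            remaining[j] -= 1
        rows[i] = row
    return rows
```

For example, reconstruct_from_two_projections([2, 1, 2], [2, 2, 1]) returns a 0/1 matrix whose row sums are (2, 1, 2) and whose column sums are (2, 2, 1).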
We study the design and optimization of polyhedral patterns, which are patterns of planar polygonal faces on freeform surfaces. Working with polyhedral patterns is desirable in architectural geometry and industrial design. However, the classical tiling patterns on the plane must take on various shapes in order to faithfully and feasibly approximate curved surfaces. We define and analyze the deformations these tiles must undergo to account for curvature, and discover the symmetries that remain invariant under such deformations. We propose a novel method to regularize polyhedral patterns while maintaining these symmetries, yielding a plethora of aesthetic and feasible patterns. Finally, we raise the question of faithful approximations of smooth surfaces by polyhedral surfaces and present some initial results in this direction.
Advective transport of scalar quantities through surfaces is of fundamental importance in many scientific applications. From the Eulerian perspective of the surface, it can be quantified by the well-known flux integral. The recent development of highly accurate semi-Lagrangian methods for solving scalar conservation laws and of Lagrangian approaches to coherent structures in turbulent (geophysical) fluid flows necessitates a new approach to transport, i.e. flux accumulated over time, from the (Lagrangian) material perspective. In my talk, I present a Lagrangian framework for calculating transport of conserved quantities through a given (hyper-)surface in n-dimensional, fully aperiodic, volume-preserving flows.
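For orientation (notation mine, not from the talk): for a conserved scalar density \(c\) advected by a velocity field \(\mathbf{v}\), the Eulerian transport through a fixed hypersurface \(\Sigma\) with unit normal \(\mathbf{n}\) over a time window \([0,T]\) is the accumulated flux integral

```latex
\Phi \;=\; \int_0^T \!\!\int_{\Sigma} c(\mathbf{x},t)\,
  \mathbf{v}(\mathbf{x},t)\cdot\mathbf{n}(\mathbf{x})\;
  \mathrm{d}A\,\mathrm{d}t .
```

The Lagrangian framework of the talk evaluates this same quantity from the material perspective, i.e. by following fluid parcels rather than integrating pointwise fluxes through the fixed surface.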
The translation invariant Nelson massless model is known to be a toy-model for the infrared problem arising in non-relativistic Quantum Electrodynamics. Mathematically and in the one-electron sector, this manifests itself in the absence of a mass-shell at the bottom of the continuous energy-momentum spectrum. Construction of a ground state involves coherent non-Fock representations already known by J. Fröhlich and indexed by the total momentum of the system. After a review of the model and its properties, I will present a roadmap together with ideas and methods towards a proof of analyticity of the said ground state in the total momentum, if the latter is restricted to a suitable neighbourhood of zero. This is joint work with G.M. Graf.
In many applications in engineering and science we encounter models that are governed by systems of coupled ordinary differential equations (ODEs) and partial differential equations (PDEs). A full or just a partial coupling between the equations requires suitable techniques for analysis, simulation and optimization, depending on the properties of the problem. We start with an overview of examined examples, including, among other things, an elastic crane-trolley-load system and a quarter-car model with free road contact. In the following, the focus is put on the optimal control problem of a truck with a fluid container as load, representing a system of hyperbolic PDEs fully coupled to Newton's dynamics of motion. We derive the mathematical model, discuss its analytic characteristics and solve the resulting optimal control problem by a first-optimize-then-discretize approach as well as by a first-discretize-then-optimize approach. Finally, we summarize general challenges for our class of problems and discuss the employed methods for optimal control. Time permitting, we consider a globalized algorithm for semismooth Newton methods in some detail.
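As an illustration of the simplest ODE side of such couplings, here is a sketch of a passive quarter-car model (permanent road contact and a flat road; all parameter values are illustrative and not taken from the talk, which moreover allows free road contact), integrated by semi-implicit Euler:

```python
def simulate_quarter_car(T=2.0, dt=1e-3):
    """Semi-implicit Euler for a passive quarter-car model on a flat road.
    States: sprung/unsprung displacements zs, zu and velocities vs, vu.
    All parameter values are illustrative only."""
    ms, mu = 250.0, 40.0        # sprung / unsprung mass [kg]
    ks, cs = 15000.0, 1000.0    # suspension stiffness [N/m], damping [Ns/m]
    kt = 150000.0               # tire stiffness [N/m]
    zr = 0.0                    # flat road profile
    zs, zu, vs, vu = 0.05, 0.0, 0.0, 0.0   # initial sprung-mass offset
    for _ in range(int(round(T / dt))):
        f_susp = ks * (zs - zu) + cs * (vs - vu)   # suspension force
        f_tire = kt * (zu - zr)                    # tire force
        acc_s = -f_susp / ms
        acc_u = (f_susp - f_tire) / mu
        vs += dt * acc_s                           # velocities first,
        vu += dt * acc_u
        zs += dt * vs                              # then positions
        zu += dt * vu
    return zs, zu
```

The damped system returns to equilibrium from the initial sprung-mass offset; in the coupled problems of the talk, such ODEs are additionally driven by, and feed back into, hyperbolic PDEs.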
The theory of compressed sensing shows that sparse signals, that is, vectors with only a small number of non-vanishing entries, can be recovered from a number of measurements that is considerably smaller than the signal dimension. Typically, the measurements that allow for such guarantees are chosen at random. Considerable effort has been spent in recent years to study random measurements with additional structural properties imposed by the applications. However, in contrast to generic (for example Gaussian) measurements, such structured random measurement systems are no longer universal, i.e., without additional modifications, the guarantees only hold for certain sparsity bases. This talk discusses two approaches to obtain recovery guarantees when the sparsity basis under consideration does not allow the application of the standard theory. On the one hand, one can adjust the sampling distribution (for example in Fourier imaging applications); on the other hand, one can apply a suitable temporal transform to achieve sparsity in the standard basis for each time instance (for example in photoacoustic tomography applications).
The talk is based on joint works with Rachel Ward and with Michael Sandbichler, Thomas Berer, Peter Burgholzer, and Markus Haltmeier.
In this talk, I will survey some of my recent results on theoretical and computational aspects of applied topology. I will focus on three aspects of topological descriptors: their use for the inference and simplification of topological features, their stability with respect to perturbations of the data, and efficient methods for their computation on a large scale.
These questions will be motivated and illustrated by concrete problems: faithful simplification of contours lines in cartography, denoising of isosurfaces for visualization of medical images, and reconstruction of a shape and its topological properties from a point cloud.
I shall report on some recent works in which I have introduced a multi-scale technique in the occupation numbers of the particle states that provides a convergent expansion of the ground state of an interacting Bose gas in a finite box and in the mean-field limiting regime. The talk will mainly focus on the construction of the ground state of a three-mode system.
For more information, visit
http://www-m15.ma.tum.de/Allgemeines/WorkshopDonauIsarInn2016EN
In modern data analysis, one frequently needs to deal with high-dimensional data sets. To alleviate the computational and storage issues that arise in handling this type of data, it is of interest to pre-process the data set to reduce its dimensionality, while preserving the information in the data that is vital for the computational task at hand.
In this talk I will consider dimensionality reduction with Johnson-Lindenstrauss embeddings, which are random matrix constructions aimed at reducing the dimension while approximately preserving Euclidean inter-point distances in the data set. In particular I will focus on the 'fast' and 'sparse' Johnson-Lindenstrauss embeddings, which allow for a fast implementation. The presented results describe how the achievable embedding dimension depends on the desired approximation error and the 'intrinsic dimension' of the original data set. If time permits, I will discuss applications to manifold embeddings and constrained least squares programs.
This talk is partly based on joint work with Jean Bourgain (IAS Princeton) and Jelani Nelson (Harvard).
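As a flavor of the 'sparse' construction, here is a toy sparse Johnson-Lindenstrauss embedding in the style of Kane and Nelson, with s nonzero entries of size ±1/√s per column (dimensions and sparsity chosen arbitrarily for illustration, not tuned according to the theory discussed in the talk):

```python
import math
import random

def sparse_jl_columns(d, m, s, rng):
    """For each of the d input coordinates, choose s of the m output rows
    and a sign; the embedding matrix has entries +-1/sqrt(s)."""
    return [[(r, rng.choice((-1.0, 1.0)) / math.sqrt(s))
             for r in rng.sample(range(m), s)]
            for _ in range(d)]

def embed(x, cols, m):
    """Apply the sparse matrix to x: only s updates per nonzero coordinate."""
    y = [0.0] * m
    for xj, col in zip(x, cols):
        if xj:
            for r, v in col:
                y[r] += v * xj
    return y

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
```

Embedding a handful of random points from dimension 1000 down to 256 in this way typically preserves all pairwise Euclidean distances up to a modest relative error, at a cost of s rather than m operations per nonzero input coordinate.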
This talk deals with the computation of the numerical solution of boundary value problems with Neumann boundary conditions and of optimal control problems with Neumann control in polygonal domains using the finite element method. Due to the corners of the domain, the convergence rate of the numerical solutions can be lower than in the case of smooth domains. As a remedy, one can use local mesh refinement near the corners. In order to prove optimal error estimates, regularity results in weighted Sobolev spaces are exploited. In this case, a convergence rate of |ln h|^(3/2) h^2 can be shown for piecewise linear ansatz functions, for the state variable as well as the adjoint variable in the domain and for the control variable on the boundary. It is also shown how to generate graded meshes by different techniques. Similar results for boundary value problems with Dirichlet boundary conditions and optimal control problems with distributed control were obtained by Th. Apel, A. Rösch and D. Sirch (2009).
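One standard grading technique, on a model 1D edge: with a grading parameter μ ∈ (0, 1], place nodes x_i = (i/n)^(1/μ), which clusters elements near the corner at 0, while μ = 1 recovers the uniform mesh. (The appropriate choice of μ relative to the interior angle follows the weighted-space regularity theory cited above and is not reproduced here.)

```python
def graded_mesh(n, mu):
    """n+1 nodes on [0, 1], refined toward the corner at 0 for mu < 1;
    mu = 1 gives the uniform mesh."""
    return [(i / n) ** (1.0 / mu) for i in range(n + 1)]
```

For example, graded_mesh(10, 0.5) starts 0, 0.01, 0.04, 0.09, ..., with the element size growing away from the corner.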
In stratified random sampling, minimizing the variance of a total estimate leads to the optimal allocation by Neyman (1934) and Tschuprow (1923). Classical algorithms for this problem yield real-valued rather than integer-valued solutions.
We present three integer-valued optimization algorithms which solve the problem of minimizing a separable convex function subject to upper and lower bounds on the variables. We begin by identifying the polymatroid structure of the feasible region and show that the problem can be solved by greedy-type strategies. Subsequently, we develop a polynomial-time algorithm using a binary search based on the concept of "super-medians".
As an application, the algorithms are used to solve the German Census 2011 allocation problem, an optimal allocation problem in stratified random sampling respecting box constraints. Numerical results show the practical relevance of this approach.
joint work with Ulf Friedrich, Ralf Münnich and Matthias Wagner.
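A minimal sketch of the greedy strategy (my own illustration on the classical objective: minimize the variance term Σ_h d_h²/n_h with d_h = N_h S_h under box constraints; the polynomial-time super-median algorithm of the talk is not reproduced here):

```python
import heapq

def greedy_allocation(d, n_total, lower, upper):
    """Greedy integer minimization of sum(d[h]**2 / n[h]) subject to
    lower[h] <= n[h] <= upper[h] (with lower >= 1) and sum(n) == n_total.
    Optimality of the greedy follows from the polymatroid structure."""
    n = list(lower)
    heap = []                      # max-heap realized via negated gains

    def push(h):
        if n[h] < upper[h]:
            # variance reduction from granting stratum h one more unit
            gain = d[h] ** 2 * (1.0 / n[h] - 1.0 / (n[h] + 1))
            heapq.heappush(heap, (-gain, h))

    for h in range(len(d)):
        push(h)
    for _ in range(n_total - sum(lower)):
        _, h = heapq.heappop(heap)
        n[h] += 1
        push(h)
    return n
```

For example, greedy_allocation([10, 20, 30], 12, [1, 1, 1], [100, 100, 100]) returns [2, 4, 6], matching the continuous Neyman allocation, which in this instance happens to be integral.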
Given a discrete subset \(S\) of \(R^n\), let \(c(S,k)\) be the smallest number \(t\) such that for every finite system of linear inequalities that has exactly \(k\) solutions in \(S\), there exists a subsystem of at most \(t\) inequalities that still has exactly \(k\) solutions in \(S\). This number was introduced for \(S=Z^n\) by Aliev et al.~(2014) in order to study a quantitative version of Doignon's integer variant of the classical Helly theorem. The initial estimate \(c(Z^n,k)\in O(k)\) by Aliev et al.~was recently improved to a sublinear bound by Chestnut et al., who also determined the asymptotic growth of \(c(Z^2,k)\) to be \(k^\frac13\).
Via the theory of Helly-numbers of set families we show that \(c(S,k)\) admits a variety of different interpretations, for example, by the maximum number of vertices of a polytope containing \(k\) non-vertex points of \(S\), or by the maximum number of facets of an inclusion-maximal polyhedron containing \(k\) points of \(S\) in its interior. Based on these interpretations, we show that for the integral lattice the asymptotic behavior of \(c(Z^n,k)\) is given by \(\Theta(k^\frac{n-1}{n+1})\), and we determine \(c(Z^n,k)\) for small values of \(k\) exactly. For general discrete sets \(S\) we derive an upper bound of \(c(S,k)\in O(k\cdot h(S))\), where \(h(S)\) is the Helly-number of \(S\).
This is joint work with G. Averkov, B. Gonzalez Merino, I. Paschke, and S. Weltge.
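For intuition, consider the base case \(k=0\) (my addition, standard material): there \(c(\mathbb{Z}^n,0)\) is exactly the Helly number of \(\mathbb{Z}^n\), and Doignon's theorem gives

```latex
c(\mathbb{Z}^n,0) \;=\; 2^n,
\qquad\text{witnessed by}\qquad
\sum_{i\,:\,v_i=1} x_i \;-\; \sum_{i\,:\,v_i=0} x_i
\;\le\; \Big(\textstyle\sum_i v_i\Big) - 1
\quad\text{for all } v \in \{0,1\}^n .
```

This system of \(2^n\) inequalities has no integral solution, but dropping the inequality of any single vertex \(v\) makes \(v\) itself feasible, so no proper subsystem certifies infeasibility.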
A common setting in geometric packing problems is that we are given a set of two-dimensional items, e.g., rectangles, and a rectangular container, and our goal is to pack these items, or a subset of them, into the container so as to optimize objective functions like the total profit of the packed items or the necessary height of the container. A typical obstacle in these problem settings is that the input contains different types of items, i.e., items that are wide and thin, items that are high and narrow, and items that are large in both dimensions. In this talk, I will present a method to handle this obstacle. In a nutshell, the key is to prove that there are near-optimal solutions in which the given container can be partitioned into few rectangular boxes such that each box contains only items of one of the mentioned types. This leads to better approximation guarantees for two specific problems: a (1+eps)-approximation algorithm in quasi-polynomial time for the two-dimensional knapsack problem and a (1.4+eps)-approximation algorithm in pseudo-polynomial time for the strip packing problem. Note that the latter bound is strictly smaller than the lower bound of 3/2 that holds for (non-pseudo-)polynomial-time algorithms for the problem.
(joint work with Anna Adamaszek and Giorgi Nadiradze)
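For contrast with these approximation guarantees, here is the classical Next-Fit Decreasing Height shelf heuristic for strip packing (a textbook baseline, emphatically not the (1.4+eps)-algorithm of the talk; its height is classically bounded by twice the optimum plus the tallest item):

```python
def nfdh(rects, width):
    """Pack axis-aligned (w, h) rectangles with w <= width into a strip of
    the given width using shelves; returns (total_height, placements)."""
    placements = []                  # (x, y, w, h) with y measured upward
    x = 0.0                          # next free position on the open shelf
    shelf_y = 0.0                    # bottom of the open shelf
    shelf_h = 0.0                    # height of the open shelf
    for w, h in sorted(rects, key=lambda r: -r[1]):   # tallest first
        if x + w > width:            # item does not fit: open a new shelf
            shelf_y += shelf_h
            x, shelf_h = 0.0, 0.0
        if shelf_h == 0.0:           # first item fixes the shelf height
            shelf_h = h
        placements.append((x, shelf_y, w, h))
        x += w
    return shelf_y + shelf_h, placements
```

For example, nfdh([(2, 3), (2, 3), (2, 2), (3, 1)], 4) packs the four items into height 6 using three shelves.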