The course takes place on October 4 and 5, 2016.
Many applications of simulation science involve complex and evolving geometries, possibly with strong deformations during the evolution. In the context of finite element methods, a "fitted" characterization is most often used, in which a parametric description in terms of a computational mesh is available. An alternative approach is based on the idea of separating the computational mesh from the geometry description, resulting in geometrically "unfitted" methods, which allow for a very flexible handling of geometries. A basic discretization is defined on a (typically simple) background mesh; only afterwards is this discretization adapted to the separately defined geometrical information. This approach makes it possible to handle complex and possibly time-dependent geometries without the need for complex and time-consuming mesh generation or remeshing. In recent years, finite element methods based on this methodology, geometrically unfitted finite element methods, have drawn more and more attention. Despite their advantages, unfitted discretizations, often also called cut-cell methods, give rise to new problems. Finite element spaces that are usually directly linked to the underlying mesh have to be adapted to account for the separate geometry description. As a consequence, important properties such as the stability of bases, the conditioning of matrices, the implementation of accurate numerical integration, stable time discretization, and the imposition of boundary and interface conditions have to be re-established. This is often much harder to accomplish than for fitted finite element methods and requires new techniques and ideas. We present a class of geometrically unfitted finite element methods, apply them to different PDE problems, and present the key techniques for obtaining (high-order) accurate numerical approximations, optimal-order error bounds, and robust implementations.
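To make the separation of mesh and geometry concrete, here is a minimal, purely illustrative Python sketch (names such as classify_cells are my own, not taken from any unfitted FEM library): a uniform background grid is built independently of the geometry, which is supplied afterwards as a level set function phi with phi < 0 inside the domain; each background cell is then classified as inside, outside, or cut. Note that vertex-based classification is only approximate near very fine geometric features.

import numpy as np

def classify_cells(phi, xs, ys):
    """Classify cells of a uniform background grid against a level set.

    phi    : callable level set function with phi < 0 inside the domain.
    xs, ys : 1D arrays of grid-line coordinates defining the background mesh.
    Returns an array with entries 'inside', 'outside', or 'cut' per cell.
    """
    labels = np.empty((len(xs) - 1, len(ys) - 1), dtype=object)
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            # Evaluate the level set at the four cell vertices.
            corners = [phi(x, y) for x in (xs[i], xs[i + 1])
                                 for y in (ys[j], ys[j + 1])]
            if max(corners) < 0:
                labels[i, j] = 'inside'    # cell fully in the domain
            elif min(corners) > 0:
                labels[i, j] = 'outside'   # cell can be discarded
            else:
                labels[i, j] = 'cut'       # needs special (cut-cell) quadrature
    return labels

# Example: a disk of radius 0.3, described independently of the mesh.
phi = lambda x, y: np.hypot(x - 0.5, y - 0.5) - 0.3
grid = np.linspace(0.0, 1.0, 11)
labels = classify_cells(phi, grid, grid)
print((labels == 'cut').sum(), "cut cells out of", labels.size)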
More details can be found at http://igdk1754.ma.tum.de/IGDK1754/CCLehrenfeld2016
In order to effectively model uncertainty or ambiguity in engineering and the natural sciences, one typically considers partial differential equations (PDEs) with uncertain or distributed parameters. Passing from modeling and simulation to optimization, we arrive at stochastic optimization problems with PDE constraints. This relatively new area requires a number of extensions of the classical stochastic optimization literature due to the infinite-dimensional nature of the deterministic decision variables. These include, e.g., well-posedness questions, the derivation of optimality conditions, the handling of state constraints, and the development of viable numerical methods. On the other hand, the uncertainty requires us to employ or extend concepts from risk management to ensure that risk-averse or robust decisions are made. In our work, we do this by using risk measures, e.g., the conditional value-at-risk or the mean plus upper semideviation. Since these functionals are typically nonsmooth, we suggest several smoothing techniques in order to make use of existing algorithms from PDE-constrained optimization. After motivating the model class, we discuss appropriate conditions on the objective functionals and derive first-order optimality conditions. We then demonstrate the effect of various risk measures on the optimal solution via numerical experiments.
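As background (my notation, not taken from the abstract): the conditional value-at-risk at level $\beta \in (0,1)$ admits the well-known Rockafellar-Uryasev representation
\[
\mathrm{CVaR}_\beta(X) = \inf_{t\in\mathbb{R}} \Big\{\, t + \frac{1}{1-\beta}\,\mathbb{E}\big[(X-t)_+\big] \Big\}, \qquad (s)_+ := \max\{s,0\},
\]
whose nonsmoothness stems from the positive part $(\cdot)_+$; a smoothing technique in the spirit described above would replace it by a smooth surrogate such as $(s)_+ \approx \varepsilon \log\big(1+e^{s/\varepsilon}\big)$ for a small parameter $\varepsilon > 0$.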
The scientific topics include graphical models, networks and extremes. One aim of the conference is to celebrate the Carl Friedrich von Siemens Prize of the Alexander von Humboldt Foundation awarded to Steffen Lauritzen from the University of Copenhagen.
www.statistics.ma.tum.de/CISE2016
Hierarchical optimization problems with two decision levels in which at least one decision maker has to solve an optimal control problem are called bilevel optimal control problems. This class of optimization models suffers from nonconvexity, nondifferentiability, and an inherent lack of regularity. Thus, the derivation of existence results and optimality conditions for such problems is a challenging task. On the other hand, these problems frequently appear when dealing with parameter estimation or with optimal control models whose feasible sets depend implicitly on another optimized system. In this talk, we begin by motivating the consideration of bilevel optimal control problems by means of several examples. Afterwards, we present two different strategies which can be used to transform the original bilevel optimal control problem into a single-level surrogate model, using lower level optimality conditions or the lower level optimal value function. We point out several puzzling difficulties arising from the bilevel structure and the function space setting. Finally, we present constraint qualifications and necessary optimality conditions for certain instances of bilevel optimal control problems. Parts of the talk are based on joint works with Francisco Benita and Gerd Wachsmuth, respectively.
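To fix notation (mine, not necessarily the speaker's), the optimal value function reformulation mentioned above can be sketched as follows: the bilevel problem
\[
\min_{x,\,y}\; F(x,y) \quad \text{s.t.} \quad y \in \operatorname*{argmin}_{y'}\,\{\, f(x,y') \;:\; y' \in Y(x) \,\}
\]
is replaced by the single-level surrogate
\[
\min_{x,\,y}\; F(x,y) \quad \text{s.t.} \quad y \in Y(x), \quad f(x,y) \le \varphi(x), \qquad \varphi(x) := \inf_{y' \in Y(x)} f(x,y'),
\]
which has the same feasible set but is well known to violate standard constraint qualifications at every feasible point, one of the puzzling difficulties alluded to in the abstract.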
We investigate bifurcations from an attractive random equilibrium to shear-induced chaos for stochastically driven limit cycles, indicated by a change of sign of the first Lyapunov exponent. This addresses an open problem posed by Lai-Sang Young and co-workers, extending results on periodically kicked limit cycles to the stochastic context. We also apply concepts from ergodic theory, like entropy and the SRB property of the invariant random measure, to describe the random attractors in the chaotic case.
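As a purely illustrative companion (not the speaker's code), here is a minimal Python sketch of how the sign of the first Lyapunov exponent can be estimated numerically for a stochastically driven limit cycle: integrate the SDE with Euler-Maruyama, push a tangent vector through the linearized (variational) flow, and average its logarithmic growth. The Hopf normal form with shear parameter b below is a standard toy model in this context; all numerical values are arbitrary.

import numpy as np

def top_lyapunov(f, jac, x0, sigma, dt=1e-3, n_steps=200_000, seed=0):
    """Estimate the first Lyapunov exponent of dX = f(X) dt + sigma dW,
    using Euler-Maruyama for the state and explicit Euler for the tangent
    vector; additive noise does not enter the variational equation."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = rng.standard_normal(x.size)
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(n_steps):
        v = v + dt * (jac(x) @ v)                  # tangent vector step
        x = x + dt * f(x) + sigma * np.sqrt(dt) * rng.standard_normal(x.size)
        r = np.linalg.norm(v)
        log_growth += np.log(r)                    # accumulate expansion ...
        v /= r                                     # ... and renormalize
    return log_growth / (n_steps * dt)

# Toy model (my choice): Hopf normal form with shear b and additive noise;
# sweeping b is the kind of experiment in which the exponent may change sign.
b, sigma = 3.0, 0.5

def f(x):
    X, Y = x
    r2 = X * X + Y * Y
    return np.array([X - Y - r2 * (X - b * Y),
                     X + Y - r2 * (b * X + Y)])

def jac(x):
    X, Y = x
    r2 = X * X + Y * Y
    return np.array([
        [1 - 2 * X * (X - b * Y) - r2, -1 - 2 * Y * (X - b * Y) + b * r2],
        [1 - 2 * X * (b * X + Y) - b * r2, 1 - 2 * Y * (b * X + Y) - r2],
    ])

print("estimated first Lyapunov exponent:",
      top_lyapunov(f, jac, x0=[1.0, 0.0], sigma=sigma))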
In this thesis three popular and well-studied randomized rumor spreading algorithms are studied: the push, pull, and push&pull model. Initially, some vertex owns a rumor, which is then passed along edges in a way that depends on the algorithm. In the push algorithm, every vertex that knows the rumor passes it to a random neighbour. In the pull algorithm, every uninformed vertex asks a random neighbour for the rumor. The push&pull algorithm is a combination of both: every vertex either asks for or passes on the information. This spreading is repeated in rounds until every vertex knows the rumor. The main question is how many rounds it takes on a given graph until all vertices are informed. Here the asymptotic (random) broadcast time of these algorithms is studied on expander graphs, which are almost regular and have small spectral expansion. The broadcast time of the push model was already known to be $\log_2 n + \log n + o(\log n)$, which is asymptotically the same as on the complete graph. The main result of this thesis is that the same holds for the pull and the push&pull algorithms, i.e. the asymptotic runtime on expander graphs coincides with the runtime on the complete graph: for the pull model the runtime is $\log_2 n + o(\log n)$, and for the push&pull model it is $\log_3 n + o(\log n)$.
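The three protocols are simple enough that a short simulation makes the definitions concrete. The following is a minimal Python sketch (my own illustration, not from the thesis); broadcast_time and the graph encoding are ad hoc.

import random

def broadcast_time(adj, mode="push&pull", seed=None):
    """Simulate rumor spreading on a graph and return the number of rounds.

    adj  : dict mapping each vertex to a list of its neighbours.
    mode : 'push', 'pull', or 'push&pull'.
    """
    rng = random.Random(seed)
    vertices = list(adj)
    informed = {rng.choice(vertices)}   # the initial rumor holder
    rounds = 0
    while len(informed) < len(vertices):
        newly = set()
        for v in vertices:
            u = rng.choice(adj[v])      # each vertex contacts one random neighbour
            if mode in ("push", "push&pull") and v in informed:
                newly.add(u)            # informed vertices push the rumor
            if mode in ("pull", "push&pull") and v not in informed and u in informed:
                newly.add(v)            # uninformed vertices pull the rumor
        informed |= newly
        rounds += 1
    return rounds

# Example on the complete graph K_n, where the asymptotics quoted above apply.
n = 1000
adj = {v: [u for u in range(n) if u != v] for v in range(n)}
print({m: broadcast_time(adj, m, seed=1) for m in ("push", "pull", "push&pull")})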
Efficient data representation is a core aspect of modern signal processing; for example, sparsity of data has been used very efficiently in compressed sensing to reduce the required minimal number of measurements. We will look at a 'dual' version of sparsity, namely analysis cosparsity, and provide new sequential algorithms for finding analysis operators that sparsify a given class of signals.
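To fix ideas (a hypothetical illustration, not the algorithm of the talk): in the analysis model one asks that Omega x have many zero entries, and the cosparsity is the number of such zeros. A finite difference operator, for instance, cosparsifies piecewise constant signals:

import numpy as np

def cosparsity(Omega, x, tol=1e-10):
    """Number of (near-)zero entries of Omega @ x, the analysis-model
    counterpart of sparsity, which counts the nonzeros of a synthesis code."""
    return int(np.sum(np.abs(Omega @ x) <= tol))

n = 100
# Analysis operator: first-order finite differences, shape (n-1, n).
Omega = np.diff(np.eye(n), axis=0)
# A piecewise constant signal with 3 jumps ...
x = np.concatenate([np.full(25, 1.0), np.full(30, -2.0),
                    np.full(20, 0.5), np.full(25, 3.0)])
# ... is highly cosparse: Omega @ x vanishes except at the jumps.
print(cosparsity(Omega, x), "zeros out of", Omega.shape[0])  # 96 of 99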
The construction and classification of Riemannian metrics satisfying certain curvature bounds are of fundamental interest in differential geometry. In this talk we specialize to metrics of positive scalar curvature, which are characterized by a simple volume growth condition for balls of small radius. Research during the last couple of years, based on a variety of methods, revealed the topological complexity of the space of all such metrics on a fixed manifold. We will give an overview of this development.
Two specifications are commonly used to describe the motion of a fluid in computational fluid dynamics. The Lagrangian specification is based on grid points which move with the fluid particles. This approach allows the preservation of specific motion-related properties of a physical model, such as Galilei invariance. Nevertheless, the Lagrangian specification often leads to a distorted computational mesh, since the mesh moves with the fluid. These distortions are a source of numerical artifacts which destabilize the scheme. Distortions are not an issue in the Eulerian specification, since this approach is based on a static mesh. However, this approach is too rigid to maintain specific motion-related properties of a physical model. The arbitrary Lagrangian-Eulerian (ALE) approach locally allows the fluid motion to be described by either the Lagrangian or the Eulerian specification. Hence, in some sense, the ALE approach is a compromise between the Lagrangian and Eulerian specifications which makes it possible to exploit the advantages of both.
We adopted the ALE approach to develop and analyze a moving mesh discontinuous Galerkin (ALE-DG) method for one-dimensional nonlinear conservation laws in [1]. This method is based on the method of lines approach: for the spatial discretization, a discontinuous Galerkin method with a time-dependent test function space is used, and for the time integration, total variation diminishing (TVD) Runge-Kutta methods are used.
In this talk, certain mathematical aspects of the ALE-DG method will be presented in the context of scalar conservation laws. In particular, a local maximum principle and the entropy stability will be discussed. Afterwards, the capability of the method to handle singularities will be demonstrated by one-dimensional numerical experiments for the Burgers and Euler equations. Furthermore, we present accuracy tests for the one- and two-dimensional Euler equations.
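As background to the time integration mentioned above (an illustrative sketch on a fixed mesh, not the moving-mesh method of [1]): the classical third-order TVD Runge-Kutta scheme of Shu and Osher advances a semidiscretization du/dt = L(u) by convex combinations of forward Euler steps, applied here to a first-order upwind discretization of Burgers' equation.

import numpy as np

def tvd_rk3_step(u, L, dt):
    """Third-order TVD (SSP) Runge-Kutta step of Shu and Osher:
    convex combinations of forward Euler steps preserve the TVD property."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Burgers' equation u_t + (u^2/2)_x = 0 with a first-order upwind flux on a
# fixed periodic mesh, a stand-in for the DG spatial operator of the method.
n, dt = 400, 0.001
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

def L(u):
    f = 0.5 * u ** 2
    # upwind differencing, valid here since u stays positive
    return -(f - np.roll(f, 1)) / dx

u = 1.5 + np.sin(x)   # smooth initial datum that steepens into a shock
for _ in range(1000):
    u = tvd_rk3_step(u, L, dt)
print("total variation after t=1:", np.abs(np.diff(u)).sum())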
Collaborators: Christian Klingenberg (University of Würzburg) and Yinhua Xia (University of Science and Technology of China).
[1] Klingenberg, C., Schnücke, G., and Xia, Y., Arbitrary Lagrangian-Eulerian discontinuous Galerkin method for conservation laws: analysis and application in one dimension, Math. Comp. (June 20, 2016), http://dx.doi.org/10.1090/mcom/3126.
The contact process is a classical interacting particle system. Liggett and Steif (2006) proved that, for the supercritical contact process on certain graphs, the upper invariant measure stochastically dominates an i.i.d. Bernoulli product measure. In particular, they proved this for Z^d and (for sufficiently large infection rate) for d-ary homogeneous trees T_d. In this talk we will discuss some space-time versions of their results. In particular, we ask whether the contact process may dominate an independent spin-flip process. The answer to this question seems to depend on properties of the graph. We first show that it is not possible if the graph is amenable. On the other hand, we prove some results indicating that it is indeed the case for the contact process on T_d. This talk is based on joint work with Rob van den Berg (CWI and VU Amsterdam).
We propose a time-dependent nonhomogeneous Markov model for predicting free parking spaces in urban areas. We will first describe the formalized general setting that reflects the real-world environment one works in when concerned with parking prediction. We will also address issues concerning the availability of the right kind of data. The parameter estimation based on a given data set will then be of particular interest. To this end, we introduce an extended framework of empirical risk minimization for matrix-valued functions and the use of matrix-valued reproducing kernel Hilbert spaces to approximate the time-dependent generator of the Markov process. Finally, we give an outlook on further research possibilities in this field, with a focus on a purely data-driven approach.
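To illustrate the modeling ingredient (a toy sketch, not the model of the talk): a single parking space can be described by a two-state nonhomogeneous Markov chain whose generator Q(t) has a time-of-day-dependent arrival rate; the occupancy probabilities then follow the Kolmogorov forward equation, integrated here by a simple Euler scheme. All rates below are invented.

import numpy as np

# States: 0 = free, 1 = occupied. Time t in hours of the day.
def Q(t):
    """Time-dependent generator: the arrival rate peaks around midday,
    the departure rate is taken constant (all numbers are invented)."""
    arrival = 0.5 + 0.4 * np.sin(np.pi * t / 12.0) ** 2   # free -> occupied
    departure = 0.6                                        # occupied -> free
    return np.array([[-arrival, arrival],
                     [departure, -departure]])

# Kolmogorov forward equation p'(t) = p(t) Q(t), Euler-discretized.
dt = 0.01
p = np.array([1.0, 0.0])          # the space starts out free at t = 0
for k in range(int(8.0 / dt)):    # predict 8 hours ahead
    p = p + dt * p @ Q(k * dt)
print("P(free after 8h) =", p[0])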
Title: Multivariate total positivity, Simpson's paradox, and the structure of M-matrices
Simpson's paradox describes the phenomenon that two quantities can be positively associated in a population, but negatively associated within subpopulations. This can lead to quite misleading interpretations of statistical information, and it is therefore important to identify situations where it cannot happen. When a distribution is multivariate totally positive of order two (MTP2), this cannot happen. The MTP2 property is closed under marginalization, conditioning, and increasing transformations, and it has a number of other stability properties. In addition, it has fundamental implications for conditional independence relations. A multivariate Gaussian distribution is MTP2 if and only if its covariance matrix is an inverse M-matrix, i.e. if all off-diagonal elements of the concentration matrix (inverse covariance matrix) are non-positive. For other types of distributions, other conditions apply.
In the talk I shall give examples of Simpson's paradox, explain the basic properties associated with the MTP2 condition, give examples of MTP2 distributions, and characterize these in a number of special cases. I shall also indicate some of the fundamental consequences of this property for conditional independence structures.
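For the Gaussian case, the characterization stated above is easy to check numerically; a minimal sketch (my illustration, with made-up covariance matrices):

import numpy as np

def gaussian_is_mtp2(Sigma, tol=1e-12):
    """A Gaussian with covariance Sigma is MTP2 iff all off-diagonal
    entries of the concentration matrix K = Sigma^{-1} are <= 0,
    i.e. iff Sigma is an inverse M-matrix."""
    K = np.linalg.inv(Sigma)
    off_diag = K[~np.eye(K.shape[0], dtype=bool)]
    return bool(np.all(off_diag <= tol))

# A Markov-chain-like covariance with non-negative partial correlations
# is MTP2 ...
Sigma1 = np.array([[1.0, 0.5, 0.25],
                   [0.5, 1.0, 0.5],
                   [0.25, 0.5, 1.0]])
# ... while this one has a positive off-diagonal concentration entry.
Sigma2 = np.array([[1.0, 0.5, -0.3],
                   [0.5, 1.0, 0.5],
                   [-0.3, 0.5, 1.0]])
print(gaussian_is_mtp2(Sigma1), gaussian_is_mtp2(Sigma2))  # True False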
see www.ma.tum.de/Mathematik/FakultaetsKolloquium
In aggregating the judgments (subjective probabilities, preference rankings, etc.) of different information sources, one often needs to assign weights to each source. Intuitively, these weights should reflect the quality of the individual sources and their (dis)similarity, as judged by the decision maker. Formally, such judgments are captured by a non-additive set function that characterizes sets of sources in terms of their "joint reliability" or "valued diversity". The main contribution of the paper is to propose and axiomatically justify a particular weighting rule, the Diversity Value. The Diversity Value is defined by a logarithmic scoring criterion and can be characterized as a weighted Shapley value in which the source weights are determined endogenously.
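As background for readers unfamiliar with the notion (standard material, not taken from the paper): the ordinary Shapley value of a set function $v$ on a source set $N$ averages marginal contributions over all arrival orders,
\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\big(|N|-|S|-1\big)!}{|N|!}\, \Big( v\big(S \cup \{i\}\big) - v(S) \Big),
\]
and a weighted Shapley value, as in the characterization above, biases this averaging according to source weights; in the Diversity Value these weights are determined endogenously.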
Bio: Prof. Dr. Klaus Nehring has been a Professor of Economics at the University of California, Davis, since 1998. He received his Ph.D. in economics from Harvard University in 1991 and has held visiting positions at the London School of Economics and Princeton University. His current research focuses on "Temptation and Self-Control", "Decision-Making under Ambiguity", "Judgment Aggregation in Social Choice", "Abstract Convexity Theory" and "A Theory of Diversity". Prof. Nehring has published his work in leading economics journals such as Econometrica, the Journal of Economic Theory, and Games and Economic Behavior.
This work is motivated by the need to study the impact of data uncertainties and material imperfections on the solution to optimal control problems constrained by partial differential equations. We consider a pathwise optimal control problem constrained by a diffusion equation with random coefficient together with box constraints for the control. For each realization of the diffusion coefficient we solve an optimal control problem using the variational discretization [M. Hinze, Comput. Optim. Appl., 30 (2005), pp. 45-61]. Our framework allows for lognormal coefficients whose realizations are not uniformly bounded away from zero and infinity.
We establish finite element error bounds for the pathwise optimal controls. This analysis is nontrivial due to the limited spatial regularity and the lack of uniform ellipticity and boundedness of the diffusion operator. We apply the error bounds to prove convergence of a multilevel Monte Carlo estimator for the expected value of the pathwise optimal controls. In addition, we analyze the computational complexity of the multilevel estimator. We perform numerical experiments in 2D space to confirm the convergence result and the complexity bound.
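Since the abstract leans on the multilevel Monte Carlo idea, here is a generic, hypothetical sketch of an MLMC estimator for E[Q], based on the usual telescoping sum over discretization levels; solve_on_level is a placeholder, whereas in the setting of the talk it would return the pathwise optimal control (or a functional of it) on mesh level l.

import numpy as np

def mlmc_estimate(solve_on_level, n_samples, seed=0):
    """Generic multilevel Monte Carlo estimator for E[Q], based on
    E[Q_L] = E[Q_0] + sum_{l>=1} E[Q_l - Q_{l-1}].
    Q_l and Q_{l-1} are evaluated on the SAME random sample, so the
    correction terms have small variance and few fine-level solves
    are needed."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for l, n in enumerate(n_samples):
        corrections = []
        for _ in range(n):
            omega = rng.standard_normal()   # stand-in for the random input
            q_fine = solve_on_level(l, omega)
            q_coarse = solve_on_level(l - 1, omega) if l > 0 else 0.0
            corrections.append(q_fine - q_coarse)
        estimate += np.mean(corrections)
    return estimate

# Placeholder "solver": exact value plus a mesh error decaying like 2^(-2l).
def solve_on_level(l, omega):
    h = 2.0 ** (-l)
    return np.sin(omega) + h * h * np.cos(omega)

# Many cheap coarse samples, few expensive fine ones.
print(mlmc_estimate(solve_on_level, n_samples=[4000, 1000, 250]))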
In joint work with Tsachik Gelander and Uri Bader we ask the following questions about negatively curved manifolds whose volume is below a given number V>0: How many are there? Is the size of their homology universally bounded in terms of V? We answer these questions by developing a kind of simplicial thick-thin decomposition.
Where does progress in mathematics come from? Does someone sit down at a desk, write down lots of complicated formulas, have a flash of inspiration, shout "Eureka, I've got it!", and then it holds true for all eternity?
That does indeed happen, but the everyday reality of mathematical research is far more varied and interesting! In his talk, Professor Günter M. Ziegler questions the much-invoked "absolute certainty" of mathematical proofs. His conviction: making mistakes is part of mathematics, and there is no creativity and there are no ideas without mistakes. To illustrate this, he will sketch a small cultural history of mathematical mistakes, from Euclid to present-day research. And what does all this have to do with a ham sandwich? Ziegler will resolve this and recount how mathematicians struggle for the results that then really hold forever, and not just until the next meal.
Organizer: Deutsche Forschungsgemeinschaft (DFG). Venue: Bayerische Staatsbibliothek, Fürstensaal (1st floor). Registration required at veranstaltungen@bsb-muenchen.de or by phone at +49 89 28638-2115. Admission is free.
Column-oriented versions of algebraic iterative methods are interesting alternatives to their row-oriented counterparts: they converge to a least squares solution, and they provide a basis for saving computational work by skipping small updates. We motivate these methods from an optimization point of view, present a convergence analysis, and discuss two techniques (loping and flagging) for reducing the work. The performance of the algorithms is illustrated with numerical examples from computed tomography.
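As an illustration of the column-oriented idea (a generic sketch of a column-action least squares iteration with an ad hoc flagging rule; not necessarily the exact variants of the talk): sweeping over the columns a_j of A, each step updates one component of x and the residual, and columns whose updates were negligible are skipped in later sweeps.

import numpy as np

def column_action_lsq(A, b, sweeps=300, flag_tol=1e-12):
    """Column-action iteration for min ||Ax - b||_2.

    Each step solves the 1D problem in the direction of column a_j:
        delta = a_j^T r / ||a_j||^2,  x_j += delta,  r -= delta * a_j,
    i.e. coordinate descent on the least squares functional. Columns whose
    last update was negligible are 'flagged' and skipped (work saving).
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()                         # residual b - A x
    norms2 = (A ** 2).sum(axis=0)        # column norms ||a_j||^2
    active = np.ones(n, dtype=bool)      # flags: columns still worth updating
    for _ in range(sweeps):
        for j in range(n):
            if not active[j]:
                continue
            delta = A[:, j] @ r / norms2[j]
            x[j] += delta
            r -= delta * A[:, j]
            if abs(delta) * np.sqrt(norms2[j]) < flag_tol:
                active[j] = False        # skip this column from now on
    return x

# Small consistency check against the normal equations solution.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x = column_action_lsq(A, b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-6))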