The course takes place from July 4 until July 8, 2016.
The FEniCS Project is a collection of open source software for the automated, efficient solution of partial differential equations. The software allows finite element variational problems to be specified in close to mathematical form and, via automated code generation techniques, handles the assembly and solution of the resulting discrete systems. This is a powerful and exciting combination that enables rapid, reliable and fun development of efficient finite element models. In addition, the FEniCS extension dolfin-adjoint automates the implementation of adjoint models and provides an environment for rapidly solving PDE-constrained optimization problems.
This five-day course will consist of short lectures in combination with hands-on exercises aimed at novice and intermediate FEniCS users: starting from the very basics of the Finite Element method, to solving non-trivial, nonlinear, time-dependent PDEs, all the way to solving advanced, time-dependent PDE-constrained optimization problems.
Topics covered in the course include: solving linear static PDEs, solving nonlinear static PDEs, solving linear time-dependent PDEs, mixed problems, splitting methods, discontinuous Galerkin methods, adjoint methods and solving PDE-constrained optimization problems. Partial differential equations solved in the course include the Poisson equation, a nonlinear Poisson equation, the Stokes equations, nonlinear hyperelasticity (St. Venant–Kirchhoff), and the optimal control of the incompressible Navier-Stokes equations.
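The core idea the course starts from, assembling and solving a finite element system for the Poisson equation, can be sketched in plain NumPy. This is a hedged illustration under simple assumptions (1D problem, uniform mesh, piecewise-linear elements, vertex quadrature), not FEniCS code; in FEniCS itself the weak form is written almost verbatim.

```python
import numpy as np

def fem_poisson_1d(f, n):
    """P1 finite elements for -u'' = f on (0, 1) with u(0) = u(1) = 0.
    Minimal sketch with n uniform elements; not FEniCS code."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix of the hat-function basis at the n - 1 interior
    # nodes: K[i, i] = 2/h, K[i, i +/- 1] = -1/h.
    K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    # Load vector: integral of f * phi_i, approximated by h * f(x_i).
    F = h * f(x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, F)
    return x, u

x, u = fem_poisson_1d(lambda x: np.ones_like(x), 64)
print(np.max(np.abs(u - 0.5 * x * (1.0 - x))))  # nodal values are exact for f = 1
```

For constant f the piecewise-linear approximation reproduces the exact solution u(x) = x(1 − x)/2 at the mesh nodes, a well-known 1D superconvergence effect.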
Further practical information will be sent to all participants after they have signed up for the course.
To register, please send an email to Angela Puchert (firstname.lastname@example.org). Please note: the number of participants is limited.
More details can be found at http://igdk1754.ma.tum.de/IGDK1754/CCFunke2016
A smart grid is the combination of a traditional electrical power system with information and energy both flowing back and forth between suppliers and consumers. This new paradigm introduces major challenges, such as the integration of intermittent generation and storage and the need for electricity consumers to play an active role in the operation of the system. We will illustrate the importance of optimization in meeting these challenges, and the opportunities the smart grid offers to the optimization community.
As part of the talk, Prof. Anjos (Editor-in-Chief, Optimization and Engineering) will award Prof. Ulbrich (TUM) and Dr. Simon (TUM) the Howard Rosenbrock Prize 2015 for the best paper published in the journal Optimization and Engineering (OPTE) in 2015. The title of the paper is "Adjoint based optimal control of partially miscible two-phase flow in porous media with applications to CO_2 sequestration in underground reservoirs".
A simple model for cell growth and division into α > 1 daughter cells is given by the functional PDE
−∂²/∂x² (D(x) n(x,t)) + ∂/∂x (g(x) n(x,t)) + ∂/∂t n(x,t) + b n(x,t) = b α² n(αx, t).
Here, n denotes the number density of cells of size x at time t, D is the dispersion coefficient, g is the growth rate, and b is the division rate. ("Size" is usually measured by mass or DNA content.) The differential equation is supplemented by the initial condition
n(x,0) = n₀(x), where n₀ is the initial cell size distribution, and the boundary condition
−∂/∂x (D(x) n(x,t)) + g(x) n(x,t) = 0
at x = 0. The problem is thus of the initial-boundary value type. There is a paucity of analytical solution techniques for these problems; however, it is possible to solve the problem for some simple cases of interest. Although the leading order long time asymptotic behaviour of solutions to these problems is known for fairly general cases, the higher order terms are relatively unexplored. The exact solutions yield the higher order long time asymptotic behaviour of solutions for the special cases and may provide some insight into more general cases. In this talk, we discuss the model and consider some special cases where the initial-boundary value problem can be solved analytically.
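For intuition, the pure-transport case can be integrated numerically. The sketch below assumes D = 0, constant g and b, α = 2, and a made-up Gaussian initial bump; the nonlocal term n(αx, t) is evaluated by interpolation, and a first-order upwind scheme handles the transport. This is an illustrative sketch, not a method from the talk.

```python
import numpy as np

# Explicit upwind scheme for  d/dt n + g d/dx n + b n = b a^2 n(a x, t)
# with D = 0 and constant g, b (all parameters are illustrative).
g, b, alpha = 1.0, 1.0, 2.0
nx, x_max = 300, 6.0
x = np.linspace(0.0, x_max, nx)
dx = x[1] - x[0]
dt = 0.01                                 # satisfies the CFL condition g*dt/dx <= 1
n = np.exp(-((x - 1.0) / 0.2) ** 2)       # made-up initial cell size distribution
mass0 = n.sum() * dx

for _ in range(200):                      # integrate up to t = 2
    transport = g * (n - np.roll(n, 1)) / dx               # first-order upwind
    division = b * alpha ** 2 * np.interp(alpha * x, x, n, right=0.0)
    n = n + dt * (-transport - b * n + division)
    n[0] = 0.0                            # boundary condition g*n = 0 at x = 0

mass = n.sum() * dx
print(mass0, mass)
```

Integrating the equation over x shows that the total cell number grows at rate b(α − 1) per unit population, so the final mass should exceed the initial one, which the simulation reproduces.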
The vectorial Deffuant model is an extension of the voter model. It is a multi-state spin system which aims to model opinion dynamics or cultural dissemination and accounts for “homophily” with a so-called “threshold for interaction”.
We consider the vectorial Deffuant model on the 1-dimensional lattice and investigate its limiting behaviour. Three classes of behaviour are introduced:
* clustering - complete consensus is achieved in the limit
* fixation - a limiting (random) configuration is achieved eventually
* fluctuation - the configuration changes at arbitrarily large times
The central objective is to find relations between the size of the state space and threshold parameters and the limiting behaviour of the model.
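As a toy illustration of such dynamics, one can simulate binary opinion vectors on a cycle, where a site copies a disagreeing coordinate from a neighbour only if the two sites disagree in at most a threshold number of coordinates. All conventions here (binary opinions, cycle geometry, the particular update rule) are made up for illustration and may differ from the model in the talk.

```python
import numpy as np

def simulate_vectorial_deffuant(n_sites=60, n_features=3, threshold=1,
                                steps=20000, seed=0):
    """Toy vectorial-Deffuant-style dynamics on a cycle with binary opinions.
    A random site looks at a random neighbour; if they disagree in at least
    one and at most `threshold` coordinates, it copies one disagreeing
    coordinate from the neighbour. Conventions are illustrative only."""
    rng = np.random.default_rng(seed)
    config = rng.integers(0, 2, size=(n_sites, n_features))
    for _ in range(steps):
        i = int(rng.integers(n_sites))
        j = (i + int(rng.choice([-1, 1]))) % n_sites   # random nearest neighbour
        disagree = np.flatnonzero(config[i] != config[j])
        if 0 < disagree.size <= threshold:             # homophily threshold
            k = int(rng.choice(disagree))
            config[i, k] = config[j, k]
    return config

final = simulate_vectorial_deffuant()
print(len({tuple(row) for row in final}))  # number of distinct opinions left
```

Varying `threshold` against `n_features` in such experiments gives a feel for the clustering/fixation/fluctuation trichotomy described above.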
I will review several works in collaboration with Gravejat, Lewin, Séré and Solovej in which we try to establish a well-defined Hamiltonian formalism for relativistic fermions.
The algebraic approach to QFT is based on two steps: the assignment of a *-algebra to a physical system and the construction of a state. In 1971, a full characterisation of quasi-free states was given by Araki. Taking advantage of his results, we present a functional-analytic construction of quasi-free states for quantum Dirac fields and investigate the Rindler spacetime as a concrete example. In the last part of the talk, we propose a modification of this construction in order to include a more general framework.
Practical applications of nonparametric density estimators in more than three dimensions suffer a great deal from the well-known curse of dimensionality: convergence slows down as dimension increases. We show that one can evade the curse of dimensionality by assuming a simplified vine copula model for the dependence between variables. We formulate a general nonparametric estimator for such a model and show under high-level assumptions that the speed of convergence is independent of dimension. Simulation experiments illustrate a large gain in finite sample performance when the simplifying assumption is at least approximately true. But even when it is severely violated, the vine copula based approach proves advantageous as soon as more than a few variables are involved.
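The curse of dimensionality itself is easy to see numerically. The following sketch is a hedged illustration under made-up settings (standard-normal data, a plain product-Gaussian kernel estimator with Scott's bandwidth, evaluated at the mode); it is not the vine copula estimator from the talk, but it shows how the relative error of an unstructured nonparametric estimator deteriorates with dimension at fixed sample size.

```python
import numpy as np

def kde_error_at_mode(d, n=4000, seed=0):
    """Relative error at the mode when estimating the N(0, I_d) density from
    n samples with a product Gaussian kernel and Scott's rule bandwidth.
    Plain KDE for illustration; not the vine copula estimator."""
    rng = np.random.default_rng(seed)
    sample = rng.standard_normal((n, d))
    h = n ** (-1.0 / (d + 4))                      # Scott's rule bandwidth
    sq_norms = np.sum(sample ** 2, axis=1)
    estimate = np.mean(np.exp(-sq_norms / (2 * h ** 2))) / (
        (2 * np.pi) ** (d / 2) * h ** d)
    truth = (2.0 * np.pi) ** (-d / 2.0)            # true density at the origin
    return abs(estimate - truth) / truth

print(kde_error_at_mode(1), kde_error_at_mode(5))
```

With the same 4000 observations, the error at d = 5 is an order of magnitude larger than at d = 1, which is the degradation the simplified vine copula model is designed to evade.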
The operation of gas transmission or water supply networks involves many challenges for the network operators in the real market. Fed in by multiple suppliers, gas or water has to be routed through the network to meet the consumers’ demands. At the same time, the operational costs of the network, such as the energy consumption of compressor and pumping stations or contractual penalties, have to be minimized. This leads to an optimal control problem on a network. For the optimization task, reliable simulation results are necessary. We address this task by using a goal-oriented adaptive strategy for the simulation. Besides refinement in space and variable time stepping, we want to use simplified models in regions of the network with low activity, while sophisticated models are used in regions where the dynamical behaviour of the flow needs to be resolved in detail. We introduce a posteriori error estimators to assess the discretization and model errors with respect to a quantity of interest. These error estimators are derived using adjoint techniques, which are also suitable for optimization. We then present a strategy to balance these errors with respect to a given tolerance. Finally, we will show some numerical experiments for the adaptive simulation algorithm as well as its applicability in an optimization framework.
In a seminal paper, Ornstein and Zernike proposed in 1914 to split the interaction between molecules in a liquid into a direct and an indirect part. While the resulting spatial convolution equation is of great importance in physics, it seems to be hardly known among mathematicians. In the first part of this talk we consider the pair-connectedness function (PCF) of a rather general stationary cluster model. Combining point process methods with analytic tools for solving integral equations, we show that the associated Ornstein-Zernike equation (OZE) admits a unique solution in the whole subcritical regime. In the second part of the talk we consider the special case of a Poisson Boolean model with deterministic grains and show that the solution of the OZE is an analytic function of the intensity. Moreover, for small intensities there is a simple combinatorial way (based on the concept of pivotal diagrams) to express the coefficients of the power series in terms of the corresponding coefficients of the PCF. In the final part of the talk we shall briefly discuss the random connection model and propose some directions for future research.
This talk is based on joint work with Günter Last (Karlsruhe).
In the first part of the talk we consider a Poisson process on a general phase space. The random connection model is obtained by connecting two Poisson points according to some random rule which is independent for different pairs. We shall discuss first and second order properties of the point processes counting the clusters of a given size. In the second part of the talk we specialize to a Euclidean phase space and prove a central limit theorem for the number of clusters in a growing observation window. The proof is based on some new Berry-Esseen bounds for the normal approximation of functionals of a pairwise marked Poisson process.
This talk is based on joint work with Franz Nestmann (Karlsruhe) and Matthias Schulte (Bern).
We show a large deviation principle for the weighted spectral measure of random matrices corresponding to a general potential. Unlike for the empirical eigenvalue distribution, the speed reduces to n and the rate function contains a contribution of eigenvalues outside of the limit support. As an application, we show how this large deviation principle yields a probabilistic proof of the celebrated Killip-Simon sum rule: a remarkable relation between the entries of a Jacobi operator and its spectral measure. We also obtain new variants of such sum rules. This talk is based on joint work with Fabrice Gamboa and Alain Rouault.
The fermionic oscillator semigroup is a natural quantum analog of the classical Mehler semigroup, which is the semigroup generated by the bosonic number operator in its standard representation as an operator on functions on Euclidean space with a Gaussian reference measure. The Mehler semigroup plays an important role in the proof of important inequalities that govern classical information theory. There is a very close analogy between the classical Mehler semigroup and its fermionic analog, which was borne out in the proof of Gross's conjecture that the fermionic semigroup should have the same optimal hypercontractivity properties as its classical cousin. The optimal fermion hypercontractivity inequality can be viewed as a quantum convolution inequality. We present some recent results developing this perspective, which are relevant to questions concerning the entropy power inequality in quantum information theory, and which are joint work with Elliott Lieb and with Jan Maas.
The study of highly symmetric discrete structures in ordinary 3-space has a long and fascinating history. A radically new, skeletal approach to polyhedra was pioneered by Grünbaum in the 1970s, building on Coxeter's work. A polyhedron, or more general polyhedral structure, is viewed as a finite or infinite periodic geometric edge graph in space equipped with additional (super) structure imposed by the faces, and its symmetry is measured by transitivity properties of its geometric symmetry group. Since the mid 1970s, there has been a lot of activity in this area. We survey the present state of the ongoing program to classify discrete polyhedral structures in space by distinguished transitivity properties of their symmetry groups.
We discuss the construction of local Lyapunov functions for asymptotically stable equilibria in dynamical systems generated by random and stochastic differential equations. A special focus is given to the conversion of systems of stochastic differential equations to equivalent systems of random differential equations, and to stability notions in the mean-square sense, which are tailored for computational considerations and Monte Carlo approaches.
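For the scalar linear test equation dX = aX dt + bX dW, mean-square stability can be checked directly by Monte Carlo, since E[X(t)²] = X₀² e^{(2a+b²)t} and the equilibrium is mean-square stable if and only if 2a + b² < 0. The following sketch uses Euler-Maruyama with illustrative parameters; it is a minimal example of the computational viewpoint, not a construction from the talk.

```python
import numpy as np

def second_moment(a, b, t_end=2.0, n_steps=400, n_paths=20000, x0=1.0, seed=0):
    """Monte Carlo estimate of E[X(t_end)^2] for the linear test SDE
    dX = a*X dt + b*X dW, via Euler-Maruyama (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = x + a * x * dt + b * x * rng.normal(0.0, np.sqrt(dt), n_paths)
    return float(np.mean(x ** 2))

# Exact: E[X(t)^2] = x0^2 * exp((2a + b^2) t); stable iff 2a + b^2 < 0.
print(second_moment(-1.0, 0.5))   # 2a + b^2 = -1.75 < 0: second moment decays
print(second_moment(0.05, 0.1))   # 2a + b^2 =  0.11 > 0: second moment grows
```

The estimates match the exact second moments e^{−3.5} ≈ 0.030 and e^{0.22} ≈ 1.25 up to Monte Carlo and discretization error.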
Random walk on the giant component of the Erdős–Rényi random graph exhibits cutoff, which is fast convergence to the stationary distribution within a specific time frame, the so-called cutoff window. The same is true for random walk on d-regular graphs, and more generally, random walks on random graphs with given degree sequences. This is in contrast to random walk on other classes of graphs, for example ladder graphs or cycles. The talk is based on my Master's thesis and aims at explaining why cutoff occurs for some classes of graphs and not for others, and how to prove that. I will specifically discuss coupling techniques. The theory will be illustrated with numerical simulations.
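The contrast with the non-cutoff case can be illustrated exactly, without sampling, by iterating the transition kernel of the lazy simple random walk on the n-cycle (parameters below are made up for illustration): the total variation distance to the uniform stationary distribution decays gradually over the scale n², with no sharp cutoff window.

```python
import numpy as np

def tv_to_uniform(n, t):
    """Total variation distance to the uniform distribution after t steps of
    the lazy simple random walk on the n-cycle, started from a point mass.
    Computed exactly by iterating the transition kernel."""
    dist = np.zeros(n)
    dist[0] = 1.0
    for _ in range(t):
        # Lazy walk: stay with prob. 1/2, step to each neighbour with prob. 1/4.
        dist = 0.5 * dist + 0.25 * (np.roll(dist, 1) + np.roll(dist, -1))
    return 0.5 * np.abs(dist - 1.0 / n).sum()

n = 40
# Gradual decay over the scale n**2, rather than an abrupt drop.
print([round(tv_to_uniform(n, t), 3) for t in (n**2 // 8, n**2 // 2, 2 * n**2)])
```

On expander-like graphs such as random regular graphs the analogous curve drops from near 1 to near 0 within a narrow window around the mixing time, which is the cutoff phenomenon discussed in the talk.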