Contents: This course discusses time-frequency analysis tools such as wavelets and Gabor systems, which have had great impact on signal processing and communications engineering over the past 25 years. We cover the finite-dimensional setting, including some applications of compressive sensing in time-frequency analysis. The course is suitable for mathematically inclined electrical engineers and for mathematicians at the graduate-student level, i.e., in the final two years of their MSc studies.
Educational objectives: Participating students will receive a solid background on many aspects of modern time-frequency analysis. Students will recognize the usefulness of the discussed tools and function spaces for analyzing functions and operators that are well described in, respectively on, phase space. Moreover, students will understand the cross-fertilization between time-frequency analysis and various disciplines of electrical engineering, such as communications engineering and information theory.
Educational methods: lectures, exercises. Media: blackboard.
Level: Master Duration: 1 Semester (WS 13/14), lectures will be held in January and February on 12 dates with weekly exercises Language: English Exam: oral (20 min.)
Prerequisites (recommended): Bachelor's degree; measure and integration theory; basics of functional analysis and Fourier analysis
Literature:
K. Gröchenig, Foundations of Time-Frequency Analysis, Birkhäuser, 2001.
P. G. Casazza and G. Kutyniok (eds.), Finite Frames: Theory and Applications, Birkhäuser, Boston, 2012.
A simple polygonal chain in 3D, with fixed edge lengths and fixed angles between consecutive edges, is a geometric abstraction of a robot arm, as well as of a protein backbone. One end is pinned down, and the other can be thought of as the "hand", which can "reach" a certain region in space called the workspace or reachability region of the robot. Traditionally, the workspace was computed with approximate, numerical methods, which lack accuracy and correctness guarantees. Characterizing the positions of maximum and minimum reach and uncovering structural properties of the reachability region have been open problems for over 40 years.
In recent joint work with Ciprian Borcea, we have found a surprisingly elementary, geometric characterization of extremal reaches. This led to exact, polynomial time algorithms for computing the workspace for a large and important class of robot arms, and to several practical applications.
This talk is a survey of our results, which will be presented using a variety of physical props and 3D graphics.
A central problem in quantitative risk management concerns the evaluation of the risk of a portfolio, i.e., the sum $S$ of $d$ individual risks $X_i$. Solving this problem is mainly a numerical task once the joint distribution of $(X_1,X_2, \dots ,X_d)$ is completely specified. Unfortunately, while the marginal distributions of the risks $X_i$ are often known, their interaction (dependence) is usually either unknown or only partially known, implying that any computed risk measure of $S$ is subject to model error. Previous academic research has provided maximum and minimum possible values for risk measures when only the marginal distributions are assumed to be known (unconstrained bounds). This approach leads to wide bounds, as all information on the dependence is ignored. In this paper, we also take the availability of dependence information into account in the computation of the bounds. We provide analytic bounds that are easy to compute but not always sharp. We also provide algorithms that allow us to obtain sharp bounds approximately. Interestingly, the approximate sharp bounds closely match those obtained analytically. Numerical illustrations show that our approach leads to bounds that are significantly tighter than the (unconstrained) ones available in the literature. (This is joint work with Steven Vanduffel.)
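Algorithms of the kind mentioned in the abstract are often rearrangement-type schemes. Below is a minimal sketch (the function name and the uniform test marginals are my own illustration, not taken from the talk): each marginal is discretized on a quantile grid, and each column is repeatedly rearranged to be antimonotone to the sum of the other columns, which flattens the row sums.

```python
import numpy as np

def rearrange(X, n_iter=50):
    """Rearrangement sketch: make each column antimonotone
    to the sum of the remaining columns."""
    X = X.copy()
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            others = X.sum(axis=1) - X[:, j]
            # rank 0 = largest 'others' -> receives the smallest value of column j
            order = np.argsort(np.argsort(-others))
            X[:, j] = np.sort(X[:, j])[order]
    return X

# two uniform(0,1) marginals on a quantile grid
n = 100
q = (np.arange(n) + 0.5) / n
X = np.column_stack([q, q])
sums = rearrange(X).sum(axis=1)
print(sums.min(), sums.max())  # both approximately 1: countermonotone pairing
```

For two uniform marginals the countermonotone arrangement pairs $q$ with $1-q$, so all row sums collapse to the constant 1; for more than two risks the iteration only reduces the spread approximately.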
The risk of a financial position is usually summarized by a risk measure. As this risk measure has to be estimated from historical data, it is important to be able to verify and compare competing estimation procedures. In statistical decision theory, risk measures for which such verification and comparison are possible are called elicitable. It is known that quantile-based risk measures such as value-at-risk are elicitable. However, the coherent risk measure expected shortfall is not elicitable, so it is unclear how to perform forecast verification or comparison for it. We address the question of whether coherent and elicitable risk measures exist (other than minus the expected value). We show that expectiles provide one positive answer, and that they play a special role amongst all elicitable, law-invariant, coherent risk measures.
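For reference, the $\tau$-expectile can be written as the minimizer of an asymmetrically weighted quadratic loss (a standard definition, stated here for orientation rather than taken from the talk):

```latex
e_\tau(X) \;=\; \arg\min_{x\in\mathbb{R}}\;
\mathbb{E}\!\left[\tau\,\big((X-x)^+\big)^2
 + (1-\tau)\,\big((x-X)^+\big)^2\right],
\qquad \tau\in(0,1).
```

For $\tau = 1/2$ this recovers the mean; elicitability comes precisely from this representation as the minimizer of an expected scoring function.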
An efficient estimator is constructed for the quadratic covariation or integrated covolatility matrix of a multivariate continuous martingale based on noisy and non-synchronous observations under high-frequency asymptotics. Our approach relies on an asymptotically equivalent continuous-time observation model where a local generalised method of moments in the spectral domain turns out to be optimal. Asymptotic semiparametric efficiency is established in the Cramér-Rao sense. The efficient covariance structure shows surprising geometric features. The main findings are that non-synchronicity of observation times has no impact on the asymptotics and that major efficiency gains are possible under correlation. (Joint work with Markus Bibinger, Nikolaus Hautsch, and Peter Malec.)
We consider the elasticity problem in a heterogeneous domain with an $\varepsilon$-periodic micro-structure, including a multiple micro-contact between the structural components. These components can be a simply connected matrix domain with open cracks or inclusions completely surrounded by cracks, which do not connect to the boundary. The contact is described by the Signorini and Tresca-friction contact conditions. The Signorini condition is a closed convex cone for the open cracks, while the friction condition is a nonlinear convex functional over the interface jump of the solution on the oscillating interface. The difficulties appear when the inclusions are completely surrounded by cracks and can have rigid displacements. In this case, in order to obtain preliminary estimates for the solution in the $\varepsilon$-domain, the Korn inequality has to be modified, first in the fixed setting and then for the $\varepsilon$-dependent periodic case. Additionally, for all states of the contact (inclusions can move freely, or are locked at the interface with the matrix, or the frictional traction is attained on the inclusion-matrix interface and the inclusions can slide in the direction tangential to the interface) we obtain estimates for the solution in the $\varepsilon$-domain, uniform with respect to $\varepsilon$. An asymptotic analysis (as $\varepsilon\to 0$) for nonlinear functionals over the growing interface is also performed, based on the application of the periodic unfolding method for sequences of jumps of the solution on the oscillating interface.
We study pulses in optical glass fiber cables which have a periodically varying dispersion along the cable. This dispersion management leads to interesting effects in such cables. Mathematically, the pulses are described by a non-local version of the non-linear Schrödinger equation. This non-locality makes the rigorous analysis hard. In the talk, we will focus on the existence of solitary pulses in these cables and show that they are well-localized.
A time-dependent Poisson-Nernst-Planck system of nonlinear partial differential equations is considered. It is modeled in terms of the Fickian multiphase diffusion law coupled with electrostatic and quasi-Fermi electrochemical potentials. The model describes a variety of electrokinetic phenomena in the physical and biological sciences. The generalized model is supplemented by positivity and volume constraints, by quasi-Fermi electrochemical potentials depending on the pressure, and by inhomogeneous transmission boundary conditions representing reactions at the micro-scale level. We aim at a proper variational modeling, optimization, and asymptotic analysis as well as homogenization of the model at the macro-scale level.
Next-generation sequencing technologies are becoming increasingly popular due to their greater sensitivity, specificity, and accuracy compared to microarray technologies. However, the statistical analysis of RNA-Seq data poses challenges different from those of microarray gene expression data. Furthermore, to obtain statistically stable sets of differentially expressed genes, high numbers of biological replicates are needed, which are often very expensive to perform. In this talk, both issues will be tackled. First, an introduction to RNA-Seq data, its processing, and its statistical analysis for the detection of differentially expressed genes will be given. Second, results on sequential experimental design for multiple-group RNA-Seq data, such as tumors at different stages, will be presented. More precisely, the experimental design initially consists of a small number of replicates for each of several groups of interest. The goal is to find those groups for which the most improvement in terms of identifying differentially expressed genes can be obtained. In this context, the use of clustering algorithms is additionally examined to see whether different experiments are optimal for different gene expression profiles.
Distributional transformations play a large role in Stein's method for computing explicit bounds on the error in distributional convergence. This is done by coupling the random variable of interest with another random variable having the transformed distribution and rewriting the terms resulting from Stein's equation. We review known transformations, such as the size-bias and zero-bias transformations, and explain their use in Stein's method. Furthermore, we present new abstract theorems on the existence and uniqueness of distributional transformations with certain properties, which generalize earlier results by Goldstein and Reinert. Finally, we hint at possible applications of our theory, including random walk models.
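For orientation, the two classical transformations can be stated via their characterizing identities (standard definitions, valid for all suitable test functions $f$):

```latex
\mathbb{E}\big[X f(X)\big] = \mu\,\mathbb{E}\big[f(X^{s})\big]
\quad\text{(size bias: } X \ge 0,\ \mu = \mathbb{E}X > 0\text{)},
```

```latex
\mathbb{E}\big[X f(X)\big] = \sigma^{2}\,\mathbb{E}\big[f'(X^{z})\big]
\quad\text{(zero bias: } \mathbb{E}X = 0,\ \sigma^{2} = \operatorname{Var}X\text{)}.
```

A random variable $X^{s}$ (resp. $X^{z}$) satisfying the identity is said to have the size-biased (resp. zero-biased) distribution of $X$; the abstract theorems mentioned above concern when such transformed distributions exist and are unique.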
The colloquium talk will offer a walk through time-frequency analysis. As the main motivation we will use the transmission of digital information by OFDM (orthogonal frequency division multiplexing) in wireless communications. This idea leads immediately to intriguing questions about Gabor expansions and Gabor frames. Their rigorous mathematical investigation connects harmonic analysis, complex analysis, operator theory, and even some non-commutative geometry. I will try to explain some of the main ideas and intra-mathematical connections.
The classical sampling theorem states that a bandlimited function on the real line can be recovered from its values on a discrete set. Thomas Kailath realized in the early 1960s that, similarly, an operator whose Kohn-Nirenberg symbol is bandlimited to a rectangle of area one can be recovered from its response to a sum of Dirac impulses; this was the first cornerstone of the now well-developed theory of operator sampling.
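The classical result referred to here is the Shannon sampling formula: if $\hat f$ is supported in $[-W, W]$, then $f$ is determined by its samples on the grid $\tfrac{1}{2W}\mathbb{Z}$ via

```latex
f(t) \;=\; \sum_{n\in\mathbb{Z}} f\!\left(\frac{n}{2W}\right)
\operatorname{sinc}\big(2Wt - n\big),
\qquad
\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.
```

Operator sampling replaces the function samples by the operator's response to a Dirac comb, with the area-one condition on the symbol's band playing the role of the Nyquist rate.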
The focus of this talk is to describe the role of orbits of finite-dimensional vectors under time-frequency shifts, that is, of Gabor systems, in operator sampling. For example, properties of generic Gabor frames allow us to use ideas and algorithms from compressive sensing to determine an operator. We also show that Gabor systems that form symmetric, informationally complete, positive operator-valued measures (SIC-POVMs), that is, equiangular tight frames, allow us to design novel estimators for stationary stochastic operators, more precisely, for so-called wide-sense stationary with uncorrelated scattering (WSSUS) channel operators.
We consider the discretization of an optimal boundary control problem with distributed observation by the boundary concentrated finite element method. With an \( H^{1+\delta}(\Omega) \)-regular elliptic PDE on two-dimensional domains as constraint, we prove that the discretization error \( \|u^*-u_h^*\|_{L_2(\Gamma)} \) decreases like \( N^{-\delta} \), where \( N \) denotes the total number of unknowns. For the case \( \delta = 1 \) in convex polygonal domains, the discretization error for \( h \)-FEM behaves like \( N^{-3/4} \), whereas for boundary concentrated FEM it behaves like \( N^{-1} \). This makes the boundary concentrated FEM favorable in comparison to \( h \)-FEM. The method is also suitable for treating piecewise defined data and a tracking functional acting only on a subdomain of \( \Omega \). We present several numerical results.
I will present my work in collaboration with Alexander Mielke toward a rigorous justification of the classical linearization approach in plasticity. When restricting to the small-deformation realm it is indeed customary to leave the nonlinear finite-strain frame and resort to linearized theories instead. This reduction is usually motivated by means of heuristic Taylor expansion arguments. I will complement these formal motivations by providing a rigorous linearization proof in the framework of the general theory of evolutionary $\Gamma$-convergence for rate-independent processes. In particular, I will check that, by taking the small-deformations limit, energetic solutions of the quasi-static finite-strain elastoplasticity system converge to the unique strong solution of linearized elastoplasticity.
In mathematical biology, chemotaxis denotes the partially oriented movement of individuals, usually of single cells, along gradients of a chemical signal substance. Experimental findings report striking effects of such chemotactic migration, inter alia phenomena of self-organization such as spatial aggregation. A prototypical model for the description of such chemotactic dynamics, consisting of two parabolic equations with a cross-diffusive term as its most characteristic ingredient, was proposed by Keller and Segel as early as 1970 and has been intensively discussed in the mathematical literature since then. However, the fundamental mathematical question concerning the existence of exploding solutions has so far been answered satisfactorily only for simplified systems. The presentation aims at reporting some recent developments, with a particular focus on mathematical methods for detecting blow-up solutions.
I start with a stochastic directed graph on the integers, in which a directed edge between i and a larger integer j exists with probability p that may depend on the distance j-i, and there are no edges from bigger to smaller integers. Edge lengths L(i,j) may be constants or i.i.d. random variables. We also introduce a complementary "infinite bin" model. We study the asymptotics of the maximal path length in a long chunk of the graph. Under certain assumptions, the model has a regenerative structure and, in particular, the SLLN and the CLT follow. Otherwise, we obtain scaling laws and asymptotic distributions expressed in terms of a "continuous last-passage percolation" model on [0,1].
If time allows, I introduce multi-dimensional extensions of the model and discuss similar models.
Modern tensor formats have been shown to be applicable to a variety of problems in applied mathematics, e.g., the efficient representation and approximation of multivariate functions in high dimensions. Often the choice of an adequate format is crucial for efficient numerical treatment. We present the new Hierarchical Tensor Format and, in particular, the concept of truncation. We demonstrate the scope of the hierarchical approach with some examples.
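Truncation in hierarchical formats is built on low-rank approximation of tensor matricizations. The following is a minimal illustration of that basic SVD step only, not of the Hierarchical Tensor Format itself; the function name and example tensor are my own.

```python
import numpy as np

def truncate_mode(T, mode, rank):
    """Truncate tensor T to the given rank along one matricization:
    unfold along `mode`, keep the leading singular triples, refold."""
    A = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_r = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    shape = (T.shape[mode],) + tuple(d for i, d in enumerate(T.shape) if i != mode)
    return np.moveaxis(A_r.reshape(shape), 0, mode)

# a rank-1 tensor is reproduced exactly by a rank-1 truncation
a, b, c = np.arange(1., 4.), np.arange(1., 5.), np.arange(1., 6.)
T = np.einsum('i,j,k->ijk', a, b, c)
print(np.allclose(truncate_mode(T, 1, 1), T))  # True
```

Hierarchical formats apply such truncations to a whole binary tree of matricizations at once, which is what makes the storage cost scale benignly with the dimension.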
I will start with a short story, hopefully entertaining. Then I will discuss the general philosophy of precision computations of Lamb shifts using QED, based in particular on the works of Shabaev and Pachucki. I will explain two kinds of effective Hamiltonians. Then I will discuss the formalism of time-ordered and 2-times Green's functions. Finally, if there is still time, I will talk about the structure of QED and possible perturbative approaches, which are relevant for bound state computations.
Part II of the talk from 29.01.2014.