In this talk, we will discuss the massively parallel Sample and Prune procedure for solving problems on large datasets. Sample and Prune is a distributed sampling method used to efficiently discover a small set of representative elements from a large dataset. We discuss how Sample and Prune can be used for clustering and submodular optimization. In particular, we show how this procedure can be utilized to simulate a class of greedy sequential algorithms in the distributed setting. We will further discuss how it could be extended to construct efficient algorithms for a wide range of problems in the distributed setting.
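As a rough illustration of the template (not the talk's exact algorithm), the following Python sketch applies sample-and-prune to cardinality-constrained monotone submodular maximization: each round samples a machine-sized subset, extends the solution by running the sequential greedy on the sample, and then prunes elements that can no longer be picked. The round structure, sample size, and the simple zero-gain pruning rule are illustrative assumptions.

```python
import random

def greedy(f, pool, solution, k):
    """Sequential greedy on a small in-memory candidate pool."""
    sol = list(solution)
    pool = [e for e in pool if e not in sol]
    while len(sol) < k and pool:
        best = max(pool, key=lambda e: f(sol + [e]) - f(sol))
        if f(sol + [best]) - f(sol) <= 0:
            break
        sol.append(best)
        pool.remove(best)
    return sol

def sample_and_prune(f, elements, k, sample_size, rounds=4):
    """Sketch of sample-and-prune for monotone submodular maximization
    under a cardinality constraint k."""
    solution, candidates = [], list(elements)
    for _ in range(rounds):
        sample = random.sample(candidates, min(sample_size, len(candidates)))
        solution = greedy(f, sample, solution, k)
        # Prune (in parallel over machines): by submodularity, an element
        # with non-positive marginal gain now can never be chosen later.
        candidates = [e for e in candidates
                      if e not in solution and f(solution + [e]) - f(solution) > 0]
        if len(candidates) <= sample_size:     # small enough for one machine
            return greedy(f, candidates, solution, k)
    return solution

# Toy demo: maximum coverage, f(S) = size of the union of the chosen sets.
sets = {i: set(random.sample(range(200), 15)) for i in range(60)}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(sample_and_prune(f, list(sets), k=5, sample_size=15))
```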
The talk focuses on problems where geometries have to be reconstructed from measurements of waves. Starting from a given measurement of the boundary data of a hyperbolic equation, we seek a geometry that produces the observed reflections and spectra. Typical applications include the detection of inclusions in solids, the design of acoustic devices, remote sensing, and radar.
The presentation covers both the continuous and the discretized setting. On the continuous side, we derive the shape derivative for general hyperbolic conservation laws, which typically leads to boundary representations of the directional derivatives with respect to a deformation of the domain. The resulting adjoint problem runs backwards in time and therefore requires the forward solution to be stored, which means handling large amounts of data. The boundary structure is thus quite beneficial for time-dependent problems, as it reduces the data overhead needed to compute the adjoint solution.
On the discretized side, the continuous problem is coupled to the conditions of optimal Delaunay triangulations, thereby creating a self-reparameterizing scheme that ensures mesh quality throughout the optimization.
The presentation concludes with inverse problems based on acoustics and Maxwell’s equations.
Online algorithms are usually analyzed using the notion of competitive ratio, which compares the solution obtained by the algorithm to that obtained by an offline adversary on the worst possible input sequence. This measure often turns out to be too pessimistic, and one popular remedy, especially for scheduling problems, has been "resource augmentation", first proposed by Kalyanasundaram and Pruhs. Although resource augmentation has been very successful for a variety of objective functions, there are problems for which even an arbitrarily large constant speedup cannot lead to a constant-competitive algorithm. We propose a "rejection model" which requires no resource augmentation but permits the online algorithm to leave an ε-fraction of the requests unserved.
The problems considered in this work are in the restricted assignment setting, where each job can be assigned only to a subset of the machines. For the load balancing problem, where the objective is to minimize the maximum load on any machine, we give an O(log^2(1/ε))-competitive algorithm which rejects at most an ε-fraction of the jobs. For the problem of minimizing the maximum weighted flow-time, we give an O(1/ε^4)-competitive algorithm which can reject at most an ε-fraction of the jobs by weight. We also extend this result to a more general setting where the weight of a job for measuring its weighted flow-time and its contribution towards the total allowed rejection weight are different. This is useful, for instance, when we consider the objective of minimizing the maximum stretch; we obtain an O(1/ε^6)-competitive algorithm in this case. All these problems admit strong lower bounds on the competitive ratio of any online algorithm under the speed augmentation model.
Further, for the problem of minimizing the weighted ℓp-norm of flow-time, we give a (1/ε)^O(1)-competitive algorithm which rejects at most an ε-fraction of the jobs by weight. This is again in contrast with the resource augmentation model, under which no algorithm with a competitive ratio independent of p exists. All our algorithms are immediate-dispatch, though they may not be immediate-reject.
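As a toy illustration of the rejection model itself (not of any of the algorithms above), the following Python sketch assigns restricted-assignment jobs greedily online and rejects a job when accepting it would exceed a load threshold; the threshold rule and the guess `opt_guess` of the offline optimum are hypothetical.

```python
import math

def online_with_rejection(jobs, num_machines, opt_guess, eps):
    """Greedy least-loaded assignment in the restricted-assignment setting,
    allowed to reject up to an eps-fraction of the jobs.  The threshold rule
    below is a made-up illustration of the model, not the talk's algorithm."""
    load = [0.0] * num_machines
    rejected = []
    budget = eps * len(jobs)                       # rejection budget
    threshold = opt_guess * math.log2(2 + 1/eps)   # hypothetical rule
    for j, (size, feasible) in enumerate(jobs):
        m = min(feasible, key=lambda i: load[i])   # least-loaded feasible machine
        if load[m] + size > threshold and len(rejected) < budget:
            rejected.append(j)                     # reject rather than overload
        else:
            load[m] += size
    return max(load), rejected

# Toy usage: 3 machines, each job restricted to a subset of them.
jobs = [(1.0, [0, 1]), (2.0, [1]), (1.5, [0, 2]), (3.0, [2]), (1.0, [1, 2])]
print(online_with_rejection(jobs, num_machines=3, opt_guess=3.0, eps=0.2))
```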
We develop an algorithm for tackling PDE-constrained shape optimization problems in the framework of finite element discretizations of the underlying boundary value problem. The key idea is to identify shapes with diffeomorphisms and to reformulate shape optimization problems so that all computations can be performed on an initial mesh. We then discretize the deformation diffeomorphisms with B-splines and present an optimization algorithm based on superconvergent domain integral expressions of shape gradients. We discuss theoretical aspects of this approach and provide numerical evidence of its performance on well-posed test cases. Finally, we show how the method has been used to improve the design of microlenses.
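To convey the flavor of the approach, here is a minimal, hedged Python sketch in which a fixed initial mesh is deformed by a displacement field spanned by B-splines and the control coefficients are optimized. The toy objective and the plain finite-difference gradient are stand-ins for the PDE constraint and the superconvergent shape-gradient expressions of the talk; all names and parameters are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

deg, ncp = 2, 4
knots = np.concatenate(([0.0]*deg, np.linspace(0.0, 1.0, ncp - deg + 1), [1.0]*deg))

def basis(i, x):
    """i-th clamped B-spline basis function of degree `deg` on [0, 1]."""
    c = np.zeros(ncp); c[i] = 1.0
    return BSpline(knots, c, deg)(x)

# Fixed initial mesh: nodes of a uniform grid on the unit square.
nx = 21
X, Y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, nx))

def deform(coef):
    """Displace mesh nodes vertically by a B-spline field in x; all
    evaluations live on the *initial* mesh, as in the talk."""
    disp = sum(coef[i] * basis(i, X) for i in range(ncp))
    return X, Y + Y * disp                 # bottom edge (Y = 0) stays fixed

def objective(coef):
    """Toy stand-in functional: match a prescribed top-edge profile."""
    _, Yd = deform(coef)
    target = 1.0 + 0.2 * np.sin(np.pi * X[-1])   # hypothetical target shape
    return np.sum((Yd[-1] - target) ** 2)

# Finite-difference gradient descent on the control coefficients.
coef, h, step = np.zeros(ncp), 1e-6, 0.05
for it in range(200):
    grad = np.array([(objective(coef + h*np.eye(ncp)[i]) - objective(coef)) / h
                     for i in range(ncp)])
    coef -= step * grad
print("optimized control coefficients:", coef)
```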
If statistics is concerned with the collection, management, analysis, presentation, and interpretation of data, then so is network science. The distinction lies neither in a universal network theory nor in the complex systems of certain application domains, but in (i) an incidence structure relating the units of observation and (ii) a corresponding focus on dependencies among observations. While not mainstream, this view clears away some of the smoke and mirrors obscuring the close relationship between statistics and network science.
A model for a micro-electrochemical cell with a flowing electrolyte is derived based on the Poisson–Nernst–Planck equations governing ion transport due to diffusion, advection and electric fields [1,2]. The electrode kinetics are modeled by Butler–Volmer boundary conditions. Specifically, we performed simulations as well as experimental studies of cyclic voltammetry for the ferri-/ferrocyanide redox couple on a gold-plated ECC biosensor encapsulated in a microfluidic system. We examined the effect of flow rate, scan rate, the concentration of the supporting electrolyte, exchange current density and electrode position on the cyclic voltammetry measurements. The results show that at a relatively high flow rate (250 μL/min) and low scan rates (50–200 mV/s), the current response is limited by convection: the quick supply of fresh ions at the electrode surface leads to fading hysteresis of the recorded voltammograms. However, at a high scan rate (250 mV/s) and low flow rates (50–200 μL/min), peak currents are recorded, which means that mass transport is dominated by diffusion and a quasi-steady-state voltammogram is recorded. In the case of insufficient supporting electrolyte, the excess charges generated during the scan lead to ohmic distortion in the electrolyte solution and consequently to a ramping effect in the recorded voltammograms. However, for a sufficient amount of supporting electrolyte (200 mM), the simulation results show good agreement with the experimental data [2].
When charging an electrode in an electrolyte, strong ion crowding may appear at the electrode, leading to densely packed ions in the Stern layer of the double-layer region. The finite ion size limits this crowding, and it has been suggested in the literature to model this steric effect by a singular diffusion term in the Nernst–Planck equation [3]. We consider an arbitrary electrolyte solution sandwiched between electrodes and allow for electrochemical reactions at the electrode/electrolyte interface. The results show a quick build-up of boundary layers in the double layer, which is counterbalanced by the finite-size constraint on the ionic species, preventing overcrowding of the ions. A comparison between model simulations with and without the singular diffusion term will be discussed.
References:
[1] B.J. Adesokan, A. Evgrafov and M.P. Soerensen, Simulating cyclic voltammetry under advection for electrochemical cantilevers, Math. Meth. Appl. Sci. 38 (2015) 3384–3391.
[2] B.J. Adesokan, X. Quan, A. Evgrafov, A. Heiskanen, A. Boisen and M.P. Soerensen, Experimentation and numerical modeling of cyclic voltammetry for electrochemical micro-sized sensors under the influence of electrolyte flow, Journal of Electroanalytical Chemistry 763 (2016) 141–148.
[3] M. S. Kilic, M. Z. Bazant, A. Ajdari, Steric effects in the dynamics of electrolytes at large applied voltages. II. Modified Poisson-Nernst-Planck equations, Phys. Rev. E 75 (2007) 021503.
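For readers who want a feel for the transport model discussed above, here is a minimal 1D dimensionless Poisson–Nernst–Planck sketch for a binary (z = ±1) electrolyte between two blocking electrodes, using explicit Euler in time and central differences in space. Advection, Butler–Volmer kinetics and the steric (singular-diffusion) term of [3] are omitted, and all parameters are illustrative, not those of the talk.

```python
import numpy as np

N, L = 200, 1.0
x = np.linspace(0, L, N); dx = x[1] - x[0]
lam = 0.05                                 # dimensionless Debye length
dt, steps = 0.2 * dx**2, 2000
cp = np.ones(N); cm = np.ones(N)           # cation / anion concentrations
V = 1.0                                    # applied potential difference

# Discrete Laplacian with Dirichlet rows for the Poisson solve.
A = np.diag(-2*np.ones(N)) + np.diag(np.ones(N-1), 1) + np.diag(np.ones(N-1), -1)
A[0, :] = 0; A[0, 0] = 1; A[-1, :] = 0; A[-1, -1] = 1

def poisson(cp, cm):
    """Solve lam^2 * phi'' = -(cp - cm)/2 with electrode potentials +/- V/2."""
    rhs = -(cp - cm) * dx**2 / (2 * lam**2)
    rhs[0], rhs[-1] = -V/2, V/2
    return np.linalg.solve(A, rhs)

def step(c, z, phi):
    """One explicit Nernst-Planck step; flux f = -(dc/dx + z c dphi/dx)."""
    dcdx = np.diff(c) / dx
    dpdx = np.diff(phi) / dx
    cf = 0.5 * (c[1:] + c[:-1])                # concentration at cell faces
    f = -(dcdx + z * cf * dpdx)
    f = np.concatenate(([0.0], f, [0.0]))      # blocking (no-flux) electrodes
    return c - dt * np.diff(f) / dx

for n in range(steps):
    phi = poisson(cp, cm)
    cp, cm = step(cp, +1, phi), step(cm, -1, phi)

print("double-layer charge near left electrode:", np.round((cp - cm)[:10], 3))
```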
Given a sample from a multivariate distribution F, uniform random variates generated independently and rearranged in the order specified by the vector of ranks look like a sample from the copula of F. This idea can be regarded as Baker (2008)'s construction of copulas based on order statistics with the ranks as coefficients, and it led us to define the empirical beta copula. It is then fairly easy to show that the empirical beta copula is a particular case of the empirical Bernstein copula (taking all orders of the Bernstein polynomials equal to the sample size). The advantage is that no smoothing parameter is needed. It is also extremely simple to simulate a sample from the empirical beta copula. We show that the empirical beta copula is a genuine copula by providing (necessary and) sufficient conditions for a Bernstein transformation to be a copula. Furthermore, we establish assumptions under which the standard asymptotic results hold for the empirical Bernstein copula; these are significantly weaker than those given in Janssen et al. (2012). Our Monte Carlo simulation study shows that smoothing does improve finite-sample performance: in all cases considered, the empirical beta copula outperforms the empirical copula in terms of both bias and integrated mean squared error. Compared with the empirical Bernstein copula with the optimal smoothing rate, its performance is still significantly better in several cases, especially in terms of bias.
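For concreteness, a short Python sketch of the empirical beta copula follows: with ranks R_ij from an n × d sample, its distribution function averages products of Beta(R_ij, n+1−R_ij) distribution functions over the data points, and simulation reduces to picking a data point at random and drawing each margin from the corresponding beta distribution. The toy data at the end are illustrative.

```python
import numpy as np
from scipy.stats import beta, rankdata

def empirical_beta_copula_cdf(u, ranks):
    """C_n(u) = (1/n) * sum_i prod_j F_{n,R_ij}(u_j), where F_{n,r} is the
    Beta(r, n+1-r) distribution function."""
    n, d = ranks.shape
    u = np.atleast_2d(u)
    vals = beta.cdf(u[:, None, :], ranks[None, :, :], n + 1 - ranks[None, :, :])
    return vals.prod(axis=2).mean(axis=1)

def sample_empirical_beta_copula(ranks, size, rng=None):
    """Pick a data point at random, then draw each margin from the
    corresponding Beta(R_ij, n+1-R_ij) distribution."""
    rng = np.random.default_rng(rng)
    n, d = ranks.shape
    idx = rng.integers(0, n, size)
    return rng.beta(ranks[idx], n + 1 - ranks[idx])

# Usage: ranks computed column-wise from a toy data matrix X (n x d).
X = np.random.default_rng(0).normal(size=(100, 2))
X[:, 1] += X[:, 0]                         # induce dependence
ranks = np.apply_along_axis(rankdata, 0, X).astype(int)
print(empirical_beta_copula_cdf([0.5, 0.5], ranks))
print(sample_empirical_beta_copula(ranks, 5, rng=1))
```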
In the first part of the talk we briefly introduce the concept of phase fields for the simulation of two-component systems. Here, a two-phase system might be a fluid consisting of two immiscible components, or the topology of some object, where the components are material and void. Thereafter we discuss the simulation of a two-phase fluid using a thermodynamically consistent model and propose an energy-stable discretization scheme. A proper spatial resolution of the equations is guaranteed by residual-based error estimation for the full system of equations. We then apply this diffuse-interface concept to the optimization of objects in Navier–Stokes flow. The resulting optimality conditions are solved by a gradient flow approach, which leads to equations similar to those for the two-phase flow simulation. Again, residual-based adaptive meshing is applied for the spatial resolution of the sought structure. We finish the talk with a brief outlook on current and future plans for the algorithmic treatment of the topology optimization problem.
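As a minimal scalar analogue of the phase-field idea (far simpler than the two-phase Navier–Stokes model with residual-based adaptivity discussed in the talk), the following sketch integrates the 1D Allen–Cahn equation, with φ = ±1 in the two components and a diffuse interface of width ~ε, using the standard stabilized semi-implicit Fourier scheme; all parameters are illustrative.

```python
import numpy as np

N, Lx, eps, dt, S = 256, 2*np.pi, 0.05, 0.01, 2.0
x = np.linspace(0, Lx, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, Lx/N)        # Fourier wavenumbers
phi = np.where(np.abs(x - np.pi) < 1.0, 1.0, -1.0)   # sharp initial data

for n in range(500):
    # eps^2 * Laplacian and the stabilization S are treated implicitly
    # (diagonal in Fourier space); f(phi) = phi^3 - phi is explicit.
    rhs = np.fft.fft(phi - dt*(phi**3 - phi) + dt*S*phi)
    phi = np.real(np.fft.ifft(rhs / (1 + dt*S + dt*eps**2*k**2)))

print("phase-field values across the interface:", np.round(phi[::32], 3))
```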
If one puts sand on a metal plate and allows the plate to vibrate, the sand arranges itself in the most miraculous ways (this used to be a circus trick and Napoleon was a big fan). We discuss the geometric structure of these shapes and how it interacts with standard and new types of packing problems.
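As a hedged aside, the classical textbook approximation (not the talk's analysis) places the sand on the nodal lines of a standing wave on the plate; the following sketch plots such nodal lines for a square plate, with mode numbers m, n chosen for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy Chladni model: sand collects where the standing wave
# u(x, y) = cos(m*pi*x)cos(n*pi*y) + lam*cos(n*pi*x)cos(m*pi*y) vanishes.
m, n, lam = 5, 2, 1.0
x = np.linspace(0, 1, 400)
X, Y = np.meshgrid(x, x)
U = np.cos(m*np.pi*X)*np.cos(n*np.pi*Y) + lam*np.cos(n*np.pi*X)*np.cos(m*np.pi*Y)
plt.contour(X, Y, U, levels=[0.0], colors="k")   # nodal lines = sand pattern
plt.gca().set_aspect("equal")
plt.title(f"Chladni nodal lines, (m, n) = ({m}, {n})")
plt.show()
```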