In the Multiple Allocation k-Hub Center (MAkHC) problem, we are given a connected edge-weighted graph G, sets of clients C and hub locations H, where V(G) = C ∪ H, a set of demands D ⊆ C × C, and a positive integer k. A solution is a set of hubs H' ⊆ H of size k such that every demand (a, b) is satisfied by a path starting in a, going through some vertex of H', and ending in b. The objective is to minimize the largest length of such a path. We show that finding a (3 − ϵ)-approximation is NP-hard already for planar graphs. For arbitrary graphs, the approximation lower bound holds even if we parameterize by k and the value r of an optimal solution. An exact FPT algorithm is also unlikely when the parameter combines k and various graph widths, including pathwidth. To confront these barriers, we give a (2 + ϵ)-approximation algorithm parameterized by treewidth and, as a byproduct, a (2 + ϵ)-approximation algorithm for unweighted planar graphs parameterized by k and r. Compared to classical location problems, computing the length of a path depends on non-local decisions. This renders standard dynamic programming algorithms impractical, so we introduce new ideas that allow our algorithm to approximate the length using only local information.
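To make the objective concrete, here is a brute-force sketch for intuition only (not the algorithm from the talk; the networkx representation and names are our own). It evaluates a candidate hub set as the largest, over all demands (a, b), of the shortest a-to-b path length through the best hub:

```python
import networkx as nx

def makhc_value(G, hubs, demands):
    """Cost of a hub set: the largest length, over all demands (a, b),
    of a shortest path from a through some hub h to b."""
    # All-pairs shortest path lengths in the edge-weighted graph.
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    return max(
        min(dist[a][h] + dist[h][b] for h in hubs)  # best hub for (a, b)
        for a, b in demands
    )
```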
We investigate linear programming based branch-and-bound using general disjunctions, also known as stabbing planes, and derive the first sub-exponential lower bound (i.e., of the form \(2^{L^{\delta}}\) for a constant \(\delta > 0\)) in the encoding length L of the integer program for the size of a general branch-and-bound tree. In fact, this is even the first super-linear bound.
This is achieved by showing that general branch-and-bound admits quasi-feasible monotone real interpolation. Specialized to a certain infeasible integer program whose infeasibility expresses that no graph can have both a k-coloring and a (k+1)-clique, this property states that any branch-and-bound tree for this program can be turned into a monotone real circuit of similar size that distinguishes between graphs with a k-coloring and graphs with a (k+1)-clique. Thus, we can utilize known lower bounds for such circuits.
Using Hrubeš' and Pudlák's notion of infeasibility certificates, one can show via the same technique that, with high probability, certain random CNFs are sub-exponentially hard to refute for branch-and-bound.
This is joint work with Marc Pfetsch.
The Banach-Mazur distance is a well-established notion of convex geometry with numerous important applications in fields such as discrete geometry and the local theory of Banach spaces. This notion has already been studied extensively by many different authors, but the vast majority of established results are of an asymptotic nature. The non-asymptotic properties of the Banach-Mazur distance seem to be quite elusive, and even in very small dimensions they are surprisingly difficult to establish. In fact, there are rather few cases in which the Banach-Mazur distance between a pair of convex bodies has been determined precisely. One example illustrating this difficulty is the case of the cube and the cross-polytope, whose Banach-Mazur distance was known only in the planar case. In this talk we prove that the distance between the cube and the cross-polytope equals 9/5 in dimension three and 2 in dimension four.
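For reference, one common formulation of the extended Banach-Mazur distance for (not necessarily symmetric) convex bodies \(K, L \subseteq \mathbb{R}^n\) is

\[
d_{BM}(K,L) \;=\; \inf\bigl\{\lambda \ge 1 \;:\; K - x \subseteq T(L - y) \subseteq \lambda\,(K - x) \ \text{for some } T \in \mathrm{GL}(n),\ x, y \in \mathbb{R}^n \bigr\},
\]

so that \(d_{BM}(K,L) = 1\) exactly when \(K\) and \(L\) are affinely equivalent; the results above concern the cube \([-1,1]^n\) and the cross-polytope \(\mathrm{conv}\{\pm e_1,\dots,\pm e_n\}\).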
The n-dimensional simplex is a convex body well known for its numerous remarkable features, and it has also been studied extensively from the point of view of the Banach-Mazur distance. Our starting point is the following well-known and interesting property: the n-dimensional simplex is equidistant (in the Banach-Mazur distance) to all symmetric convex bodies, with the distance equal to n. Moreover, it is known that the simplex is the unique convex body with this property. It is therefore natural to ask if the simplex is the unique convex body that is equidistant to all symmetric convex bodies, but not necessarily with the distance equal to n. We answer this question negatively in the planar case. For all 7/4 < r < 2 we provide a general construction of a family of convex bodies K which are at distance r from every symmetric convex body. It should be noted that this distance r coincides with the asymmetry constant of K, and the construction is based on some basic properties of the asymmetry constant.
Any collection of n compact convex planar sets K_1,…,K_n defines a vector of \(\binom{n}{2}\) mixed areas V(K_i, K_j) for 1 ≤ i < j ≤ n. We show that for n ≥ 4 these numbers satisfy certain Plücker-type inequalities. Moreover, we prove that for n = 4 these inequalities completely describe the space of all mixed area vectors (V(K_i,K_j) : 1 ≤ i < j ≤ 4). For arbitrary n ≥ 4 we show that this space has a semialgebraic closure of full dimension. As an application, we obtain an inequality description for the smallest positive homogeneous set containing the configuration space of intersection numbers of quadruples of tropical curves.
Joint work with Ivan Soprunov.
In light of the log-Brunn-Minkowski conjecture, various attempts have been made to define the geometric means of convex bodies. Many of these constructions are fairly complex and/or fail to satisfy some natural properties one would expect of such a mean. To improve our understanding of potential geometric mean definitions, we study the closely related p-means of convex bodies, with the usual definition extended to two families of means ranging over all p in [−∞,∞]. We characterize their equality cases and obtain inequalities (tight in almost all instances) that quantify how well these means approximate each other. Based on our findings, we propose a fairly simple definition of the geometric mean that satisfies the properties considered in recent literature, and discuss potential axiomatic characterizations. Finally, we conclude that some of these properties are incompatible with approaches to proving the log-Brunn-Minkowski conjecture via geometric means.
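As background, in our notation (hedged, since the talk's precise definitions may differ): for \(p \ge 1\), the \(p\)-mean \(M_p(K,L)\) of convex bodies \(K\) and \(L\) is commonly defined via support functions,

\[
h_{M_p(K,L)} \;=\; \Bigl(\tfrac{1}{2}\,h_K^{\,p} + \tfrac{1}{2}\,h_L^{\,p}\Bigr)^{1/p},
\]

so that \(p = 1\) gives the arithmetic (Minkowski) mean \(\tfrac{1}{2}(K+L)\) and \(p \to \infty\) gives \(\mathrm{conv}(K \cup L)\). For smaller \(p\) the expression need not be a support function, which is why dual constructions enter and two families of means arise; a geometric mean would correspond to the limit \(p \to 0\).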
Least-squares assignments are a common way to partition a data set in clustering applications. While the combined choice of best clusters and representative sites is a known hard problem, linear programming methods and polyhedral theory provide insight into closely related tasks. We show that one can efficiently decide whether a given clustering can be represented as a least-squares assignment, and we extend the related algorithm to arrive at soft-margin multiclass support vector machines. Further, we connect the search for optimal sites for a given clustering to volume computations for normal cones of an associated vertex in a certain polyhedron. This leads to new measures for the robustness of clusterings and explains why popular algorithms like k-means work well in practice.
Clustering is a fundamental problem setting with applications in many different areas. For a given set of points in a metric space and an integer k, we seek to partition the given points into k clusters. For each computed cluster, one typically defines one point as its center. A natural objective is to minimize the sum of the cluster radii, where each center is assigned the smallest radius r such that every point in its cluster is at a distance of at most r from the center. The best-known polynomial time approximation ratio for this problem is 3.389. In the setting with outliers, i.e., where we are given an integer m and allow up to m points to remain unclustered, the best-known approximation factor is 12.365. In this paper, we improve both approximation ratios to 3 + ε. Our algorithms are primal-dual algorithms that use fundamentally new ideas to compute solutions and to guarantee the claimed approximation ratios. For example, we replace the classical binary search for the best value of a Lagrangian multiplier λ by a primal-dual routine in which λ is a variable that is raised. Also, we show that for each connected component induced by almost-tight dual constraints, we can find a single cluster that covers all its points, and we bound its cost via a new primal-dual analysis. We remark that our approximation factor of 3 + ε is a natural limit for the known approaches in the literature.
A prominent problem in scheduling theory is the weighted flow time problem on one machine. We are given a machine and a set of jobs, each characterized by a processing time, a release time, and a weight. The goal is to find a (possibly preemptive) schedule for the jobs that minimizes the sum of the weighted flow times, where the flow time of a job is the time between its release time and its completion time. It had been a longstanding important open question to find a polynomial time O(1)-approximation algorithm for the problem, and this was resolved in a recent line of work. These algorithms are quite complicated and involve, for example, a reduction to (geometric) covering problems, dynamic programs to solve those, and LP-rounding methods to reduce the running time to a polynomial in the input size. In this paper, we present a much simpler (6+ϵ)-approximation algorithm for the problem that does not use any of these reductions, but works on the input jobs directly. It even generalizes directly to an O(1)-approximation algorithm for minimizing the p-norm of the jobs' flow times, for any 0 < p < ∞ (the original problem setting corresponds to p = 1). Prior to our work, for p > 1 only a pseudopolynomial time O(1)-approximation algorithm was known for this variant, and no algorithm for p < 1. For the same objective function, we present a very simple QPTAS for the setting of constantly many unrelated machines for 0 < p < ∞ (assuming quasi-polynomially bounded input data). It works in the cases with and without the possibility of migrating a job to a different machine. This is the first QPTAS for the problem if migrations are allowed, and it is arguably simpler than the known QPTAS for minimizing the weighted sum of the jobs' flow times without migration.
We study many-to-one matching problems for assigning a set of students to a set of courses according to their preference lists in which they rank the agents from the other set. The agents can also rate several agents of the other set equally or exclude them. Our goal is to find a stable matching that maximizes students’ satisfaction using Integer Programming (IP) methods. The flexibility of these methods becomes apparent by introducing additional conditions on the type of students and/or on the minimum number of students that must attend a course for it to be offered.
Quantum Computing (QC) has witnessed remarkable growth and has garnered significant attention in recent years, often accompanied by strong hype. This introductory talk aims to provide a comprehensive overview of the fundamental concepts in QC and its potential for mathematicians. The talk begins by explaining the core principles of QC, such as entanglement and superposition. Furthermore, the talk delves into the mathematical underpinnings of QC, demonstrating how operations on a quantum computer can be elegantly described using linear algebra. This approach enables the manipulation and analysis of quantum states, paving the way for the development of quantum algorithms. As an illustration of the power of quantum computing, the concept of quantum teleportation will be introduced, showcasing the remarkable ability to transmit quantum information using classical channels together with shared entanglement. The talk also highlights two influential quantum algorithms: Shor's and Grover's. Shor's algorithm, renowned for its impact on cryptography, factors large numbers in time polylogarithmic in the number to be factored, thereby threatening conventional encryption methods. Grover's algorithm, on the other hand, offers a powerful tool for database search, providing a quadratic speedup compared to classical search algorithms. Lastly, the current state of the art in quantum computing is addressed.
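To illustrate the linear-algebra view mentioned above, here is a minimal numpy sketch (our own illustration, not material from the talk): a Hadamard gate puts a qubit into superposition, measurement probabilities are squared amplitudes, and a CNOT after a Hadamard produces an entangled Bell state:

```python
import numpy as np

# Computational basis states |0> and |1> as vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The Hadamard gate, a unitary 2x2 matrix.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

psi = H @ ket0            # superposition (|0> + |1>)/sqrt(2)
probs = np.abs(psi) ** 2  # Born rule: measurement probabilities
print(probs)              # [0.5 0.5]

# Entanglement: a Bell state on two qubits via the Kronecker product.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
bell = CNOT @ np.kron(H @ ket0, ket0)  # (|00> + |11>)/sqrt(2)
```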
Instead of a classical research talk, Andreas Wiese will give a short presentation on the topic "How to be productive".
The talk will be followed by an open discussion, where everyone is invited to participate and give their own input, opinions and ideas on the topic.
An important objective function in the scheduling literature is to minimize the sum of weighted flow times. We are given a set of jobs, where each job is characterized by a release time, a processing time, and a weight. Our goal is to find a preemptive schedule on a single machine that minimizes the sum of the weighted flow times of the jobs, where the flow time of a job is the time between its completion time and its release time. It has been a natural open question whether the problem admits a polynomial time approximation scheme. We answer this question in the affirmative and present a polynomial time (1 + 𝜀)-approximation algorithm for weighted flow time on a single machine. We use a reduction of the problem to a geometric covering problem, which was introduced in previous approaches and which loses only a factor of 1 + 𝜀 in the approximation ratio. However, unlike the previous algorithm, we solve the resulting instances of the covering problem exactly, rather than losing a factor of 2 + 𝜀. Key to this is to identify and exploit structural properties of the instances of that covering problem which arise in the reduction from weighted flow time.
This talk is based on joint work with Lars Rohwedder and Andreas Wiese.
It is a well-known conjecture that integer programs with an integer constraint matrix whose subdeterminants are bounded in absolute value by a constant can be solved in polynomial time. Despite some recent progress in special cases, the question is still wide open. We consider the special case of a TU constraint matrix with one additional row, which in general encompasses various hard and interesting problems. We present partial progress towards resolving the question of polynomial solvability of such problems, in particular for transposed network matrices. The applied techniques range from Seymour's decomposition of TU matrices over certain types of proximity results for integer programs to graph and minor theory.
This is joint work with Manuel Aprile, Samuel Fiorini, Stefan Weltge and Yelena Yuditsky.
As part of our work on a standardized interface for exchanging models for gear calculation (www.rexs.info), we have defined a modeling structure for gears. Different component types are available as building blocks (e.g. shafts, bearings, gears). These can be connected to each other via component-specific relations. For example, there is a stage_relation with the roles "stage", "gear 1", "gear 2" for a gear stage, or a side_relation that connects a bearing with two other components. Most relations have 2 or 3 roles, i.e., components that they connect. So the model structure resembles a graph. In most cases, such a model in the interface format is imported by different tools and transferred into their internal data structures, which deviate (at least partly) from this modeling structure. When the model is exported again, the IDs of components and relations often change. After importing and exporting, certain aspects of the model may be more or less detailed than before (depending on what the tool can do and what it requires). This poses a challenge when one wants to compare the two model states. The idea is therefore to create a matching between the two models, based on the similarity of the model structure, which identifies corresponding components. In the seminar I would like to describe the REXS modeling system, outline the matching problem, and discuss whether there are existing solutions that could be adapted to this concrete problem.
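As a sketch of how such a model could be encoded for matching purposes (hypothetical names, not the actual REXS schema), one can represent components and relations as a graph whose nodes carry types and whose edges carry roles; relations with 2 or 3 roles then become nodes of their own, which also handles their hyperedge-like nature:

```python
import networkx as nx

def build_model_graph(components, relations):
    """components: {id: type}, e.g. {1: "shaft", 2: "gear"};
    relations: list of (relation_type, {role: component_id}) tuples."""
    G = nx.Graph()
    for cid, ctype in components.items():
        G.add_node(("comp", cid), type=ctype)
    for i, (rtype, roles) in enumerate(relations):
        G.add_node(("rel", i), type=rtype)  # each relation is its own node
        for role, cid in roles.items():
            G.add_edge(("rel", i), ("comp", cid), role=role)
    return G

# Two imports/exports of "the same" model can then be compared via
# structural similarity of these graphs, ignoring the volatile IDs.
```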
The goal of this project was, assuming both a group of seminars and a group of students give preferences over the agents of the other group they are interested in, to find a homogeneous assignment of participants to seminars that addresses the situation of each seminar and each student as well as possible. This problem is a generalization of the classical Hospitals/Residents Problem: we allowed indifference in the preference relations of each agent, assigned an arbitrary but fixed number of types to each student (each belonging to a category, e.g. study program, exchange student or not, etc.), and allowed each seminar to state the minimal number of participants it would require in order to be held. The impact of all these changes was studied and the resulting problem was formulated as a Linear Program.
When considering non-symmetric gauges, there are several ways to define the diameter of a convex body. These correspond to different symmetrizations of the gauge, i.e., means of the gauge body $C$ and $-C$. We study inequalities involving the inradius, circumradius, and diameter, and present examples and results confirming that not only does studying the symmetrizations help to understand the diameters better, but also the other way around.
In "Ellipsoids of maximal volume in convex bodies", Keith Ball proved a general bound on the volume of k-dimensional ellipsoids in n-dimensional convex bodies in relation to their John ellipsoid. A stronger bound is known in the symmetric case. Our goal was to connect these results by establishing a bound depending on the John asymmetry s_0. We prove a tight bound for all k and all asymmetry values s_0 not in (1, 1+2/n), and characterize the equality cases.
The classical Brunn-Minkowski inequality in the $n$-dimensional Euclidean space asserts that the volume (Lebesgue measure) to the power $1/n$ is a concave functional when dealing with convex bodies (non-empty compact convex sets). It quickly yields, among other results, the classical isoperimetric inequality, which can be summarized by saying that the Euclidean balls minimize the surface area measure (Minkowski content) among those convex bodies with prescribed positive volume. Moreover, it implies the so-called Rogers-Shephard inequality, which provides a sharp upper bound for the volume of the difference set in terms of the volume of the original convex body.
There exist various facets of the previous results, due to their different versions, generalizations, and extensions. In this talk, after recalling the above classical inequalities for the volume, we will discuss and show certain discrete analogues of them for the lattice point enumerator, which gives the number of integer points of a bounded set. Moreover, we will show that these new discrete inequalities imply the corresponding classical results for convex bodies.
This talk is based on joint works with David Alonso-Gutiérrez (Universidad de Zaragoza), David Iglesias (Universidad de Murcia), Eduardo Lucas (Universidad de Murcia), and Artem Zvavitch (Kent State University).
Clustering is one of the fundamental tasks in data analytics and machine learning. In many situations, different clusterings of the same data set become relevant. For example, different algorithms for the same clustering task may return dramatically different solutions. We are interested in applications in which one clustering has to be transformed into another; such a scenario arises, for example, when a gradual transition from an old solution to a new one is required. Based on linear programming and network theory, we develop methods for the construction of a sequence of so-called elementary moves that accomplishes such a transition. Specifically, we discuss two types of transitions: short transitions with a low number of steps and transitions that retain separation of clusters throughout.
After a short presentation by Andreas Wiese, there will be an open discussion where everyone is invited to participate and give their own input, opinions and ideas on the topic.
Assigning students to courses in the best possible way poses a major challenge for many algorithms. Properties such as strategyproofness, Pareto efficiency, and fairness are hard to guarantee. Two well-known algorithms have proven successful in this context, and they are to be compared with respect to their advantages and disadvantages.
In Scandinavia, as well as in many other countries, the election systems for the national assemblies are in a way bidimensional: seats are apportioned within constituencies but with respect to the national outcome of the parties. For that purpose, the seats are divided into proper constituency seats and adjustment seats. The allocation of the latter is mathematically interesting but politically controversial. Balinski and Demange have presented fairness properties which allocation methods of this kind (i.e. of the adjustment seats) should respect. They prove that this demand leads to one and only one method, given a specific underlying one-dimensional divisor rule like that of d'Hondt or Sainte-Laguë. This optimal solution can also be formulated as a simple linear optimal assignment problem. Pukelsheim has managed to convince lawmakers in the Canton of Zürich to adopt this optimal allocation method (based on dual multipliers).

The speaker, who has been advising the Parliament and Governments in Iceland (one of the Scandinavian countries) on election systems for over a quarter of a century, has, however, found that politicians, lawyers, and political scientists will only accept recursive algorithms for seat apportionments. Iterative methods are not agreeable, but the optimal solution calls for iterations. Consequently, practical allocation methods for bidimensional election systems are inevitably only approximations to the optimal method.

In the talk, several near-optimal allocation methods will be presented, many of which are derived from heuristics for the classical transportation problem (Monge, Vogel, Russell). To test the practicality and quality of these methods, a simulation model has been developed and will be presented in the talk. This model generates random election outcomes (with user-given averages, e.g. actual election results). The seats are then allocated using the different heuristic methods. The quality of each method is measured using different indicators, classical and new, thus enabling a ranking of the tested methods, in particular in comparison with the optimal method.
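For concreteness, a minimal sketch of one of the underlying one-dimensional divisor rules mentioned above (d'Hondt; the vote numbers are made up for illustration):

```python
def dhondt(votes, seats):
    """Allocate `seats` seats given party vote counts {party: votes}.
    d'Hondt divides by 1, 2, 3, ...; Sainte-Laguë would use
    votes[q] / (2 * alloc[q] + 1) instead."""
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        # Next seat goes to the party with the largest current quotient.
        q = max(votes, key=lambda q: votes[q] / (alloc[q] + 1))
        alloc[q] += 1
    return alloc

print(dhondt({"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}, 7))
# {'A': 3, 'B': 3, 'C': 1, 'D': 0}
```

The bidimensional methods discussed in the talk must reconcile one such rule per constituency with the national party totals, which is where the optimal assignment formulation and its iterative character come in.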
In a symmetric two-player game, a symmetric equilibrium can only be dynamically stable if it has positive index. The sum of the indices of all equilibria is 1, so a unique equilibrium has index 1. The index is a topological notion related to geometric orientation, and defined in terms of the sign of the determinant of the payoffs in the equilibrium support. We prove a simple strategic characterization of the index conjectured by Hofbauer: In a nondegenerate symmetric game, an equilibrium has index 1 if and only if it is the unique equilibrium in a larger symmetric game obtained by adding further strategies (it suffices to add a linear number of strategies).
Our elementary proof introduces "unit-vector games", where one player's payoff matrix consists of unit vectors, and applies simplicial polytopes in a novel way. In addition, we employ a very different known result: any matrix with positive determinant is the product of three P-matrices, a class of matrices important in linear complementarity.
Joint work with Anne Balthasar.
Biography: Bernhard von Stengel is professor of mathematics at the London School of Economics. He is interested in the geometry and computation of Nash equilibria and other mathematical questions of game theory and operations research. His professional degrees are in mathematics and in computer science.
Balanced knockout tournaments are one of the most common formats for sports competitions, and are also used in elections and decision-making. The computational complexity of determining whether there is a way to make a particular player win such a tournament had been an open problem for at least 10 years. We prove that checking whether there exists a draw in which a particular player wins is NP-complete. We also generalize this result to double-elimination tournaments.
Minimum k-Section denotes the problem of partitioning the vertex set of a given graph into k classes of equal size such that the number of edges between these classes is as small as possible. Even for trees on n vertices, it is NP-hard to approximate an optimal solution of the Minimum k-Section problem within a factor of n^c, for any c < 1. The goal of this talk is to show that every tree T on n vertices admits a k-section of width (k−1)(2 + 16n/diam(T)) · Δ(T). This bound implies a polynomial-time algorithm that approximates an optimal solution of the Minimum k-Section problem within a constant factor for trees of bounded degree and linear diameter. Furthermore, it is explained how the methods can be generalized with the help of tree decompositions.
A bisection of a graph is a partition of its vertex set into two classes of equal size, and its width is the number of edges of the graph that run between these classes. Minimum Bisection is the NP-hard problem of finding a bisection of smallest possible width in a given graph. The goal of this talk is to investigate the structure of trees of bounded degree that do not admit a bisection of constant width. First, it is shown that trees on n vertices containing a path of length n/4 admit a bisection of constant width. Then it is explained how these ideas can be generalized to arbitrary trees in order to construct bisections whose width depends on the diameter of the tree. This, in turn, can be generalized to arbitrary graphs with the help of tree decompositions.
Consider an important meeting to be held in a team-based organization. Taking availability constraints into account, an online scheduling poll is used to decide upon the exact time of the meeting. Decisions are to be taken during the meeting, therefore each team wants to maximize its relative attendance in the meeting (that is, the proportional number of its participating team members).
We introduce a corresponding game, where each team can declare (in the scheduling poll) a lower total availability, in order to improve its relative attendance---the pay-off. We are especially interested in situations where teams can form coalitions:
1. We provide an efficient algorithm that, given a coalition, finds an optimal way for each team in the coalition to improve its pay-off.
2. In contrast, we show that deciding whether a coalition with an improving move exists is NP-hard.
3. We also study the existence of Nash equilibria: finding Nash equilibria for various small sizes of teams and coalitions can be done in polynomial time, while it is coNP-hard if the coalition size is unbounded.
This is joint work with Robert Bredereck, Rolf Niedermeier, Svetlana Obraztsova, and Nimrod Talmon.
The study of highly symmetric discrete structures in ordinary 3-space has a long and fascinating history. A radically new, skeletal approach to polyhedra was pioneered by Grünbaum in the 1970s, building on Coxeter's work. A polyhedron, or more general polyhedral structure, is viewed as a finite or infinite periodic geometric edge graph in space equipped with additional (super) structure imposed by the faces, and its symmetry is measured by transitivity properties of its geometric symmetry group. Since the mid-1970s, there has been a lot of activity in this area. We survey the present state of the ongoing program to classify discrete polyhedral structures in space by distinguished transitivity properties of their symmetry groups.
tba
In this talk, I present a solution method for convex optimization. In the framework of this method, the original convex optimization problem is first reduced to a convex feasibility problem. Then several instances of the method of alternating projections are applied simultaneously, each assigned its own subproblem. After a polynomial number of iterations, at least one instance of the alternating projections solves its subproblem. If this subproblem is infeasible, the original optimization problem is simplified. This method leads to polynomial algorithms for linear optimization and for some combinatorial problems that are representable as convex problems by means of polynomial-time separation oracles.
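As a reminder of the basic primitive, here is a minimal self-contained sketch of the method of alternating projections for two convex sets, a hyperplane and a Euclidean ball (the talk's method runs several such instances in parallel, one per subproblem):

```python
import numpy as np

def project_hyperplane(x, a, b):
    # Projection onto the hyperplane {y : <a, y> = b}.
    return x - (a @ x - b) / (a @ a) * a

def project_ball(x, c, r):
    # Projection onto the ball of radius r centered at c.
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

def alternating_projections(x, a, b, c, r, iters=100):
    for _ in range(iters):
        x = project_ball(project_hyperplane(x, a, b), c, r)
    return x  # converges to a point of the intersection, if nonempty

x = alternating_projections(np.zeros(3), np.array([1.0, 1.0, 0.0]), 1.0,
                            np.ones(3), 2.0)
```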
Online algorithms are algorithms that have to make decisions before knowing the entire input. In a worst-case analysis, one considers the worst possible input for the algorithm in relation to the best offline solution. For a number of problems, however, this perspective is too pessimistic because a hypothetical adversary can trick the algorithm. In such cases, it is useful to look at input models with stochastic components. For example, the adversary may still determine the input but not its order, which is drawn uniformly at random from all possible permutations.
We first consider weighted bipartite online matching. The nodes of one side of the graph appear one after the other and have to be matched to the other side. In the case of random arrival order, our algorithm computes a matching whose expected weight is at least a 1/e fraction of the weight of an optimal matching in the graph. This is optimal. An important generalization of this problem is given by linear packing programs. The variables appear one after the other, and upon each arrival the value of the variable must be set immediately. In this case, too, we achieve optimal guarantees. Finally, we show how to use these techniques in more complex scenarios such as scheduling problems, and with respect to other objective functions.
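The 1/e guarantee is the same threshold phenomenon as in the classical secretary problem; the following hedged sketch of that simpler rule (not the talk's matching algorithm itself) illustrates why random arrival order helps:

```python
import math, random

def secretary(values):
    """Random-order rule: reject the first n/e values, then accept the
    first value exceeding the best seen so far (else take the last)."""
    n = len(values)
    k = max(1, int(n / math.e))
    threshold = max(values[:k])
    for v in values[k:]:
        if v > threshold:
            return v
    return values[-1]

vals = list(range(1000))
random.shuffle(vals)      # random arrival order
print(secretary(vals))    # picks the maximum with probability about 1/e
```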
To reduce the x-ray dose in computerized tomography (CT), many optimization approaches have been proposed aiming at minimizing the sum of a term that measures lack of consistency with the detected attenuation of x-rays and a regularizing term that measures lack of consistency with some prior knowledge about the object that is being imaged.
One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing term; a particular recent choice is the l1-norm of the shearlet transform.
Proponents of the l1-norm of the shearlet transform for the regularizing term claim that the reconstructions so obtained are better than those produced using TV. We report results, based on simulated CT data collection of the head, that contradict the general validity of this claim.
Our experiments for making such comparisons use the superiorization methodology for both regularizing terms. Superiorization is a procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that according to the primary criterion are as good as those produced by the original algorithm, but are superior to them according to a secondary (regularizing) criterion.
The lecture presents a recent methodology allowing one to execute numerical computations with finite, infinite, and infinitesimal numbers on a new type of computer, the Infinity Computer, patented in the EU, USA, and Russia. The new approach is based on the principle 'The whole is greater than the part' (Euclid's Common Notion 5), which is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols, as particular cases of a unique framework different from that of non-standard analysis. The new methodology evolves ideas of Cantor and Levi-Civita in a more applied way and, among other things, introduces new infinite integers that possess both cardinal and ordinal properties, as usual finite numbers do.

It is emphasized that the philosophical triad of researcher, object of investigation, and tools used to observe the object, which exists in natural sciences such as Physics and Chemistry, exists in Mathematics, too. In natural sciences, the instrument used to observe the object influences the results of observations. The same happens in Mathematics, where numeral systems used to express numbers are among the instruments of observation used by mathematicians. The usage of powerful numeral systems gives the possibility to obtain more precise results in Mathematics, in the same way as the usage of a good microscope gives the possibility to obtain more precise results in Physics.

A numeral system using a new numeral called grossone is described. It allows one to express infinities and infinitesimals easily, offering rich capabilities for describing mathematical objects, mathematical modeling, and practical computations. The concept of the accuracy of numeral systems is introduced, and the accuracy of the new numeral system is compared with that of traditional numeral systems used to work with infinity. The new methodology has been successfully used in a number of applications: Turing machines and lexicographic ordering, cellular automata, percolation and biological processes, numerical differentiation, optimization, and ODEs, fractals, infinite series, set theory, hyperbolic geometry, etc. The Infinity Calculator, using the Infinity Computer technology, is presented during the talk.
We propose a model for scheduling jobs in a parallel machine setting that takes into account the cost of migrations by assuming that the processing time of a job may depend on the specific set of machines among which the job is migrated. For the makespan minimization objective, the model generalizes classical scheduling problems such as \(R||C_{max}\) and \(P|pmtn|C_{max}\), as well as novel scenarios such as semi-partitioned and clustered scheduling. In the case of a k-level hierarchical family of machines, we prove an upper bound on the approximation ratio of the problem equal to \(1 + H_k\), where \(H_k\) is the \(k\)-th harmonic number. When \(k = 2\), an improved upper bound of \(2 + 1/m\) is provided, where \(m\) is the number of machines. The results are achieved via an improved rounding scheme for assignment/packing constraints. An extension that incorporates memory capacity constraints is also discussed.
A temporal graph is a graph in which the edge set can change from step to step. The temporal graph exploration problem TEMPEX is the problem of computing a foremost exploration schedule for a temporal graph, i.e., a temporal walk that starts at a given start node, visits all nodes of the graph, and has the smallest arrival time. We consider only temporal graphs that are connected at each step. For such temporal graphs with \(n\) nodes, we show that it is NP-hard to approximate TEMPEX with ratio \(O(n^{1-\epsilon})\) for any \(\epsilon>0\). We also provide an explicit construction of temporal graphs that require \(\Theta(n^2)\) steps to be explored. We then consider TEMPEX under the assumption that the underlying graph (i.e. the graph that contains all edges that are present in the temporal graph in at least one step) belongs to a specific class of graphs. Among other results, we show that temporal graphs can be explored in \(O(n^{1.5}k^2\log n)\) steps if the underlying graph has treewidth \(k\) and in \(O(n\log^3 n)\) steps if the underlying graph is a \(2\times n\) grid. We also show that sparse temporal graphs with regularly present edges can always be explored in \(O(n)\) steps.
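As a minimal illustration of the underlying notion (our own sketch, assuming the model where one edge can be crossed per step), the following computes foremost, i.e. earliest-arrival, times from a start node in a temporal graph given as a list of per-step edge sets; a foremost exploration schedule, as sought by TEMPEX, must additionally visit all nodes in one walk:

```python
def foremost_times(schedule, source):
    """Earliest arrival times from `source`, where schedule[t] is the
    set of (undirected) edges present at step t."""
    INF = float("inf")
    arrival = {source: 0}
    for t, edges in enumerate(schedule):
        for u, v in edges:
            for a, b in ((u, v), (v, u)):
                # If we can be at `a` by step t, we can cross to `b`.
                if arrival.get(a, INF) <= t and t + 1 < arrival.get(b, INF):
                    arrival[b] = t + 1
    return arrival

# Example: a 3-cycle whose edges appear one per step.
print(foremost_times([{(0, 1)}, {(1, 2)}, {(0, 2)}], 0))
# {0: 0, 1: 1, 2: 2}
```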
Joint work with Michael Hoffmann and Frank Kammer.
Spectrahedra, the feasible regions of semidefinite programs, form a rich class of convex bodies that properly contains that of polyhedra. While the class of polyhedra is closed under linear projections, the class of spectrahedra is not. A containment problem is the task to decide the set-theoretic inclusion of two given sets. We study containment of (projections of) polyhedra and spectrahedra starting from a geometric point of view. Using connections to the theory of (completely) positive maps and real algebraic geometry allows to state hierarchies of semidefinite programs to decide the containment problem. We prove several convergence results.
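For reference, a spectrahedron is the solution set of a linear matrix inequality,

\[
S \;=\; \bigl\{ x \in \mathbb{R}^n \;:\; A_0 + x_1 A_1 + \cdots + x_n A_n \succeq 0 \bigr\},
\]

where \(A_0, \dots, A_n\) are real symmetric matrices and \(\succeq 0\) denotes positive semidefiniteness; taking all \(A_i\) diagonal recovers exactly the polyhedra, and projections of spectrahedra are also known as spectrahedral shadows.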
The famous Braess paradox describes the counter-intuitive phenomenon in which, in certain settings, the increase of resources, like building a new road within a congested network, may in fact lead to larger costs for the players in an equilibrium. In this talk, we consider general nonatomic congestion games and give a characterization of the combinatorial property of strategy spaces for which the Braess paradox does not occur. In short, matroid bases are precisely the required structure. We prove this characterization by two novel sensitivity results for convex separable optimization problems over polymatroid base polyhedra which may be of independent interest.
This is joint work with S. Fujishige, M. Goemans, B. Peis and R. Zenklusen.
Given a universe U of n elements and a weighted collection {S_1, S_2, ..., S_m} of m subsets of U, the universal set cover problem is to a priori map each element u ∈ U to a set f(u) ∈ {S_1, S_2, ..., S_m} containing u, such that any subset X ⊆ U is covered by ⋃_{u∈X} f(u). The aim is to find a mapping such that the cost of ⋃_{u∈X} f(u) is as close as possible to the optimal set cover cost for X. Such problems are also called oblivious or a priori optimization problems.
Unfortunately, for every universal mapping, the cost of ⋃_{u∈X} f(u) can be Ω(√n) times larger than the optimal cost OPT(X) if the set X is chosen adversarially. One can improve on this if X is chosen randomly (sampling elements u independently) and we look at the average performance. In this setting it can be shown that one can find a mapping whose average cost, i.e., E_X[cost(⋃_{u∈X} f(u))], is only O(log(nm)) times worse than E[OPT(X)]. In some settings, however, m can be much larger than n, and then this guarantee is worse than the O(log n)-approximation one would desire for any generalization of set cover.
In the talk we give a different take on this problem. Note that the benchmark E[OPT(X)] used above is clairvoyant-like: it looks at an optimum cover that knows the realization of X. This is similar to the benchmark used, for example, in online algorithms. However, even though the problem has a strong uncertainty flavor, the solution is in fact static, i.e., it is just a fixed mapping between elements and subsets. This is in contrast to a solution for an online problem, where the online algorithm makes decisions step by step as the input arrives.
Therefore, we instead consider an optimum such mapping f_OPT : U → {S_1, S_2, ..., S_m} and compare our solution against E[cost(⋃_{u∈X} f_OPT(u))]. In the talk we will show how to obtain an O(log n)-approximation in this setting, which matches the hardness of the deterministic variant of set cover. This result holds in a more demanding model than the previous one, because we allow X to be sampled from a black-box distribution, allowing arbitrary correlations between elements (in the previous work, elements were sampled independently).
Time allowing, we will also show that one can consider metric facility location in this model and obtain a constant-factor approximation.
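To make the benchmark concrete, here is a small hedged sketch (all names hypothetical) that estimates the expected cost of a given universal mapping under a black-box distribution over X; the benchmark E[cost(⋃_{u∈X} f_OPT(u))] is the minimum of this quantity over all mappings f:

```python
import random

def expected_universal_cost(f, set_cost, sample_X, trials=10_000):
    """Monte Carlo estimate of E[cost of union of f(u) for u in X].
    f: dict element -> index of a set containing it;
    set_cost: dict set index -> cost; sample_X: () -> iterable of elements."""
    total = 0.0
    for _ in range(trials):
        opened = {f[u] for u in sample_X()}  # distinct sets the mapping uses
        total += sum(set_cost[i] for i in opened)
    return total / trials

# Example black-box distribution: each element appears independently w.p. 1/2.
U = range(8)
sample = lambda: [u for u in U if random.random() < 0.5]
f = {u: u // 4 for u in U}   # map elements 0-3 to set 0, elements 4-7 to set 1
print(expected_universal_cost(f, {0: 1.0, 1: 3.0}, sample))
```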
Joint work with Fabrizio Grandoni, Stefano Leonardi, Michał Włodarczyk.
Online algorithms are usually analyzed using the notion of competitive ratio, which compares the solution obtained by the algorithm to the optimal offline solution for the worst possible input sequence. Often this measure turns out to be too pessimistic, and one popular approach, especially for scheduling problems, has been that of "resource augmentation", which was first proposed by Kalyanasundaram and Pruhs. Although resource augmentation has been very successful in dealing with a variety of objective functions, there are problems for which even an (arbitrarily large) constant speedup cannot lead to a constant-competitive algorithm. We propose a "rejection model" which requires no resource augmentation but permits the online algorithm to not serve an ε-fraction of the requests.
The problems considered in this work are in the restricted assignment setting, where each job can be assigned only to a subset of machines. For the load balancing problem, where the objective is to minimize the maximum load on any machine, we give an O(log²(1/ε))-competitive algorithm which rejects at most an ε-fraction of the jobs. For the problem of minimizing the maximum weighted flow-time, we give an O(1/ε⁴)-competitive algorithm which can reject at most an ε-fraction of the jobs by weight. We also extend this result to a more general setting where the weights of a job for measuring its weighted flow-time and its contribution towards the total allowed rejection weight are different. This is useful, for instance, when we consider the objective of minimizing the maximum stretch. We obtain an O(1/ε⁶)-competitive algorithm in this case. All these problems admit strong lower bounds on the competitive ratio for any online algorithm under the speed augmentation model.
Further, for the problem of minimizing the weighted l_p-norm of flow-times, we show a (1/ε)^O(1)-competitive algorithm which rejects at most an ε-fraction of the jobs by weight. This is again in contrast with the resource augmentation model, in which no algorithm with competitive ratio independent of p exists. All our algorithms are immediate-dispatch, though they may not be immediate-reject.
In this talk, we will discuss the massively parallel Sample and Prune procedure for solving problems on large datasets. Sample and Prune is a distributed sampling method used to efficiently discover a small set of representative elements from a large data set. We discuss how Sample and Prune can be used for clustering and submodular optimization. In particular, we show how this procedure can be utilized to simulate a class of greedy sequential algorithms in the distributed setting. We will further discuss how it could possibly be extended to construct efficient algorithms for a wide range of problems in the distributed setting.
A common setting in geometric packing problems is that we are given a set of two-dimensional items, e.g., rectangles, and a rectangular container, and our goal is to pack these items or a subset of them into the container to optimize objective functions like the total profit of the packed items or the necessary height of the container. A typical obstacle in these problem settings is that the input contains different types of items, i.e., items that are wide and thin, items that are high and narrow, and items that are large in both dimensions. In this talk, I will present a method to handle this obstacle. In a nutshell, the key is to prove that there are near-optimal solutions in which the given container can be partitioned into few rectangular boxes such that each box contains only items of one of the mentioned types. This leads to better approximation guarantees for two specific problems: a (1+ε)-approximation algorithm in quasi-polynomial time for the two-dimensional knapsack problem and a (1.4+ε)-approximation algorithm in pseudo-polynomial time for the strip packing problem. Note that the latter bound is strictly smaller than the lower bound of 3/2 that holds for (non-pseudo-)polynomial time algorithms for the problem.
(joint work with Anna Adamaszek and Giorgi Nadiradze)
In stratified random sampling, minimizing the variance of a total estimate leads to the optimal allocation by Neyman (1934) and Tschuprow (1923). Classical algorithms for this problem yield real-valued rather than integer-valued solutions.
We present three integer-valued optimization algorithms which solve the problem of minimizing a separable convex function subject to upper and lower bounds on the variables. We begin by identifying the polymatroid structure of the feasible region and show that the problem can be solved by Greedy-type strategies. Subsequently, we develop a polynomial-time algorithm using a binary search based on the concept of "super-medians".
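A minimal sketch of the Greedy-type strategy under box constraints (our illustration; in Neyman allocation the variance contribution of stratum h behaves like \(N_h^2 S_h^2 / n_h\), and greedily giving each next sample unit to the stratum with the largest marginal decrease exploits exactly the polymatroid structure mentioned above):

```python
import heapq

def greedy_allocation(d, n_total, lower, upper):
    """Minimize sum_h d[h] / n[h] over integer allocations n with
    lower[h] <= n[h] <= upper[h] and sum(n) == n_total (assumed feasible);
    d[h] plays the role of N_h^2 * S_h^2 in Neyman allocation."""
    n = list(lower)
    gain = lambda h: d[h] / n[h] - d[h] / (n[h] + 1)  # marginal decrease
    heap = [(-gain(h), h) for h in range(len(d)) if n[h] < upper[h]]
    heapq.heapify(heap)
    for _ in range(n_total - sum(lower)):
        _, h = heapq.heappop(heap)   # stratum with best marginal gain
        n[h] += 1
        if n[h] < upper[h]:
            heapq.heappush(heap, (-gain(h), h))
    return n

print(greedy_allocation([900.0, 400.0, 100.0], 30, [2, 2, 2], [20, 20, 20]))
# [15, 10, 5] -- proportional to sqrt(d), as Neyman allocation prescribes
```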
As an application, the algorithms are used to solve the German Census 2011 allocation problem, an optimal allocation problem in stratified random sampling respecting box constraints. Numerical results show the practical relevance of this approach.
Joint work with Ulf Friedrich, Ralf Münnich, and Matthias Wagner.
Given a discrete subset \(S\) of \(R^n\), let \(c(S,k)\) be the smallest number \(t\) such that for every finite system of linear inequalities that has exactly \(k\) solutions in \(S\), there exists a subsystem of at most \(t\) inequalities that still has exactly \(k\) solutions in \(S\). This number was introduced for \(S=Z^n\) by Aliev et al. (2014) in order to study a quantitative version of Doignon's integer variant of the classical Helly theorem. The initial estimate \(c(Z^n,k)\in O(k)\) by Aliev et al. was recently improved to a sublinear bound by Chestnut et al., who also determined the asymptotic growth of \(c(Z^2,k)\) to be \(k^{1/3}\).
Via the theory of Helly-numbers of set families we show that \(c(S,k)\) admits a variety of different interpretations, for example, by the maximum number of vertices of a polytope containing \(k\) non-vertex points of \(S\), or by the maximum number of facets of an inclusion-maximal polyhedron containing \(k\) points of \(S\) in its interior. Based on these interpretations, we show that for the integral lattice the asymptotic behavior of \(c(Z^n,k)\) is given by \(\Theta(k^{\frac{n-1}{n+1}})\), and we determine \(c(Z^n,k)\) for small values of \(k\) exactly. For general discrete sets \(S\) we derive an upper bound of \(c(S,k)\in O(k\cdot h(S))\), where \(h(S)\) is the Helly-number of \(S\).
This is joint work with G. Averkov, B. Gonzalez Merino, I. Paschke, and S. Weltge.
We consider the dynamic version of a major discrete tomography task: reconstructing the trajectories of moving particles from a small number of projections. The most relevant case in recent plasma physics applications involves projections from only two directions. It turns out that the computational complexity of these reconstruction tasks depends strongly on the type of prior information that is incorporated in the reconstruction. In this talk we present results that provide a particularly sharp line between tractable (polynomial-time solvable) and intractable (NP-hard) problem instances. This is joint work with Peter Gritzmann (TU München).
The study of the combinatorial diameter of polyhedra is a classical problem in the theory of linear programming. While transportation polytopes are at the core of operations research, it has been open for more than 50 years whether the Hirsch conjecture holds for general transportation polytopes. We follow a line of research leading to its positive resolution.
We consider scheduling problems over scenarios, where the goal is to find a single assignment of the jobs to the machines which performs well over all scenarios in an explicitly given set. Each scenario is a subset of jobs that must be executed in that scenario. The two objectives that we consider are minimizing the maximum makespan over all scenarios and minimizing the sum of the makespans of all scenarios. For both versions, we obtained several approximation algorithms and lower bounds on their approximability. Some (easier) special cases were also considered. In this talk I will present a number of these results.
Based on joint work with Esteban Feuerstein, Alberto Marchetti-Spaccamela, Frans Schalekamp, René Sitters, Leen Stougie and Anke van Zuylen.
An \(O(\log m)\)-Competitive Algorithm for Online Machine Minimization
We consider the online machine minimization problem, in which jobs with hard deadlines arrive online over time at their release dates. The task is to determine a feasible preemptive schedule on a minimum number of machines. Our main result is an \(O(\log m)\)-competitive algorithm, with \(m\) being the number of machines used in an optimal offline solution. This is the first improvement on an intriguing problem in nearly two decades. To date, the best known result is an \(O(\log(p_{max}/p_{min}))\)-competitive algorithm by Phillips et al. (STOC 1997) that depends on the ratio of the maximum and minimum job sizes, \(p_{max}\) and \(p_{min}\). Even for \(m = 2\), no better algorithm was known; in this case, our algorithm is constant-competitive. When applied to laminar or agreeable instances, our algorithm achieves a competitive ratio of \(O(1)\) even independently of \(m\). The following two key components lead to our new result. First, we derive a new lower bound on the optimum value that relates the laxity and the number of jobs with intersecting time windows. Second, we design a new algorithm that is tailored to this lower bound and balances the delay of jobs by taking the number of currently running jobs into account.
Joint work with Lin Chen and Kevin Schewior; presented at SODA 2016.
The Minkowski measure of symmetry \(s(K)\) of a convex body \(K\) is the smallest positive dilatation factor of \(K\) such that the dilatate contains a translate of \(-K\). In this talk we will explain some of its basic properties in detail. Afterwards, we will show how \(s(\cdot)\) can be used to strengthen, smoothen, and join different geometric inequalities, as well as its connections to other concepts such as diametral completeness, Jung's inequality, and the Banach-Mazur distance.
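In formulas, with the definition above,

\[
s(K) \;=\; \min\bigl\{\rho > 0 \;:\; -K + t \subseteq \rho\,K \ \text{for some } t \in \mathbb{R}^n \bigr\},
\]

and it is a classical fact that \(1 \le s(K) \le n\) for every convex body \(K \subseteq \mathbb{R}^n\), with \(s(K) = 1\) exactly for the centrally symmetric bodies and \(s(K) = n\) exactly for the simplices.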
We consider the classical scheduling problem on parallel identical machines to minimize the makespan. There is a long history of studies on this problem, focusing on exact and approximation algorithms, and it is thus natural to consider whether these algorithms are best possible in terms of the running time. Under the Exponential Time Hypothesis (ETH), we achieve the following results in this paper:
1. The scheduling problem on a constant number \(m\) of identical machines, which is denoted as \(Pm||C_{max}\), is known to admit a fully polynomial time approximation scheme (FPTAS) of running time \(O(n) + (1/\epsilon)^{O(m)}\) (indeed, the algorithm works for an even more general problem where machines are unrelated). We prove this algorithm is essentially the best possible in the sense that a \((1/\epsilon)^{O(m^{1-\delta})}+n^{O(1)}\) time FPTAS for any \(\delta>0\) implies that ETH fails.
2. The scheduling problem on an arbitrary number of identical machines, which is denoted as \(P||C_{max}\), is known to admit a polynomial time approximation scheme (PTAS) of running time \(2^{O(1/\epsilon^2\log^3(1/\epsilon))}+n^{O(1)}\). We prove this algorithm is nearly optimal in the sense that a \(2^{O((1/\epsilon)^{1-\delta})}+n^{O(1)}\) time PTAS for any \(\delta>0\) implies that ETH fails, leaving a small room for improvement.
In addition, we also consider exact algorithms for the scheduling problem and prove the following result: The traditional dynamic programming algorithm for \(P||C_{max}\) is known to run in \(2^{O(n)}\) time. We prove this is essentially the best possible in the sense that even if we restrict that there are \(n\) jobs and the processing time of each job is bounded by \(O(n)\), an exact algorithm of running time \(2^{O(n^{1-\delta})}\) for any \(\delta>0\) implies that ETH fails.
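For illustration, here is a schematic version of such a \(2^{O(n)}\) dynamic program (our own sketch, not the paper's formulation): a bitmask DP computing the fewest machines that can schedule a given job set with makespan at most \(C\); combined with a binary search over \(C\), this solves \(P||C_{max}\), with \(3^n\) subset transitions overall, which is \(2^{O(n)}\).

```python
def min_machines(p, C):
    """Fewest machines needed to schedule jobs with processing times p
    so that no machine's load exceeds C (bitmask DP over job subsets)."""
    n = len(p)
    load = [0] * (1 << n)
    for S in range(1, 1 << n):
        j = (S & -S).bit_length() - 1          # lowest-index job in S
        load[S] = load[S ^ (1 << j)] + p[j]
    INF = float("inf")
    dp = [INF] * (1 << n)
    dp[0] = 0
    for S in range(1, 1 << n):
        T = S                                  # enumerate subsets T of S
        while T:
            if load[T] <= C and dp[S ^ T] + 1 < dp[S]:
                dp[S] = dp[S ^ T] + 1          # T is one machine's job set
            T = (T - 1) & S
    return dp[-1]

# A binary search over C (between max(p) and sum(p)) then yields the
# optimal makespan on m machines.
```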
The following zero-sum game is played on a weighted graph: Alice selects a matching M and Bob selects a number k. Then, Alice receives a payoff equal to the ratio of the weight of the k heaviest edges of M to the maximum weight of a matching of size at most k in the graph. If M guarantees a payoff of at least L, then it is called L-robust. In 2002, Hassin and Rubinstein gave an algorithm that returns a √(1/2)-robust matching, which is best possible.
In this talk, we will see that Alice can improve on the guarantee of √(1/2) when she is allowed to play a randomized strategy. We devise a simple algorithm that returns a 1/ln(4)-robust randomized matching, based on the following observation: if all edge weights are integer powers of 2, then a lexicographically optimal matching is 1-robust. We prove this property not only for matchings but for a very general class of independence systems that includes matroid intersection, b-matchings, and strong 2-exchange systems. We also show that our robustness results for randomized matchings translate into an asymptotic robustness guarantee for deterministic matchings: when restricting Bob's choice to cardinalities larger than a given constant, Alice can find a single deterministic matching with approximately the same guaranteed payoff as in the randomized setting.
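A hedged sketch of the randomized rounding step behind this observation (our paraphrase: with a random offset, round every weight down to a shifted power of 2; the strong weight separation below stands in for computing a lexicographically optimal matching over the resulting weight classes):

```python
import math, random
import networkx as nx

def randomized_robust_matching(G):
    """Round each weight down to 2^(k + theta), theta uniform in [0, 1),
    then return a matching optimizing the weight classes lexicographically
    (encoded here by strongly separated surrogate weights)."""
    theta = random.random()
    cls = {(u, v): math.floor(math.log2(d["weight"]) - theta)
           for u, v, d in G.edges(data=True)}
    rank = {c: i for i, c in enumerate(sorted(set(cls.values())))}
    m = G.number_of_edges()
    H = nx.Graph()
    for (u, v), c in cls.items():
        # (m+1)^rank separates classes: one extra edge of a higher class
        # outweighs all edges of lower classes combined.
        H.add_edge(u, v, weight=(m + 1) ** (rank[c] + 1))
    return nx.max_weight_matching(H, weight="weight")
```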
This is joint work with Martin Skutella and José A. Soto.