DIMAP Seminar


6 Dec 2016 Andrzej Grzesik
Jagiellonian University
TBA
29 Nov 2016 Mirna Džamonja
University of East Anglia
TBA
22 Nov 2016 Rafael Barbosa
University of Warwick
TBA
8 Nov 2016 Marcin Bieńkowski
University of Wroclaw
Online algorithms for multi-level aggregation
1 Nov 2016 Andrew Treglown
University of Birmingham
TBA
25 Oct 2016 Kristina Vušković
University of Leeds
TBA
18 Oct 2016 Christian Konrad
University of Warwick
TBA
4 Oct 2016 James Maynard
University of Oxford
Primes with missing digits
28 Jun 2016 Stefan Kiefer
University of Oxford
Distinguishing Hidden Markov Chains
21 Jun 2016 Yufei Zhao
University of Oxford
Upper tails for arithmetic progressions in a random set
14 Jun 2016 Dmitry Chistikov
MPI-SWS
Navigating one-counter automata: Directions in the mountains
31 May 2016 Andrew Thomason
University of Cambridge
List colouring of hypergraphs
24 May 2016 Allan Lo
University of Birmingham
The decomposition threshold of a given graph
17 May 2016 Michal Przykucki
University of Oxford
The time of graph bootstrap percolation
10 May 2016 Viktor Zamaraev
University of Warwick
On forbidden induced subgraphs for unit disk graphs
3 May 2016 Alexandre Stauffer
University of Bath
Multi-particle diffusion limited aggregation
26 Apr 2016 Filip Mazowiecki
University of Warwick
Copyless cost register automata
8 Mar 2016 Patrick Totzke
University of Warwick
The Reachability Problem for Two-Dimensional Vector Addition Systems with States
1 Mar 2016 Jakub Gajarsky
Masaryk University, Brno
A New Perspective on FO Model Checking of Dense Graph Classes
23 Feb 2016 Richard Mycroft
University of Birmingham
Hamilton cycles in dense hypergraphs
16 Feb 2016 Ping Hu
University of Warwick
Minimum number of edges in odd cycles
9 Feb 2016 Bartosz Walczak
Jagiellonian University, Krakow
Partial representation extension problems for classes of graphs with geometric representations
2 Feb 2016 Pierre-André Maugis
UCL
Non-Parametric Network Inference
26 Jan 2016 Nikos Tzevelekos
Queen Mary, University of London
History-Register Automata
19 Jan 2016 Simon Griffiths
University of Oxford
Moderate deviations of random graph subgraph counts
8 Dec 2015 Huy Le Nguyen
Toyota Technological Institute at Chicago
Communication Lower Bounds for Statistical Estimation Problems
1 Dec 2015 Tamas Hubai
University of Warwick
Graph polynomials for locally convergent graph sequences
24 Nov 2015 Jozef Siran
Open University
Regular and orientably regular maps
17 Nov 2015 Hossein Jowhari
University of Warwick
An introduction to Lp samplers and their applications
10 Nov 2015 Golnaz Badkobeh
University of Warwick
Closed Words: Algorithms and Combinatorics
3 Nov 2015 He Sun
University of Bristol
Heat kernels in graphs: A journey from random walks to geometry, and back
20 Oct 2015 Peter Nelson
University of Waterloo
Exponentially dense matroids
13 Oct 2015 Lutz Warnke
University of Cambridge
The phase transition in bounded-size Achlioptas processes
6 Oct 2015 Guillem Perarnau
University of Birmingham
Connectivity in bridge-addable graph classes: the McDiarmid-Steger-Welsh conjecture
21 Sept 2015 Krzysztof Onak
IBM T.J. Watson Research Center
Parallel Algorithms for Geometric Graph Problems
30 Jun 2015 Henry Cohn
Microsoft Research
The role of symmetry in high-dimensional sphere packing
23 Jun 2015 László Miklós Lovász
MIT
Left and right convergence of bounded degree graphs
15 Jun 2015 Rob van Stee
University of Leicester
The optimal absolute ratio for online bin packing
9 Jun 2015 John Talbot
University College London
Mantel's theorem extensions and analogues
2 Jun 2015 Richard Cole
New York University
Approximating the Nash Social Welfare with Indivisible Items
26 May 2015 Sylvain Schmitz
ENS Cachan & CNRS & INRIA
Leverhulme Lecture: Complexity Hierarchies Beyond Elementary
12 May 2015 Daniela Kühn
University of Birmingham
Optimal path and cycle decompositions of random graphs
8 May 2015 Magnus Wahlström
Royal Holloway
FPT algorithms via LP-relaxations
28 Apr 2015 Daniel Lokshtanov
University of Bergen
FO Model Checking on Posets of Bounded Width
21 Apr 2015 Will Perkins
University of Birmingham
Spectral Algorithms for Random Hypergraph Partitioning
10 Mar 2015 Stanislav Živný
University of Oxford
Hybrid tractability of binary CSPs
3 Mar 2015 Rahul Savani
University of Liverpool
The Complexity of the Simplex Method
24 Feb 2015 Alexandre Stauffer
University of Bath
Random walks on dynamic graphs given by dynamical percolation
17 Feb 2015 Deryk Osthus
University of Birmingham
Decompositions of large graphs into small subgraphs
10 Feb 2015 Justin Ward
University of Warwick
A Distributed Greedy Algorithm for Submodular Maximization in Large Datasets
27 Jan 2015 Alex Scott
University of Oxford
Colouring graphs without odd holes
20 Jan 2015 Paul Goldberg
University of Oxford
Learning game-theoretic equilibria via query protocols
13 Jan 2015 Jaehoon Kim
University of Birmingham
Regular subgraphs of hypergraphs
6 Jan 2015 Ping Hu
University of Warwick
Rainbow triangles in three-colored graphs
2 Dec 2014 Tomasz Łuczak
Adam Mickiewicz University
On the chromatic number of generalized shift graph
25 Nov 2014 Jozef Skokan
LSE
The Ramsey number of the clique and the hypercube
11 Nov 2014 Julia Wolf
University of Bristol
Ramsey multiplicity of patterns in abelian groups
10 Nov 2014 Elliot Anshelevich
Rensselaer Polytechnic Institute
Stable Matching, Friendship, and Altruism
4 Nov 2014 Thomas Bloom
University of Bristol
Structure in large spectra
28 Oct 2014 Endre Csóka
University of Warwick
Efficient Teamwork
21 Oct 2014 Swiatoslaw R. Gal
Wroclaw University
Flag triangulations of spheres: f-, h-, and γ- numbers
14 Oct 2014 Sune Kristian Jakobsen
Queen Mary, University of London
Information Theoretical Cryptogenography
7 Oct 2014 Alina Ene
University of Warwick
Approximation Algorithms for Multiway Partitioning Problems and a Simplex Coloring Conjecture
30 Sept 2014 Barnaby Martin
Middlesex University London
A tetrachotomy for positive equality-free logic
3 July 2014 Tony Wirth
University of Melbourne
Rooted Triplet Inconsistency: Hardness and Algorithms
19 Jun 2014 Marcin Pilipczuk
University of Bergen
Subexponential Parameterized Complexity of Completion Problems
18 Jun 2014 Jun Zhang
NTU Singapore
Differentially Private Data Release
10 Jun 2014 László Miklós Lovász
MIT
The height of the tower in Szemerédi's regularity lemma
3 Jun 2014 Bartosz Walczak
Jagiellonian University in Kraków
Minors and dimension
27 May 2014 Agelos Georgakopoulos
University of Warwick
Some problems around random walks
20 May 2014 Jan Volec
University of Warwick
Subcubic triangle-free graphs have fractional chromatic number at most 14/5
13 May 2014 Einar Steingrimsson
University of Strathclyde
The topology and Möbius function of the permutation pattern poset
29 Apr 2014 Petra Berenbrink
Simon Fraser University
Diffusion Load Balancing
14 Mar 2014 Thomas Sauerwald
University of Cambridge
Balls into Bins via Local Search
11 Mar 2014 Yun-Bin Zhao
University of Birmingham
RSP-based analysis for the efficiency of l1-minimization for solving l0-problems
04 Mar 2014 Guglielmo Lulli
University of Milano-Bicocca
A new optimization framework for dynamic resource allocation problems
25 Feb 2014 Nicolas Gillis
Université de Mons
Fast projection methods for robust separable nonnegative matrix factorization
18 Feb 2014 Paul Spirakis
University of Liverpool
Temporal Networks : Optimization and Connectivity
11 Feb 2014 László Végh
London School of Economics
Strongly polynomial algorithm for generalized flow maximization
31 Jan 2014 Ilias Diakonikolas
University of Edinburgh
Learning Sums of Independent Integer Random Variables
21 Jan 2014 Viresh Patel
Queen Mary University of London
A domination algorithm for {0,1}-instances of the travelling salesman problem
4 Dec 2013 Jan Bulánek
Charles University/Academy of Sciences
Tight Lower Bounds for the Online Labeling Problem
3 Dec 2013 Lutz Warnke
University of Cambridge
The Evolution of Subcritical Achlioptas Processes
26 Nov 2013 Rob van Stee
University of Leicester
A Unified Approach to Truthful Scheduling on Related Machines
26 Nov 2013 Milan Vojnovic
Microsoft Research Cambridge
Balanced Graph Partitioning for Massive Scale Computations
20 Nov 2013 Alex Grigoriev
Maastricht University
Bidimensionality on Geometric Intersection Graphs
19 Nov 2013 Sergey Kitaev
University of Strathclyde
Equidistributions on Planar Maps via Involutions on Description Trees
15 Nov 2013 Sanjeeb Dash
IBM T. J. Watson Research Center, NY
Multi-Branch Split Cutting Planes for Mixed-Integer Programs
12 Nov 2013 Ben Barber
University of Birmingham
Partition Regularity in the Rationals
5 Nov 2013 Zhan Pang
Lancaster University
Dynamic Financial Hedging Strategies for a Storable Commodity with Demand Uncertainty
30 Oct 2013 Piotr Indyk
MIT
Faster Algorithms for the Sparse Fourier Transform
22 Oct 2013 Elias Koutsoupias
University of Oxford
Near-Optimal Multi-Unit Auctions with Ordered Bidders
15 Oct 2013 Robert Calderbank
Duke University
Information Theory and Compressed Sensing
10 Oct 2013 Alina Ene
University of Warwick / Princeton
Routing in Directed Graphs with Symmetric Demands
8 Oct 2013 Stanislav Böhm
VSB - Technical University of Ostrava
NL-completeness of Equivalence for Deterministic One-Counter Automata
1 Oct 2013 Sang-il Oum
KAIST
Vertex-Minors of Graphs
25 July 2013 Srikanta Tirthapura
Iowa State University
On Optimality of Clustering by Space Filling Curves
25 Jun 2013 Juan-Jose Salazar-Gonzalez
University of La Laguna
Optimizing the Management of Crews and Aircrafts in Canary Islands
20 Jun 2013 Graham Cormode
University of Warwick
Streaming Verification of Outsourced Computation
18 Jun 2013 Endre Boros
Rutgers University
A family of effective payoffs in stochastic games with perfect information
11 Jun 2013 John Fearnley
University of Liverpool
Reachability in Two-Clock Timed Automata is PSPACE-complete
4 Jun 2013 Martin Anthony
LSE
Quadratization of pseudo-Boolean functions
28 May 2013 Michel Habib
University Paris Diderot
New trends for graph searches
21 May 2013 Igor Potapov
University of Liverpool
Reachability Problems for Words, Matrices and Maps
14 May 2013 Anna Huber
Durham University
Skew Bisubmodularity and Valued CSPs
13 May 2013 Lila Fontes
University of Toronto
The Tradeoff Between Privacy and Communication
7 May 2013 Andrei Krokhin
Durham University
Robust algorithms for constraint satisfaction problems
30 Apr 2013 Daniel Paulusma
Durham University
Coloring (H1,H2)-free graphs
23 Apr 2013 Alain Hertz
Polytechnique Montreal
On the maximum difference between several graph invariants
19 Mar 2013 Lehilton Chaves
University of Warwick
Squared Metric Facility Location Problem and the Upper Bound Factor-Revealing Programs
18 Mar 2013 Richard Cole
New York University
Truthful Mechanisms for Approximating Proportionally Fair Allocations
12 Mar 2013 Tri-Dung Nguyen
University of Southampton
Optimal coalition structure and stable payoff distribution in large games
05 Mar 2013 Roman Glebov
University of Warwick
On bounded degree spanning trees in the random graph
26 Feb 2013 Andrew Treglown
Queen Mary, University of London
More on perfect matchings in uniform hypergraphs
19 Feb 2013 David Conlon
University of Oxford
Ramsey multiplicity
12 Feb 2013 Roman Belavkin
Middlesex University London
On Optimality of Deterministic and Non-Deterministic Transformations
6 Feb 2013 Liron Yedidsion
Technion - Israel Institute of Technology
Approximation Algorithm for the Resource Dependent Assignment Problem
29 Jan 2013 Thomas Dueholm Hansen
Aarhus University
Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor
22 Jan 2013 Steve Alpern
University of Warwick
A new method of searching a network
15 Jan 2013 Claire Mathieu
École Normale Supérieure and Brown University
Algorithms for optimization over noisy data
8 Jan 2013 Victor Zamaraev
HSE
On the factorial layer of hereditary classes of graphs
27 Nov 2012 Peter Keevash
Queen Mary, University of London
Turán Numbers of Bipartite Graphs Plus an Odd Cycle
20 Nov 2012 Wolfram Wiesemann
Imperial College London
Distributionally Robust Convex Optimisation
13 Nov 2012 Justin Ward
University of Toronto
A Tight, Combinatorial Algorithm for Submodular Matroid Maximization
30 Oct 2012 Timothy Griffin
University of Cambridge
Local Optimality in Algebraic Path Problems
23 Oct 2012 Alexander Tiskin
University of Warwick
Semi-local String Comparison
16 Oct 2012 Marcin Mucha
Warsaw University
Lyndon Words and Short Superstrings
9 Oct 2012 Stanislav Živný
University of Warwick
The Power of Linear Programming for Valued CSPs
4 Jul 2012 Piotr Sankowski
University of Warsaw
Algorithmic Applications of Baur-Strassen's Theorem: Shortest Cycles, Diameter and Matchings
29 Jun 2012 Andrew McGregor
University of Massachusetts, Amherst
Analyzing Graphs via Random Linear Projections
19 Jun 2012 Fabrizio Grandoni
Dalle Molle Institute for Artificial Intelligence
Pricing on Paths: A PTAS for the Highway Problem
06 Jun 2012 Etienne de Klerk
Tilburg University
Improved Lower Bounds on Crossing Numbers of Graphs Through Optimization
22 May 2012 Olof Sisask
Queen Mary, University of London
Arithmetic Progressions in Sumsets via (Discrete) Probability, Geometry and Analysis
18 May 2012 Benny Sudakov
UCLA
Induced Matchings, Arithmetic Progressions and Communication
15 May 2012 Igor Razgon
University of Leicester
Treewidth Reduction Theorem and Algorithmic Problems on Graphs
8 May 2012 Ashley Montanaro
University of Cambridge
Exact Quantum Query Algorithms
1 May 2012 Sergey Kitaev
University of Strathclyde
Representing Graphs by Words
24 Apr 2012 Catherine Greenhill
University of New South Wales
Making Markov Chains Less Lazy
16 Apr 2012 Tim Nonner
IBM Zurich Research Laboratory
Polynomial-Time Approximation Schemes for Shortest Path with Alternatives
23 Mar 2012 Andrew Treglown
Charles University
Minimum Degree Thresholds for Perfect Matchings in Hypergraphs
21 Mar 2012 Sebastian Stiller
TU Berlin
Robust Optimization over Integers
20 Mar 2012 Leslie Goldberg
University of Liverpool
The Complexity of Computing the Sign of the Tutte Polynomial
13 Mar 2012 Peter Butkovic
University of Birmingham
Combinatorics of Tropical Linear Algebra
6 Mar 2012 Christian Sohler
Technical University Dortmund
Every Property of Hyperfinite Graphs is Testable
5 Mar 2012 Justin Ward
University of Toronto
Improved Approximations for Monotone Submodular k-Set Packing and General k-Exchange Systems
28 Feb 2012 Jonathan Thompson
Cardiff University
Hybridising Heuristic and Exact Methods to Solve Scheduling Problems
14 Feb 2012 Martin Hoefer
RWTH Aachen University
Local Matching Dynamics in Social Networks
6 Feb 2012 Ola Svensson
EPFL
Approximating Graphic TSP by Matchings
31 Jan 2012 Eranda Dragoti-Çela
Graz University of Technology
The Quadratic Assignment Problem: on the Borderline Between Hard and Easy Special Cases
25 Jan 2012 Giacomo Zambelli
London School of Economics and Political Science
Extended Formulations in Combinatorial Optimization
24 Jan 2012 Vitaly Strusevich
University of Greenwich
The "Power of ..." Type Results in Parallel Machine Scheduling
10 Jan 2012 Daniel Kuhn
Imperial College London
Robust Markov Decision Processes
6 Dec 2011 Wojciech Samotij
University of Cambridge
Independent Sets in Hypergraphs
5 Dec 2011 J. Ian Munro
University of Waterloo
Creating a Partial Order and Finishing the Sort
29 Nov 2011 Guilhem Semerjian
École Normale Supérieure
Statistical Mechanics Approach to the Problem of Counting Large Subgraphs in Random Graphs
22 Nov 2011 Robert Brignall
The Open University
Applications and Studies in Modular Decomposition
15 Nov 2011 Nikolaos Fountoulakis
University of Birmingham
Random Graphs on Spaces of Negative Curvature
8 Nov 2011 Boris Bukh
University of Cambridge
Upper Bound for Centerlines
4 Nov 2011 Daniel Král'
Charles University in Prague
Algorithms for Testing FOL Properties
1 Nov 2011 Raphaël Clifford
University of Bristol
Lower Bounds for Online Integer Multiplication and Convolution in the Cell-probe Model
14 Oct 2011 Bernhard von Stengel
London School of Economics
Nash Codes for Noisy Channels
4 Oct 2011 Andreas Brandstädt
Universität Rostock
On Clique Separator Decomposition of Some Hole-Free Graph Classes
30 Aug 2011 Arie M.C.A. Koster
RWTH Aachen University
Network Design under Demand Uncertainties
25 Jul 2011 Laura Galli
Università di Bologna
Cutting-Planes for the Max-Cut Problem
12 Jul 2011 Leslie Valiant
Harvard University
Holographic Algorithms
29 Jun 2011 Maxim Sviridenko
IBM T.J. Watson Research
New and Old Algorithms for Matroid and Submodular Optimization
28 Jun 2011 Konstantinos Panagiotou
MPI
Ultra-Fast Rumor Spreading in Models of Real-World Networks
21 Jun 2011 Vitaliy Koshelev
Moscow Institute of Physics and Technology
On the Erdős-Szekeres Problem and its Modifications
20 Jun 2011 Paweł Gawrychowski
University of Wrocław
Optimal (Fully) LZW-compressed Pattern Matching
14 Jun 2011 Igor Razgon
University of Leicester
Understanding the Kernelizability of Multiway Cut
7 Jun 2011 Jan van den Heuvel
London School of Economics
Fractional Colouring and Pre-colouring Extension of Graphs
31 May 2011 Viresh Patel
Durham University
Partitioning Posets
24 May 2011 Martin Kochol
Slovak Academy of Sciences
Solution to an Edge-coloring Conjecture of Grünbaum
17 May 2011 Lutz Warnke
University of Oxford
Achlioptas Process Phase Transitions are Continuous
3 May 2011 David Saad
Aston University
Dynamics of Boolean Networks - An Exact Solution
27 Apr 2011 David S. Johnson
AT&T
The Traveling Salesman Problem: Theory and Practice in the Unit Square
26 Apr 2011 Ammar Oulamara
INPL
No-wait Flow Shop with Batching Machines
15 Mar 2011 Michel Deza
École Normale Supérieure, Paris
Space Fullerenes
8 Mar 2011 Michel Deza
École Normale Supérieure, Paris
Elementary Polycycles and Applications
23 Feb 2011 David Ellis
University of Cambridge
Triangle-intersecting Families of Graphs
15 Feb 2011 Fred Holroyd
The Open University
Overlap Colourings of Graphs: A Generalization of Multicolourings
11 Feb 2011 Amr Elmasry
University of Copenhagen
Number Systems and Data Structures
8 Feb 2011 Dave Cohen
Royal Holloway
The Complexity of the Constraint Satisfaction Problem: Beyond Structure and Language
18 Jan 2011 Xiaotie Deng
University of Liverpool
Auction and Equilibrium in Sponsored Search Market Design
11 Jan 2011 Robert Krauthgamer
Weizmann Institute
Polylogarithmic Approximation for Edit Distance and the Asymmetric Query Complexity
7 Dec 2010 Anders Yeo
Royal Holloway
All Ternary Permutation Constraint Satisfaction Problems Parameterized Above Average Have Kernels with a Quadratic Number of Variables
1 Dec 2010 Tim Pigden
Optrak
Defining Real-world Vehicle Routing Problems: An Object-based Approach
30 Nov 2010 Simone Severini
University College London
A Role for the Lovász Theta Function in Quantum Mechanics
23 Nov 2010 Bill Jackson
Queen Mary
Boundedness, Rigidity and Global Rigidity of Direction-Length Frameworks
16 Nov 2010 Nik Ruskuc
University of St Andrews
Grid Pattern Classes
9 Nov 2010 Gregory Sorkin
London School of Economics
Unsatisfiability Below the Threshold(s)
4 Nov 2010 Andrew Treglown
University of Birmingham
Matchings in 3-uniform Hypergraphs
2 Nov 2010 Artem Pyatkin
Durham University
On Incidentor Colorings of Multigraphs
26 Oct 2010 Michele Zito
University of Liverpool
The Empire Colouring Problem: Old and New Results
19 Oct 2010 Ilia Krasikov
Brunel University
Reconstruction Problems and Polynomials
12 Oct 2010 Vadim Lozin
University of Warwick
A Decidability Result for the Dominating Set Problem
23 Sep 2010 Marc Demange
ESSEC Business School
Hardness and Approximation of Minimum Maximal Matching in k-regular graphs
12 Aug 2010 Sourav Chakraborty
CWI
Query Complexity Lower Bounds for Reconstruction of Codes
10 Aug 2010 Alexander Skopalik
RWTH Aachen
Altruism in Atomic Congestion Games
29 Jun 2010 Wieslaw Kubiak
Memorial University
Proportional Optimization and Fairness: Applications
22 Jun 2010 Cole Smith
University of Florida
A Decomposition Approach for Insuring Critical Paths
21 Jun 2010 Prudence Wong
University of Liverpool
Energy Efficient Job Scheduling with Speed Scaling and Sleep Management
15 Jun 2010 Viresh Patel
Durham University
Edge Expansion in Graphs on Surfaces
8 Jun 2010 Guy Kortsarz
Rutgers University
A Survey of Connectivity Approximation via a Survey of the Techniques
2 Jun 2010 Troels Bjerre Sørensen
University of Warwick
The Computational Complexity of Trembling Hand Perfection and Other Equilibrium Refinements
1 Jun 2010 John Fearnley
University of Warwick
Exponential Lower Bounds For Policy Iteration
25 May 2010 Petr Golovach
Durham University
Paths of Bounded Length and their Cuts: Parameterized Complexity and Algorithms
18 May 2010 Paul Sant
University of Bedfordshire
Colouring Pairs of Binary Trees and the Four Colour Problem - Results and Achievements
11 May 2010 David Conlon
University of Cambridge
An Approximate Version of Sidorenko's Conjecture
4 May 2010 Juergen Branke
University of Warwick
Hybridizing Evolutionary Algorithms and Parametric Quadratic Programming to Solve Multi-Objective Portfolio Optimization Problems with Cardinality Constraints
26 Apr 2010 Shiva Kasiviswanathan
Los Alamos National Laboratory
A Rigorous Approach to Statistical Database Privacy
22 Apr 2010 F. Meyer auf der Heide
University of Paderborn
Local Algorithms for Robotic Formation Problems
16 Mar 2010 Stephan Kreutzer
University of Oxford
Algorithmic Meta-Theorems: Upper and Lower Bounds for the Parameterized Complexity of Problems on Graphs
10 Mar 2010 Martin Zimmermann
RWTH Aachen
Time-optimal Strategies for Infinite Games
9 Mar 2010 Marcin Kamiński
ULB
Induced Minors and Contractions - An Algorithmic View
3 Mar 2010 Oliver Riordan
University of Oxford
The Generalized Triangle-triangle Transformation in Percolation
2 Mar 2010 Rico Zenklusen
EPF Lausanne
A Flow Model Based on Linking Systems with Applications in Network Coding
23 Feb 2010 Robert Krauthgamer
Weizmann Institute
A Nonlinear Approach to Dimension Reduction
16 Feb 2010 Heiko Röglin
Maastricht University
k-Means has Polynomial Smoothed Complexity
10 Feb 2010 Daniel Král'
Charles University, Prague
Total Fractional Colorings of Graphs with Large Girth
9 Feb 2010 Deryk Osthus
University of Birmingham
Hamilton Decompositions of Graphs and Digraphs
2 Feb 2010 Mikhail Moshkov
KAUST
Time Complexity of Decision Trees
27 Jan 2010 Stefanie Gerke
Royal Holloway
Random Graph Processes
26 Jan 2010 Peter Keevash
Queen Mary
Cycles in Directed Graphs
19 Jan 2010 Andrei Toom
University of Pernambuco
Solvable and Unsolvable in Cellular Automata
12 Jan 2010 Colin Cooper
King's College London
Using Neighbourhood Exploration to Speed up Random Walks
2 Dec 2009 Tomasz Radzik
King's College London
Robustness of the Rotor-router Mechanism for Graph Exploration
1 Dec 2009 Vadim Zverovich
UWE Bristol
Discrepancy and Signed Domination in Graphs and Hypergraphs
25 Nov 2009 Frits Spieksma
KU Leuven
An Overview of Multi-index Assignment Problems
24 Nov 2009 Troels Bjerre Sørensen
LMU Munich
Approximability and Parameterized Complexity of Minmax Values
18 Nov 2009 Rajiv Raman
University of Warwick
Profit-maximizing Pricing: The Highway and Tollbooth Problem
17 Nov 2009 Patrick Briest
University of Paderborn
Pricing Lotteries
12 Nov 2009 Altannar Chinchuluun
Imperial College London
Some Multiobjective Optimization Problems
11 Nov 2009 Codruţ Grosu
University of Bucharest
The Extremal Function for Partial Bipartite Tilings
10 Nov 2009 Gregory Sorkin
IBM Research
The Power of Choice in a Generalized Pólya Urn Model
10 Nov 2009 Colin McDiarmid
University of Oxford
Random Graphs with Few Disjoint Cycles
10 Nov 2009 Harald Räcke
University of Warwick
Oblivious Routing in the L_p-norm
4 Nov 2009 Charilaos Efthymiou
University of Edinburgh
Correlation Decay and Applications to Counting Colourings
3 Nov 2009 Robert Brignall
University of Bristol
Antichains and the Structure of Permutation Classes
27 Oct 2009 Oded Lachish
University of Warwick
Wiretapping a Hidden Network
13 Oct 2009 Peter Cameron
Queen Mary
Synchronization
6 Oct 2009 Bernard Ries
University of Warwick
On Graphs that Satisfy Local Pooling
29 Sep 2009 Matthias Westermann
University of Bonn
The Power of Online Reordering
24 Jun 2009 Alex Scott
University of Oxford
Triangles in Random Graphs
23 Jun 2009 Imre Leader
University of Cambridge
Positive Projections
16 Jun 2009 Dieter Rautenbach
TU Ilmenau
Cycles, Paths, Connectivity and Diameter in Distance Graphs
9 Jun 2009 Amin Coja-Oghlan
University of Edinburgh
Discrepancy and eigenvalues
3 Jun 2009 Piotr Krysta
University of Liverpool
Social Context Games
26 May 2009 Darek Kowalski
University of Liverpool
30 Years with Consensus: from Feasibility to Scalability
19 May 2009 Daniel Paulusma
Durham University
Disconnecting a Graph by a Disconnected Cut
13 May 2009 Mary Cryan
University of Edinburgh
Algorithms for Counting/Sampling Cell-Bounded Contingency Tables
7 May 2009 Diana Piguet
Charles University Prague
4-Colour Ramsey Number of Paths
6 May 2009 Julia Böttcher
TU München
Sparse Random Graphs are Fault Tolerant for Large Bipartite Graphs
6 May 2009 Jan Hladký
Charles University Prague
Counting Flags in Triangle Free Digraphs
5 May 2009 Mathias Schacht
Humboldt-Universität Berlin
Regularity Lemmas for Graphs
21 Apr 2009 Russell Martin
University of Liverpool
Time and Space Efficient Anonymous Graph Traversal
6 Apr 2009 Noga Alon
Tel Aviv University
Eliminating Cycles in the Torus
17 Mar 2009 Berthold Vöcking
RWTH Aachen
Oblivious Interference Scheduling
16 Mar 2009 Yoshiaki Oda
Keio University
Polynomial Time Solvable Cases of the Vehicle Routing Problem
11 Mar 2009 Dolores Romero Morales
University of Oxford
Bounding Revenue Deficiency in Multiple Winner Procurement Auctions
10 Mar 2009 Gunnar W. Klau
CWI Amsterdam
Prize-Collecting Steiner Trees and Disease
3 Mar 2009 Kousha Etessami
University of Edinburgh
Adding Recursion to Markov Chains, Markov Decision Processes, and Stochastic Games
2 Mar 2009 Ayşegül Altin
TOBB University
The Robust Network Loading Problem under Polyhedral Demand Uncertainty: Formulation, Polyhedral Analysis, and Computations
24 Feb 2009 Rudolf Müller
University Maastricht
Optimal Mechanisms for Scheduling
10 Feb 2009 Marc Wennink
BT Innovate
Distributed Optimisation in Network Control
3 Feb 2009 Christian Elsholtz
Royal Holloway
Multidimensional Problems in Additive Combinatorics
27 Jan 2009 Alexander Tiskin
University of Warwick
Fast Distance Multiplication of Unit-Monge Matrices
20 Jan 2009 Vadim Lozin
University of Warwick
Boundary Properties of Graphs and the Hamiltonian Cycle Problem
13 Jan 2009 Peter Allen
University of Warwick
Minimum Degree Conditions for Large Subgraphs
2 Dec 2008 Hajo Broersma
University of Durham
Degree Sequence Conditions for Graph Properties
25 Nov 2008 Bas Lemmens
University of Warwick
Min-Max Functions
18 Nov 2008 Kristina Vušković
University of Leeds
Even-hole-free Graphs and the Decomposition Method
11 Nov 2008 Piotr Krysta
University of Liverpool
Approximability of Combinatorial Exchange Problems
4 Nov 2008 Stefanie Gerke
Royal Holloway
Sensor Networks and Random Intersection Graphs
28 Oct 2008 Yves Crama
University of Liège
Throughput Optimization in Two-Machine Flowshops with Flexible Operations
21 Oct 2008 Stefan Szeider
Durham University
Tricky Problems for Graphs of Bounded Treewidth
15 Oct 2008 Jan van den Heuvel
London School of Economics
The External Network Problem and the Source Location Problem
7 Oct 2008 Andrew Thomason
University of Cambridge
Extremal Graph Theory with Colours
30 Sep 2008 Sinan Gürel
Warwick Business School
Machine-Job Assignment Problems with Separable Convex Costs and Match-up Scheduling with Controllable Processing Times
22 Sep 2008 Uri Zwick
Tel Aviv University
Discounted Deterministic Markov Decision Processes
24 Jun 2008 Haiko Müller
University of Leeds
On a Disparity Between Relative Cliquewidth and Relative NLC-width
17 Jun 2008 Tobias Müller
Universiteit Eindhoven
Colouring Random Geometric Graphs
10 Jun 2008 Jakub Mareček
University of Nottingham
A Clique-Based Integer Programming Formulation of Graph Colouring and Applications
4 Jun 2008 Richard Cole
New York University
Fast-Converging Tatonnement Algorithms for One-Time and Ongoing Market Problems
29 May 2008 Eli Upfal
Brown University
The Multi-Armed Bandit Meets the Web Surfer
20 May 2008 Peter Key
Microsoft Research
Optimizing Communication Networks: Incentives, Welfare Maximization and Multipath Transfers
13 May 2008 Andreas Bley
Zuse Institute Berlin
Unsplittable Shortest Path Routing: Hardness and Approximation Algorithms
6 May 2008 Sergey Nazarenko
University of Warwick
Diophantine Problems Arising in the Theory of Nonlinear Waves
29 Apr 2008 Ayalvadi Ganesh
University of Bristol
Controlling Epidemic Spread on Networks
22 Apr 2008 Bernhard von Stengel
London School of Economics
Strategic Characterization of the Index of an Equilibrium
11 Mar 2008 Paul Goldberg
University of Liverpool
On Recent Progress on Computing Approximate Nash Equilibria
7 Mar 2008 Matthias Englert
RWTH Aachen
Online Minimum Makespan Scheduling with Reordering
4 Mar 2008 Nicole Immorlica
CWI
Balloon Popping with Applications to Ascending Auctions
26 Feb 2008 Martin Dyer
University of Leeds
Randomly Colouring a Random Graph
12 Feb 2008 Vassili Kolokoltsov
University of Warwick
Game theoretic Approach to Hedging of Rainbow Options
5 Feb 2008 Wilfrid Kendall
University of Warwick
Short-length Routes in Low-cost Networks via Poisson Line Patterns
29 Jan 2008 Christian Sohler
University of Paderborn
Clustering for Metric and Non-Metric Distance Measures
22 Jan 2008 Alexander Tiskin
University of Warwick
The Travelling Salesman Problem: Are there any Open Problems left? Part II.
15 Jan 2008 Vladimir Deineko
University of Warwick
The Travelling Salesman Problem: Are there any Open Problems left? Part I.
11 Dec 2007 Arie Koster
University of Warwick
Improving Integer Programming Solvers: {0,½}-Chvátal-Gomory Cuts
4 Dec 2007 Peter Bro Miltersen
University of Aarhus
Strategic Game Playing and Equilibrium Refinements
27 Nov 2007 Mark Jerrum
Queen Mary
An Approximation Trichotomy for Boolean #CSP
20 Nov 2007 Vadim Lozin
University of Warwick
From Tree-width to Clique-width: Excluding a Unit Interval Graph
13 Nov 2007 Christian Raack
Zuse Institute Berlin
Cut-based Inequalities for Capacitated Network Design Problems
6 Nov 2007 Graham Brightwell
London School of Economics
Submodular Percolation and the Worm Order
30 Oct 2007 Marc Reimann
University of Warwick
Ant Colony Optimization for Vehicle Routing
23 Oct 2007 Gregory Gutin
Royal Holloway
Parameterized Algorithms for Directed Maximum Leaf Problems
16 Oct 2007 Harald Räcke
University of Warwick
Hierarchical Graph Decompositions for Minimising Congestion
9 Oct 2007 Mike Paterson
University of Warwick
Overhang Bounds
19 Jul 2007 Carl Seger
Intel
Connecting Bits with Floating-Point Numbers: Model Checking and Theorem Proving in Practice
28 Jun 2007 Vadim Lozin
University of Warwick
Tree-width and Optimization in Bounded Degree Graphs
29 May 2007 Prahladh Harsha
TTI Chicago
A Survey of Short PCP Constructions
3 May 2007 Arie Koster
University of Warwick
Solved and Unsolved Optimization Challenges in Telecommunications
06 December 2016, 16:00, MS.05

TBA

TBA

29 November 2016, 16:00, MS.05

TBA

TBA

22 November 2016, 16:00, MS.05

TBA

TBA

8 November 2016, 16:00, MS.05

Online Algorithms for Multi-Level Aggregation

In the Multi-Level Aggregation Problem (MLAP), requests arrive at the nodes of an edge-weighted tree T, and have to be served eventually. A service is defined as a subtree X of T that contains its root. This subtree X serves all requests that are pending in the nodes of X, and the cost of this service is equal to the total weight of X. Each request also incurs a waiting cost between its arrival and service times. The objective is to minimize the total waiting cost of all requests plus the total cost of all service subtrees. Aggregation problems for trees of arbitrary depth arise in multicasting, sensor networks, communication in organization hierarchies, and supply-chain management. The instances of MLAP associated with these applications are naturally online, in the sense that aggregation decisions need to be made without information about future requests.
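
As a purely illustrative sketch of the objective (not of the online algorithm from the talk), the following Python snippet computes the cost of a fixed, offline service schedule, using linear waiting cost as one simple instance of the general waiting-cost functions mentioned below; the encoding and the names subtree_weight and total_cost are assumptions made here, not notation from the talk:

    # Offline illustration of the MLAP objective only.
    # Assumed encoding: tree maps node -> (parent, weight of the edge to the parent), root's parent is None;
    # a request is (node, arrival_time); a service is (service_time, set_of_nodes), where the set is
    # assumed to contain the root and induce a subtree of T.

    def subtree_weight(tree, nodes):
        """Cost of one service: total weight of the edges inside the service subtree."""
        return sum(w for v, (parent, w) in tree.items() if v in nodes and parent in nodes)

    def total_cost(tree, requests, services):
        """Sum of all service costs plus (linear) waiting costs of all requests."""
        cost = sum(subtree_weight(tree, nodes) for _, nodes in services)
        for node, arrival in requests:
            served_at = min(t for t, nodes in services if t >= arrival and node in nodes)
            cost += served_at - arrival
        return cost

    # A path root-a-b with unit edge weights; the single request at b waits 3 time units.
    tree = {"root": (None, 0), "a": ("root", 1), "b": ("a", 1)}
    print(total_cost(tree, [("b", 0)], [(3, {"root", "a", "b"})]))  # -> 2 + 3 = 5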

Constant-competitive online algorithms are known for MLAP with one or two levels. However, it has been open whether there exist constant-competitive online algorithms for trees of depth more than 2. Addressing this open problem, we give the first constant-competitive online algorithm for networks with an arbitrary (fixed) number of levels. The competitive ratio is O(D^4 2^D), where D is the depth of T. The algorithm works for arbitrary waiting cost functions, including the variant with deadlines.

Joint work with Martin Böhm, Jarosław Byrka, Marek Chrobak, Christoph Dürr, Lukáš Folwarczný, Łukasz Jeż, Jiří Sgall, Nguyen Kim Thang and Pavel Veselý, appeared at ESA 2016.

01 November 2016, 16:00, MS.05

TBA

TBA

25 October 2016, 16:00, MS.05

TBA

TBA

18 October 2016, 16:00, MS.05

TBA

TBA

04 October 2016, 16:00, MS.05

Primes with missing digits

We will talk about recent work which shows that there are infinitely many primes with no 7 in their decimal expansion. (And similarly with 7 replaced by any other digit.) This shows the existence of primes in a thin set of numbers, which is typically very difficult. The proof relies on a fun mixture of number theory, discrete Fourier analysis and combinatorial geometry, amongst other things.
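
For a toy illustration of the statement (and nothing of the proof techniques), here is a short Python snippet that counts primes below a small bound whose decimal expansion avoids the digit 7; the function names are of course ad hoc:

    def is_prime(n):
        """Trial-division primality test (fine for this small illustration)."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def primes_missing_digit(limit, digit="7"):
        """Primes below `limit` whose decimal expansion avoids `digit`."""
        return [q for q in range(2, limit) if digit not in str(q) and is_prime(q)]

    print(len(primes_missing_digit(10**4)))  # how many such primes there are below 10,000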

28 June 2016, 16:00, MS.05

Distinguishing Hidden Markov Chains

Hidden Markov Chains (HMCs) are commonly used mathematical models of probabilistic systems. They are employed in various fields such as speech recognition, signal processing, and biological sequence analysis. We consider the problem of distinguishing two given HMCs based on an observation sequence that one of the HMCs generates. More precisely, given two HMCs and an observation sequence, a distinguishing algorithm is expected to identify the HMC that generates the observation sequence. Two HMCs are called distinguishable if for every ε>0 there is a distinguishing algorithm whose error probability is less than ε. We show that one can decide in polynomial time whether two HMCs are distinguishable. Further, we present and analyze two distinguishing algorithms for distinguishable HMCs. The first algorithm makes a decision after processing a fixed number of observations, and it exhibits two-sided error. The second algorithm processes an unbounded number of observations, but the algorithm has only one-sided error. The error probability, for both algorithms, decays exponentially with the number of processed observations. We also provide an algorithm for distinguishing multiple HMCs. Finally, we discuss an application in stochastic runtime verification.
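
One natural way to picture a fixed-horizon distinguisher is a likelihood comparison via the standard forward recursion, as in the Python sketch below; whether this matches the algorithms analysed in the talk is not claimed, and the names likelihood and guess_source are ad hoc:

    def likelihood(obs, pi, T, E):
        """Standard forward recursion: probability that the HMC (pi, T, E) emits `obs`."""
        n = len(pi)
        alpha = [pi[s] * E[s][obs[0]] for s in range(n)]
        for o in obs[1:]:
            alpha = [sum(alpha[s] * T[s][t] for s in range(n)) * E[t][o] for t in range(n)]
        return sum(alpha)

    def guess_source(obs, hmc1, hmc2):
        """Guess which HMC produced `obs` by comparing likelihoods (fixed-horizon decision)."""
        return 1 if likelihood(obs, *hmc1) >= likelihood(obs, *hmc2) else 2

    # Two 2-state HMCs over the observation alphabet {0, 1}; rows of T and E sum to 1.
    hmc_a = ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.2, 0.8]])
    hmc_b = ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]])
    print(guess_source([0, 0, 1, 0, 0], hmc_a, hmc_b))  # -> 1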

Joint work with A. Prasad Sistla.

21 June 2016, 16:00, MS.05

Upper tails for arithmetic progressions in a random set

Let X denote the number of k-term arithmetic progressions in a random subset of Z/NZ or {1,...,N}. We determine, in certain ranges of parameters, the probability that X exceeds its expectation by a constant factor. Recently, Chatterjee and Dembo reduced the problem, at least for k=3, to a natural variational problem. We solve this variational problem asymptotically, thereby obtaining the large deviation rate. More generally, for all k, using the inverse theorem for the Gowers norms, we determine the rate as long as p tends to 0 extremely slowly.
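
A toy Monte Carlo illustration of the random variable in question for k=3 (not of the large-deviation analysis), in Python, with Z/NZ for N prime and the simple convention of counting ordered progressions with d != 0; all names and parameter choices here are assumptions for the example:

    import random
    from itertools import product

    def count_3aps(A, N):
        """Ordered 3-term progressions (x, x+d, x+2d) mod N with d != 0 contained in A."""
        S = set(A)
        return sum(1 for x, d in product(range(N), range(1, N))
                   if {x, (x + d) % N, (x + 2 * d) % N} <= S)

    N, p, trials = 101, 0.3, 200
    counts = [count_3aps([x for x in range(N) if random.random() < p], N) for _ in range(trials)]
    mean = N * (N - 1) * p ** 3   # N prime: the three terms are distinct, so each survives w.p. p^3
    print(sum(counts) / trials, mean, sum(c >= 1.1 * mean for c in counts) / trials)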

Joint work with Bhaswar B. Bhattacharya, Shirshendu Ganguly, and Xuancheng Shao.

14 June 2016, 16:00, MS.05

Navigating one-counter automata: Directions in the mountains

One-counter automata (OCA) are finite-state automata with a counter that supports increments, decrements, and tests for zero. They correspond to an intermediate class between regular and context-free languages and are suitable for modeling "counting" phenomena. However, reasoning about OCA is often intractable: for example, language equivalence is undecidable for nondeterministic OCA, and for deterministic OCA it took 40 years to prove the existence of short distinguishing words. In this talk, I will give a review of reasoning tasks for OCA and discuss new tractability results:

  1. shortest paths between configurations of OCA are of at most quadratic length;
  2. the Parikh image and sub-/superword closures of the language are accepted by small nondeterministic finite automata.
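
To make the model described above concrete, here is a toy Python breadth-first search over OCA configurations (state, counter value); the counter is artificially capped so the search terminates, the name shortest_run and the encoding of transitions are assumptions, and nothing here reflects the quadratic-length bound beyond showing what a run between configurations is:

    from collections import deque

    def shortest_run(transitions, start, target, counter_cap=50):
        """BFS over OCA configurations (state, counter value). A transition is a triple
        (state, op, next_state) with op in {'+1', '-1', '=0?'}, where '=0?' is a zero test.
        The counter cap only keeps this toy search finite; real OCA counters are unbounded."""
        queue, seen = deque([(start, 0, 0)]), {(start, 0)}
        while queue:
            state, c, steps = queue.popleft()
            if (state, c) == target:
                return steps
            for s, op, t in transitions:
                if s != state:
                    continue
                if op == '+1' and c < counter_cap:
                    nxt = (t, c + 1)
                elif op == '-1' and c > 0:
                    nxt = (t, c - 1)
                elif op == '=0?' and c == 0:
                    nxt = (t, c)
                else:
                    continue
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt + (steps + 1,))
        return None

    # Example: increment in state 'p', hand over to 'q' with a decrement, empty the counter, test zero.
    ops = [('p', '+1', 'p'), ('p', '-1', 'q'), ('q', '-1', 'q'), ('q', '=0?', 'r')]
    print(shortest_run(ops, 'p', ('r', 0)))  # -> 3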

Based on joint work with Mohamed Faouzi Atig, Wojciech Czerwiński, Piotr Hofman, K Narayan Kumar, Michał Pilipczuk, Prakash Saivasan, Michael Wehar, and Georg Zetzsche.

31 May 2016, 16:00, MS.05

List colouring of hypergraphs

The list chromatic number of a hypergraph is defined in the same way as for a graph: it is the smallest number k such that, if every vertex is given a list of k colours, then each vertex can choose a colour from its list so that no edge is monochromatic. One of the successes of the recently discovered "container method" was to show that the list colouring number of an r-uniform hypergraph is at least c_r log d, where d is the average degree. But is the container method giving the best answer here? We describe a colouring algorithm based on "preference orders" which correctly determines the list chromatic number of hypergraphs having a certain property. Hypergraphs with this property include those generated at random, which are expected to be worst-case examples for the general problem.

Joint work with Ares Meroueh.

24 May 2016, 16:00, MS.05

The decomposition threshold of a given graph

A fundamental theorem of Wilson states that, for every graph F, the edge-set of every sufficiently large clique (satisfying a trivially necessary divisibility condition) has a decomposition into copies of F. One of the main open problems in this area is to determine the minimum degree threshold which guarantees an F-decomposition in an incomplete host graph. We solve this problem for the case when F is bipartite and make significant progress towards the general case.

This is joint work with Stefan Glock, Daniela Kühn, Richard Montgomery and Deryk Osthus.

17 May 2016, 16:00, MS.05

The time of graph bootstrap percolation

Graph bootstrap percolation, known also as weak saturation, is a simple cellular automaton introduced by Bollobás in 1968. Given a graph H and a subset G of E(K_n) we initially infect all edges in G and then, in consecutive steps, we infect every edge e that completes a new infected copy of H in K_n. We say that G percolates if eventually every edge in K_n is infected.
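
The process just described can be simulated directly; the naive Python sketch below (fine only for tiny n, with the ad hoc name kr_bootstrap) runs the K_r-bootstrap process round by round and reports how many rounds it takes and whether it percolates:

    from itertools import combinations

    def kr_bootstrap(n, r, initial_edges):
        """K_r-bootstrap percolation on K_n: in each round, infect every uninfected edge that
        completes a copy of K_r whose other edges are all infected. Returns (rounds, percolated)."""
        infected = {frozenset(e) for e in initial_edges}
        all_edges = {frozenset(e) for e in combinations(range(n), 2)}
        rounds = 0
        while True:
            new = set()
            for e in all_edges - infected:
                u, v = tuple(e)
                for others in combinations(set(range(n)) - {u, v}, r - 2):
                    clique = (u, v) + others
                    rest = {frozenset(f) for f in combinations(clique, 2)} - {e}
                    if rest <= infected:
                        new.add(e)
                        break
            if not new:
                return rounds, infected == all_edges
            infected |= new
            rounds += 1

    # A spanning path of K_5 percolates under K_3-bootstrap (triangle completion) in 2 rounds.
    print(kr_bootstrap(5, 3, [(i, i + 1) for i in range(4)]))  # -> (2, True)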

In this talk, for H = K_r, we discuss the running time of the process until it percolates. Treating this as an extremal question, we obtain bounds on the maximum running time of the process when the initial graph G is chosen deterministically. For G = G_{n,p}, we give threshold densities p = p(n,r,t) such that G_{n,p} percolates in exactly t time steps of the K_r-bootstrap process with high probability.

Based on joint work with Béla Bollobás, Oliver Riordan, and Julian Sahasrabudhe, and with Karen Gunderson and Sebastian Koch.

10 May 2016, 16:00, MS.05

On forbidden induced subgraphs for unit disk graphs

A unit disk graph is the intersection graph of disks of equal radii in the plane. The class of unit disk graphs is hereditary, and therefore can be characterized in terms of minimal forbidden induced subgraphs. In spite of the quite active study of unit disk graphs, very little is known about minimal forbidden induced subgraphs for this class; only finitely many minimal non-unit disk graphs can be found in the literature. In this work we study forbidden induced subgraphs for the class of unit disk graphs in a systematic way. We develop several structural and geometrical tools, and use them to reveal infinitely many new minimal non-unit disk graphs. Further, we use these results to investigate the structure of co-bipartite unit disk graphs. In particular, we give a structural characterization of those co-bipartite unit disk graphs whose edges between parts form a C_4-free bipartite graph, and show that bipartite complements of these graphs are also unit disk graphs. Our results lead us to propose a conjecture that the class of co-bipartite unit disk graphs is closed under bipartite complementation.
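
For concreteness, building the unit disk graph of a given point set takes only a couple of lines of Python (a sketch with an ad hoc function name; recognising which abstract graphs arise this way, equivalently describing the forbidden induced subgraphs, is the hard direction studied in the talk):

    from itertools import combinations
    from math import dist

    def unit_disk_graph(points, radius=1.0):
        """Intersection graph of equal-radius disks: centres at distance <= 2*radius are adjacent.
        Vertices are the indices of `points`; edges are returned as frozensets of index pairs."""
        return {frozenset((i, j)) for i, j in combinations(range(len(points)), 2)
                if dist(points[i], points[j]) <= 2 * radius}

    # This only goes from points to graphs, not the other way around.
    print(unit_disk_graph([(0, 0), (1, 0), (0, 1), (2, 2)], radius=0.6))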

Based on joint work with Aistis Atminas.

03 May 2016, 16:00, MS.05

Multi-particle diffusion limited aggregation

Many processes found in nature, for example dielectric breakdown cascades, Hele-Shaw flow and electrodeposition, exhibit fascinatingly similar behaviors such as the formation of fractal-like arms. We consider a classical random process of aggregation on Z^d, which belongs to a class of models introduced in the physics and chemistry literature to better understand how fractal structures arise in nature. At each vertex of Z^d, initially place one particle with probability mu, independently of other vertices. In addition, an aggregate will grow through Z^d. The aggregate initially consists only of the origin. Particles move as continuous-time simple random walks obeying the exclusion rule (so that each vertex has no more than one particle at any time). The aggregate grows indefinitely by attaching particles to its surface whenever a particle attempts to jump onto it. Once a particle is attached to the aggregate, it becomes part of the aggregate and does not move anymore. The only rigorous results previously known about this process were restricted to 1 dimension. In this work we show that, for dimensions d>1, if the initial density of particles mu is large enough, then with positive probability the aggregate grows with positive speed, reaching distance of order t by time t.

This is joint work with Vladas Sidoravicius.

26 April 2016, 16:00, MS.05

Copyless cost register automata

Cost register automata (CRA) and their subclass, copyless CRA, were recently proposed by Alur et al. as a new model for computing functions over words. We study the expressiveness and closure properties of copyless CRA. In particular, we compare this model with the well-known model of weighted automata. Weighted automata can be naturally characterized by a subfragment of weighted logic, a quantitative extension of MSO introduced by Droste and Gastin. Our results show that a similar characterization of copyless CRA seems to be unlikely. We introduce a new quantitative logic, which turns out to be equally expressive as a natural subclass of copyless CRA.
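
A minimal interpreter-style sketch of the model may help fix ideas: the Python snippet below elides the finite-state part (a single state) and does not enforce copylessness, which is a restriction on how registers may be reused in updates; run_cra and the encoding are assumptions made here for illustration:

    def run_cra(word, init, updates, output):
        """Toy single-state cost register automaton: `init` maps registers to initial values,
        `updates[letter]` maps each register to a function of the current register valuation
        (copylessness would require each register to appear at most once per update, which is
        not enforced here), and `output` maps the final valuation to the computed value."""
        regs = dict(init)
        for letter in word:
            regs = {r: f(regs) for r, f in updates[letter].items()}
        return output(regs)

    # A CRA over (Z, +) with one register computing (#a's) - (#b's) of the input word.
    updates = {'a': {'x': lambda regs: regs['x'] + 1},
               'b': {'x': lambda regs: regs['x'] - 1}}
    print(run_cra("aababb", {'x': 0}, updates, lambda regs: regs['x']))  # -> 0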

Based on joint work with Cristian Riveros.

8 March 2016, 16:00, MS.05

The Reachability Problem for Two-Dimensional Vector Addition Systems with States

Does a given regular language over an alphabet of integer vectors contain a word such that (1) the sum of every prefix is non-negative and (2) the sum of the whole word is 0?
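
The question above, spelled out as a (hopelessly inefficient) brute-force search in Python; the predicate accepts is only a stand-in for membership in the given regular language, which in the real problem would be specified by an NFA, and the function name is ad hoc:

    from itertools import product

    def admissible_word(alphabet, accepts, max_len=6):
        """Brute-force search (up to length `max_len`) for a word of integer vectors whose
        prefix sums stay coordinate-wise non-negative and whose total sum is zero, restricted
        to words satisfying `accepts`."""
        dim = len(alphabet[0])
        for n in range(1, max_len + 1):
            for word in product(alphabet, repeat=n):
                total, ok = [0] * dim, True
                for vec in word:
                    total = [a + b for a, b in zip(total, vec)]
                    if any(c < 0 for c in total):
                        ok = False
                        break
                if ok and all(c == 0 for c in total) and accepts(word):
                    return word
        return None

    # Toy "language": every word over these two letters is accepted.
    print(admissible_word([(1, 1), (-1, -1)], lambda w: True))  # -> ((1, 1), (-1, -1))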

This problem, known as the reachability problem for vector addition systems with states, has widespread applications in logic, formal languages, control theory, and the verification of concurrent processes. It was first shown to be decidable already in the early 80s, but its exact complexity remains elusive to date.

In this talk I will recall the status quo regarding the complexity of the reachability problem, mostly focussing on the subproblems for fixed and small dimensions.

I will present our recent result on the two-dimensional case that the length of shortest reachability certificates is polynomial in the largest integer and the size of the NFA defining the input language. This complements a result by Blondin et al. (LICS'15) and shows that the 2-dim. reachability problem is PSPACE-complete when numbers are encoded in binary and NL-complete if the input is given in unary.

1 March 2016, 16:00, MS.05

A New Perspective on FO Model Checking of Dense Graph Classes

We consider the FO model checking problem of dense graph classes, namely those which are FO-interpretable in some sparse graph classes. If an input dense graph is given together with the corresponding FO interpretation in a sparse graph, one can easily solve the model checking problem using the existing algorithms for sparse graph classes. However, if the assumed interpretation is not given, then the situation is markedly harder.

We give a structural characterization of graph classes which are FO-interpretable in graph classes of bounded degree. This characterization allows us to efficiently compute such an interpretation for an input graph. As a consequence, we obtain an FPT algorithm for FO model checking of graph classes FO interpretable in graph classes of bounded degree. The approach we use to obtain these results may also be of independent interest.

Based on joint work with P. Hliněný, D. Lokshtanov, J. Obdržálek and M. S. Ramanujan.

23 February 2016, 16:00, MS.05

Hamilton cycles in dense hypergraphs

We consider the problem of finding a Hamilton cycle in a uniform hypergraph of large minimum codegree. This is an area which has been widely studied over the years, with existing results including Dirac's classical theorem that any graph G on n>=3 vertices with minimum degree delta(G)>=n/2 admits a Hamilton cycle, and Rödl, Ruciński and Szemerédi's analogous theorem for k-uniform hypergraphs (k-graphs) H with minimum codegree delta(H)>=n/2+o(n).

Our attention is focussed on k-graphs with minimum codegree slightly below the aforementioned minimum codegree threshold. For tight cycles, we show that determining the existence of a Hamilton cycle is NP-hard even in k-graphs H with minimum codegree delta(H)>=n/2-C, where C is a constant depending only on k. On the other hand, we demonstrate significantly different behavior for looser cycles. Indeed, we prove the existence of a constant eps such that we can find a Hamilton 2-cycle in polynomial time (or give a certificate that no such cycle exists) in a 4-graph H with delta(H)>=n/2-eps*n. This is achieved through a structural theorem which precisely characterises all 4-graphs H with delta(H)>=n/2-eps*n which do not contain a Hamilton 2-cycle.

This is joint work with Frederik Garbe (University of Birmingham).

16 February 2016, 16:00, MS.05

Minimum number of edges in odd cycles

Erdős, Faudree and Rousseau conjectured in 1992 that for every k>=2 every graph with n vertices and n^2/4+1 edges contains at least 2n^2/9 edges that occur in an odd cycle of length 2k+1. We disprove this conjecture for k=2 by exhibiting a graph with only (2+sqrt(2))n^2/16 ≈ 0.2134n^2 edges in pentagons, below the conjectured 2n^2/9 ≈ 0.2222n^2. We prove that asymptotically this is the best possible constant in this case. For the remaining case k>=3, we prove that asymptotically the conjecture is true. The main tool used in the proofs is the flag algebra method applied in a specific two-colored setting.

Joint work with Andrzej Grzesik and Jan Volec.

9 February 2016, 16:00, MS.05

Partial representation extension problems for classes of graphs with geometric representations

In order to define a class of graphs, we often say that the class consists of exactly those graphs which admit a specific (e.g. geometric) representation. For instance, interval graphs are the graphs that can be represented by real intervals so that every vertex is assigned an interval and the edges are the pairs of intervals that intersect. The recognition problem for such a class of graphs asks whether a graph provided in the input belongs to the class. Most recognition algorithms also produce an appropriate representation for positive instances. The partial representation extension problem generalizes the recognition problem: given a graph G, a subset S of V(G), and a representation R of the induced subgraph G[S], it asks whether G admits a representation that extends R, that is, whose restriction to S is equal to R.

Partial representation extension problems have been introduced in 2011 by Klavík, Kratochvíl and Vyskočil, who provided a polynomial-time algorithm for extending partial representations of interval graphs. In general, recognition algorithms for classes which allow a compact structural description of all valid representations of a graph (such as the PQ-tree for interval graphs or the modular decomposition for permutation graphs) are usually easy to generalize to solve the partial representation extension problem as well. However, for many natural classes of graphs like circular-arc graphs or trapezoid graphs no such structural description is known. In a joint work with Tomasz Krawczyk, we propose a way of overcoming this difficulty; in particular, we provide a polynomial-time partial representation extension algorithm for the class of trapezoid graphs.

In this talk, I will briefly survey results on partial representation extension problems for various classes of graphs. Then, for some example classes, I will explain what the structural description of all valid representations looks like and how it is useful for extending partial representations. Finally, I will present some ideas behind the above-mentioned algorithm for trapezoid graphs.

2 February 2016, 16:00, MS.05

Non-Parametric Network Inference

Networks are ubiquitous in today’s world. Any time we make observations about people, places, or things and the interactions between them, we have a network. Yet a quantitative understanding of real-world networks is in its infancy, and must be based on strong theoretical and methodological foundations. The goal of this talk is to provide some insight into these foundations from the perspective of nonparametric techniques in relation to subgraph count statistics, in particular how tradeoffs between model complexity and parsimony can be balanced to yield practical algorithms with provable properties.

26 January 2016, 16:00, MS.05

History-Register Automata

Programs with dynamic allocation are able to create and use an unbounded number of fresh resources, such as references, objects, files, etc. We propose History-Register Automata (HRA), a new automata-theoretic formalism for modelling such programs. HRAs extend the expressiveness of previous approaches and bring us to the limits of decidability for reachability checks. The distinctive feature of our machines is their use of unbounded memory sets (histories) where input symbols can be selectively stored and compared with symbols to follow. In addition, stored symbols can be consumed or deleted by reset. We show that the combination of consumption and reset capabilities renders the automata powerful enough to imitate counter machines, and yields closure under all regular operations apart from complementation. We moreover examine weaker notions of HRAs which strike different balances between expressiveness and effectiveness. This is joint work with Radu Grigore.

19 January 2016, 16:00, MS.05

Moderate deviations of random graph subgraph counts

There have been many results recently on the probability of large deviations of subgraph counts in the Erdős-Rényi random graph G(n,p). In this talk we discuss smaller deviations in subgraph counts. And, as we shall see, the correct context for these results is G(n,m) rather than G(n,p).

Based on joint work with Christina Goldschmidt and Alex Scott.

8 December 2015, 16:00, MS.05

Communication Lower Bounds for Statistical Estimation Problems

We study the tradeoff between the statistical error and communication cost of distributed statistical estimation problems in high dimensions. In the distributed sparse Gaussian mean estimation problem, each of the m machines receives n data points from a d-dimensional Gaussian distribution with unknown mean θ which is promised to be k-sparse. The machines communicate by message passing and aim to estimate the mean θ. We provide a tight (up to logarithmic factors) tradeoff between the estimation error and the number of bits communicated between the machines. This directly leads to a lower bound for the distributed sparse linear regression problem: to achieve the statistical minimax error, the total communication is at least Ω(min{n,d}m), where n is the number of observations that each machine receives and d is the ambient dimension. We also give the first optimal simultaneous protocol in the dense case for mean estimation.

As our main technique, we prove a distributed data processing inequality, as a generalization of usual data processing inequalities, which might be of independent interest and useful for other problems.

This is joint work with Mark Braverman, Ankit Garg, Tengyu Ma, and David P. Woodruff.


1 December 2015, 16:00, MS.05

Graph polynomials for locally convergent graph sequences

A sequence of graphs is called locally convergent if their features observable by local sampling are getting indistinguishable in the limit. This concept is most useful for bounded degree graphs. We observe how graph polynomials like the chromatic or matching polynomial behave on such a sequence of graphs. Are they convergent in some regard? What would be a suitable notion of convergence?

(Joint work with different subsets of Miklós Abért and Péter Csikvári.)

24 November 2015, 16:00, MS.05

Regular and orientably regular maps

A map is a cellular decomposition of a surface, or, equivalently, a 2-cell embedding of a graph on a surface. A map is regular if its automorphism group is transitive, and hence regular, on flags (vertex-edge-side triples of mutually incident elements). If the supporting surface of a map is orientable, the map is orientably regular if its group of all orientation-preserving automorphisms is transitive, and hence regular, on arcs (edges with a direction). Regular and orientably regular maps are a natural generalisation of Platonic solids to arbitrary surfaces.

At an algebraic level, regular and orientably regular maps can be identified with the corresponding automorphism groups, which turn out to be quotients of the well-known triangle groups. This gives rise to rich connections between the theory of regular maps and group theory, hyperbolic geometry, Riemann surfaces and Galois theory.

In the talk we will discuss the state-of-the-art of classification of regular and orientably regular maps with a given underlying graph, supporting surface, or automorphism group, and outline open problems in this area of research.

17 November 2015, 16:00, MS.05

An introduction to Lp samplers and their applications

In this talk, I will discuss a certain class of samplers for dynamic data that are known as L_p samplers. Formally, given a finite stream of updates (additions and subtractions) on the coordinates of an underlying vector x in R^n, an L_p sampler outputs a random coordinate where the probability of picking the i-th coordinate is |x_i|^p/||x||_p^p. In particular, for p=1 each coordinate is sampled proportional to its weight |x_i|, while for p=0 the sampler simply outputs a random non-zero coordinate.
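
For reference, the target distribution itself is easy to sample from when the vector is known explicitly; the Python sketch below (with the hypothetical helper lp_sample) ignores the streaming and small-space aspects, which are the whole point of the talk:

    import random

    def lp_sample(x, p):
        """Exact (offline) L_p sampling: index i is returned with probability |x_i|^p / ||x||_p^p;
        for p = 0, a uniformly random non-zero coordinate is returned."""
        weights = [(v != 0) if p == 0 else abs(v) ** p for v in x]
        return random.choices(range(len(x)), weights=weights, k=1)[0]

    x = [0, 3, -1, 0, 2]
    print([lp_sample(x, 1) for _ in range(10)])  # indices 1, 2, 4 with probabilities 3/6, 1/6, 2/6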

Relaxing the requirements, it has been shown that for p in [0,2) there are approximate L_p samplers that output the i-th coordinate with probability O(|x_i|^p/||x||_p^p) using only O(log^2 n) bits of space.

In this talk, I will discuss the general idea behind such samplers and present two applications in finding duplicates and dynamic graph problems. If time permits, I will also show an Omega(log^2 n) lower bound for sampling from {0,±1} vectors that proves the tightness of the upper bound in terms of the dependence on n. In conclusion, I will mention some open questions and motivations for further work in this area.

The major part of this talk will be based on a joint work with Mert Saglam and Gabor Tardos published in PODS 2011.

10 November 2015, 16:00, MS.05

Closed Words: Algorithms and Combinatorics

A closed word, also referred to as a periodic-like word or complete first return, is a word whose longest border does not have internal occurrences, or, equivalently, whose longest repeated prefix is not right special. We show that a word of length n contains at least n+1 distinct closed factors. We also characterise words containing exactly n+1 closed factors. Furthermore, we show that a word of length n can contain Θ(n^2) many distinct closed factors.
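
A naive checker for the definition, plus a brute-force count of distinct closed factors, is given below in Python (quadratic at best, so purely illustrative; the function names are ad hoc, and the convention that single letters are closed while words with an empty longest border are not is one common choice):

    def occurrences(w, u):
        return [i for i in range(len(w) - len(u) + 1) if w[i:i + len(u)] == u]

    def is_closed(w):
        """Closed: the longest border of w occurs in w only as a prefix and as a suffix."""
        if len(w) == 1:
            return True
        borders = [w[:k] for k in range(1, len(w)) if w.endswith(w[:k])]
        if not borders:
            return False
        b = borders[-1]                                  # longest border
        return occurrences(w, b) == [0, len(w) - len(b)]

    def distinct_closed_factors(w):
        return {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1) if is_closed(w[i:j])}

    w = "abaab"
    print(sorted(distinct_closed_factors(w)))  # 6 distinct closed factors for this word of length 5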

On the algorithmic side, we consider factorising a word into its longest closed factors and present a linear-time algorithm for this problem. In addition, we construct a closed factor array, which holds, for every position in the word, the longest closed factor starting at that position. We describe a technique for this that runs in O(n log n / log log n) time.

3 November 2015, 16:00, MS.05

Heat kernels in graphs: A journey from random walks to geometry, and back

Heat kernels are one of the most fundamental concepts in physics and mathematics. In physics, the heat kernel is a fundamental solution of the heat equation and connects the Laplacian operator to the rate of heat dissipation. In spectral geometry, many fundamental techniques are based on heat kernels. In finite Markov chain theory, heat kernels correspond to continuous-time random walks and constitute one of the most powerful techniques in estimating the mixing time.

In this talk, we will briefly discuss this line of research and its relation to heat kernels in graphs. In particular, we will see how heat kernels are used to design the first nearly-linear time algorithm for finding clusters in real-world graphs. Some interesting open questions will be addressed as well.

This is based on the joint work with Richard Peng (MIT), and Luca Zanetti (University of Bristol). Parts of the results of this talk appeared in COLT 2015.

20 October 2015, 16:00, MS.05

Exponentially dense matroids

The growth rate function for a minor-closed class of matroids is the function h(n) whose value at an integer n is the maximum number of elements in a simple matroid in the class of rank at most n; this can be seen as a measure of the density of the matroids in the class. A theorem of Geelen, Kabell, Kung and Whittle implies that h(n), where finite, grows in n either linearly, quadratically, or exponentially with base equal to some prime power q. I will discuss growth rate functions for classes of the exponential sort, determining the growth rate function almost exactly for various interesting classes and giving a theorem that essentially characterises all such functions. No knowledge of matroid theory will be assumed.

13 October 2015, 16:00, MS.05

The phase transition in bounded-size Achlioptas processes

In the Erdös-Rényi random graph process, starting from an empty graph, in each step a new random edge is added to the evolving graph. One of its most interesting features is the `percolation phase transition': as the ratio of the number of edges to vertices increases past a certain critical density, the global structure changes radically, from only small components to a single giant component plus small ones.

In this talk we consider Achlioptas processes, which have become a key example for random graph processes with dependencies between the edges. Starting from an empty graph these proceed as follows: in each step two potential edges are chosen uniformly at random, and using some rule one of them is selected and added to the evolving graph.

We shall prove that, for a large class of widely studied rules (so-called bounded-size rules), the percolation phase transition is qualitatively comparable to the classical Erdös-Rényi process. For example, assuming ε^3 n → ∞ and ε → 0 as n → ∞, the size of the largest component after step t_c n ± εn whp satisfies L_1(t_c n - εn) ∼ D ε^{-2} log(ε^3 n) and L_1(t_c n + εn) ∼ d ε n, where t_c, D, d > 0 are rule-dependent constants (in the Erdös-Rényi process we have t_c = D = 1/2 and d = 4).
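
As an illustration of what a bounded-size rule looks like in practice, here is a short Python simulation (a sketch, not code from the talk) of the Bohman-Frieze rule, a bounded-size rule with B = 1, using a union-find structure to track component sizes.

    import random

    class DSU:
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n
        def find(self, v):
            while self.parent[v] != v:
                self.parent[v] = self.parent[self.parent[v]]
                v = self.parent[v]
            return v
        def union(self, u, v):
            ru, rv = self.find(u), self.find(v)
            if ru == rv:
                return
            if self.size[ru] < self.size[rv]:
                ru, rv = rv, ru
            self.parent[rv] = ru
            self.size[ru] += self.size[rv]

    def bohman_frieze(n, steps):
        # Bohman-Frieze rule: the first offered edge is taken iff both of its
        # endpoints are isolated vertices; otherwise the second edge is taken.
        # Returns the size of the largest component after `steps` steps.
        dsu = DSU(n)
        for _ in range(steps):
            e1 = random.sample(range(n), 2)
            e2 = random.sample(range(n), 2)
            if dsu.size[dsu.find(e1[0])] == 1 and dsu.size[dsu.find(e1[1])] == 1:
                dsu.union(*e1)
            else:
                dsu.union(*e2)
        return max(dsu.size[dsu.find(v)] for v in range(n))

    # Largest component below vs above the (rule-dependent) critical density.
    print(bohman_frieze(10000, 5000), bohman_frieze(10000, 15000))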

Based on joint work with Oliver Riordan.

6 October 2015, 16:00, MS.05

Connectivity in bridge-addable graph classes: the McDiarmid-Steger-Welsh conjecture

A class of graphs is bridge-addable if, given a graph G in the class, any graph obtained by adding an edge between two connected components of G is also in the class. We prove a conjecture of McDiarmid, Steger and Welsh, which says that if G_n is taken uniformly at random from a class of bridge-addable graphs on n vertices, then G_n is connected with probability at least exp(-1/2)+o(1) when n tends to infinity. This lower bound is asymptotically best possible and is attained by the class of forests. Our proof uses a "local double counting" strategy that enables us to compare the sizes of two sets of combinatorial objects by solving a related multivariate optimization problem. In our case, the optimization problem deals with partition functions of trees weighted by a supermultiplicative functional.

This is joint work with Guillaume Chapuy.

21 September 2015, 11:00, CS1.04

Parallel Algorithms for Geometric Graph Problems

Over the past decade a number of parallel systems such as MapReduce, Hadoop, and Dryad have become widely successful in practice. In this talk, I will present a new algorithmic framework for geometric graph problems for these kinds of systems. Our algorithms produce approximate solutions for problems such as Minimum Spanning Tree (MST) and Earth-Mover Distance (EMD). Provided the underlying set of points lies in a space of constant dimension, only a constant number of rounds is necessary, while the total amount of space and communication remains linear in the size of the data. In contrast, for general graphs, achieving the same result for MST (or even connectivity) remains a challenging open problem, despite drawing significant attention in recent years.

Our algorithmic framework has implications beyond modern parallel systems. For example, it yields a new algorithm for approximating EMD in the plane in near-linear time. We note that while recently Sharathkumar and Agarwal (STOC 2012) have developed a near-linear time algorithm for (1+epsilon)-approximating EMD, our approach is fundamentally different and also solves the transportation cost problem, which was raised as an open question in their work.

Joint work with Alexandr Andoni, Aleksandar Nikolov, and Grigory Yaroslavtsev.

30 June 2015, 16:00, MS.05

The role of symmetry in high-dimensional sphere packing

The sphere packing problem asks how large a fraction of R^n can be covered with congruent balls if they are not allowed to overlap (except tangentially). In this talk, I’ll survey what’s known in high dimensions, with particular attention to the role of symmetry, and I’ll highlight some open problems on which progress should be possible.

23 June 2015, 16:00, MS.05

Left and right convergence of bounded degree graphs

There are several notions of convergence for sequences of bounded degree graphs. One such notion is left convergence (also known as local or Benjamini-Schramm convergence), which is based on counting neighborhood distributions. Another notion is right convergence, based on counting homomorphisms to a target (weighted) graph. We introduce some of these notions. Borgs, Chayes, Kahn and Lovász showed that a sequence of bounded degree graphs is left convergent if and only if it is right convergent for certain target graphs H with all weights (including loops) close to 1. We will give a short alternative proof of this statement.

15 June 2015, 16:00, room MS.05

The optimal absolute ratio for online bin packing

We present an online bin packing algorithm with absolute competitive ratio 5/3, which is optimal.

9 June 2015, 16:00, MS.05

Mantel's theorem extensions and analogues

Mantel’s theorem tells us precisely how many edges a triangle-free graph may contain. We will consider some of the many analogues and extensions of this result, old, new and conjectured. New results are joint work with Rahil Baber and Adam Sanitt.

2 June 2015, 16:00, MS.05

Approximating the Nash Social Welfare with Indivisible Items

We study the problem of allocating a set of indivisible items among agents with additive valuations, with the goal of maximizing the Nash social welfare objective, i.e., the product of the agents' valuations, a problem which is known to be NP-hard. Our main result is the first algorithm that guarantees a constant factor approximation for the average Nash social welfare. We first observe that the integrality gap of the convex program relaxation can be arbitrarily high, so we propose a different fractional solution, which we use as a guide to define an appropriate rounding that then leads to the constant factor approximation algorithm.

An interesting contribution of our analysis is the fractional solution that we use. The dual variables of the convex program that computes the Nash social welfare-maximizing fractional allocation can be interpreted as prices, and the fraction of the item allocated to each agent corresponds to how much the agent is spending on that item. Using this interpretation, we define a fractional solution that restricts the amount of spending that can go into any given item, thus keeping the highly demanded items under-allocated, and forcing the agents to spend on less desired items. As a result, the induced prices of this novel allocation reveal more information regarding how the less desired items should be allocated.

This is joint work with Vasilis Gkatzelis.

26 May 2015, 16:00, MS.05

Leverhulme Lecture: Complexity Hierarchies Beyond Elementary

Decision problems with non-elementary complexity occur naturally in logic, combinatorics, formal languages, verification, etc., with complexities ranging from simple towers of exponentials to Ackermannian and beyond. Somewhat surprisingly, we lack the definitions of classes and reductions that would allow one to state completeness results at such high complexities. We introduce a hierarchy of fast-growing complexity classes and show its suitability for completeness statements of many non-elementary problems.

12 May 2015, 16:00, MS.05

Optimal path and cycle decompositions of random graphs

Motivated by longstanding conjectures regarding decompositions of graphs into paths and cycles, we prove the optimal decomposition results for dense random graphs into (i) cycles and edges, (ii) paths and (iii) linear forests. There is also an interesting connection to the overfull subgraph conjecture on edge-colourings of graphs. We actually derive (i)-(iii) from quasirandom versions of our results.

The results are joint work with Stefan Glock and Deryk Osthus.

8 May 2015, 15:00, CS1.01

FPT algorithms via LP-relaxations

For a handful of problems (including some of the "flagship problems" of parameterized complexity), very efficient FPT algorithms can be constructed using the LP-relaxation of the problem. However, this approach requires the LP to have some very restrictive properties that only a few LP-relaxations have, namely half-integrality and a technical condition known as persistence. (Prior to the work presented here, we knew of essentially two problems with such LP-relaxations, namely Vertex Cover and Node Multiway Cut.) It was also not clear how to look for more compatible polytopes.

We circumvented this problem by observing that a half-integral LP relaxation corresponds to a polynomial-time solvable discrete problem relaxation to a larger domain, e.g., Vertex Cover can be efficiently minimised over the solution space {0, 1/2, 1}^n, which corresponds to optimisation over a domain of size 3. This lets us rephrase the existence of "useful relaxations" in the language of so-called Valued CSP problems.
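
The discrete relaxation mentioned above can be spelled out directly for Vertex Cover. The brute-force Python sketch below (illustrative only, feasible just for tiny graphs) minimises over the solution space {0, 1/2, 1}^n and exhibits half-integrality on a triangle.

    from itertools import product

    def vc_relaxation_opt(n, edges):
        # Minimise sum(x) over x in {0, 1/2, 1}^n subject to x[u] + x[v] >= 1
        # for every edge.  Exhaustive search over 3^n assignments, purely to
        # illustrate the half-integral relaxation, not the FPT algorithms.
        best = None
        for x in product((0.0, 0.5, 1.0), repeat=n):
            if all(x[u] + x[v] >= 1 for u, v in edges):
                if best is None or sum(x) < sum(best):
                    best = x
        return best

    # A triangle: the relaxation assigns 1/2 everywhere (value 3/2),
    # strictly below the integral vertex cover optimum of 2.
    print(vc_relaxation_opt(3, [(0, 1), (1, 2), (0, 2)]))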

In particular, we show that the LP-relaxations for Vertex Cover and Node Multiway Cut are special cases of a class of relaxations known as k-submodular functions, and use this class to give improved FPT algorithms for several other problems, including the first 2^O(k)-time algorithms for Group Feedback Vertex Set and Subset Feedback Vertex Set. This is based on work from SODA 2014.

If time allows, I will also give a quick look at a direct combinatorial condition that provides an extension of this work.

28 April 2015, 16:00, MS.05

FO Model Checking on Posets of Bounded Width

Over the past two decades the main focus of research into first-order (FO) model checking algorithms has been sparse relational structures, culminating in the FPT algorithm by Grohe, Kreutzer and Siebertz for FO model checking of nowhere dense classes of graphs [STOC'14], with dense structures starting to attract attention only recently. Bova, Ganian and Szeider [LICS'14] initiated the study of the complexity of FO model checking on partially ordered sets (posets). They showed that model checking existential FO logic is fixed-parameter tractable (FPT) on posets of bounded width, where the width of a poset is the size of its largest antichain. The existence of an FPT algorithm for general FO model checking on posets of bounded width, however, remained open. We resolve this question in the positive by giving an algorithm that takes as its input an n-element poset P of width w and an FO formula h, and determines whether h holds on P in time f(h,w)·n^2.

Based on joint work with Jakub Gajarský, Petr Hliněný, Jan Obdržálek, Sebastian Ordyniak, M. S. Ramanujan, and Saket Saurabh.

21 April 2015, 16:00, MS.05

Spectral Algorithms for Random Hypergraph Partitioning

I will discuss new algorithms for a collection of related problems: random hypergraph partitioning (the hypergraph generalization of the stochastic block model), planted random k-SAT, and inverting Goldreich's one-way function. In each the goal is to find a planted solution at the lowest possible edge or clause density. I will present both positive algorithmic results and lower bounds for general classes of algorithms. Based on joint work with Vitaly Feldman, Santosh Vempala, and Laura Florescu.

10 March 2015, 16:00, MS.05

Hybrid tractability of binary CSPs

In this talk, we will survey recent work on the computational complexity of binary constraint satisfaction problems (CSPs) that are tractable, but not solely due to the structure of the instance (such as bounded treewidth) or the types of the constraints in the instance (such as linear equations).

A part of this talk is based on joint work with David Cohen, Martin Cooper, and Peter Jeavons.

3 March 2015, 16:00, MS.05

The Complexity of the Simplex Method

The simplex method is a well-studied and widely-used pivoting method for solving linear programs. When Dantzig originally formulated the simplex method, he gave a natural pivot rule that pivots into the basis a variable with the most violated reduced cost. In their seminal work, Klee and Minty showed that this pivot rule takes exponential time in the worst case. We prove two main results on the simplex method. Firstly, we show that it is PSPACE-complete to find the solution that is computed by the simplex method using Dantzig's pivot rule. Secondly, we prove that deciding whether Dantzig's rule ever chooses a specific variable to enter the basis is PSPACE-complete. We use the known connection between Markov decision processes (MDPs) and linear programming, and an equivalence between Dantzig's pivot rule and a natural variant of policy iteration for average-reward MDPs. We construct MDPs and show PSPACE-completeness results for single-switch policy iteration, which in turn imply our main results for the simplex method.
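
For readers who want to see Dantzig's pivot rule in code, here is a compact tableau simplex sketch (illustrative only, for maximisation in the form max c^T x subject to Ax <= b, x >= 0 with b >= 0; no anti-cycling, and bounded problems are assumed). The entering variable is chosen by Dantzig's rule, i.e. the variable with the most attractive reduced cost.

    import numpy as np

    def simplex_dantzig(c, A, b):
        m, n = A.shape
        # Tableau [A | I | b]; the objective row holds the reduced costs.
        T = np.hstack([A, np.eye(m), b.reshape(-1, 1)]).astype(float)
        z = np.concatenate([c.astype(float), np.zeros(m + 1)])
        basis = list(range(n, n + m))
        while True:
            j = int(np.argmax(z[:-1]))          # Dantzig: best reduced cost
            if z[j] <= 1e-9:
                break                           # optimal
            ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                      for i in range(m)]
            i = int(np.argmin(ratios))          # ratio test picks the leaving row
            T[i] /= T[i, j]
            for r in range(m):
                if r != i:
                    T[r] -= T[r, j] * T[i]
            z -= z[j] * T[i]
            basis[i] = j
        x = np.zeros(n + m)
        for i, bi in enumerate(basis):
            x[bi] = T[i, -1]
        return x[:n], -z[-1]

    # max 3x + 2y  s.t.  x + y <= 4, x + 3y <= 6  ->  optimum (4, 0), value 12
    print(simplex_dantzig(np.array([3.0, 2.0]),
                          np.array([[1.0, 1.0], [1.0, 3.0]]),
                          np.array([4.0, 6.0])))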

Joint work with John Fearnley

24 February 2015, 16:00, MS.05

Random walks on dynamic graphs given by dynamical percolation

We study the behavior of random walk on dynamical percolation. In this model, the edges of a graph G are either open or closed, and refresh their status at rate \mu. At the same time a random walker moves on G at rate 1, but only along edges which are open. The regime of interest here is when \mu goes to zero as the number of vertices of G goes to infinity, since this creates long-range dependencies in the model. When G is the d-dimensional torus of side length n, we prove that in the subcritical regime the mixing time is of order n^2/\mu. We also obtain results concerning mean squared displacement and hitting times.

This is a joint work with Yuval Peres and Jeff Steif.

17 February 2015, 16:00, MS.05

Decompositions of large graphs into small subgraphs

A fundamental theorem of Wilson states that, for every graph F, every sufficiently large F-divisible clique has an F-decomposition. Here G has an F-decomposition if the edges of G can be covered by edge-disjoint copies of F (and F-divisibility is a trivial necessary condition for this). We extend Wilson’s theorem to graphs which are allowed to be far from complete (joint work with B. Barber, D. Kuhn, A. Lo).

I will also discuss some results and open problems on decompositions of dense graphs and hypergraphs into Hamilton cycles and perfect matchings.

10 February 2015, 16:00, MS.05

A Distributed Greedy Algorithm for Submodular Maximization in Large Datasets

A variety of practical optimization problems such as sensor placement, document summarization, and influence maximization can be formulated in terms of maximizing a non-decreasing submodular function. One simple yet practically effective approach to this problem is the standard greedy algorithm, which requires time Θ(n^2). Unfortunately, this makes it impractical for extremely large data sets.
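
The standard greedy algorithm referred to above is easy to state in code. The Python sketch below (illustrative only, with hypothetical helper names) maximises a monotone submodular function under a cardinality constraint, using a coverage function as the classic example.

    def greedy_max(ground_set, f, k):
        # Repeatedly add the element with the largest marginal gain.
        # With n elements this uses O(n*k) evaluations of f.
        selected = set()
        for _ in range(k):
            gains = {e: f(selected | {e}) - f(selected)
                     for e in ground_set if e not in selected}
            if not gains:
                break
            selected.add(max(gains, key=gains.get))
        return selected

    # Coverage function: how many items are covered by the chosen sets.
    sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
    coverage = lambda S: len(set().union(*(sets[s] for s in S))) if S else 0
    print(greedy_max(sets.keys(), coverage, 2))   # {'a', 'c'}, covering 6 items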

In this talk, I will present a new distributed greedy algorithm, based on joint work with Rafael Barbosa, Alina Ene, and Huy Le Nguyen. Unlike previous distributed approaches, our algorithm leverages the power of randomization to attain a constant-factor approximation, independent of the number of machines used. Moreover, our results can be applied to general matroid constraints and knapsack constraints, as well as non-monotone submodular functions.

27 January 2015, 16:00, MS.05

Colouring graphs without odd holes

Gyárfás conjectured in 1985 that if G is a graph with no induced cycle of odd length at least 5, then the chromatic number of G is bounded by a function of its clique number. We prove this conjecture, and discuss some further results. Joint work with Paul Seymour.

20 January 2015, 16:00, MS.05

Learning game-theoretic equilibria via query protocols

In the traditional model of algorithmically solving a game, the entire game is the "input data", which is presented in its entirety to an algorithm that is supposed to compute a solution, for example an exact/approximate Nash/correlated equilibrium. In some situations it may be preferable to regard the game as a "black box" which the algorithm can find out about via queries. For example, a complete description of a multi-player game may be infeasibly large. In this talk, we give an overview of recent work on algorithms that find game-theoretic equilibria via a sequence of queries that specify pure-strategy profiles, and answer with the associated payoffs. The main research issue is "query complexity", which refers to how many queries are needed in order to find a given kind of solution.

13 January 2015, 16:00, MS.05

Regular subgraphs of hypergraphs

A graph with no 2-regular subgraph is a forest, and thus an n-vertex such graph has at most n-1 edges. For r>=3, Pyber showed that an n-vertex graph with no r-regular subgraph has at most c_r n log(n) edges for some constant c_r. For hypergraphs, Mubayi and Verstraete showed that an n-vertex k-graph with no 2-regular subgraph has at most {n-1 choose k-1} edges if k>=4 is even and n is sufficiently large. In this talk, we prove that for every integer r>=3, an n-vertex k-graph H containing no r-regular subgraph has at most (1+o(1)){n-1 choose k-1} edges if k>r and n is sufficiently large. Moreover, if r=3,4, r|k, and n and k are both sufficiently large, then the maximum number of edges in H is exactly {n-1 choose k-1}, with equality only if all edges contain a specific vertex v. We also ask some related questions.

6 January 2015, 16:00, MS.05

Rainbow triangles in three-colored graphs

Erdős and Sós proposed a problem of determining the maximum number F(n) of rainbow triangles in 3-edge-colored complete graphs on n vertices. They conjectured that F(n)=F(a)+F(b)+F(c)+F(d)+abc+abd+acd+bcd, where a+b+c+d=n and a,b,c,d are as equal as possible. We prove that the conjectured recurrence holds for sufficiently large n. We also prove the conjecture for n = 4^k for all k >= 0. These results imply that lim F(n)/{n choose 3}=0.4, and determine the unique limit object. In the proof we use flag algebras combined with stability arguments.

Joint work with József Balogh, Bernard Lidický, Florian Pfender, Jan Volec and Michael Young.

2 December 2014, 16:00, MS.05

On the chromatic number of generalized shift graph

A generalized shift graph is a graph whose vertices are the ordered k-subsets of a well-ordered ground set; two vertices a_1<a_2<...<a_k and b_1<b_2<...<b_k are adjacent if they form some given pattern, such as a_1<a_2<b_1<a_3<b_2<...<b_{k-2}<a_k<b_{k-1}<b_k. In the talk we study the behaviour of the chromatic number of such graphs. (This is a joint work with Christian Avart and Vojta Rödl.)

25 November 2014, 16:00, MS.05

The Ramsey number of the clique and the hypercube

The Ramsey number r(K_s,Q_n) is the smallest integer N such that every red-blue colouring of the edges of the complete graph on N vertices contains either a red n-dimensional hypercube Q_n, or a blue clique on s vertices. In 1983, Burr and Erdos conjectured that r(K_s,Q_n) = (s-1)(2^n - 1)+1 for every positive integer s and sufficiently large n. In this talk we shall sketch the proof of this conjecture and discuss some related problems. (Joint work with Gonzalo Fiz Pontiveros, Simon Griffiths, Rob Morris and David Saxton.)

11 November 2014, 16:00, MS.05

Ramsey multiplicity of patterns in abelian groups

Burr and Rosta conjectured that given any fixed (small) graph H, a random 2-colouring of the edges of the complete graph K_n contains (asymptotically) the minimum number of monochromatic copies of H. This conjecture was disproved by Sidorenko, who showed that it is false when H is a triangle with a pendant edge. Despite a number of other special cases having been resolved since, the general classification problem remains wide open. We explore an analogous question concerning monochromatic additive configurations contained in 2-colourings of the cyclic group Z_p and the finite-dimensional vector space F_p^n. (This is joint work with Alex Saad.)

10 November 2014, 16:00, B3.01

Stable Matching, Friendship, and Altruism

We will discuss both integral and fractional versions of "correlated stable matching" problems. Each player is a node in a social network and strives to form a good match with a neighboring player; the player utilities from forming a match are correlated. We consider the existence, computation, and inefficiency of stable matchings from which no pair of players wants to deviate. We especially focus on networks where players are embedded in a social context, and may incorporate friendship relations or altruism into their decisions.

When the benefits from a match are the same for both players, we show that incorporating the well-being of other players into their matching decisions significantly improves the quality of stable solutions. Furthermore, a good stable matching always exists and can be reached in polynomial time. We extend these results to more general matching rewards, when players matched to each other may receive different utilities from the match. For this more general case, we show that incorporating social context (i.e., "caring about your friends") can make an even larger difference, and greatly reduce the price of anarchy. Finally, we extend most of our results to network contribution games, in which players can decide how much effort to contribute to each incident edge, instead of simply choosing a single node to match with.

4 November 2014, 16:00, MS.05

Structure in large spectra

The large spectrum, the set of frequencies where the Fourier transform of a given set is particularly large, plays an important role in arithmetic combinatorics. An important result about such sets is Chang's lemma, which gives a sharp bound for the additive dimension of such a set, i.e. the size of its largest dissociated subset. We present a new structural lemma which, for certain applications, is a quantitative improvement of Chang's lemma. As an application, we discuss how this can be used to obtain a new quantitative result for Roth's theorem on three-term arithmetic progressions.

28 October 2014, 16:00, MS.05

Efficient Teamwork

In real-life multi-agent projects, agents often choose actions that are highly inefficient for the project or damaging for other agents because they care only about their own contracts and interests. We show that this can be avoided by the right project management. We model agents with private workflows including hidden actions and chance events, which can influence each other through publicly observable actions and events. We design an efficient mechanism for this model which is prior-free, incentive-compatible, collusion-resistant, individually rational and avoids free-riders.

21 October 2014, 16:00, MS.05

Flag triangulations of spheres: f-, h-, and γ- numbers

Heinz Hopf asked whether the sign of the (reduced) Euler characteristic of a nonpositively curved manifold is determined by its dimension. The question was motivated by the following strategy (for Riemannian metrics): is there a higher-dimensional analogue of the Gauß-Bonnet formula, and is the integrand of constant sign? The answer to the first question is affirmative, but the second is not, so the problem remains open. If the manifold carries a piecewise Euclidean metric, the corresponding version of the Gauß-Bonnet formula is obvious. Even in the simplest case of a cubed manifold (i.e. a manifold made of cubes) the "integrand" becomes an interesting combinatorial quantity. According to an extremely simple, though beautiful, construction due to Mike Davis, the second question is equivalent to the original Hopf question (for cubed manifolds). In the talk we will discuss the connection between "no missing faces" triangulations of spheres and nonpositively curved cubed manifolds. We will derive the famous Charney-Davis Conjecture (the Hopf conjecture for cubed manifolds) and discuss the up-to-date human knowledge about it (as well as hopes and illusions).

14 October 2014, 16:00, MS.05

Information Theoretical Cryptogenography

We consider problems where n people are communicating and a random subset of them is trying to leak information, without making it clear who is leaking the information. We introduce a measure of suspicion, and show that the amount of leaked information will always be bounded by the expected increase in suspicion, and that this bound is tight. Suppose a large number of people have some information they want to leak, but they want to ensure that after the communication, an observer will assign probability at most c to the events that each of them is trying to leak the information. How much information can they reliably leak, per person who is leaking? We show that the answer is -log(1−c)/c-log(e) bits.

7 October 2014, 16:00, MS.05

Approximation Algorithms for Multiway Partitioning Problems and a Simplex Coloring Conjecture

We consider several problems where the goal is to partition a ground set into several pieces while minimizing a "cut-type" objective function; examples include Multiway Cut, Node-weighted Multiway Cut, Metric Labeling and Hypergraph Labeling. A natural LP relaxation gives an optimal approximation for these problems, assuming the Unique Games Conjecture (the UGC assumption can be removed for certain submodular generalizations of these problems). However, we do not know how to round this LP in general, and the focus has been on understanding this LP for specific problems. In this talk, we describe several rounding strategies and an integrality gap construction that leads to a simplex coloring conjecture reminiscent of Sperner’s Lemma.

This talk is based on joint work with Chandra Chekuri (UIUC), Huy Nguyen (Simons Institute Berkeley), Jan Vondrak (IBM Research Almaden), and Yi Wu (Google).

30 September 2014, 16:00, MS.05

A tetrachotomy for positive equality-free logic

We consider the problem of evaluating positive equality-free sentences of FO on a fixed, finite relational structure B. This may be seen as a generalisation of the Quantified Constraint Satisfaction Problem (QCSP), itself a generalisation of the Constraint Satisfaction Problem (CSP). We argue that our generalisation is not totally arbitrary, and that ours is the only problem in a large class - other than the CSP and QCSP - whose complexity taxonomy was unsolved.

We introduce surjective hyper-endomorphisms in order to give a Galois connection that characterises definability in positive equality-free FO. Through the algebraic method we are able to characterise the complexity of our problem for all finite structures B. Specifically, the problem is either in Logspace, NP-complete, co-NP-complete or Pspace-complete.

The problem appears obtuse, but possesses a surprising elegance. There may yet be lessons to be learnt in the methodology of the solution of this case, for the continuing program for CSP.

Tony Wirth, University of Melbourne
3 July 2014, 16:00, D1.07 (Zeeman building)

Rooted Triplet Inconsistency: Hardness and Algorithms

Andrew Chester (Melbourne), Riccardo Dondi (Bergamo), Anthony Wirth (Melbourne)

The Minimum Rooted Triplet Inconsistency (MinRTI) problem represents a key computational task in the construction of phylogenetic trees, whose goal is the reconstruction of a large tree that incorporates the information contained in a given set of triplets. Inspired by Aho et al.'s seminal paper and Bryant's thesis, we consider an edge-labelled multigraph problem, called Minimum Dissolving Graph (MinDG), and we show that it is equivalent to MinRTI. We prove that on an n-vertex graph, for every x > 0, MinDG is hard to approximate within a factor in O(2^{\log^{1-x} n}), even on trees formed by multi-edges. Then, via a further reduction, this inapproximability result extends to MinRTI. In addition, we provide polynomial-time algorithms that return optimal solutions when the input multigraph is restricted to a multi-edge path or a simple tree. Finally, we investigate the complexity of MinRTI (and of MinDG) when the number of occurrences of every label is bounded, and we show that the problem is APX-hard when this bound is equal to five.

19 June 2014, 14:00, MS.05

Subexponential Parameterized Complexity of Completion Problems

Let P be a fixed hereditary graph class. In the P Completion problem, given a graph G and an integer k, we ask whether it is possible to add at most k edges to G to obtain a member of P. In recent years completion problems have received significant attention from the perspective of parameterized complexity, with the standard parameterization by k. In my talk I will first survey the history of the study of the parameterized complexity of completion problems, including the breakthrough paper of Villanger et al that settled the fixed-parameter tractability of Interval Completion, as well as recent advancements on polynomial kernelization.
Then, we will move to the main topic of the talk, namely subexponential parameterized algorithms. The first fixed-parameter algorithms for completion problems focused mostly on the ‘forbidden induced subgraphs’ definition of the graph class P in question. In 2012 Fomin and Villanger came up with a novel idea to instead focus on some structural definition of the class P, trying to build the modified output graph by dynamic programming. Slightly simplifying, we may say that the main technical contribution of Fomin and Villanger is a bound of at most exp(sqrt(k)log(k)) reasonable ‘partial chordal graphs’ for an input instance (G, k) of Chordal Completion. Consequently, Chordal Completion can be solved in exp(sqrt(k)log(k))+poly(n) time. Following the approach of Fomin and Villanger, in the past two years subexponential parameterized algorithms were shown for the classes of chain, split, threshold, trivially perfect, pseudosplit and, very recently, proper interval and interval graphs. Moreover, a few lower bounds for related graph classes were found.
In my talk I will present the approach of Fomin and Villanger mostly on the example of Trivially Perfect Completion, and then survey the main ideas needed in the remaining algorithms.

Jun Zhang, NTU Singapore
18 June 2014, 15:00, MS.05

Differentially Private Data Release

Privacy preserving data publishing is an important problem that has been extensively studied in the past decades. The state-of-the-art solution to the problem is differential privacy, which offers a strong degree of privacy protection without relying on restrictive assumptions of the adversary. In this talk, we first give a brief introduction to differential privacy, then show how to handle the publication of high-dimensional sensitive data with differential privacy guarantees.

10 June 2014, 16:00, MS.05

The height of the tower in Szemerédi's regularity lemma

Szemerédi’s regularity lemma is one of the most powerful tools in graph theory. Roughly speaking, the lemma says that the vertex set of any graph may be partitioned into a small number of parts such that the bipartite subgraph between almost every pair of parts behaves in a random-like fashion. Addressing a question of Gowers, we determine the order of the tower height for the partition size in a version of Szemerédi’s regularity lemma.

Joint work with Jacob Fox.

3 Jun 2014, 16:00, MS.05

Minors and dimension

The dimension of a poset P is the minimum number of linear extensions of P whose intersection is equal to P. This parameter plays a similar role for posets as the chromatic number does for graphs. A lot of research has been carried out in order to understand when and why the dimension is bounded. There are constructions of posets with height 2 (but very dense cover graphs) or with planar cover graphs (but unbounded height) that have unbounded dimension. Streib and Trotter proved in 2012 that posets with bounded height and with planar cover graphs have bounded dimension. Recently, Joret et al. proved that the dimension is bounded for posets with bounded height whose cover graphs have bounded tree-width. My current work generalizes both these results, showing that the dimension is bounded for posets of bounded height whose cover graphs exclude a fixed (topological) minor. The proof is based on the Robertson-Seymour and Grohe-Marx structural decomposition theorems.

In this talk, I will survey results relating the dimension of a poset to structural properties of its cover graph and present some ideas behind the proof of the result on excluded minors.

27 May 2014, 16:00, MS.05

Some problems around random walks

I will present a selection of open problems related to random walks on graphs that I hope will be of interest to the computer science and extremal combinatorics communities.

20 May 2014, 16:00, MS.05

Subcubic triangle-free graphs have fractional chromatic number at most 14/5

We prove the following conjecture of Heckman and Thomas: every triangle-free graph with maximum degree 3 is fractionally (14/5)-colorable. A graph is fractionally k-colorable if every vertex can be assigned a measurable subset of [0,k) of unit Lebesgue measure and the sets of every two adjacent vertices are disjoint.

This is a joint work with Zdeněk Dvořák and Jean-Sébastien Sereni.

13 May 2014, 16:00, MS.05

The topology and Möbius function of the permutation pattern poset

An occurrence of the pattern p in a permutation Π is a subsequence of length |p| in Π whose letters appear in the same order of size as the letters in p. For example, the subsequence 425 in 341625 is an occurrence of 213, whereas 341625 avoids 321. The set of all finite permutations forms a poset with respect to pattern containment. As this poset embodies all pattern containment (and avoidance) in permutations, it is a fundamental object in all pattern studies. We study its intervals [A,B], that is, sets of permutations containing a given permutation A and contained in another permutation B. This poset has a rich structure, only a little of which is understood so far.
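
For concreteness, a brute-force pattern containment check in Python (exponential and purely illustrative, with a hypothetical helper name) makes the definition of an occurrence explicit, using the example from the abstract.

    from itertools import combinations

    def contains_pattern(perm, patt):
        # Does perm contain a subsequence whose letters appear in the same
        # relative order of size as the letters of patt?
        def standardise(seq):
            order = sorted(seq)
            return tuple(order.index(x) for x in seq)
        target = standardise(patt)
        return any(standardise(sub) == target
                   for sub in combinations(perm, len(patt)))

    print(contains_pattern((3, 4, 1, 6, 2, 5), (2, 1, 3)))   # True, e.g. the subsequence 4 2 5
    print(contains_pattern((3, 4, 1, 6, 2, 5), (3, 2, 1)))   # False: 341625 avoids 321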

Any interval I has an associated simplicial complex, called its order complex, whose topological properties are closely connected to properties of I. For example, the reduced Euler characteristic of an order complex equals the Möbius function of the underlying poset. Thus, a boolean algebra, whose order complex is a sphere, has Möbius function ±1.

So far, we only know a few things about the topology and Möbius function of intervals in this poset. For example, we know that intervals of layered permutations (which are concatenations of decreasing sequences, each with smaller letters than the next, such as 321465) are shellable. That implies they are homotopy equivalent to wedges of spheres, and the Möbius function, which equals, up to a sign, the number of spheres, can be computed in polynomial time. The same is true for intervals of permutations with a fixed number of descents (pairs of adjacent letters in decreasing order). We can also compute, in polynomial time, the Möbius function for intervals of separable permutations (those avoiding 2413 and 3142) and we conjecture that they are also shellable.

I will report on joint work with Peter McNamara and work of my student Jason Smith.

29 April 2014, 16:00, MS.05

Diffusion Load Balancing

In Diffusion Load Balancing one assumes that a set of m tasks is arbitrarily distributed over n resources. The system is modelled by a graph where the nodes represent the resources and the edges represent communication links between resources. Diffusion Load Balancing works in synchronous rounds: in every round, every node is allowed to balance its load with its direct neighbours in the network. This model is usually considered the more realistic one. The question is how long it takes to balance the load as evenly as possible.

One distinguishes between continuous and discrete models. In continuous load balancing the tasks can be split arbitrarily, allowing neighbouring nodes to balance their load evenly. In discrete load balancing tasks cannot be split at all, and a well-balanced situation is much harder to achieve.
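
A minimal Python sketch of one round of the continuous (first-order) diffusion scheme (illustrative only, not one of the frameworks of the talk): every node sends an alpha-fraction of the load difference across each incident edge.

    def diffusion_round(load, neighbours, alpha):
        # load: dict node -> current load; neighbours: dict node -> adjacent nodes.
        # alpha is a small step size, e.g. 1/(max_degree + 1), so that loads
        # stay non-negative and the process converges to the balanced state.
        new_load = dict(load)
        for u in load:
            for v in neighbours[u]:
                new_load[u] += alpha * (load[v] - load[u])
        return new_load

    # Example: a path on 4 nodes with all load initially on node 0.
    neigh = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    load = {0: 12.0, 1: 0.0, 2: 0.0, 3: 0.0}
    for _ in range(50):
        load = diffusion_round(load, neigh, alpha=1/3)
    print(load)   # approaches 3.0 on every node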

In this talk I will introduce two different frameworks that translate a continuous load balancing scheme into a discrete version. The first scheme tries to simulate the continuous algorithm, whereas the second scheme is a more direct approach.

14 March 2014, 15:00, MS.05

Balls into Bins via Local Search

We study a natural process for allocating m balls (tasks) into n bins (resources) that are organized as the vertices of an undirected graph G. Balls arrive one at a time. When a ball arrives, it first chooses a vertex u in G uniformly at random. Then the ball performs a local search in G starting from u until it reaches a vertex with locally minimum load, where the ball is finally placed. Then the next ball arrives and this procedure is repeated. In this talk we derive bounds on the maximum load of this process and the time until every bin has at least one ball allocated to it.
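
A direct simulation of this process is straightforward; the Python sketch below is purely illustrative, with ties and the exact local-search step resolved in one natural way.

    import random

    def allocate_local_search(neighbours, m, seed=None):
        # neighbours: dict vertex -> list of adjacent vertices (the graph G)
        # m: number of balls to allocate
        rng = random.Random(seed)
        load = {v: 0 for v in neighbours}
        vertices = list(neighbours)
        for _ in range(m):
            u = rng.choice(vertices)              # uniformly random start vertex
            while True:
                smaller = [v for v in neighbours[u] if load[v] < load[u]]
                if not smaller:
                    break                         # u has locally minimum load
                u = min(smaller, key=lambda v: load[v])
            load[u] += 1                          # place the ball at u
        return load

    # Example: 1000 balls on a cycle of length 100.
    n = 100
    cycle = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
    loads = allocate_local_search(cycle, 1000, seed=1)
    print(max(loads.values()), min(loads.values()))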

11 March 2014, 16:00, MS.05

RSP-based analysis for the efficiency of l1-minimization for solving l0-problems

Many practical problems (e.g., in signal and image processing) can be formulated as l0-minimization problems, which seek the sparsest solution to an underdetermined linear system. Recent studies indicate that l1-minimization is efficient for solving l0-problems in many situations. From a mathematical point of view, however, the understanding of the relationship between l0- and l1-minimization remains incomplete. Their relationship can be further interpreted via properties of the range space of the transpose of the matrix, which provides an angle from which to completely characterize the uniqueness of l1-minimizers and the uniform recovery of k-sparse signals. This analysis leads naturally to the concept of the range space property (RSP) and the so-called 'full-column-rank' property, which altogether provide a broad understanding of the equivalence and the strong equivalence between l0- and l1-minimization problems.
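
The l1-minimization problem itself is just a linear program. The Python sketch below (illustrative only) solves basis pursuit, min ||x||_1 subject to Ax = b, via the usual splitting x = u - v with u, v >= 0, and shows sparse recovery on a toy instance.

    import numpy as np
    from scipy.optimize import linprog

    def l1_minimize(A, b):
        # min ||x||_1  s.t.  A x = b, written as an LP with x = u - v, u, v >= 0.
        # This is the convex surrogate for the NP-hard l0-problem; under
        # conditions such as the range space property the two share a solution.
        m, n = A.shape
        c = np.ones(2 * n)
        res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                      bounds=[(0, None)] * (2 * n))
        return res.x[:n] - res.x[n:]

    # Toy instance: a 1-sparse vector in R^6 observed through 3 random
    # measurements; l1-minimization often (though not always) recovers it exactly.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 6))
    x_true = np.array([0.0, 0.0, 4.0, 0.0, 0.0, 0.0])
    print(np.round(l1_minimize(A, A @ x_true), 3))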

04 March 2014, 16:00, MS.05

A new optimization framework for dynamic resource allocation problems

Many decision problems can be cast as dynamic resource allocation problems, i.e., they can be framed in the context of a set of requests requiring a complex time-based interaction amongst a set of available resources. For instance, scheduling, one of the classic problems in Operations Research, belongs to this class.

For this class of problems we present a new optimization framework. The framework a) allows modeling flexibility by incorporating different objective functions, alternative sets of resources and fairness controls; b) is widely applicable in a variety of problems in transportation, services and engineering; and c) is tractable, i.e., provides near-optimal solutions fast for large-scale instances. To justify these assertions, we model and report encouraging computational results on three widely studied problems - Air Traffic Flow Management, Aircraft Maintenance Problems and Job Shop Scheduling. Finally, we provide several polyhedral results that offer insights into its effectiveness.

Joint work with D. Bertsimas and S. Gupta

25 February 2014, 16:00, MS.05

Fast projection methods for robust separable nonnegative matrix factorization

Nonnegative matrix factorization (NMF) has become a widely used tool for analysis of high-dimensional data. In this talk, we first present a series of applications of NMF in image processing, text mining, and hyperspectral imaging. Then we address the problem of solving NMF, which is NP-hard in general. However, certain special cases, such as the case of separable data with noise, are known to be solvable in polynomial time. We propose a very fast successive projection algorithm for this case. Variants of this algorithm have appeared previously in the literature; our principal contribution is to prove that the algorithm is robust against noise and to formulate bounds on the noise tolerance. A second contribution is an analysis of a preconditioning strategy based on semidefinite programming that significantly improves the robustness against noise. We present computational results on artificial data and on a simulated hyperspectral imaging problem. (Joint work with Stephen Vavasis; see http://arxiv.org/abs/1208.1237 and http://arxiv.org/abs/1310.2273 for more details.)
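
The successive projection idea can be sketched in a few lines of Python (a simplified rendering of a standard variant, not necessarily the exact algorithm of the talk): greedily pick the column of largest norm, then project the data onto the orthogonal complement of the chosen column.

    import numpy as np

    def successive_projection(M, r):
        # Under the separability assumption, the r selected columns are
        # (approximately) the 'pure' columns whose nonnegative combinations
        # generate all the others.  Illustrative sketch only.
        R = np.array(M, dtype=float)
        chosen = []
        for _ in range(r):
            j = int(np.argmax(np.linalg.norm(R, axis=0)))
            chosen.append(j)
            u = R[:, j] / np.linalg.norm(R[:, j])
            R = R - np.outer(u, u @ R)            # project out the chosen direction
        return chosen

    # Toy separable data: columns 0 and 3 are the pure columns,
    # the rest are nonnegative mixtures of them.
    W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    H = np.array([[1.0, 0.3, 0.5, 0.0, 0.7],
                  [0.0, 0.7, 0.5, 1.0, 0.3]])
    print(successive_projection(W @ H, 2))        # picks columns 0 and 3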

18 February 2014, 16:00, MS.05

Temporal Networks: Optimization and Connectivity

We consider here Temporal Networks. They are defined by a labelling L assigning a set of discrete time-labels to each edge of the graph G of the network. The labels of an edge are natural numbers; they indicate the discrete time moments at which the edge is available. We focus on path problems of temporal networks. In particular, we consider time-respecting paths, i.e. paths whose edges are assigned by L a strictly increasing sequence of labels. We begin by giving efficient algorithms for computing shortest time-respecting paths. We then prove a natural analog of Menger’s theorem holding for arbitrary temporal networks. Finally, we propose two cost minimization parameters for temporal network design. One is the “temporality” of G, in which the goal is to minimize the maximum number of labels per edge, and the other is the temporal cost of G, where the goal is to minimize the total number of labels used. Optimization of these parameters is subject to some connectivity constraints. We prove several lower and upper bounds for the temporality and the temporal cost of some very basic graphs like rings, trees, and directed acyclic graphs. We also examine how hard it is to compute (even approximately) such costs. This work is joint with G. Mertzios and O. Michail and appeared in ICALP 2013.
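
As a small illustration of time-respecting reachability (not one of the algorithms of the talk), the following Python sketch scans the edge labels in time order and computes, for every vertex, the earliest time it can be reached from a source by a path with strictly increasing labels.

    def earliest_arrival(labels, source):
        # labels: dict mapping an undirected edge (u, v) to its list of
        # time-labels (natural numbers).  A path may use an edge at time t
        # only if the previous edge was used at a strictly smaller time.
        arrival = {source: 0}                      # the source is available from time 0
        events = sorted((t, u, v) for (u, v), ts in labels.items() for t in ts)
        for t, u, v in events:
            for a, b in ((u, v), (v, u)):
                if a in arrival and arrival[a] < t and t < arrival.get(b, float("inf")):
                    arrival[b] = t
        return arrival

    # Example: the labels force the walk 1 -> 2 -> 3 to respect time.
    labels = {(1, 2): [2, 5], (2, 3): [3], (1, 3): [1]}
    print(earliest_arrival(labels, 1))             # {1: 0, 3: 1, 2: 2}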

11 February 2014, 16:00, MS.05

Strongly polynomial algorithm for generalized flow maximization

The generalized flow model is a classical extension of network flows. Besides the capacity constraints, there is a gain factor given on every arc, such that the flow amount gets multiplied by this factor while traversing the arc. Gain factors can be used to model physical changes such as leakage or theft. Other common applications use the nodes to represent different types of entities, e.g. different currencies, and the gain factors correspond to the exchange rates. We investigate the flow maximization problem in such networks. This is described by a linear program; however, no strongly polynomial algorithm was known before, despite a vast literature on combinatorial algorithms. From a general linear programming perspective, this is probably the simplest linear program where Tardos's general result on strongly polynomial algorithms does not apply. We give the first strongly polynomial algorithm for the problem. It uses a new variant of the classical scaling technique, called continuous scaling. The main measure of progress is that within a strongly polynomial number of steps, an arc can be identified that must be tight in every dual optimal solution, and thus can be contracted, reducing the size of the problem instance.

31 January 2014, 15:00, MS.04

Learning Sums of Independent Integer Random Variables

Let S be a sum of n independent integer random variables, each supported on {0,1,...,k−1}. How many samples are required to learn the distribution of S to high accuracy? In this talk I will show that the answer is completely independent of n, and moreover I will describe a computationally efficient algorithm which achieves this low sample complexity. More precisely, the algorithm learns any such S to ε-accuracy (with respect to the total variation distance) using poly(k,1/ε) samples, independent of n. Its running time is poly(k,1/ε) in the standard word RAM model.

This result is a broad generalization of the main result of [Daskalakis, Diakonikolas, Servedio-STOC'12] which gave a similar learning result for the special case k=2 (when the distribution S is a Poisson Binomial Distribution). Prior to this work, no nontrivial results were known for learning these distributions even in the case k=3. A key difficulty is that, in contrast to the case of k=2, sums of independent {0,1,2}-valued random variables may behave very differently from (discretized) normal distributions, and in fact may be rather complicated --- they are not log-concave, they can have Ω(n) local optima, there is no relationship between the Kolmogorov distance and the total variation distance for the class, etc.

The heart of the learning algorithm is a new limit theorem which characterizes what the sum of an arbitrary number of arbitrary independent {0,1,...,k−1}-valued random variables may look like. Previous limit theorems in this setting made strong assumptions on the "shift invariance" of the constituent random variables in order to force a discretized normal limit. We believe that our new limit theorem, as the first result for truly arbitrary sums of independent {0,1,...,k−1}-valued random variables, is of independent interest.

The talk will be based on a joint work with Costis Daskalakis, Ryan O'Donnell, Rocco Servedio and Li-Yang Tan.

21 January 2014, 16:00, MS.05

A domination algorithm for {0,1}-instances of the travelling salesman problem

In this talk, I shall discuss an approximation algorithm for {0,1}-instances of the travelling salesman problem which performs well with respect to combinatorial dominance. More precisely, given a {0,1}-edge-weighting of the complete graph K_n on n vertices, our algorithm outputs a Hamilton cycle H of K_n with the following property: the proportion of Hamilton cycles of K_n whose weight is smaller than that of H is at most n^{-1/29} = o(1). Our analysis is based on a martingale approach. Previously, the best result in this direction was a polynomial-time algorithm with domination ratio 1/2 - o(1) for arbitrary edge weights. On the hardness side we can show that, if the Exponential Time Hypothesis holds, there exists a constant C such that n^{-1/29} cannot be replaced by exp(-(log n)^C) in the result above. (Joint work with Daniela Kühn and Deryk Osthus)

4 December 2013, 16:00, MS.03

Tight Lower Bounds for the Online Labeling Problem

In the online labeling problem with parameters n and m we are presented with a sequence of n keys from a totally ordered universe U and must assign each arriving key a label from the label set {1,2,...,m} so that the order of labels (strictly) respects the ordering on U. As new keys arrive it may be necessary to change the labels of some items; such changes may be done at any time at unit cost for each change. The goal is to minimize the total cost. An alternative formulation of this problem is the file maintenance problem, in which the items, instead of being labeled, are maintained in sorted order in an array of length m, and we pay unit cost for moving an item. Although a simple algorithm for the file maintenance problem has been known for more than 30 years (Itai, Konheim, Rodeh, 1981), no matching lower bound was known until our recent result (Bulanek, Koucky, Saks, STOC 2012). For the case m = n^C with C > 1, a lower bound was known (Dietz, Seiferas, Zhang, 2004); however, the proof was rather complicated and could not be extended to even bigger m (for example m = n^{log n}). This was solved in our next result (Babka, Bulanek, Cunat, Koucky, Saks, ESA 2013). In this talk I will discuss the importance of online labeling for cache-oblivious algorithms and also show the basic ideas of our results.

Joint work with Martin Babka, Vladimír Cunat, Michal Koucky, Michael Saks

3 December 2013, 16:00, MS.05

The Evolution of Subcritical Achlioptas Processes

In Achlioptas processes, starting from an empty graph, in each step two potential edges are chosen uniformly at random, and using some rule one of them is selected and added to the evolving graph. Although the evolution of such 'local' modifications of the Erdös-Rényi random graph process has received considerable attention during the last decade, so far only rather simple rules are well understood. Indeed, the main focus has been on 'bounded-size' rules, where all component sizes larger than some constant B are treated the same way, and for more complex rules very few rigorous results are known.

We study Achlioptas processes given by (unbounded) size rules such as the sum and product rules. Using a variant of the neighbourhood exploration process and branching process arguments we show that certain key statistics are tightly concentrated at least until the susceptibility (the expected size of the component containing a randomly chosen vertex) diverges. Our convergence result is most likely best possible for certain rules: in the later evolution the number of vertices in small components may not be concentrated. Furthermore, we believe that for a large class of rules the critical time where the susceptibility 'blows up' coincides with the percolation threshold.

Joint work with Oliver Riordan.

26 November 2013, 16:00, MS.05

A Unified Approach to Truthful Scheduling on Related Machines

We present a unified framework for designing deterministic monotone polynomial time approximation schemes (PTAS's) for a wide class of scheduling problems on uniformly related machines. This class includes (among others) minimizing the makespan, maximizing the minimum load, and minimizing the lp norm of the machine loads vector. Previously, this kind of result was only known for the makespan objective. Monotone algorithms have the property that an increase in the speed of a machine cannot decrease the amount of work assigned to it. The key idea of our novel method is to show that for goal functions that are sufficiently well-behaved functions of the machine loads, it is possible to compute in polynomial time a highly structured nearly optimal schedule.

Monotone approximation schemes have an important role in the emerging area of algorithmic mechanism design. In the game-theoretical setting of these scheduling problems there is a social goal, which is one of the objective functions that we study. Each machine is controlled by a selfish single-parameter agent, where its private information is its cost of processing a unit sized job, which is also the inverse of the speed of its machine. Each agent wishes to maximize its own profit, defined as the payment it receives from the mechanism minus its cost for processing all jobs assigned to it, and places a bid which corresponds to its private information. For each one of the problems, we show that we can calculate payments that guarantee truthfulness in an efficient manner. Thus, there exists a dominant strategy where agents report their true speeds, and we show the existence of a truthful mechanism which can be implemented in polynomial time, where the social goal is approximated within a factor of 1+ε for every ε>0.

26 November 2013, 11:00, Mathematics A1.01

Balanced Graph Partitioning for Massive Scale Computations

Recently, there has been a lot of interest in balanced partitioning of large-scale graphs, in order to scale out computations that take a large graph as input by running in parallel on distributed clusters of machines. Traditional balanced graph partitioning asks to partition a set of vertices such that the number of vertices across different partitions is balanced within some degree of slackness, and the number of cut edges is minimized. This problem is known to be NP-hard, and the best known approximation ratio is O(\sqrt{\log n \log k}) for partitioning a graph with n vertices into k partitions. Substantial work was recently devoted to studying the online graph partitioning problem, where vertices are required to be irrevocably assigned to partitions as they are observed in an input stream of vertices, mostly by studying various heuristics and developing some limited theoretical results.

An alternative way to partition a graph that has emerged recently is so-called edge partitioning, where instead the set of edges is partitioned. The motivation for the latter strategy is that many real-world graphs exhibit a power-law degree sequence, so some portion of the vertices have large degrees, and allowing the edges incident to high-degree vertices to be assigned to different partitions may make it possible to achieve a good load balance and a small cut. The problem becomes even more challenging when the cut cost is defined to account for the aggregation of messages that is allowed by many distributed computations. In this talk, we will present our work and results in the area.
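
One very simple online heuristic of the kind studied in this line of work (a sketch, not a method from the talk) assigns each arriving vertex to the partition that already contains most of its neighbours, subject to a hard capacity per partition.

    def streaming_partition(stream, k, capacity):
        # stream: iterable of (vertex, adjacency_list) pairs in arrival order.
        # Assumes k * capacity is at least the number of vertices.
        part, sizes = {}, [0] * k
        for v, neighbours in stream:
            scores = [0] * k
            for u in neighbours:
                if u in part:
                    scores[part[u]] += 1
            candidates = [i for i in range(k) if sizes[i] < capacity]
            best = max(candidates, key=lambda i: (scores[i], -sizes[i]))
            part[v] = best                         # irrevocable assignment
            sizes[best] += 1
        return part

    # Two triangles joined by one edge, k = 2 partitions of capacity 3:
    # the heuristic keeps each triangle together and cuts only the joining edge.
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    adj = {v: [u for e in edges for u in e if v in e and u != v] for v in range(6)}
    print(streaming_partition(sorted(adj.items()), 2, 3))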

20 November 2013, 16:00, MS.05

Bidimensionality on Geometric Intersection Graphs

Let B be a finite collection of geometric (not necessarily convex) bodies in the plane. Clearly, this class of geometric objects naturally generalizes the class of disks, polylines, ellipsoids and even convex polyhedra. We consider geometric intersection graphs G_B where each body of the collection B is represented by a vertex, and two vertices of G_B are adjacent if the intersection of the corresponding bodies is non-empty. For such graph classes and under natural restrictions on their maximum degree or subgraph exclusion, we prove that the relation between their tree-width and the maximum size of a grid minor is linear. These combinatorial results vastly extend the applicability of all the meta-algorithmic results of the bidimensionality theory to geometrically defined graph classes.

19 November 2013, 16:00, MS.05

Equidistributions on Planar Maps via Involutions on Description Trees

Description trees were introduced by Cori, Jacquard and Schaeffer in 1997 to give a general framework for the recursive decompositions of several families of planar maps studied by Tutte in a series of papers in the 1960s. We are interested in two classes of planar maps, which can be thought of as connected planar graphs embedded in the plane or the sphere with a directed edge distinguished as the root. These classes are rooted non-separable (or, 2-connected) and bicubic planar maps, and the corresponding trees are called, respectively, β(1,0)-trees and β(0,1)-trees.

Using different ways to generate these trees, we define two endofunctions on them that turn out to be involutions. These involutions are not only interesting in their own right (in particular, from the point of view of counting fixed points), but they were also used to obtain non-trivial equidistribution results on planar maps, certain pattern-avoiding permutations, and objects counted by the Catalan numbers.

The results to be presented in this talk are obtained in a series of papers in collaboration with several researchers.

15 November 2013, 15:00, MS.03

Multi-Branch Split Cutting Planes for Mixed-Integer Programs

Cutting planes (cuts, for short), or inequalities satisfied by integral solutions of systems of linear inequalities, are important tools used in modern solvers for integer programming problems. In this talk we present theoretical and computational results on split cuts — studied by Cook, Kannan and Schrijver (1990), and related to Gomory mixed-integer cuts — and recent generalizations, namely multi-branch split cuts. In particular, we give the first pure cutting plane algorithm to solve mixed-integer programs based on multi-branch split cuts. We also show that there are mixed-integer programs with n+1 variables which are unsolvable by (n-1)-branch split cuts. In computational work, we consider a family of quadratic unconstrained boolean optimization problems recently used in tests on the DWave quantum computer and discuss how they can be solved using special families of split cuts in reasonable time.

This is joint work with Neil Dobbs, Oktay Gunluk, Tomasz Nowicki, Grzegorz Swirczsz and Marcos Goycoolea.

12 November 2013, 16:00, MS.05

Partition Regularity in the Rationals

A system of linear equations is partition regular if, whenever the natural numbers are finitely coloured, it has a monochromatic solution. We could instead talk about partition regularity over the rational numbers, and if the system is finite then these notions coincide. What happens in the infinite case? Joint work with Neil Hindman and Imre Leader.
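
For a concrete finite example of the definition (a standard fact, not one of the results above): the single equation x + y = z is partition regular, and already every 2-colouring of {1,...,5} forces a monochromatic solution, while {1,...,4} does not. The brute-force check below, assuming nothing beyond the standard library, verifies this.

    from itertools import product

    def has_mono_solution(colour, n):
        # is there a monochromatic solution of x + y = z inside {1, ..., n}?
        return any(colour[x] == colour[y] == colour[x + y]
                   for x in range(1, n + 1) for y in range(x, n + 1) if x + y <= n)

    for n in (4, 5):
        forced = all(has_mono_solution(dict(zip(range(1, n + 1), c)), n)
                     for c in product((0, 1), repeat=n))
        print(n, forced)   # 4 False (a good 2-colouring exists), 5 True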

5 November 2013, 16:00, MS.05

Dynamic Financial Hedging Strategies for a Storable Commodity with Demand Uncertainty

We consider a firm purchasing and processing a storable commodity in a volatile commodity price market. The firm has access to both a commodity spot market and an associated financial derivatives market. The purchased commodity serves as a raw material which is then processed into an end product with uncertain demand. The objective of the firm is to coordinate the replenishment and financial hedging decisions to maximize the mean-variance utility of its terminal wealth over a finite horizon.

We employ a dynamic programming approach to characterize the structure of optimal time-consistent policies for inventory and financial hedging decisions of the firm. Assuming unmet demand is lost, we show that under forward hedges the optimal inventory policy can be characterized by a myopic state-dependent base-stock level. The optimal hedging policy can be obtained by minimizing the variance of the hedging portfolio, the value of excess inventory and the profit-to-go as a function of future price. In the presence of a continuum of option strikes, we demonstrate how to construct custom exotic derivatives using forwards and options of all strikes to replicate the profit-to-go function. The financial hedging decisions are derived using the expected profit function evaluated under the optimal inventory policy. These results shed new light on corporate risk and operations management strategies: inventory replenishment decisions can be separated from the financial hedging decisions as long as forwards are in place, and the dynamic inventory decision problem reduces to a sequence of myopic optimization problems. In contrast to previous results, our work implies that financial hedges do affect optimal operational policies, and inventory and financial hedges can be substitutes. Finally, we extend our analysis to the cases with backorders, price-sensitive demand, and variable transaction costs.

Joint work with Panos Kouvelis (Olin School of Business, Washington University in St. Louis, US) and Qing Ding (Huazhong University of Science and Technology, Wuhan, China.)

30 October 2013, 16:00, MS.05

Faster Algorithms for the Sparse Fourier Transform

The Fast Fourier Transform (FFT) is one of the most fundamental numerical algorithms. It computes the Discrete Fourier Transform (DFT) of an n-dimensional signal in O(n log n) time. The algorithm plays an important role in many areas. It is not known whether its running time can be improved. However, in many applications, most of the Fourier coefficients of a signal are "small" or equal to zero, i.e., the output of the transform is (approximately) sparse. In this case, it is known that one can compute the set of non-zero coefficients faster than in O(n log n) time.

In this talk, I will describe a new set of efficient algorithms for the sparse Fourier Transform. One of the algorithms has the running time of O(k log n), where k is the number of non-zero Fourier coefficients of the signal. This improves over the runtime of the FFT for any k = o(n). If time allows, I will also describe some of the applications, to spectrum sensing and GPS locking, as well as mention a few outstanding open problems.
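
The numpy snippet below only illustrates the sparse-output setting using the ordinary O(n log n) dense FFT; it is not the O(k log n) algorithm of the talk, and the signal construction and threshold are purely illustrative.

    import numpy as np

    n, k = 1024, 3
    rng = np.random.default_rng(1)
    freqs = rng.choice(n, size=k, replace=False)
    amps = rng.standard_normal(k) + 1j * rng.standard_normal(k)

    # a signal that is exactly k-sparse in the frequency domain
    t = np.arange(n)
    x = sum(a * np.exp(2j * np.pi * f * t / n) for a, f in zip(amps, freqs))

    X = np.fft.fft(x)                               # dense FFT, O(n log n)
    support = np.flatnonzero(np.abs(X) > 1e-6 * n)  # only k coefficients are non-negligible
    print(sorted(support), sorted(freqs))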

The talk will cover the material from the joint papers with Fadel Adib, Badih Ghazi, Haitham Hassanieh, Michael Kapralov, Dina Katabi, Eric Price and Lixin Shi. The papers are available at http://groups.csail.mit.edu/netmit/sFFT/

22 October 2013, 16:00, MS.05

Near-Optimal Multi-Unit Auctions with Ordered Bidders

I will discuss prior-free profit-maximizing auctions for digital goods. In particular, I will give an overview of the area and I will focus on prior-free auctions with ordered bidders and identical items. In this model, we compare the expected revenue of an auction to the monotone price benchmark: the maximum revenue that can be obtained from a bid vector using prices that are non-increasing in the bidder ordering and bounded above by the second-highest bid. I will discuss an auction with constant-factor approximation guarantee for identical items, in both unlimited and limited supply settings. Consequently, this auction is simultaneously near-optimal for essentially every Bayesian environment in which bidders' valuation distributions have non-increasing monopoly prices, or in which the distribution of each bidder stochastically dominates that of the next.

15 October 2013, 16:00, MS.05

Information Theory and Compressed Sensing

The central goal of compressed sensing is to capture attributes of a signal using very few measurements. In most work to date this broader objective is exemplified by the important special case of classification or reconstruction from a small number of linear measurements. In this talk we use information theory to derive fundamental limits on compressive classification, on the maximum number of classes that can be discriminated with low probability of error and on the tradeoff between the number of classes and the probability of misclassification. We also describe how to use information theory to guide the design of linear measurements by maximizing mutual information between the measurements and the statistics of the source.

10 October 2013, 16:00, CS1.01

Routing in Directed Graphs with Symmetric Demands

In this talk, we consider some fundamental maximum throughput routing problems in directed graphs. In this setting, we are given a capacitated directed graph. We are also given source-destination pairs of nodes (s_1, t_1), (s_2, t_2), ..., (s_k, t_k). The goal is to select a largest subset of the pairs that are simultaneously routable subject to the capacities; a set of pairs is routable if there is a multicommodity flow for the pairs satisfying certain constraints that vary from problem to problem (e.g., integrality, unsplittability, edge or node capacities). Two well-studied optimization problems in this context are the Maximum Edge Disjoint Paths (MEDP) and the All-or-Nothing Flow (ANF) problem. In MEDP, a set of pairs is routable if the pairs can be connected using edge-disjoint paths. In ANF, a set of pairs is routable if there is a feasible multicommodity flow that fractionally routes one unit of flow from s_i to t_i for each routed pair (s_i, t_i).

MEDP and ANF are both NP-hard and their approximability has attracted substantial attention over the years. Over the last decade, several breakthrough results on both upper bounds and lower bounds have led to a much better understanding of these problems. At a high level, one can summarize this progress as follows. MEDP and ANF admit poly-logarithmic approximations in undirected graphs if one allows constant congestion, i.e., the routing violates the capacities by a constant factor. Moreover, these problems are hard to approximate within a poly-logarithmic factor in undirected graphs even if one allows constant congestion. In sharp contrast, both problems are hard to approximate to within a polynomial factor in directed graphs even if a constant congestion is allowed and the graph is acyclic.

In this talk, we focus on routing problems in directed graphs in the setting in which the demand pairs are symmetric: the input pairs are unordered and a pair s_i t_i is routed only if both the ordered pairs (s_i,t_i) and (t_i,s_i) are routed. Perhaps surprisingly, the symmetric setting can be much more tractable than the asymmetric setting. As we will see in this talk, when the demand pairs are symmetric, ANF admits a poly-logarithmic approximation with constant congestion. We will also touch upon some open questions related to MEDP in directed graphs with symmetric pairs.

This talk is based on joint work with Chandra Chekuri (UIUC).

8 October 2013, 16:00, MS.05

NL-completeness of Equivalence for Deterministic One-Counter Automata

Emerging from formal language theory, a classical model of computation is that of pushdown automata. A folklore result is that equivalence of pushdown automata is undecidable. Concerning deterministic pushdown automata, there is still an enormous complexity gap, where the primitive recursive upper bound is not matched by the best-known lower bound of P-hardness. Thus, further subclasses have been studied. The aim of the talk is to sketch the ideas underlying the recent result that has been presented at STOC'13. It is shown that equivalence of deterministic one-counter automata is NL-complete. One-counter automata are pushdown automata over a singleton stack alphabet plus a bottom stack symbol. This improves the superpolynomial complexity upper bound shown by Valiant and Paterson in 1975. The talk is based on joint results with S. Göller and P. Jančar.

1 October 2013, 16:00, MS.05

Vertex-Minors of Graphs

The vertex-minor relation of graphs is defined in terms of local complementation and vertex deletion. This concept naturally arises in the study of circle graphs (intersection graphs of chords in a circle) and rank-width of graphs. Many theorems on graph minors have analogous results in terms of vertex-minors and some graph algorithms are based on vertex-minors. We will survey known results and discuss a recent result regarding unavoidable vertex-minors in very large graphs.

25 July 2013, 15:00, CS104

On Optimality of Clustering by Space Filling Curves

This talk addresses the following question: “How much do I lose if I insist on handling multi-dimensional data using one-dimensional indexes?”. A space filling curve (SFC) is a mapping from a multi-dimensional universe to a single dimension, and has been used widely in the design of data structures in databases and scientific computing, including commercial products such as Oracle. A fundamental quality metric of an SFC is its “clustering number”, which measures the average number of contiguous segments a query region can be partitioned into.
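
As a concrete example of the clustering number (the results of the talk concern general space filling curves), the sketch below uses the Z-order (Morton) curve on a 2-D grid and counts the contiguous one-dimensional segments that cover an axis-aligned query rectangle; the grid size and queries are illustrative.

    def morton(x, y, bits=8):
        # Z-order (Morton) index: interleave the bits of x and y
        z = 0
        for i in range(bits):
            z |= ((x >> i) & 1) << (2 * i)
            z |= ((y >> i) & 1) << (2 * i + 1)
        return z

    def segments(x0, x1, y0, y1, bits=8):
        # number of contiguous 1-D runs that the cells of [x0, x1) x [y0, y1) map to
        keys = sorted(morton(x, y, bits)
                      for x in range(x0, x1) for y in range(y0, y1))
        return 1 + sum(1 for a, b in zip(keys, keys[1:]) if b != a + 1)

    print(segments(0, 4, 0, 4))   # an aligned 4x4 block forms a single run: 1
    print(segments(0, 2, 0, 4))   # a 2x4 rectangle is split into 2 runs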

We present the first non-trivial lower bounds on the clustering number of any space filling curve for a general class of multidimensional rectangles, and a characterization of the clustering number of a general class of space filling curves. These results help resolve open questions including one posed by Jagadish in 1997, and show fundamental limits on the performance of this data structuring technique.

This is based on joint work with Pan Xu.

25 June 2013, 16:00, MS.05

Optimizing the Management of Crews and Aircraft in the Canary Islands

This talk deals with a routing-and-scheduling optimization problem that is faced by an airline company that operates in the Canary Islands. There are 10 airports, with around 180 flights each day in total, and the flight time between any two airports is around 30 minutes. All of the maintenance equipment is based at the airport in Gran Canaria, and therefore each aircraft must go to Gran Canaria after flying for two days. The crew members, however, live not only in Gran Canaria, but also in Tenerife. Each crew member expects to return to his domicile at the end of each day. There are also other regulations on the activities of the crew members to be considered in the optimization problem. The goal is to construct routes and schedules for the aircraft and the crew simultaneously, at minimum cost, while satisfying the above constraints.

This problem can be modelled as a "2-depot vehicle routing problem with capacitated vehicles and driver exchanges". It is a new and challenging optimization problem, closely related to the classical vehicle routing problem, but with some new features requiring an ad-hoc analysis.

In this seminar we present mathematical models based on Integer Linear Programming, and describe "branch-and-cut" and "branch-and-price" techniques to find optimal or near-optimal solutions. These techniques are capable of solving real-life instances. The software package is currently being used by the airline company.

20 June 2013, 16:00, MS.05

Streaming Verification of Outsourced Computation

When handling large quantities of data, it is desirable to outsource the computational effort to a third party: this captures current efforts in cloud computing, but also scenarios within trusted computing systems and inter-organizational data sharing. When the third party is not fully trusted, it is desirable to give assurance that the computation has been performed correctly. This talk presents some recent results in designing new protocols for verifying computations which are streaming in nature: the verifier (data owner) needs only a single pass over the input, storing a sublinear amount of information, and follows a simple protocol with a prover (service provider) that takes a small number of rounds. A dishonest prover fools the verifier with only polynomially small probability, while an honest prover's answer is always accepted. Starting from basic aggregations, interactive proof techniques allow a quite general class of computations to be verified, leading to practical implementations.

18 June 2013, 16:00, MS.05

A family of effective payoffs in stochastic games with perfect information

We consider two-person zero-sum stochastic games with perfect information and, for each integer k>=0, introduce a new payoff function, called the k-total reward. For k = 0 and 1 they are the so-called mean payoff and total rewards, respectively. For all k, we prove Nash-solvability of the considered games in pure stationary strategies, and show that the uniformly optimal strategies for the discounted mean payoff (0-reward) function are also uniformly optimal for k-total rewards if the discount factor is close enough (depending on k) to 1. We also demonstrate that k-total reward games form a proper subset of (k + 1)-total reward games for each k. In particular, all these classes contain mean payoff games.

Joint work with Vladimir Gurvich (Rutgers, New Brunswick, USA), Khaled Elbassioni (Masdar Institute, Abu Dhabi, UAE), and Kazuhisa Makino (RIMS, Kyoto, Japan).

11 June 2013, 16:00, MS.05

Reachability in Two-Clock Timed Automata is PSPACE-complete

Timed automata are a successful and widely used formalism, which are used in the analysis and verification of real time systems. Perhaps the most fundamental problem for timed automata is the reachability problem: given an initial state, can we perform a sequence of transitions in order to reach a specified target state? For timed automata with three or more clocks, this problem is PSPACE-complete, and for one-clock timed automata the problem is NL-complete. The complexity of reachability in two-clock timed automata has been left open: the problem is known to be NP-hard, and contained in PSPACE.

Recently, Haase, Ouaknine, and Worrell have shown that reachability in two-clock timed automata is log-space equivalent to reachability in bounded one-counter automata. In this work, we show that reachability in bounded one-counter automata is PSPACE-complete, and therefore we resolve the complexity of reachability in two-clock timed automata.

4 June 2013, 16:00, MS.05

Quadratization of pseudo-Boolean functions

A pseudo-Boolean function f is a real-valued function of 0,1 variables. Any such function can be represented uniquely by a multilinear expression in the binary input variables, and it is said to be quadratic if this expression has degree two or less. In recent years, several authors have proposed to consider the problem of `quadratizing' pseudo-Boolean functions by expressing f(x) as min {g(x,y): y in {0,1}^m } where g(x,y) is a quadratic pseudo-Boolean function of x and of additional binary variables y. We say that g(x,y) is a quadratization of f. In this talk, we investigate the number of additional variables needed in a quadratization.
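
For a concrete (textbook) instance of the definition, and not of the constructions studied in the talk: a negative cubic monomial admits a quadratization with a single auxiliary variable, which the sketch below verifies by brute force over all 0/1 assignments.

    from itertools import product

    def f(x1, x2, x3):
        return -x1 * x2 * x3                 # a negative degree-3 monomial

    def g(x1, x2, x3, y):
        return y * (2 - x1 - x2 - x3)        # quadratic in (x, y)

    # g is a quadratization of f: minimising over the auxiliary y recovers f everywhere
    assert all(min(g(x1, x2, x3, 0), g(x1, x2, x3, 1)) == f(x1, x2, x3)
               for x1, x2, x3 in product((0, 1), repeat=3))
    print("quadratization verified")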

28 May 2013, 16:00, MS.05

New trends for graph searches

In this talk I will survey some new results on the computation of graph parameters using graph search, and also on how to discover graph structure using a series of graph searches.

First I will describe how to evaluate the diameter of a graph using only 4 successive BFS (Breadth-First Searches). This heuristic is very accurate and can be applied to huge graphs. Furthermore, it can be complemented by an exact algorithm which seems to be very efficient in practice (performing much better than its worst-case complexity would suggest).
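
A minimal sketch of the simplest (two-sweep) variant of this idea is given below, assuming a connected graph stored as a dictionary of adjacency lists; the four-BFS heuristic and the exact algorithm mentioned above are more elaborate.

    from collections import deque

    def bfs_far(adj, s):
        # BFS from s; return a farthest vertex and its distance from s
        dist, far = {s: 0}, s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    if dist[v] > dist[far]:
                        far = v
                    q.append(v)
        return far, dist[far]

    def double_sweep(adj, s):
        # BFS from s, then BFS from the farthest vertex found: a lower bound on the diameter
        u, _ = bfs_far(adj, s)
        _, d = bfs_far(adj, u)
        return d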

Then I will focus on cocomparability graphs, showing first that LDFS (Lexicographic Depth First Search) can be used to compute a minimum path cover.

I will finish by describing some fixed-point properties of graph searches acting on a cocomparability graph, and by mentioning many open problems.

21 May 2013, 16:00, MS.05

Reachability Problems for Words, Matrices and Maps

Most computational problems for matrix semigroups and groups are inherently difficult to solve and even undecidable starting from dimension three or four. Examples of such problems are the membership problem (including the special cases of the mortality and identity problems), vector reachability, scalar reachability, freeness problems, and emptiness of the intersection of matrix semigroups. Many questions about the decidability and complexity of problems for two-dimensional matrix semigroups remain open and are directly linked with other challenging problems in the field.

In this talk several closely related fundamental problems for words and matrices will be considered. I will survey a number of new techniques and encodings used to solve a long-standing open problem about reachability of the identity matrix; discuss hardness and decidability results for 2x2 matrix semigroups; and highlight interconnections between matrix problems, reachability for iterative maps, combinatorics on words and computational problems for braids.

14 May 2013, 16:00, MS.05

Skew Bisubmodularity and Valued CSPs

An instance of the Finite-Valued Constraint Satisfaction Problem (VCSP) is given by a finite set of variables, a finite domain of values, and a sum of rational-valued functions, each function depending on a subset of the variables. The goal is to find an assignment of values to the variables that minimises the sum.

In this talk I will investigate VCSPs in the case when the variables can take three values and provide a tight description of the tractable cases.

Joint work with Andrei Krokhin and Robert Powell.

13 May 2013, 16:00, MS.04

The Tradeoff Between Privacy and Communication

Traditional communication complexity scenarios feature several participants, each with a piece of the input, who must cooperate to compute some function of the input. The difficulty of such problems is measured by the amount of communication (in bits) required. However, in more realistic scenarios, participants may wish to keep their own input (personal preferences, banking information, etc.) as private as possible, revealing only the information necessary to compute the function.

In the 80s it was shown that most two-player functions are not computable while maintaining perfect privacy (Kushilevitz '89). Unwilling to let this rest, researchers have in recent years defined several relaxed notions of "approximate" privacy. Not surprisingly, protocols providing better privacy can require exponentially more bits than standard "simple" protocols.

I will present all background necessary, as well as our result providing tight lower bounds on the tradeoff between privacy and communication complexity. If time permits, we may explore the relation between different notions of privacy, as well as several open questions that stem from these results.

This talk will assume no background in communication complexity.

07 May 2013, 16:00, MS.05

Robust algorithms for constraint satisfaction problems

In a constraint satisfaction problem (CSP), one is given a set of variables, a set of values for the variables, and constraints on combinations of values that can be taken by specified subsets of variables. An algorithm for a constraint satisfaction problem is called robust if it outputs an assignment satisfying at least a (1-f(epsilon))-fraction of constraints for each (1-epsilon)-satisfiable instance (i.e., such that at most an epsilon-fraction of constraints needs to be removed to make the instance satisfiable), where f approaches 0 as epsilon goes to 0. Barto and Kozik recently characterized all CSPs admitting a robust polynomial algorithm (with doubly exponential f), confirming a conjecture of Guruswami and Zhou. In the present talk, based on joint work with Victor Dalmau, we shall explain how homomorphism dualities, universal algebra, and linear programming are combined to give large classes of CSPs that admit a robust polynomial algorithm with polynomial loss, i.e., with f(epsilon)=O(epsilon^{1/k}) for some k.

30 April 2013, 16:00, MS.05

Coloring (H1,H2)-free graphs

A k-coloring of a graph G=(V,E) is a mapping c: V -> {1,2,..,k} such that c(u) is not equal to c(v) whenever u and v are adjacent vertices. The Colouring problem is that of testing whether a given graph has a k-coloring for some given integer k. If a graph contains no induced subgraph isomorphic to any graph in some family {H1,..,Hp}, then it is called (H1,..,Hp)-free. The complexity of Colouring for H1-free graphs is known to be completely classified. For (H1,H2)-free graphs the classification is still open, although many partial results are known. We survey the known results and present a number of new results for this case.

23 April 2013, 16:00, MS.05

On the maximum difference between several graph invariants

A graph invariant is a property of graphs that depends only on the abstract structure, not on graph representations such as particular labellings or drawings of the graph. Although bounds on invariants have been studied for a long time by graph theorists, the past few years have seen a surge of interest in the systematic study of linear relations (or other kinds of relations) between graph invariants. We focus our attention on the average distance in a graph as well as on the maximum orders of an induced (linear) forest, an induced tree, an induced bipartite graph, and a stable set. We give upper bounds on differences between some of these invariants, and prove that they are tight.

19 March 2013, 16:00, MS.04

Squared Metric Facility Location Problem and the Upper Bound Factor-Revealing Programs

The facility location problem (FLP) aims to place facilities in a subset of locations, serving a given set of clients, and minimizing the cost of opening facilities and connecting clients to facilities. In the Metric FLP (MFLP), the underlying distance function is a metric. We consider a variant of the FLP for different distances. Namely, we study the Squared Metric FLP (SMFLP), when the distance function is a squared metric. We analyze algorithms for the MFLP when applied to the SMFLP. Surprisingly, the (now) standard LP-rounding algorithm for the FLP (Chudak and Shmoys 2003, etc.) matches the approximation lower bound of 2.040... for the SMFLP, which holds unless P = NP. This lower bound is obtained by extending the 1.463... hardness of the metric case. We have also studied known primal-dual algorithms for the MFLP (Jain et al. 2003, and Mahdian, Ye, and Zhang 2006), and showed that they performed well for the SMFLP. More interestingly, we showed how to obtain 'upper bound factor-revealing programs' (UPFRPs) to bound the so-called factor-revealing linear programs used in these primal-dual analyses. Solving one UPFRP suffices to obtain the approximation factor, as a more straightforward alternative to analytical proofs, which can be long and tedious.

18 March 2013, 16:00, B3.02

Truthful Mechanisms for Approximating Proportionally Fair Allocations

We revisit the classic problem of fair division from a mechanism design perspective and provide a simple truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution concept of Proportional Fairness, a solution concept used in money-free settings. Unfortunately, Proportional Fairness cannot be implemented truthfully. Instead, we have designed a mechanism that discards carefully chosen fractions of the allocated resources so as to induce the agents to be truthful in reporting their valuations.

For a multi-dimensional domain with an arbitrary number of agents and items, for all homothetic valuation functions, this mechanism provides every agent with at least a 1/e fraction of her Proportionally Fair valuation. We also uncover a connection between this mechanism and VCG-based mechanism design.

Finally, we ask whether better approximation ratios are possible in more restricted settings. In particular, motivated by the massive privatization auction in the Czech Republic in the early 90s, we provide another mechanism for additive linear valuations that works particularly well when all the items are in high demand.

Joint work with Vasilis Gkatzelis and Gagan Goel.

12 March 2013, 16:00, MS.05

Optimal coalition structure and stable payoff distribution in large games

Cooperative games with transferable utilities belong to a branch of game theory where groups of players can form coalitions in order to jointly achieve the groups’ objectives. Two key questions that arise in cooperative game theory are (a) How to optimally form coalitions of players? and (b) How to share/distribute the cost/reward among the players? The first problem can be viewed as a complete set covering problem which can be formulated as an MILP with an exponentially large number of binary variables. For the second problem, the nucleolus and the Shapley value are often considered as the most important solution concepts thanks to their attractive properties. However, computing the nucleolus is extremely difficult and existing methods can only solve games with fewer than 30 players. In this talk, we review some applications of cooperative game theory and present recent developments in solving these two problems. We test the algorithms with a number of simulated games with up to 400 players.
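
For readers unfamiliar with the Shapley value, the brute-force computation below (averaging marginal contributions over all join orders) illustrates the concept on a small glove game; it is only feasible for very small games, whereas the talk deals with hundreds of players, and the example game is purely illustrative.

    import math
    from itertools import permutations

    def shapley(players, v):
        # average marginal contribution of each player over all n! join orders
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition, worth = frozenset(), 0.0
            for p in order:
                new_worth = v(coalition | {p})
                phi[p] += new_worth - worth
                coalition, worth = coalition | {p}, new_worth
        n_fact = math.factorial(len(players))
        return {p: val / n_fact for p, val in phi.items()}

    # glove game: one left glove (L), two right gloves (R1, R2); a matched pair is worth 1
    v = lambda S: 1.0 if "L" in S and ({"R1", "R2"} & S) else 0.0
    print(shapley(["L", "R1", "R2"], v))   # L: 2/3, R1 and R2: 1/6 each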

5 March 2013, 16:00, MS.05

On bounded degree spanning trees in the random graph

The appearance of certain spanning subgraphs in the random graph is a well-studied phenomenon in probabilistic graph theory. In this talk, we present results on the threshold for the appearance of bounded-degree spanning trees in G(n,p) as well as for the corresponding universality statements. In particular, we show hitting time thresholds for some classes of bounded-degree spanning trees.

Joint work with Daniel Johannsen and Michael Krivelevich.

26 February 2013, 16:00, MS.05

More on perfect matchings in uniform hypergraphs

Last year I gave a talk at Warwick on minimum degree conditions which force a hypergraph to contain a perfect matching. In this self-contained talk I will discuss recent work with Yi Zhao (Georgia State University) on this problem: Given positive integers k and r with k/2 ≤ r ≤ k−1, we give a minimum r-degree condition that ensures a perfect matching in a k-uniform hypergraph. This condition is best possible and improves on work of Pikhurko, who gave an asymptotically exact result. Our approach makes use of the absorbing method.

19 February 2013, 16:00, MS.05

Ramsey multiplicity

Ramsey's theorem states that for any graph H there exists an n such that any 2-colouring of the complete graph on n vertices contains a monochromatic copy of H. Moreover, for large n, one can guarantee that there are at least c_H n^v copies of H, where v is the number of vertices in H. In this talk, we investigate the problem of optimising the constant c_H, focusing in particular on the case of complete graphs.

12 February 2013, 16:00, MS.05

On Optimality of Deterministic and Non-Deterministic Transformations

Dynamical systems can be modelled by Markov semigroups, and we use Markov transition kernels to model computational machines and algorithms. We study points on a simplex of all joint probability measures corresponding to various types of transition kernels. In particular, deterministic or non-deterministic (e.g. randomised) algorithms correspond respectively to boundary or interior points of the simplex. The performance of the corresponding systems is evaluated by expected cost (or expected utility), which is a linear functional. The domain of optimisation is defined by information constraints. We use our result on mutual absolute continuity of optimal measures to show that optimal transition kernels cannot be deterministic, unless information is unbounded. As an illustration, we construct an example where any deterministic kernel can only have unbounded expected cost, unless the information constraint is removed. On the other hand, a non-deterministic kernel can have finite expected cost under the same information constraint.

6 February 2013, 16:00, D1.07 (Zeeman Building)

Approximation Algorithm for the Resource Dependent Assignment Problem

Assignment problems deal with the question of how to assign a set of n agents to a set of n tasks such that each task is performed only once and each agent is assigned to a single task so as to minimize a specific predefined objective. We define a special kind of assignment problem where the cost of assigning agent j to task i is not a constant cost but rather a function of the amount of resource allocated to this particular task. We assume this function, known as the resource consumption function, to be convex in order to ensure the law of diminishing marginal returns. The amount of resource allocated to each task is a continuous decision variable. One variation of this problem, proven to be NP-hard, is minimizing the total assignment cost when the total available resource is limited. In this research we provide an approximation algorithm to solve this problem, where a ρ-approximation algorithm is an algorithm that runs in polynomial time and produces a solution with cost within a factor of at most ρ of the optimal solution.

29 January 2013, 16:00, MS.05

Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor

A fundamental model of operations research is the finite, but infinite-horizon, discounted Markov Decision Process. Ye showed recently that the simplex method with Dantzig pivoting rule, as well as Howard's policy iteration algorithm, solve discounted Markov decision processes, with a constant discount factor, in strongly polynomial time. More precisely, Ye showed that for both algorithms the number of iterations required to find the optimal policy is bounded by a polynomial in the number of states and actions. We improve Ye's analysis in two respects. First, we show a tighter bound for Howard's policy iteration algorithm. Second, we show that the same bound applies to the number of iterations performed by the strategy iteration (or strategy improvement) algorithm used for solving 2-player turn-based stochastic games with discounted zero-sum rewards. This provides the first strongly polynomial algorithm for solving these games.
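
As background only (the one-player case, not the two-player strategy iteration analysed in the talk), here is a minimal Howard policy iteration for a discounted MDP; it assumes numpy, and the tiny instance at the end is made up for illustration.

    import numpy as np

    def policy_iteration(P, r, gamma):
        # P[a]: state-to-state transition matrix under action a; r[a]: reward vector
        nA, nS = len(P), len(r[0])
        pi = np.zeros(nS, dtype=int)
        while True:
            # policy evaluation: solve (I - gamma * P_pi) v = r_pi
            P_pi = np.array([P[pi[s]][s] for s in range(nS)])
            r_pi = np.array([r[pi[s]][s] for s in range(nS)])
            v = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
            # greedy policy improvement
            q = np.array([r[a] + gamma * P[a] @ v for a in range(nA)])
            new_pi = q.argmax(axis=0)
            if np.array_equal(new_pi, pi):
                return pi, v
            pi = new_pi

    P = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.1, 0.9], [0.9, 0.1]])]
    r = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
    print(policy_iteration(P, r, gamma=0.9))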

Joint work with Peter Bro Miltersen and Uri Zwick.

22 January 2013, 16:00, MS.05

A new method of searching a network

An object (hider) is at an unknown point on a given network Q, not necessarily at a node. Starting from a known point (known to the hider), a searcher moves around the network to minimize the time T required to find (reach) the hider. The hider's location may be a known distribution or one chosen by the hider to make T large. We study the Bayesian problem where the hider's distribution over Q is known and also the game problem where it is chosen by an adversarial hider. Two types of searcher motion (continuous search or expanding search) are considered.

15 January 2013, 16:00, MS.05

Algorithms for optimization over noisy data

To deal with NP-hard reconstruction problems, one natural possibility consists in assuming that the input data is a noisy version of some unknown ground truth. We present two such examples: correlation clustering, and transitive tournaments. In correlation clustering, the goal is to partition data given pairwise similarity and dissimilarity information, and sometimes semi-definite programming can, with high probability, reconstruct the optimal (maximum likelihood) underlying clustering. The proof uses semi-definite programming duality and the properties of eigenvalues of random matrices. The transitive tournament problem asks to reverse the fewest edge orientations to make an input tournament transitive. In the noisy setting it is possible to reconstruct the underlying ordering with high probability using simple dynamic programming.

8 January 2013, 16:00, MS.05

On the factorial layer of hereditary classes of graphs

A class of graphs is hereditary if it is closed under taking induced subgraphs. It is known that the rates of growth of the number of n-vertex labeled graphs in hereditary classes constitute discrete layers. Alekseev and independently Balogh, Bollobas and Weinreich obtained global structural characterizations of hereditary classes in the first few lower layers. The minimal layer for which no such characterization is known is the factorial one, i.e. the layer containing classes with the factorial speed of growth of the number of n-vertex labeled graphs. Many classes of theoretical or practical importance belong to this layer, including forests, planar graphs, line graphs, interval graphs, permutation graphs, threshold graphs etc. In this talk we discuss some approaches to obtaining structural characterization of the factorial layer and related results.

27 November 2012, 16:00, MS.05

Turan Numbers of Bipartite Graphs Plus an Odd Cycle

The theory of Turan numbers of non-bipartite graphs is quite well-understood, but for bipartite graphs the field is wide open. Many of the main open problems here were raised in a series of conjectures by Erdos and Simonovits in 1982. One of them can be informally stated as follows: for any family F of bipartite graphs, there is an odd number k, such that the extremal problem for forbidding F and all odd cycles of length at most k within a general graph is asymptotically equivalent to that of forbidding F within a bipartite graph. In joint work with Sudakov and Verstraete we proved a stronger form of this conjecture, with stability and exactness, for some specific families of even cycles. Also, in joint work with Allen, Sudakov and Verstraete, we gave a general approach to the conjecture using Scott's sparse regularity lemma. This proves the conjecture for some specific complete bipartite graphs, and moreover is effective for any F based on some reasonable assumptions on the maximum number of edges in a bipartite F-free graph, which are similar to the conclusions of another conjecture of Erdos and Simonovits.

20 November 2012, 16:00, MS.05

Distributionally Robust Convex Optimisation

Distributionally robust optimisation is a decision-making paradigm that caters for situations in which the probability distribution of the uncertain problem parameters is itself subject to uncertainty. The distribution is then typically assumed to belong to an ambiguity set comprising all distributions that share certain structural or statistical properties. In this talk, we propose a unifying framework for modelling and solving distributionally robust optimisation problems. We introduce standardised ambiguity sets that contain all distributions with prescribed conic representable confidence sets and with mean values residing on an affine manifold. It turns out that these ambiguity sets are sufficiently expressive to encompass and extend several approaches from the recent literature. They also allow us to accommodate a variety of distributional properties from classical and robust statistics that have not yet been studied in the context of robust optimisation. We determine sharp conditions under which distributionally robust optimisation problems based on our standardised ambiguity sets are computationally tractable. We also provide tractable conservative approximations for problems that violate these conditions.

13 November 2012, 16:00, MS.05

A Tight, Combinatorial Algorithm for Submodular Matroid Maximization

In recent joint work with Yuval Filmus, I considered the problem of maximizing a submodular function subject to a single matroid constraint, obtaining a novel 1 - 1/e approximation algorithm based on non-oblivious local search. This is the same approximation ratio previously attained by the "continuous greedy" algorithm of Calinescu, Chekuri, Vondrak, and Pal, and is the best possible in polynomial time for the general value oracle setting (or, alternatively, assuming that P ≠ NP). Unlike the continuous greedy algorithm, however, our algorithm is purely combinatorial, and extremely simple; it requires no rounding and considers only integral solutions. In this talk, I will give a brief overview of both the problem and prior algorithmic approaches, and present our new algorithm, together with some of its analysis.
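
For contrast with the local-search approach of the talk, the snippet below is the classical greedy baseline for maximising a monotone submodular function (here, coverage) under a simple cardinality constraint, where it already guarantees a 1 - 1/e fraction of the optimum; the matroid case treated in the talk requires the more sophisticated algorithms discussed above, and the instance is illustrative.

    def greedy_max_coverage(sets, k):
        # pick k sets, each time the one with the largest marginal coverage
        covered, chosen = set(), []
        for _ in range(k):
            best = max(sets, key=lambda s: len(s - covered))
            chosen.append(best)
            covered |= best
        return chosen, covered

    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
    print(greedy_max_coverage(sets, 2))   # {1, 2, 3} and {4, 5, 6} cover everything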

30 October 2012, 16:00, MS.05

Local Optimality in Algebraic Path Problems

Due to complex policy constraints, some Internet routing protocols are associated with non-standard metrics that fall outside of the approach to path problems based on semirings and "globally optimal" paths. Some of these exotic metrics can be captured by relaxing the semiring axioms to include algebras that are not distributive. A notion of "local optimality" can be defined for such algebras as a fixed-point of a matrix equation, which models solutions required by Internet routing protocols. Algorithms for solving such equations are explored, as well as applications beyond network routing.
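
As background on the "globally optimal" semiring setting that the talk departs from (this is not the non-distributive case discussed above), the sketch below computes all-pairs shortest-path distances by iterating a matrix product over the (min, +) semiring; the example graph is illustrative.

    INF = float("inf")

    def min_plus(A, B):
        # matrix "product" over the (min, +) semiring: sum is min, product is +
        n = len(A)
        return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    # arc weights of a small digraph, INF where there is no arc, 0 on the diagonal
    A = [[0, 3, INF],
         [INF, 0, 2],
         [7, INF, 0]]

    D = A
    while True:                        # repeated squaring converges to the closure
        D2 = min_plus(D, D)
        if D2 == D:
            break
        D = D2
    print(D)                           # all-pairs shortest-path distances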

23 October 2012, 16:00, MS.05

Semi-local String Comparison

The computation of a longest common subsequence (LCS) between two strings is a classical algorithmic problem. Some applications require a generalisation of this problem, which we call semi-local LCS. It asks for the LCS between a string and all substrings of another string, and/or the LCS between all prefixes of one string and all suffixes of another. Apart from an important role that this generalised problem plays in string algorithms, it turns out to have surprising connections with semigroup algebra, computational geometry, planar graph algorithms, comparison networks, as well as practical applications in computational biology. The talk will present an efficient solution for the semi-local LCS problem, and will survey some related results and applications. Among these are dynamic LCS support; fast clique computation in special graphs; fast comparison of compressed strings; parallel string algorithms.
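
For reference, the classical quadratic-time dynamic program for the (global) LCS length is sketched below; the semi-local problem asks for far more information, and the efficient solution presented in the talk is substantially more involved.

    def lcs_length(a, b):
        # classical O(|a| * |b|) dynamic program for the length of an LCS
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                dp[i][j] = (dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1]
                            else max(dp[i - 1][j], dp[i][j - 1]))
        return dp[m][n]

    print(lcs_length("dynamic", "semidynamic"))   # 7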

16 October 2012, 16:00, MS.05

Lyndon Words and Short Superstrings

In the Shortest-Superstring problem, we are given a set of strings S and want to find a string that contains all strings in S as substrings and has minimum length. This is a classical problem in approximation and the best known approximation factor is 2 1/2, given by Sweedyk in 1999. Since then, no improvement has been made; however, two other approaches yielding 2 1/2-approximation algorithms have been proposed by Kaplan et al. and recently by Paluch et al., both based on a reduction to maximum asymmetric TSP path (Max-ATSP-Path) and structural results of Breslauer et al.

In this talk we give an algorithm that achieves an approximation ratio of 2 11/23, breaking through the long-standing bound of 2 1/2. We use the standard reduction of Shortest-Superstring to Max-ATSP-Path. The new, somewhat surprising, algorithmic idea is to take the better of the two solutions obtained by using: (a) the currently best 2/3-approximation algorithm for Max-ATSP-Path and (b) a naive cycle-cover based 1/2-approximation algorithm. To prove that this indeed results in an improvement, we further develop a theory of string overlaps, extending the results of Breslauer et al. This theory is based on the novel use of Lyndon words, as a substitute for generic unbordered rotations and critical factorizations, as used by Breslauer et al.
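
To make the notions of overlap and superstring concrete, here is the folklore greedy merging heuristic, conjectured (but not known) to be a 2-approximation; it is unrelated to the 2 11/23 algorithm of the talk, and the example input is illustrative.

    def overlap(a, b):
        # length of the longest suffix of a that is a prefix of b
        for k in range(min(len(a), len(b)), 0, -1):
            if a.endswith(b[:k]):
                return k
        return 0

    def greedy_superstring(strings):
        # drop strings already contained in others, then repeatedly merge the pair
        # with the largest overlap until a single superstring remains
        s = [t for t in strings if not any(t != u and t in u for u in strings)]
        while len(s) > 1:
            k, i, j = max((overlap(s[i], s[j]), i, j)
                          for i in range(len(s)) for j in range(len(s)) if i != j)
            s = [s[x] for x in range(len(s)) if x not in (i, j)] + [s[i] + s[j][k:]]
        return s[0]

    print(greedy_superstring(["cate", "atech", "techno"]))   # "catechno"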

9 October 2012, 16:00, MS.05

The Power of Linear Programming for Valued CSPs

The topic of this talk is Valued Constraint Satisfaction Problems (VCSPs) and the question of how VCSPs can be solved efficiently. This problem can also be cast as how to minimise separable functions efficiently. I will present algebraic tools that have been developed for this problem and will also mention a recent result on the connection between linear programming and VCSPs (based on a paper with J. Thapper, to appear in FOCS'12).

4 July 2012, 16:00, MS.05

Algorithmic Applications of Baur-Strassen's Theorem: Shortest Cycles, Diameter and Matchings

Consider a directed or an undirected graph with integral edge weights from the set [-W, W], that does not contain negative weight cycles. In this paper, we introduce a general framework for solving problems on such graphs using matrix multiplication. The framework is based on the usage of Baur-Strassen's theorem and of Storjohann's determinant algorithm. It allows us to give new and simple solutions to the following problems:

  • Finding Shortest Cycles - We give a simple Õ(Wn^ω) time algorithm for finding shortest cycles in undirected and directed graphs. For directed graphs this matches the time bounds obtained in 2011 by Roditty and Vassilevska-Williams. On the other hand, no algorithm working in Õ(Wn^ω) time was previously known for undirected graphs with negative weights.
  • Computing Diameter - We give a simple Õ(Wn^ω) time algorithm for computing the diameter of an undirected or directed graph. This considerably improves the bounds of Yuster from 2010, who was able to obtain this time bound only in the case of directed graphs with positive weights. In contrast, our algorithm works in the same time bound for both directed and undirected graphs with negative weights.
  • Finding Minimum Weight Perfect Matchings - We present an Õ(Wn^ω) time algorithm for finding minimum weight perfect matchings in undirected graphs. This resolves an open problem posed by Sankowski in 2006, who presented such an algorithm but only in the case of bipartite graphs.

We believe that the presented framework can find applications for solving larger spectra of related problems. As an illustrative example we apply it to the problem of computing a set of vertices that lie on cycles of length at most t, for some t. We give a simple Õ(Wn^ω) time algorithm for this problem that improves over the Õ(tWn^ω) time algorithm given by Yuster in 2011.

This is joint work with Marek Cygan and Harold N. Gabow. A preliminary version of the paper appeared in arxiv.1204.1616.

29 Jun 2012, 14:00, MS.05

Analyzing Graphs via Random Linear Projections

We present a sequence of algorithmic results for analyzing graphs via random linear projections or "sketches". We start with results for evaluating basic connectivity and k-connectivity and then use these primitives to construct combinatorial sparsifiers that allow every cut to be approximated up to a factor 1+ε. Our results have numerous applications including single-pass stream algorithms for constructing sparsifiers in fully-dynamic graph streams where edges can be added and deleted in the underlying graph.

This is joint work with Kook Jin Ahn and Sudipto Guha.

19 Jun 2012, 16:00, MS.05

Pricing on Paths: A PTAS for the Highway Problem

In the highway problem, we are given an n-edge line graph (the highway), and a set of paths (the drivers), each one with its own budget. For a given assignment of edge weights (the tolls), the highway owner collects from each driver the weight of the associated path, when it does not exceed the budget of the driver, and zero otherwise. The goal is to choose weights so as to maximize the total profit.

A lot of research has been devoted to this apparently simple problem. The highway problem was shown to be strongly NP-hard only recently [Elbassioni,Raman,Ray,Sitters-'09]. The best-known approximation is O(log n/loglog n) [Gamzu,Segev-'10], which improves on the previous-best O(log n) approximation [Balcan,Blum-'06]. Better approximations are known for a number of special cases.

In this work we present a PTAS for the highway problem, hence closing the complexity status of the problem. Our result is based on a novel randomized dissection approach, which has some points in common with Arora's quadtree dissection for Euclidean network design [Arora-'98]. The basic idea is enclosing the highway in a bounding path, such that both the size of the bounding path and the position of the highway in it are random variables. Then we consider a recursive O(1)-ary dissection of the bounding path, in subpaths of uniform optimal weight. Since the optimal weights are unknown, we construct the dissection in a bottom-up fashion via dynamic programming, while computing the approximate solution at the same time. Our algorithm can be easily derandomized.

The same basic approach provides PTASs also for two generalizations of the problem: the tollbooth problem with a constant number of leaves and the maximum-feasibility subsystem problem on interval matrices. In both cases the previous best approximation factors are polylogarithmic [Gamzu,Segev-'10,Elbassioni,Raman,Ray,Sitters-'09].

Joint work with Thomas Rothvoß.

06 Jun 2012, 11:00, MS.03

Improved Lower Bounds on Crossing Numbers of Graphs Through Optimization

The crossing number problem for graphs is to draw (or embed) a graph in the plane with a minimum number of edge crossings. Crossing numbers are of interest for graph visualization, VLSI design, quantum dot cellular automata, RNA folding, and other applications. On the other hand, the problem is notoriously difficult. In 1973, Erdös and Guy wrote that: "Almost all questions that one can ask about crossing numbers remain unsolved." For example, the crossing numbers of complete and complete bipartite graphs are still unknown in general. The case of the complete bipartite graph is known as Turán's brickyard problem, and was already posed by Paul Turán in the 1940's. Moreover, even for cubic graphs, it is NP-hard to compute the crossing number.

Different types of crossing numbers may be defined by restricting drawings; thus the k-page (book) crossing number corresponds to drawings where all vertices are drawn on a line (the spine of a book), and each edge on one of k planes intersecting the spine (the book pages). It is conjectured that the two-page and normal crossing numbers coincide for complete and complete bipartite graphs. In this talk, we will survey some recent results, where improved lower bounds were obtained for (k-page) crossing numbers of complete and complete bipartite graphs through the use of optimization techniques. (Joint work with D.V. Pasechnik and G. Salazar)

22 May 2012, 16:00, MS.05

Arithmetic Progressions in Sumsets via (Discrete) Probability, Geometry and Analysis

If A is a large subset of {1,...,N}, then how long an arithmetic progression must A+A = {a+b : a,b in A} contain? Answers to this ostensibly combinatorial question were given by Bourgain and then Green, both using some very beautiful Fourier analytic techniques. Here I plan to discuss a new attack on the problem that rests on a lemma in discrete geometry, proven by a random sampling technique, and applied in a discrete analytic context. We shall not use any deep results, and we shall try to keep things as self-contained as possible.

Based on joint work with Ernie Croot and Izabella Laba.

18 May 2012, 16:00, B3.02

Induced Matchings, Arithmetic Progressions and Communication

Extremal Combinatorics is one of the central branches of discrete mathematics which deals with the problem of estimating the maximum possible size of a combinatorial structure which satisfies certain restrictions. Often, such problems have also applications to other areas including Theoretical Computer Science, Additive Number Theory and Information Theory. In this talk we will illustrate this fact by several closely related examples focusing on a recent work with Alon and Moitra.

15 May 2012, 16:00, MS.05

Treewidth Reduction Theorem and Algorithmic Problems on Graphs

We introduce the so-called Treewidth Reduction Theorem. Given a graph G, two specified vertices s and t, and an integer k, let C be the union of all minimal s-t (vertex) separators of size at most k. Furthermore, let G* be the graph obtained from G by contracting all the connected components of G - C into single vertices. The theorem states that the treewidth of G* is bounded by a function of k and that the graph G* can be computed in linear time for any fixed k.

The above theorem allows us to solve the following generic graph separation problem in linear time for every fixed k. Let G be a graph with two specified vertices s and t and let Z be a hereditary class of graphs. The problem asks if G has an s-t vertex separator S of size at most k such that the subgraph induced by S belongs to the class Z.

In other words, we show that this generic problem is fixed-parameter tractable. This allows us to resolve a number of seemingly unrelated open questions scattered in the literature concerning fixed-parameter tractability of various graph separation problems under specific constraints.

The role of the Treewidth Reduction Theorem is that it reduces an arbitrary instance of the given problem to an instance of the problem where the treewidth is bounded. Then the standard methodology using Courcelle's theorem can be applied.

The purpose of this talk is to convey the main technical ideas of the above work at an intuitive level. The talk is self-contained. In particular, no prior knowledge of parameterized complexity, treewidth, and Courcelle's theorem is needed. Everything will be intuitively defined in the first 10-15 minutes of the talk.

Joint work with D. Marx and B. O'Sullivan. Available at http://arxiv.org/abs/1110.4765.

8 May 2012, 16:00, MS.05

Exact Quantum Query Algorithms

The framework of query complexity is a setting in which quantum computers are known to be significantly more powerful than classical computers. In this talk, I will discuss some new results in the model of exact quantum query complexity, where the goal is to compute a boolean function f with certainty using the smallest possible number of queries to the input. It is known that quantum algorithms exist in this model which can achieve significant speed-ups over any possible classical algorithm; however, when f is a total function (i.e., there is no promise on the input) the best quantum speed-up known is a factor of 2.

I will present several families of total boolean functions which have exact quantum query complexity which is a constant fraction of their classical query complexity. These results were originally inspired by numerically solving the semidefinite programs characterising quantum query complexity for small problem sizes. I will also discuss the model of nonadaptive exact quantum query complexity, which can be characterised in terms of coding theory.

The talk will be based on the paper arXiv:1111.0475, which is joint work with Richard Jozsa and Graeme Mitchison.

1 May 2012, 16:00, MS.05

Representing Graphs by Words

A simple graph G=(V,E) is (word-)representable if there exists a word W over the alphabet V such that any two distinct letters x and y alternate in W if and only if (x,y) is an edge in E. If W is k-uniform (each letter of W occurs exactly k times in it) then G is called k-representable. It is known that a graph is representable if and only if it is k-representable for some k. The minimum k for which a representable graph G is k-representable is called its representation number.
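
The definition is easy to check directly; a minimal sketch is given below, with an illustrative 2-uniform word for the 4-cycle (which is a circle graph and hence 2-representable).

    def alternate(word, x, y):
        # x and y alternate in word iff the subword over {x, y} has no equal neighbours
        sub = [c for c in word if c in (x, y)]
        return all(a != b for a, b in zip(sub, sub[1:]))

    def represents(word, vertices, edges):
        # edges is a set of frozensets {x, y}
        return all(alternate(word, x, y) == (frozenset((x, y)) in edges)
                   for i, x in enumerate(vertices) for y in vertices[i + 1:])

    word = [2, 1, 3, 2, 4, 3, 1, 4]       # each letter occurs twice: a 2-uniform word
    cycle_edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (1, 4)]}
    print(represents(word, [1, 2, 3, 4], cycle_edges))   # True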

Representable graphs first appeared in algebra, in the study of the Perkins semigroup, which has played a central role in semigroup theory since 1960, particularly as a source of examples and counterexamples. However, these graphs also have connections to robotic scheduling, and they are interesting from a combinatorial and graph-theoretic point of view (for example, representable graphs are a generalization of circle graphs, which are exactly the 2-representable graphs).

Some questions one can ask about representable graphs are as follows. Are all graphs representable? How do we characterize those graphs that are (non-)representable? How many representable graphs are there? How large can the representation number be for a graph on n nodes?

In this talk, we will go through these and some other questions stating what progress has been made in answering them. In particular, we will see that a graph is representable if and only if it admits a so-called semi-transitive orientation. This allows us to prove a number of results about representable graphs, not least that 3-colorable graphs are representable. We also prove that the representation number of a graph on n nodes is at most n, from which one concludes that the recognition problem for representable graphs is in NP. This bound is tight up to a constant factor, as there are graphs whose representation number is n/2.

24 April 2012, 16:00, MS.05

Making Markov Chains Less Lazy

There are only a few methods for analysing the rate of convergence of an ergodic Markov chain to its stationary distribution. One is the canonical path method of Jerrum and Sinclair. This method applies to Markov chains which have no negative eigenvalues. Hence it has become standard practice for theoreticians to work with lazy Markov chains, which do absolutely nothing with probability 1/2 at each step. This must be frustrating for practitioners, who want to use the most efficient Markov chain possible.
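
Concretely, the lazy version of a chain with transition matrix P is (I + P)/2, which shifts every eigenvalue λ to (1 + λ)/2 ≥ 0; the small numpy example below (not taken from the talk) illustrates this on a periodic two-state chain.

    import numpy as np

    P = np.array([[0.0, 1.0],
                  [1.0, 0.0]])          # periodic two-state chain; eigenvalues {1, -1}
    P_lazy = 0.5 * (np.eye(2) + P)      # stay put with probability 1/2 at each step

    print(np.linalg.eigvals(P))         # contains -1
    print(np.linalg.eigvals(P_lazy))    # eigenvalues {1, 0}: no negative eigenvalues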

I will explain how laziness can be avoided by the use of a twenty-year-old lemma of Diaconis and Stroock, or my recent modification of that lemma. Other relevant approaches will also be discussed. A strength of the new result is that it can be very easy to apply. We illustrate this by revisiting the analysis of Jerrum and Sinclair's well-known chain for sampling perfect matchings of a graph.

16 April 2012, 16:00, MS.04

Polynomial-Time Approximation Schemes for Shortest Path with Alternatives

Consider the generic situation that we have to select k alternatives from a given ground set, where each element in the ground set has a random arrival time and cost. Once we have done our selection, we will greedily select the first arriving alternative, and the total cost is the time we had to wait for this alternative plus its random cost. Our motivation to study this problem comes from public transportation, where each element in the ground set might correspond to a bus or train, and the usual user behavior is to greedily select the first option from a given set of alternatives at each stop. We consider the arguably most natural arrival time distributions for such a scenario: exponential distributions, uniform distributions, and distributions with monotonically decreasing linear density functions. For exponential distributions, we show how to compute an optimal policy for a complete network, called a shortest path with alternatives, in O(n(log n + δ^3)) time, where n is the number of nodes and δ is the maximal outdegree of any node, making this approach practicable for large networks if δ is relatively small. Moreover, for the latter two distributions, we give PTASs for the case that the distribution supports differ by at most a constant factor and only a constant number of hops are allowed in the network, both reasonable assumptions in practice. These results are obtained by combining methods from low-rank quasi-concave optimization with fractional programming. We finally complement these results by showing that the problem is NP-hard for general distributions.

23 March 2012, 16:00, MS.05

Minimum Degree Thresholds for Perfect Matchings in Hypergraphs

Given positive integers k and r where 4 divides k and k/2 ≤ r ≤ k-2, we give a minimum r-degree condition that ensures a perfect matching in a k-uniform hypergraph. This condition is essentially best possible and improves on work of Pikhurko, who gave an asymptotically exact result. Our approach makes use of the Hypergraph Removal Lemma as well as a structural result of Keevash and Sudakov relating to the Turan number of the expanded triangle. This is joint work with Yi Zhao.

21 March 2012, 16:00, MS.05

Robust Optimization over Integers

Robust optimization is an approach for optimization under uncertainty that has recently attracted attention both from theory and practitioners. While there is an elaborate and powerful machinery for continuous robust optimization problems, results on robust combinatorial optimization and robust linear integer programs are still rare and hardly general. We consider robust counterparts of integer programs and combinatorial optimization problems, i.e., seek solutions that stay feasible if at most Γ-many parameters change within a given range.

We show that one can optimize a (not necessarily binary) cost-robust problem whenever one can optimize a slightly modified version of the deterministic problem. Further, if there is a ρ-approximation for the modified deterministic problem, we give a method by which the cost-robust counterpart attains a (ρ+ε)-approximation (for minimization problems; for maximization we get a 2ρ-approximation), or again a ρ-approximation in a slightly more restricted case. We further show that general integer linear programs in which a single constraint, or a few constraints, are subject to uncertainty can be solved, provided the corresponding problem with piecewise linear constraint functions can be solved. In case these programs are binary, it suffices to solve the underlying non-robust program (n+1) times.

We demonstrate the applicability of our approach on two classes of integer programs, namely, totally unimodular integer programs and integer programs with two variables per inequality. Further, for combinatorial optimization problems our method yields polynomial time approximations and pseudopolynomial, exact algorithms for robust Unbounded Knapsack Problems.

This is joint work with Kai-Simon Goetzmann (TU Berlin) and Claudia Telha (MIT).

20 March 2012, 16:00, MS.05

The Complexity of Computing the Sign of the Tutte Polynomial

The Tutte polynomial of a graph is a two-variable polynomial that captures many interesting properties of the graph. In this talk, I will describe the polynomial and then discuss the complexity of computing the sign of the polynomial. Having fixed the two parameters, the problem is, given an input graph, to determine whether the value of the polynomial is positive, negative, or zero. We determine the complexity of this problem over (most of) the possible settings of the parameters. Surprisingly, for a large portion of the parameter space, the problem is #P-complete. (This is surprising because the problem feels more like a decision problem than a counting problem --- in particular, there are only three possible outputs.) I'll discuss the ramifications for the complexity of approximately evaluating the polynomial. As a consequence, this resolves the complexity of computing the sign of the chromatic polynomial. Here there is a phase transition at q=32/27, which I will explain. The talk won't assume any prior knowledge about graph polynomials or the complexity of counting.
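
For reference (standard facts, included only to make the parameter space concrete), the Tutte polynomial can be written as a sum over edge subsets, and the chromatic polynomial is a specialisation along the line y = 0:

    T(G; x, y) = \sum_{A \subseteq E} (x-1)^{r(E)-r(A)} (y-1)^{|A|-r(A)}, \qquad r(A) = |V| - c(V, A),

    P(G; q) = (-1)^{r(E)} \, q^{c(G)} \, T(G; 1-q, 0),

where c(V, A) is the number of connected components of (V, A) and c(G) that of G; this is why determining the sign along y = 0 resolves the sign of the chromatic polynomial.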

(Joint work with Mark Jerrum.)

13 March 2012, 16:00, MS.05

Combinatorics of Tropical Linear Algebra

Tropical linear algebra is an emerging and rapidly evolving area of idempotent mathematics, linear algebra and applied discrete mathematics. It has been designed to solve a class of non-linear problems in mathematics, operational research, science and engineering. Besides the main advantage of dealing with non-linear problems as if they were linear, the techniques of tropical linear algebra enable us to efficiently describe complex sets, reveal combinatorial aspects of problems and view the problems in a new, unconventional way. Since 1995 we have seen a remarkable expansion of this field following a number of findings and applications in areas as diverse as algebraic geometry, phylogenetics, cellular protein production, the job rotation problem and railway scheduling.

We will give an overview of selected combinatorial aspects of tropical linear algebra with emphasis on set coverings, cycles in digraphs, transitive closures and the linear assignment problem. An application to multiprocessor interactive systems and a number of open problems will be presented.
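
For readers new to the area, the underlying arithmetic in the max-plus convention is simply

    a \oplus b = \max(a, b), \qquad a \otimes b = a + b, \qquad (A \otimes B)_{ij} = \max_{k} \,(a_{ik} + b_{kj}),

so tropical powers of the weight matrix of a digraph record maximum-weight walks, which is one source of the connections to cycles, transitive closures and the assignment problem mentioned above.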

6 March 2012, 16:00, MS.05

Every Property of Hyperfinite Graphs is Testable

The analysis of complex networks like the webgraph, social networks, metabolic networks or transportation networks is a challenging problem. One problem that has drawn a significant amount of attention is the question of how to classify the domain to which a given network belongs, i.e. whether it is, say, a social network or a metabolic network. One empirical approach to solve this problem uses the concept of network motifs. A network motif is a subgraph that appears more frequently in a certain class of graphs than in a random graph. This approach raises the theoretical question of which structural properties of a graph we can learn by looking at small subgraphs, and how one can analyze graph structure by looking at random samples, as these will typically contain the frequent subgraphs.

Obviously, it is not possible to analyze classical structural properties like, for example, connectivity by only looking at small subgraphs. One needs a relaxed and more robust definition of graph properties. Such a definition is given by the concept of property testing. In my talk, and within the framework of property testing, I will give a partial answer to the question of what we can learn about graph properties from the distribution of constant-sized subgraphs. I will show that every planar graph with constant maximum degree is defined up to epsilon n edges by its distribution (frequency) of subgraphs of constant size. This result implies that every property of planar graphs is testable in the property testing sense.
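
To make "the distribution of constant-sized subgraphs" concrete, here is a minimal sketch (an illustration in the bounded-degree model, not the proof technique of the talk) of how local statistics are estimated from constantly many samples: pick random vertices and record a description of their radius-k neighbourhood.

    import random
    from collections import Counter, deque

    def k_disc(adj, v, k):
        """Return a crude invariant of the radius-k neighbourhood of v.

        `adj` maps each vertex to a list of neighbours (bounded degree assumed).
        The 'type' here is just the sorted multiset of (distance, degree) pairs,
        standing in for a proper isomorphism type.
        """
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            if dist[u] == k:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return tuple(sorted((d, len(adj[u])) for u, d in dist.items()))

    def sample_local_distribution(adj, k=2, samples=500):
        """Empirical distribution of k-disc types at uniformly random vertices."""
        vertices = list(adj)
        counts = Counter(k_disc(adj, random.choice(vertices), k) for _ in range(samples))
        return {t: c / samples for t, c in counts.items()}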

(Joint work with Ilan Newman)

5 March 2012, 10:00, MS.05

Improved Approximations for Monotone Submodular k-Set Packing and General k-Exchange Systems

In the weighted k-set packing problem, we are given a collection of k-element sets, each with a weight, and seek a maximum weight collection of pairwise disjoint sets. In this talk, we consider a generalization of this problem in which the goal is to find a pairwise disjoint collection of sets that maximizes a monotone submodular function.

We present a novel combinatorial algorithm for the problem, which is based on the notion of non-oblivious local search, in which a standard local search process is guided by an auxiliary potential function distinct from the problem's objective. Specifically, we use a potential function inspired by an algorithm of Berman for weighted maximum independent set in (k+1)-claw free graphs. Unfortunately, moving from the linear (weighted) case to the monotone submodular case introduces several difficulties, necessitating a more nuanced approach. The resulting algorithm guarantees a (k + 3)/2 approximation, improving on the performance of the standard, oblivious local search algorithm by a factor of nearly 2 for large k.
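
In abstract form, the non-oblivious idea is only this: the search moves between feasible solutions exactly as ordinary local search does, but accepts a move when it improves an auxiliary potential g rather than the objective f. The sketch below shows the generic scheme only; the specific potential from the talk is not reproduced here.

    def non_oblivious_local_search(initial, neighbours, f, g):
        """Generic local search guided by a potential g instead of the objective f.

        `neighbours(S)` yields feasible solutions reachable from S by one local
        move; `f` is the objective we ultimately care about and `g` the guiding
        potential (oblivious local search is the special case g = f).  Returns a
        g-local optimum together with its objective value.
        """
        current = initial
        improved = True
        while improved:
            improved = False
            for candidate in neighbours(current):
                if g(candidate) > g(current):
                    current = candidate
                    improved = True
                    break
        return current, f(current)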

More generally, we show that our algorithm applies to all problems that can be formulated as k-exchange systems, which we review in this talk. This class of independence systems, introduced by Feldman, Naor, Schwartz, and Ward, generalizes the matroid k-parity problem in a wide class of matroids and captures many other combinatorial optimization problems. Such problems include matroid k-parity in strongly base orderable matroids, independent set in (k + 1)-claw free graphs (which includes k-set packing as a special case), k-uniform hypergraph b-matching, and maximum asymmetric traveling salesperson (here, k = 3). Our non-oblivious local search algorithm improves on the current state-of-the-art approximation performance for many of these specific problems as well.

28 February 2012, 16:00, MS.05

Hybridising Heuristic and Exact Methods to Solve Scheduling Problems

The research community has focussed on the use of heuristics and meta-heuristic methods to solve real life scheduling problems, as such problems are too large to solve exactly. However there is much to learn and utilise from exact models. This talk will explain how hybridising exact methods within heuristic techniques can enable better solutions to be obtained. Specifically, exact methods will be used to ensure that feasible solutions are guaranteed, allowing the heuristic to focus on improving secondary objectives.

Two test cases will be described. The first is a nurse rostering problem where a knapsack model is used to ensure feasibility, allowing a tabu search method to locate high quality solutions. The second is the problem of allocating medical students to clinical specialities over a number of time periods. A network flow model is hybridised within a Greedy Randomised Adaptive Search Procedure framework. It will be demonstrated that this produces better solutions than using GRASP on its own.

14 February 2012, 16:00, MS.05

Local Matching Dynamics in Social Networks

Stable marriage and roommates problems are the classic approach to model resource allocation with incentives and occupy a central position in the intersection of computer science and economics. There are a number of players that strive to be matched in pairs, and the goal is to obtain a stable matching from which no pair of players has an incentive to deviate. In many applications, a player is not aware of all other players and must explore the population before finding a good match. We incorporate this aspect by studying stable matching under dynamic locality constraints in social networks. Our interest is to understand local improvement dynamics and their convergence to matchings that are stable with respect to their imposed information structure in the network.

6 February 2012, 17:00, MS.03

Approximating Graphic TSP by Matchings

We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains all graphs of maximum degree three and all claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.

31 January 2012, 16:00, MS.05

The Quadratic Assignment Problem: on the Borderline Between Hard and Easy Special Cases

The quadratic assignment problem (QAP) is a well studied and notoriously hard combinatorial optimisation problem both from the theoretical and the practical point of view. Other well known combinatorial optimisation problems as for example the travelling salesman problem or the linear arrangement problem can be seen as special cases of the QAP.
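
In Koopmans-Beckmann form, with coefficient matrices A = (a_{ij}) and B = (b_{kl}), the problem reads

    \min_{\pi \in S_n} \ \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} \, b_{\pi(i)\pi(j)},

and the special cases discussed below impose structural conditions on the matrices A and B.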

In this talk we will first introduce the problem and briefly review some important computational complexity results. Then we will focus on polynomially solvable special cases and introduce structural properties of the coefficient matrices of the QAP which guarantee the efficient solvability of the problem.

We will show that most of these special cases have the so called constant permutation property, meaning that the optimal solution of the problem does not depend on the concrete realization of the coefficient matrices as soon as those matrices possess the structural properties which make the QAP polynomially solvable, as mentioned above. We will show that the borderline between hard and easy special cases is quite thin, in the sense that slight perturbations of the structural properties mentioned above are enough to produce NP-hard special cases again. We will conclude with an outlook on further research and some open special cases which could be well worth analysing next in terms of computational complexity.

25 January 2012, 16:00, MS.05

Extended Formulations in Combinatorial Optimization

Applying the polyhedral method to a combinatorial optimization problem usually requires a description of the convex hull P of the set of feasible solutions. Typically, P is determined by an exponential number of inequalities, and a complete description is often hopeless, even for polynomial-time solvable problems. In some cases, P can be represented in a simpler way as the projection of some polyhedron Q in a higher-dimensional space, where often Q is defined by a much smaller system of constraints than P. In this talk we discuss techniques to obtain extended formulations for combinatorial optimization problems, and cases where extended formulations prove useful.
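
A textbook example of the phenomenon (an illustration, not taken from the talk): the cross-polytope P = { x ∈ R^n : Σ_i |x_i| ≤ 1 } has 2^n facets, yet it is the projection onto the x-variables of a polyhedron described by only 2n + 1 inequalities:

    Q = \{ (x, y) \in \mathbb{R}^n \times \mathbb{R}^n : -y_i \le x_i \le y_i \ \text{for all } i, \ \ \textstyle\sum_i y_i \le 1 \}, \qquad P = \mathrm{proj}_x(Q).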

The work presented includes papers with M. Conforti, A. Del Pia, B. Gerards, and L. Wolsey.

24 January 2012, 16:00, MS.05

The "Power of ..." Type Results in Parallel Machine Scheduling

In this talk, I will present the results on parametric analysis of the power of pre-emption for two and three uniform machines, with the speed of the fastest machine as a parameter. For identical parallel machines, I will present a series of results on the impact that adding an extra machine may have on the makespan and the total completion time. For the latter models, the solution approaches to the problem of the cost-optimal choice of the number of machines are reported.

10 January 2012, 16:00, MS.05

Robust Markov Decision Processes

Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments. However, the solutions of MDPs are of limited practical use due to their sensitivity to distributional model parameters, which are typically unknown and have to be estimated by the decision maker. To counter the detrimental effects of estimation errors, we consider robust MDPs that offer probabilistic guarantees in view of the unknown parameters. To this end, we assume that an observation history of the MDP is available. Based on this history, we derive a confidence region that contains the unknown parameters with a pre-specified probability 1-β. Afterwards, we determine a policy that attains the highest worst-case performance over this confidence region. By construction, this policy achieves or exceeds its worst-case performance with a confidence of at least 1-β. Our method involves the solution of tractable conic programs of moderate size.
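
Schematically, such policies come from a worst-case version of the Bellman recursion, in which the expectation over the unknown transition probabilities is replaced by a minimum over the confidence region U_{s,a} (a generic sketch of robust dynamic programming; the talk's contribution lies in how the region is built from the observation history and in the resulting conic programs):

    V(s) \;=\; \max_{a} \ \min_{p \in U_{s,a}} \Big( r(s,a) + \gamma \sum_{s'} p(s')\, V(s') \Big).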

6 December 2011, 16:00, MS.05

Independent Sets in Hypergraphs

We say that a hypergraph is stable if each sufficiently large subset of its vertices either spans many hyperedges or is very structured. Hypergraphs that arise naturally in many classical settings possess the above property. For example, the famous stability theorem of Erdős and Simonovits and the triangle removal lemma of Ruzsa and Szemerédi imply that the hypergraph on the vertex set E(K_n) whose hyperedges are the edge sets of all triangles in K_n is stable. In the talk, we will present the following general theorem: If (H_n)_n is a sequence of stable hypergraphs satisfying certain technical conditions, then a typical (i.e., uniform random) m-element independent set of H_n is very structured, provided that m is sufficiently large. The above abstract theorem has many interesting corollaries, some of which we will discuss. Among other things, it implies sharp bounds on the number of sum-free sets in a large class of finite Abelian groups and gives an alternate proof of Szemerédi's theorem on arithmetic progressions in random subsets of integers.

Joint work with Noga Alon, József Balogh, and Robert Morris.

5 December 2011, 14:00, MS.05

Creating a Partial Order and Finishing the Sort

In 1975, Mike Fredman showed that, given a set of distinct values satisfying an arbitrary partial order, one can finish sorting the set in a number of comparisons equal to the information theory bound (Opt) plus at most 2n. However, it appeared that determining what comparisons to do could take exponential time. Shortly after this, several people, including myself, wondered about the complementary problem (Fredman's "starting point") of arranging elements into a given partial order. My long term conjecture was that one can do this, using a number of comparisons equal to the information theory lower bound for the problem plus a lower order term plus O(n), through a reduction to multiple selection. Along the way some key contributions to the problems were made, including the development of graph entropy and the work of Andy Yao (1989) and Jeff Kahn and Jeong Han Kim (1992). Both problems were eventually solved, in work with Jean Cardinal, Sam Fiorini, Gwen Joret and Raph Jungers (2009, 2010), using the techniques of graph entropy, multiple selection and merging, in polynomial time, to determine the anticipated Opt + o(Opt) + O(n) comparisons. The talk will discuss this long term development and some amusing stops along the way.
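
For orientation, the two information-theoretic quantities referred to above can be written as follows, where e(P) denotes the number of linear extensions of the partial order P (standard counting bounds, not the new results): finishing the sort requires at least log_2 e(P) comparisons, while producing P from an unsorted set requires at least

    \log_2 \frac{n!}{e(P)} \;=\; \log_2 n! \;-\; \log_2 e(P)

comparisons, and the results mentioned above match these bounds up to lower-order and O(n) terms.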

29 November 2011, 16:00, MS.05

Statistical Mechanics Approach to the Problem of Counting Large Subgraphs in Random Graphs

Counting the number of distinct copies of a given pattern in a graph can be a difficult problem when the sizes of both the pattern and the graph get large. This talk will review statistical mechanics approaches to some instances of this problem, focusing in particular on the enumeration of large matchings and long circuits of a graph. The outcomes of these methods are conjectures on the typical number of such patterns in large random graphs, and heuristic algorithms for counting and constructing them. In the case of the matchings, the heuristic statistical mechanics method has also been turned into a mathematically rigorous proof.

22 November 2011, 16:00, MS.05

Applications and Studies in Modular Decomposition

The modular decomposition is a powerful tool for describing combinatorial structures (such as graphs, tournaments, posets or permutations) in terms of smaller ones. Since its appearance in a talk by Fraisse in the 1950s, and first appearance in print by Gallai in the 1960s, it has appeared in a wide variety of settings ranging from game theory to combinatorial optimisation.

In this talk, after discussing some of these various historical settings, I will present a number of different settings where the modular decomposition has influenced my own research, including the enumeration and structural theory of permutations (in particular the general study of well-quasi-ordering), and -- quite unrelatedly -- recent connections made with the celebrated reconstruction conjecture.

Of increasing importance to this work has been our growing understanding of the "prime" structures: those that cannot be broken down into smaller structures under the modular decomposition. Following the work of Schmerl and Trotter in the early 1990s, there is now an industry of researchers looking at the fine structure of these objects, and I will present some recent work in this area. Taking a "broader" view, we also know, in the case of permutations, a Ramsey-theoretic result for these prime structures: every sufficiently long prime permutation must contain a subpermutation belonging to one of three families. However, it still remains to translate this result into one for graphs, and I will close by exploring some of the difficulties and differences discovered in our attempts to make this conversion.

15 November 2011, 16:00, MS.05

Random Graphs on Spaces of Negative Curvature

Random geometric graphs have been well studied over the last 50 years or so. These are graphs that are formed between points randomly allocated on a Euclidean space and any two of them are joined if they are close enough. However, all this theory has been developed when the underlying space is equipped with the Euclidean metric. But, what if the underlying space is curved?

The aim of this talk is to initiate the study of such random graphs and lead to the development of their theory. Our focus will be on the case where the underlying space is a hyperbolic space. We will discuss some typical structural features of these random graphs as well as some applications, related to their potential as a model for networks that emerge in social life or in biological sciences.

8 November 2011, 16:00, MS.05

Upper Bound for Centerlines

Given a set of points P in R^3, what is the best line that approximates P? We introduce a notion of approximation which is robust under the perturbations of P, and analyse how good it is.

4 November 2011, 14:00, MS.01

Algorithms for Testing FOL Properties

Algorithmic metatheorems guarantee that certain types of problems have efficient algorithms. A classical example is the theorem of Courcelle asserting that every MSOL property can be tested in linear time for graphs with bounded tree-width. Another example is a result of Frick and Grohe that every FOL property can be tested in almost linear time in graph classes with locally bounded tree-width. Such graph classes include planar graphs or graphs with bounded maximum degree. An example of an FOL property is the existence of a fixed graph as a subgraph.

We extend these results in two directions:

  • we show that FOL properties can be tested in linear time for classes of graphs with bounded expansion (this is a joint result with Zdenek Dvorak and Robin Thomas), and
  • FOL properties can be polynomially tested (with the degree of the polynomial independent of the property) in classes of regular matroids with locally bounded branch-width (this is a joint result with Tomas Gavenciak and Sang-il Oum).

An alternative proof of our first result has been obtained by Dawar and Kreutzer.

1 November 2011, 16:00, MS.05

Lower Bounds for Online Integer Multiplication and Convolution in the Cell-probe Model

We will discuss time lower bounds for both online integer multiplication and convolution in the cell-probe model. For the multiplication problem, one pair of digits, each from one of two n digit numbers that are to be multiplied, is given as input at step i. The online algorithm outputs a single new digit from the product of the numbers before step i+1. We give a lower bound of Omega((d/w)*log n) time on average per output digit for this problem where 2^d is the maximum value of a digit and w is the word size. In the convolution problem, we are given a fixed vector V of length n and we consider a stream in which numbers arrive one at a time. We output the inner product of V and the vector that consists of the last n numbers of the stream. We show an Omega((d/w)*log n) lower bound for the time required per new number in the stream. All the bounds presented hold under randomisation and amortisation. These are the first unconditional lower bounds for online multiplication or convolution in this popular model of computation.
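
To make the convolution problem concrete, here is the naive streaming algorithm against which the lower bound should be read: it spends Θ(n) word operations per arriving number, far above the Ω((d/w) log n) bound (a plain illustration of the problem statement, with positions that have not yet arrived treated as zero).

    from collections import deque

    def online_convolution(fixed_vector, stream):
        """Naive online convolution: after each arrival, output the inner
        product of `fixed_vector` with the last n numbers of the stream."""
        n = len(fixed_vector)
        window = deque([0] * n, maxlen=n)     # last n numbers, oldest first
        for x in stream:
            window.append(x)                  # a full deque drops the oldest
            yield sum(v * w for v, w in zip(fixed_vector, window))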

14 October 2011, 14:00, MS.01

Nash Codes for Noisy Channels

We consider a coordination game between a sender and a receiver who communicate over a noisy channel.

The sender wants to inform the receiver about the state by transmitting a message over the channel. Both receive positive payoff only if the receiver decodes the received signal as the correct state. The sender uses a known "codebook" to map states to messages. When does this codebook define a Nash equilibrium?

The receiver's best response is to decode the received signal as the most likely message that has been sent. Given this decoding, an equilibrium or "Nash code" results if the sender encodes every state as prescribed by the codebook, which is not always the case. We show two theorems that give sufficient conditions for Nash codes. First, the "best" codebook for the receiver (which gives maximum expected receiver payoff) defines a Nash code.

A second, more surprising observation holds for communication over a binary channel which is used independently a number of times, a basic model of information theory: Given a consistent tie-breaking decoding rule which holds generically, ANY codebook of binary codewords defines a Nash code. This holds irrespective of the quality of the code and also for nonsymmetric errors of the binary channel.
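
As a small illustration of the receiver's best response in the binary case (a sketch assuming a memoryless binary symmetric channel with error probability p < 1/2, which is only one instance of the channels considered above): maximum-likelihood decoding then amounts to choosing the codeword closest in Hamming distance to the received word, with a fixed tie-breaking rule.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def ml_decode(received, codebook):
        """Maximum-likelihood decoding over a binary symmetric channel, p < 1/2.

        `codebook` maps each state to its binary codeword (a tuple of bits).
        Each flipped bit multiplies the likelihood by p/(1-p) < 1, so the most
        likely sent codeword is the one at minimum Hamming distance; ties are
        broken consistently by the fixed order of the states.
        """
        return min(codebook, key=lambda s: (hamming(received, codebook[s]), s))

    codebook = {0: (0, 0, 0), 1: (1, 1, 1)}    # a toy repetition code
    print(ml_decode((1, 0, 1), codebook))      # -> 1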

(Joint work with P. Hernandez)

4 October 2011, 16:00, MS.05

On Clique Separator Decomposition of Some Hole-Free Graph Classes

(joint work with Vassilis Giakoumakis and Frédéric Maffray)

In a finite undirected graph G=(V,E), a vertex set Q ⊆ V is a clique separator if the vertices in Q are pairwise adjacent and G\Q has more connected components than G. An atom of G is a subgraph of G without clique separators. Tarjan and Whitesides gave polynomial time algorithms for determining clique separator decomposition trees of graphs. By a well-known result of Dirac, a graph is chordal if and only if its atoms are cliques.
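
The definition translates directly into a check (a direct transcription of the definition, not the decomposition algorithms of Tarjan or Whitesides): Q is a clique separator if its vertices are pairwise adjacent and removing them increases the number of connected components.

    def num_components(vertices, adj, removed=frozenset()):
        """Number of connected components after deleting the vertices in `removed`."""
        seen, count = set(removed), 0
        for s in vertices:
            if s in seen:
                continue
            count += 1
            stack = [s]
            while stack:
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                stack.extend(w for w in adj[u] if w not in seen)
        return count

    def is_clique_separator(Q, vertices, adj):
        """Check the definition: Q is a clique and G - Q has more components than G."""
        Q = list(Q)
        is_clique = all(v in adj[u] for i, u in enumerate(Q) for v in Q[i + 1:])
        return is_clique and num_components(vertices, adj, frozenset(Q)) > num_components(vertices, adj)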

A hole is a chordless cycle of length at least five. Hole-free graphs generalize chordal graphs; G is chordal if and only if it is C_4-free and hole-free. We characterize hole- and diamond-free graphs (hole- and paraglider-free graphs, respectively) in terms of the structure of their atoms. Here, a diamond is K_4 minus an edge, and a paraglider is the result of substituting an edge into one of the vertices of a C_4; equivalently, it is the complement graph of the disjoint union of P_2 and P_3. Thus, hole- and paraglider-free graphs generalize chordal graphs and are perfect. Hole- and diamond-free graphs generalize chordal bipartite graphs (which are exactly the triangle-free hole-free graphs).

Our structural results have various algorithmic implications. Thus, the problems Recognition, Maximum Independent Set, Maximum Clique, Coloring, Minimum Fill-In, and Maximum Induced Matching can be solved efficiently on hole- and paraglider-free graphs.

30 August 2011, 16:00, MS.05

Network Design under Demand Uncertainties

Telecommunication network design has been an extremely fruitful area for the application of discrete mathematics and optimization. To cover for uncertain demands, traffic volumes are typically highly overestimated. In this talk, we adapt the methodology of robust optimization to obtain more resource- and cost-efficient network designs. In particular, we generalize valid inequalities for network design to the robust network design problem, and report on their added value.

25 July 2011, 16:00, MS B3.02

Cutting-Planes for the Max-Cut Problem

We present a cutting-plane scheme based on the separation of some product-type classes of valid inequalities for the max-cut problem. These include triangle, odd clique, rounded psd and gap inequalities. In particular, gap inequalities were introduced by Laurent & Poljak and include many other known inequalities as special cases. Yet, they have received little attention so far and are poorly understood. This paper presents the first ever computational results, showing that gap inequalities yield extremely strong upper bounds in practice.
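
As an example of the simplest class above: with a 0/1 variable x_e indicating whether edge e crosses the cut, every triangle {i, j, k} is crossed by an even number of cut edges, which yields the triangle inequalities

    x_{ij} + x_{ik} + x_{jk} \le 2, \qquad x_{ij} - x_{ik} - x_{jk} \le 0 \quad (\text{and symmetrically in } i, j, k).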

Joint work with Professor Adam N. Letchford and Dr. Konstantinos Kaparis, Lancaster University.

12 July 2011, 16:00, MS.05

Holographic Algorithms

Using the notion of polynomial time reduction computer scientists have discovered an astonishingly rich web of interrelationships among the myriad computational problems that arise in diverse applications. These relationships can be used both to give evidence of intractability, such as that of NP-completeness, as well as to provide efficient algorithms.

In this talk we discuss a notion of a holographic reduction that is more general than the traditional one in the following sense. Instead of locally mapping solutions one-to-one, it maps them many-to-many but preserves some measure such as the sum of the solutions. One application is to finding new polynomial time algorithms where none was known before. Another is to give evidence of intractability. There are pairs of related problems that can be contrasted in this manner. For example, for a skeletal version of Cook’s 3CNF problem (restricted to be planar and where every variable occurs twice positively) the problem of counting the solutions modulo 2 is NP-hard, but counting them modulo 7 is polynomial time computable. Holographic methods have proved useful in establishing dichotomy theorems, which offer a more systematic format for distinguishing the easy from the probably hard. Such theorems state that for certain wide classes of problems every member is either polynomial time computable, or complete in some class conjectured to contain intractable problems.

29 June 2011, 16:00, MS.05

New and Old Algorithms for Matroid and Submodular Optimization

Many practical problems can be formulated in the language of matroids and submodular functions. Objective functions of many optimization problems are submodular. Such functions also model "economies of scale" and "law of diminishing returns" in a natural way. We study the problem of maximizing a submodular function subject to k matroid constraints. We show how our analytic framework can be adapted for various types of submodular functions such as symmetric, monotone or linear. We show that the local search algorithm has a performance guarantee of roughly 2/k for the matroid hypergraph matching problem, where k is the size of the largest hyperedge. At the end of the talk, we will discuss randomized rounding algorithms for matroid intersection and matroid base polytopes with additional constraints (or objectives) that can be represented as polynomial constraints. While the measure concentration phenomenon is well-studied in probability theory, the concentration of polynomial functions of independent random variables is a relatively recent development (Kim-Vu (2000)). Our randomized rounding is inherently dependent, so we need to prove our own concentration inequalities for such processes. We will discuss multiple applications of such concentration bounds in optimization.
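
For reference, the submodularity used throughout is the diminishing-returns property

    f(A \cup \{e\}) - f(A) \;\ge\; f(B \cup \{e\}) - f(B) \qquad \text{for all } A \subseteq B \text{ and } e \notin B,

equivalently f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A and B.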

28 June 2011, 16:00, MS.05

Ultra-Fast Rumor Spreading in Models of Real-World Networks

In this talk an analysis of the popular push-pull protocol for spreading a rumor on networks will be presented. Initially, a single node knows of a rumor. In each succeeding round, every node chooses a random neighbor, and the two nodes share the rumor if one of them is already aware of it. We present the first theoretical analysis of this protocol on two models of random graphs that have a power law degree distribution with an arbitrary exponent β > 2. In particular, we study preferential attachment graphs and random graphs with a given expected degree sequence.
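
The protocol itself is simple enough to state as code (a plain synchronous simulation, not the analysis):

    import random

    def push_pull_rounds(adj, source):
        """Simulate synchronous push-pull rumor spreading and return the number
        of rounds until every node of the connected graph `adj` is informed."""
        informed = {source}
        rounds = 0
        while len(informed) < len(adj):
            rounds += 1
            newly_informed = set()
            for u in adj:
                v = random.choice(adj[u])           # every node calls a random neighbour
                if u in informed or v in informed:  # push or pull: the rumor crosses the
                    newly_informed.update((u, v))   # edge if either endpoint already knew
                                                    # it at the start of the round
            informed |= newly_informed
        return rounds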

The main findings reveal a striking dichotomy in the performance of the protocol that depends on the exponent of the power law. More specifically, we show that if 2 < β < 3, then the rumor spreads to almost all nodes in Θ(log log n) rounds with high probability. This is exponentially faster than all previously known upper bounds for the push-pull protocol established for various classes of networks. On the other hand, if β > 3, then Ω(log n) rounds are necessary.

I will also discuss the asynchronous version of the push-pull protocol, where the nodes do not operate in rounds, but exchange information according to a Poisson process with rate 1. Surprisingly, if 2 < β < 3, the rumor spreads even in constant time, which is much smaller than the typical distance of two nodes.

This is joint work with N. Fountoulakis, T. Sauerwald.

21 June 2011, 16:00, MS.05

On the Erdős-Szekeres Problem and its Modifications

In our talk, we shall concentrate on a classical problem of combinatorial geometry going back to P. Erdős and G. Szekeres. First of all, we shall introduce and discuss the minimum number g(n) such that from any set of g(n) points in general position in the plane, one can choose a set of n points which are the vertices of a convex n-gon. Further, we shall proceed to multiple important modifications of the quantity g(n). In particular we shall consider a value h(n): its definition is almost the same as the just-mentioned definition of g(n); one should only replace in it “a convex n-gon” by “a convex empty n-gon”. Also we shall generalize h(n) to h(n, k) and to h(n, mod q), where the previous condition “an n-gon is empty” is substituted either by the condition “an n-gon contains at most k points” (so that h(n) = h(n, 0)) or by the condition “an n-gon contains 0 points modulo q”. Finally, we shall speak about various chromatic versions of the above quantities.
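
For context (classical facts, not part of the new work), Erdős and Szekeres proved

    2^{n-2} + 1 \;\le\; g(n) \;\le\; \binom{2n-4}{n-2} + 1

and conjectured that the lower bound is tight; the quantities h(n), h(n, k) and h(n, mod q) ask how this picture changes when emptiness-type conditions are imposed on the n-gon.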

We shall present a series of recent achievements in the field, and we shall discuss some new approaches and conjectures.

20 June 2011, 16:00, MS.05

Optimal (Fully) LZW-compressed Pattern Matching

We consider the following variant of the classical pattern matching problem motivated by the increasing amount of digital data we need to store: given an uncompressed pattern s[1..m] and a compressed representation of a string t[1..N], does s occur in t? I will present a high-level description of an optimal linear time algorithm which detects the occurrence in t compressed using the Lempel-Ziv-Welch method (widely used in real-life applications due to its simplicity and relatively good compression ratio), thus answering a question of Amir, Benson, and Farach from 1994. Then I will show how to extend this method to solve the fully compressed version of the problem, where both the pattern and the text are compressed, also in optimal linear time, hence improving the previously known solution of Gąsieniec and Rytter, and essentially closing this line of research.
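
For readers unfamiliar with the compression method, here is a standard textbook implementation of LZW, included only to fix what "compressed representation" means; it is not part of the matching algorithm.

    def lzw_compress(text):
        """Standard LZW: emit dictionary codes for the longest known prefixes,
        growing the dictionary by one entry per emitted code."""
        dictionary = {chr(b): b for b in range(256)}
        current, codes = "", []
        for ch in text:
            if current + ch in dictionary:
                current += ch
            else:
                codes.append(dictionary[current])
                dictionary[current + ch] = len(dictionary)
                current = ch
        if current:
            codes.append(dictionary[current])
        return codes

    print(lzw_compress("abababab"))   # [97, 98, 256, 258, 98]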

14 June 2011, 16:00, MS.05

Understanding the Kernelizability of Multiway Cut

In this talk I will present results of my ongoing research whose goal is to understand kernelizability of the multiway cut problem. To make the talk self-contained, I will start from the definition of kernelization, accompanied by a simple kernelization procedure for the Vertex Cover problem. Then I will define the multiway cut problem, briefly overview the existing parameterization results, and provide reasons why understanding kernelizability of the problem is an interesting question.
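
One standard such procedure for Vertex Cover is the Buss kernel, sketched below under the usual parameterization by solution size k (the talk may of course use a different one): any vertex of degree larger than k must be in every solution of size at most k, and once all degrees are at most k, a yes-instance can have at most k^2 edges.

    def buss_kernel(edges, k):
        """Kernelization for Vertex Cover parameterized by solution size k.

        Returns (kernel_edges, remaining_k, forced_vertices), where the forced
        vertices must belong to every vertex cover of size at most k, or None
        if the instance is a no-instance.
        """
        edges = set(map(frozenset, edges))
        forced = set()
        while True:
            degree = {}
            for e in edges:
                for v in e:
                    degree[v] = degree.get(v, 0) + 1
            high = next((v for v, d in degree.items() if d > k), None)
            if high is None:
                break
            forced.add(high)                         # high-degree rule
            edges = {e for e in edges if high not in e}
            k -= 1
            if k < 0:
                return None
        if len(edges) > k * k:                       # k vertices of degree <= k cover <= k^2 edges
            return None
        return edges, k, forced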

In the final part of my talk I will present a kernelization algorithm for a special, yet NP-hard, case of the multiway cut problem and discuss possible ways of its generalization.

7 June 2011, 16:00, MS.05

Fractional Colouring and Pre-colouring Extension of Graphs

A vertex-colouring of a graph is an assignment of colours to the vertices of the graph so that adjacent vertices receive different colours. The minimum number of colours needed for such a colouring is called the chromatic number of the graph. Now suppose that certain vertices are already pre-coloured, and we want to extend this partial colouring to a colouring of the whole graph. Because of the pre-coloured vertices, we may need more colours than just the chromatic number. How many extra colours are needed under what conditions has been well-studied, and we will give a short overview of those results.

A different way of colouring the vertices is so-called fractional colouring. For such a colouring we are given an interval [0,K] of real numbers, and we need to assign to each vertex a subset of [0,K] of measure one so that adjacent vertices receive disjoint subsets. The fractional chromatic number is the minimum K for which this is possible.
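
Equivalently (a standard reformulation), the fractional chromatic number is the optimum of a covering linear program over the independent sets S of G:

    \chi_f(G) \;=\; \min \Big\{ \sum_{S} x_S \;:\; \sum_{S \ni v} x_S \ge 1 \ \text{for every vertex } v, \ \ x_S \ge 0 \Big\},

and the interval formulation above attains the same optimum K.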

Again we can look at this problem assuming that certain vertices are already pre-coloured (are already assigned a subset of measure one). Assuming some knowledge about the pre-coloured vertices, what K is required to guarantee that we can always extend this partial colouring to a fractional colouring of the whole graph? The answer to this shows a surprising dependence on the fractional chromatic number of the graph under consideration.

This is joint work with Dan Kral, Martin Kupec, Jean-Sebastien Sereni and Jan Volec.

31 May 2011, 16:00, MS.05

Partitioning Posets

It is well known and easy to prove that every graph with m edges has a cut containing at least m/2 edges. While the complete graph shows that the constant ½ cannot be improved, Edwards established a more precise extremal bound that includes lower order terms. The problem of determining such bounds is called the extremal maxcut problem and many variants of it have been studied. In the first part of the talk, we consider a natural analogue of the extremal maxcut problem for posets and some of its generalisations. (Whereas a graph cut is a set of all edges that cross some bipartition of the graph’s vertex set, we shall define a poset cut to be a set of all comparable pairs that cross some order-preserving partition of the poset’s ground set.)
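
For completeness, the m/2 bound is the one-line expectation argument, and Edwards' refinement adds the lower-order term (both statements are for graphs; the poset analogue is the subject of the talk):

    \mathbb{E}\big[\#\{\text{edges cut by a uniformly random bipartition}\}\big] \;=\; \sum_{e \in E} \Pr[e \text{ is cut}] \;=\; \frac{m}{2},
    \qquad
    \mathrm{mc}(G) \;\ge\; \frac{m}{2} + \frac{\sqrt{8m+1}-1}{8}.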

The algorithmic maxcut problem for graphs, i.e. the problem of determining the size of the largest cut in a graph, is well known to be NP-hard. In the second part of the talk, we examine the complexity of the poset analogue of the max-cut problem and some of its generalisations.

24 May 2011, 16:00, MS.05

Solution to an Edge-coloring Conjecture of Grunbaum

By a classical result of Tait, the four color theorem is equivalent to the statement that each 2-edge-connected 3-regular planar graph has a 3-edge-coloring. An embedding of a graph into a surface is called polyhedral if its dual has no multiple edges or loops. A conjecture of Grunbaum, presented in 1968, states that each 3-regular graph with a polyhedral embedding into an orientable surface has a 3-edge-coloring. With respect to the result of Tait, it aims to generalize the four color theorem for any orientable surface. We present a negative solution to this conjecture, showing that for each orientable surface of genus at least 5, there exists a 3-regular non 3-edge-colorable graph with a polyhedral embedding into the surface.

17 May 2011, 16:00, MS.05

Achlioptas Process Phase Transitions are Continuous

It is widely believed that certain simple modifications of the random graph process lead to discontinuous phase transitions. In particular, starting with the empty graph on $n$ vertices, suppose that at each step two pairs of vertices are chosen uniformly at random, but only one pair is joined, namely one minimizing the product of the sizes of the components to be joined. Making explicit an earlier belief of Achlioptas and others, in 2009, Achlioptas, D'Souza and Spencer conjectured that there exists a δ>0 (in fact, δ\ge 1/2) such that with high probability the order of the largest component `jumps' from o(n) to at least δ n in o(n) steps of the process, a phenomenon known as `explosive percolation'. We give a simple proof that this is not the case. Our result applies to all `Achlioptas processes', and more generally to any process where a fixed number of independent random vertices are chosen at each step, and (at least) one edge between these vertices is added to the current graph, according to any (online) rule. We also prove the existence and continuity of the limit of the rescaled size of the giant component in a class of such processes, settling a number of conjectures. Intriguing questions remain, however, especially for the product rule described above.
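
For readers who want to experiment, the process is easy to simulate with a union-find structure (a plain simulation of the product rule as described above, with the convention that a pair inside a single component scores the square of its size; the proof is another matter entirely).

    import random

    def simulate_product_rule(n, steps):
        """Simulate the product-rule Achlioptas process on n vertices and
        return the order of the largest component after `steps` steps."""
        parent, size = list(range(n)), [1] * n

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]        # path halving
                v = parent[v]
            return v

        def union(u, v):
            ru, rv = find(u), find(v)
            if ru != rv:
                if size[ru] < size[rv]:
                    ru, rv = rv, ru
                parent[rv] = ru
                size[ru] += size[rv]

        for _ in range(steps):
            pairs = [(random.randrange(n), random.randrange(n)) for _ in range(2)]
            u, v = min(pairs, key=lambda p: size[find(p[0])] * size[find(p[1])])
            union(u, v)
        return max(size[find(v)] for v in range(n))

    print(simulate_product_rule(100000, 90000))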

Joint work with Oliver Riordan.

3 May 2011, 16:00, MS.05

Dynamics of Boolean Networks - An Exact Solution

In his seminal work Kauffman introduced a very simple dynamical model of biological gene-regulatory networks. Each gene was modeled by a binary variable that can be in an ON/OFF state and interacts with other genes via a coupling Boolean function which determines the state of a gene at the next time-step. It was argued that this model, also known as Random Boolean network (RBN) or Kauffman net, is relevant to the understanding of biological systems. RBNs belong to a larger class of Boolean networks that exhibits a rich dynamical behavior, which is very versatile and has found its use in the modeling of genetic, neural and social networks as well as in many other branches of science.
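
A minimal simulation makes the setup concrete (N genes, each reading K randomly chosen inputs through a random Boolean function, updated synchronously); this is the basic Kauffman net only, without the noise and quenched-disorder analysis described below.

    import random

    def random_boolean_network(n, k):
        """Build a Kauffman net: each node gets k random inputs and a random
        Boolean function, stored as a truth table over the 2**k input patterns."""
        inputs = [random.sample(range(n), k) for _ in range(n)]
        tables = [[random.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
        return inputs, tables

    def step(state, inputs, tables):
        """One synchronous update of all nodes."""
        new_state = []
        for i in range(len(state)):
            pattern = 0
            for j in inputs[i]:
                pattern = (pattern << 1) | state[j]
            new_state.append(tables[i][pattern])
        return new_state

    inputs, tables = random_boolean_network(n=20, k=2)
    state = [random.randint(0, 1) for _ in range(20)]
    for _ in range(10):
        state = step(state, inputs, tables)
    print(state)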

The annealed approximation has proved to be a valuable tool in the analysis of large scale Boolean networks as it allows one to predict the time evolution of network activity (proportion of ON/OFF states) and Hamming distance (the difference between the states of two networks of identical topology) order parameters. The broad validity of the annealed approximation to general networks of this type has remained an open question; additionally, while the annealed approximation provides accurate activity and Hamming distance results for various Boolean models with quenched disorder it cannot compute correlation functions, used in studying memory effects. In particular, there are models with strong memory effects where the annealed approximation is not valid in specific regimes.

In the current work we study the dynamics of a broad class of Boolean networks with quenched disorder and thermal noise using the generating functional analysis; the analysis is general and covers a large class of recurrent Boolean networks and related models. We show that results for the Hamming distance and network activity obtained via the quenched and annealed approaches, for this class, are identical. In addition, stationary solutions of Hamming distance and two-time autocorrelation function (inaccessible via the annealed approximation) coincide, giving insight into the uniform mapping of states within the basin of attraction onto the stationary states. In the presence of noise, we show that above some noise level the system is always ergodic and explore the possibility of spin-glass phase below this level. Finally, we show that our theory can be used to study the dynamics of models with strong memory effects.

Joint work with Alexander Mozeika

27 April 2011, 16:00, MS.05

The Traveling Salesman Problem: Theory and Practice in the Unit Square

In the Traveling Salesman Problem (TSP) we are given a collection of cities and the distance between each pair, and asked to find the shortest route that visits all the cities and returns to its starting place. When writers in the popular press wish to talk about NP-completeness, this is the problem they typically use to illustrate the concept, but how typical is it really? The TSP has also been highly attractive to theorists, who have proved now-classical results about the worst-case performance of heuristics for it, but how relevant to practice are those results?

In this talk I provide a brief introduction to the TSP, its applications, and key theoretical results about it, and then report on experiments that address both the above questions. I will concentrate on randomly generated instances with cities uniformly distributed in the unit square, which I will argue provide a reasonable surrogate for the instances arising in many real-world TSP applications. I'll first survey the performance of heuristics on these instances, and then report on an ongoing study into the average length and structure of their optimal tours, based on extensive data generated using state-of-the-art optimization software for the TSP, which can regularly find optimal solutions to TSP instances with 1000 cities or more.
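
To give a concrete sense of the experimental setup (uniform random points in the unit square, a tour, its length), here is the nearest-neighbour construction heuristic; this is only an illustration of the instance model, not the state-of-the-art optimization software referred to in the talk. For such instances the optimal tour length is known to grow like a constant times √n.

    import math
    import random

    def random_instance(n):
        return [(random.random(), random.random()) for _ in range(n)]

    def tour_length(points, order):
        return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def nearest_neighbour_tour(points):
        """Greedy tour: always move to the closest unvisited city."""
        unvisited = set(range(1, len(points)))
        tour = [0]
        while unvisited:
            last = tour[-1]
            nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
            unvisited.remove(nxt)
            tour.append(nxt)
        return tour

    points = random_instance(1000)
    print(tour_length(points, nearest_neighbour_tour(points)))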

26 April 2011, 16:00, MS.05

No-wait Flow Shop with Batching Machines

Scheduling problems with batching machines are extensively considered in the literature. Batching means that sets of jobs which are processed on the same machine must be grouped into batches. Two types of batching machines have been considered, namely s-batching machines and p-batching machines. For s-batching machines, the processing time of a batch is given by the sum of the processing times of the jobs in the batch, whereas, for p-batching machines the processing time of batches is given by the maximum of the processing times of the jobs in the batch.

In this talk we consider no-wait flowshop scheduling problems with batching machines. The first problem that will be considered is the no-wait flowshop with two and with three p-batching machines. For these problems we characterize the optimal solution and we give a polynomial time algorithm to minimize the makespan for the two-machine problem. For the three-machine problem we show that the number of batches can be limited to nine and give an example where all optimal schedules have seven batches. The second problem that will be considered is the no-wait flowshop with two machines, where the first machine is a p-batching machine and the second machine is an s-batching machine. We show that makespan minimization is NP-hard and we present some polynomial cases by reducing the scheduling problem to a matching problem with minimal cost in a specific graph.

15 March 2011, 16:00, MS.05

Space Fullerenes

A (geometric) fullerene is a 3-valent polyhedron whose faces are hexagons and pentagons (so, 12 of them). A fullerene is said to be Frank-Kasper if its hexagons are adjacent only to pentagons; there are four such fullerenes: with 20, 24, 26 and 28 vertices.

A space fullerene is a 4-valent 3-periodic tiling of R^3 by Frank-Kasper fullerenes. Space fullerenes are interesting in Crystallography (metallic alloys, zeolites, clathrates) and in Discrete Geometry. 27 such physical structures, all realized by alloys, were already known.

A new computer enumeration method has been devised for enumerating the space fullerenes with a small fundamental domain under their translation groups: 84 structures with at most 20 fullerenes in the reduced unit cell (i.e. by a Bieberbach group) were found. The 84 obtained structures have been compared with the 27 physical ones and all known special constructions: by Frank-Kasper-Sullivan, Shoemaker-Shoemaker, Sadoc-Mossieri and Deza-Shtogrin. 13 obtained structures are among the above 27, including A_{15}, Z, C_{15} and 4 other Laves phases.

Moreover, there are 16 new proportions of 20-, 24-, 26-, 28-vertex fullerenes in the unit cell. 3 of them provide the first counterexamples to a conjecture by Rivier-Aste, 1996, and to the old conjecture by Yarmolyuk-Kripyakevich, 1974, that the proportion should be a conic linear combination of proportions (1:3:0:0), (2:0:0:1), (3:2:2:0) of A_{15}, C_{15}, Z.

So, a new challenge to practical Crystallography and Chemistry is to check the existence of alloys, zeolites, or other compounds having one of the 71 new geometrical structures.

This is joint work with Mathieu Dutour and Olaf Delgado.

8 March 2011, 16:00, MS.05

Elementary Polycycles and Applications

Given q \in \mathbb{N} and R \subset \mathbb{N}, an (R,q)-polycycle is a non-empty, 2-connected, planar, locally finite (i.e. any circle contains only a finite number of its vertices) graph G with faces partitioned into two non-empty sets F_1 and F_2, so that:

  1. all elements of F_1 (called proper faces) are combinatorial i-gons with i \in R,
  2. all elements of F_2 (called holes, the exterior face(s) are amongst them) are pair-wise disjoint, i.e. have no common vertices,
  3. all vertices have degree within {2,...,q} and all interior (i.e. not on the boundary of a hole) vertices are q-valent.

Such a polycycle is called elliptic, parabolic or hyperbolic when 1/q + 1/r - 1/2 (where r = max_{i \in R} i) is positive, zero or negative, respectively.

A bridge of an (R,q)-polycycle is an edge, which is not on a boundary and goes from a hole to a hole (possibly the same hole). An (R,q)-polycycle is called elementary if it has no bridges. An open edge of an (R,q)-polycycle is an edge on a boundary, such that each of its end-vertices has degree less than q. Every (R,q)-polycycle is formed by the agglomeration of elementary (R,q)-polycycles along their open edges.

We classify all elliptic elementary (R,q)-polycycles and present various applications.

23 Feb 2011, 16:00, B3.03

Triangle-intersecting Families of Graphs

A family of graphs F on a fixed set of n vertices is said to be `triangle-intersecting' if for any two graphs G and H in F, the intersection of G and H contains a triangle. Simonovits and S\'{o}s conjectured that such a family has size at most $\frac{1}{8}2^{{n \choose 2}}$, and that equality holds only if F consists of all graphs containing some fixed triangle. Recently, the author, Yuval Filmus and Ehud Friedgut proved a strengthening of this conjecture, namely that if F is an odd-cycle-intersecting family of graphs, then $|F| \leq \tfrac{1}{8} 2^{{n \choose 2}}$. Equality holds only if $F$ consists of all graphs containing some fixed triangle. A stability result also holds: an odd-cycle-intersecting family with size close to the maximum must be close to a family of the above form. We will outline proofs of these results, which use Fourier analysis, together with an analysis of the properties of random cuts in graphs, and some results from the theory of Boolean functions. We will then discuss some related open questions.

All will be based on joint work with Yuval Filmus (University of Toronto) and Ehud Friedgut (Hebrew University of Jerusalem).

15 Feb 2011, 16:00, MS.05

Overlap Colourings of Graphs: A Generalization of Multicolourings

An r-multicolouring of a graph allocates r colours (from some `palette') to each vertex of a graph such that the colour sets at adjacent vertices are disjoint; in an (r,\lambda) overlap colouring the sets at adjacent vertices must also overlap by \lambda colours. The (r,\lambda) chromatic number, \chi_{r,\lambda}(G), is the smallest possible palette size. Classifying graphs by their overlap chromatic properties turns out to be strictly finer than by their multichromatic properties but not as fine as by their cores.

I shall survey what is currently known: basically, everything concerning series-parallel (i.e. K_4-minor-free) and wheel graphs, and the asymptotics for complete graphs.

11 February 2011, 14:00, B3.03

Number Systems and Data Structures

The interrelationship between numerical representations and data structures is a fruitful one. However, in many write-ups this connection has not been made explicit. As far as we know, their usage was first discussed in the seminar notes by Clancy and Knuth. Early examples of data structures relying on number systems include finger search trees and binomial queues.

In this talk, we survey some known number systems and their usage in existing worst-case efficient data structures. We formalize properties of number systems and requirements that should be imposed on a number system to guarantee efficient performance on the corresponding data structures. We introduce two new number systems: the strictly-regular system and the five-symbol skew system. We illustrate how to perform operations on the two number systems and give applications for their usage to implement worst-case efficient data structures. We also give a simple method that extends any number system supporting increments to support decrements using the same number of digit flips.

The strictly-regular system is a compact system that supports increments and decrements in a constant number of digit flips. Compared to other number systems, the strictly-regular system has distinctive properties. It is superior to the regular system for its efficient support of decrements, and superior to the extended-regular system for being more compact by using three symbols instead of four. To demonstrate the applicability of the new number system, we modify Brodal's meldable priority queues, making delete require at most 2lg(n)+O(1) element comparisons (improving the bound from 7lg(n)+O(1)) while maintaining the efficiency and the asymptotic time bounds for all operations.

The five-symbol skew system also supports increments and decrements with a constant number of digit flips. In this number system the weight of the ith digit is 2^i - 1, and hence it can be used to implement efficient structures that rely on complete binary trees. As an application, we implement a priority queue as a forest of heap-ordered complete binary trees. The resulting data structure guarantees O(1) worst-case cost per insert and O(lg(n)) worst-case cost per delete.
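
For comparison, here is the classical three-symbol skew binary system (digits 0, 1, 2, with the ith digit having weight 2^i - 1 just as above, at most one digit equal to 2, and that digit the lowest nonzero one). It already supports increments with O(1) digit changes; the point of the five-symbol system above is to support decrements as well. The sketch below is the textbook version, not the new system.

    def skew_value(digits):
        """Value of a skew binary representation; digit at index i (0-based) has
        weight 2**(i+1) - 1, i.e. the weights are 1, 3, 7, 15, ..."""
        return sum(d * (2 ** (i + 1) - 1) for i, d in enumerate(digits))

    def skew_increment(digits):
        """Increment in at most two digit changes.

        Canonical form: digits lie in {0, 1, 2}, at most one digit equals 2,
        and if a 2 occurs it is the lowest nonzero digit.
        """
        i = next((j for j, d in enumerate(digits) if d != 0), None)
        if i is not None and digits[i] == 2:
            # 2 * (2**(i+1) - 1) + 1 == 2**(i+2) - 1, so carry into position i + 1.
            digits[i] = 0
            if i + 1 == len(digits):
                digits.append(0)
            digits[i + 1] += 1
        else:
            if not digits:
                digits.append(0)
            digits[0] += 1
        return digits

    d = []
    for _ in range(8):
        skew_increment(d)
    print(d, skew_value(d))   # [1, 0, 1] 8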

8 February 2011, 16:00, MS.05

The Complexity of the Constraint Satisfaction Problem: Beyond Structure and Language

The Constraint Satisfaction Problem (CSP) is concerned with the feasibility of satisfying a collection of constraints. The CSP paradigm has proven to be useful in many practical applications.

In a CSP instance, a set of variables must be assigned values from some domain. The values allowed for certain (ordered) subsets of the variables are restricted by constraint relations. The general CSP is NP-hard. However there has been considerable success in identifying tractable fragments of the CSP: these have traditionally been characterised in one of two ways:
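
To fix the basic objects, here is a naive backtracking solver (an illustration of the definitions only; it has nothing to do with the tractable classes discussed below): an instance consists of variables with domains and constraints given by a scope and a relation of allowed tuples.

    def solve_csp(domains, constraints, assignment=None):
        """Naive backtracking search for a CSP.

        `domains` maps each variable to an iterable of values; `constraints` is
        a list of (scope, relation) pairs, where scope is a tuple of variables
        and relation is a set of allowed value tuples.  Returns a satisfying
        assignment or None.
        """
        assignment = dict(assignment or {})

        def consistent():
            for scope, relation in constraints:
                if all(v in assignment for v in scope):
                    if tuple(assignment[v] for v in scope) not in relation:
                        return False
            return True

        unassigned = [v for v in domains if v not in assignment]
        if not unassigned:
            return assignment
        var = unassigned[0]
        for value in domains[var]:
            assignment[var] = value
            if consistent():
                result = solve_csp(domains, constraints, assignment)
                if result is not None:
                    return result
            del assignment[var]
        return None

    # A toy instance: x, y, z over {0, 1, 2} with constraints x < y and y != z.
    domains = {"x": [0, 1, 2], "y": [0, 1, 2], "z": [0, 1, 2]}
    lt = {(a, b) for a in range(3) for b in range(3) if a < b}
    ne = {(a, b) for a in range(3) for b in range(3) if a != b}
    print(solve_csp(domains, [(("x", "y"), lt), (("y", "z"), ne)]))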

The sets of variables that are constrained in any CSP instance can be abstracted to give a hypergraph structure for the instance. Subproblems defined by limiting the allowed hypergraphs are called structural. The theory of tractable structural subproblems is analogous to the theory of tractable conjunctive query evaluation in relational databases and many of the tractable cases derive from generalisations of acyclic hypergraphs. We have several important dichotomy theorems for the complexity of structural subproblems.

Alternatively, it is possible to restrict the set of relations which can be used to define constraints. Subproblems defined in this way are called relational. It turns out that the complexity of relational subproblems can be studied by analysing a universal algebraic object: the clone of polymorphisms. This algebraic analysis is well advanced and again there are impressive dichotomy theorems for relational subproblems.

As such, it is timely to consider so-called hybrid subproblems which can neither be characterised by structural nor relational restrictions. This exciting new research direction shows considerable promise. In this talk we present several of the new results (tractable classes) for hybrid tractability: Turan, Broken Triangle, Perfect and Pivots.

18 January 2011, 16:00, MS.05

Auction and Equilibrium in Sponsored Search Market Design

The Internet-enabled sponsored search market is one of those that have been attracting extensive attention. Within this framework of a new market design, the pricing and allocation of on-line advert placement through auctions or market equilibria has become a very important topic both in theory and in practice.

Within this context, we discuss incentive issues of both the market maker (the seller) and the market participants (the buyers) within the market equilibrium paradigm, and discuss existence, convergence and polynomial time solvability results with the comparison to auction protocols.

11 Jan 2011, 16:00, MS.05

Polylogarithmic Approximation for Edit Distance and the Asymmetric Query Complexity

We present a near-linear time algorithm that approximates the edit distance between two strings within a significantly better factor than previously known. This result emerges in our investigation of edit distance from a new perspective, namely a model of asymmetric queries, for which we present near-tight bounds.
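
For reference, exact edit distance has a textbook quadratic-time dynamic program; the point of this line of work is to approximate the distance much faster, since quadratic time is prohibitive for long strings.

    def edit_distance(s, t):
        """Classic O(|s|*|t|) dynamic program for Levenshtein distance
        (minimum number of insertions, deletions and substitutions)."""
        prev = list(range(len(t) + 1))
        for i, a in enumerate(s, start=1):
            cur = [i]
            for j, b in enumerate(t, start=1):
                cur.append(min(prev[j] + 1,              # delete a
                               cur[j - 1] + 1,           # insert b
                               prev[j - 1] + (a != b)))  # substitute or match
            prev = cur
        return prev[-1]

    print(edit_distance("kitten", "sitting"))   # 3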

Another consequence of this new model is the first rigorous separation between edit distance and Ulam distance, by tracing the hardness of edit distance to phenomena that were not used by previous analyses.

[Joint work with Alexandr Andoni and Krzysztof Onak.]

7 December 2010, 16:00, MS.05

All Ternary Permutation Constraint Satisfaction Problems Parameterized Above Average Have Kernels with a Quadratic Number of Variables

We will consider the most general ternary Permutation Constraint Satisfaction Problem (CSP) and observe that all other ternary Permutation-CSPs can be reduced to this one. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering alpha of V that maximises the number of triples whose ordering (under alpha) follows that of the constraint.
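
From this definition, the objective of a candidate ordering is straightforward to evaluate (counting code only; the substance of the result is the kernel bound below):

    def satisfied_constraints(ordering, constraints):
        """Count the constraints (a, b, c) whose three variables appear in the
        order a, b, c under the linear ordering `ordering` (a list of variables)."""
        position = {v: i for i, v in enumerate(ordering)}
        return sum(position[a] < position[b] < position[c] for a, b, c in constraints)

    constraints = [("x", "y", "z"), ("z", "y", "x"), ("y", "x", "z")]
    print(satisfied_constraints(["x", "y", "z"], constraints))   # 1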

We prove that all ternary Permutation-CSPs parameterized above average (which is a tight lower bound) have kernels with a quadratic number of variables.

1 December 2010, 11:00, B3.20 (WBS Scarman Road)

Defining Real-world Vehicle Routing Problems: An Object-based Approach

Much interest is currently being expressed in "Real-world" VRPs. But what are they and how can they be defined? The conventional problem formulations for the VRP are rarely extended to deal with many real-world constraints, and they rapidly become complex and unwieldy as the number and complexity of constraints increase. In Software Engineering practice, complex systems and constraints are commonplace, and the prevailing modelling and programming paradigm is Object-Oriented Programming. This talk will present an OOP model for the VRP and show its application to some classic VRP variants as well as some real-world problem domains.

To reserve your place please contact Sue Shaw <S.Shaw@wbs.ac.uk>.

30 November 2010, 16:00, MS.05

A Role for the Lovasz Theta Function in Quantum Mechanics

The mathematical study of the transmission of information without error was initiated by Shannon in the 50s. Only in 1979 did Lovasz solve the major open problem posed by Shannon on this topic. The solution is based on a now well-known object called the Lovasz theta function. Its role greatly contributed to the development of areas of Mathematics like semidefinite programming and extremal problems in combinatorics. The Lovasz theta function is an upper bound to the zero-error capacity, but it is not always tight.
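
One standard formulation of the theta function of a confusability graph G (vertices are channel inputs, edges join inputs that can be confused) is the semidefinite program

    \vartheta(G) \;=\; \max \Big\{ \sum_{i,j} B_{ij} \;:\; B \succeq 0, \ \ \mathrm{tr}(B) = 1, \ \ B_{ij} = 0 \ \text{for every edge } ij \Big\},

and it satisfies α(G) ≤ Θ(G) ≤ ϑ(G), where α(G) is the largest number of pairwise non-confusable inputs and Θ(G) is the Shannon (zero-error) capacity of the graph.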

In the last two decades quantum information theory established itself as the natural extension of information theory to the quantum regime, i.e. for the study of information storage and transmission with the use of quantum systems and dynamics. I will show that the Lovasz theta function is an upper bound to the zero-error capacity when the parties of a channel can share certain quantum physical resources. This quantity, which is called entanglement-assisted zero-error capacity, can be greater than the classical zero-error capacity, in both single use of the channel and asymptotically.

Additionally, I will propose a physical interpretation of the Lovasz theta function as the maximum violation of certain noncontextual inequalities. These are inequalities traditionally used to study the difference between classical, quantum mechanical, and more exotic theories of nature.

This is joint work with Adan Cabello (Sevilla), Runyao Duan (Tsinghua/Sydney), and Andreas Winter (Bristol/Singapore).

23 November 2010, 16:00, MS.05

Boundedness, Rigidity and Global Rigidity of Direction-Length Frameworks

A mixed graph G=(V;D,L) is a graph G together with a bipartition (D,L) of its edge set. A d-dimensional direction-length framework (G,p) is a mixed graph G together with a map p:V->R^d. We imagine that the vertices are free to move in R^d subject to the constraints that the directions of the direction edges and the lengths of the length edges remain constant. The framework is rigid if the only such motions are simple translations.

Two frameworks (G,p) and (G,q) are equivalent if the edges in D have the same directions and the edges in L have the same lengths in both (G,p) and (G,q). The framework (G,p) is globally rigid if every framework which is equivalent to (G,p) can be obtained from (G,p) by a translation or a dilation by -1. It is bounded if there exists K in R such that every framework (G,q) which is equivalent to (G,p) satisfies |q(u)-q(v)| <= K for all u,v in V.

I will describe characterizations of boundedness and rigidity of generic 2-dimensional direction-length frameworks, and give partial results for the open problem of characterizing the global rigidity of such frameworks. This is joint work with Peter Keevash and Tibor Jordán.

16 November 2010, 16:00, MS.05

Grid Pattern Classes

Matrix griddings are structural means of representing permutations as built of finitely many increasing and decreasing permutations. More specifically, let M be an m × n matrix with entries m_{ij} ∈ { 0,1,-1}. We say that a permutation π admits an M-gridding if the xy-plane in which the graph Γ of π has been plotted can be partitioned into an xy-parallel, m × n rectangular grid with cells C_{ij}, such that the following hold:

  • if m_{ij} = 1 then Γ ∩ C_{ij} is an increasing sequence of points;
  • if m_{ij} = -1 then Γ ∩ C_{ij} is decreasing;
  • if m_{ij} = 0 then Γ ∩ C_{ij} = ∅.

Let G(M) denote the set (pattern class) of all permutations which admit M-griddings. Grid classes have been present in the pattern classes literature from very early on. For example, Atkinson (1999) observed that the class of permutations avoiding 321 and 2143 is equal to G((1 1)) ∩ G((1 1)^T) and used this to enumerate the class. Much more recently, grid classes have played a crucial role in Vatter's (to appear) classification of small growth rates of pattern classes.

These past uses hint strongly at the natural importance of grid classes in the general theory of pattern classes. If this is to be so, the next step is to establish `nice' general properties of grid classes themselves. A number of researchers, including M.H. Albert, M.D. Atkinson, M. Bouvel, R. Brignall, V. Vatter and myself, have been engaged on such a project over the past year, and I will report on their findings. The results are proved by an intriguing interplay of language-theoretic and combinatorial-geometric methods, the flavour of which I will try to convey. The talk will conclude with a discussion of some open problems concerning general grid classes, which ought to point the way for the next stage in this project.

9 November 2010, 16:00, MS.05

Unsatisfiability Below the Threshold(s)

It is well known that there is a sharp density threshold for a random r-SAT formula to be satisfiable, and a similar, smaller, threshold for it to be satisfied by the pure literal rule. Also, above the satisfiability threshold, where a random formula is with high probability (whp) unsatisfiable, the unsatisfiability is whp due to a large "minimal unsatisfiable subformula" (MUF).
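
For readers unfamiliar with it, the pure literal rule repeatedly satisfies any literal that never occurs negated and removes the clauses it satisfies; a minimal sketch (clauses as sets of signed integers), included here only as background:

    def pure_literal_eliminate(clauses):
        # clauses: iterable of iterables of nonzero ints (DIMACS-style literals).
        # Repeatedly satisfy pure literals; the residual formula is empty
        # iff the pure literal rule alone satisfies the input.
        clauses = [set(c) for c in clauses]
        changed = True
        while changed and clauses:
            changed = False
            literals = {lit for c in clauses for lit in c}
            pure = {lit for lit in literals if -lit not in literals}
            if pure:
                clauses = [c for c in clauses if not (c & pure)]
                changed = True
        return clauses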

By contrast, we show that for the (rare) unsatisfiable formulae below the pure literal threshold, the unsatisfiability is whp due to a unique MUF with the smallest possible "excess"; failing this, it is whp due to a unique MUF with the next larger excess, and so forth. In the same regime, we give a precise asymptotic expansion for the probability that a formula is unsatisfiable, and efficient algorithms for satisfying a formula or proving its unsatisfiability. It remains open what happens between the pure literal threshold and the satisfiability threshold. We prove analogous results for the k-core and k-colorability thresholds for a random graph, or more generally a random r-uniform hypergraph.

4 November 2010, 15:00, D1.07

Matchings in 3-uniform Hypergraphs

A theorem of Tutte characterises all graphs that contain a perfect matching. In contrast, a result of Garey and Johnson implies that the decision problem of whether an r-uniform hypergraph contains a perfect matching is NP-complete for r>2. So it is natural to seek simple sufficient conditions that ensure a perfect matching. Given an r-uniform hypergraph H, the degree of a k-tuple of vertices is the number of edges in H containing these vertices. The minimum vertex degree of H is the minimum of these degrees over all 1-tuples. The minimum codegree of H is the minimum of all the degrees over all (r-1)-tuples of vertices in H.

In recent years there has been significant progress on this problem. Indeed, in 2009 Rödl, Ruciński and Szemerédi characterised the minimum codegree that ensures a perfect matching in an r-uniform hypergraph. However, much less is known about minimum vertex degree conditions for perfect matchings in r-uniform hypergraphs H. Hàn, Person and Schacht gave conditions on the minimum vertex degree that ensures a perfect matching in the case when r>3. These bounds were subsequently lowered by Markström and Ruciński. This result, however, is believed to be far from tight. In the case when r=3, Hàn, Person and Schacht asymptotically determined the minimum vertex degree that ensures a perfect matching. In this talk we discuss a result which determines this threshold exactly. This is joint work with Daniela Kühn and Deryk Osthus.

2 November 2010, 16:00, MS.05

On Incidentor Colorings of Multigraphs

An incidentor in a directed or undirected multigraph is an ordered pair of a vertex and an arc incident to it. It is convenient to treat an incidentor as half of an arc incident to a vertex. Two incidentors of the same arc are called mated. Two incidentors are adjacent if they adjoin the same vertex. The incidentor coloring problem (indeed, a class of problems) is to color all incidentors of a given multigraph with the minimum number of colors satisfying some restrictions on colors of adjacent and mated incidentors.

A review of various results on incidentor coloring will be given in the talk.

26 October 2010, 16:00, MS.05

The Empire Colouring Problem: Old and New Results

Assume that the vertex set of a graph G is partitioned into blocks B_1, B_2, ... of size r>1, so that B_i contains vertices labelled (i-1)r+1, (i-1)r+2, ... , ir. The r-empire chromatic number of G is the minimum number of colours \chi_r(G) needed to colour the vertices of G in such a way that all vertices in the same block receive the same colour, but pairs of blocks connected by at least one edge of G are coloured differently. The decision version of this problem (termed the r-empire colouring problem) dates back to the work of Percy Heawood on the famous Four Colour Theorem (note that the 1-empire colouring problem is just planar graph colouring). In this talk I will present a survey of some old and new results on this problem. Among other things, I will focus on the computational complexity of the r-empire colouring problem and then talk about the colourability of random trees.
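
To make the block structure concrete, the following small sketch (mine, not the speaker's) checks whether an assignment of colours to blocks is a valid r-empire colouring, using the labelling above so that vertex v lies in block (v-1)//r (0-based):

    def is_valid_empire_colouring(edges, block_colours, r):
        # edges: pairs of 1-based vertex labels; block_colours maps block index
        # to a colour.  Vertices in the same block automatically share a colour;
        # we only need blocks joined by an edge to be coloured differently.
        def block(v):
            return (v - 1) // r
        return all(block(u) == block(v) or block_colours[block(u)] != block_colours[block(v)]
                   for u, v in edges)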

19 October 2010, 16:00, MS.05

Reconstruction Problems and Polynomials

There are three classical unsolved reconstruction problems: vertex reconstruction of S.M. Ulam and P.J. Kelly (1941), edge reconstruction of F. Harary (1964) and switching reconstruction of R.P. Stanley (1985). It turns out that these and similar questions are intimately connected with a wide range of important and mostly open problems related to polynomials. For example, the simplest analogue of vertex reconstruction, the reconstruction of a sequence from its subsequences, leads to Littlewood-type problems concerning polynomials with -1, 0, 1 coefficients. In switching reconstruction one has to know the number of zero coefficients in the expansion of (1-x)^n (1+x)^m, which is the same as the number of integer zeros of Krawtchouk polynomials. In this talk I will try to explain these connections and show how they can be applied to reconstruction.
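
As a small illustration of the last connection, the number of zero coefficients in (1-x)^n (1+x)^m can be computed directly by exact integer polynomial multiplication; the sketch below is for illustration only.

    def zero_coefficients(n, m):
        # count zero coefficients in the expansion of (1 - x)^n (1 + x)^m
        poly = [1]
        for s in [-1] * n + [1] * m:          # multiply by (1 + s*x)
            poly = [(poly[i] if i < len(poly) else 0)
                    + s * (poly[i - 1] if i >= 1 else 0)
                    for i in range(len(poly) + 1)]
        return sum(1 for c in poly if c == 0)

    print(zero_coefficients(1, 1))   # (1-x)(1+x) = 1 - x^2 has one zero coefficient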

12 October 2010, 16:00, MS.05

A Decidability Result for the Dominating Set Problem

We study the following question: given a finite collection of graphs G_1,...,G_k, decide whether the dominating set problem is NP-hard in the class of (G_1,...,G_k)-free graphs or not. In this talk, we prove the existence of an efficient algorithm that answers this question for k=2.

23 September 2010, 14:00, MS B3.03

Hardness and Approximation of Minimum Maximal Matching in k-regular graphs

We consider the problem of finding a maximal matching of minimum size in a graph, and in particular, in bipartite regular graphs. This problem was motivated by a stable marriage allocation problem. The minimum maximal matching is known to be NP-hard in bipartite graphs with maximum degree 3. We first extend this result to the class of $k$-regular bipartite graphs, for any fixed $k\geq 3$. In order to find some “good” solutions, we compare the size $M$ of a maximum matching and the size $MMM$ of a minimum maximal matching in regular graphs. It is well known that $M\leq 2MMM$ in any graph and we show that it can be improved to $M\leq (2-1/k)MMM$ in $k$-regular graphs. On the other hand, we analyze a greedy algorithm finding in $k$-regular bipartite graphs a maximal matching of size $MM$ satisfying $MM\leq (1-\epsilon(k))M$. It leads to a $(1-\epsilon(k))(2-1/k)$-approximation algorithm for $k$-regular bipartite graphs.
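
For background, a maximal matching is what a single greedy pass over the edges produces; the sketch below shows that generic greedy procedure, not the approximation algorithm analysed in the talk.

    def greedy_maximal_matching(edges):
        # scan the edges once and take each edge whose endpoints are both still
        # unmatched; the result is a maximal (not necessarily maximum) matching
        matched = set()
        matching = []
        for u, v in edges:
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching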

This is joint work with Tinaz Ekim and C. Tanasescu

12 August 2010, 16:00, MS B3.03

Query Complexity Lower Bounds for Reconstruction of Codes

We investigate the problem of "local reconstruction", as defined by Saks and Seshadhri (2008), in the context of error correcting codes.

The first problem we address is that of "message reconstruction", where given an oracle access to a corrupted encoding $w \in \{0,1\}^n$ of some message $x \in \{0,1\}^k$ our goal is to probabilistically recover $x$ (or some portion of it). This should be done by a procedure (reconstructor) that given an index $i$ as input, probes $w$ at few locations and outputs the value of $x_i$. The reconstructor can (and indeed must) be randomized, but all its randomness is specified in advance by a single random seed, such that with high probability ALL $k$ values $x_i$ for $1 \leq i \leq k$ are reconstructed correctly.

Using the reconstructor as a filter makes it possible to evaluate certain classes of algorithms on $x$ efficiently. For instance, in the case of a parallel algorithm, one can initialize several copies of the reconstructor with the same random seed, and they can autonomously handle decoding requests while producing outputs that are consistent with the original message $x$. Another example is that of adaptive querying algorithms, which need to know the value of some $x_i$ before deciding which index should be decoded next.

The second problem that we address is "codeword reconstruction", which is similarly defined, but instead of reconstructing the message our goal is to reconstruct the codeword itself, given an oracle access to its corrupted version.

Error correcting codes that admit message and codeword reconstruction can be obtained from Locally Decodable Codes (LDC) and Self-Correctable Codes (SCC) respectively. The main contribution of this paper is a proof that, in terms of query complexity, these are close to being the best possible constructions, even when we disregard the length of the encoding.

This is joint work with Eldar Fischer and Arie Matsliah.

10 August 2010, 16:00, MS B3.03

Altruism in Atomic Congestion Games

We study the effects of introducing altruistic agents into atomic congestion games. Altruistic behavior is modeled by a linear trade-off between selfish and social objectives. Our model can be embedded in the framework of congestion games with player-specific latency functions. Stable states are the pure Nash equilibria of these games, and we examine their existence and the convergence of sequential best-response dynamics. In general, pure Nash equilibria are often absent and existence is NP-hard to decide. Perhaps surprisingly, if all delay functions are linear, the games remain potential games even when agents are arbitrarily altruistic. This result can be extended to a class of general potential games and social cost functions, and we study a number of prominent examples.

In addition to these results for uncoordinated dynamics, we consider a scenario with a central altruistic institution that can set incentives for the agents. We provide constructive and hardness results for finding the minimum number of altruists to stabilize an optimal congestion profile and more general mechanisms to incentivize agents to adopt favorable behavior.

29 June 2010, 16:00, MS B3.03

Proportional Optimization and Fairness: Applications

The problem of allocating resources in proportion to some measure has been studied in various fields of science for a long time. The apportionment problem of allocating seats in a parliament in proportion to the number of votes obtained by political parties is one example. This presentation will show a number of other real-life problems, for instance the Liu-Layland problem, stride scheduling and fair queueing which can be formulated and solved as the problems of proportional optimization and fairness.
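
As one concrete instance of proportional fairness, here is a hedged sketch of classic stride scheduling: each client's stride is inversely proportional to its ticket count, and the client with the smallest accumulated pass value runs next (the scaling constant stride1 is an arbitrary choice).

    import heapq

    def stride_schedule(tickets, steps, stride1=10000):
        # tickets: dict name -> ticket count; at each step run the client with
        # the smallest pass value and advance its pass by stride1 / tickets
        heap = [(0.0, name, stride1 / t) for name, t in tickets.items()]
        heapq.heapify(heap)
        order = []
        for _ in range(steps):
            passv, name, stride = heapq.heappop(heap)
            order.append(name)
            heapq.heappush(heap, (passv + stride, name, stride))
        return order

    print(stride_schedule({"a": 3, "b": 1}, 8))   # "a" is scheduled about 3 times as often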

22 June 2010, 16:00, MS B3.03

A Decomposition Approach for Insuring Critical Paths

We consider a stochastic optimization problem involving protection of vital arcs in a critical path network. We analyze a problem in which task finishing times are uncertain, but can be insured a priori to mitigate potential delays. We trade off costs incurred in insuring arcs with expected penalties associated with late completion times, where lateness penalties are lower semi-continuous nondecreasing functions of completion time. We provide decomposition strategies to solve this problem with respect to either convex or nonconvex penalty functions. In particular, we employ the Reformulation-Linearization Technique to make the problem amenable to solution via Benders decomposition. We also consider a chance-constrained version of this problem, in which the probability of completing a project on time is sufficiently large.

21 June 2010, 16:00, MS.03

Energy Efficient Job Scheduling with Speed Scaling and Sleep Management

Energy usage has become a major issue in the design of microprocessors, especially for battery-operated devices. Many modern processors support dynamic speed scaling to reduce energy usage. The speed scaling model assumes that a processor, when running at speed s, consumes energy at the rate of s^\alpha, where \alpha is typically 2 or 3. In older days when speed scaling was not available, energy reduction was mainly achieved by allowing a processor to enter a low-power sleep state, yet waking up requires extra energy. It is natural to study job scheduling on a processor that allows both sleep state and speed scaling. In the awake state, a processor running at speed s>0 consumes energy at the rate s^\alpha + \sigma , where \sigma > 0 is the static power and s^\alpha is the dynamic power. In this case, job scheduling involves two components: a sleep management algorithm to determine when to work or sleep, and a speed scaling algorithm to determine which job to run and at what speed to run. Adding a sleep state changes the nature of speed scaling. Without sleep state, running a job slower is a natural way to save energy. With sleep state, one can also save energy by working faster to allow a longer sleep period. It is not trivial to strike a balance. In this talk, we will discuss some new scheduling results involving both speed scaling and sleep management.
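
To illustrate the energy model only (not the new results), the energy for processing w units of work at a constant speed s in the awake state is (w/s)(s^\alpha + \sigma); minimising this over s gives the well-known critical speed (\sigma/(\alpha-1))^{1/\alpha}, below which running slower no longer saves energy.

    def awake_energy(work, speed, alpha=3.0, sigma=1.0):
        # running time (work / speed) times total power (dynamic s^alpha + static sigma)
        return (work / speed) * (speed ** alpha + sigma)

    def critical_speed(alpha=3.0, sigma=1.0):
        # speed minimising awake energy per unit of work
        return (sigma / (alpha - 1)) ** (1.0 / alpha)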

15 June 2010, 16:00, MS B3.03

Edge Expansion in Graphs on Surfaces

Edge expansion for graphs is a well-studied measure of connectivity, which is important in discrete mathematics and computer science. While there has been much recent work done in finding approximation algorithms for determining edge expansion, there has been less attention in developing exact polynomial-time algorithms to determine edge expansion for restricted graph classes. In this talk, I will present an algorithm that, given an n-vertex graph G of genus g, determines the edge expansion of G in time n^{O(g)}.
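
For reference, the quantity in question is the minimum over nonempty vertex sets S with |S| <= |V|/2 of the number of edges leaving S divided by |S|; the brute-force sketch below only illustrates the definition and is exponential, unlike the n^{O(g)} algorithm of the talk.

    from itertools import combinations

    def edge_expansion(vertices, edges):
        # edges: pairs of vertices; exponential-time check of the definition
        vertices = list(vertices)
        n = len(vertices)
        best = float('inf')
        for k in range(1, n // 2 + 1):
            for S in combinations(vertices, k):
                S = set(S)
                cut = sum(1 for u, v in edges if (u in S) != (v in S))
                best = min(best, cut / len(S))
        return best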

8 June 2010, 16:00, MS B3.03

A Survey of Connectivity Approximation via a Survey of the Techniques

We survey some crucial techniques in approximating connectivity problems. The most general question we study is the Steiner Network problem, where we are given an undirected graph with costs on the edges, and a required number r_{ij} of paths between every pair i, j. The paths need to be vertex or edge disjoint depending on the problem. The goal is to find a minimum cost feasible solution.

The full talk has the following techniques and problems:

  1. Solving k out-connectivity in polynomial time in the edge (Edmonds) and in the vertex (Frank and Tardos) cases. This gives a simple ratio-2 approximation for edge k-connectivity.
  2. The cycle lemma of Mader: together with technique 1, it gives both results for minimum-power k-connectivity problems (see the talk for the exact definition) and an improved result for k-edge connectivity in the metric case.
  3. Laminarity and the new charging scheme of Ravi et al., which gives a much simplified proof of Jain's 2-approximation theorem for the Steiner Network problem in the edge-disjoint paths case.

2 June 2010, 16:00, MS.04

The Computational Complexity of Trembling Hand Perfection and Other Equilibrium Refinements

The king of refinements of Nash equilibrium is trembling hand perfection. In this talk, we show that it is NP-hard and SQRTSUM-hard to decide if a given pure strategy Nash equilibrium of a given three-player game in strategic form with integer payoffs is trembling hand perfect. Analogous results are shown for a number of other solution concepts, including proper equilibrium, (the strategy part of) sequential equilibrium, quasi-perfect equilibrium and CURB.

1 June 2010, 16:00, MS B3.03

Exponential Lower Bounds For Policy Iteration

We study policy iteration for infinite-horizon Markov decision processes. In particular, we study greedy policy iteration. This is an algorithm that has been found to work very well in practice, where it is used as an alternative to linear programming. Despite this, very little is known about its worst case complexity. Friedmann has recently shown that policy iteration style algorithms have exponential lower bounds in a two player game setting. We extend these lower bounds to Markov decision processes with the total-reward and average-reward optimality criteria.
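
For orientation, greedy policy iteration alternates policy evaluation with a greedy improvement step; the sketch below uses the standard discounted criterion purely to keep the linear algebra short, whereas the talk's lower bounds concern the total-reward and average-reward criteria.

    import numpy as np

    def policy_iteration(P, R, gamma=0.95):
        # P[a]: |S| x |S| transition matrix of action a; R[a]: reward vector of action a
        n_states = P[0].shape[0]
        policy = np.zeros(n_states, dtype=int)
        while True:
            # policy evaluation: solve (I - gamma * P_pi) V = R_pi
            P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
            R_pi = np.array([R[policy[s]][s] for s in range(n_states)])
            V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
            # greedy improvement: switch every state to its best one-step action
            Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
            new_policy = Q.argmax(axis=0)
            if np.array_equal(new_policy, policy):
                return policy, V
            policy = new_policy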

25 May 2010, 16:00, MS B3.03

Paths of Bounded Length and their Cuts: Parameterized Complexity and Algorithms

We study the parameterized complexity of two families of problems: the bounded length disjoint paths problem and the bounded length cut problem. By Menger's theorem, both problems are equivalent (and computationally easy) in the unbounded case for single source, single target paths. However, in the bounded case, they are combinatorially distinct and are both NP-hard, even to approximate. Our results indicate that a more refined landscape appears when we study these problems with respect to their parameterized complexity. For this, we consider several parameterizations (with respect to the maximum length l of paths, the number k of paths or the size of a cut, and the treewidth of the input graph) of all variants of both problems (edge/vertex-disjoint paths or cuts, directed/undirected). We provide several FPT-algorithms (for all variants) when parameterized by both k and l, and hardness results when the parameter is only one of k and l. Our results indicate that the bounded length disjoint-path variants are structurally harder than their bounded length cut counterparts. Also, it appears that the edge variants are harder than their vertex-disjoint counterparts when parameterized by the treewidth of the input graph. Joint work with Dimitrios M. Thilikos (Athens).

18 May 2010, 16:00, MS B3.03

Colouring Pairs of Binary Trees and the Four Colour Problem - Results and Achievements

The Colouring Pairs of Binary Trees problem was introduced by Gibbons and Czumaj, and its equivalence to the Four Colour Problem means that it is an interesting combinatorial problem. Given two binary trees Ti and Tj, the question is whether Ti and Tj can be 3-coloured in such a way that the edge adjacent to leaf k is the same colour in Ti and Tj. This talk will introduce the problem and discuss some of the results that have been achieved so far, and will also discuss the potential benefits of finding a general solution to the problem. In particular we present two approaches that lead to linear-time algorithms solving CPBT for specific sub-classes of tree pairs. This is joint work with Alan Gibbons.

11 May 2010, 16:00, MS B3.03

An Approximate Version of Sidorenko's Conjecture

A beautiful conjecture of Erdos-Simonovits and Sidorenko states that if H is a bipartite graph, then the random graph with edge density p has in expectation asymptotically the minimum number of copies of H over all graphs of the same order and edge density. This conjecture also has an equivalent analytic form and has connections to a broad range of topics, such as matrix theory, Markov chains, graph limits, and quasirandomness. Here we prove the conjecture if H has a vertex complete to the other part, and deduce an approximate version of the conjecture for all H. Furthermore, for a large class of bipartite graphs, we prove a stronger stability result which answers a question of Chung, Graham, and Wilson on quasirandomness for these graphs.

4 May 2010, 16:00, MS B3.03

Hybridizing Evolutionary Algorithms and Parametric Quadratic Programming to Solve Multi-Objective Portfolio Optimization Problems with Cardinality Constraints

The problem of portfolio selection is a standard problem in financial engineering and has received a lot of attention in recent decades. Classical mean-variance portfolio selection aims at simultaneously maximizing the expected return of the portfolio and minimizing portfolio variance, and determining all efficient (Pareto-optimal) solutions. In the case of linear constraints, the problem can be solved efficiently by parametric quadratic programming (i.e., variants of Markowitz’ critical line algorithm). However, there are many real-world constraints that lead to a non-convex search space, e.g., cardinality constraints which limit the number of different assets in a portfolio, or minimum buy-in thresholds. As a consequence, the efficient approaches for the convex problem can no longer be applied, and new solutions are needed. In this talk, we present a way to integrate an active set algorithm into a multi-objective evolutionary algorithm (MOEA). The idea is to let the MOEA come up with some convex subsets of the set of all feasible portfolios, solve a critical line algorithm for each subset, and then merge the partial solutions to form the solution of the original non-convex problem. Because this means the active set algorithm has to be run for the evaluation of every candidate solution, we also discuss how to efficiently implement parametric quadratic programming for portfolio selection. We show that the resulting envelope-based MOEA significantly outperforms other state-of-the-art MOEAs.

26 April 2010, 15:00, CS1.01

A Rigorous Approach to Statistical Database Privacy

Privacy is a fundamental problem in modern data analysis. We describe "differential privacy", a mathematically rigorous and comprehensive notion of privacy tailored to data analysis. Differential privacy requires, roughly, that any single individual's data have little effect on the outcome of the analysis. Given this definition, it is natural to ask: what computational tasks can be performed while maintaining privacy? In this talk, we focus on the tasks of machine learning and releasing contingency tables.
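
As a standard (and talk-independent) example of how such a guarantee is met for a single counting query, adding Laplace noise with scale 1/epsilon, matching the query's sensitivity of 1, yields epsilon-differential privacy:

    import numpy as np

    def laplace_count(true_count, epsilon, rng=np.random.default_rng()):
        # Laplace mechanism for a counting query (sensitivity 1):
        # the noisy answer is epsilon-differentially private
        return true_count + rng.laplace(scale=1.0 / epsilon)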

Learning problems form an important category of computational tasks that generalizes many of the computations applied to large real-life datasets. We examine what concept classes can be learned by an algorithm that preserves differential privacy. Our main result shows that it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the hypothesis class. This is a private analogue of the classical Occam's razor result.

Contingency tables are the method of choice of government agencies for releasing statistical summaries of categorical data. We provide tight bounds on how much distortion (noise) is necessary in these tables to provide privacy guarantees when the data being summarized is sensitive. Our investigation also leads to new results on the spectra of random matrices with correlated rows.

22 Apr 2010, 16:00, MS B3.03

Local Algorithms for Robotic Formation Problems

Consider a scenario with a set of autonomous mobile robots having initial positions in the plane. Their goal is to move in such a way that they eventually reach a prescribed formation. Such a formation may be a straight line between two given endpoints (short communication chain), a circle or any other geometric pattern, or just one point (gathering problem). In this talk, I consider simple local strategies for such robotic formation problems: the robots are limited to see only robots within a bounded radius; they are memoryless, without common orientation. Thus, their decisions where to move next are solely based on the relative positions of robots within the bounded radius. I will present local strategies for short communication chains and gathering, and present runtime bounds assuming different time models. All previous algorithms with a proven time bound assume global view on the positions of all robots.

16 Mar 2010, 16:00, MS B3.03

Algorithmic Meta-Theorems: Upper and Lower Bounds for the Parameterized Complexity of Problems on Graphs

In 1990, Courcelle proved a fundamental theorem stating that every property of graphs definable in monadic second-order logic can be decided in linear time on any class of graphs of bounded tree-width. This theorem is the first of what is today known as algorithmic meta-theorems, that is, results of the form: every property definable in a logic L can be decided efficiently on any class of structures with property P.

Such theorems are of interest both from a logical point of view, as results on the complexity of the evaluation problem for logics such as first-order or monadic second-order logic, and from an algorithmic point of view, where they provide simple ways of proving that a problem can be solved efficiently on certain classes of structures.

Following Courcelle's theorem, several meta-theorems have been established, primarily for first-order logic with respect to properties of structures derived from graph theory. In this talk I will motivate the study of algorithmic meta-theorems from a graph algorithmic point of view, present recent developments in the field and illustrate the key techniques from logic and graph theory used in their proofs.

So far, work on algorithmic meta-theorems has mostly focused on obtaining tractability results for as general classes of graphs as possible. The question of finding matching lower bounds, that is, intractability results for monadic second-order or first-order logic with respect to certain graph properties, has so far received much less attention. Tight matching bounds, for instance for Courcelle's theorem, would be very interesting as they would give a clean and exact characterisation of tractability for MSO model-checking with respect to structural properties of the models. In the second part of the talk I will present a recent result in this direction showing that Courcelle's theorem cannot be extended much further to classes of unbounded tree-width.

10 Mar 2010, 16:00, MS B3.03

Time-optimal Strategies for Infinite Games

The use of two-player games of infinite duration has a long history in the synthesis of controllers for reactive systems. Classically, the quality of a winning strategy is measured in the size of the memory needed to implement it. But often there are other natural quality measures: in many games (even if they are zero-sum) there are winning plays for Player 0 that are more desirable than others. In this talk, we define and compute time-optimal winning strategies for three winning conditions.

In a Poset game, Player 0 has to answer requests by satisfying a partially ordered set of events. We use a waiting-time-based approach to define the quality of a strategy and show that Player 0 has optimal winning strategies, which are finite-state and effectively computable.

In Parametric LTL, temporal operators may be equipped with free variables for time bounds. We present algorithms that determine whether a player wins a game with respect to some, infinitely many, or all variable valuations. Furthermore, we show how to determine optimal valuations that allow a player to win a game.

In a k-round Finite-time Muller game, a play is stopped as soon as some loop is traversed k times in a row. For k=n^2n!+1, the winner of the k-round Finite-time Muller game also wins the classical Muller game. For k=2, this equivalence no longer holds. For all values in between, it is open whether the games are equivalent.

9 Mar 2010, 16:00, MS B3.03

Induced Minors and Contractions - An Algorithmic View

The theory of graph minors by Robertson and Seymour is one of the most active fields in modern (algorithmic) graph theory. In this talk we will be interested in two containment relations similar to minors -- contractions and induced minors. We will survey known classic results and present some new work. In particular, I would like to talk about two recent results: (1) a polynomial-time algorithm for finding induced linkages in claw-free graphs, and (2) a polynomial-time algorithm for contractions to a fixed pattern in planar graphs. The talk will be based on joint work with Jiri Fiala, Bernard Lidicky, Daniel Paulusma and Dimitrios Thilikos.

3 Mar 2010, 14:00, MS B3.02

The Generalized Triangle-triangle Transformation in Percolation (joint seminar with Statistical Mechanics)

One of the main aims in the theory of percolation is to find the `critical probability' above which long range connections emerge from random local connections with a given pattern and certain individual probabilities. The quintessential example is Kesten's result from 1980 that if the edges of the square lattice are selected independently with probability p, then long range connections appear if and only if p>1/2. The starting point is a certain self-duality property, observed already in the early 60s; the difficulty is not in this observation, but in proving that self-duality does imply criticality in this setting.

Since Kesten's result, more complicated duality properties have been used to determine a variety of other critical probabilities. Recently, Scullard and Ziff have described a very general class of self-dual planar percolation models; we show that for the entire class (in fact, a larger class), self-duality does imply criticality.

2 Mar 2010, 16:00, MS B3.03

A Flow Model Based on Linking Systems with Applications in Network Coding

The Gaussian relay channel network is a natural candidate to model wireless networks. Unfortunately, it is not known how to determine the capacity of this model, except for simple networks. For this reason, Avestimehr, Diggavi and Tse introduced in 2007 a purely deterministic network model (the ADT model), which captures two key properties of wireless channels, namely broadcast and superposition. Furthermore, the capacity of an ADT model can be determined efficiently and approximates the capacity of Gaussian relay channels. In 2009, Amaudruz and Fragouli presented the first polynomial time algorithm to determine a relay encoding strategy that achieves the min-cut value of an ADT network.

In this talk, I will present a flow model which shares many properties with classical network flows (as introduced by Ford and Fulkerson) and includes the ADT model as a special case. The introduced flow model is based on linking systems, a structure closely related to matroids. Exploiting results from matroid theory, we obtain many interesting properties of this flow model. Furthermore, classical matroid algorithms can be used to obtain efficient algorithms for finding maximum flows, minimum cost flows and minimum cuts. This is based on joint work with Michel Goemans and Satoru Iwata.

23 Feb 2010, 16:00, MS B3.03

A Nonlinear Approach to Dimension Reduction

The celebrated Johnson-Lindenstrauss lemma says that every n points in Euclidean space can be represented using O(log n) dimensions with only a minor distortion of pairwise distances. It has been conjectured that a much-improved dimension reduction representation is achievable for many interesting data sets, by bounding the target dimension in terms of the intrinsic dimension of the data, e.g. by replacing the log(n) term with the doubling dimension. This question appears to be quite challenging, requiring new (nonlinear) embedding techniques.
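
For reference, the linear construction behind the Johnson-Lindenstrauss lemma is simply multiplication by a scaled random Gaussian matrix; the sketch below shows that baseline, while the talk is about what nonlinear maps can achieve beyond it.

    import numpy as np

    def jl_project(X, target_dim, rng=np.random.default_rng(0)):
        # X: n x d data matrix; multiply by a random Gaussian matrix scaled by
        # 1/sqrt(target_dim).  Pairwise distances are preserved up to small
        # distortion with high probability when target_dim = O(log n).
        n, d = X.shape
        G = rng.normal(size=(d, target_dim)) / np.sqrt(target_dim)
        return X @ G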

We make progress towards resolving this question by presenting two dimension reduction theorems with similar flavour to the conjecture. For some intended applications, these results can serve as an alternative to the conjectured embedding.

[Joint work with Lee-Ad (Adi) Gottlieb.]

16 Feb 2010, 16:00, MS B3.03

k-Means has Polynomial Smoothed Complexity

The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, we study the k-means method in the model of smoothed analysis.
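
For concreteness, the method under analysis is Lloyd's alternating procedure; a plain numpy sketch (illustration only):

    import numpy as np

    def kmeans(X, k, iters=100, rng=np.random.default_rng(0)):
        # alternate between assigning points to the nearest centre and moving
        # each centre to the mean of its cluster, until nothing changes
        centres = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            dists = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            new_centres = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centres[j] for j in range(k)])
            if np.allclose(new_centres, centres):
                break
            centres = new_centres
        return labels, centres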

Smoothed analysis is a hybrid of worst-case and average-case analysis, which is based on a semi-random input model in which an adversary first specifies an arbitrary input that is subsequently slightly perturbed at random. This models random influences (e.g., measurement errors or numerical imprecision) that are present in most applications, and it often yields more realistic results than a worst-case or average-case analysis.

We show that the smoothed running time of the k-means method is bounded by a polynomial in the input size and 1/sigma, where sigma is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.

This talk is based on joint work with David Arthur (Stanford University) and Bodo Manthey (University of Twente).

10 Feb 2010, 16:00, MS B3.03

Total Fractional Colorings of Graphs with Large Girth

A total coloring is a combination of a vertex coloring and an edge coloring of a graph: every vertex and every edge is assigned a color and any two adjacent/incident objects must receive distinct colors. One of the main open problems in the area of graph colorings is the Total Coloring Conjecture of Behzad and Vizing from the 1960's asserting that every graph has a total coloring with at most D+2 colors where D is its maximum degree. When relaxed to fractional total colorings, the Total Coloring Conjecture was verified by Kilakos and Reed. In the talk, we will present a proof of the following conjecture of Reed:

For every real ε > 0 and integer D, there exists g such that every graph with maximum degree D and girth at least g has total fractional chromatic number at most D+1+ε.

For D=3 and D=4,6,8,10,..., we prove the conjecture in a stronger form: there exists g such that every graph with maximum degree D and girth at least g has total fractional chromatic number equal to D+1.

Joint work with Tomas Kaiser, František Kardoš, Andrew King and Jean-Sébastien Sereni.

9 Feb 2010, 16:00, MS B3.03

Hamilton Decompositions of Graphs and Digraphs

A Hamilton decomposition of a graph or digraph G is a set of edge-disjoint Hamilton cycles which together contain all the edges of G. In 1968, Kelly conjectured that every regular tournament has a Hamilton decomposition. We recently proved an approximate version of this conjecture (joint work with D. Kuhn and A. Treglown).

I will also describe an asymptotic solution of a problem by Nash-Williams (from 1971) on the number of edge-disjoint Hamilton cycles in a graph with given minimum degree (joint work with D. Christofides and D. Kuhn).

2 Feb 2010, 16:00, MS B3.03

Time Complexity of Decision Trees

We study time complexity of decision trees over an infinite set of k-valued attributes. As time complexity measures, we consider the depth and its extension - weighted depth of decision trees. The problem under consideration is the following. Is it possible for an arbitrary finite set of attributes to construct a decision tree which recognizes values of these attributes for a given input, and has the weighted depth less than the total weight of the considered attributes? The solution of this problem for the case of depth is given in terms of independence dimension (which is closely connected with VC dimension) and a condition of decomposition of granules. Each granule can be described by a set of equations of the kind "attribute = value". The solution of the considered problem for the weighted depth is based on the solution for the depth. We also discuss the place of the obtained results in the comparative analysis of time complexity of deterministic and nondeterministic decision trees.

27 Jan 2010, 16:00, MS B3.03

Random Graph Processes

The random triangle-free graph process starts with an empty graph and a random ordering of all possible edges; in each step it considers the next edge and adds it to the graph if the graph remains triangle-free. In the same way one can define the random planar graph process, where an edge is added when the graph remains planar. In this talk the two processes are compared. For example, we show that with high probability, at the end of the random planar graph process every fixed planar graph is a subgraph, whereas in the triangle-free process dense triangle-free subgraphs will not appear.
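
The triangle-free process is straightforward to simulate; the following sketch (illustration only) adds an edge exactly when its endpoints have no common neighbour.

    import random
    from itertools import combinations

    def triangle_free_process(n, rng=random.Random(0)):
        # consider all possible edges in a random order and add an edge
        # whenever it closes no triangle
        edges = set()
        adj = {v: set() for v in range(n)}
        candidates = list(combinations(range(n), 2))
        rng.shuffle(candidates)
        for u, v in candidates:
            if not (adj[u] & adj[v]):        # no common neighbour => no triangle
                edges.add((u, v))
                adj[u].add(v)
                adj[v].add(u)
        return edges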

26 Jan 2010, 16:00, MS B3.03

Cycles in Directed Graphs

There are many theorems concerning cycles in graphs for which it is natural to seek analogous results for directed graphs. I will survey some recent results of this type, including:

  1. a solution to a question of Thomassen on an analogue of Dirac's theorem for oriented graphs,
  2. a theorem on packing cyclic triangles in tournaments that "almost" answers a question of Cuckler and Yuster, and
  3. a bound for the smallest feedback arc set in a digraph with no short directed cycles, which is optimal up to a constant factor and extends a result of Chudnovsky, Seymour and Sullivan.

These are joint work respectively with (1.) Kuhn and Osthus, (2.) Sudakov, and (3.) Fox and Sudakov.

19 Jan 2010, 16:00, MS B3.03

Solvable and Unsolvable in Cellular Automata

  1. The problem of ergodicity for cellular automata.
  2. The problem of eroders for cellular automata.

This is a joint seminar with the Centre for Complexity Science.

12 Jan 2010, 16:00, MS B3.03

Using Neighbourhood Exploration to Speed up Random Walks

We consider strategies that can be used to speed up the cover time of a random walk on undirected connected graphs. The price of this speed-up is normally some extra work that can be performed locally by the walk or by the vertices of the graph. Typical assumptions about what is allowed include: biased walk transitions, use of previous history, and local exploration of the graph.

Methods of local exploration include the neighbour marking process RW-MARK and look-ahead RW-LOOK (searching to fixed depth).

The marking process, RW-MARK, made by a random walk on an undirected graph G is as follows. Upon arrival at a vertex v, the walk marks v if unmarked and otherwise it marks a randomly chosen unmarked neighbor of v.
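
A direct simulation of RW-MARK as just described (illustration only; adjacency lists and a connected graph are assumed):

    import random

    def rw_mark(adj, start, rng=random.Random(0)):
        # on arriving at v, mark v if unmarked, otherwise mark a random unmarked
        # neighbour of v (if any); then take a standard random-walk step.
        # returns the number of steps taken until every vertex is marked
        marked = set()
        v = start
        steps = 0
        while len(marked) < len(adj):
            if v not in marked:
                marked.add(v)
            else:
                unmarked = [u for u in adj[v] if u not in marked]
                if unmarked:
                    marked.add(rng.choice(unmarked))
            v = rng.choice(adj[v])
            steps += 1
        return steps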

Depending on the degree and the expansion of the graph, we prove several upper bounds on the time required by the process RW-MARK to mark all vertices of G. If, for instance, G is the hypercube on n vertices, the process marks all vertices in time O(n) with high probability. This is significantly smaller than the n ln n cover time of the hypercube under a standard random walk.

The process RW-MARK can be compared to the marking process in which a vertex v is chosen uniformly at random (coupon collecting) at each step. For the hypercube, this also has a marking time of O(n).

In the related look-ahead process RW-LOOK, the walk marks all neighbours of the visited vertex to some depth k. For the hypercube, for example, the performance of the processes RW-LOOK-1 and CC-LOOK-1 is asymptotic to n ln 2 with high probability.

This research is joint work with Petra Berenbrink, Robert Elsaesser, Tomasz Radzik and Thomas Sauerwald.

2 Dec 2009, 16:00, MS.05

Robustness of the Rotor-router Mechanism for Graph Exploration

We consider the model of exploration of an undirected graph G by a single agent which is called the rotor-router mechanism or the Propp machine (among other names). Let p_v indicate the edge adjacent to a node v which the agent took on its last exit from v. The next time when the agent enters node v, first the "rotor" at node v advances pointer p_v to the next edge in a fixed cyclic order of the edges adjacent to v. Then the agent is directed onto edge p_v to move to the next node. It was shown before that after initial O(mD) steps, the agent periodically follows one established Eulerian cycle (that is, in each period of 2m consecutive steps the agent will traverse each edge exactly twice, once in each direction). The parameters m and D are the number of edges in G and the diameter of G. We investigate robustness of such exploration in presence of faults in the pointers p_v or dynamic changes in the graph. In particular, we show that after the exploration establishes an Eulerian cycle, if at some step k edges are added to the graph, then a new Eulerian cycle is established within O(km) steps. We show similar results for the case when the values of k pointers p_v are arbitrarily changed and when an arbitrary edge is deleted from the graph. Our proofs are based on the relation between Eulerian cycles and spanning trees known as the "BEST" Theorem (after de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte).
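
The mechanism itself is easy to state in code; in the sketch below (illustration only) adj[v] is the fixed cyclic order of v's neighbours and pointer[v] plays the role of p_v.

    def rotor_router_walk(adj, start, steps):
        # on every visit to v the rotor advances to the next edge in the fixed
        # cyclic order, and the agent leaves along the edge it now points to
        pointer = {v: 0 for v in adj}          # index into adj[v]'s fixed order
        v = start
        visits = []
        for _ in range(steps):
            pointer[v] = (pointer[v] + 1) % len(adj[v])
            v = adj[v][pointer[v]]
            visits.append(v)
        return visits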

This is joint work with: E. Bampas, L. Gasieniec, R. Klasing and A. Kosowski.

1 Dec 2009, 16:00, MS.05

Discrepancy and Signed Domination in Graphs and Hypergraphs

For a graph G, a signed domination function of G is a two-colouring of the vertices of G with colours +1 and -1 such that the closed neighbourhood of every vertex contains more +1's than -1's. This concept is closely related to combinatorial discrepancy theory as shown by Füredi and Mubayi [J. Combin. Theory Ser. B 76 (1999), 223-239]. The signed domination number of G is the minimum of the sum of colours for all vertices, taken over all signed domination functions of G. In this talk, we will discuss new upper and lower bounds for the signed domination number. These new bounds improve a number of known results.

25 Nov 2009, 16:00, MS.05

An Overview of Multi-index Assignment Problems

In this presentation we give an overview of applications of, and algorithms for, multi-index assignment problems (MIAPs). MIAPs and their relatives have a long history, both in applications and in theoretical results, starting at least in the 1950's. In particular, we focus here on the complexity and approximability of special cases of MIAPs.

A prominent example of a MIAP is the so-called axial three index assignment problem (3AP) which has many applications in a variety of domains including clustering. A description of 3AP is as follows. Given are three n-sets R, G, and B. For each triple in R × G × B a cost-coefficient c(i,j,k) is given. The problem is to find n triples such that each element is in exactly one triple, while minimizing total cost. We show positive and negative results for finding an optimal solution to this problem that depend upon different ways of how the costs c(i,j,k) are specified.

24 Nov 2009, 16:00, MS.05

Approximability and Parameterized Complexity of Minmax Values

We consider approximating the minmax value of a multiplayer game in strategic form. Tightening recent bounds by Borgs et al., we observe that approximating the value with a precision of "ε log n" digits (for any constant ε > 0) is NP-hard, where n is the size of the game. On the other hand, approximating the value with a precision of c log log n digits (for any constant c ≤ 1) can be done in quasi-polynomial time. We consider the parameterized complexity of the problem, with the parameter being the number of pure strategies k of the player for which the minmax value is computed. We show that if there are three players, k = 2 and there are only two possible rational payoffs, the minmax value is a rational number and can be computed exactly in linear time. In the general case, we show that the value can be approximated with any polynomial number of digits of accuracy in time n^{O(k)}. On the other hand, we show that minmax value approximation is W[1]-hard and hence not likely to be fixed parameter tractable. Concretely, we show that if k-CLIQUE requires time n^{Ω(k)} then so does minmax value computation.

This is joint work with Kristoffer Arnsfelt Hansen, Thomas Dueholm Hansen, and Peter Bro Miltersen.

18 Nov 2009, 16:00, MS.05

Profit-maximizing Pricing: The Highway and Tollbooth Problem

We consider the profit maximizing pricing problem for single-minded buyers. Here, we wish to sell a set of m items to n buyers, each of whom is interested in buying a single set of items. Our goal is to set prices for the items such that the profit obtained by selling the items to the buyers who can afford them is maximized. We also assume in our case, that we have arbitrarily many copies of each item to sell.

When the underlying set of items are edges of a graph, and the buyers are interested in buying specific paths, this is called the tollbooth problem. We consider the special case where the graph is a tree, or a path. In the case of a tree, the problem is already known to be APX-hard. We give an O(log n) approximation algorithm. When the graph is a path, the problem is called the highway problem. In this case, we show that the problem is strongly NP-hard, complementing an earlier QPTAS. We also consider the discount model where some items are allowed to have negative prices, and show that a very simple case is already APX-hard.

This is joint work with K. Elbassioni, S. Ray and R. Sitters.

17 Nov 2009, 16:00, MS.05

Pricing Lotteries

Randomized mechanisms, which map a set of bids to a probability distribution over outcomes rather than a single outcome, are an important but ill-understood area of computational mechanism design. We investigate the role of randomized outcomes ("lotteries") in the context of a fundamental and archetypical multi-parameter mechanism design problem: selling heterogeneous items to unit-demand bidders. To what extent can a seller improve her revenue by pricing lotteries rather than items, and does this modification of the problem affect its computational tractability? We show that the answers to these questions hinge on the exact model of consumer behavior we deploy and present several tight bounds on the increase in revenue obtainable via randomization and the computational complexity of revenue maximization in these different models. This is joint work with Shuchi Chawla, Bobby Kleinberg, and Matt Weinberg.

12 Nov 2009, 14:00, CS1.04

Some Multiobjective Optimization Problems

Multiobjective Optimization has many applications in such fields as the internet, finance, biomedicine, management science, game theory and engineering. However, solving MO problems is not an easy task. Searching for all Pareto optimal solutions is an expensive and time-consuming process, because there are usually exponentially many (or infinitely many) Pareto optimal solutions. Even for the simplest problems, determining whether a point belongs to the Pareto curve is NP-hard.

In this talk we are going to discuss some continuous and combinatorial multiobjective optimization problems and their applications in management, finance and military. Exact and heuristic techniques for solving these problems are presented.

We also consider nondifferentiable multiobjective programming problems involving generalized convex functions and present optimality conditions and duality results for the problems.

11 Nov 2009, 16:00, MS.05

The Extremal Function for Partial Bipartite Tilings

For a fixed graph H, let ex(n,H) be the maximum number of edges of an n-vertex graph not containing a copy of H. The asymptotic estimation of ex(n,H) is a central problem in extremal graph theory, and for the case when H is non-bipartite the answer is given by the Erdős-Stone theorem. However, despite considerable effort, the problem is still open when H is bipartite.

A related topic is the study of ex(n, l × H), the maximum number of edges of an n-vertex graph not containing l vertex-disjoint copies of a graph H. So far, this function has been investigated only for some special choices of H. In this talk I shall first discuss known results about ex(n, l × H). Then, for a given α ∈ (0,1), I shall determine the asymptotic behaviour of ex(n, αn × H) in the particular case when H is a bipartite graph. The proof is an application of the regularity lemma.

This is joint work with Jan Hladký.

10 Nov 2009, 16:30, University of Oxford, Mathematical Institute, Room SR2

The Power of Choice in a Generalized Pólya Urn Model

We introduce a "Pólya choice" urn model combining elements of the well known "power of two choices" model and the "rich get richer" model. From a set of k urns, randomly choose c distinct urns with probability proportional to the product of a power γ > 0 of their occupancies, and increment one with the smallest occupancy. The model has an interesting phase transition. If γ ≤ 1, the urn occupancies are asymptotically equal with probability 1. For γ > 1, this still occurs with positive probability, but there is also positive probability that some urns get only finitely many balls while others get infinitely many.
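
One direct reading of the model can be simulated as follows (illustration only; urns are assumed to start with positive occupancies, e.g. one ball each, and c <= k):

    import random
    from itertools import combinations

    def polya_choice_step(occupancies, c, gamma, rng=random.Random(0)):
        # choose c distinct urns with probability proportional to the product of
        # the gamma-th powers of their occupancies, then add a ball to the least
        # occupied chosen urn (ties broken by index)
        subsets = list(combinations(range(len(occupancies)), c))
        weights = [1.0] * len(subsets)
        for idx, S in enumerate(subsets):
            for i in S:
                weights[idx] *= occupancies[i] ** gamma
        chosen = subsets[rng.choices(range(len(subsets)), weights=weights)[0]]
        target = min(chosen, key=lambda i: occupancies[i])
        occupancies[target] += 1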

10 Nov 2009, 14:45, University of Oxford, Mathematical Institute, Room L3

Random Graphs with Few Disjoint Cycles

Fix a positive integer k, and consider the class of all graphs which do not have k+1 vertex-disjoint cycles. A classical result of Erdős and Pósa says that each such graph G contains a blocker of size at most f(k). Here a blocker is a set B of vertices such that G-B has no cycles. We give a minor extension of this result, and deduce that almost all such labelled graphs on vertex set 1,...,n have a blocker of size k. This yields an asymptotic counting formula for such graphs; and allows us to deduce further properties of a graph Rn taken uniformly at random from the class: we see for example that the probability that Rn is connected tends to a specified limit as n → ∞. There are corresponding results when we consider unlabelled graphs with few disjoint cycles. We consider also variants of the problem involving for example disjoint long cycles. This is joint work with Valentas Kurauskas and Mihyun Kang.

10 Nov 2009, 14:00, University of Oxford, Mathematical Institute, Room L3

Oblivious Routing in the L_p-norm

Gupta et al. introduced a very general multi-commodity flow problem in which the cost of a given flow solution on a graph G=(V,E) is calculated by first computing the link loads via a load-function l, that describes the load of a link as a function of the flow traversing the link, and then aggregating the individual link loads into a single number via an aggregation function.

We show the existence of an oblivious routing scheme with competitive ratio O(log n) and a lower bound of Ω(log n/loglog n) for this model when the aggregation function agg is an Lp-norm.

Our results can also be viewed as a generalization of the work on approximating metrics by a distribution over dominating tree metrics and of the work on minimum-congestion oblivious routing. We provide a convex combination of trees such that routing according to the tree distribution approximately minimizes the Lp-norm of the link loads. The embedding techniques of Bartal and Fakcharoenphol et al. can be viewed as solving this problem in the L1-norm, while the result on congestion-minimizing oblivious routing solves it for L∞. We give a single proof that shows the existence of a good tree-based oblivious routing for any Lp-norm.

4 Nov 2009, 16:00, MS.05

Correlation Decay and Applications to Counting Colourings

We present two algorithms for counting the number of colourings of a sparse random graph. Our approach is based on correlation decay techniques originating in statistical physics.

The first algorithm is based on establishing correlation decay properties of Gibbs distribution which are related to Dobrushin's condition for uniqueness of Gibbs measure on infinite trees. More specifically, we impose boundary conditions to a specific set of vertices of the graph and we show that the effect of this boundary decays as we move away.

For the second algorithm we set a new context for exploiting correlation decay properties. Instead of imposing boundary conditions (fixing the colouring of vertices), we impose a specific graph structure on some region (i.e. delete edges) and show that the effect of this change on the Gibbs distribution decays as we move away. It turns out that this approach designates a new set of spatial correlation decay conditions that can be used for counting algorithms.

In both cases the algorithms with high probability provide in polynomial time a (1/poly(n))-approximation of the logarithm of the number of k-colourings of the graph ("free energy") with k constant. The value of k depends on the expected degree of the graph. The second technique gives better results than the first one in terms of minimum number of colours needed.

Finally, the second algorithm can be applied to another class of graphs which we call locally a-dense graphs of bounded maximum degree Δ. A graph G = (V, E) in this family has the following property: for all {u,v} in E, the number of vertices which are adjacent to v but not adjacent to u is at most (1-a)Δ, where 0<a<1 is a parameter of the model. For a locally a-dense graph G with bounded Δ, the algorithm computes in polynomial time a (1/polylog n)-approximation to the logarithm of the number of k-colourings, for k > (2-a)Δ. By restricting the treewidth of the neighbourhoods in G we can improve the approximation.

3 Nov 2009, 16:00, MS.05

Antichains and the Structure of Permutation Classes

The analogues of hereditary properties of graphs for permutations are known as "permutation classes", defined as downsets in the "permutation containment" partial ordering. They are most commonly described as the collection "avoiding" some set of permutations, cf. forbidden induced subgraphs for hereditary graph properties. Their origin lies with Knuth in the analysis of sorting machines, but in recent years they have received a lot of attention in their own right. While much of the emphasis has been on exact and asymptotic enumeration of particular families of classes, an ongoing study of the general structure of permutations is yielding remarkable results which typically also have significant enumerative consequences.

In this talk I will describe a number of these structural results, with a particular emphasis on the question of partial well-order -- i.e. the existence or otherwise of infinite antichains in any given permutation class. The building blocks of all permutations are "simple permutations", and we will see how these on their own contribute to the partial well-order problem. We will see how "grid classes", a seemingly independent concept used to express large complicated classes in terms of smaller easily-described ones, also have significant consequences in determining the existence of infinite antichains. Finally, I will present recent and ongoing work in combining these two concepts, both to describe a general method of constructing antichains and to prove when certain classes are partially well-ordered.

27 Oct 2009, 16:00, MS.05

Wiretapping a Hidden Network

We consider the problem of maximizing the probability of hitting a strategically chosen hidden virtual network by placing a wiretap on a single link of a communication network. This can be seen as a two-player win-lose (zero-sum) game that we call the wiretap game. The value of this game is the greatest probability that the wiretapper can secure for hitting the virtual network. The value is shown to be equal to the reciprocal of the strength of the underlying graph. We provide a polynomial time algorithm that finds a linear-sized description of the maxmin polytope, and a characterization of its extreme points. It also provides a succinct representation of all equilibrium strategies of the wiretapper that minimize the number of pure best responses of the hider. Among these strategies, we efficiently compute the unique strategy that maximizes the least punishment that the hider incurs for playing a pure strategy that is not a best response.

Finally, we show that this unique strategy is the nucleolus of the recently studied simple cooperative spanning connectivity game.

Joint work with: Haris Aziz, Mike Paterson and Rahul Savani.

13 October 2009, 16:00, MS.05

Synchronization

A reset word for a finite deterministic automaton is a word which takes the machine to a fixed state from any starting state. Investigations into an old conjecture on the length of a reset word (if one exists) have led to new properties of permutation groups lying between primitivity and 2-transitivity, and a surprising fact about the representation of transformation monoids as endomorphism monoids of graphs. In the talk I will discuss some of these things and their connections.
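
As a concrete illustration of the notion (an editorial addition, not part of the abstract), the sketch below finds a shortest reset word of a small automaton by breadth-first search over subsets of states; the 4-state transition table is an arbitrary example.

  from collections import deque

  # Transitions of a small DFA: trans[letter][state] gives the next state.
  # Letter 'a' cycles the four states; letter 'b' merges state 0 into state 1.
  trans = {'a': [1, 2, 3, 0], 'b': [1, 1, 2, 3]}

  def shortest_reset_word(trans, n):
      start = frozenset(range(n))
      seen = {start: ''}
      queue = deque([start])
      while queue:
          S = queue.popleft()
          if len(S) == 1:                       # all states merged: reset word found
              return seen[S]
          for letter, t in trans.items():
              T = frozenset(t[s] for s in S)    # image of the current set of states
              if T not in seen:
                  seen[T] = seen[S] + letter
                  queue.append(T)
      return None                               # no reset word exists

  print(shortest_reset_word(trans, 4))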

6 October 2009, 16:00, MS.05

On Graphs that Satisfy Local Pooling

Efficient operation of wireless networks and switches requires using simple (and in some cases distributed) scheduling algorithms. In general, simple greedy algorithms (known as Greedy Maximal Scheduling - GMS) are guaranteed to achieve only a fraction of the maximum possible throughput (e.g., 50% throughput in switches). However, it was recently shown that in networks in which the local pooling conditions are satisfied, GMS achieves 100% throughput. A graph G = (V,E) is said to satisfy the local pooling conditions if for every induced subgraph H of G there exists a function g : V(H) → [0, 1] such that Σv∈S g(v) = 1 for every maximal stable set S in H.
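
Taken literally, this definition can be verified on very small graphs by solving one feasibility LP per induced subgraph. The sketch below is an editorial illustration of that reading (it assumes scipy is available and is exponential in the number of vertices); it is not meant to reflect the techniques of the talk.

  from itertools import combinations
  import numpy as np
  from scipy.optimize import linprog

  def maximal_stable_sets(vertices, edges):
      """All maximal stable (independent) sets, by brute force."""
      stable = [set(S) for r in range(1, len(vertices) + 1)
                for S in combinations(vertices, r)
                if not any(u in S and v in S for u, v in edges)]
      return [S for S in stable if not any(S < T for T in stable)]

  def satisfies_local_pooling(vertices, edges):
      """Check the condition for every induced subgraph (tiny graphs only)."""
      for r in range(1, len(vertices) + 1):
          for W in combinations(vertices, r):
              sub_edges = [(u, v) for u, v in edges if u in W and v in W]
              msets = maximal_stable_sets(list(W), sub_edges)
              # Feasibility LP: g in [0,1]^W with the sum of g over S equal to 1
              # for every maximal stable set S.
              A_eq = np.array([[1.0 if v in S else 0.0 for v in W] for S in msets])
              res = linprog(np.zeros(len(W)), A_eq=A_eq, b_eq=np.ones(len(msets)),
                            bounds=[(0, 1)] * len(W), method="highs")
              if not res.success:
                  return False
      return True

  # Toy example: a path on 4 vertices.
  print(satisfies_local_pooling([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]))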

We first analyze line graphs and give a characterization of line graphs that satisfy local pooling. Line graphs are of interest since they correspond to the interference graphs of wireless networks under primary interference constraints. Finally we consider claw-free graphs and give a characterization of claw-free graphs that satisfy local pooling. This is joint work with Berk Birand, Maria Chudnovsky, Paul Seymour, Gil Zussman and Yori Zwols.

29 September 2009, 16:00, MS.03

The Power of Online Reordering

Online algorithms studied in theory are characterized by the fact that they get to know the input sequence incrementally, one job at a time, and a new job is not issued until the previous one is processed by the algorithm. In real applications, jobs can usually be delayed for a short amount of time. As a consequence, the input sequence of jobs can be reordered in a limited fashion to optimize the performance. In this talk, the power and limits of this online reordering paradigm are discussed for several problems.

24 June 2009, 16:00, MS.03

Triangles in Random Graphs

Let X be the number of triangles in a random graph G(n,1/2). Loebl, Matousek and Pangrac showed that X is close to uniformly distributed modulo q when q=O(log n) is prime. We extend this result considerably, and discuss further implications of our methods for the distribution of X. This is joint work with Atsushi Tateno (Oxford).
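
A quick empirical illustration of the statement (added by the editor, not part of the abstract): sample G(n,1/2) repeatedly and record the triangle count modulo a small prime; the parameter choices below are arbitrary.

  import random
  from itertools import combinations
  from collections import Counter

  def triangle_count(n, p=0.5):
      """Sample G(n, p) and count its triangles."""
      adj = [[False] * n for _ in range(n)]
      for i, j in combinations(range(n), 2):
          if random.random() < p:
              adj[i][j] = adj[j][i] = True
      return sum(1 for i, j, k in combinations(range(n), 3)
                 if adj[i][j] and adj[j][k] and adj[i][k])

  # Empirical distribution of the triangle count modulo the prime q.
  n, q, samples = 30, 7, 2000
  counts = Counter(triangle_count(n) % q for _ in range(samples))
  print({r: counts[r] / samples for r in range(q)})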

23 June 2009, 16:00, MS B3.03

Positive Projections

If A is a set of n positive integers, how small can the set {a/(a,b) : a,b ∈ A} be? Here as usual (a,b) denotes the HCF of a and b. This elegant question was raised by Granville and Roesler, who also reformulated it in the following way: given a set A of n points in ℤ^d, how small can (A-A)+, the projection of the difference set of A onto the positive orthant, be?
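
For readers who want to experiment, here is a small editorial sketch (not from the talk) that computes the size of the set {a/(a,b) : a,b ∈ A} for a chosen A; the example sets are arbitrary.

  from math import gcd
  import random

  def projection_size(A):
      """Size of the set {a / gcd(a, b) : a, b in A}."""
      return len({a // gcd(a, b) for a in A for b in A})

  # A structured example and a random example of the same size.
  A1 = list(range(1, 65))                     # {1, ..., 64}
  A2 = random.sample(range(1, 10**6), 64)     # 64 random integers
  print(len(A1), projection_size(A1), projection_size(A2))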

Freiman and Lev gave an example to show that (in any dimension) the size can be as small as n^(2/3) (up to a constant factor). Granville and Roesler proved that in two dimensions this bound is correct, i.e. that the size is always at least n^(2/3), and asked if this held in any dimension. Holzman, Lev and Pinchasi showed that in three dimensions the size is at least n^(3/5), and that in four dimensions the size is at least n^(6/11) (up to a logarithmic factor), and they also asked if the correct exponent is always 2/3.

After some background material, the talk will focus on recent developments, including a negative answer to the n^(2/3) question.

Joint work with Béla Bollobás.

16 June 2009, 16:00, MS B3.03

Cycles, Paths, Connectivity and Diameter in Distance Graphs

Circulant graphs form an important and very well-studied class of graphs. They are Cayley graphs of cyclic groups and have been proposed for numerous applications such as local area computer networks, large area communication networks, parallel processing architectures, distributed computing, and VLSI design. Their connectivity and diameter, cycle and path structure, and further graph-theoretical properties have been studied in great detail. Polynomial time algorithms for isomorphism testing and recognition of circulant graphs have been long-standing open problems which were completely solved only recently.

Our goal here is to extend some of the fundamental results concerning circulant graphs to the similarly defined yet more general class of distance graphs. We prove that the class of circulant graphs coincides with the class of regular distance graphs. We study the existence of long cycles and paths in distance graphs and analyse the computational complexity of problems related to their connectivity and diameter.

Joint work with L. Draque Penso and J.L. Szwarcfiter.

9 June 2009, 16:00, MS B3.03

Discrepancy and eigenvalues

A graph has low discrepancy if its global edge distribution is "close" to that of a random graph with the same overall density. It has been known that low discrepancy is related to the spectra of various matrix representations of the graph such as the adjacency matrix or the normalized Laplacian. More precisely, a large spectral gap implies low discrepancy. The topic of this talk is the converse implication: does low discrepancy imply a large spectral gap? The proofs are based on the Grothendieck inequality and the duality theorem for semidefinite programs.

3 June 2009, 15:00, CS 1.01

Social Context Games

We introduce the study of social context games. A social context game is defined by an underlying game in strategic form, and a social context consisting of an undirected graph of neighborhood among players and aggregation functions. The players and strategies in a social context game are as in the underlying game, while the players' utilities in a social context game are computed from their payoffs in the underlying game based on the graph of neighborhood and the aggregation functions. Examples of social context games are ranking games and coalitional congestion games. A significant challenge is the study of how various social contexts affect various properties of the game. In this work we consider resource selection games as the underlying games, and four basic social contexts. An important property of resource selection games is the existence of pure strategy equilibrium. We study the existence of pure strategy Nash equilibrium in the corresponding social context games. We also show that the social context games possessing pure strategy Nash equilibria are not potential games, and therefore are distinguished from congestion games.

Joint work with Itai Ashlagi and Moshe Tennenholtz.

26 May 2009, 16:00, MS B3.03

30 Years with Consensus: from Feasibility to Scalability

The problem of reaching consensus in a distributed environment is one of the fundamental topics in the area of distributed computing, systems and architectures. It was formally abstracted in the late 70's by Lamport, Pease and Shostak. The early work on this problem focused on feasibility, i.e., the conditions under which consensus can be solved. Later, the complexity of solutions became of great importance. In this talk I'll introduce the problem, present selected classic algorithms and lower bounds, and conclude with my recent work on time and communication complexity of consensus in a message-passing system.

19 May 2009, 16:00, MS B3.03

Disconnecting a Graph by a Disconnected Cut

For a connected graph G = (V, E), a subset U of vertices is called a k-cut if U disconnects G, and the subgraph induced by U contains exactly k≥1 components. More specifically, a k-cut U is called a (k, l)-cut if V \ U induces a subgraph with exactly l≥2 components. We study two decision problems, called k-CUT and (k, l)-CUT, which determine whether a graph G has a k-cut or (k, l)-cut, respectively. By pinpointing a close relationship to graph contractibility problems we first show that (k, l)-CUT is in P for k=1 and any fixed constant l≥2, while the problem is NP-complete for any fixed pair k, l≥2. We then prove that k-CUT is in P for k=1, and NP-complete for any fixed k≥2. On the other hand, we present an FPT algorithm that solves (k, l)-CUT on planar graphs when parameterized by k + l. By modifying this algorithm we can also show that k-CUT is in FPT (with parameter k) and DISCONNECTED CUT is solvable in polynomial time for planar graphs. The latter problem asks if a graph has a k-cut for some k≥2.
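
The definitions above can be checked mechanically; the sketch below (an editorial addition, not from the talk) classifies a vertex subset of a small graph as a (k, l)-cut, with a 6-cycle chosen as the example.

  def components(vertices, adj):
      """Number of connected components of the subgraph induced by `vertices`."""
      vertices, comps = set(vertices), 0
      while vertices:
          stack = [vertices.pop()]
          comps += 1
          while stack:
              v = stack.pop()
              for w in adj[v]:
                  if w in vertices:
                      vertices.remove(w)
                      stack.append(w)
      return comps

  def cut_type(adj, U):
      """Return (k, l) if U is a (k, l)-cut of the graph, else None."""
      U = set(U)
      k = components(U, adj)
      l = components(set(adj) - U, adj)
      return (k, l) if l >= 2 else None       # U must disconnect the rest of the graph

  # A 6-cycle: removing two opposite vertices leaves a (2, 2)-cut.
  C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
  print(cut_type(C6, [0, 3]))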

Joint work with Takehiro Ito and Marcin Kamiński.

13 May 2009, 16:00, MS.03

Algorithms for Counting/Sampling Cell-Bounded Contingency Tables

This is a survey talk about counting/sampling contingency tables and cell-bounded contingency tables. In the cell-bounded contingency tables problem, we are given a list of positive integer row sums r=(r1,...,rm), a list of positive integer column sums c=(c1,...,cn), and a non-negative integer bound bij, for every 1 ≤ i ≤ m, 1 ≤ j ≤ n.

The problem is to count/sample the set of all m-by-n tables X ∈ ℤ^(m×n) of non-negative integers which satisfy the given row and column sums, and for which 0 ≤ Xij ≤ bij for all i, j. I will outline a complicated (reduction to volume estimation) algorithm for approximately counting these tables in polynomial time, when the number of rows is constant. I also hope to outline a more recent, 'cute' dynamic programming algorithm for exactly the same case of constantly-many rows. The case for general m is still open.
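
The following editorial sketch shows one natural column-by-column dynamic programme for the cell-bounded problem when the number of rows is small; it is only one reading of the "constantly-many rows" case, not necessarily the algorithm referred to in the talk.

  from functools import lru_cache
  from itertools import product

  def count_tables(r, c, b):
      """Count m-by-n non-negative integer tables with row sums r, column sums c
      and entry bounds X[i][j] <= b[i][j] (enumeration is exponential in m)."""
      m, n = len(r), len(c)

      @lru_cache(maxsize=None)
      def rec(j, remaining):
          if j == n:
              return 1 if all(x == 0 for x in remaining) else 0
          total = 0
          # Enumerate all admissible j-th columns.
          ranges = [range(min(b[i][j], remaining[i]) + 1) for i in range(m)]
          for col in product(*ranges):
              if sum(col) == c[j]:
                  total += rec(j + 1, tuple(x - y for x, y in zip(remaining, col)))
          return total

      return rec(0, tuple(r))

  # Tiny example: 2x3 tables, row sums (4, 3), column sums (3, 2, 2), all cells <= 2.
  print(count_tables([4, 3], [3, 2, 2], [[2, 2, 2], [2, 2, 2]]))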

Joint with Martin Dyer and Dana Randall

7 May 2009, 16:00, MS.03

4-Colour Ramsey Number of Paths

The Ramsey number R(l x H) is the smallest integer m such that any l-colouring of the edges of Km induces a monochromatic copy of H. We prove that R(4 x Pn)=2.5n+o(n). Luczak proposed a way to attack the problem of finding a long monochromatic path in a coloured graph G: one should search for a large monochromatic connected matching in the reduced graph of G. Here we propose a modification of Luczak's technique: we find a large monochromatic connected fractional matching. This relaxation allows us to use LP duality and reduces the question to a vertex-cover problem.

This is a joint work with Jan Hladký and Daniel Král'.

6 May 2009, 16:00, MS.03

Sparse Random Graphs are Fault Tolerant for Large Bipartite Graphs

Random graphs G on n vertices and with edge probability at least c(log n/n)^(1/Δ) are robust in the following sense: Let H be an arbitrary planar bipartite graph on (1-ε)n vertices with maximum degree Δ. Now you, the adversary, may arbitrarily delete edges in G such that no vertex loses more than a third of its neighbours. However, you will not be able to destroy all copies of H.

This result is obtained (joint work with Yoshiharu Kohayakawa and Anusch Taraz) via a sparse version of the regularity lemma. In the talk I will provide some background concerning related results, introduce sparse regularity, and outline how this can be used to prove theorems such as the one above.

6 May 2009, 11:00, MS.03

Counting Flags in Triangle Free Digraphs

An important instance of the Caccetta-Häggkvist conjecture asserts that an n-vertex digraph with minimum outdegree at least n/3 contains a directed triangle. Improving on a previous bound of 0.3532n due to Hamburger, Haxell, and Kostochka, we prove that a digraph with minimum outdegree at least 0.3465n contains a directed triangle. The proof is an application of a recent combinatorial calculus developed by Razborov. This calculus enables one to formalize common techniques in the area (such as induction or the Cauchy-Schwarz inequality). In the talk I shall describe Razborov's method in general, and its application to the setting of the Caccetta-Häggkvist conjecture.

This is joint work with Dan Král' and Sergey Norin.

5 May 2009, 16:00, MS B3.03

Regularity Lemmas for Graphs

Szemerédi's regularity lemma is a powerful tool in extremal graph theory, which has had many applications. In this talk we present several variants of Szemerédi's original lemma (due to several researchers including Frieze and Kannan, Alon et al., and Lovász and Szegedy) and discuss their relation to each other.

21 Apr 2009, 16:00, MS B3.03

Time and Space Efficient Anonymous Graph Traversal

We consider the problem of periodic graph traversal in which a mobile entity with constant memory has to visit all n nodes of an arbitrary undirected graph G in a periodic manner. Graphs are supposed to be anonymous, that is, nodes are unlabeled. However, while visiting a node, the robot has to distinguish between edges incident to it. For each node v the endpoints of the edges incident to v are uniquely identified by different integer labels called port numbers. We are interested in minimization of the length of the exploration period.

This problem is unsolvable if the local port numbers are set arbitrarily (shown by Budach in 1978). However, surprisingly small periods can be achieved when carefully assigning the local port numbers. Dobrev, Jansson, Sadakane, and Sung described an algorithm for assigning port numbers, and an oblivious agent (i.e. an agent with no memory) using it, such that the agent explores all graphs with n vertices within period 10n. Providing the agent with a constant number of memory bits, the optimal length of the period was later proved to be no more than 3.75n (using a different assignment of the port numbers). Following on from this, a period of length at most (4 1/3)n was shown for oblivious agents, and a period of length at most 3.5n for agents with constant memory.

This talk describes results in two papers by the speaker, which are joint work with several other authors.

6 April 2009, 12:00, MS B3.02

Eliminating Cycles in the Torus

I will discuss the problem of cutting the (discrete or continuous) d-dimensional torus economically, so that no nontrivial cycle remains. This improves, simplifies and/or unifies results of Bollobás, Kindler, Leader and O'Donnell, of Raz and of Kindler, O'Donnell, Rao and Wigderson. More formal, detailed abstract(s) appear in http://www.math.tau.ac.il/~nogaa/PDFS/torus3.pdf and in http://www.math.tau.ac.il/~nogaa/PDFS/torusone.pdf.

Joint work with Bo'az Klartag.

17 March 2009, 16:00, MS B3.03

Oblivious Interference Scheduling

In the interference scheduling problem, one is given a set of n communication requests described by pairs of points from a metric space. The points correspond to devices in a wireless network. In the directed version of the problem, each pair of points consists of a dedicated sending and a dedicated receiving device. In the bidirectional version the devices within a pair shall be able to exchange signals in both directions. In both versions, each pair must be assigned a power level and a color such that the pairs in each color class can communicate simultaneously at the specified power levels. The feasibility of simultaneous communication within a color class is defined in terms of the Signal to Interference Plus Noise Ratio (SINR) that compares the strength of a signal at a receiver to the sum of the strengths of other signals. This is commonly referred to as the "physical model" and is the established way of modelling interference in the engineering community. The objective is to minimize the number of colors as this corresponds to the time needed to schedule all requests.

We study oblivious power assignments in which the power value of a pair only depends on the distance between the points of this pair. We prove that oblivious power assignments cannot yield approximation ratios better than Ω(n) for the directed version of the problem, which is the worst possible performance guarantee as there is a straightforward algorithm that achieves an O(n)-approximation. For the bidirectional version, however, we can show the existence of a universally good oblivious power assignment: For any set of n bidirectional communication requests, the so-called "square root assignment" admits a coloring with at most polylog(n) times the minimal number of colors. The proof for the existence of this coloring is non-constructive. We complement it by an approximation algorithm for the coloring problem under the square root assignment. This way, we obtain the first polynomial time algorithm with approximation ratio polylog(n) for interference scheduling in the physical model.
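
To make the physical model concrete, here is a small editorial sketch that tests whether a single colour class is simultaneously feasible under SINR; the path-loss exponent, the threshold, the noise value, and the reading of the square root assignment as power proportional to d^(α/2) are assumptions made for the example, not taken from the paper.

  import math

  ALPHA, BETA, NOISE = 4.0, 1.0, 1e-9    # path-loss exponent, SINR threshold, noise

  def dist(p, q):
      return math.hypot(p[0] - q[0], p[1] - q[1])

  def sqrt_power(sender, receiver):
      """'Square root'-style assignment: power grows like d^(ALPHA / 2)."""
      return dist(sender, receiver) ** (ALPHA / 2)

  def feasible(pairs, power=sqrt_power):
      """Can all (sender, receiver) pairs in one colour class transmit at once?"""
      for i, (s_i, r_i) in enumerate(pairs):
          signal = power(s_i, r_i) / dist(s_i, r_i) ** ALPHA
          interference = sum(power(s_j, r_j) / dist(s_j, r_i) ** ALPHA
                             for j, (s_j, r_j) in enumerate(pairs) if j != i)
          if signal / (NOISE + interference) < BETA:
              return False
      return True

  # Two short links far apart coexist; moved close together they do not.
  far = [((0, 0), (1, 0)), ((100, 0), (101, 0))]
  near = [((0, 0), (1, 0)), ((1.5, 0), (0.5, 0))]
  print(feasible(far), feasible(near))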

This is joint work with Alexander Fanghänel, Thomas Keßelheim and Harald Räcke

16 March 2009, 11:00, WBS E2.02

Polynomial Time Solvable Cases of the Vehicle Routing Problem

The Traveling Salesman Problem (TSP) is one of the most famous NP-hard problems, so much work has been done to study polynomially solvable cases, that is, to find good conditions on the distances between cities such that an optimal tour can be found in polynomial time. These good conditions impose some restriction on the optimal solution, for example the Monge property. For a given complete weighted digraph G, a vertex x of G, and a positive integer k, the Vehicle Routing Problem (VRP) is to find a minimum weight connected subgraph F of G such that F is a union of k cycles sharing only the vertex x. In this talk, we apply good conditions for the TSP to the VRP. We will show that if a given weighted digraph satisfies certain conditions known for the TSP, then an optimal solution of the VRP can also be computed in polynomial time.
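
As a concrete example of such a condition, the sketch below (an editorial addition, not from the talk) checks the Monge property of a cost matrix; the point sets are arbitrary.

  def is_monge(C):
      """Monge property: C[i][j] + C[i+1][j+1] <= C[i][j+1] + C[i+1][j] everywhere."""
      rows, cols = len(C), len(C[0])
      return all(C[i][j] + C[i + 1][j + 1] <= C[i][j + 1] + C[i + 1][j]
                 for i in range(rows - 1) for j in range(cols - 1))

  # Costs C[i][j] = |x_i - y_j| between two sorted point sets on a line are Monge.
  x, y = [1, 3, 7, 12], [2, 5, 6, 11]
  C = [[abs(xi - yj) for yj in y] for xi in x]
  print(is_monge(C))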

11 March 2009, 14:00, MS B1.01

Bounding Revenue Deficiency in Multiple Winner Procurement Auctions

Consider a firm, called the buyer, that satisfies its demand over a T-period time horizon by assigning the demand vector to a supplier via a procurement (reverse) auction; call this the Standard auction. The firm is considering an alternative procedure in which it will allow bids on one or more periods; in this auction, there can be more than one winner covering the demand vector; call this the Multiple Winner auction. Choosing the Multiple Winner auction over the Standard auction will tend to: (1) allow each supplier the option of supplying demand for any subset of periods of the T-period horizon; (2) increase competition among the suppliers, and (3) allow the buyer to combine bids from different suppliers in order to lower his purchase cost. All three effects might lead one to expect that the buyer's cost will always be lower in the Multiple Winner auction than in the Standard auction. To the contrary, there are cases in which the buyer will have a higher cost in the Multiple Winner auction. We provide a bound on how much greater the buyer's cost can be and show that this bound is sharp.

10 March 2009, 16:00, MS B3.03

Prize-Collecting Steiner Trees and Disease

The identification of functional modules in protein-protein interaction networks is an important topic in systems biology. These modules might help, for example, to better understand the underlying biological mechanisms of different tumor subtypes. In this talk, I report on results of a cooperation with statisticians and medical researchers from the University of Würzburg. In particular, I will present an exact integer linear programming solution for this problem, which is based on its connection to the well-known prize-collecting Steiner tree problem from Operations Research.

3 March 2009, 16:00, MS B3.03

Adding Recursion to Markov Chains, Markov Decision Processes, and Stochastic Games

I will describe a family of finitely presented, but infinite-state, stochastic models that arise by adding a natural recursion feature to Markov Chains, Markov Decision Processes, and Stochastic Games.

These models subsume a number of classic and heavily studied purely stochastic models, including (multi-type) branching processes, (quasi-)birth-death processes, stochastic context-free grammars, and others. They also provide a natural abstract model of probabilistic procedural programs with recursion.

The theory behind the algorithmic analysis of these models, developed over the past few years, has turned out to be very rich, with connections to a number of areas of research. I will survey just a few highlights from this work. There remain many open questions about the computational complexity of basic analysis problems. I will highlight a few such open problems.

(Based on joint work with Mihalis Yannakakis, Columbia University)

2 March 2009, 16:00, MS B3.02

The Robust Network Loading Problem under Polyhedral Demand Uncertainty: Formulation, Polyhedral Analysis, and Computations

For a given undirected graph G, the Network Loading Problem (NLP) deals with the design of a least cost network by allocating discrete units of facilities with different capacities on the links of G so as to support expected pairwise demands between some endpoints of G. In this work, we relax the assumption of known demands and study robust NLP with polyhedral demands to obtain designs flexible enough to support changing communication patterns in the least costly manner. More precisely, we seek a final design that remains operational for any feasible demand realization in a prescribed polyhedral set.

Firstly, we give a compact multi-commodity formulation of the problem for which we prove a nice decomposition property obtained from projecting out the flow variables. This simplifies the resulting polyhedral analysis and computations considerably by doing away with metric inequalities, an attendant feature of the most successful algorithms on the Network Loading Problem. Then, we focus on a specific choice of the uncertainty description, called the "hose model", which specifies aggregate traffic upper bounds for selected endpoints of the network. We study the polyhedral aspects of the Network Loading Problem under hose demand uncertainty and present valid inequalities for robust NLP with arbitrary number of facilities and arbitrary capacity structures as the second main contribution of the present work. Finally, we develop an efficient Branch-and-Cut algorithm supported by a simple but effective heuristic for generating upper bounds and use it to solve several well-known network design instances.

24 February 2009, 16:00, MS B3.03

Optimal Mechanisms for Scheduling

We study the design of optimal mechanisms in a setting where job-agents compete for being processed by a service provider that can handle one job at a time. Each job has a processing time and incurs a waiting cost. Jobs need to be compensated for waiting. We consider two models, one where only the waiting costs of jobs are private information (1-d), and another where both waiting costs and processing times are private (2-d). An optimal mechanism minimizes the total expected expenses to compensate all jobs, while it has to be Bayes-Nash incentive compatible. We derive closed formulae for the optimal mechanism in the 1-d case and show that it is efficient for symmetric jobs. For non-symmetric jobs, we show that efficient mechanisms perform arbitrarily badly. For the 2-d case, we prove that the optimal mechanism in general does not even satisfy IIA, the "independence of irrelevant alternatives" condition. We also show that the optimal mechanism is not even efficient for symmetric agents in the 2-d case.

Joint work with Birgit Heydenreich, Debasis Mishra and Marc Uetz.

10 February 2009, 16:00, MS B3.03

Distributed Optimisation in Network Control

Communication networks (such as the Internet or BT's network) are held together by a wide variety of interacting network control systems. Examples include routing protocols, which determine the paths used by different sessions, and flow control mechanisms, which determine the rate at which different sessions can send traffic. Many of these control processes can be viewed as distributed algorithms that aim to solve a network-wide optimisation problem.

I will present a mathematical framework, based on Lagrangian decomposition, for the design and analysis of such algorithms. As an illustration, I will discuss how this framework has been used in BT Innovate to develop a new resource control system which integrates multi-path routing and admission control decisions.

The notion of convexity plays an important role in convergence proofs for our algorithms. We can design a control system with stability guarantees as long as the system's target behaviour can be captured as the solution to a convex optimisation problem. Unfortunately, many standard network design problems are formulated as integer programming problems and therefore inherently non-convex. I will discuss some of the implications of this non-convexity and identify some related research challenges.

3 February 2009, 16:00, MS B3.03

Multidimensional Problems in Additive Combinatorics

We discuss bounds on extremal sets for problems like those below:

  1. What is the largest subset of (ℤ/nℤ)^r that does not contain an arithmetic progression of length k?
  2. What is the largest subset/(multiset) of (ℤ/nℤ)^r that does not contain n elements that sum to 0?
  3. What is the largest subset of [1,...,n]^r that does not contain a solution of x+y=z (i.e., which is a sum-free set)?
  4. Colour the elements of [1,...,n]^r red and blue. How many monochromatic Schur triples are there?
27 January 2009, 16:00, MS B3.03

Fast Distance Multiplication of Unit-Monge Matrices

A matrix is called Monge if its density matrix is nonnegative. Monge matrices play an important role in optimisation. Distance multiplication (also known as min-plus or tropical multiplication) of two Monge matrices of size n can be performed in time O(n^2). Motivated by applications to string comparison, we introduced in a previous work the following subclass of Monge matrices. A matrix is called unit-Monge if its density matrix is a permutation matrix; we further restrict our attention to a subclass that we call simple unit-Monge matrices. Our previous algorithm for distance multiplication of simple unit-Monge matrices runs in time O(n^(3/2)). Landau conjectured in 2006 that this problem can be solved in linear time. In this work, we give an algorithm running in time O(n log^4 n), thus approaching Landau's conjecture within a polylogarithmic factor. The new algorithm implies immediate improvements in running time for a number of string comparison and graph algorithms: semi-local longest common subsequences between permutations; longest increasing subsequence in a cyclic permutation; longest pattern-avoiding subsequence in a permutation; longest piecewise monotone subsequence; maximum clique in a circle graph; subsequence recognition in a grammar-compressed string; sparse spliced alignment of genome strings.
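
For orientation, the operation being accelerated is ordinary distance (min-plus) matrix multiplication; the naive O(n^3) baseline below is an editorial illustration only, not the algorithm of the talk.

  def distance_product(A, B):
      """Naive min-plus (tropical) product: C[i][j] = min over k of A[i][k] + B[k][j]."""
      n = len(A)
      return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]

  A = [[0, 2, 9], [6, 0, 3], [1, 7, 0]]
  print(distance_product(A, A))   # two-step shortest-path distances in a 3-node digraph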

20 January 2009, 16:00, MS B3.03

Boundary Properties of Graphs and the Hamiltonian Cycle Problem

The notion of a boundary graph property is a relaxation of that of a minimal property. Several fundamental results in graph theory have been obtained in terms of identifying minimal properties. For instance, Robertson and Seymour showed that there is a unique minimal minor-closed property with unbounded tree-width (the planar graphs), while Balogh, Bollobás and Weinreich identified nine minimal hereditary properties with the factorial speed of growth. However, there are situations where the notion of minimal property is not applicable. A typical example of this type is given by graphs of large girth. It is known that for each particular value of k, the graphs of girth at least k have unbounded tree-, clique- or rank-width, their speed of growth is superfactorial and many algorithmic problems remain NP-hard for these graphs, while the “limit” property of this sequence (i.e., acyclic graphs) has bounded tree-, clique- and rank-width, its speed of growth is factorial, and most algorithmic problems can be solved in polynomial time in this class. The notion of boundary properties of graphs allows us to overcome this difficulty. In this talk we survey some available results on this topic and identify the first boundary class for the Hamiltonian cycle problem.

Joint work with N. Korpelainen and A. Tiskin

13 January 2009, 16:00, MS D1.07

Minimum Degree Conditions for Large Subgraphs

Turán's theorem shows that every n-vertex graph with minimum degree greater than n/2 contains a triangle. A proof (for large n) of the Pósa conjecture shows that every n-vertex graph with minimum degree 2n/3 contains the square of a Hamilton cycle; this is a cyclic ordering of the vertices in which every three consecutive vertices form a triangle. We fill in the gap between these theorems, giving the correct relationship (for large n) between the minimum degree of an n-vertex graph and the lengths of squared cycles which are forced to exist in the graph. We also discuss generalisations of all three theorems to other graphs with chromatic number three, and offer some conjectures on higher chromatic numbers. This is joint work with Julia Böttcher and Jan Hladký.

2 December 2008, 16:00, MS.05

Degree Sequence Conditions for Graph Properties

We discuss recent work on graphical degree sequences, i.e., sequences of integers that correspond to the degrees of the vertices of a graph. Historically, such degree sequences have been used to provide sufficient conditions for graphs to have a certain property, such as being k-connected or hamiltonian. For hamiltonicity, this research has culminated in a beautiful theorem due to Chvátal (1972). This theorem gives a sufficient condition for a graphical degree sequence to be forcibly hamiltonian, i.e., such that every graph with this degree sequence is hamiltonian. Moreover, the theorem is the strongest of an entire class of theorems in the following sense: if the theorem does not guarantee that a sequence π is forcibly hamiltonian, then there exists a nonhamiltonian graph with a degree sequence that majorizes π. Very recently, Kriesell solved an open problem due to Bauer et al. by establishing similar conditions for k-edge connectivity for any fixed k. We will introduce a general framework for such conditions and discuss recent progress and open problems on similar sufficient conditions for a variety of graph properties. This is based on joint work with Bauer, van den Heuvel, Kahl, and Schmeichel.
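
For concreteness, Chvátal's condition itself is easy to state and to test; the sketch below is an editorial illustration, with arbitrarily chosen degree sequences.

  def chvatal_forcibly_hamiltonian(degrees):
      """Chvátal (1972): with d1 <= ... <= dn sorted, if for every i < n/2
      d_i <= i implies d_{n-i} >= n - i, then every graph with this degree
      sequence is hamiltonian."""
      d = sorted(degrees)
      n = len(d)
      return all(d[i - 1] > i or d[n - i - 1] >= n - i for i in range(1, (n + 1) // 2))

  # All-2 sequences are not forcibly hamiltonian (two disjoint triangles realize them).
  print(chvatal_forcibly_hamiltonian([2, 2, 2, 2, 2, 2]))
  print(chvatal_forcibly_hamiltonian([2, 3, 3, 4, 4, 4]))   # satisfies the condition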

25 November 2008, 16:00, MS.05

Min-Max Functions

Min-Max functions appear in various contexts in mathematics, computer science, and engineering. For instance, in the analysis of zero-sum two-player games on graphs, Min-Max functions arise as dynamic programming operators. They also play a role in the performance analysis of certain discrete event systems, which are used to model computer networks and manufacturing systems. In each of these fields it is important to understand the long-term iterative behaviour of Min-Max functions. In this talk I will explain how this problem is related to certain combinatorial geometric problems and discuss some results and a conjecture.

18 November 2008, 16:00, MS.05

Even-hole-free Graphs and the Decomposition Method

We consider finite and simple graphs. We say that a graph G contains a graph F, if F is isomorphic to an induced subgraph of G. A graph G is F-free if it does not contain F. Let ℱ be a (possibly infinite) family of graphs. A graph G is ℱ-free if it is F-free, for every F ∈ ℱ.

Many interesting classes of graphs can be characterized as being ℱ-free for some family ℱ. The most famous such example is the class of perfect graphs. A graph G is perfect if for every induced subgraph H of G, χ(H) = ω(H), where χ(H) denotes the chromatic number of H and ω(H) denotes the size of a largest clique in H. The famous Strong Perfect Graph Theorem states that a graph is perfect if and only if it does not contain an odd hole or an odd antihole (where a hole is a chordless cycle of length at least four).

In the last 15 years a number of other classes of graphs defined by excluding a family of induced subgraphs have been studied, perhaps originally motivated by the study of perfect graphs. The kinds of questions this line of research was focused on were whether excluding induced subgraphs affects the global structure of the particular class in a way that can be exploited for putting bounds on parameters such as χ and ω, constructing optimization algorithms (problems such as finding the size of a largest clique or a minimum coloring), recognition algorithms and explicit construction of all graphs belonging to the particular class. A number of these questions were answered by obtaining a structural characterization of a class through their decomposition (as was the case with the proof of the Strong Perfect Graph Theorem).

In this talk we survey some of the most recent uses of the decomposition theory in the study of classes of even-hole-free graphs. Even-hole-free graphs are related to β-perfect graphs in a similar way in which odd-hole-free graphs are related to perfect graphs. β-Perfect graphs are a particular class of graphs that can be polynomially colored, by coloring greedily on a particular, easily constructible, ordering of vertices.

11 November 2008, 16:00, MS.05

Approximability of Combinatorial Exchange Problems

In a combinatorial exchange the goal is to find a feasible trade between potential buyers and sellers requesting and offering bundles of indivisible goods. We investigate the approximability of several optimization objectives in this setting and show that the problems of surplus and trade volume maximization are inapproximable even with free disposal and even if each agent's bundle is of size at most 3. In light of the negative results for surplus maximization we consider the complementary goal of social cost minimization and present tight approximation results for this scenario. Considering the more general supply chain problem, in which each agent can be a seller and buyer simultaneously, we prove that social cost minimization remains inapproximable even with bundles of size 3, yet becomes polynomial time solvable for agents trading bundles of size 1 or 2. This yields a complete characterization of the approximability of supply chain and combinatorial exchange problems based on the size of traded bundles. We finally briefly address the problem of exchanges in strategic settings.

This is joint work with Moshe Babaioff and Patrick Briest.

4 November 2008, 16:00, CS1.01

Sensor Networks and Random Intersection Graphs

A uniform random intersection graph G(n,m,k) is a random graph constructed as follows. Label each of n nodes by a randomly chosen set of k distinct colours taken from some finite set of possible colours of size m. Nodes are joined by an edge if and only if some colour appears in both their labels. These graphs arise in the study of the security of wireless sensor networks, in particular when modelling the network graph of the well known key predistribution technique due to Eschenauer and Gligor. In this talk we consider the threshold for connectivity and the appearance of a giant component of the graph G(n,m,k) when n → ∞ with k a function of n such that k≥2 and m = ⌊n^α⌋ for some fixed positive real number α.
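
The construction is easy to simulate; the following editorial sketch builds G(n,m,k) and estimates how often it is connected for a few key-ring sizes k. The parameter choices are arbitrary and far from the asymptotic regime of the talk.

  import random
  from itertools import combinations

  def random_intersection_graph(n, m, k):
      """G(n, m, k): each node gets k distinct colours out of m; shared colour = edge."""
      labels = [set(random.sample(range(m), k)) for _ in range(n)]
      return {(i, j) for i, j in combinations(range(n), 2) if labels[i] & labels[j]}

  def is_connected(n, edges):
      adj = {i: set() for i in range(n)}
      for i, j in edges:
          adj[i].add(j)
          adj[j].add(i)
      seen, stack = {0}, [0]
      while stack:
          v = stack.pop()
          for w in adj[v] - seen:
              seen.add(w)
              stack.append(w)
      return len(seen) == n

  n, m = 200, 1000
  for k in (2, 3, 5):
      conn = sum(is_connected(n, random_intersection_graph(n, m, k)) for _ in range(20))
      print(k, conn / 20)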

28 October 2008, 16:00, MS.05

Throughput Optimization in Two-Machine Flowshops with Flexible Operations

We discuss the following scheduling problem. A two-machine flowshop produces identical parts. Each of the parts is assumed to require a number of manufacturing operations, and the machines are assumed to be flexible enough to perform different operations. Due to economic or technological constraints, some specific operations are preassigned to one of the machines. The remaining operations, called flexible operations, can be performed on either one of the machines, so that the same flexible operation can be performed on different machines for different parts. The problem is to determine the assignment of the flexible operations to the machines for each part, with the objective of maximizing the throughput rate. We consider various cases depending on the number of parts to be produced and the capacity of the buffer between the machines. Along the way, we underline the fact that this problem belongs to the class of "high multiplicity scheduling problems": for this class of problems, the input data can be described in a very compact way due to the fact that the jobs fall into a small number of distinct job types, where all the jobs of the same type possess the same attribute values. High multiplicity scheduling problems have been recently investigated by several researchers who have observed that the complexity of such problems must be analyzed with special care.

Joint work with Hakan Gültekin.

21 October 2008, 16:00, MS.05

Tricky Problems for Graphs of Bounded Treewidth

In this talk I will consider computational problems that (A) can be solved in polynomial time for graphs of bounded treewidth and (B) where the order of the polynomial time bound is expected to depend on the treewidth of the instance. Among the considered problems are coloring problems, factor problems, orientation problems and satisfiability problems. I will present an algorithmic meta-theorem (an extension of Courcelle's Theorem) that provides a convenient way for establishing (A) for some of the considered problems and I will explain how concepts from parameterized complexity theory can be used to establish (B).

15 October 2008, 16:00, CS1.04

The External Network Problem and the Source Location Problem

The connectivity of a communications network can sometimes be enhanced if the nodes are able, at some expense, to form links using an external network. Using graphs, the following is a possible way to model such situations.

Let G = (V,E) be a graph. A subset X of vertices in V may be chosen, the so-called external vertices. An internal path is a normal path in G; an external path is a pair of internal paths P1 = v1 · · · vs, P2 = w1 · · · wt in G such that vs and w1 are from the chosen set X of external vertices. (The idea is that v1 can communicate with wt along this path using an external link from vs to w1.) For digraphs we use similar vocabulary, but then using directed paths.

Next suppose a certain desired connectivity for the network is prescribed in terms of edge, vertex or arc-connectivity. Say that for a given k there need to be at least k edge- (or vertex- or arc-) disjoint paths (which can be internal or external) between any pair of vertices. What is the smallest set X of external vertices such that this can be achieved?

A related problem is the Source Location Problem: here we need to find, given a graph or digraph and a required connectivity, a subset S of the vertices such that from each vertex in the graph there are at least the required number of edge- (or vertex- or arc-) disjoint paths to the set S. And again, the goal is to minimise the number of vertices in S.

It seems clear that the External Network Problem and the Source Location Problem are closely related. In this talk we discuss these relations, and also show some instances where the problems behave quite differently. Some recent results on the complexity of the different types of problems will be presented as well.

Joint work with Matthew Johnson (University of Durham)

7 October 2008, 16:00, MS.05

Extremal Graph Theory with Colours

We consider graphs whose edges are painted with two colours (some edges might get both colours), and ask, given a fixed such graph H, how large another such graph G can be if it has many vertices but doesn't contain a copy of H. The notion of largeness here is the total weight of G if red edges are given weight p and blue edges weight q, with p+q=1.

This question arises in the applications of Szemerédi's Regularity Lemma, in particular to Ramsey-type games, and to the study of edit distance and property testing.

We shall describe an approach to answering this question and to settling some outstanding issues concerning edit distance.

30 September 2008, 16:00, MS.05

Machine-Job Assignment Problems with Separable Convex Costs and Match-up Scheduling with Controllable Processing Times

In this talk, we discuss scheduling and rescheduling problems with controllable job processing times. We start with the optimal allocation of a set of jobs on a set of parallel machines with given machining time capacities. Each job may have different processing times and different convex compression costs on different machines. We consider finding the machine-job allocation and compression level decisions that minimize the total cost. We propose a strengthened conic quadratic formulation for this problem. We provide theoretical and computational results that prove the quality of the proposed reformulation. We next briefly discuss the match-up scheduling approach in the same scheduling environment with an initial schedule subject to a machine breakdown. We show how processing time controllability provides flexibility in rescheduling decisions.

22 September 2008, 11:00, B1.01

Discounted Deterministic Markov Decision Processes

We present an O(mn)-time algorithm for finding optimal strategies for discounted, infinite-horizon, Deterministic Markov Decision Processes (DMDP), where n is the number of vertices (or states) and m is the number of edges (or actions). This improves a recent O(mn^2)-time algorithm of Andersson and Vorobyov.
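
For readers unfamiliar with the model, the editorial sketch below approximates the optimal discounted values of a tiny deterministic MDP by plain value iteration; it is only a baseline for comparison, not the O(mn) algorithm of the talk.

  def dmdp_values(edges, n, gamma=0.9, iters=2000):
      """Value iteration: V(v) = max over edges (v -> w with reward r) of r + gamma*V(w)."""
      V = [0.0] * n
      for _ in range(iters):
          V = [max(r + gamma * V[w] for (w, r) in edges[v]) for v in range(n)]
      return V

  # A 3-state example: edges[v] is a list of (successor, reward) pairs.
  edges = [[(1, 1.0), (2, 0.0)], [(0, 0.0), (2, 2.0)], [(2, 0.5)]]
  print(dmdp_values(edges, 3))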

Joint work with Omid Madani and Mikkel Thorup.

24 June 2008, 16:00, MS.04

On a Disparity Between Relative Cliquewidth and Relative NLC-width

Cliquewidth and NLC-width are two closely related parameters that measure the complexity of graphs. Both clique- and NLC-width are defined to be the minimum number of labels required to create a labelled graph by certain terms of operations. Many hard problems on graphs become solvable in polynomial time if the inputs are restricted to graphs of bounded clique- or NLC-width. Cliquewidth and NLC-width differ by a factor of at most two.

The relative counterparts of these parameters are defined to be the minimum number of labels necessary to create a graph while the tree-structure of the term is fixed. We show that the problems Relative Cliquewidth and Relative NLC-width differ significantly in computational complexity: the former is NP-complete, the latter is solvable in polynomial time. Additionally, our technique enables combinatorial characterisations of clique- and NLC-width that avoid the usual operations on labelled graphs.

17 June 2008, 16:00, MS.04

Colouring Random Geometric Graphs

The random geometric graph G(n,r) is constructed by picking n vertices X1, ..., Xn ∈ [0,1]^d i.i.d. uniformly at random and adding an edge (Xi, Xj) if ‖Xi-Xj‖ < r. I will discuss the behaviour of the chromatic number of G(n,r) and some related random variables when n tends to infinity and r=r(n) is allowed to vary with n.
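
The model is easy to simulate; the sketch below (an editorial addition) samples G(n,r) and reports the number of colours used by a greedy colouring, which is an upper bound on the chromatic number. The parameters are arbitrary.

  import random
  from itertools import combinations

  def random_geometric_graph(n, r, d=2):
      pts = [tuple(random.random() for _ in range(d)) for _ in range(n)]
      adj = {i: set() for i in range(n)}
      for i, j in combinations(range(n), 2):
          if sum((a - b) ** 2 for a, b in zip(pts[i], pts[j])) < r * r:
              adj[i].add(j)
              adj[j].add(i)
      return adj

  def greedy_colours(adj):
      """Colour vertices greedily in a random order; return the number of colours used."""
      n = len(adj)
      colour = {}
      for v in random.sample(range(n), n):
          used = {colour[w] for w in adj[v] if w in colour}
          colour[v] = next(c for c in range(n) if c not in used)
      return max(colour.values()) + 1

  n = 500
  for r in (0.02, 0.05, 0.1):
      print(r, greedy_colours(random_geometric_graph(n, r)))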

10 June 2008, 16:00, MS.04

A Clique-Based Integer Programming Formulation of Graph Colouring and Applications

In this talk, we first give an overview of several exact approaches to graph colouring: known integer linear programming formulations, approaches based on SDP relaxations, as well as selected approaches not using any mathematical programming. Second, we introduce an integer programming formulation of graph colouring based on reversible clique partition or, seen another way, on an optimal polynomial-time transformation of graph colouring to multicolouring. The talk will be concluded by a discussion of some specifics of graph colouring applications in timetabling and of the utility of the proposed formulation in this context.

4 June 2008, 16:00, MS.04

Fast-Converging Tatonnement Algorithms for One-Time and Ongoing Market Problems

Why might markets tend toward and remain near equilibrium prices? In an effort to shed light on this question from an algorithmic perspective, we formalize the setting of Ongoing Markets, by contrast with the classic market scenario, which we term One-Time Markets. The Ongoing Market allows trade at non-equilibrium prices, and, as its name suggests, continues over time. As such, it appears to be a more plausible model of actual markets.

For both market settings, we define and analyze variants of a simple tatonnement algorithm that differ from previous algorithms that have been subject to asymptotic analysis in three significant respects: the price update for a good depends only on the price, demand, and supply for that good, and on no other information; the price update for each good occurs distributively and asynchronously; the algorithms work (and the analyses hold) from an arbitrary starting point. We show that this update rule leads to fast convergence toward equilibrium prices in a broad class of markets that satisfy the weak gross substitutes property.

Our analysis identifies three parameters characterizing the markets, which govern the rate of convergence of our protocols. These parameters are, broadly speaking:

  • A bound on the fractional rate of change of demand for each good with respect to fractional changes in its price.
  • A bound on the fractional rate of change of demand for each good with respect to fractional changes in wealth.
  • The closeness of the market to a Fisher market (a market with buyers starting with money alone).

Joint work with Lisa Fleischer.

29 May 2008, 12:00, CS1.01

The Multi-Armed Bandit Meets the Web Surfer

The multi-armed bandit paradigm has been studied extensively for over 50 years in the Operations Research, Economics and Computer Science literature, modeling online decisions under uncertainty in a setting in which an agent simultaneously attempts to acquire new knowledge and to optimize its decisions based on the existing knowledge. In this talk I'll discuss several new results motivated by web applications, such as content matching (matching advertising to page content and user's profile) and efficient web crawling.

20 May 2008, 16:00, MS.04

Optimizing Communication Networks: Incentives, Welfare Maximization and Multipath Transfers

We discuss control strategies for communication networks such as the Internet or wireless multihop networks, where the aim is to optimize network performance. We describe how welfare maximization can be used as a paradigm for network resource allocation, and can also be used to derive rate-control algorithms. We apply this to the case of multipath data transfers. We show that welfare maximization requires active balancing across paths by data sources, which can be done provided the underlying "network layer" is able to expose the marginal congestion cost of network paths to the "transport layer". When users are allowed to change the set of paths they use, we show that for large systems the path-set choices correspond to Nash equilibria, and give conditions under which the resultant Nash equilibria correspond to welfare maximising states. Moreover, simple path selection policies that shift to paths with higher net benefit can find these states.

We conclude by commenting on incentives, pricing and open problems.

13 May 2008, 16:00, MS.04

Unsplittable Shortest Path Routing: Hardness and Approximation Algorithms

Most data networks nowadays employ shortest path routing protocols to control the flow of data packets. With these protocols, all end-to-end traffic streams are routed along shortest paths with respect to some administrative routing weights. Dimensioning and routing planning are extremely difficult for such networks, because end-to-end routing paths cannot be configured individually but only jointly and indirectly via the routing weights. Additional difficulties arise if the communication demands must be sent unsplit through the network - a requirement that is often imposed to ensure tractability of end-to-end traffic flows and to prevent packet reordering in practice. In this case, the lengths must be chosen such that the shortest paths are uniquely determined for all communication demands.

In this talk we consider the minimum congestion unsplittable shortest path routing problem MinConUSPR: Given a capacitated digraph D = (V, A) and traffic demands d(s,t), (s, t) ∈ K ⊆ V^2, the task is to find routing weights λa ∈ ℤ+, a ∈ A, such that the shortest (s, t)-path is unique for each (s, t) ∈ K and the maximum arc congestion (induced flow/capacity ratio) is minimal.

We discuss the approximability of this problem and the relation between the unsplittable shortest path routing paradigm and other routing schemes. We show that MinConUSPR is NP-hard to approximate within a factor of O(|V|^(1-ε)), but approximable within min(|A|, |K|) in general. For the special cases where the underlying graph is an undirected cycle or a bidirected ring, we present constant factor approximation algorithms. We also construct problem instances where the minimum congestion that can be obtained by unsplittable shortest path routing is a factor of Ω(|V|^2) larger than that achievable by unsplittable flow routing or shortest multi-path routing, and a factor of Ω(|V|) larger than that achievable by unsplittable source-invariant routing.

6 May 2008, 16:00, MS.04

Diophantine Problems Arising in the Theory of Nonlinear Waves

I will consider waves in finite domains, for example water waves in a swimming pool. For such waves to exchange energy among different modes, these modes must be in resonance in both frequency and wavenumber, which leads to a set of nontrivial conditions/equations in integers. I will discuss some known results and open questions about the solutions of these Diophantine equations.

29 April 2008, 16:00, MS.04

Controlling Epidemic Spread on Networks

The observation that many natural and engineered networks exhibit scale-free degree distributions, and that such networks are particularly prone to epidemic outbreaks, raises questions about how best to control epidemic spread on such networks.

We will begin with some background on threshold phenomena for epidemics, describing how the minimum infection rate needed to sustain long-lived epidemics is related to the cure rate and the topology of the graph. In particular, we will see that, on scale-free networks, this minimum infection rate tends to zero as the network grows large. We will describe two ways of tackling this problem. One involves allocating curing or monitoring resources non-uniformly, favouring high-degree nodes. Another mechanism, called contact tracing, involves identifying and curing neighbours (or contacts) of infected nodes, and is used in practice. We will present some mathematical results for both these mechanisms.

Joint work with Borgs, Chayes, Saberi and Wilson.

22 April 2008, 16:00, MS.04

Strategic Characterization of the Index of an Equilibrium

The index of a Nash equilibrium is an integer that is related to notions of "stability" of the equilibrium, for example under dynamics that have equilibria as rest points. The index is a relatively complicated topological notion, essentially a geometric orientation of the equilibrium. We prove the following theorem, first conjectured by Hofbauer (2003), which characterizes the index in much simpler strategic terms:

Theorem. A generic bimatrix game has index +1 if and only if it can be made the unique equilibrium of an extended game with additional strategies of one player. In an m×n game, it suffices to add 3m strategies of the column player.

The main tool to prove this theorem is a novel geometric-combinatorial method that we call the "dual construction", which we think is of independent interest. It allows one to visualize all equilibria of an m×n game in a diagram of dimension m-1. For example, all equilibria of a 3×n game are visualized with a diagram (essentially, of suitably connected n+3 points) in the plane. This should provide new insights into the geometry of Nash equilibria.

Joint work with Arndt von Schemde.

11 March 2008, 16:00, MS.05

On Recent Progress on Computing Approximate Nash Equilibria

In view of the apparent computational difficulty of computing a Nash equilibrium of a game, recent work has addressed the computation of "approximate Nash equilibrium". The standard criterion of "no incentive for any player to change his strategy" is here replaced by "small incentive for any player to change his strategy". The question is, how small an incentive can we aim for, before once again we have a computationally difficult problem? In this talk, I explain in detail what is meant by Nash equilibrium and approximate Nash equilibrium, and give an overview of some recent results.
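
To pin down the quantity involved, the editorial sketch below computes the incentive to deviate (the "epsilon") of a given mixed-strategy profile in a bimatrix game; matching pennies is used as an arbitrary example, and the payoffs are not normalised to [0,1].

  import numpy as np

  def epsilon(A, B, x, y):
      """Largest gain any player can obtain by deviating from the mixed profile (x, y)
      in the bimatrix game with payoff matrices A (row player) and B (column player)."""
      row_gain = A.dot(y).max() - x.dot(A).dot(y)
      col_gain = x.dot(B).max() - x.dot(B).dot(y)
      return max(row_gain, col_gain)

  # Matching pennies: the uniform profile is an exact equilibrium (epsilon = 0),
  # while every pure profile has epsilon = 2 in this scaling.
  A = np.array([[1, -1], [-1, 1]])
  B = -A
  u = np.array([0.5, 0.5])
  print(epsilon(A, B, u, u), epsilon(A, B, np.array([1.0, 0.0]), np.array([1.0, 0.0])))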

7 March 2008, 15:00, MS.02

Online Minimum Makespan Scheduling with Reordering

In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. We consider online scheduling algorithms without preemption. Much effort has been made to narrow the gap between upper and lower bounds on the competitive ratio of this problem.

A quite good and easy approximation can be achieved by sorting the jobs according to their size and then assigning them greedily to machines. However, the complete sorting of the input sequence contradicts the notion of an online algorithm. Therefore, we propose a new method to somewhat reorder the input in an online manner. Although our proofs are technically surprisingly simple, we give, for any number of identical machines, tight and significantly improved bounds on the competitive ratio for this new method.
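
The sorted-greedy rule mentioned above takes only a few lines; the sketch below (an editorial addition) computes the makespan it produces on an arbitrary example.

  import heapq

  def sorted_greedy_makespan(jobs, m):
      """Sort jobs by decreasing size, then give each to the currently least-loaded
      of the m machines; return the resulting makespan."""
      loads = [0.0] * m
      heapq.heapify(loads)
      for p in sorted(jobs, reverse=True):
          least = heapq.heappop(loads)
          heapq.heappush(loads, least + p)
      return max(loads)

  print(sorted_greedy_makespan([7, 5, 4, 4, 3, 3, 2], 3))   # 10.0, optimal for this instance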

This talk is based on joint work with Deniz Özmen and Matthias Westermann

4 March 2008, 16:00, MS.05

Balloon Popping with Applications to Ascending Auctions

We study the power of ascending auctions in a scenario in which a seller is selling a collection of identical items to anonymous unit-demand bidders. We show that even with full knowledge of the set of bidders' private valuations for the items, if the bidders are ex-ante identical, no ascending auction can extract more than a constant times the revenue of the best fixed price scheme.

This problem is equivalent to the problem of coming up with an optimal strategy for blowing up indistinguishable balloons with known capacities in order to maximize the amount of contained air. We show that the algorithm which simply inflates all balloons to a fixed volume is close to optimal in this setting.

Joint work with Anna Karlin, Mohammad Mahdian, and Kunal Talwar

26 February 2008, 16:00, MS.05

Randomly colouring a random graph

We will consider generating a random vertex colouring of a graph using a Markov chain. The aim is to determine the smallest number of colours, measured as a multiple of the maximum degree, for which the chain converges in time polynomial in the number of vertices. We will examine this problem for the case of an Erdős-Rényi random graph. This has been studied previously for graphs with small edge density. However, the methods used in the sparse case do not seem applicable as the edge density becomes larger. We will describe a different approach, which can be used to obtain results for denser graphs.
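
The Markov chain in question is straightforward to simulate; the sketch below (an editorial addition, not from the talk) runs Glauber dynamics for proper colourings on a sample of G(n,p), with the number of colours set, for illustration only, to twice the maximum degree.

  import random
  from itertools import combinations

  def glauber(adj, q, steps):
      """Repeatedly pick a uniform vertex and recolour it with a uniformly random
      colour not currently used by any of its neighbours (if such a colour exists)."""
      n = len(adj)
      col = {v: v % q for v in adj}          # starting colouring need not be proper
      for _ in range(steps):
          v = random.randrange(n)
          free = [c for c in range(q) if all(col[w] != c for w in adj[v])]
          if free:
              col[v] = random.choice(free)
      return col

  # Erdős-Rényi G(n, p) with a comfortable number of colours.
  n, p = 50, 0.1
  adj = {i: set() for i in range(n)}
  for i, j in combinations(range(n), 2):
      if random.random() < p:
          adj[i].add(j)
          adj[j].add(i)
  q = max(4, 2 * max(len(adj[v]) for v in adj))
  col = glauber(adj, q, steps=20000)
  print(all(col[u] != col[v] for u in adj for v in adj[u]))   # is the colouring proper?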

12 February 2008, 16:00, MS.05

Game theoretic Approach to Hedging of Rainbow Options

We suggest a game-theoretic framework (the now so-called 'interval model') for the analysis of option prices in financial mathematics. This leads to remarkable explicit formulas and effective numerical procedures for hedging rainbow (or coloured) options.

5 February 2008, 16:00, MS.05

Short-length routes in low-cost networks via Poisson line patterns

How efficiently can one move about in a network linking a configuration of n cities? Here the notion of "efficient" has to balance (a) total network length against (b) short network distances between cities. My talk will explain how to use Poisson line processes to produce networks which are nearly of shortest total length and which make the average inter-city distance almost Euclidean.
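
As a rough illustration of the main ingredient (an illustrative sketch, not the construction of the talk), an isotropic Poisson line process restricted to a disc is easy to simulate: each line hitting the disc has a uniformly random normal direction and a uniformly random signed distance from the centre.

    import numpy as np

    def poisson_lines_in_disc(mean_lines, R, rng=np.random.default_rng()):
        """Sample an isotropic Poisson line process restricted to the disc of
        radius R centred at the origin.  A line is returned as (theta, p) and
        consists of the points (x, y) with x*cos(theta) + y*sin(theta) = p.
        mean_lines is the expected number of lines hitting the disc."""
        n = rng.poisson(mean_lines)
        thetas = rng.uniform(0.0, np.pi, size=n)
        ps = rng.uniform(-R, R, size=n)
        return list(zip(thetas, ps))

    lines = poisson_lines_in_disc(mean_lines=20, R=5.0)
    print(len(lines), "lines sampled")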

This is joint work with David Aldous.

29 January 2008, 16:00, MS.05

Clustering for Metric and Non-Metric Distance Measures

We study a generalization of the k-median problem with respect to an arbitrary dissimilarity measure D. Given a finite point set P, our goal is to find a set C of size k such that the sum of errors D(P,C) = Σ_{p ∈ P} min_{c ∈ C} D(p,c) is minimized. The main result in this talk can be stated as follows: there exists an O(n · 2^{(k/ε)^{O(1)}}) time (1+ε)-approximation algorithm for the k-median problem with respect to D, if the 1-median problem can be approximated within a factor of (1+ε) by taking a random sample of constant size and solving the 1-median problem on the sample exactly.

Using this characterization we obtain the first linear time (1+ε)-approximation algorithms for the k-median problem for doubling metrics, for the Kullback-Leibler divergence, Mahalanobis distances and some special cases of Bregman divergences.
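
To fix notation (a toy illustration only, not the algorithm of the talk), the objective being minimised can be evaluated for an arbitrary dissimilarity D; here it is instantiated with squared Euclidean distance and with the Kullback-Leibler divergence:

    import numpy as np

    def clustering_cost(P, C, D):
        """D(P, C) = sum over points p in P of min over centres c in C of D(p, c)."""
        return sum(min(D(p, c) for c in C) for p in P)

    def sq_euclidean(p, c):
        return float(np.sum((p - c) ** 2))

    def kl_divergence(p, c):
        """Kullback-Leibler divergence between probability vectors p and c (c > 0)."""
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / c[mask])))

    P = [np.array([0.2, 0.8]), np.array([0.5, 0.5]), np.array([0.9, 0.1])]
    C = [np.array([0.3, 0.7]), np.array([0.8, 0.2])]
    print(clustering_cost(P, C, sq_euclidean))
    print(clustering_cost(P, C, kl_divergence))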

22 January 2008, 16:00, MS.05

The Travelling Salesman Problem: Are there any Open Problems left? Part II.

We consider the notorious travelling salesman problem (TSP) and such seemingly well-studied algorithms as the Held & Karp dynamic programming algorithm, the double-tree and Christofides heuristics, as well as algorithms for finding optimal pyramidal tours. We show how the ideas from these classical algorithms can be combined to obtain one of the best tour-constructing TSP heuristics. The main message of our presentation: despite the fact that the TSP has been investigated for decades, and thousands(!) of papers have been published on the topic, there are still plenty of exciting open problems, some of which (we strongly believe) are not that difficult to resolve.
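
As a reminder of the first of these classical building blocks (a textbook sketch, not the combined heuristic of the talk), the Held & Karp dynamic programming algorithm solves the TSP exactly in O(2^n n^2) time:

    from itertools import combinations

    def held_karp(dist):
        """Exact TSP by Held & Karp dynamic programming.  dist is an n x n
        matrix; returns the length of a shortest tour starting and ending at
        city 0."""
        n = len(dist)
        # best[(S, j)]: shortest path from 0 visiting exactly the cities in S, ending at j.
        best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
        for size in range(2, n):
            for S in combinations(range(1, n), size):
                S = frozenset(S)
                for j in S:
                    best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j] for k in S - {j})
        full = frozenset(range(1, n))
        return min(best[(full, j)] + dist[j][0] for j in range(1, n))

    dist = [[0, 2, 9, 10],
            [1, 0, 6, 4],
            [15, 7, 0, 8],
            [6, 3, 12, 0]]
    print(held_karp(dist))   # 21 on this small asymmetric instance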

15 January 2008, 16:00, MS.05

The Travelling Salesman Problem: Are there any Open Problems left? Part I.

We consider the notorious travelling salesman problem (TSP) and such seemingly well-studied algorithms as the Held & Karp dynamic programming algorithm, the double-tree and Christofides heuristics, as well as algorithms for finding optimal pyramidal tours. We show how the ideas from these classical algorithms can be combined to obtain one of the best tour-constructing TSP heuristics. The main message of our presentation: despite the fact that the TSP has been investigated for decades, and thousands(!) of papers have been published on the topic, there are still plenty of exciting open problems, some of which (we strongly believe) are not that difficult to resolve.

11 December 2007, 16:00, CS1.04

Improving Integer Programming Solvers: {0,½}-Chvátal-Gomory Cuts

Chvátal-Gomory cuts are among the most well-known classes of cutting planes for general integer linear programs (ILPs). When the constraint multipliers are restricted to 0 or ½, such cuts are known as {0,½}-cuts. This talk reports on our study of how to separate general {0,½}-cuts effectively, despite the NP-completeness of this separation problem.

We propose a range of preprocessing rules to reduce the size of the separation problem. The core of the preprocessing is a Gaussian-elimination-like procedure. To separate the most violated {0,½}-cut, we formulate the (reduced) problem as an integer linear program. Some simple heuristic separation routines complete the algorithmic framework.
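
For concreteness (a toy sketch of the underlying rounding step, not the separation procedure of the talk), given an integer system Ax ≤ b and a multiplier vector u with entries in {0, ½}, the corresponding {0,½}-cut is ⌊uᵀA⌋x ≤ ⌊uᵀb⌋; the hypothetical helper below just performs this rounding:

    import numpy as np

    def zero_half_cut(A, b, u):
        """Given an integer system A x <= b and multipliers u with entries in
        {0, 1/2}, return the cut floor(u^T A) x <= floor(u^T b), which is valid
        for every integer point satisfying A x <= b."""
        lhs = np.floor(u @ A).astype(int)
        rhs = int(np.floor(u @ b))
        return lhs, rhs

    # Toy example: the three "edge" constraints of a triangle; their half-sum
    # rounds down to the odd-cycle cut x1 + x2 + x3 <= 1.
    A = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]])
    b = np.array([1, 1, 1])
    u = np.array([0.5, 0.5, 0.5])
    lhs, rhs = zero_half_cut(A, b, u)
    print(lhs, "<=", rhs)   # [1 1 1] <= 1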

Computational experiments on benchmark instances show that combining preprocessing with exact and/or heuristic separation is a very effective way to generate strong generic cutting planes for integer linear programs and to reduce the overall computation times of state-of-the-art ILP solvers.

4 December 2007, 16:00, CS1.04

Strategic Game Playing and Equilibrium Refinements

Koller, Megiddo and von Stengel showed in 1994 how to efficiently find minimax behavior strategies of two-player imperfect-information zero-sum extensive games using linear programming. Their algorithm has been widely used by the AI community to solve very large games, in particular variants of poker. However, it is a well-known fact of game theory that the Nash equilibrium concept has serious deficiencies as a prescriptive solution concept, even in the case of zero-sum games, where Nash equilibria are simply pairs of minimax strategies. That these deficiencies show up in practice in AI applications was documented by Koller and Pfeffer. In this talk, we argue that the theory of equilibrium refinements from game theory provides a satisfactory framework for repairing these deficiencies, also for the AI applications. We describe a variant of the Koller, Megiddo and von Stengel algorithm that computes a *quasi-perfect* equilibrium and another variant that computes all *normal-form proper* equilibria. We also present a simple yet non-trivial characterization of the normal-form proper equilibria of a two-player zero-sum game with perfect information.
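
For background (a standard sketch of the minimax linear programme for a zero-sum game in normal form, not the sequence-form algorithm of Koller, Megiddo and von Stengel or its refined variants), a maximin strategy can be computed as follows:

    import numpy as np
    from scipy.optimize import linprog

    def maximin_strategy(A):
        """Maximin mixed strategy of the row player in the zero-sum game with
        payoff matrix A (row player maximises).  Variables (x_1..x_m, v):
        maximise v subject to x^T A >= v column-wise and x a probability vector."""
        m, n = A.shape
        c = np.zeros(m + 1)
        c[-1] = -1.0                               # minimise -v, i.e. maximise v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])  # for each column j: v - (x^T A)_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum_i x_i = 1
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m], res.x[-1]

    # Matching pennies: the value is 0 and the maximin strategy is uniform.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    x, v = maximin_strategy(A)
    print(x, v)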

The talk is based on joint papers with Troels Bjerre Sørensen, appearing at SODA'06 and SODA'08.

27 November 2007, 16:00, CS1.04

An Approximation Trichotomy for Boolean #CSP

This talk examines the computational complexity of approximating the number of solutions to a Boolean constraint satisfaction problem (CSP). It extends a line of investigation started in a classical paper of Schaefer on the complexity of the decision problem for CSPs, and continued by Creignou and Hermann, who addressed exact counting.

We find that the class of Boolean CSPs exhibits a trichotomy. Depending on the set of allowed relations, the CSP may be polynomial-time solvable (even exactly); or the number of solutions may be as hard to approximate as the number of accepting computations of a non-deterministic Turing machine. But there is a third possibility: approximating the number of solutions may be complete for a certain logically defined complexity class, and hence equivalent in complexity to a number of natural approximate counting problems, of which counting independent sets in a bipartite graph is an example.

All the necessary background material on approximate counting and CSPs will be covered.

This is joint work with Martin Dyer (Leeds) and Leslie Ann Goldberg (Liverpool).

20 November 2007, 16:00, CS1.04

From Tree-width to Clique-width: Excluding a Unit Interval Graph

From the theory of graph minors we know that the class of planar graphs is the only critical class with respect to tree-width. In this talk, we reveal a critical class with respect to clique-width, a notion generalizing tree-width. This class is known in the literature under different names, such as unit interval, proper interval or indifference graphs, and has important applications in various fields. We prove that the unit interval graphs constitute a minimal hereditary class of unbounded clique-width and discuss some applications of the obtained result.

13 November 2007, 16:00, CS1.04

Cut-based Inequalities for Capacitated Network Design Problems

In this talk we study basic network design problems with modular capacity assignment. The focus is on classes of strong valid inequalities that are defined for cuts of the underlying network. We show that regardless of the link model, facets of the polyhedra associated with such a cut translate to facets of the original network design polyhedra if the two subgraphs defined by the network cut are (strongly) connected. Accordingly, we focus on the facial structure of the cutset polyhedra. A large class of facets of cutset polyhedra is defined by flow-cutset inequalities. These inequalities are presented in a unifying way for different model variants. We propose a generic separation algorithm and, in an extensive computational study on 54 instances from the survivable network design library (SNDlib), we show that the performance of CPLEX can be significantly enhanced by this class of cutting planes.

6 November 2007, 16:00, CS1.04

Submodular Percolation and the Worm Order

Suppose we have an m×n grid, two real-valued functions f and g on [m] and [n] respectively, and a threshold value b. We declare a point (i,j) of the grid to be open if f(i) + g(j) is at most b. It turns out that, if there is a path of open points from the bottom point (1,1) to the top point (m,n), then there is a monotone path of open points; i.e., if we can get from bottom to top at all, then we can do so without ever having to retreat.

This is an instance of the following much more general result. Let f be a submodular function on a modular lattice L; we show that there is a maximal chain C in L on which the sequence of values of f is minimal among all paths from 0 to 1 in the Hasse diagram of L, in a certain well-behaved partial order -- the "worm order" -- on sequences of reals. One consequence is that the maximum value of f on C is minimised over all such paths. We discuss the worm order, relate our work to that on searching a graph, and mention some sample applications.
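
A minimal sketch of the grid instance above (an illustrative example, using the stated openness rule f(i) + g(j) ≤ b), which decides whether a monotone open path between the two corners exists:

    def monotone_open_path_exists(f, g, b):
        """Point (i, j) is open iff f[i] + g[j] <= b.  Decide whether the corner
        (m-1, n-1) is reachable from (0, 0) through open points using only steps
        that increase i or j (0-indexed)."""
        m, n = len(f), len(g)
        reach = [[False] * n for _ in range(m)]
        for i in range(m):
            for j in range(n):
                if f[i] + g[j] > b:
                    continue                       # closed point
                if i == 0 and j == 0:
                    reach[i][j] = True
                else:
                    reach[i][j] = (i > 0 and reach[i - 1][j]) or (j > 0 and reach[i][j - 1])
        return reach[m - 1][n - 1]

    # By the result above, if any open path connects the two corners then a
    # monotone one does, so this check already decides open reachability.
    f = [0.0, 3.0, 1.0]
    g = [0.0, 2.0, 1.0]
    print(monotone_open_path_exists(f, g, b=3.0))   # True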

Joint work with Peter Winkler.

30 October 2007, 16:00, CS1.04

Ant Colony Optimization for Vehicle Routing

In this talk I will first present the basic working principles of Ant Colony Optimization (ACO) and then explain some of our work on applying ACO to vehicle routing problems. The main idea in this context is to exploit problem knowledge to improve the performance of the general purpose heuristic ACO.
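
As a rough sketch of those working principles (a generic textbook-style illustration for the TSP, not the vehicle-routing variant or the problem-specific enhancements of the talk), ants build tours guided by pheromone and inverse distance, and pheromone then evaporates and is reinforced along the best tour found so far:

    import random

    def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
        """Very small Ant Colony Optimization sketch for the symmetric TSP.
        alpha weighs pheromone, beta weighs heuristic desirability (1/distance),
        rho is the evaporation rate."""
        rng = random.Random(seed)
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]        # pheromone trails
        best_tour, best_len = None, float("inf")

        def tour_length(tour):
            return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

        for _ in range(n_iters):
            tours = []
            for _ in range(n_ants):
                tour = [rng.randrange(n)]
                unvisited = set(range(n)) - {tour[0]}
                while unvisited:
                    i = tour[-1]
                    weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                               for j in unvisited]
                    total = sum(w for _, w in weights)
                    r, acc = rng.random() * total, 0.0
                    for j, w in weights:           # roulette-wheel selection
                        acc += w
                        if acc >= r:
                            tour.append(j)
                            unvisited.remove(j)
                            break
                tours.append(tour)
            for tour in tours:                     # keep the best tour seen so far
                length = tour_length(tour)
                if length < best_len:
                    best_tour, best_len = tour, length
            for i in range(n):                     # evaporation ...
                for j in range(n):
                    tau[i][j] *= (1.0 - rho)
            for k in range(n):                     # ... then reinforcement along the best tour
                i, j = best_tour[k], best_tour[(k + 1) % n]
                tau[i][j] += 1.0 / best_len
                tau[j][i] += 1.0 / best_len
        return best_tour, best_len

    dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
    print(aco_tsp(dist))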

23 October 2007, 16:00, CS1.04

Parameterized Algorithms for Directed Maximum Leaf Problems

We prove that finding a rooted subtree with at least k leaves in a digraph is a fixed parameter tractable problem. A similar result holds for finding rooted spanning trees with many leaves in digraphs from a wide family L that includes all strong and acyclic digraphs. This settles completely an open question of Fellows and solves another one for digraphs in L. We also prove the following combinatorial result which can be viewed as a generalization of many results for a "spanning tree with many leaves" in the undirected case, and which is interesting on its own: If a digraph D ∈ L of order n with minimum in-degree at least 3 contains a rooted spanning tree, then D contains one with at least (n/4)^{1/3} - 1 leaves.

Joint work with Noga Alon, Fedor Fomin, Michael Krivelevich and Saket Saurabh.

16 October 2007, 16:00, CS1.04

Hierarchical Graph Decompositions for Minimizing Congestion

An oblivious routing protocol makes its routing decisions independently of the traffic in the underlying network. This means that the path chosen for a routing request may only depend on its source node, its destination node, and on some random input. In spite of these serious limitations it has been shown that there are oblivious routing algorithms that obtain a polylogarithmic competitive ratio w.r.t. the congestion in the network (i.e., the maximum load of a network link).

In this talk I will present a new hierarchical decomposition technique that leads to good oblivious routing algorithms. This decomposition can also be used as a generic tool for solving a large variety of cut based problems in undirected graphs.

9 October 2007, 16:00, CS1.04

Overhang Bounds

How far can a stack of n identical blocks be made to hang over the edge of a table? The question dates back to at least the middle of the 19th century and the answer to it was widely believed to be of order log n. Recently, we (Paterson and Zwick) constructed n-block stacks with overhangs of order n^{1/3}, exponentially better than previously thought possible. The latest news is that we (Paterson, Peres, Thorup, Winkler and Zwick) can show that order n^{1/3} is best possible, resolving the long-standing overhang problem up to a constant factor.

I shall review the construction and describe the upper bound proof, which illustrates how methods founded in algorithmic complexity can be applied to a discrete optimization problem that has puzzled some mathematicians and physicists for more than 150 years.
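
For a sense of scale (a back-of-the-envelope illustration, not part of the proof), the classical one-block-per-level harmonic stack achieves an overhang of H_n/2 ≈ (1/2) ln n block lengths, which the comparison below sets against the n^{1/3} growth rate:

    def harmonic_overhang(n):
        """Overhang (in block lengths) of the classical stack that shifts the
        i-th block from the top by 1/(2i); it equals H_n / 2."""
        return 0.5 * sum(1.0 / i for i in range(1, n + 1))

    for n in (10, 100, 1000, 10**6):
        print(n, round(harmonic_overhang(n), 3), round(n ** (1.0 / 3.0), 3))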

19 July 2007, 11:00, CS1.01

Connecting Bits with Floating-Point Numbers: Model Checking and Theorem Proving in Practice

High-performance floating-point hardware is notoriously difficult to design correctly. In fact, until recently, the most expensive hardware bug encountered in industry was the Intel FDIV bug of 1994, which led to a charge of 450 million dollars against earnings. At the same time, floating-point arithmetic is unique in that there is a fairly unambiguous specification, the IEEE 754 standard, which all implementations should obey. Unfortunately, bridging the gap between the mathematical specification of the standard and a low-level implementation is an extremely difficult task. At Intel we have developed an approach that combines the strengths of theorem proving with symbolic trajectory evaluation (a type of model checking), which allows us to completely verify the correctness of today's floating-point units. In this talk I will discuss the evolution of this approach and present some of the results that have been obtained, in addition to areas of future research.
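
As a tiny illustration of the bit-level/real-number correspondence such proofs must bridge (a hypothetical example, unrelated to Intel's verification tools), an IEEE 754 double decomposes into sign, exponent and fraction fields as follows:

    import struct

    def decompose_double(x):
        """Split an IEEE 754 binary64 value into its sign bit, 11-bit biased
        exponent and 52-bit fraction field."""
        bits = struct.unpack(">Q", struct.pack(">d", x))[0]
        sign = bits >> 63
        exponent = (bits >> 52) & 0x7FF
        fraction = bits & ((1 << 52) - 1)
        return sign, exponent, fraction

    for x in (1.0, -2.5, 0.1):
        s, e, f = decompose_double(x)
        print(x, "-> sign", s, "exponent", e, "fraction", hex(f))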

28 June 2007, 11:00, B3.02

Tree-width and Optimization in Bounded Degree Graphs

It is well known that boundedness of tree-width implies polynomial-time solvability of many algorithmic graph problems. The converse statement is generally not true, i.e., polynomial-time solvability does not necessarily imply boundedness of tree-width. However, in graphs of bounded vertex degree, for some problems, the two concepts behave in a more consistent way. We study this phenomenon with respect to three important graph problems -- dominating set, independent dominating set and induced matching -- and obtain several results toward revealing the equivalence between boundedness of tree-width and polynomial-time solvability of these problems in bounded degree graphs. Joint work with Martin Milanic.

29 May 2007, 12:00, Room CS1.01

A Survey of Short PCP Constructions

The PCP theorem [AS, ALMSS] specifies a way of encoding any mathematical proof such that the validity of the proof can be verified probabilistically by querying the proof only at a constant number of locations. Furthermore, this encoding is complete and sound, i.e., valid proofs are always accepted while invalid proofs are accepted with probability at most 1/2. A natural question that arises in the construction of PCPs is by how much this encoding blows up the original proof while retaining low query complexity.

The study of efficient probabilistically checkable proofs was initiated in the works of Babai et al. [BFLS] and Feige et al. [FGLSS], with very different motivations and emphases. The work of Babai et al. considered the direct motivation of verifying proofs highly efficiently. In contrast, Feige et al. established a dramatic connection between efficient probabilistically checkable proofs (PCPs) and the inapproximability of optimization problems.

In this talk, I'll review some of the recent works of Ben-Sasson et al. [BGHSV'04, BGHSV'05], Dinur and Reingold [DR], Ben-Sasson and Sudan [BS'04] and Dinur [Din'06] that have led to the construction of PCPs that are at most a polylogarithmic factor longer than the original proof and, furthermore, are checkable by a verifier running in time at most polylogarithmic in the length of the original proof.

An important ingredient in these constructions is a new variant of PCPs called "PCPs of Proximity". These new PCPs facilitate proof composition, a crucial component in PCP constructions. I'll outline how these new PCPs allow for shorter PCP constructions and efficient proof verification.

3 May 2007, 15:30, B2.13 (WBS Scarman Road Meeting Room)

Solved and Unsolved Optimization Challenges in Telecommunications

We give an overview of the optimization challenges we have addressed in recent years, ranging from frequency assignment in mobile networks to multi-layer network design in backbone networks. We discuss the practical problems, the mathematical modelling, the methodologies used to tackle these problems, and the results achieved.