Winter 2021/22

Algorithmische Diskrete Mathematik

Winter semester 2021/22, 9 CP
03-M-WP-18, StudIP

Algorithmic discrete mathematics is a rather young field with roots in algebra, graph theory, combinatorics, computer science (algorithmics), and optimization. It deals with discrete structures such as sets, graphs, permutations, partitions, and discrete optimization problems.

This course gives an introduction to algorithmic discrete mathematics. It covers structural and algorithmic foundations of graph theory and combinatorial optimization. The focus is on the design and mathematical analysis of algorithms for solving combinatorial optimization problems exactly. Topics include, among others:

  • Introduction to graph theory, combinatorial and linear optimization
  • Graph theory: basic notions, paths in graphs, Eulerian and Hamiltonian cycles, trees
  • Algorithmic foundations (encoding length, running time, polynomial-time algorithms)
  • Spanning trees, matchings, network flows and cuts (combinatorial algorithms; see the sketch after this list)
  • A glimpse into linear optimization: modeling, polyhedral theory, optimality criteria, duality
  • Elements of complexity theory
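
As a small, purely illustrative sketch of the kind of combinatorial algorithm covered under spanning trees (referenced in the topic list above): Kruskal's algorithm computes a minimum spanning tree by scanning edges in order of increasing weight and keeping an edge whenever it connects two different components. The example graph below is invented for this sketch and is not taken from the course material.

```python
# Minimal sketch of Kruskal's minimum spanning tree algorithm,
# one of the combinatorial algorithms typically treated under "spanning trees".
# The example graph is invented purely for illustration.

def kruskal(n, edges):
    """n: number of vertices 0..n-1; edges: list of (weight, u, v)."""
    parent = list(range(n))          # union-find forest

    def find(x):
        # Walk to the root, flattening the path along the way.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):    # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                 # the edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

if __name__ == "__main__":
    # Small example: 4 vertices, edges given as (weight, u, v).
    edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
    print(kruskal(4, edges))         # [(0, 1, 1), (1, 3, 2), (1, 2, 3)]
```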


The course is aimed primarily at advanced Bachelor's students, but is also suitable for Master's students.

The 6 SWS (weekly contact hours) consist of

  • 2 SWS lecture following a flipped-classroom concept: asynchronous videos and material, plus sessions for discussion and exchange
  • 2 SWS interactive exercise session
  • 2 SWS seminar component: independent study of a current research article and a presentation

First session (introduction and first lecture): in person on Thursday, October 21, 10-12, in MZH 3150.
Allocation of the presentation topics (seminar component): in person on Thursday, October 21, after the lecture.

 

Lecturer: Prof. Dr. Nicole Megow
Assistant: Jens Schlöter

Literature:

  • [KV] Korte, Vygen: Kombinatorische Optimierung: Theorie und Algorithmen, Springer, 2012.
  • [KN] Krumke, Noltemeier: Graphentheoretische Konzepte und Algorithmen, Springer-Vieweg, 2012.
  • [CLRS] Cormen, Leiserson, Rivest, Stein: Introduction to Algorithms, 3rd edition, MIT Press, 2009.
  • [KT] Kleinberg, Tardos: Algorithm Design, Pearson, 2006.
  • [AMO] Ahuja, Magnanti, Orlin: Network Flows: Theory, Algorithms, and Applications, Prentice-Hall, 1993.
  • [BT] Bertsimas, Tsitsiklis: Introduction to Linear Optimization, Athena Scientific, 1997.
  • [GKT] Guenin, Könemann, Tunçel: A Gentle Introduction to Optimization, Cambridge University Press, 2014.

Algorithms and Uncertainty

Winter semester 2021/22, 9 CP
03-IMAT-AU, StudIP

A key assumption of many powerful optimization methods is that all the data is fully accessible from the beginning.

However, in many real-world applications (e.g., logistics, production and project planning, cloud computing) this assumption is simply not true. Large data centers allocate resources to tasks without knowing exact execution times or energy requirements; transit times in networks are often uncertain; and parameters such as bandwidth, demand, or energy consumption fluctuate heavily. The current trend toward data collection and data-driven applications often amplifies this phenomenon. As the amount of available data increases tremendously due to internet technology, cloud systems, and sharing markets, modern algorithms are expected to be highly adaptive and to learn from and benefit from a dynamically changing mass of data.

In the above examples, our knowledge of the current data is only partial or based on historical estimates. The class Algorithms and Uncertainty will teach students about the most common models of such uncertain data and how to design and analyze efficient algorithms in these models.

Specifically, we will cover the theory of online optimization, where the input arrives without any prior information (such as network packets arriving at a router) and must be processed immediately, before the next piece of input arrives. This model is best suited for analyzing critical networking and scheduling systems in which devices and algorithms must perform well even in the worst case.
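
To make the online model concrete, the ski-rental problem is a standard textbook illustration of worst-case (competitive) analysis: each day one must rent or buy skis without knowing how long the season will last. The sketch below only illustrates the idea of a competitive guarantee; it is an assumption, not a statement about the exact examples used in the course.

```python
# Ski rental: each day we either rent (cost 1) or buy (one-time cost B),
# without knowing how many days the season lasts. The classical deterministic
# strategy "rent for B-1 days, then buy" pays at most (2 - 1/B) times the
# optimal offline cost, i.e., it is (2 - 1/B)-competitive.
# Illustrative example only, not necessarily the example used in the course.

def break_even_cost(days: int, buy_price: int) -> int:
    """Cost of the break-even strategy on a season lasting `days` days."""
    rented = min(days, buy_price - 1)              # rent up to day B-1
    bought = buy_price if days >= buy_price else 0
    return rented + bought

def offline_optimum(days: int, buy_price: int) -> int:
    """With hindsight: rent every day or buy immediately, whichever is cheaper."""
    return min(days, buy_price)

if __name__ == "__main__":
    B = 10
    for days in (1, 5, 9, 10, 11, 100):
        alg, opt = break_even_cost(days, B), offline_optimum(days, B)
        print(f"days={days:3d}  ALG={alg:3d}  OPT={opt:3d}  ratio={alg/opt:.2f}")
```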

In cases where previous history can be used to model the upcoming data, we often employ robust or stochastic optimization. In robust optimization, the aim is to optimize the worst case over all possible realizations of the input data; hence, this model is rather conservative. In stochastic optimization, however, the algorithm assumes that the data is drawn from a probability distribution known ahead of time, and the goal is typically to optimize the expected value.
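
Schematically, and only as a hedged sketch with generic notation (the feasible set X, scenario set S, and per-scenario cost vectors c_s are not taken from the course material), the two paradigms can be contrasted as follows:

```latex
% Robust optimization: hedge against the worst-case scenario.
\min_{x \in X} \; \max_{s \in S} \; c_s^\top x

% Stochastic optimization: scenarios follow a known distribution D,
% and we minimize the expected cost.
\min_{x \in X} \; \mathbb{E}_{s \sim D}\!\left[ c_s^\top x \right]
```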

Nowadays, another source of information is often available: machine learning algorithms can generate predictions that are accurate most of the time. However, there is no guarantee on the quality of a prediction, as the current instance may not be covered by the training set. This observation motivates a very recent research domain that will be covered in this course: how to use error-prone predictions to improve algorithms while retaining worst-case guarantees.
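
A well-known illustration from this line of research is ski rental with a predicted season length, in the style of Purohit, Svitkina and Kumar (2018): a trust parameter interpolates between following the prediction and the classical worst-case strategy. The sketch below is only an illustration; it is an assumption that this exact example appears in the course.

```python
import math

# Ski rental with a machine-learned prediction of the season length
# (deterministic scheme in the style of Purohit, Svitkina & Kumar, 2018;
# shown only as an illustration, not as confirmed course content).
# lam in (0, 1] controls how much we trust the prediction:
# small lam = trust more (cost at most (1 + lam) * OPT if the prediction is right),
# large lam = trust less (cost at most (1 + 1/lam) * OPT in the worst case).

def cost_with_prediction(days: int, buy_price: int,
                         predicted_days: int, lam: float) -> int:
    if predicted_days >= buy_price:
        buy_day = math.ceil(lam * buy_price)       # prediction says: buy early
    else:
        buy_day = math.ceil(buy_price / lam)       # prediction says: keep renting
    if days >= buy_day:
        return (buy_day - 1) + buy_price           # rented until we bought
    return days                                    # season ended before buying

if __name__ == "__main__":
    B, lam = 10, 0.5
    for true_days, predicted in [(100, 100), (100, 3), (3, 100), (3, 3)]:
        alg = cost_with_prediction(true_days, B, predicted, lam)
        opt = min(true_days, B)
        print(f"true={true_days:3d} predicted={predicted:3d}  ALG={alg:3d}  OPT={opt:3d}")
```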

Lecturers: Prof. Dr. Nicole Megow, Dr. Felix Hommelsheim

Time + Room: Tuesday 12-14 (MZH 1450) and Thursday 8-10 (MZH 3150)

Format: lectures twice a week with integrated, interactive exercise sessions

Examination: individual oral exam; for admission to the oral exam, it is mandatory to present solutions in the exercise sessions at least twice during the term.
 

Seminar: Highlights of Robust Optimization

Winter semester 2021/2022, 3/6 CP

This seminar focuses on recent research in the broad area of robust optimization and highlights various techniques to handle different types of uncertainty.

Many combinatorial optimization problems do not take into account that the input data of real-world applications may be uncertain. One approach to tackling this uncertainty is robust optimization, in which we optimize the worst case over all possible realizations of the input data. However, depending on the application, the uncertainty can be expressed in many different ways: in the cost of the resources, in the structure of the problem affecting the set of feasible solutions, or even in which constraints have to be satisfied. We will study different approaches to handling these kinds of uncertainty for robust counterparts of classical combinatorial optimization problems such as shortest path, minimum spanning tree, or the assignment problem.
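
As a toy illustration of such a robust counterpart (the graph, the scenario set, and the brute-force enumeration below are invented for this sketch and are not how the seminar papers necessarily proceed), one can pick the s-t path whose worst-case cost over a small set of cost scenarios is smallest:

```python
# Toy robust shortest path: edge costs are uncertain and given by a small
# set of scenarios; we pick the s-t path minimizing its worst-case cost.
# Graph, scenarios, and brute-force enumeration are invented for illustration.

def all_paths(graph, s, t, path=None):
    """Enumerate all simple s-t paths in a directed graph (adjacency dict)."""
    path = (path or []) + [s]
    if s == t:
        yield path
        return
    for v in graph.get(s, []):
        if v not in path:
            yield from all_paths(graph, v, t, path)

def robust_shortest_path(graph, scenarios, s, t):
    def worst_case(path):
        edges = list(zip(path, path[1:]))
        return max(sum(costs[e] for e in edges) for costs in scenarios)
    return min(all_paths(graph, s, t), key=worst_case)

if __name__ == "__main__":
    graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
    scenarios = [  # two cost scenarios, one cost per edge
        {("s", "a"): 1, ("a", "t"): 1, ("s", "b"): 2, ("b", "t"): 2},
        {("s", "a"): 9, ("a", "t"): 1, ("s", "b"): 2, ("b", "t"): 2},
    ]
    # Path s-a-t costs 2 or 10 depending on the scenario (worst case 10);
    # path s-b-t costs 4 in both scenarios, so it is the robust choice.
    print(robust_shortest_path(graph, scenarios, "s", "t"))   # ['s', 'b', 't']
```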

Lecturers: Prof. Dr. Nicole Megow, Dr. Felix Hommelsheim

Format:
The seminar is aimed at Master's students; Bachelor's students in higher semesters are also welcome. Students are expected to read and thoroughly understand original research papers, to deliver an oral presentation, and to produce an optional write-up.

The first meeting is on Wednesday, October 20, at 10:15 am. We will discuss the organization as well as intermediate meetings and allocate the research articles. Please register in StudIP! We intend to schedule the talks as a two-day block seminar during the first week after the end of the semester (Feb 9-10, 2022). We will discuss this in the first meeting.