Lecture and Colloquium on 17 January 2000

Venue

Technische Universität Berlin
Straße des 17. Juni 136, 10623 Berlin
Mathematikgebäude - Room MA 304


Lecture - 14:00 c.t.

Andreas Schulz - MIT Boston

Inverse Optimization and Beyond

Abstract: A typical optimization problem determines the values of the decision variables given the values of the model parameters. An inverse optimization problem consists of inferring the values of the model parameters (e.g., the cost coefficients) given the values of the decision variables. More precisely, the inverse problem is to perturb the cost vector so that a given solution becomes optimal with respect to the perturbed cost vector, at minimum cost of perturbation. Inverse optimization problems arise, e.g., in predicting the movement of earthquakes, in medical imaging, in deriving reasonable toll policies in traffic networks, and in portfolio optimization. They are also of theoretical interest: for some algorithms the solutions converge to an optimum in an inverse manner; for problems in which the cost coefficients are not known precisely, it is natural to determine a solution to the inverse problem; and in numerical analysis, the error in solving systems of equations is measured in terms of inverse optimization.
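To fix notation for readers (a generic sketch, not taken from the abstract): given a linear program $\min \{ c^{\mathsf T} x : Ax \ge b \}$ and a feasible solution $x^0$, the inverse problem under, say, the $L_1$ norm asks for the cost vector $d$ closest to $c$ under which $x^0$ is optimal:

    \min \; \| d - c \|_1 \quad \text{subject to} \quad x^0 \in \operatorname{arg\,min} \{ d^{\mathsf T} x : Ax \ge b \}.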

After reviewing the concept of inverse optimization and surveying some of the main results and applications, we will turn to inverse optimization as an alternative approach to measuring the distance from optimality. More specifically, we will introduce a notion of approximate solution that complements the standard notion of worst-case relative error. In particular, the new notion is invariant under subtraction of a constant from the objective function, and it remains well defined even if the objective function takes on negative values. The second part of the lecture will be devoted to discussing the relations and interdependences between these two concepts of approximation.
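As a hedged illustration in generic notation (these are not necessarily the exact definitions used in the lecture): for a minimization problem with cost vector $c$, optimal solution $x^*$ and candidate solution $x$, the standard worst-case relative error is

    \varepsilon(x) \;=\; \frac{c^{\mathsf T} x - c^{\mathsf T} x^*}{\lvert c^{\mathsf T} x^* \rvert},

which changes when a constant is added to the objective and is undefined when $c^{\mathsf T} x^* = 0$. An inverse-optimization-based measure instead takes the size of the smallest perturbation of $c$ that makes $x$ optimal, e.g. $\min \{ \| d - c \| : x \text{ optimal for } d \}$, a quantity that is unaffected by adding a constant to the objective function.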

(This lecture draws on background material produced by many different authors. A list of references will be provided during the lecture.)


Colloquium - 16:00 s.t.

Andreas Fest - Technische Universität Berlin

Maximizing Parallelism in Resource-Constrained Project Scheduling

Abstract: In resource-constrained project scheduling, jobs subject to both temporal and resource constraints have to be scheduled so as to minimize a given objective function. The objective addressed most frequently is to find a feasible schedule minimizing the project makespan, i.e., the time required to complete all jobs.
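As a rough formalization in generic notation (an assumption on our part, not quoted from the abstract): with start times $S_j$, processing times $p_j$, precedence constraints $(i,j) \in E$, resource demands $r_{jk}$ and capacities $R_k$, makespan minimization can be written as

    \min \; C_{\max} \quad \text{s.t.} \quad S_j \ge S_i + p_i \;\; \forall (i,j) \in E, \qquad \sum_{j \,:\, S_j \le t < S_j + p_j} r_{jk} \le R_k \;\; \forall k, t, \qquad C_{\max} \ge S_j + p_j \;\; \forall j.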

Obviously, every feasible schedule induces an interval order on the set of jobs. Conversely, an interval order that respects the precedence and resource constraints can be used to derive feasible schedules. Our approach is to compute interval orders that are "good" in a certain sense and to evaluate the corresponding schedules.
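To make this connection explicit (a sketch, using the start times $S_j$ and processing times $p_j$ from the formulation above): a feasible schedule induces the relation

    i \prec j \quad :\Longleftrightarrow \quad S_i + p_i \le S_j,

i.e., job $i$ finishes before job $j$ starts. Since every job corresponds to the interval $[S_i, S_i + p_i)$ on the time axis, this relation is an interval order.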

In this talk we describe a branch-and-cut approach for this purpose, based on a formulation in so-called linear ordering variables. The algorithm uses some well-known classes of inequalities for the interval order polytope together with additional inequalities representing the resource constraints. The objective of makespan minimization cannot be expressed directly in linear ordering variables. Guided by the intuition that a schedule minimizing the makespan exhibits a high degree of "parallelism", we therefore minimize a linear objective function that implicitly maximizes parallelism. Finally, we study the correlation between common objective functions for project scheduling and measures of parallelism.
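One common way to set up such variables (a sketch under assumed notation, not necessarily the exact formulation of the talk): introduce binary variables $x_{ij}$ with $x_{ij} = 1$ whenever job $i$ is completed before job $j$ starts. By Fishburn's characterization, interval orders are exactly the partial orders without an induced 2+2, which yields valid inequalities

    x_{ij} + x_{kl} \;\le\; x_{il} + x_{kj} + 1 \qquad \text{for distinct jobs } i, j, k, l,

while precedence constraints can be enforced by fixing $x_{ij} = 1$ for $(i,j) \in E$. A natural linear surrogate for parallelism, again only as an illustration, is to minimize $\sum_{i \ne j} x_{ij}$: the fewer comparable pairs the interval order has, the more jobs may overlap in time.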