Classification of mathematical programming problems

In this section we propose a classification of MP problems. Before doing so, a very important concept must be introduced: convexity. Note that in the rest of this chapter we always refer to minimization problems; a maximization problem, where one wants to maximize an objective function $f$, can be reformulated as a minimization problem by means of the relationship $\max f = -\min(-f)$.
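For concreteness, this reformulation can be written out in display form (our own restatement of the sentence above; the same correspondence holds for the sets of optimal points):

```latex
\[
  \max_{x \in X} f(x) \;=\; -\min_{x \in X} \bigl(-f(x)\bigr),
  \qquad
  \operatorname*{arg\,max}_{x \in X} f(x) \;=\; \operatorname*{arg\,min}_{x \in X} \bigl(-f(x)\bigr).
\]
```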

Convexity

For a class of problems, namely convex minimization problems, the set of global optima coincides with the set of local optima. Intuitively, such problems are easier to solve, since there is no need to continue the search for a global optimum after a local optimum has been found, whereas in general this is not true. In order to define convexity more formally, some definitions (mostly taken from [87]) are introduced in the following.

Definition 1.2.1 (Convex combination [87]). A convex combination of $k$ points $x_1, \dots, x_k \in \mathbb{R}^n$ is a point $z = \sum_{i=1}^{k} \lambda_i x_i$, where $\lambda_i \geq 0$ for all $i \in \{1, \dots, k\}$ and $\sum_{i=1}^{k} \lambda_i = 1$. If $\lambda \in (0,1)^k$, then $z$ is called a strict convex combination.

When $k = 2$, the previous definition can be reformulated as follows: given two points $x, y \in \mathbb{R}^n$, their convex combination $z$ is defined as $z = \lambda x + (1 - \lambda) y$, where $\lambda \in [0,1]$ (strict if $\lambda \in (0,1)$). For the sake of clarity, the following definitions consider the case $k = 2$.
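As a quick numerical illustration of Definition 1.2.1 (our own toy example; NumPy is used only for convenience):

```python
# Build a random convex combination of k points in R^n and check the
# defining conditions: lambda_i >= 0 and sum_i lambda_i = 1.
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 3
points = rng.standard_normal((k, n))   # x_1, ..., x_k in R^n
lam = rng.random(k)
lam /= lam.sum()                       # normalize: lambda_i >= 0, sum = 1
z = lam @ points                       # z = sum_i lambda_i * x_i
assert np.all(lam >= 0) and np.isclose(lam.sum(), 1.0)
print("convex combination z =", z)
```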
Definition 1.2.2 (Convex set [87]). A set $X \subseteq \mathbb{R}^n$ is called convex if, for all $x, y \in X$, $X$ contains all the convex combinations $z$ of $x$ and $y$, that is, $z = \lambda x + (1 - \lambda) y \in X$ for all $\lambda \in [0,1]$. It also holds that the intersection of convex sets is a convex set. An example of a convex and a nonconvex set is depicted in Figure 1.2.
Definition 1.2.3 (Convex function [87]). A function $f : X \to \mathbb{R}$ defined on a convex set $X \subseteq \mathbb{R}^n$ is called convex if, for all $x, y \in X$ and all $\lambda \in [0,1]$, it holds that $f(z) \leq \lambda f(x) + (1 - \lambda) f(y)$, where $z = \lambda x + (1 - \lambda) y$. If, for all $\lambda \in (0,1)$ and all $x \neq y$, $f(z) < \lambda f(x) + (1 - \lambda) f(y)$, then $f$ is called a strictly convex function.

A graphical representation of a convex function is given in Figure 1.3. It is interesting to underline some facts: (i) if a function $g(x)$ is convex, then constraints of the form $g(x) \leq b$, with $b \in \mathbb{R}$, are convex; in general $g(x) \geq b$ can be a nonconvex constraint even if $g(x)$ is a convex function, and in this case the constraint $g(x) \geq b$ is called reverse convex [240]; (ii) if $g(x)$ is linear, then $g(x) \diamond b$, where $\diamond \in \{\leq, =, \geq\}$, is a convex constraint; (iii) if the set of feasible solutions $X$ is defined by convex constraints, then it is convex. We can now introduce the following theorem.

Theorem 1.2.4 (Property of optimal solutions for convex problems [87]). Consider a convex problem, that is, a problem stated in the form (1.1) where $X \subseteq \mathbb{R}^n$ is a convex set and the objective function to be minimized, $f : X \to \mathbb{R}$, is a convex function. Then each local optimum is also a global optimum.

Another important concept, concavity, is strictly related to convexity. More precisely, substituting $\leq$ and $<$ with $\geq$ and $>$ in Definition 1.2.3, we obtain the definitions of concave and strictly concave functions. These concepts are useful when MP problems are stated as maximization problems. The role of convexity and concavity in MP can be summarized by these facts [38] (a small numerical illustration is sketched after the list):
• A local minimum (maximum) of a convex (concave) function on a convex feasible region is also a global minimum (maximum).
• A local minimum (maximum) of a strictly convex (concave) function on a convex feasible region is the unique global minimum (maximum).
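The following sketch (our own example, plain Python) checks the convexity inequality of Definition 1.2.3 numerically for $f(x) = x^2$, and shows why a reverse convex constraint such as $x^2 \geq 1$ yields a nonconvex feasible set:

```python
# f(x) = x**2 is convex: f(z) <= lam*f(x) + (1-lam)*f(y) on any segment.
import random

f = lambda x: x * x
for _ in range(1000):
    x, y = random.uniform(-10.0, 10.0), random.uniform(-10.0, 10.0)
    lam = random.random()
    z = lam * x + (1.0 - lam) * y
    assert f(z) <= lam * f(x) + (1.0 - lam) * f(y) + 1e-9  # convexity holds

# Reverse convex constraint g(x) = x**2 >= 1: its feasible set is
# (-inf, -1] U [1, +inf), which is not convex -- the endpoints -1 and 1
# are feasible, but their midpoint 0 is not.
g = f
x, y = -1.0, 1.0
assert g(x) >= 1.0 and g(y) >= 1.0
assert not g(0.5 * x + 0.5 * y) >= 1.0  # midpoint violates the constraint
```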

Classes of mathematical programming problems

We can now propose a classification of the MP problems formulated in the very general form (1.1). Remember that the set X is given by the bounds and types (integer, continuous, or discrete) of the variables, and by the constraints of the problem, which are usually of the form g(x) ≤ 0 or g(x) = 0.
• Linear Programming (LP): the objective function and the constraints are linear, and the variables are continuous;
• Mixed Integer Linear Programming (MILP or MIP): the objective function and the constraints are linear, and at least one variable is integer;
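• Nonlinear Programming (NLP): the objective function and/or the constraints are nonlinear, and the variables are continuous; when the objective function and the feasible region are convex, the problem is called convex NLP (cNLP);
• Mixed Integer Nonlinear Programming (MINLP): the objective function and/or the constraints are nonlinear, and at least one variable is integer; when its continuous relaxation is a cNLP, the problem is called convex MINLP (cMINLP).

As a concrete instance of the MILP class, consider the toy problem min −x0 − 2x1 subject to 3x0 + 4x1 ≤ 10, x0, x1 ≥ 0, with x1 integer. A minimal sketch solving it follows (our own example; the choice of scipy.optimize.milp as a solver is purely illustrative, the text does not prescribe one):

```python
# Toy MILP: min -x0 - 2*x1  s.t.  3*x0 + 4*x1 <= 10,  x0, x1 >= 0,  x1 integer.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-1.0, -2.0])                       # linear objective (minimized)
cons = LinearConstraint(np.array([[3.0, 4.0]]), -np.inf, 10.0)
res = milp(c, constraints=cons,
           integrality=np.array([0, 1]),         # 0: x0 continuous, 1: x1 integer
           bounds=Bounds(0.0, np.inf))
print(res.x, res.fun)                            # optimum: x = (2/3, 2), value -14/3
```

Note that dropping the integrality requirement (the continuous relaxation) gives the LP optimum x = (0, 2.5) with value −5, a lower bound on the MILP optimum.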

Convex mixed integer nonlinear programming

The main approaches to solve cMINLP problems are the following:
• Branch-and-Bound [116]: this is the extension of the BB algorithm for MILP to nonlinear problems. At each node of the BB tree, instead of solving the LP problem corresponding to the continuous relaxation of a MILP problem, a cNLP problem (which corresponds to the continuous relaxation of a cMINLP problem) is solved. A schematic sketch of this method is given after the list.
• Outer-Approximation [81]: this is an iterative method where at each iteration a MILP relaxation of the cMINLP problem is solved (the nonlinear constraints are replaced by linear approximations). Then, the solution obtained is used to fix the integer variables of the cMINLP problem, and the corresponding cNLP relaxation is solved. The solution of the cNLP problem is used to generate some cuts to add to the MILP formulation, and the process is repeated. Solving the MILP problem provides a lower bound and solving the cNLP problem provides an upper bound on the optimal value of the cMINLP problem. When these two bounds are equal within a certain tolerance, the optimal solution has been found (a sketch of this loop is also given after the list).
• Generalized Benders Decomposition [102]: this method is based on the decomposition technique previously proposed by Benders for MILP. It can be seen as a variant of the outer-approximation method where the MILP relaxation is not obtained by linearizing all the nonlinear constraints; instead, the linearized constraints are combined into a single constraint which is adjoined to the model (surrogate relaxation). As a consequence, the solution of this MILP problem in general provides a worse (i.e., lower) lower bound than the outer-approximation method, leading to a larger number of iterations needed to obtain the solution; on the other hand, each MILP problem can be solved faster.
• Extended Cutting Plane [248]: this method works by iteratively solving a MILP relaxation of the original cMINLP problem, where the linearization of the nonlinear constraint most violated by the current optimal solution is adjoined to the MILP formulation solved at the next iteration.
• LP/NLP based Branch-and-Bound [208]: this technique extends the outer-approximation approach in a branch-and-cut framework. More precisely, as in the outer-approximation method, a MILP relaxation is solved, but only once. In fact, this problem is solved by means of the BB as described for MILPs, with one main difference: whenever an integer solution is found at the current node of the BB tree, it is used to fix the integer variables of the cMINLP problem, yielding a cNLP problem. The solution of this cNLP problem is then used to derive some cuts that are adjoined to the MILP formulation at the current node, and the BB solution process continues.

As for MILPs, heuristics are very important for cMINLPs, since they can be used to find good feasible solutions and thus accelerate the algorithms presented above. Some examples are presented in [1,29,34,35].
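The following two sketches illustrate the structure of the first two methods above. They are our own schematic renderings, not any solver's actual implementation, and they rely on hypothetical helpers whose names are chosen purely for illustration: solve_cnlp_relaxation, solve_milp_master, solve_cnlp_fixed and add_linearizations stand for calls to suitable cNLP/MILP (sub)solvers.

```python
# Nonlinear Branch-and-Bound skeleton (minimization). The hypothetical helper
# solve_cnlp_relaxation(lb, ub) returns (x, obj) for the cNLP continuous
# relaxation restricted to the box [lb, ub], or None if it is infeasible.
import math

def branch_and_bound(lb, ub, integer_idx, eps=1e-6):
    best_x, best_obj = None, math.inf          # incumbent solution and value
    stack = [(list(lb), list(ub))]             # nodes of the BB tree (boxes)
    while stack:
        l, u = stack.pop()
        sol = solve_cnlp_relaxation(l, u)      # hypothetical cNLP solve
        if sol is None:
            continue                           # prune: node infeasible
        x, obj = sol
        if obj >= best_obj - eps:
            continue                           # prune: bound cannot improve
        frac = [i for i in integer_idx if abs(x[i] - round(x[i])) > eps]
        if not frac:
            best_x, best_obj = x, obj          # integer feasible: new incumbent
            continue
        i = frac[0]                            # branch on a fractional variable
        l1, u1, l2, u2 = list(l), list(u), list(l), list(u)
        u1[i] = math.floor(x[i])               # child 1: x_i <= floor(x_i)
        l2[i] = math.ceil(x[i])                # child 2: x_i >= ceil(x_i)
        stack.extend([(l1, u1), (l2, u2)])
    return best_x, best_obj
```

And the outer-approximation loop, alternating MILP lower bounds and cNLP upper bounds until they meet within the tolerance:

```python
# Outer-Approximation skeleton. Hypothetical helpers: solve_milp_master()
# returns (x_int, lower) for the current MILP relaxation; solve_cnlp_fixed()
# returns (x, upper) for the cNLP with the integer variables fixed;
# add_linearizations(x) adds the cuts generated at x to the MILP master.
def outer_approximation(eps=1e-6, max_iter=100):
    best_x, upper = None, float("inf")
    for _ in range(max_iter):
        x_int, lower = solve_milp_master()     # MILP: lower bound
        x, ub = solve_cnlp_fixed(x_int)        # cNLP: upper bound
        if ub < upper:
            best_x, upper = x, ub              # keep the best feasible point
        if upper - lower <= eps:
            break                              # bounds met within tolerance
        add_linearizations(x)                  # cuts tighten the MILP master
    return best_x, upper
```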

Mixed integer nonlinear programming

In the general case a MINLP problem is nonconvex. In this case, the techniques employed for cMINLPs would provide a local optimum without proof of global optimality (unless the MINLP problem is reformulated as a cMINLP problem, but this is not always possible [153]). For obtaining globally optimal solutions of nonconvex MINLPs, one can employ an ε-approximation algorithm called spatial Branch-and-Bound (sBB). Several variants exist, among which [5,26,83,91,154,217,232,242]; COUENNE [26] and BARON [220] are examples of solvers implementing sBB. Given a constant ε > 0, the sBB recursively generates a binary search tree, some leaf node of which contains a feasible point x∗ for which f(x∗) differs by at most ε from the globally optimal value of the objective function (with a slight abuse of notation, we refer to x∗ as the ε-approximation of the optimal solution instead of the real global optimum).
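In symbols, the guarantee can be restated as follows (our own display form of the sentence above):

```latex
\[
  x^{\ast} \in X
  \quad\text{and}\quad
  f(x^{\ast}) \;\leq\; \min_{x \in X} f(x) \;+\; \varepsilon .
\]
```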
A very important step of each sBB algorithm is the convex relaxation of the original nonconvex problem. The solution of the convex relaxation provides a lower bound on the value of the optimal solution of the original problem. Some examples of convex relaxations are presented in Chapter 4, and more details about how these convex relaxations are computed are provided in [156]. At each iteration of the algorithm, convex relaxations restricted to particular sub-regions of space are solved, so that a lower and an upper bound on the optimal value of the objective function can be assigned to each sub-region. A global optimum relative to a sub-region is identified when its lower and upper bounds differ by at most ε. More precisely, a generic node $a$ of the sBB tree contains a formulation restricted to some region, or box, $B_a$, as well as a lower bound value $f(\check{x}_a)$ relative to the parent node. All along the sBB run, the following data are maintained (a schematic main loop is sketched after the list):
• the search tree, encoded in some efficiently accessible form;
• the best solution so far (also called the incumbent).
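A schematic version of the sBB main loop follows, under the same hypothetical-helper convention as above (solve_convex_relaxation, local_solve and split are illustrative names, not actual solver APIs):

```python
# Spatial Branch-and-Bound skeleton (minimization). Hypothetical helpers:
# solve_convex_relaxation(box) -> (x_check, lower) or None if box infeasible;
# local_solve(box) -> (x, upper), a feasible point (e.g., a local NLP solve),
# or (None, inf) if none is found; split(box, x) -> two sub-boxes.
def spatial_bb(root_box, eps=1e-6):
    incumbent, best = None, float("inf")        # best solution found so far
    nodes = [root_box]                          # frontier of the search tree
    while nodes:
        box = nodes.pop()
        rel = solve_convex_relaxation(box)      # lower bound over this box
        if rel is None:
            continue                            # prune: box is infeasible
        x_check, lower = rel
        if lower >= best - eps:
            continue                            # prune: cannot improve by > eps
        x, upper = local_solve(box)             # upper bound: feasible point
        if upper < best:
            incumbent, best = x, upper          # update the incumbent
        if upper - lower > eps:                 # bounds still too far apart
            nodes.extend(split(box, x_check))   # branch: partition the box
    return incumbent, best                      # best is within eps of optimum
```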
