A Critical Study on Non-Linear Optimization

Exploring the principles and applications of non-linear optimization

by Satpal Singh*,

- Published in Journal of Advances in Science and Technology, E-ISSN: 2230-9659

Volume 6, Issue No. 11, Nov 2013

Published by: Ignited Minds Journals


ABSTRACT

Mathematical optimization is the selection of a best element, with regard to some criteria, from some set of available alternatives. In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. More generally, optimization includes finding "best available" values of some objective function given a defined domain or a set of constraints, including a variety of different types of objective functions and different types of domains.

KEYWORDS

non-linear optimization, mathematical optimization, criteria, alternatives, maximizing, minimizing, real function, input values, allowed set, value, optimization theory, applied mathematics, objective function, domain, constraints

INTRODUCTION

For every basis B contained in G ⊆ H we need to determine whether there is a better basis B' contained in G and, if so, deliver such a basis B'. This enables us to find the optimal basis contained in G by successively improving a candidate basis, provided some initial basis is known. Of course, LP-type systems present a more specific realization of this paradigm. First of all, the quality of bases is measured by some function z. In itself, this gives no more than an arbitrary ordering among the bases, but in addition we have required z to extend to all subsets of the ground set H. In particular, unlike the general paradigm above, this requires every basis B to be an optimal basis contained in B.

Even more important, we know that if B is not the optimal basis in G, then it is already not the optimal basis in B ∪ {j} for some j ∈ G - B. It becomes apparent that the improved basis B'' needs to be a subset of G, but not necessarily a subset of B' ∪ {j}, where B' is the previous basis and j an improving variable. Hence, the main defining property of LP-type systems is not enforced by the general paradigm of sub-problem solvability by successive improvement. It rather introduces additional structure that facilitates the process of testing for improvement and finding an improved basis. We shall assume no additional structure and instead require the existence of an oracle that exactly realizes the general paradigm. Obviously, such an oracle introduces additional structure itself, and for any concrete problem it needs an explicit realization. At this point, one probably arrives at the conclusion that the structure of LP-type systems leads to the most natural realization, and this is actually the case for any concrete problem in this thesis. We therefore do not claim that the new level of abstraction we are going to climb immediately leads to practical applications. Rather, the goal is to investigate how far we can get by requiring just a minimal set of axioms.
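The general paradigm just described amounts to a simple improvement loop. The following is a minimal sketch of that loop; the oracle name `improve` and its contract (return a strictly better basis contained in G, or None if the candidate is already optimal in G) are assumptions made for illustration, not part of the formal machinery developed here.

```python
def optimal_basis(G, initial_basis, improve):
    """Find the optimal basis contained in G by successive improvement,
    starting from some known basis contained in G."""
    B = initial_basis
    while True:
        better = improve(B, G)   # hypothetical oracle: better basis in G, or None
        if better is None:       # B is already the optimal basis contained in G
            return B
        B = better               # strict improvement in z; since there are only
                                 # finitely many bases, the loop terminates
```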

OBJECTIVES OF THE STUDY:

(i) The class of problems should be general enough to cover two concrete problems that we are particularly interested in, namely the polytope distance problem and the minimum spanning ball problem. (ii) It should be specific enough to allow a detailed treatment without drowning in technicalities. The class we arrive at covers problems of minimizing a convex function subject to linear equality and non-negativity constraints.

RESEARCH METHODOLOGY:

We start off by introducing the polytope distance problem and derive from it the class of convex programs we are going to consider. We present a solution method for the generic problem in this class, along with time bounds, keeping track of what concretely happens for the polytope distance problem. Having done this, we apply the developed machinery to obtain a solution for the minimum spanning ball problem. Given two (disjoint and finite) sets P, Q of points in IR^d, the objective is to find a pair of points (p, q), with p in the convex hull of P and q in the convex hull of Q, such that the length of the difference vector v = p - q is minimized. Formally, if

    P := {P_1, ..., P_r},    Q := {Q_1, ..., Q_s},

we let P (resp. Q) denote the (d x r)-matrix (resp. (d x s)-matrix) containing as columns the points of P (resp. Q). PD is the following problem in the variables x = x_1, ..., x_r and y = y_1, ..., y_s:

    (PD)  minimize    v^T v
          subject to  v = Px - Qy,
                      x_1 + ... + x_r = 1,
                      y_1 + ... + y_s = 1,
                      x, y ≥ 0.

This is a quadratic programming problem, i.e. it has a quadratic objective function and linear constraints, and its optimum value is the square of the Euclidean distance between conv(P) and conv(Q). The pattern behind PD is the following: minimize a real-valued convex function f: IR^d -> IR whose argument p ranges over the nonnegative span of some (d x n)-matrix M, where the coefficients satisfy q additional equality constraints Ax = b. In case of PD, we have p = v, f(v) = v^T v, M = (P | -Q) (thus n = r + s) and q = 2 with

    A = ( 1 ... 1   0 ... 0 )        b = ( 1 )
        ( 0 ... 0   1 ... 1 ),           ( 1 ),

where the first row of A contains r ones followed by s zeros, and the second row contains r zeros followed by s ones.
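To make the formulation concrete, here is a small, self-contained sketch that solves a made-up PD instance exactly in the form stated above, i.e. minimize v^T v subject to v = Px - Qy, the two simplex constraints and non-negativity. The point sets and the use of SciPy's general-purpose SLSQP routine are assumptions for illustration only; this is not the solution method developed in this study.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative point sets: columns of P (r = 3 points) and Q (s = 2 points) in IR^2.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
Q = np.array([[3.0, 4.0],
              [3.0, 2.0]])
r, s = P.shape[1], Q.shape[1]

def objective(z):
    x, y = z[:r], z[r:]
    v = P @ x - Q @ y            # difference vector v = Px - Qy
    return v @ v                 # f(v) = v^T v

constraints = [
    {"type": "eq", "fun": lambda z: np.sum(z[:r]) - 1.0},   # x_1 + ... + x_r = 1
    {"type": "eq", "fun": lambda z: np.sum(z[r:]) - 1.0},   # y_1 + ... + y_s = 1
]
bounds = [(0.0, None)] * (r + s)                            # x, y >= 0

z0 = np.empty(r + s)             # feasible start: uniform weights in each block
z0[:r] = 1.0 / r
z0[r:] = 1.0 / s
res = minimize(objective, z0, method="SLSQP", bounds=bounds, constraints=constraints)
print("squared distance between conv(P) and conv(Q):", res.fun)
```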
In general, our convex programming problems look as follows. (CP) minimize f(p) subject to p = Mx, Ax = b, x ≥ 0, where p is a d-vector, M a (d x n)-matrix, A a (q x n)-matrix, b a q-vector and x an n-vector. As in LP, inequality constraints Ax ≤ b can be handled by introducing slack variables; we restrict ourselves to equality constraints here. No condition is imposed on M. For PD, this in particular means that we do not require the point sets P and Q to be in any kind of general position. We demand f to be a differentiable function with continuous partial derivatives which is convex, i.e. for all p, p' and 0 < t < 1,

    f(t p + (1 - t) p') ≤ t f(p) + (1 - t) f(p').
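As a small worked illustration of the slack-variable reduction just mentioned (the matrices below are made up and the helper name `add_slacks` is hypothetical): an inequality system Ax ≤ b becomes the equality system (A | I)(x, s) = b with s ≥ 0, which matches the equality-plus-non-negativity form of CP.

```python
import numpy as np

def add_slacks(A):
    """Append an identity block so that A x <= b becomes [A | I] (x, s) = b, s >= 0."""
    q, _ = A.shape
    return np.hstack([A, np.eye(q)])

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
b = np.array([4.0, 1.0])
A_eq = add_slacks(A)

# Any x with A x <= b lifts to an equality-feasible point (x, s) with s = b - A x >= 0.
x = np.array([1.0, 1.0])
s = b - A @ x
assert np.all(s >= 0)
assert np.allclose(A_eq @ np.concatenate([x, s]), b)
```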

CONCLUSION:

Conditions on the sub-problems of CP are assumed to hold; they concern non-degeneracy and uniqueness of the solution. For any G ⊆ H := [n], consider the problems CP(G) and UCP(G), defined as follows. (CP(G)) minimize f(p) subject to p = M_G x_G, A_G x_G = b, x_G ≥ 0. (UCP(G)) minimize f(p) subject to p = M_G x_G, A_G x_G = b. The sub-problem CP(G) minimizes f(p) over all feasible x which satisfy the additional restrictions x_{H-G} = 0. In case of PD, this corresponds to a distance computation between sub-polytopes of P and Q specified by G. Where CP(G) asks for the minimum of f over all nonnegative linear combinations whose coefficients satisfy the equality constraints, the relaxation UCP(G) considers any such linear combination; this problem is obtained by simply dropping the non-negativity constraints from CP(G). The 'U' stands for 'unconstrained', although UCP is not an unconstrained problem in the strict sense. The equality constraints of CP are still present, but later we will see that they are no 'real' constraints and that UCP is substantially easier to solve than CP. In case of PD, UCP(G) amounts to a distance computation between the affine hulls of sub-polytopes.
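A hedged sketch of the column restriction behind CP(G) and UCP(G), reusing the illustrative PD data from the earlier snippet; the function name `solve_subproblem`, the index set G and the choice of SciPy's SLSQP as a stand-in solver are assumptions for illustration, not the method analysed in this study.

```python
import numpy as np
from scipy.optimize import minimize

def solve_subproblem(M, A, b, G, nonneg):
    """Minimize f(p) = p^T p over p = M_G x_G with A_G x_G = b,
    and x_G >= 0 only if `nonneg` is True (CP(G) vs. UCP(G))."""
    MG, AG, k = M[:, G], A[:, G], len(G)
    objective = lambda x: (MG @ x) @ (MG @ x)
    constraints = [{"type": "eq", "fun": lambda x: AG @ x - b}]
    bounds = [(0.0, None)] * k if nonneg else None
    x0 = np.full(k, 1.0 / k)          # SLSQP tolerates an infeasible start
    res = minimize(objective, x0, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.fun

# PD data as above: M = (P | -Q), A with rows (1..1 0..0) and (0..0 1..1), b = (1, 1).
P = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Q = np.array([[3.0, 4.0], [3.0, 2.0]])
M = np.hstack([P, -Q])
A = np.array([[1.0, 1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])

G = [0, 2, 3, 4]                      # restrict to a sub-polytope of P and all of Q
print("CP(G) :", solve_subproblem(M, A, b, G, nonneg=True))
print("UCP(G):", solve_subproblem(M, A, b, G, nonneg=False))
```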
