Analysis on Sub-Gradient and Semi-Definite Optimization

Exploring the principles and applications of Sub-Gradient and Semi-Definite Optimization

by Shashi Sharma*,

- Published in Journal of Advances and Scholarly Researches in Allied Education, E-ISSN: 2230-7540

Volume 12, Issue No. 2, Jan 2017, Pages 1057 - 1061 (5)

Published by: Ignited Minds Journals


ABSTRACT

Semidefinite programming (SDP) has been one of the most fascinating areas of mathematical programming over the last twenty years. Semidefinite programming can be used to model many practical problems in different fields, for example convex constrained optimization, combinatorial optimization, and control theory. The subgradient method is a simple algorithm for minimizing a nondifferentiable convex function. The method looks very much like the ordinary gradient method for differentiable functions, but with a few notable exceptions. The subgradient method is readily extended to handle problems with constraints. The most widely known implementations of SDP solvers are interior-point methods; they give high-accuracy solutions in polynomial time. Subgradient methods can be much slower than interior-point methods (or Newton's method in the unconstrained case). In particular, they are first-order methods, so their performance depends heavily on problem scaling and conditioning. In this article, we study subgradient and semidefinite optimization in detail.
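The subgradient update described above can be sketched in a few lines. This is a minimal illustration, not the paper's own code: the objective f(x) = |x - 3| and the diminishing step size 1/k are assumptions chosen to show the method on a simple nondifferentiable convex function.

```python
# Subgradient method sketch for the nondifferentiable convex function
# f(x) = |x - 3|.  Update rule: x_{k+1} = x_k - alpha_k * g_k, where
# g_k is any subgradient of f at x_k and alpha_k is a diminishing step.

def f(x):
    return abs(x - 3.0)

def subgradient(x):
    # Any element of the subdifferential of |x - 3|:
    # sign(x - 3) away from the kink, and 0 at the kink itself.
    if x > 3.0:
        return 1.0
    if x < 3.0:
        return -1.0
    return 0.0

def subgradient_method(x0, iters=500):
    x = x0
    best_x, best_f = x, f(x)
    for k in range(1, iters + 1):
        alpha = 1.0 / k              # diminishing, non-summable steps
        x = x - alpha * subgradient(x)
        if f(x) < best_f:            # not a descent method, so we must
            best_x, best_f = x, f(x)  # track the best point seen so far
    return best_x

print(round(subgradient_method(0.0), 2))
```

Note the contrast with gradient descent that the abstract alludes to: each step need not decrease f, so the iterate that achieved the smallest objective value is remembered separately, and convergence relies on the diminishing step-size rule rather than on a line search.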

KEYWORDS

Semidefinite programming, subgradient method, convex optimization, combinatorial optimization, control theory, SDP solvers, interior-point methods, Newton's method, polynomial time, problem scaling


Corresponding Author Shashi Sharma*

Mathematics Department, DAV College, Muzaffarnagar-251001