Dimensions of optimization settings (ordered from simple to complex):

  1. Continuous vs. discrete (there may also be mixed continuous-discrete problems).
    • A quantity can be modeled as continuous if it can take any value that your measuring instrument's resolution allows.
  2. Deterministic vs. stochastic (stochastic problems may be infinite-dimensional or dynamic).
    • A quantity can be modeled as stochastic if its generating mechanism is inherently random, if you acknowledge ignorance of its mechanism, or if you intentionally introduce ignorance by discarding information.
  3. Single objective vs. multi-objective vs. game

Subdivision of continuous optimization problems:

  1. Convex optimization, e.g. linear programming (LP);
  2. Non-convex optimization.

Definitions

An optimization problem is any problem of minimizing an objective function over a feasible set. Given the optimization variable $x \in \mathbb{R}^n$ and real-valued functions $f_0$, the objective function (cost function), $f_i$, the inequality constraint functions, and $h_i$, the equality constraint functions, the standard form of an optimization problem is: $$\begin{align} \min \quad & f_0(x) \\ \text{s.t.} \quad & f_i(x) \le 0 && i = 1,\dots,m \\ & h_i(x) = 0 && i = 1,\dots,p \end{align}$$ The epigraph form of an optimization problem, with variables $x \in \mathbb{R}^n$ and $t \in \mathbb{R}$, is: $$\begin{align} \min \quad & t \\ \text{s.t.} \quad & f_0(x) - t \le 0 \\ & f_i(x) \le 0 && i = 1,\dots,m \\ & h_i(x) = 0 && i = 1,\dots,p \end{align}$$
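
As a concrete numerical illustration (a minimal sketch, assuming SciPy is available; the problem data are made up), the following solves a small standard-form instance, $\min x_1^2 + x_2^2$ subject to $f_1(x) = 1 - x_1 - x_2 \le 0$ and $h_1(x) = x_1 - 2 x_2 = 0$. Note that SciPy's `ineq` constraints use the convention $g(x) \ge 0$, so the standard-form constraint $f_1(x) \le 0$ is passed as $-f_1(x) \ge 0$.

```python
import numpy as np
from scipy.optimize import minimize

# Objective f_0(x) = x_1^2 + x_2^2
f0 = lambda x: x[0]**2 + x[1]**2

# Standard-form constraints (made-up instance):
#   f_1(x) = 1 - x_1 - x_2 <= 0   (SciPy 'ineq' expects g(x) >= 0, so pass -f_1)
#   h_1(x) = x_1 - 2*x_2 = 0
constraints = [
    {"type": "ineq", "fun": lambda x: -(1 - x[0] - x[1])},
    {"type": "eq",   "fun": lambda x: x[0] - 2 * x[1]},
]

result = minimize(f0, x0=np.array([1.0, 1.0]), method="SLSQP", constraints=constraints)
print(result.x, result.fun)  # optimal point approximately (2/3, 1/3), optimal value approximately 5/9
```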

The domain of an optimization problem is the intersection of the domains of the objective and all constraint functions: $D = (\bigcap_{i=0}^{m} \operatorname{dom} f_i) \cap (\bigcap_{i=1}^{p} \operatorname{dom} h_i)$. The feasible set or constraint set $X$ of an optimization problem is the subset of its domain that satisfies all the constraints. An inequality constraint $f_i$ is active at a feasible point $x \in X$ if the inequality binds: $f_i(x) = 0$; otherwise it is inactive at that point. A constraint is redundant if removing it does not change the feasible set.
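
For example (a made-up instance for illustration), in the problem $\min x^2$ subject to $f_1(x) = 1 - x \le 0$ and $f_2(x) = x - 3 \le 0$, the feasible set is $X = [1, 3]$. At the optimal point $x = 1$, the constraint $f_1$ is active ($f_1(1) = 0$) while $f_2$ is inactive ($f_2(1) = -2 < 0$); an additional constraint $x \le 5$ would be redundant, since it would not change $X$.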

A locally optimal point is a feasible point that minimizes the objective function over the intersection of the feasible set with some neighborhood of the point: $\exists R > 0: f_0(x) = \inf f_0(X \cap B_x(R))$, where $B_x(R)$ is the ball of radius $R$ centered at $x$.

The optimal value of an optimization problem is the infimum of the image of the feasible set, which takes values in the extended real numbers: $p^* = \inf f_0(X)$, where $p$ stands for primal. By convention, the infimum of the empty set is $+\infty$, so an infeasible problem has optimal value $+\infty$. If the objective can be made arbitrarily low over the feasible set, the optimal value is $-\infty$ and the problem is unbounded below. The optimal set of an optimization problem is the intersection of the preimage of its optimal value and its feasible set: $X_{\text{opt}} = f_0^{-1}(p^*) \cap X$. An optimal point is a point of the optimal set.

For $\varepsilon > 0$, the $\varepsilon$-suboptimal set of an optimization problem is $X_\varepsilon = f_0^{-1}([p^*, p^* + \varepsilon]) \cap X$. An $\varepsilon$-suboptimal point is a point of the $\varepsilon$-suboptimal set.
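
For example (an illustrative instance), the unconstrained problem $\min_{x \in \mathbb{R}} e^{-x}$ has optimal value $p^* = 0$ but an empty optimal set, since $e^{-x} > 0$ for every $x$; for any $\varepsilon > 0$, every $x \ge -\ln \varepsilon$ is an $\varepsilon$-suboptimal point.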

An optimization problem is unconstrained if it has no constraints: $m = p = 0$. An optimization problem is infeasible if its feasible set is empty. Informally, two optimization problems are equivalent if from a solution of one, a solution of the other is readily found, and vice versa; for example, the standard form and the epigraph form above are equivalent, since $x$ solves the former exactly when $(x, f_0(x))$ solves the latter. It is possible but complicated to give a formal definition of such equivalence.

An ascent direction of a function at a point is a direction in which the function locally strictly increases. The direction of steepest ascent of a function at a point is an ascent direction (normalized in a given norm) with the largest directional derivative.
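
For a differentiable function and the Euclidean norm, the direction of steepest ascent at a point $x$ with $\nabla f(x) \ne 0$ is the normalized gradient $\nabla f(x) / \lVert \nabla f(x) \rVert_2$: the directional derivative along a unit direction $d$ is $\nabla f(x)^\top d$, which by the Cauchy-Schwarz inequality is at most $\lVert \nabla f(x) \rVert_2$, with equality exactly when $d$ is the normalized gradient. For instance, for $f(x, y) = x^2 + y^2$ at the point $(1, 0)$, the direction of steepest ascent is $(1, 0)$ and the direction of steepest descent is $(-1, 0)$.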

Propositions

A convex function has identical local and global minimum sets: every local minimum is a global minimum.
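
Proof sketch: suppose $x$ is a local minimum of a convex function $f$ but $f(y) < f(x)$ for some $y$. For any $\theta \in (0, 1]$, convexity gives $f((1-\theta)x + \theta y) \le (1-\theta) f(x) + \theta f(y) < f(x)$, and for $\theta$ small enough the point $(1-\theta)x + \theta y$ lies in any given neighborhood of $x$, contradicting local optimality. Hence every local minimum is global, and the two minimum sets coincide.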

Computational properties of convex optimization problems:

  • Interior-point methods provably solve many classes of convex optimization problems to a specified accuracy in polynomial time (a small numerical example follows this list).
  • There are NP-hard convex optimization problems.
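
As a minimal numerical illustration of this tractability (a sketch assuming SciPy is available; the LP data are made up, and the default HiGHS backend may use simplex rather than an interior-point method), a small linear program, which is a convex problem, is solved to high accuracy by an off-the-shelf solver:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up LP:  maximize x1 + 2*x2  <=>  minimize -x1 - 2*x2
#   subject to x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x >= 0
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimal point (3, 1), optimal value -5
```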

Miscellaneous

Optimization is the pursuit of optimal points in practical/computational problems. Although calculus easily handles this task when the function is differentiable and the problem is unconstrained, optimization mostly deals with constrained problems, where even continuity is not guaranteed.

Development of optimization theory:

  1. Prehistory: early 1900s to 1949.
  2. Fenchel-Rockafellar era: 1949 to the mid-1980s.
    • Duality theory
  3. Modern era: mid-1980s to present.
    • Algorithms
    • A change in the assumptions underlying the field.

Milestones in Optimization.

Regarding the development of modern optimization (general-purpose algorithms), I often point to a few milestones: the simplex algorithm for linear programming in the mid-1940s (simplex algorithm [@Dantzig1947]), quasi-Newton methods in the 1960s (e.g. DFP [@Davidon1959]), and interior-point methods in the 1980s (e.g. [@Karmarkar1984]); I always summarize this as one stage every twenty years. Since 2005, ADMM (alternating direction method of multipliers, see [@Boyd et al., 2011]) has been one of the important optimization methods most widely recognized by the research community. (Bingsheng He, 何炳生)

Duality in general means two different views of the same object. The dual problem of a discrete problem is continuous/convex; it provides a lower bound on the optimal value and other important information for solving the discrete primal.
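
For the standard-form problem above, weak duality shows where such a lower bound comes from: for any $\lambda \succeq 0$, any $\nu$, and any feasible $\tilde{x}$, $$g(\lambda, \nu) = \inf_{x \in D} \Big( f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \Big) \le f_0(\tilde{x}) + \sum_{i=1}^m \lambda_i f_i(\tilde{x}) + \sum_{i=1}^p \nu_i h_i(\tilde{x}) \le f_0(\tilde{x}),$$ since $\lambda_i f_i(\tilde{x}) \le 0$ and $\nu_i h_i(\tilde{x}) = 0$ at a feasible point; taking the infimum over feasible $\tilde{x}$ gives $g(\lambda, \nu) \le p^*$. The dual function $g$ is concave regardless of the structure of the primal, so the dual problem $\max_{\lambda \succeq 0,\, \nu} g(\lambda, \nu)$ is convex even when the primal is discrete or nonconvex.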

LPs and convex programs are often solved by the same methods:

  • subgradient methods (descent methods; a sketch follows this list);
  • cutting plane methods (outer approximation);
  • simplicial decomposition (inner approximation);
  • proximal methods (regularization);
  • interior-point methods;
  • incremental methods;
  • gradient projection methods;
  • optimal complexity methods.

NLPs are solved by gradient/Newton methods.
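
As an illustration of the first family, here is a minimal subgradient-method sketch (made-up data) for the nondifferentiable convex problem $\min_x \lVert Ax - b \rVert_1$; a subgradient of the objective at $x$ is $A^\top \operatorname{sign}(Ax - b)$, and a diminishing step size is used.

```python
import numpy as np

# Made-up data for the nondifferentiable convex problem  min_x ||A x - b||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = np.zeros(5)
best_x, best_val = x.copy(), np.inf
for k in range(1, 5001):
    residual = A @ x - b
    g = A.T @ np.sign(residual)           # a subgradient of ||A x - b||_1 at x
    x = x - (1.0 / k) * g                 # diminishing step size 1/k
    val = np.abs(A @ x - b).sum()
    if val < best_val:                    # subgradient steps need not decrease the objective,
        best_val, best_x = val, x.copy()  # so track the best iterate found so far
print(best_val, best_x)
```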

The key distinction is not linear vs. nonlinear, but convex vs. nonconvex. {Rockafellar}

Problem ranking in increasing practical difficulty:

  1. Linear programming (LP) and (convex) quadratic programming (QP): network flows;
  2. Second order cone programming (SOCP);
  3. Semidefinite programming (SDP);
  4. Convex programming (CP): separable, large sum; network flows, monotropic programming, geometric programming;
  5. Nonconvex/continuous programming (NCP): twice differentiable, quasi-convex programming;
  6. Discrete optimization/integer programming (IP).

🏷 Category=Optimization