Dimensions of optimization settings: (simple vs. complex)
Subdivision of continuous optimization problems:
An optimization problem is any problem of minimizing an objective function over a feasible set. Given optimization variable $x \in \mathbb{R}^n$ and real-valued functions: $f_0$, the objective function (cost function); inequality constraint functions $f_i$ and equality constraint functions $h_i$, the standard form of an optimization problem is: $$\begin{align} \min \quad & f_0(x) \\ \text{s.t.} \quad & f_i(x) \le 0 && i = 1,\dots,m \\ & h_i(x) = 0 && i = 1,\dots,p \end{align}$$ The epigraph form of an optimization problem is: $$\begin{align} \min \quad & t \\ \text{s.t.} \quad & f_0(x) - t \le 0 \\ & f_i(x) \le 0 && i = 1,\dots,m \\ & h_i(x) = 0 && i = 1,\dots,p \end{align}$$
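The equivalence of the standard form and the epigraph form can be sanity-checked numerically; below is a minimal sketch on a toy unconstrained instance (the objective and grids are illustrative assumptions, not from the text), comparing the two optimal values over a finite grid:

```python
# Sketch: the epigraph form is equivalent to the standard form -- minimizing t
# over {(x, t) : f0(x) - t <= 0} gives the same optimal value as minimizing f0.
# Toy instance on a grid; all numbers are illustrative.

def f0(x):
    return x * x + 1          # convex toy objective, minimum value 1 at x = 0

xs = [i / 10 for i in range(-20, 21)]   # grid over x in [-2, 2]
ts = [i / 10 for i in range(0, 61)]     # grid over t in [0, 6]

standard = min(f0(x) for x in xs)                            # min f0(x)
epigraph = min(t for x in xs for t in ts if f0(x) - t <= 0)  # min t s.t. f0(x) <= t
print(standard, epigraph)   # equal optimal values
```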
The domain of an optimization problem is the intersection of the domains of the objective and all constraint functions: $D = (\cap_i \text{dom}\,f_i) \cap (\cap_j \text{dom}\,h_j)$. The feasible set or constraint set $X$ of an optimization problem is the subset of its domain whose points satisfy all the constraints. An inequality constraint $f_i$ is active at a feasible point $x \in X$ if the inequality binds: $f_i(x) = 0$; otherwise it is inactive at that point. A constraint is redundant if removing it does not change the feasible set.
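These definitions can be sketched in code; the constraint functions and tolerance below are hypothetical, chosen only to illustrate feasibility and active-constraint checks:

```python
# Sketch (hypothetical problem): feasibility and active inequality constraints
# for the standard form min f0(x) s.t. f_i(x) <= 0, h_i(x) = 0.
# Toy constraints: f1(x) = x1 + x2 - 1 <= 0, h1(x) = x1 - x2 = 0.

def f1(x):   # inequality constraint, required f1(x) <= 0
    return x[0] + x[1] - 1

def h1(x):   # equality constraint, required h1(x) = 0
    return x[0] - x[1]

def is_feasible(x, tol=1e-9):
    """x belongs to the feasible set X if all constraints hold (up to tol)."""
    return f1(x) <= tol and abs(h1(x)) <= tol

def active_inequalities(x, tol=1e-9):
    """Indices of inequality constraints that bind, f_i(x) = 0, at x."""
    return [i for i, f in enumerate([f1]) if abs(f(x)) <= tol]

print(is_feasible((0.5, 0.5)))          # on the boundary: feasible
print(active_inequalities((0.5, 0.5)))  # f1 binds at this point
print(is_feasible((1.0, 1.0)))          # violates f1: infeasible
```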
A locally optimal point is a feasible point that minimizes the objective function over the intersection of the feasible set with some ball centered at it: $\exists R > 0: f_0(x) = \inf f_0(X \cap B_x(R))$.
The optimal value of an optimization problem is the infimum of the image of its feasible set, which takes values in the extended real numbers: $p^∗ = \inf f_0(X)$, where $p$ stands for primal. By convention, the infimum of the empty set is $+\infty$, so an infeasible problem has optimal value $+\infty$. If the objective can be made arbitrarily low over the feasible set, the optimal value is $-\infty$. The optimal set of an optimization problem is the intersection of the preimage of its optimal value and its feasible set: $X_{\text{opt}} = f_0^{-1}(p^∗) \cap X$. An optimal point is a point of the optimal set.
For $\varepsilon > 0$, the $\varepsilon$-suboptimal set of an optimization problem is defined as $X_\varepsilon = f_0^{-1}([p^∗, p^∗ + \varepsilon]) \cap X$. An $\varepsilon$-suboptimal point is a point of the $\varepsilon$-suboptimal set.
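The optimal value and the $\varepsilon$-suboptimal set can be approximated on a finite grid of feasible points, where the infimum becomes a minimum; a minimal sketch on a toy problem (objective, interval, and $\varepsilon$ are all illustrative assumptions):

```python
# Sketch: approximating p* = inf f0(X) and the eps-suboptimal set on a finite
# grid of feasible points. Toy problem: f0(x) = (x - 2)^2 over X = [0, 1],
# so the true p* is f0(1) = 1.

def f0(x):
    return (x - 2) ** 2

X = [i / 100 for i in range(101)]       # discretized feasible set [0, 1]

p_star = min(f0(x) for x in X)          # inf over a finite set is a min
eps = 0.1
X_eps = [x for x in X if f0(x) <= p_star + eps]  # eps-suboptimal grid points

print(p_star)       # approximate optimal value
print(min(X_eps))   # leftmost eps-suboptimal grid point
```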
An optimization problem is unconstrained if there is no constraints: $m = p = 0$. An optimization problem is infeasible, if its feasible set is empty. Informally, two optimization problems are equivalent if from a solution of one, a solution of the other is readily found, and vice versa. It is possible but complicated to give a formal definition of such equivalence.
Ascent direction of a function at a point is a direction in which the function locally strictly increases. Direction of steepest ascent of a function at a point is an ascent direction with the largest directional derivative.
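For a differentiable function, the direction of steepest ascent at a point is the (normalized) gradient; a minimal sketch estimating it by central finite differences on a toy function (the function and step size are illustrative assumptions):

```python
# Sketch: steepest ascent direction of f(x, y) = x^2 + 3y^2 at (1, 1),
# via a central-difference estimate of the gradient (true gradient: (2, 6)).
import math

def f(x, y):
    return x ** 2 + 3 * y ** 2

def grad(x, y, h=1e-6):
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return gx, gy

gx, gy = grad(1.0, 1.0)
n = math.hypot(gx, gy)
steepest = (gx / n, gy / n)    # unit vector with largest directional derivative
print(steepest)
```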
A convex function has identical local and global minimum sets: every local minimum of a convex function is a global minimum.
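One practical consequence: a local descent method on a convex function cannot get trapped at a non-global minimizer. A minimal sketch, with gradient descent on a strictly convex quadratic reaching the same (global) minimizer from very different starting points (the function, step size, and iteration count are illustrative assumptions):

```python
# Sketch: gradient descent on the convex quadratic f(x) = (x - 3)^2.
# Because f is convex, the only local minimum x* = 3 is also global,
# so descent converges to it from any starting point.

def gd(x0, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        x -= step * 2 * (x - 3)    # gradient of f is 2(x - 3)
    return x

print(gd(-10.0), gd(10.0))   # both approach the global minimizer 3
```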
Computational properties of convex optimization problems:
Optimization is the pursuit of optimal points in practical/computational problems. Although calculus easily solves this task when the function is differentiable and unconstrained, optimization mostly deals with constrained problems, and continuity is not guaranteed.
Development of optimization theory:
Milestones in Optimization.
Regarding the development of modern optimization (general-purpose algorithms), I often point to several milestones: the simplex algorithm for linear programming in the mid-1940s (simplex algorithm [@Dantzig1947]), quasi-Newton methods in the 1960s (e.g. DFP [@Davidon1959]), and interior-point methods in the 1980s (e.g. [@Karmarkar1984]); I always summarize this as one stage every twenty years. Since 2005, ADMM (alternating direction method of multipliers, see [@Boyd2011]) has been among the important optimization methods most recognized by academia. (He Bingsheng)
Duality in general means two different views of the same object. The dual problem of a discrete problem is continuous/convex, and it provides a lower bound and other important information for solving the discrete primal.
LPs and convex programs are often solved by the same methods:
Key distinction is not linear-nonlinear but convex-nonconvex. {Rockafellar}
Problem ranking in increasing practical difficulty