Dimensions of optimization settings (simple vs. complex):

- Continuous vs. discrete (there may also be mixed continuous-discrete problems).
  - A quantity can be modeled as **continuous** if your instrument can take on any value its resolution allows.
- Deterministic vs. stochastic (infinite dimensional, dynamic).
  - A quantity can be modeled as **stochastic** if its generating mechanism is inherently stochastic, or you acknowledge ignorance of its mechanism, or you intentionally introduce ignorance by reducing information.
- Single objective vs. multi-objective vs. game.

Subdivision of continuous optimization problems:

- Convex Optimization: linear programming (LP);
- Non-convex Optimization

An **optimization problem** is any problem of minimizing an objective function over a feasible set.
Given **optimization variable** $x \in \mathbb{R}^n$ and real-valued functions: $f_0$, the **objective function** (cost function); **inequality constraint functions** $f_i$ and **equality constraint functions** $h_i$,
the **standard form** of an optimization problem is:
$$\begin{align}
\min \quad & f_0(x) \\
\text{s.t.} \quad & f_i(x) \le 0 && i = 1,\dots,m \\
& h_i(x) = 0 && i = 1,\dots,p
\end{align}$$
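As a concrete sketch (assuming SciPy is available), a small standard-form problem can be solved numerically. The objective and constraint below are made-up examples; note that `scipy.optimize.minimize` uses the convention `fun(x) >= 0` for `'ineq'` constraints, the negation of the $f_i(x) \le 0$ convention above.

```python
# Sketch (assumes SciPy): solve a standard-form problem
#   min  f0(x) = x1^2 + x2^2
#   s.t. f1(x) = 1 - x1 - x2 <= 0
# SciPy's 'ineq' convention is fun(x) >= 0, so we pass -f1.
from scipy.optimize import minimize

f0 = lambda x: x[0] ** 2 + x[1] ** 2   # objective function
f1 = lambda x: 1 - x[0] - x[1]         # standard-form inequality: f1(x) <= 0

res = minimize(f0, x0=[0.0, 0.0], method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: -f1(x)}])
print(res.x)    # optimal point, approximately [0.5, 0.5]
print(res.fun)  # optimal value p*, approximately 0.5
```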
The **epigraph form** of an optimization problem is:
$$\begin{align}
\min \quad & t \\
\text{s.t.} \quad & f_0(x) - t \le 0 \\
& f_i(x) \le 0 && i = 1,\dots,m \\
& h_i(x) = 0 && i = 1,\dots,p
\end{align}$$
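For example, minimizing the nondifferentiable function $f_0(x) = |x|$ becomes a problem with linear constraints in epigraph form, since $|x| \le t$ is equivalent to the pair $x \le t$ and $-x \le t$:
$$\begin{align}
\min \quad & t \\
\text{s.t.} \quad & x - t \le 0 \\
& -x - t \le 0
\end{align}$$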

The **domain** of an optimization problem is the set of points for which the objective and all constraint functions are defined: $D = \left(\bigcap_{i=0}^{m} \operatorname{dom} f_i\right) \cap \left(\bigcap_{j=1}^{p} \operatorname{dom} h_j\right)$.
The **feasible set** (or **constraint set**) $X$ of an optimization problem is the set of all points in the domain of an optimization problem that satisfy all constraints.
An inequality constraint $f_i$ is **active** at a feasible point $x \in X$ if the inequality binds there: $f_i(x) = 0$.
Otherwise it is **inactive** at that point.
A constraint is **redundant** if it doesn't change the feasible set.

A **locally optimal point** is a feasible point that minimizes the objective function over the intersection of the feasible set with some neighborhood of the point: $\exists R > 0: f_0(x) = \inf f_0(X \cap B(x, R))$, where $B(x, R)$ is the ball of radius $R$ centered at $x$.

The **optimal value** of an optimization problem is the infimum of the image of the feasible set, which takes values in the extended reals: $p^\ast = \inf f_0(X)$, where $p$ stands for primal.
By convention, the infimum of the empty set is $+\infty$, so an infeasible problem has optimal value $+\infty$.
If the objective can be made arbitrarily low over the feasible set, the optimal value is $-\infty$ and the problem is unbounded below.
The **optimal set** of an optimization problem is the intersection of the inverse image of its optimal value and its feasible set: $X_{\text{opt}} = f_0^{-1}(p^\ast) \cap X$.
An **optimal point** is a point of the optimal set.

For $\varepsilon>0$, the **$\varepsilon$-suboptimal set** of an optimization problem is defined as $X_\varepsilon = f_0^{-1}\big([p^\ast, p^\ast + \varepsilon]\big) \cap X$.
An **$\varepsilon$-suboptimal point** is a point of the $\varepsilon$-suboptimal set.
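These sets can be illustrated by brute force over a finite grid of candidate points (a sketch only, with a made-up objective and constraint; enumeration is not a practical method):

```python
# Sketch: feasible set, optimal value p*, optimal set, and
# eps-suboptimal set, computed by enumeration over a finite grid.
f0 = lambda x: (x - 1) ** 2   # made-up objective
f1 = lambda x: -x             # inequality constraint f1(x) <= 0, i.e. x >= 0

grid = [i / 10 for i in range(-20, 21)]          # candidate points in [-2, 2]
X = [x for x in grid if f1(x) <= 0]              # feasible set (within the grid)

p_star = min(f0(x) for x in X)                   # optimal value
X_opt = [x for x in X if f0(x) == p_star]        # optimal set

eps = 0.05
X_eps = [x for x in X if f0(x) <= p_star + eps]  # eps-suboptimal set
```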

An optimization problem is **unconstrained** if there are no constraints: $m = p = 0$.
An optimization problem is **infeasible**, if its feasible set is empty.
Informally, two optimization problems are **equivalent** if from a solution of one, a solution of the other is readily found, and vice versa.
It is possible but complicated to give a formal definition of such equivalence.

An **ascent direction** of a function at a point is a direction in which the function locally strictly increases.
A **direction of steepest ascent** of a function at a point is an ascent direction with the largest directional derivative.
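For a differentiable function, the direction of steepest ascent is the normalized gradient. A quick finite-difference check on a made-up function:

```python
import math

# Sketch: verify numerically that the normalized gradient direction gives
# the largest directional derivative (steepest ascent). f is a made-up example.
f = lambda x, y: x ** 2 + 3 * y
x0, y0, h = 1.0, 2.0, 1e-6

# Central-difference gradient at (x0, y0); analytically (2*x0, 3) = (2, 3).
gx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
gy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
norm = math.hypot(gx, gy)

def dir_deriv(ux, uy):
    """Directional derivative of f at (x0, y0) along the unit vector (ux, uy)."""
    return (f(x0 + h * ux, y0 + h * uy) - f(x0, y0)) / h

# The normalized gradient beats a few other unit directions:
steepest = dir_deriv(gx / norm, gy / norm)   # approximately ||grad f|| = sqrt(13)
others = [dir_deriv(math.cos(t), math.sin(t)) for t in (0.0, 0.5, 2.0, 3.0)]
```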

A convex function has identical local and global minimum sets: every local minimum is a global minimum.

Computational properties of convex optimization problems:

- Interior-point methods can be proved to solve (to a specified accuracy) some cases of convex optimization problems in polynomial time.
- There are NP-hard convex optimization problems.

Optimization is the pursuit of optimal points in practical/computational problems. Although calculus easily solves this task when the function is differentiable and unconstrained, optimization mostly deals with constrained problems, and continuity is not guaranteed.

Development of optimization theory:

- Prehistory: early 1900s-1949.
- Fenchel-Rockafellar era: 1949-mid-1980s.
  - Duality theory.
- Modern era: mid-1980s-present.
  - Algorithms.
  - A change in the assumptions underlying the field.

**Duality** in general means two different views of the same object.
The dual problem of a discrete problem is continuous/convex, which provides a lower bound and other important information for the solution of the discrete primal.
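Weak duality can be demonstrated on a tiny made-up discrete problem: every value of the Lagrangian dual function $g(\lambda) = \min_x f_0(x) + \lambda f_1(x)$ with $\lambda \ge 0$ is a lower bound on the discrete optimum $p^\ast$ (a sketch, not a general solver):

```python
# Sketch: weak duality on a tiny discrete problem (made-up data).
#   min  x^2   s.t.  f1(x) = 1 - x <= 0,   x in {-2, -1, 0, 1, 2}
X_discrete = [-2, -1, 0, 1, 2]
f0 = lambda x: x * x
f1 = lambda x: 1 - x

# Primal optimal value over the feasible discrete points (= 1, at x = 1):
p_star = min(f0(x) for x in X_discrete if f1(x) <= 0)

# Lagrangian dual function: continuous and concave in lam.
g = lambda lam: min(f0(x) + lam * f1(x) for x in X_discrete)

# Every dual value with lam >= 0 lower-bounds p* (weak duality):
lams = [i / 10 for i in range(0, 51)]
d_star = max(g(lam) for lam in lams)   # best lower bound on this grid
```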

LPs and convex programs are often solved by the same methods:

- subgradient methods (descent methods);
- cutting plane methods (outer approximation);
- simplicial decomposition (inner approximation);
- proximal methods (regularization);
- interior point methods;
- incremental methods;
- gradient projection methods;
- optimal complexity methods.

NLPs are solved by gradient/Newton methods.
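A minimal sketch of the first family, a subgradient method, on the nondifferentiable made-up objective $f(x) = |x - 3|$ with diminishing steps $1/k$:

```python
# Sketch: subgradient method on f(x) = |x - 3| (nondifferentiable at x = 3).
# A subgradient of |x - 3| is sign(x - 3); any value in [-1, 1] works at x = 3.
f = lambda x: abs(x - 3)

x = 0.0
best = f(x)
for k in range(1, 1001):
    g = 1.0 if x > 3 else (-1.0 if x < 3 else 0.0)  # a subgradient at x
    x -= (1.0 / k) * g                              # diminishing step size 1/k
    best = min(best, f(x))                          # track best value seen
```

Subgradient steps are not descent steps, so the best value seen so far is tracked instead of the last iterate's value.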

Key distinction is not linear-nonlinear but convex-nonconvex. {Rockafellar}

Problem ranking in increasing practical difficulty:

- Linear programming (LP) and (convex) quadratic programming (QP): network flows;
- Second order cone programming (SOCP);
- Semidefinite programming (SDP);
- Convex programming (CP): separable, large sum; network flows, monotropic programming, geometric programming;
- Nonconvex/continuous programming (NCP): twice differentiable, quasi-convex programming;
- Discrete optimization/integer programming (IP);