Topics: unconstrained problems and optimality conditions; one-dimensional (line) search; exact line search; direct search methods; inexact line search; descent algorithms; convergence and convergence rate; unconstrained programming; the steepest descent method; Newton's method; the Newton/steepest-descent hybrid algorithm; the damped Newton method; quasi-Newton methods; the conjugate gradient method.
Unconstrained problems and optimal solutions
Consider the following optimization problem:

$$\min_{x\in\mathbb{R}^{n}} f(x)$$

The solutions of an unconstrained optimization problem are divided into local solutions and global solutions, but in practice it is usually only feasible to find local solutions (or strict local solutions). Unless otherwise stated, "solution" below means a local solution.
Definition of a local solution
Let $x^{*}\in\mathbb{R}^{n}$. If there exists a $\delta$-neighborhood of $x^{*}$ (with $\delta>0$),

$$N_{\delta}(x^{*}) = \{x \mid \|x-x^{*}\| < \delta\},$$

such that

$$f(x) \geq f(x^{*}), \quad \forall x \in N_{\delta}(x^{*}),$$

then $x^{*}$ is called a local solution of $f(x)$. If the inequality holds strictly for all $x \neq x^{*}$ in the neighborhood, $x^{*}$ is a strict local solution.
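The definition above can be illustrated numerically: sample points from the ball $N_{\delta}(x^{*})$ and check whether $f(x) \geq f(x^{*})$ holds at every sample. This is a minimal sketch (the helper name and test function are illustrative, not from the text); a finite sample can only provide evidence for, never a proof of, local optimality.

```python
import numpy as np

def looks_like_local_min(f, x_star, delta=1e-3, n_samples=1000, seed=0):
    """Sample the delta-neighborhood N_delta(x*) and check f(x) >= f(x*)
    for every sample. A necessary (sampled) condition, not a proof."""
    rng = np.random.default_rng(seed)
    x_star = np.atleast_1d(np.asarray(x_star, dtype=float))
    n = x_star.size
    # Uniform points in the ball ||x - x*|| < delta:
    # random unit directions scaled by radii with the correct distribution.
    dirs = rng.normal(size=(n_samples, n))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = delta * rng.random(n_samples) ** (1.0 / n)
    samples = x_star + radii[:, None] * dirs
    return all(f(x) >= f(x_star) for x in samples)

# Example: f(x) = ||x - (1, 2)||^2 has a strict local (in fact global)
# minimum at (1, 2), while (0, 0) is not a local minimizer.
f = lambda x: float(np.sum((x - np.array([1.0, 2.0])) ** 2))
print(looks_like_local_min(f, [1.0, 2.0]))  # True
print(looks_like_local_min(f, [0.0, 0.0]))  # False
```

Note that the check uses a fixed $\delta$; the definition only requires that *some* $\delta > 0$ exists, so a failure at one $\delta$ caused by numerical noise does not by itself rule out local optimality.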