This page is a rough summary of various methods for optimization, curve fitting / linear regression, etc.
First, some definitions:
- In statistics, linear regression is a way of fitting a curve to a set of data points; the model is linear in its coefficients even if the fitted curve is not a straight line (a one-line least-squares formulation is sketched below).
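As a minimal illustration (the notation is mine, not from the page): given data points (x_i, y_i), ordinary least squares picks the model coefficients by minimizing the sum of squared residuals, e.g. for a straight line

    \min_{a,\,b} \; \sum_{i=1}^{n} \bigl( y_i - (a x_i + b) \bigr)^2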
Optimization
- Convex optimization
One-Dimensional Search Methods
- Golden Section Search (sketched below)
- Fibonacci Search
- Newton's Method
- Secant Method
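A minimal golden-section search sketch in Python (function names and the test function are my own, not from the page); it assumes f is unimodal on [a, b]:

    import math

    def golden_section_search(f, a, b, tol=1e-6):
        # Shrink [a, b] by the inverse golden ratio until it is shorter than tol.
        invphi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        while (b - a) > tol:
            if f(c) < f(d):          # minimum lies in [a, d]
                b, d = d, c
                c = b - invphi * (b - a)
            else:                    # minimum lies in [c, b]
                a, c = c, d
                d = a + invphi * (b - a)
        return 0.5 * (a + b)

    # Example: the minimum of (x - 2)^2 on [0, 5] is at x = 2.
    print(golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0))

(A tighter implementation would cache one function value per iteration; this version re-evaluates both for brevity.)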
Unconstrained Optimization and Neural Networks
- Descent methods
- Line search
- Descent methods with trust region
- Steepest descent (sketched below, with a backtracking line search)
- Quadratic models
- Conjugate gradient methods
- Single-Neuron Training
- Backpropagation Algorithm
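A steepest-descent sketch with a backtracking (Armijo) line search, assuming NumPy and a user-supplied gradient; the names and constants are illustrative, not from the page:

    import numpy as np

    def gradient_descent(f, grad, x0, alpha0=1.0, rho=0.5, c=1e-4,
                         tol=1e-8, max_iter=1000):
        # Steepest descent: step along -grad(x) with an Armijo backtracking line search.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            alpha = alpha0
            # Backtrack until the sufficient-decrease condition holds.
            while f(x - alpha * g) > f(x) - c * alpha * g.dot(g):
                alpha *= rho
            x = x - alpha * g
        return x

    # Example: minimize the simple quadratic f(x) = x0^2 + 10*x1^2.
    f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
    grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
    print(gradient_descent(f, grad, [3.0, -2.0]))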
Newton-Type Methods
- Newton’s method (sketched below)
- Damped Newton methods
- Quasi–Newton methods
- DFP formula
- BFGS formulas
- Quasi–Newton implementation
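A bare Newton's method sketch for unconstrained minimization (no damping or Hessian modification, so it assumes a decent starting point and a positive-definite Hessian); the helper names are mine:

    import numpy as np

    def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
        # Repeatedly solve H(x) p = -g(x) and step to x + p.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            p = np.linalg.solve(hess(x), -g)   # Newton step
            x = x + p
        return x

    # Example: f(x) = (x0 - 1)^2 + 2*(x1 + 2)^2 has its minimum at (1, -2).
    grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
    hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
    print(newton_minimize(grad, hess, [10.0, 10.0]))

Quasi-Newton methods (DFP, BFGS) replace the exact Hessian here with an approximation built up from gradient differences.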
Direct Search
- Simplex method
- Method of Hooke and Jeeves (sketched below)
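A derivative-free pattern search in the spirit of Hooke and Jeeves: exploratory coordinate moves plus a pattern move. The step-shrinking schedule and all names are my own simplification, not the page's:

    import numpy as np

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10000):
        def explore(base, s):
            # Try +s / -s along each coordinate, keeping the first improving move.
            x = base.copy()
            for i in range(len(x)):
                for d in (+s, -s):
                    trial = x.copy()
                    trial[i] += d
                    if f(trial) < f(x):
                        x = trial
                        break
            return x

        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            if step < tol:
                break
            new = explore(x, step)
            if f(new) < f(x):
                # Pattern move: jump further along the successful direction, then re-explore.
                pattern = explore(new + (new - x), step)
                x = pattern if f(pattern) < f(new) else new
            else:
                step *= shrink
        return x

    # Example: minimize (x0 - 3)^2 + (x1 + 1)^2 without any derivatives.
    f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
    print(hooke_jeeves(f, [0.0, 0.0]))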
Linear Data Fitting
- “Best” fit
- Linear least squares (sketched below)
- Weighted least squares
- Generalized least squares
- Polynomial fit
- Spline fit
- Choice of knots
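A linear least-squares polynomial fit sketch with NumPy (the data and degree are made up for illustration); a weighted fit would scale each row of A and each entry of y by sqrt(w_i):

    import numpy as np

    # Fit y ~ c0 + c1*x + c2*x^2 in the least-squares sense.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = 1.0 + 2.0 * x - 3.0 * x ** 2 + 0.01 * rng.standard_normal(x.size)
    A = np.vander(x, 3, increasing=True)           # design matrix [1, x, x^2]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(coeffs)                                  # close to [1, 2, -3]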
Nonlinear Least Squares Problems
- Gauss–Newton method
- The Levenberg–Marquardt method (sketched below)
- Powell’s Dog Leg Method
- Secant version of the L–M method
- Secant version of the Dog Leg method
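A basic Levenberg–Marquardt sketch: damped Gauss–Newton steps with a simple doubling/halving of the damping parameter (not the full update rule from the literature); all names are illustrative:

    import numpy as np

    def levenberg_marquardt(residual, jac, x0, lam=1e-3, tol=1e-10, max_iter=100):
        # Minimize 0.5*||r(x)||^2 by solving damped normal equations each iteration.
        x = np.asarray(x0, dtype=float)
        r = residual(x)
        for _ in range(max_iter):
            J = jac(x)
            g = J.T @ r
            if np.linalg.norm(g) < tol:
                break
            # (J^T J + lam*I) p = -J^T r
            A = J.T @ J + lam * np.eye(len(x))
            p = np.linalg.solve(A, -g)
            r_new = residual(x + p)
            if r_new @ r_new < r @ r:   # step accepted: decrease damping
                x, r, lam = x + p, r_new, lam * 0.5
            else:                       # step rejected: increase damping
                lam *= 2.0
        return x

    # Example: fit y = a*exp(b*t) to noiseless data; the answer is a = 2, b = -1.5.
    t = np.linspace(0.0, 1.0, 30)
    y = 2.0 * np.exp(-1.5 * t)
    res = lambda p: p[0] * np.exp(p[1] * t) - y
    jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
    print(levenberg_marquardt(res, jac, [1.0, 0.0]))

The secant versions replace the analytic Jacobian above with an approximation updated from residual differences.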
Duality
- The Lagrange dual function (written out below)
- The Lagrange dual problem
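For reference, in the usual convex-optimization notation (symbols are mine, not from the page): for the problem of minimizing f_0(x) subject to f_i(x) <= 0 and h_j(x) = 0, the Lagrangian, the dual function, and the dual problem are

    L(x, \lambda, \nu) = f_0(x) + \sum_i \lambda_i f_i(x) + \sum_j \nu_j h_j(x)
    g(\lambda, \nu) = \inf_x L(x, \lambda, \nu)
    \text{maximize } g(\lambda, \nu) \quad \text{subject to } \lambda \succeq 0

The dual function g is concave even when the original problem is not convex, and g(\lambda, \nu) lower-bounds the optimal primal value for any \lambda \succeq 0 (weak duality).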