OPEN CASCADE Multiple Variable Function
[Email protected]
Abstract. Multiple variable functions with gradient and Hessian matrix are very important in OPEN CASCADE optimization algorithms. In order to understand these optimization algorithms better, this article reviews some basic knowledge about the gradient and the Hessian matrix.
Key Words. Multiple Variable Function, Gradient, Hessian Matrix, Optimization Algorithm
1. Introduction
When a function has only one argument, it feels relatively simple and easy to keep under control. For a B-spline curve represented by the parameter u, as u varies within [0,1], each parameter value yields a point on the curve. When a function has more than one argument, things become a little harder; after all, people prefer stability and dislike too much change, yet more variation also produces richer results. For a B-spline surface with two parameters u and v, the parameter range is the rectangular domain u∈[0,1], v∈[0,1], and each parameter pair (u, v) corresponds to a point on the surface.
In practice, the concepts of the first derivative (gradient), the second derivative (Hessian matrix) and the extremum of multivariate functions are the basic knowledge needed to understand nonlinear optimization problems.
In OPEN CASCADE, nonlinear optimization algorithms are used in several extremum and approximation algorithms, as shown in the following class diagram:
Figure 1.1 Multiple Variable Function in OPEN CASCADE
As shown, the multivariate function with the second derivative (Hessian matrix) is applied to the global optimization in the extremum algorithm. The multivariate function with second-order derivatives is also used in the curve smoothing (fair curve) algorithm realized by the energy method. In order to better understand the concrete implementation of the optimization, we first study how the concepts related to multivariate functions are expressed in OPEN CASCADE: the multivariate function, the gradient, and the Hessian matrix.
2. Multiple Variable Function
The definition of a multivariate function is given in advanced mathematics: let D be a set of points in the plane. If for each point P(x, y) ∈ D the variable z always takes a definite value according to a certain rule, then z is called a function of the two variables x and y. The point set D is called the domain of the function, x and y are called the independent variables, and z is called the dependent variable. When the number of independent variables is greater than 1, i.e. n ≥ 2, the n-variable functions are collectively called multivariate functions.
In OPEN CASCADE, the most direct counterpart of a multivariate function is the class Geom_Surface, i.e. a surface in parametric representation: the range of the parameters u, v is the domain of the multivariate function, and the point on the surface obtained for a given (u, v) is the function value. The OPEN CASCADE math package also provides a more abstract multivariate function class: math_MultipleVarFunction.
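To make the analogy concrete, the following minimal sketch (my own illustration, not taken from the original article) treats a sphere, one of the Geom_Surface subclasses, as a function of the two variables u and v and evaluates the point that corresponds to a given parameter pair:

#include <iostream>

#include <gp_Ax3.hxx>
#include <gp_Pnt.hxx>
#include <Geom_SphericalSurface.hxx>

void TestSurfaceAsFunction(void)
{
    // A sphere of radius 1.0 placed at the default axis system (origin, Z up).
    Handle(Geom_SphericalSurface) aSphere = new Geom_SphericalSurface(gp_Ax3(), 1.0);

    // The parameters (u, v) are the independent variables;
    // the point on the surface is the function value.
    Standard_Real u = 0.5;
    Standard_Real v = 0.3;
    gp_Pnt aPoint = aSphere->Value(u, v);

    std::cout << aPoint.X() << ", " << aPoint.Y() << ", " << aPoint.Z() << std::endl;
}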
Figure 2.1 math_MultipleVarFunction class diagram
As the above class diagram shows, class math_MultipleVarFunction is an abstract class with two pure virtual functions:
virtual Standard_Integer NbVariables() const = 0: returns the number of independent variables;
virtual Standard_Boolean Value(const math_Vector& X, Standard_Real& F) = 0: calculates the function value F corresponding to the specified argument X. The independent variable X is a vector whose components correspond to the individual independent variables;
The following is a concrete application of a multivariate function: the calculation of a double integral. The example comes from the Advanced Mathematics textbook, Exercise 9-2 (1); the code below integrates f(x, y) = x² + y² over the square region D = [-1, 1] × [-1, 1], whose exact value is 8/3 ≈ 2.667.
In OPEN CASCADE, the above double integral is calculated with the following code:
/*
*    Copyright (c) 2015 Shing Liu. All rights reserved.
*
*    File    : Main.cpp
*    Author  : Shing Liu ([Email protected])
*    Date    : 2015-11-28 21:00
*    Version : OpenCASCADE 6.9.0
*
*    Description : Test Gauss multiple integration.
*/

#define WNT

#include <iostream>

#include <math_MultipleVarFunction.hxx>
#include <math_GaussMultipleIntegration.hxx>

#pragma comment(lib, "TKernel.lib")
#pragma comment(lib, "TKMath.lib")

// Integrand f(x1, x2) = x1^2 + x2^2 as a multiple variable function.
class math_TestFunction : public math_MultipleVarFunction
{
public:
    virtual Standard_Integer NbVariables() const
    {
        return 2;
    }

    virtual Standard_Boolean Value(const math_Vector& X, Standard_Real& F)
    {
        F = X(1) * X(1) + X(2) * X(2);

        return Standard_True;
    }
};

void TestMultipleIntegration(void)
{
    // Integration region [-1, 1] x [-1, 1] with Gauss order 10 in each variable.
    math_Vector aLower(1, 2);
    math_Vector aUpper(1, 2);
    math_IntegerVector anOrder(1, 2, 10);

    aLower(1) = -1.0;
    aLower(2) = -1.0;

    aUpper(1) = 1.0;
    aUpper(2) = 1.0;

    math_TestFunction aFunction;
    math_GaussMultipleIntegration anIntegrator(aFunction, aLower, aUpper, anOrder);
    if (anIntegrator.IsDone())
    {
        std::cout << anIntegrator;
    }
}

int main(int argc, char* argv[])
{
    TestMultipleIntegration();

    return 0;
}
By deriving a concrete multivariate function class from the abstract class math_MultipleVarFunction, the function value corresponding to the specified vector X is computed in the virtual function Value(), and the number of arguments is given by the function NbVariables(). Why can parentheses be used directly to read and assign values on objects of class math_Vector? Because the parentheses operator is overloaded in this class. The calculation result is as follows:
Figure 2.2 Integration Value
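For readers unfamiliar with math_Vector, here is a small sketch of the overloaded parentheses operator mentioned above (my own illustration; note that the indices are 1-based and run between the bounds passed to the constructor):

#include <iostream>

#include <math_Vector.hxx>

void TestMathVector(void)
{
    math_Vector aVec(1, 3);          // a vector with indices 1..3

    aVec(1) = 1.0;                   // operator() gives write access
    aVec(2) = 2.0;
    aVec(3) = 3.0;

    Standard_Real aSum = aVec(1) + aVec(2) + aVec(3);   // and read access
    std::cout << "Sum: " << aSum << std::endl;
}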
Because the integration region of math_GaussMultipleIntegration must be specified with constant lower and upper bounds, multiple integrals whose integration limits contain variables cannot be calculated with it directly.
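For illustration (my own example, not the one from the textbook), an iterated integral with a variable inner limit such as

\[
\int_{0}^{1} \int_{0}^{x} \left( x^{2} + y^{2} \right) \mathrm{d}y \, \mathrm{d}x
= \int_{0}^{1} \frac{4x^{3}}{3} \, \mathrm{d}x
= \frac{1}{3}
\]

cannot be passed to math_GaussMultipleIntegration as it stands, because the lower and upper bound vectors only accept constants; such an integral has to be evaluated analytically or first transformed to a constant rectangular domain.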
For the Gauss integration of one-variable functions in OPEN CASCADE, the counterpart of the multivariate case, see:
OPEN CASCADE Gauss Integration:
Http://www.cppblog.com/eryar/archive/2014/09/11/208275.html
3. Gradient
In the case of a function of two variables: suppose the function z = f(x, y) has first-order continuous partial derivatives in a planar region D. Then for each point P(x, y) ∈ D a vector can be defined:
(∂f/∂x) i + (∂f/∂y) j
This vector is called the gradient of the function z = f(x, y) at the point P(x, y), denoted grad f(x, y). For a multivariate function u = f(x), x = (x1, x2, ..., xn)^T, there is the following definition:
Let u = f(x), x ∈ S. If at the point x0 = (x1(0), x2(0), ..., xn(0))^T the partial derivatives of f with respect to every component of the independent variable x = (x1, x2, ..., xn)^T exist, then the function u = f(x) is said to be first-order differentiable at the point x0, and the vector ∇f(x0) = (∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn)^T, evaluated at x0, is called the gradient, or first-order derivative, of u = f(x) at the point x0.
The first-order necessary condition of unconstrained optimization is: if x0 is a local optimal point of the unconstrained optimization problem, then ∇f(x0) = 0.
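As a simple worked example (my own illustration), take the same function that was integrated above:

\[
f(x_1, x_2) = x_1^{2} + x_2^{2}, \qquad
\nabla f(x) = \left( \frac{\partial f}{\partial x_1}, \; \frac{\partial f}{\partial x_2} \right)^{T} = (2x_1, \; 2x_2)^{T}, \qquad
\nabla f(x_0) = 0 \;\Longleftrightarrow\; x_0 = (0, 0)^{T}.
\]

So the only stationary point is the origin; the second-order conditions in Section 4 confirm that it is in fact a strict minimum.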
Mathematical analysis shows that the gradient of f(x) at a point x0 is the normal direction of the level set of f through x0, i.e. the direction along which the function value changes fastest. A point where the gradient of the objective function is zero is called a stationary point of the unconstrained optimization problem. A stationary point may be a local maximum of the objective function, a local minimum, or neither. The last case corresponds to a so-called saddle point of the function: starting from that point, the function attains a maximum along one direction and a minimum along another.
For an unconstrained optimization problem, the directional derivative of the objective function at the optimal point is zero in every direction, i.e. the tangent plane of the objective function at the optimal point is horizontal. However, local maximum points and saddle points of the unconstrained problem also satisfy this condition. Therefore, to confirm whether a stationary point is an optimal point, the second-order optimality conditions at that point must also be considered.
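A standard example of a saddle point (again my own illustration):

\[
f(x, y) = x^{2} - y^{2}, \qquad \nabla f = (2x, \; -2y)^{T}, \qquad \nabla f(0, 0) = 0,
\]
\[
f(x, 0) = x^{2} \ge f(0, 0), \qquad f(0, y) = -y^{2} \le f(0, 0),
\]

so the origin is a stationary point that is a minimum along the x-direction and a maximum along the y-direction, i.e. a saddle point.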
The corresponding class in OPEN CASCADE is math_MultipleVarFunctionWithGradient, a multivariate function with a gradient. This class is also abstract and cannot be instantiated directly; to use it, a new class must be derived and the following pure virtual functions implemented (a minimal example follows the list):
virtual Standard_Integer NbVariables() const = 0: the number of independent variables of the multivariate function;
virtual Standard_Boolean Value(const math_Vector& X, Standard_Real& F) = 0: computes the function value F of the multivariate function at the specified variable X, returned by reference;
virtual Standard_Boolean Gradient(const math_Vector& X, math_Vector& G) = 0: computes the gradient G of the multivariate function at the specified variable X, returned by reference;
virtual Standard_Boolean Values(const math_Vector& X, Standard_Real& F, math_Vector& G) = 0: computes both the function value F and the gradient G of the multivariate function at the specified variable X;
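As announced above, here is a minimal sketch of such a derived class (my own example, not a class shipped with OPEN CASCADE), again for f(x1, x2) = x1² + x2²:

#include <math_Vector.hxx>
#include <math_MultipleVarFunctionWithGradient.hxx>

// Hypothetical example class: f(x1, x2) = x1^2 + x2^2 with gradient (2*x1, 2*x2).
class TestFunctionWithGradient : public math_MultipleVarFunctionWithGradient
{
public:
    virtual Standard_Integer NbVariables() const
    {
        return 2;
    }

    virtual Standard_Boolean Value(const math_Vector& X, Standard_Real& F)
    {
        F = X(1) * X(1) + X(2) * X(2);
        return Standard_True;
    }

    virtual Standard_Boolean Gradient(const math_Vector& X, math_Vector& G)
    {
        G(1) = 2.0 * X(1);
        G(2) = 2.0 * X(2);
        return Standard_True;
    }

    virtual Standard_Boolean Values(const math_Vector& X, Standard_Real& F, math_Vector& G)
    {
        // Evaluate the function value and the gradient in one call.
        return Value(X, F) && Gradient(X, G);
    }
};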
4. Hessian Matrix
The second derivative of a multivariate function, the Hessian matrix, is defined as follows: let u = f(x), x0 ∈ S. If all second-order partial derivatives of f with respect to the components of the independent variable x exist at the point x0, then the function f(x) is said to be twice differentiable at x0, and the n×n matrix
∇²f(x0) = [ ∂²f/(∂xi ∂xj) ], evaluated at x0,
is called the second derivative, or Hessian matrix, of f(x) at the point x0. The Hessian matrix is sometimes written H(x0).
The second-order necessary condition of unconstrained optimization is: suppose f(x) is twice differentiable at the point x0 ∈ S. If x0 is a local minimum of f(x), then ∇f(x0) = 0 and ∇²f(x0) is positive semi-definite.
The second-order sufficient condition of unconstrained optimization is: suppose f(x) is twice differentiable at the point x0 ∈ S. If ∇f(x0) = 0 and ∇²f(x0) is positive definite, then x0 is a strict local minimum of the function f(x).
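Continuing the worked example from Section 3 (my own illustration), for f(x1, x2) = x1² + x2² the Hessian matrix is constant:

\[
\nabla^{2} f(x) =
\begin{pmatrix}
2 & 0 \\
0 & 2
\end{pmatrix}.
\]

Its leading principal minors 2 and 4 are both positive, so the Hessian is positive definite everywhere; together with ∇f(0, 0) = 0, the sufficient condition shows that x0 = (0, 0)^T is a strict (in fact global) minimum. For the saddle example f(x, y) = x² − y², the Hessian is diag(2, −2), which is indefinite, so its stationary point at the origin is neither a minimum nor a maximum.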
The quadratic form x^T A x, with x = (x1, x2, ..., xn)^T, is a homogeneous function of degree two and plays an important role in the study of nonlinear optimization problems. The criterion for the positive definiteness of the Hessian matrix can be obtained from the theorems on positive definite quadratic forms in linear algebra. Compared with the theorem in advanced mathematics on the sufficient condition for the extremum of a function of two variables, the Hessian criterion is more general and applies to any multivariate function. Only after seeing this application of positive definite quadratic forms does the concept, which seemed to appear out of nowhere in the linear algebra textbook, finally connect to practice. It seems linear algebra really has to be understood through concrete applications: the solution of differential equations in modern control engineering uses eigenvalue theory, and the explanation of the sufficient condition for extrema in unconstrained optimization uses the theory of positive definite quadratic forms. No wonder studying linear algebra felt so laborious at the time; without understanding these abstract concepts one can only memorize them mechanically, and if they are never used later they are soon given back to the teacher. The class corresponding to the Hessian matrix in OPEN CASCADE is math_MultipleVarFunctionWithHessian, and its class diagram is as follows:
Figure 4.1 math_MultipleVarFunctionWithHessian class diagram
As the above class diagram shows, math_MultipleVarFunctionWithHessian derives from the multivariate function class with gradient, math_MultipleVarFunctionWithGradient. Compared with its base class, one additional pure virtual function has to be implemented:
virtual Standard_Boolean Values(const math_Vector& X, Standard_Real& F, math_Vector& G, math_Matrix& H) = 0
This pure virtual function computes the function value F, the gradient G and the Hessian matrix H of the multivariate function at the specified variable X.
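A minimal sketch of such a derived class (my own example, not one of the OPEN CASCADE classes in the diagrams), once more for f(x1, x2) = x1² + x2², could look like this:

#include <math_Vector.hxx>
#include <math_Matrix.hxx>
#include <math_MultipleVarFunctionWithHessian.hxx>

// Hypothetical example class: f(x1, x2) = x1^2 + x2^2,
// gradient (2*x1, 2*x2) and constant Hessian diag(2, 2).
class TestFunctionWithHessian : public math_MultipleVarFunctionWithHessian
{
public:
    virtual Standard_Integer NbVariables() const
    {
        return 2;
    }

    virtual Standard_Boolean Value(const math_Vector& X, Standard_Real& F)
    {
        F = X(1) * X(1) + X(2) * X(2);
        return Standard_True;
    }

    virtual Standard_Boolean Gradient(const math_Vector& X, math_Vector& G)
    {
        G(1) = 2.0 * X(1);
        G(2) = 2.0 * X(2);
        return Standard_True;
    }

    virtual Standard_Boolean Values(const math_Vector& X, Standard_Real& F, math_Vector& G)
    {
        return Value(X, F) && Gradient(X, G);
    }

    virtual Standard_Boolean Values(const math_Vector& X, Standard_Real& F, math_Vector& G, math_Matrix& H)
    {
        // The Hessian of x1^2 + x2^2 is constant.
        H(1, 1) = 2.0;  H(1, 2) = 0.0;
        H(2, 1) = 0.0;  H(2, 2) = 2.0;
        return Values(X, F, G);
    }
};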
As just illustrated, math_MultipleVarFunctionWithHessian has pure virtual functions and cannot be instantiated directly; concrete classes must be derived from it according to the actual situation. As with the classes in the diagram shown at the beginning of this article, new classes are derived from the multivariate function with Hessian matrix to compute global extrema and to smooth curves, as shown in the following class diagram:
Figure 4.2 math_MultipleVarFunctionWithHessian class diagram
5. Conclusion
The OPEN CASCADE math package describes not only the one-variable function math_Function with a single argument, but also the multivariate function with multiple independent variables, its first derivative (gradient) and its second derivative (Hessian matrix). From the application to the integral calculation of a multivariate function, it can be seen that OCC's class encapsulation of these concepts is clear and easy to understand and use. With these basic concepts introduced, it becomes easier to understand the concrete optimization algorithms that use them.
It can be seen from the optimization algorithms in OPEN CASCADE that their core is the skillful application of mathematics as a tool. By reading related books such as Advanced Mathematics, Linear Algebra, Optimization Methods, Nonlinear Optimization Theory and Methods and Numerical Analysis while comparing them with the corresponding code in OPEN CASCADE, theory can be effectively connected with practice and learning efficiency improved. It is also a chance to revisit advanced mathematics and linear algebra: problems that once seemed impenetrable can finally produce an "aha!" moment, and with it the joy of learning.
6. References
1. Department of Mathematics, Tongji University. Advanced Mathematics. Higher Education Press, 1996.
2. Department of Applied Mathematics, Tongji University. Linear Algebra. Higher Education Press, 2003.
3. Yi Dayi, Chen Daoqi. Introduction to Numerical Analysis. Zhejiang University Press, 1998.
4. Operations Research Textbook Writing Group. Operations Research. Tsinghua University Press, 2012.
5. He Jianyong. Optimization Methods. Tsinghua University Press, 2007.
6. Li. Optimization Methods. Science Press, 2015.
7. Wang Yiju, Xiu Naihua. Nonlinear Optimization Theory and Methods. Science Press, 2012.