§ 6   The Minimum (Maximum) Principle

 

[ The minimum (maximum) principle for continuous systems ]  Consider a control system whose state equation is

                dx/dt = f(x, u, t)                                (1)

satisfying the initial condition

                x(t_0) = x^0                                      (2)

The terminal state x(t_f) is either free or satisfies the target set

                G(x(t_f), t_f) = 0                                (3)

The performance index is

                J(u) = R(x(t_f), t_f) + ∫_{t_0}^{t_f} F(x, u, t) dt        (4)

where x and u are the state vector and control vector, respectively; f is an n-dimensional vector function, G is an m-dimensional vector function, and F, R are scalar functions.

Suppose f, F, G(x, t), R(x, t) are all continuous in their arguments, continuously differentiable with respect to x, and bounded.

Suppose the control vector u(t) is an admissible control, that is, it satisfies the following conditions:

(i)    u(t) is a piecewise continuous function on the closed interval [t_0, t_f] (that is, it has only a finite number of discontinuities of the first kind, and at a discontinuity it is taken to be left-continuous);

(ii)   u(t) is continuous at the endpoints t_0, t_f;

(iii)  u(t) ∈ U, where U is a bounded closed set in R^r.

Formulation of the problem: equations (1), (2), (3), and (4) being given, find an admissible control u(t) such that the trajectory of system (1) satisfying the initial condition (2) reaches the target set (3) at the terminal time t_f and makes the performance index (4) attain its minimum (or maximum) value.

For this purpose the costate variables λ_1(t), …, λ_n(t) (as opposed to the state variables x_1, …, x_n) are introduced; the costate vector λ(t) = (λ_1(t), …, λ_n(t))^T satisfies the system of differential equations

                dλ/dt = −∂H/∂x                                    (5)

where the auxiliary function

                H(x, u, λ, t) = F(x, u, t) + λ^T f(x, u, t)       (6)

is called the Hamiltonian of system (1). Equations (1) and (5) can then be written in the form

 

                dx/dt = ∂H/∂λ,    dλ/dt = −∂H/∂x                  (7)

which is called the Hamiltonian system of equations, or the canonical system of equations.

The minimum principle: if u*(t) is the optimal control of the problem stated above, and x*(t), λ*(t) are the corresponding optimal trajectory and optimal costate satisfying the canonical equations (7), then

                H(x*(t), u*(t), λ*(t), t) = min_{u ∈ U} H(x*(t), u, λ*(t), t),    t_0 ≤ t ≤ t_f
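When U is a bounded closed set, the minimisation of H over u ∈ U must be carried out pointwise, and the minimiser may lie on the boundary of U — precisely the case the classical variational method cannot handle. A minimal sketch with hypothetical data (H = x² + λu with U = [−1, 1]; not an example from the text):

```python
def argmin_H(lam, U=(-1.0, 1.0)):
    """Minimise H = x**2 + lam*u over the closed interval U.

    The x**2 term does not depend on u, so it can be dropped; H is
    linear in u, so the minimiser is an endpoint of U (any u in U
    achieves the minimum when lam = 0)."""
    lo, hi = U
    return lo if lam * lo <= lam * hi else hi

# u* switches between the endpoints as the costate changes sign:
# u*(t) = -sign(λ(t)), the familiar "bang-bang" form.
controls = [argmin_H(lam) for lam in (-2.0, -0.5, 0.5, 2.0)]
```

Here the interior stationarity condition ∂H/∂u = 0 never holds, yet the pointwise minimisation required by the principle is still well defined.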

The problem can thus be solved by the following steps:

(1)   Write the Hamiltonian function and the canonical system of equations.

(2)   Minimize the Hamiltonian function over u ∈ U to find the relation

                u = u(x, λ, t)                                    (8)

(3)   Substitute relation (8) into the canonical system, and solve the resulting two-point boundary value problem under the following boundary conditions; this yields the optimal trajectory x*(t) and the optimal costate λ*(t):

(i)   If both the initial condition and the terminal condition (target set) of system (1) are given, the boundary conditions of the canonical system (7) are

                x(t_0) = x^0,    x(t_f) = x_f

(ii)  If x(t_0) = x^0 is given and there is no target set (3), i.e. the terminal state is free, then the boundary conditions of the canonical system (7) are

                x(t_0) = x^0,    λ(t_f) = ∂R/∂x |_{t = t_f}

where R is the first term of the performance index (4). If R ≡ 0, the boundary conditions become

                x(t_0) = x^0,    λ(t_f) = 0

(iii) If x(t_0) = x^0 is given and the terminal state satisfies the target set

                G(x(t_f), t_f) = 0

where G is assumed to be m-dimensional with m ≤ n, then the boundary conditions of the canonical system (7) are

                x(t_0) = x^0,    λ(t_f) = ∂R/∂x |_{t = t_f} + (∂G/∂x)^T |_{t = t_f} μ

where μ is an m-dimensional constant column vector with undetermined components. Together with the m equations G(x(t_f), t_f) = 0, this gives a total of 2n + m boundary conditions.

The above assumes the terminal time t_f is fixed. If t_f is not fixed but free, there is one more unknown parameter in the boundary conditions, so one more relation must be added. For the cases of boundary conditions (i) and (ii), the supplementary relation is

                H |_{t = t_f} + ∂R/∂t |_{t = t_f} = 0

For the case of boundary condition (iii), the supplementary relation is

                H |_{t = t_f} + ∂R/∂t |_{t = t_f} + μ^T ∂G/∂t |_{t = t_f} = 0

(4)   Substitute the obtained x*(t), λ*(t) into relation (8) to obtain the optimal control u*(t).

The above steps can also be applied flexibly according to the nature of the problem.
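As an illustration of the steps above, here is a minimal numerical sketch (the example problem is hypothetical, not from the text): minimize J = ∫_0^1 (x² + u²) dt subject to dx/dt = u, x(0) = 1, with x(1) free. Then H = x² + u² + λu, step (2) gives u = −λ/2, and the canonical system (7) becomes dx/dt = −λ/2, dλ/dt = −2x with boundary conditions x(0) = 1, λ(1) = 0 (case (ii) with R ≡ 0). The two-point boundary value problem of step (3) is solved by simple shooting:

```python
def rhs(state):
    """Right-hand side of the canonical system: dx/dt = -λ/2, dλ/dt = -2x."""
    x, lam = state
    return (-lam / 2.0, -2.0 * x)

def integrate(lam0, n=1000):
    """Integrate from t = 0 to t = 1 by classical RK4, starting at (x, λ) = (1, lam0)."""
    h = 1.0 / n
    x, lam = 1.0, lam0
    for _ in range(n):
        k1 = rhs((x, lam))
        k2 = rhs((x + h/2*k1[0], lam + h/2*k1[1]))
        k3 = rhs((x + h/2*k2[0], lam + h/2*k2[1]))
        k4 = rhs((x + h*k3[0], lam + h*k3[1]))
        x   += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        lam += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x, lam

def shoot(s0=0.0, s1=1.0, tol=1e-9, max_iter=50):
    """Secant iteration on the unknown λ(0) so that λ(1) = 0."""
    f0, f1 = integrate(s0)[1], integrate(s1)[1]
    for _ in range(max_iter):
        if abs(f1) <= tol:
            break
        s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
        f1 = integrate(s1)[1]
    return s1

lam0 = shoot()
# Analytic solution of this example: x(t) = cosh(1-t)/cosh(1),
# λ(t) = 2 sinh(1-t)/cosh(1), so λ(0) = 2 tanh(1) ≈ 1.5232.
```

Step (4) then recovers the optimal control along the computed trajectory as u*(t) = −λ*(t)/2.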

A similar statement holds for the maximum principle (with min replaced by max).

The minimum (maximum) principle describes necessary conditions for the optimality of a control system and gives a method for determining the optimal control. The principle grew out of the classical calculus of variations, and all the well-known necessary conditions of the variational method can be deduced from it. Its main advantage over the classical variational method is that it applies to an arbitrary control set U, in particular to a bounded closed set, whereas the classical variational method is suited only to the case of an open set U (or U = R^r); in this sense the principle extends the admissible class of control domains. However, it suffers from the same difficulty as the variational method: the two-point boundary value problem.

[ The minimum (maximum) principle for discrete systems ]  Consider a discrete control system (Fig. 18.13) whose state equation is

                x(k+1) = f(x(k), u(k), k),    k = 0, 1, …, N−1    (1)

satisfying the initial condition

                x(0) = x^0                                        (2)

The terminal state x(N) is free, and the performance index is

                J = R(x(N)) + Σ_{k=0}^{N−1} F(x(k), u(k), k)      (3)

Here x(k) and u(k) are the state vector and control vector of the system at stage k; they are n-dimensional and r-dimensional vectors, respectively.

Formulation of the problem: find a control sequence u(0), u(1), …, u(N−1) such that, subject to the initial condition (2), the performance index (3) attains its minimum (maximum) value.

The treatment is similar to the continuous case. Introduce the costate λ(k), an n-dimensional column vector, and construct the Hamiltonian

                H(k) = F(x(k), u(k), k) + λ^T(k+1) f(x(k), u(k), k)        (4)

Then the canonical equations are

                x(k+1) = ∂H(k)/∂λ(k+1),  i.e.  x(k+1) = f(x(k), u(k), k)              (5)

                λ(k) = ∂H(k)/∂x(k),  i.e.  λ(k) = ∂F/∂x(k) + (∂f/∂x(k))^T λ(k+1)      (6)

The discrete minimum principle: let u*(k) be the optimal control, x*(k) the corresponding optimal trajectory, and λ*(k) the corresponding optimal costate; then they satisfy the canonical equations (5), (6) and one of the following conditions:

(i)   ∂H(k)/∂u(k) = 0, i.e. ∂F/∂u(k) + (∂f/∂u(k))^T λ*(k+1) = 0  (when u*(k) is an interior point of U);

(ii)  H(x*(k), u*(k), λ*(k+1), k) = min_{u(k) ∈ U} H(x*(k), u(k), λ*(k+1), k)

At the same time, the boundary condition

                λ(N) = ∂R/∂x(N)        (when x(N) is free)

is satisfied (if x(N) is given in advance, this condition is not required).

The problem can thus be solved by the following steps:

(1)   Write the Hamiltonian function (4) and the canonical equations (5), (6).

(2)   For fixed x(k) and λ(k+1), apply condition (i) or (ii) of the minimum principle to the Hamiltonian H(k) and find the relation

                u(k) = u(x(k), λ(k+1), k)                         (7)

(3)   Substitute relation (7) into the canonical equations (5), (6), and use the conditions

                x(0) = x^0,    λ(N) = ∂R/∂x(N)

The problem is thereby reduced to a two-point boundary value problem for this system of equations, from which x*(k) and λ*(k) can be obtained.

(4)   Substitute the obtained x*(k), λ*(k) into (7) to get the optimal control u*(k).
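As an illustration of these steps, here is a minimal numerical sketch (the problem data are hypothetical, not from the text) for the linear-quadratic problem x(k+1) = x(k) + u(k), x(0) = 1, J = Σ_{k=0}^{N−1} [x(k)² + u(k)²] + x(N)², with N = 10. Here H(k) = x(k)² + u(k)² + λ(k+1)[x(k) + u(k)], condition (i) gives u(k) = −λ(k+1)/2, equation (6) gives λ(k) = 2x(k) + λ(k+1), and the boundary condition is λ(N) = 2x(N). The two-point boundary value problem of step (3) is again solved by shooting on λ(0):

```python
N, x0 = 10, 1.0   # horizon and initial state (hypothetical data)

def propagate(lam0):
    """Run the canonical recursions forward from λ(0) = lam0.

    λ(k+1) = λ(k) - 2x(k)      (from (6): λ(k) = 2x(k) + λ(k+1))
    u(k)   = -λ(k+1)/2         (condition (i): 2u(k) + λ(k+1) = 0)
    x(k+1) = x(k) + u(k)       (the state equation)
    """
    xs, us, lam = [x0], [], lam0
    for _ in range(N):
        lam = lam - 2.0 * xs[-1]
        us.append(-lam / 2.0)
        xs.append(xs[-1] + us[-1])
    return xs, us, lam          # lam is now λ(N)

def residual(lam0):
    """Mismatch in the terminal boundary condition λ(N) = 2x(N)."""
    xs, _, lamN = propagate(lam0)
    return lamN - 2.0 * xs[-1]

# The recursions are linear, so the residual is an affine function of
# λ(0), and a single secant step lands on the correct value of λ(0).
s0, s1 = 0.0, 1.0
f0, f1 = residual(s0), residual(s1)
lam0 = s1 - f1 * (s1 - s0) / (f1 - f0)
xs, us, _ = propagate(lam0)

def cost(controls):
    """Evaluate J = Σ [x(k)² + u(k)²] + x(N)² for a control sequence."""
    x, J = x0, 0.0
    for u in controls:
        J, x = J + x*x + u*u, x + u
    return J + x*x

J_opt = cost(us)
```

Because this example is convex (one of the special cases in which the discrete principle does hold), the conditions are also sufficient, and the computed sequence is the global minimizer.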

It should be pointed out that a minimum (maximum) principle for discrete systems does not hold in general, but only in certain special cases; see G. S. Beveridge and R. S. Schechter, Optimization: Theory and Practice, McGraw-Hill, 1970, pp. 257-258.

 

 
