Pursuit and Evasion Game under Uncertainty

Bankole Abiola, R.K. Ojikutu


1Department of Actuarial Science and Insurance, Faculty of Business Administration, University of Lagos, Akoka, Lagos

Abstract

This paper examines a class of multidimensional differential games. In particular, it considers a situation in which both the pursuer and the evader are affected by uncertain disturbances. A necessary and sufficient condition for the existence of a saddle point for this class of games is developed.



1. Introduction and System Description

Let and be Euclidean spaces and denote the Euclidean norm. We consider the uncertain dynamical system modelled by:

(1)
(2)
(3)
(4)

The subscripts and in equations (1)-(4) stand for pursuer and evader respectively. Also, and . The vectors and are uncertain vectors.
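Because the displayed equations (1)-(4) did not survive extraction, the following is only an illustrative sketch of a typical uncertain linear pursuit-evasion model of this kind; the symbols A_p, B_p, A_e, B_e, u_p, u_e, v_p, v_e are assumptions introduced here for orientation, not the authors' exact notation:

\dot{x}_p(t) = A_p x_p(t) + B_p u_p(t) + v_p(t), \qquad x_p(t_0) = x_{p0},
\dot{x}_e(t) = A_e x_e(t) + B_e u_e(t) + v_e(t), \qquad x_e(t_0) = x_{e0},

with u_p, u_e the pursuer and evader controls and v_p, v_e the uncertain disturbance vectors.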

2. The Problem

The pursuer uses its control to attempt to capture the evader, while the evader uses its control to avoid being captured. The problem of interest is the minimization and the maximization of the final miss by the pursuer and the evader respectively, in the presence of the uncertainties.

3. The Game

The final miss is defined as a weighted quadratic form:

(5)

To make the game meaningful, we shall impose the following limitations:

(6)
(7)

where are positive definite matrices such that

T is the final time. Adjoining (6) and (7) to (5), we obtain the pay-off functional defined as:

(8)

The controller is the minimizer of and the controller is the maximizer. In equations (1) and (2), the uncertainty acts against the wish of the minimizing controller, while in equations (3) and (4) it acts against the wish of the maximizer.
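For orientation only (the original displays (5)-(8) are not reproduced above), a typical final-miss functional with integral bounds on the control effort in linear-quadratic pursuit-evasion games takes a form such as

J = \tfrac{1}{2}\,\|x_p(T) - x_e(T)\|_Q^2, \qquad \int_0^T u_p' R_p u_p\,dt \le c_p, \qquad \int_0^T u_e' R_e u_e\,dt \le c_e,

which, after adjoining the constraints, yields a combined pay-off of the form

J(u_p, u_e) = \tfrac{1}{2}\,\|x_p(T) - x_e(T)\|_Q^2 + \tfrac{1}{2}\int_0^T \bigl(u_p' R_p u_p - u_e' R_e u_e\bigr)\,dt,

minimized by the pursuer and maximized by the evader. The weights Q, R_p, R_e and bounds c_p, c_e are hypothetical names used here only to illustrate the structure; they are not the authors' (5)-(8).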

Subsequently, we make the following assumptions:

Assumption A1: Assuming that the uncertainties

where and are Lebesgue measurable and range within a compact set; then there exist constants such that

Assumption A2: There exist non-singular matrices and of appropriate dimensions such that:

(9)
(10)

Also, given any and , there exists a unique solution such that the following matrix Lyapunov equations are satisfied.

(11)
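Equation (11) is a matrix Lyapunov equation. As a minimal numerical sketch only (the matrices below are illustrative placeholders, not those of (9)-(11)), an equation of the form A P + P A' = -Q can be solved with SciPy, assuming it is available:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative (assumed) stable system matrix and positive definite weight;
# these are placeholders, not the matrices of equations (9)-(11).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(M, R) returns P with M P + P M^T = R,
# so passing (A, -Q) solves A P + P A^T = -Q.
P = solve_continuous_lyapunov(A, -Q)

print(np.allclose(A @ P + P @ A.T, -Q))   # residual check: True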

4. Problem Formulation

Defining a new state variable

(12)

then, from equations (1) – (4) we have

(13)

where

(14)

We shall write equation (13) in a compact form as:

(15)

where

(16)
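Only for orientation (the definitions in (12)-(16) are not reproduced above), a common reduction in linear pursuit-evasion problems is to take as the new state the predicted terminal miss,

z(t) = \Phi_p(T,t)\,x_p(t) - \Phi_e(T,t)\,x_e(t),

where \Phi_p, \Phi_e denote state transition matrices; its dynamics then depend only on the controls and the disturbances,

\dot{z}(t) = \Phi_p(T,t)\bigl(B_p u_p(t) + v_p(t)\bigr) - \Phi_e(T,t)\bigl(B_e u_e(t) + v_e(t)\bigr),

which can be written compactly as \dot{z} = \tilde{B}(t)u_p + \tilde{C}(t)u_e + \tilde{v}(t). This is a sketch of the standard device in the illustrative notation above, not necessarily the authors' transformation in (12)-(16).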

We also impose the condition that

On the basis of (15), the following problems arise:

Problem 1:

Subject to

This problem will be solved under the assumption that the pay-off functional defined by can be separated into the form:

(17)

Based on the aforementioned assumption and noting that

(18)

we arrive at the formulation of the following two optimal control problems, defined as Problems (2) and (3).

Problem (2):

Subject to

(19)

Problem (3):

Subject to

(20)

5. Solution

Necessary Condition for a Saddle Point

For problems (2) and (3) we introduce the following assumptions:

i.  The matrix functions are constant matrices of appropriate dimensions.

ii. Control functions , and disturbances , in problems (2) and (3) are generated by strategies.

Now consider problem (2) and define the Hamiltonian for the problem as

(21)

The adjoint equation satisfies

(22)

Now,

Therefore

(23)
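To make the structure of (21)-(23) concrete, here is a sketch of the Pontryagin construction in the illustrative notation introduced above (not the authors' exact symbols), for a reduced system \dot{z} = \tilde{B}u_p + \tilde{C}u_e + \tilde{v} with terminal cost \tfrac{1}{2}\|z(T)\|_Q^2 only:

H(z, u_p, u_e, \lambda, t) = \lambda'(t)\bigl(\tilde{B}(t)u_p(t) + \tilde{C}(t)u_e(t) + \tilde{v}(t)\bigr),

\dot{\lambda}(t) = -\frac{\partial H}{\partial z} = 0, \qquad \lambda(T) = Q\,z(T),

so that in this reduced setting the adjoint vector is constant along the trajectory, and the minimizing (respectively maximizing) control enters the Hamiltonian only through \tilde{B}'\lambda (respectively \tilde{C}'\lambda).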

6. Deduction

From (23) we deduce the following three cases, namely:

then any admissible is an optimal solution

We rule out case (b), since the adjoint vector cannot be negative.

Case (c) is a trivial solution. Assuming that the solution is not trivial, we consider case (a).

Let be continuous and let be defined as

(24)

where and are square matrices and is a solution to (19). Differentiating (14), we get:

(25)

Hence,

(26)

Therefore on the basis of (12) we have

(27)

Substituting (14) in (17) we get

(28)

For (18) to hold, it is necessary and sufficient that

(29)

Let and be strategies for and respectively; we then summarize the saddle-point solution as follows:

(30)
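For comparison only (the authors' strategies are those given in (30)), in Gutman's (1975) treatment of linear systems with norm-bounded, matched uncertainty the min-max strategies typically take a unit-vector (hard-bound, saturating) form such as

u_p^*(t) = -\rho_p\,\frac{\tilde{B}'(t)\lambda(t)}{\|\tilde{B}'(t)\lambda(t)\|}, \qquad v_p^*(t) = \rho_v\,\frac{\tilde{B}'(t)\lambda(t)}{\|\tilde{B}'(t)\lambda(t)\|},

where \rho_p and \rho_v are the norm bounds on the control and the disturbance. These expressions are illustrative of the technique under a matched-uncertainty assumption, not the formulas of (30).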

For problem (3) the Hamiltonian is defined as

(31)

Following the same procedure as explained in problem (2), the adjoint vector satisfies

(32)

Now,

(33)

Since is a maximizer, we choose such that

(34)

Therefore

(35)

The following three cases can be deduced from (25)

(36)

then any admissible is an optimal solution.

Since is a minimizer, from (26) we rule out case (e); we shall also assume a non-trivial solution. Hence we let be continuous and be defined as

(37)

where and are square matrices and is a solution of (10). We observe that

(38)

From (22)

(39)

Therefore

(40)

becomes

(41)

On substituting (37) into (41) we have the following equation

(42)

For (42) to hold

(43)

Let and be strategies for the control and the disturbance respectively; then the saddle-point solution is summarized as follows:

(44)

7. Value of the Game

We now employ the results in (20) and (34) to compute the value of the objective functional defined in problems (2) and (3).

We recall that

(45)
(46)
(47)

Multiply (45) by and equation (46) by to get

(48)
(49)

Adding equations (48) and (49) together, we get

(50)

Integrating (50) we get the following:

We know from (47) that , therefore,

Hence,

and from

therefore

(51)
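The manipulations in (45)-(51) follow a standard device: multiply the state and adjoint equations by the adjoint and the state respectively, add, and recognize the result as a total time derivative. Schematically, in the illustrative notation used above,

\frac{d}{dt}\bigl(\lambda'(t)z(t)\bigr) = \dot{\lambda}'(t)z(t) + \lambda'(t)\dot{z}(t),

so that integrating from 0 to T gives

\lambda'(T)z(T) - \lambda'(0)z(0) = \int_0^T \bigl(\dot{\lambda}'z + \lambda'\dot{z}\bigr)\,dt,

which expresses the terminal (miss) part of the pay-off through quantities evaluated along the optimal trajectory.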

Similarly, we consider

(52)
(53)
(54)

From (52) and (53) we have

(55)
(56)

Combining (55) and (56) we have

(57)

Therefore

(58)

Now, given that , then

(59)

From (37) substitute for to get

(60)

Therefore

(61)

Combining (51) and (61), the value of the game is given as

(62)

8. Sufficient Condition for a Saddle Point

We shall employ the sufficiency theorem given by Gutman (1975) [4] to show that the solution obtained for each of the cases is indeed a saddle point.

Let be defined as

(63)

We assume that is continuous on and it is a function.

Define

(64)

By virtue of (23) and (24) we have

(65)
(66)

Using (29), (66) reduces to

(67)

In (66) we substitute for using (23) and (24) to get

(68)

Now

(69)

and

(70)

On the basis of (69) and (70), (30) is indeed a saddle point.
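For completeness, the saddle-point property verified by (69) and (70) is the usual pair of inequalities: writing J for the pay-off, u for the minimizer's strategy and w for the maximizer's strategy,

J(u^*, w) \le J(u^*, w^*) \le J(u, w^*) \qquad \text{for all admissible } u, w,

so that neither player can improve the value of the game by deviating unilaterally from (u^*, w^*). The notation here is generic; it is stated only to make explicit what "saddle point" means in (30), (44) and (62).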

9. Conclusion

The idea of saddle-point (min-max) controllers arises in engineering problems where extreme conditions are to be overcome (Gutman, 1975). A natural example is the "boosted period" of missiles, when high thrust acts on the body so that every small deviation from the designed specifications causes unpredictable (input) disturbances in three nodes.

In this work we have considered situations where the disturbances affect the motions of the pursuer and the evader respectively. In our subsequent paper we hope to apply the results in this work to problems arising from pricing of general insurance policies, particularly in a competitive and non-cooperative market.

References

[1] Abiola, B. (2012). "On Generalized Saddle Point Solution for a Class of Differential Games." International Journal of Science and Advanced Technology, 2(8): 27-31.

[2] Abiola, B. (2009). "Control of Dynamical Systems in the Presence of Bounded Uncertainties." Unpublished PhD Thesis, Department of Mathematics, University of Agriculture, Abeokuta.

[3] Arika, I. (1976). "Linear Quadratic Differential Games in Hilbert Space." SIAM Journal on Control and Optimization, 1(1).

[4] Gutman, S. (1975). "Differential Games and Asymptotic Behaviour of Linear Dynamical Systems in the Presence of Bounded Uncertainty." PhD Dissertation, University of California, Berkeley.

[5] Leitmann, G. (2004). "A Direct Optimization Method and its Application to a Class of Differential Games." Journal of Dynamics of Continuous and Intensive Systems, 11, 191-204.
 