Results to aid applications of ellipsoidal state bounds

Mathematical and Computer Modelling of Dynamical Systems, Vol. 11, No. 2, June 2005, 209-224

J. P. NORTON*

Department of Electronic, Electrical & Computer Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
Department of Mathematics / Integrated Catchment Assessment & Management Centre, SRES, Australian National University, Canberra, ACT 0200, Australia

Results to assist in the application of the ellipsoidal bounds provided by standard state-bounding algorithms are presented. They include derived bounds on scalar state-dependent quantities and the state values which determine them, tests of intersection and inclusion of ellipsoids, measures of how much an ellipsoid may be changed without altering its inclusion in another, and an ellipsoidal inner bound for the set reachable in the worst case from an ellipsoidal set by ellipsoidally bounded forcing, in a linear system. Approximations are suggested for the most computationally demanding result. Ways in which these results might be employed in aerospace interception problems are discussed to illustrate their utility.

Keywords: State estimation; State bounding; Ellipsoids; Reachability analysis; Aerospace control

1. Introduction

Recursive updating of bounds on the state of a dynamical system is a long-established alternative to probabilistic state estimation [1-8]. At each observation instant, the error in a vector of observed output variables is assumed to have known bounds, so bounds on the outputs at that instant can be inferred from the observations. They are translated, via the observation equation, into bounds on the current state. The discrete-time state equation also imposes time-updated bounds on the state, by combining bounds on the state at the previous sampling instant and specified bounds on the effects of forcing inputs since then.
The feasible set of state values consistent with all observations up to the present is found by combining the time-updated bounds and the new observation-derived bounds. As new observations add state bounds, the feasible set becomes inconveniently complicated. A simpler outer-bounding approximation is therefore usually updated, the most popular choice being an ellipsoid. The ellipsoidal bound

$$x_k \in E(\hat{x}_k, P_k) \triangleq \{x : (x - \hat{x}_k)^T P_k^{-1}(x - \hat{x}_k) \le 1\}, \quad \hat{x}_k \in \mathbb{R}^n,\ P_k \in \mathbb{R}^{n \times n},\ P_k = P_k^T > 0 \qquad (1)$$

on the state at observation instant k has centre x̂_k, axes along the eigenvectors of P_k and half-axis lengths equal to the square roots of the eigenvalues. The updating algorithm for x̂_k and P_k aims to make the ellipsoid as tight as possible, usually by minimizing tr P_k, |P_k| or a combination [3]. The recursive ellipsoid-updating algorithms are algebraically quite similar to the Kalman filter, and there is an obvious analogy between x̂_k and P_k in equation (1) and the mean and covariance updated by the Kalman filter. The analogy is closer if the mean and covariance are those of a Gaussian posterior probability density function (pdf) of state, as then the confidence ellipsoid defined by x̂_k and P_k is a direct analogue of the state-bounding ellipsoid. Bounding can (but need not) be regarded as a component part of probabilistic state estimation if the bounds are regarded as defining the finite support of a state pdf. Batch algorithms for ellipsoidal state bounding also exist [7]. They usually produce much tighter bounds than do the recursive algorithms but are, of course, less suited to real-time use. The results in this paper are applicable to ellipsoids generated by either batch or recursive algorithms.

*Email: j.p.norton@bham.ac.uk
Mathematical and Computer Modelling of Dynamical Systems, ISSN 1387-3954 print / ISSN 1744-5051 online, (c) 2005 Taylor & Francis Group Ltd, DOI: 10.1080/13873950500069292
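The geometry stated after equation (1) — axis directions from the eigenvectors of P_k, half-axis lengths from the square roots of its eigenvalues — can be sketched as follows. This is a minimal illustration with assumed numerical values, not code from the paper:

```python
import numpy as np

def ellipsoid_axes(P):
    """Half-axis lengths and axis directions of E(xhat, P) in equation (1)."""
    lam, V = np.linalg.eigh(P)      # P symmetric positive definite
    return np.sqrt(lam), V          # columns of V are the axis directions

# Assumed example values.
P = np.array([[4.0, 2.0], [2.0, 3.0]])
xhat = np.array([1.0, 3.0])
half, dirs = ellipsoid_axes(P)

# Each axis endpoint xhat + half[i]*dirs[:, i] lies on the boundary,
# i.e. (x - xhat)^T P^{-1} (x - xhat) = 1 there.
for i in range(len(half)):
    v = half[i] * dirs[:, i]
    assert abs(v @ np.linalg.solve(P, v) - 1.0) < 1e-9
print(half)   # half-axis lengths, in ascending order
```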
Ellipsoidal state bounds can be used for two purposes for which they are better suited than probabilistic state estimation yet which have received less attention than they deserve. The first is to produce bounds on practically significant, scalar functions of state. Bounds on certain variables may be needed for control or decision support when the state coordinate system does not give them directly, being chosen instead to give a simple model. For example, Cartesian coordinates are often convenient for modelling motion, while polar variables may be preferred for guidance or collision avoidance. Section 2 shows that bounds on scalar functions of state can be found from ellipsoidal state bounds in several important cases. By contrast, the mean and covariance of a nonlinear function of state cannot generally be derived from the estimated state mean and covariance, and are not readily found even if the state pdf is Gaussian. The second potential use of ellipsoidal state bounds is to check whether the future state, subject to uncertainty, can meet some desired condition. Examples are checking whether the future state can definitely be brought into a specified desirable region, true if its feasible set is contained in that region; checking if two objects can be made to meet or to miss in future, as in interception or collision avoidance; and unequivocal fault detection when the feasible set is outside the ‘no fault’ region. It is fairly straightforward to detect whether two ellipsoids intersect or one contains the other. If they do not intersect, the smallest change (translation, expansion or contraction) required to make one ellipsoid intersect or contain the other can also be found readily. This may show, for example, what action is necessary for the whole range of uncertain predicted position of a target to be reachable, ensuring interception. 
Similarly, when one set contains the other, it is easy to find the largest expansion or contraction factor for which that remains true. The next section gives results for bounds on state-derived scalar quantities. Section 3 considers the intersection and inclusion of ellipsoids, and ellipsoidal reachable sets. Section 4 illustrates their utility by outlining applications in aerospace interception.

2. Bounds on scalar functions of state

2.1 Linear functions

The extremes of a scalar, linear function

$$f = \mathbf{f}^T x, \quad \mathbf{f} \in \mathbb{R}^n \qquad (2)$$

over the ellipsoid E(x̂, P) are at the points of tangency of the supporting hyperplanes of E(x̂, P) normal to $\mathbf{f}$. The points are readily found by the Lagrange multiplier method:

$$x = \hat{x} \pm \frac{P\mathbf{f}}{\sqrt{\mathbf{f}^T P \mathbf{f}}} \qquad (3)$$

so the extremes of f are $\mathbf{f}^T \hat{x} \pm \sqrt{\mathbf{f}^T P \mathbf{f}}$. As each extreme point is the intersection of a hyperplane and the strictly convex ellipsoid, it is unique. By use of equation (3), the usual ellipsoid-updating state-bounding algorithms [1-6] can be modified to make the state bounds as tight as possible in any nominated direction (e.g. that of a particular crucial state variable). However, this is at the cost of bound looseness in other directions, so a better idea is to find the feasible set as the intersection of a number of ellipsoids, each tight in a different direction. A refinement would be first to find the minimum-volume bounding ellipsoid by the standard algorithm, then to update the ellipsoids tightest in the axis directions of that ellipsoid.

2.2 Ratio of two state variables

This case covers derivation of bounds on azimuth or elevation angle from joint bounds on Cartesian position components and their derivatives.
For instance, for a target bounded by E(x̂, P), with (x₁, x₂) the Cartesian position coordinates of the target in the horizontal plane with respect to an observer at the origin, the azimuth extremes are defined by the extremes of x₂/x₁ over E(x̂, P), as sketched in figure 1. Another example is discrimination between target types, by comparing bounds on the image intensities of the target in two spectral bands, measured by a multispectral infrared sensor, with bounds on their ratio for different types of target.

As x₂/x₁ is constant on a hyperplane and is monotonic in the slope of the hyperplane, its extremes are the points of tangency of such hyperplanes with E(x̂, P). The extremes of x₂/x₁ subject to x being on the boundary

$$(x - \hat{x})^T P^{-1}(x - \hat{x}) = 1 \qquad (4)$$

of E(x̂, P) are stationary points of the augmented cost function

$$L = \frac{x_2}{x_1} + \lambda\left[1 - (x - \hat{x})^T P^{-1}(x - \hat{x})\right] \qquad (5)$$

with respect to x and λ. Differentiating L with respect to x, substituting equation (4) and solving for x,

$$x - \hat{x} = \frac{1}{d}\, P \begin{bmatrix} -x_2 & x_1 & 0 & \cdots & 0 \end{bmatrix}^T \qquad (6)$$

where

$$d = \hat{x}_1 x_2 - \hat{x}_2 x_1 \qquad (7)$$

Solution of equation (6) for x and substitution into equation (7) gives

$$d = \pm\sqrt{p_{22}\hat{x}_1^2 - 2 p_{12}\hat{x}_1\hat{x}_2 + p_{11}\hat{x}_2^2 + p_{12}^2 - p_{11}p_{22}} \qquad (8)$$

where

$$\begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix}$$

is the top left 2 × 2 partition of P. With d known, the first two rows of equation (6) (linear) are solved for x₁ and x₂, yielding extremes

$$\frac{x_2}{x_1} = \frac{p_{12} - d - \hat{x}_1\hat{x}_2}{p_{11} - \hat{x}_1^2} \qquad (9)$$

Each extreme point is the intersection of a hyperplane, which contains the normal to the x₁:x₂ plane through the origin and is defined by a constant value of arctan(x₂/x₁), and the strictly convex ellipsoid. The point is thus unique.

Figure 1. Extremes of x₂/x₁ over E(x̂, P).
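The closed forms (8) and (9) as reconstructed above can be checked numerically against a dense sweep of the ellipse boundary. The values below are assumed for the sketch, not taken from the paper:

```python
import numpy as np

# Assumed example: E(xhat, P) in the horizontal plane, observer at origin.
xhat = np.array([3.0, 1.0])
P = np.array([[2.0, 0.0], [0.0, 1.0]])
p11, p12, p22 = P[0, 0], P[0, 1], P[1, 1]

# Equation (8): the two tangency values of d.
d = np.sqrt(p22*xhat[0]**2 - 2*p12*xhat[0]*xhat[1] + p11*xhat[1]**2
            + p12**2 - p11*p22)
ratios = [(p12 - dd - xhat[0]*xhat[1]) / (p11 - xhat[0]**2)   # equation (9)
          for dd in (d, -d)]

# Dense sweep of the boundary x = xhat + L u(theta), with L L^T = P.
L = np.linalg.cholesky(P)
th = np.linspace(0.0, 2*np.pi, 200001)
b = xhat[:, None] + L @ np.vstack([np.cos(th), np.sin(th)])
swept = b[1] / b[0]

print(max(ratios), min(ratios))          # azimuth-ratio extremes
assert abs(max(ratios) - swept.max()) < 1e-6
assert abs(min(ratios) - swept.min()) < 1e-6
```

For these values the extreme ratios are 6/7 and 0, agreeing with the tangent lines through the origin to the ellipse.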
The values of the other state variables at these extremes are easily shown to be

$$\begin{bmatrix} x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} \hat{x}_3 \\ \vdots \\ \hat{x}_n \end{bmatrix} - Q_{22}^{-1} Q_{21} \begin{bmatrix} x_1 - \hat{x}_1 \\ x_2 - \hat{x}_2 \end{bmatrix}, \quad \text{where } \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{21} & Q_{22} \end{bmatrix} \triangleq P^{-1} \text{ and } Q_{11} \text{ is } 2 \times 2 \qquad (10)$$

2.3 Product of two state variables

This covers, for instance, derivation of bounds on the vertical velocity component of a target from bounds on its velocity and the cosine of the angle of the velocity to the local vertical, convenient state variables in ballistic flight. To find the extremes of x₁x₂ over E(x̂, P), an analysis along similar lines to that in Section 2.2 gives

$$x - \hat{x} = -\frac{1}{d}\, P \begin{bmatrix} x_2 & x_1 & 0 & \cdots & 0 \end{bmatrix}^T \qquad (11)$$

where now

$$d = \hat{x}_1 x_2 + \hat{x}_2 x_1 - 2 x_1 x_2 \qquad (12)$$

The small increase in complexity of d leads to a quartic for d in terms of the first two elements of x̂ and the top left 2 × 2 partition of P:

$$d'^4 - (\alpha + 2\beta + \gamma)\,d'^2 + 2\left(p_{12}\alpha + \frac{\beta\gamma}{p_{12}}\right)d' - \beta(\alpha - \beta + \gamma) = 0 \qquad (13)$$

where

$$d' \triangleq d + p_{12}, \quad \alpha \triangleq p_{22}\hat{x}_1^2 + p_{11}\hat{x}_2^2, \quad \beta \triangleq p_{11}p_{22}, \quad \gamma \triangleq 2 p_{12}\hat{x}_1\hat{x}_2 \qquad (14)$$

An exact solution of equation (13) is available by several classical methods [9]. The extremes of x₁x₂ follow by solving the linear equation (11) for x₁ and x₂ in terms of d, then forming x₁x₂ directly or, to get a slightly simpler expression, using equation (12) again to give

$$x_1 x_2 = \frac{d}{2}\left[\frac{\alpha - \gamma d'/p_{12}}{\beta - d'^2} - 1\right] \qquad (15)$$

The nature of the stationary points of x₁x₂ on the boundary of E(x̂, P) is seen by examining the values of the Lagrange multiplier λ = −d/2 at the roots of equation (13). They comprise either (i) a complex conjugate pair, a negative, real value yielding the minimum and a positive, real value giving the maximum, or (ii) four real values: a positive value at the global maximum, a negative value at the global minimum, another negative value at a local minimum, and a negative value at a point which is a local maximum with respect to x₁ and x₂ but a minimum with respect to the other state variables.
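The quartic (13)-(14) and the extreme-value expression (15) can be exercised numerically against a brute-force sweep of the boundary. The values below are assumed for this sketch (chosen with p₁₂ ≠ 0, as equation (13) requires), not the paper's:

```python
import numpy as np

xhat = np.array([2.0, 1.0])
P = np.array([[3.0, 1.0], [1.0, 2.0]])
p11, p12, p22 = P[0, 0], P[0, 1], P[1, 1]

alpha = p22*xhat[0]**2 + p11*xhat[1]**2
beta = p11*p22
gamma = 2*p12*xhat[0]*xhat[1]

# Quartic (13) in d' = d + p12.
quartic = [1.0, 0.0, -(alpha + 2*beta + gamma),
           2*(p12*alpha + beta*gamma/p12), -beta*(alpha - beta + gamma)]
dp = np.roots(quartic)
dp = dp[np.abs(dp.imag) < 1e-7].real          # keep real roots only
d = dp - p12
vals = (d/2.0)*((alpha - gamma*dp/p12)/(beta - dp**2) - 1.0)   # equation (15)

# Brute-force sweep of the boundary x = xhat + L u(theta), L L^T = P.
L = np.linalg.cholesky(P)
th = np.linspace(0.0, 2*np.pi, 400001)
b = xhat[:, None] + L @ np.vstack([np.cos(th), np.sin(th)])
prod = b[0]*b[1]

print(np.sort(vals), prod.min(), prod.max())
assert abs(vals.max() - prod.max()) < 1e-6
assert abs(vals.min() - prod.min()) < 1e-6
```

For these values the quartic has two real roots (case (i) above), giving the minimum and maximum of x₁x₂ over the boundary.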
The global maximum is at the unique point of external tangency of the strictly convex E(x̂, P) and a convex set of the form x₁x₂ ≥ c. Any minimum is at a point where E(x̂, P) is a tangent from the interior to a set of the form x₁x₂ ≥ c, so in (ii), the pair of minima may be distinct yet coincide in value, as in figure 2 and the example below.

Example. The extreme points of x₁x₂ on the boundary of E(x̂, P) are required, where E(x̂, P) has centre x̂ = [3  1.2]ᵀ and describing matrix

$$P = \begin{bmatrix} 6.27 & -2.09 \\ -2.09 & 1.0036 \end{bmatrix}$$

In equation (13), α = 18.0655, β = 6.2955 and γ = −15.0545, yielding roots d' = 2.509091, 2.509091, −4.244288 and −0.773894. The corresponding points x = [5  0.2]ᵀ, [0.5  2]ᵀ, [3.723  1.489]ᵀ and [2.277  0.911]ᵀ are respectively two global minima, the global maximum and a local maximum between the minima. □

Figure 2. Points corresponding to roots of equation (13).

2.4 Positive-semidefinite quadratic form

The extremes of xᵀQx on the boundary of the ellipsoid E(x̂, P), for Q symmetric and positive-semidefinite or definite, are useful in a number of situations. They include as a special case bounds on the range of a target with Cartesian position components and their derivatives as state variables. More generally, tests for the intersection and inclusion of ellipsoids can be based on these extremes. With Q positive definite, if 0 ∉ E(x̂, P), one can check whether the ellipsoids E(0, Q⁻¹) and E(x̂, P) intersect by examining the minimum of xᵀQx subject to (x − x̂)ᵀP⁻¹(x − x̂) = 1. If 0 ∈ E(x̂, P), the minimum shows whether E(0, Q⁻¹) ⊆ E(x̂, P). Such tests appear in Section 3. The case where E(x̂, P) contains a point in the null space of Q (such as the position origin in the range-bounding case) is uninteresting, as the minimum of xᵀQx subject to (x − x̂)ᵀP⁻¹(x − x̂) ≤ 1 is then zero.
Excluding this case, the extreme points of xᵀQx can be found by setting the derivative ∂L/∂x of the augmented cost function

$$L \triangleq x^T Q x + \lambda\left[1 - (x - \hat{x})^T P^{-1}(x - \hat{x})\right] \qquad (16)$$

to zero, yielding

$$x = (I - \lambda^{-1} P Q)^{-1}\hat{x} \qquad (17)$$

The inverse in equation (17) exists unless λ is an eigenvalue of PQ. In that case, the origin, the centre x̂ of E(x̂, P) and the point where E(x̂, P) and E(0, Q⁻¹) touch are collinear on the eigenvector associated with λ. Such cases are detectable by noticing that x̂ is an eigenvector of PQ, and the location and value of the extreme of xᵀQx are found from the corresponding eigenvalue.

In the generic case, premultiplying ∂L/∂x = 0 by (x − x̂)ᵀ and setting (x − x̂)ᵀP⁻¹(x − x̂) = 1 gives

$$\lambda = x^T Q (x - \hat{x}) \qquad (18)$$

From equation (17), x − x̂ = −PQ(PQ − λI)⁻¹x̂, so (x − x̂)ᵀP⁻¹(x − x̂) = 1 becomes

$$\hat{x}^T (QP - \lambda I)^{-1} Q P Q\, (PQ - \lambda I)^{-1}\hat{x} = 1 \qquad (19)$$

This polynomial in scalar λ is solved; then, x follows from equation (17), yielding a bound

$$|\lambda|\,\|(PQ - \lambda I)^{-1}\hat{x}\|_Q \qquad (20)$$

on ‖x‖_Q ≜ √(xᵀQx). In general, equation (19) is of degree 2n in λ and requires a root-finder if n > 2. Computational cost may inhibit the use of equations (19) and (20) in conjunction with recursive state estimators at high update rates, so economical approximations may be useful. A very cheaply computed one is found by noticing that the terms of degree 2n and 2n − 1 in λ in the polynomial equation obtained by rationalizing equation (19) are those of |PQ − λI|², namely λ²ⁿ and −2 tr(PQ) λ²ⁿ⁻¹. Hence, for a dominant root |λ| ≫ 1, an approximate solution is λ ≅ 2 tr(PQ).

The points of E(x̂, P) closest to and farthest from a given point x* not in it are found by putting I for Q and x̂ − x* for x̂.
They may be approximated by the extreme points

$$x = \hat{x} \pm \frac{P(\hat{x} - x^*)}{\sqrt{(\hat{x} - x^*)^T P (\hat{x} - x^*)}} \qquad (21)$$

in direction x̂ − x*, using equation (3). This approximation is clearly good when the distance from x* to x̂ is much longer than the longest half-axis √λ_max(P) of E(x̂, P).

Example. The points of E(x̂, P) farthest from and nearest to the origin are required, where E(x̂, P) has centre x̂ = [1  3]ᵀ and describing matrix

$$P = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix}$$

Straightforward calculation gives

$$(P - \lambda I)^{-1} = (\lambda^2 - 7\lambda + 8)^{-1}\begin{bmatrix} 3 - \lambda & -2 \\ -2 & 4 - \lambda \end{bmatrix}, \quad x = \lambda(\lambda I - P)^{-1}\hat{x} = (\lambda^2 - 7\lambda + 8)^{-1}\begin{bmatrix} \lambda^2 + 3\lambda \\ 3\lambda^2 - 10\lambda \end{bmatrix}, \quad P\hat{x} = \begin{bmatrix} 10 \\ 11 \end{bmatrix} \qquad (22)$$

Substituted into equation (19) with Q = I, this gives, after tidying up,

$$\lambda^4 - 14\lambda^3 + 22\lambda^2 + 48\lambda - 152 = 0$$

The only real roots are λ = 11.9031 and −2.1247. The positive value indicates that xᵀx increases if the bound on (x − x̂)ᵀP⁻¹(x − x̂) is increased from 1, so it applies at the farthest point of E(x̂, P) from the origin, and the negative root gives the nearest. Through the second part of equation (22), the extremes are, respectively, x = [2.6731  4.6114]ᵀ and [−0.0679  1.2703]ᵀ. The simplest approximation λ ≅ 2 tr(P) = 14 is fairly close to the larger root and gives as the farthest point x = [2.2453  4.2264]ᵀ. The other approximation

$$\tilde{x} = \hat{x} + \frac{P\hat{x}}{\sqrt{\hat{x}^T P\hat{x}}} = \begin{bmatrix} 1 \\ 3 \end{bmatrix} + \frac{1}{\sqrt{43}}\begin{bmatrix} 10 \\ 11 \end{bmatrix}$$

gives x̃ = [2.5250  4.6775]ᵀ, quite good given that the distance from the origin to x̂ is only 3.162, comparable with the length 2.358 of the longer half-axis of E(x̂, P). The approximation x̃ = [−0.5250  1.3225]ᵀ to the nearest point is less good absolutely as well as relatively. Figure 3 sketches the situation, showing the extreme points and second approximation.

Figure 3. Points of E(x̂, P) nearest to and farthest from origin, and approximations.
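The example above can be reproduced numerically; the sketch below forms the rationalized quartic, takes its real roots, and recovers the nearest and farthest points through equation (17) with Q = I:

```python
import numpy as np

P = np.array([[4.0, 2.0], [2.0, 3.0]])
xhat = np.array([1.0, 3.0])

# lam^4 - 14 lam^3 + 22 lam^2 + 48 lam - 152 = 0: equation (19) with Q = I,
# rationalized for this P and xhat.
roots = np.roots([1.0, -14.0, 22.0, 48.0, -152.0])
real = np.sort(roots[np.abs(roots.imag) < 1e-7].real)   # the two real roots

# Equation (17) with Q = I: x = lam (lam I - P)^{-1} xhat.
points = [lam * np.linalg.solve(lam*np.eye(2) - P, xhat) for lam in real]
for x in points:              # each extreme lies on the boundary of E(xhat, P)
    v = x - xhat
    assert abs(v @ np.linalg.solve(P, v) - 1.0) < 1e-6

dists = sorted(np.linalg.norm(x) for x in points)
print(real)    # approx [-2.1247, 11.9031]
print(dists)   # nearest and farthest distances from the origin
```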
3. Operations on state-bounding ellipsoids

3.1 Tests for intersection and inclusion of ellipsoids

Results (17), (19) and (20) provide tests for inclusion and intersection of general ellipsoids E(x̂₁, P₁) and E(x̂₂, P₂). They can be used directly, putting P₁ for P, P₂⁻¹ for Q and x̂₁ − x̂₂ for x̂. Alternatively, they can be simplified by an affine transformation x ↦ A₂⁻¹(x − x̂₂), where A₂A₂ᵀ ≜ P₂, which transforms E(x̂₂, P₂) to the unit ball

$$E(0, I) \triangleq \{x \mid x^T x \le 1\} \qquad (23)$$

centred at the origin, so Q = I, and transforms E(x̂₁, P₁) to

$$E(\bar{x}_1, \bar{P}_1) \triangleq \{x \mid (x - \bar{x}_1)^T \bar{P}_1^{-1}(x - \bar{x}_1) \le 1\} \qquad (24)$$

where P̄₁ ≜ A₂⁻¹P₁A₂⁻ᵀ > 0 and x̄₁ ≜ A₂⁻¹(x̂₁ − x̂₂) replace P and x̂. This option is adopted here to minimize algebra. The following tests are then obtained.

Test 3.1.1. Does E(x̂₁, P₁) contain E(x̂₂, P₂)? If E(x̂₂, P₂) ⊆ E(x̂₁, P₁), then E(0, I) ⊆ E(x̄₁, P̄₁). This is so if and only if (i) the origin is in E(x̄₁, P̄₁) and (ii) the point of the boundary of E(x̄₁, P̄₁) nearest the origin is at least distance 1 from it. Hence E(x̂₁, P₁) contains E(x̂₂, P₂) iff

$$\text{i.}\quad \bar{x}_1^T \bar{P}_1^{-1}\bar{x}_1 \le 1 \qquad (25)$$

and (putting Q = I, P̄₁ for P, x̄₁ for x̂ and μ for λ in equations (17) and (19))

$$\text{ii.}\quad \hat{z}^T\hat{z} \ge 1, \ \text{where } \hat{z} \triangleq (I - \mu^{-1}\bar{P}_1)^{-1}\bar{x}_1 \text{ and } \mu \text{ is the root of } \bar{x}_1^T(\bar{P}_1 - \mu I)^{-1}\bar{P}_1(\bar{P}_1 - \mu I)^{-1}\bar{x}_1 = 1 \text{ minimizing } \|(I - \mu^{-1}\bar{P}_1)^{-1}\bar{x}_1\| \qquad (26)$$

Moreover, if tests (25) and (26) are passed, the largest factor by which E(0, I) can be expanded and stay contained in E(x̄₁, P̄₁) is √(ẑᵀẑ). As ratios of distances in the same direction are unchanged by affine transformation, this is the largest ratio by which E(x̂₂, P₂) can be expanded and still be contained in E(x̂₁, P₁).

The following complementary test to Test 3.1.1 reuses the affine transformation.

Test 3.1.2. Does E(x̂₂, P₂) contain E(x̂₁, P₁)?
If E(x̂₁, P₁) ⊆ E(x̂₂, P₂), then E(x̄₁, P̄₁) ⊆ E(0, I). This is so if and only if the distance of the farthest point of the boundary of E(x̄₁, P̄₁) from the origin is no greater than 1. From equations (17) and (20), this is so iff

$$\check{z}^T\check{z} \le 1, \ \text{where } \check{z} \triangleq (I - \mu^{-1}\bar{P}_1)^{-1}\bar{x}_1 \text{ and } \mu \text{ is the real, positive root of } \bar{x}_1^T(\bar{P}_1 - \mu I)^{-1}\bar{P}_1(\bar{P}_1 - \mu I)^{-1}\bar{x}_1 = 1 \text{ maximizing } \|(I - \mu^{-1}\bar{P}_1)^{-1}\bar{x}_1\| \qquad (27)$$

If Test 3.1.2 is passed, the largest ratio by which E(x̄₁, P̄₁) can be expanded and still be contained in E(0, I) is 1/√(žᵀž), which is the largest ratio by which E(x̂₁, P₁) can be expanded and remain in E(x̂₂, P₂).

Test 3.1.3. Does E(x̂₁, P₁) intersect E(x̂₂, P₂)? If E(x̂₁, P₁) ∩ E(x̂₂, P₂) ≠ ∅, then E(x̄₁, P̄₁) ∩ E(0, I) ≠ ∅. This is so if either (i) the distance of the nearest point of the boundary of E(x̄₁, P̄₁) from the origin is no greater than 1, or (ii) E(0, I) ⊆ E(x̄₁, P̄₁). The test is thus whether

$$\text{i.}\quad \text{equation (25) is true, or equation (26) is true with} \le \text{in place of} \ge \qquad (28)$$

or

$$\text{ii.}\quad \text{equations (25) and (26) are both true} \qquad (29)$$

Test 3.1.4. Are E(x̂₁, P₁) and E(x̂₂, P₂) disjoint? If E(x̂₁, P₁) ∩ E(x̂₂, P₂) = ∅, then E(x̄₁, P̄₁) ∩ E(0, I) = ∅. This is so if Test 3.1.3 is failed, but an equivalent test is whether E(x̄₁, P̄₁) does not contain the origin (or any other test point of E(0, I); the origin is the simplest to test), and the distance of the nearest point of the boundary of E(x̄₁, P̄₁) to the origin is greater than 1, i.e.

$$\bar{x}_1^T \bar{P}_1^{-1}\bar{x}_1 > 1 \ \text{and equation (26) is true with} > \text{in place of} \ge \qquad (30)$$

The smallest factor by which E(0, I) must be expanded to intersect E(x̄₁, P̄₁), and hence the smallest ratio by which E(x̂₂, P₂) must be expanded to intersect E(x̂₁, P₁) (but not vice versa), is √(ẑᵀẑ).
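The exact tests above require the root μ of a polynomial. As a quick 2-D plausibility check, a sampled surrogate of Test 3.1.1 (a sketch, not the paper's algorithm; assumed example shapes) can apply the same affine transformation idea and test the boundary image of the second ellipsoid directly:

```python
import numpy as np

def contains(c1, P1, c2, P2, m=3600, tol=1e-9):
    """Sampled 2-D check: does E(c1, P1) contain E(c2, P2)?"""
    A2 = np.linalg.cholesky(P2)                 # A2 A2^T = P2
    th = np.linspace(0.0, 2*np.pi, m, endpoint=False)
    u = np.vstack([np.cos(th), np.sin(th)])     # unit-ball boundary
    x = c2[:, None] + A2 @ u                    # boundary of E(c2, P2)
    v = x - c1[:, None]
    r = np.einsum('ij,ij->j', v, np.linalg.solve(P1, v))
    return bool(np.all(r <= 1.0 + tol))

c1 = np.zeros(2)
P1 = 4.0 * np.eye(2)
print(contains(c1, P1, np.array([1.0, 0.0]), np.eye(2)))  # True (touching)
print(contains(c1, P1, np.array([2.0, 0.0]), np.eye(2)))  # False
```

This sampled form is only an illustration; the algebraic tests (25)-(30) are exact and dimension-independent.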
3.2 Largest expanded ellipsoid of given shape in a given ellipsoid

The problem is to find x̂₂ and the greatest scalar s such that E(x̂₂, s²P₂) ⊆ E(x̂₁, P₁), given x̂₁, P₁ and P₂. That is, find the largest scaled and shifted version of an ellipsoid with describing matrix P₂ which will fit into E(x̂₁, P₁).

By symmetry, x̂₂ = x̂₁, and the ellipsoids E(x̂₁, P₁) and E(x̂₁, s²P₂) touch when the ratio √(ξᵀP₁ξ)/√(s²ξᵀP₂ξ) of the distances in direction ξ from their joint centre x̂₁ to their boundaries is unity. Now

$$\min_{\text{real } \xi} \frac{\xi^T P_1 \xi}{\xi^T P_2 \xi} = \lambda_{\min}(P_2^{-1}P_1) \qquad (31)$$

so

$$s = \lambda_{\min}^{1/2}(P_2^{-1}P_1) \qquad (32)$$

Figure 4 illustrates a two-dimensional case.

Figure 4. Largest shifted and expanded version of ellipsoid E(x̂₂, P₂) inside E(x̂₁, P₁).

Example. Given 3 × 3 symmetric, positive-definite P₁ and P₂ for which

$$P_2^{-1}P_1 = \begin{bmatrix} 5.75 & 3.25 & 5.75 \\ 3 & 2 & 3 \\ 5.125 & 2.8875 & 5.625 \end{bmatrix}$$

the largest scaled version of E(0, P₂) fitting into E(0, P₁) is required. The eigenvalues of P₂⁻¹P₁ are 12.8146, 0.3814 and 0.1790, so s = λ_min^{1/2}(P₂⁻¹P₁) = 0.4231, and E(0, P₂) has to be shrunk in this proportion to fit into E(0, P₁).

3.3 Worst-case-reachable set

The worst-case-reachable set comprises all future state values reachable, by the exercise of bounded forcing, from every point in the current feasible set. If the effect of forcing is additive (as in any system which is linear in additive forcing), the set is

$$R \triangleq \{x_r \mid \forall x_s \in S,\ x_r - x_s \in F\} \qquad (33)$$

That is, every point in R is reachable from any starting point in S by adding an increment (due to forcing) from F. It is convenient to approximate S and the bounded forcing by ellipsoids, but the forcing may be of lower dimension than the state. If so, F is E(x̂_f, F) with F positive-semidefinite.
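The scaling rule of equation (32) is easy to exercise numerically (2-D values assumed here; for concentric ellipsoids, E(0, A) ⊆ E(0, B) is equivalent to B − A being positive semidefinite):

```python
import numpy as np

# Assumed example shapes.
P1 = np.array([[4.0, 2.0], [2.0, 3.0]])
P2 = np.array([[2.0, 0.0], [0.0, 1.0]])

# Equation (32): s = sqrt(lambda_min(P2^{-1} P1)).
lam = np.linalg.eigvals(np.linalg.solve(P2, P1)).real
s = np.sqrt(lam.min())
print(s)

# Check: P1 - s^2 P2 must have no negative eigenvalue (it touches,
# so one eigenvalue is zero).
assert np.linalg.eigvalsh(P1 - s**2 * P2).min() > -1e-9
```

For these shapes s = 1: the unit-scaled E(0, P₂) just fits, touching E(0, P₁) along the direction of the minimizing ξ in equation (31).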
Whether or not F has full rank, R is generally not an ellipsoid but must be approximated by the largest (maximum-hypervolume) ellipsoid

$$E(\hat{x}_r, R) \subseteq \{x_r \mid \forall x_s \in E(\hat{x}_s, S),\ x_r - x_s \in E(\hat{x}_f, F)\} \qquad (34)$$

Denoting the vector sum of sets A and B by A ⊕ B, (34) implies and is implied by

$$E(\hat{x}_r, R) \oplus E(\hat{x}_s, S) \subseteq E(\hat{x}_f, F) \qquad (35)$$

The largest ellipsoid E(x̂_r, R) in the 'vector difference' of E(x̂_f, F) and E(x̂_s, S) can be found as follows, by a derivation much like that for the smallest ellipsoid containing the vector sum of two ellipsoids ([5], appendix A). If x̂_s and x̂_f were zero, the centre of E(x̂_r, R) would be the origin, by symmetry. Non-zero x̂_s, x̂_f simply add x̂_f to every vector in E(x̂_f, F) and subtract x̂_s from every vector in E(x̂_s, S), adding x̂_s + x̂_f to every point of E(x̂_r, R), so for the centre of E(x̂_r, R) choose

$$\hat{x}_r = \hat{x}_s + \hat{x}_f \qquad (36)$$

To permit an analytical solution for E(x̂_r, R), we require R to be linear in S and F:

$$R = \beta F - \alpha S \qquad (37)$$

The width g_f = √(ξᵀFξ) of E(x̂_f, F) in any direction ξ must not be less than the sum of the widths g_s, g_r of E(x̂_s, S) and E(x̂_r, R) in that direction, so using equation (37),

$$g_f^2 = (g_r^2 + \alpha g_s^2)/\beta \ge (g_r + g_s)^2 \qquad (38)$$

so

$$\left(\frac{\alpha}{\beta} - 1\right)g_s^2 + \left(\frac{1}{\beta} - 1\right)g_r^2 - 2 g_s g_r \ge 0 \qquad (39)$$

or, defining ρ ≜ g_r/g_s ≥ 0 (permissible as S > 0),

$$\left(\frac{1}{\beta} - 1\right)\rho^2 - 2\rho + \left(\frac{\alpha}{\beta} - 1\right) \ge 0, \quad \text{i.e.} \quad (\rho + 1)^2 \le \frac{\rho^2 + \alpha}{\beta} \qquad (40)$$

With g_s > 0 and g_f ≥ 0, equation (39) requires at least one of α/β and 1/β to be greater than unity. It is easy to confirm that equation (40) is satisfied by

$$\frac{\alpha}{\beta} = 1 + \frac{1}{p}, \quad \frac{1}{\beta} = 1 + p, \quad p > 0 \qquad (41)$$

closely analogous to the choice in the standard vector-sum algorithm [1, 5]. Hence,

$$R = \frac{1}{1 + p} F - \frac{1}{p} S \qquad (42)$$

The final step is to find the value of p maximizing the hypervolume of E(x̂_r, R), proportional to |R|^{1/2}, so it suffices to maximize |R|. Its maximum is where

$$\frac{d|R|}{dp} = |R|\,\mathrm{tr}\!\left(R^{-1}\frac{dR}{dp}\right) = 0 \qquad (43)$$

As |R| is not zero, the trace in equation (43) must be zero, giving
$$\mathrm{tr}\!\left[\left(\frac{F}{1 + p} - \frac{S}{p}\right)^{-1}\left(\frac{S}{p^2} - \frac{F}{(1 + p)^2}\right)\right] = 0 \qquad (44)$$

so, after some manipulation,

$$\mathrm{tr}\!\left[-\frac{I}{1 + p} + \left(\frac{F}{1 + p} - \frac{S}{p}\right)^{-1}\frac{S}{p^2(1 + p)}\right] = 0 \qquad (45)$$

Noting that tr(A + B) = tr(A) + tr(B), this yields

$$-\frac{n}{1 + p} + \frac{1}{p^2(1 + p)}\,\mathrm{tr}\!\left[\left(\frac{F}{1 + p} - \frac{S}{p}\right)^{-1} S\right] = 0 \qquad (46)$$

where n is dim(x). As tr(A⁻¹) = Σᵢ λᵢ⁻¹(A) and λᵢ(aA + γI) = aλᵢ(A) + γ, (46) gives

$$\frac{np}{1 + p} = \sum_{i=1}^{n} \frac{1}{p\,\lambda_i(S^{-1}F) - 1 - p} \qquad (47)$$

A root-finder solves this polynomial equation for p; then, equation (42) gives R for each positive root, and the one giving the largest |R| is selected. □

4. Illustrative use of results

4.1 Interception subject to uncertainty in threat trajectory

Terminal-phase guidance laws for ballistic-missile interceptors do not take account of longer-range factors such as target selection or economical use of fuel for manoeuvring. Control schemes with the potential to include these factors in mid-course navigation of the interceptor, through periodically updated constrained optimization of the interceptor's planned trajectory, are approaching computational practicability. In such schemes, the performance of the interceptor may be made less sensitive to uncertainty in the predicted target trajectory by two strategies. One is to rely on the feedback provided by periodic correction, reoptimizing the interceptor trajectory in the light of up-to-date observation of the target and applying each successive optimized control sequence only up to the next observation. The second strategy is to account explicitly for the uncertainty during the optimization. This is desirable if a number of candidate targets must still be considered, weighted by their perceived threat and the uncertainty in their trajectories, or if the uncertainty is large enough to pose a large risk, if ignored, that the control action is inappropriate even in the short term. When uncertainty is included in the optimization, a worst-case optimality criterion suits interception.
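Returning to the construction of Section 3.3, it can be sketched end-to-end for assumed shapes. Here F and S are isotropic, so equation (47) has a single relevant root, found by bisection; in general each positive root should be tried and the one maximizing |R| kept:

```python
import numpy as np

# Assumed shapes: forcing set E(0, F) and start set E(0, S), n = 2.
F = 9.0 * np.eye(2)
S = np.eye(2)
n = 2
lam = np.sort(np.linalg.eigvals(np.linalg.solve(S, F)).real)

def g(p):   # equation (47): sum_i 1/(p lam_i - 1 - p) - n p / (1 + p)
    return np.sum(1.0 / (p * lam - 1.0 - p)) - n * p / (1.0 + p)

# Bisection on the interval where every p lam_i - 1 - p > 0.
lo, hi = 1.0 / (lam.min() - 1.0) + 1e-9, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
p = 0.5 * (lo + hi)
R = F / (1.0 + p) - S / p          # equation (42)
print(p, R[0, 0])                  # p = 0.5 and R = 4 I for these shapes

# Support-function check of (35): sqrt(u'Ru) + sqrt(u'Su) <= sqrt(u'Fu).
for th in np.linspace(0.0, np.pi, 181):
    u = np.array([np.cos(th), np.sin(th)])
    assert np.sqrt(u @ R @ u) + np.sqrt(u @ S @ u) <= np.sqrt(u @ F @ u) + 1e-9
```

For these isotropic shapes the result is exact: the radius-3 ball minus the radius-1 ball of forcing leaves the radius-2 ball reachable in the worst case.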
The predicted situation closer to interception is to be optimized in the most difficult case over the bounded uncertain future state of the threat. This has some similarity to employing a game-theory formulation of the problem [10], but the connection will not be pursued here. The uncertain threat-state prediction may be updated relatively easily, as new sensor measurements arrive, if it is in the form of an ellipsoidal feasible set E(x̂_th, P_th) [5, 7]. If uncertainty in predicting the interceptor state can be neglected, the guidance optimization needs to evaluate some measure of the discrepancy between the future controlled interceptor state and the most unfavourable future threat state, e.g. the largest possible range of the threat from the interceptor. The results of Section 2.4 give this discrepancy so long as it is expressed as the maximum of a weighted quadratic norm of the threat state with respect to interceptor state. The second approximation suggested in Section 2.4 can be employed while E(x̂_th, P_th) covers a small proportional range of the weighted quadratic norm, which will normally be so until late in the engagement, by which time a simple terminal guidance law will have been initiated.

4.2 Interception with uncertainty in trajectories of both threat and interceptor

If the interceptor state is also significantly uncertain, worst-case-optimal guidance requires the maximum difference between the two states over both feasible sets. Generalizing Section 2.4 to find the maximum, over two ellipsoids, of a non-negative-definite quadratic form in the state difference is algebraically and computationally complex. Instead, the simpler strategy of making the predicted interceptor feasible set (usually much the smaller) first intersect that of the threat, then be contained in it, positioned by some subsidiary criterion, is attractive. This requires the tests in Section 3.1.
4.3 Gauging of requirements to meet the control objective

The result of Section 3.2 indicates what nominal final state maximizes the permissible uncertainty about this value while still meeting a terminal control condition, expressed as an ellipsoidal set of acceptable final state values. It also gives the size of that uncertainty. It could, for instance, be used to indicate the change in nominal future interceptor state and the reduction in its uncertainty required to ensure that the interceptor comes close enough to the threat to guarantee successful interception.

4.4 Interception employing worst-case-reachable set

Computation of the worst-case-reachable set of the interceptor at each update allows the control sequence and choice of target to be optimized in full knowledge of the range of outcomes achievable. In the early stages of an engagement, the control can aim to keep as many objects as possible within reach of the interceptor, or to keep specific objects reachable for as long as possible. Once the target is selected, a simple worst-case-optimal strategy is to make the reachable set at the predicted interception time cover, in the subspace of the state variables appearing in the control objective (including position components), as large an expanded version of the threat's feasible set as possible. That is, the range of future threat behaviour (about its predicted, perhaps optimistic, feasible set) over which interception is guaranteed is maximized. Section 3.3 gives the largest ellipsoidal internal approximation to the set reachable in the worst case from an ellipsoidal feasible set by additive, ellipsoidally bounded forcing.

5. Conclusions

A number of results, with some approximations to reduce computational load, have been presented for deriving bounds on scalar state-dependent quantities from the ellipsoidal bounds produced by standard state-bounding algorithms. Tests of intersection and inclusion of ellipsoids have been based on them.
An algorithm for computing an ellipsoidal worst-case-reachable set has also been obtained, and some ways in which these results might be employed in interception problems have been discussed to illustrate their potential.

Acknowledgements

This material results in part from work supported by the UK Ministry of Defence. The advice and encouragement provided by Martyn Bennett of dstl, Farnborough, are gratefully acknowledged.

References

[1] Chernous'ko, F.L., 1981, Optimal guaranteed estimates of indeterminacies with the aid of ellipsoids, I, II. Engineering Cybernetics, 18, 1-9.
[2] Chernous'ko, F.L., 1994, State Estimation for Dynamic Systems (Boca Raton, FL: CRC Press).
[3] Durieu, C., Polyak, B.T. and Walter, E., 1996, Paper presented at Proceedings of the 13th Triennial IFAC World Congress, San Francisco, CA.
[4] Kurzhanski, A.B. and Valyi, I., 1997, Ellipsoidal Calculus for Estimation and Control (Boston, MA: Birkhäuser).
[5] Maksarov, D.G. and Norton, J.P., 1996, State bounding with ellipsoidal set description of the uncertainty. International Journal of Control, 65, 847-866.
[6] Maksarov, D.G. and Norton, J.P., 2002, Computationally efficient algorithms for state estimation with ellipsoidal approximations. International Journal of Adaptive Control & Signal Processing, 16, 411-434.
[7] Pronzato, L. and Walter, E., 1994, Minimum-volume ellipsoids containing compact sets: application to parameter bounding. Automatica, 30, 1731-1739.
[8] Schweppe, F.C., 1968, Recursive state estimation: unknown but bounded errors and system inputs. IEEE Transactions on Automatic Control, AC-13, 22-28.
[9] O'Connor, J.J. and Robertson, E.F., 1996, Quadratic, cubic and quartic equations. Available online at: http://www-history.mcs.st-andrews.ac.uk/history/HistTopics/Quadratic_etc_equations.html (accessed 16 December 2003).
[10] Anonymous news release: New research analyzes the game of missile interception, 10 July 2000.
Available online at: American Technion Soc., http://www.ats.org/news.php?id = 13 (accessed 10 March 2004). http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Mathematical and Computer Modelling of Dynamical Systems Taylor & Francis
Publisher: Taylor & Francis
Copyright: Taylor & Francis Group, LLC
ISSN (print): 1387-3954
eISSN (online): 1744-5051
DOI: 10.1080/13873950500069292
The feasible set of state values consistent with all observations up to the present is found by combining the time-updated bounds and the new observation-derived bounds. As new observations add state bounds, the feasible set becomes inconveniently complicated. A simpler outer-bounding approximation is therefore usually updated, the most popular choice being an ellipsoid. The ellipsoidal bound

x ∈ E(x̂_k, P_k) ≜ {x : (x − x̂_k)^T P_k^{−1} (x − x̂_k) ≤ 1, x ∈ R^n, x̂_k ∈ R^n, P_k ∈ R^{n×n}, P_k = P_k^T > 0}    (1)

on the state at observation instant k has centre x̂_k, axes along the eigenvectors of P_k and half-axis lengths equal to the square roots of the eigenvalues. The updating algorithm for x̂_k and P_k aims to make the ellipsoid as tight as possible, usually by minimizing tr P_k, |P_k| or a combination [3]. The recursive ellipsoid-updating algorithms are algebraically quite similar to the Kalman filter, and there is an obvious analogy between x̂_k and P_k in equation (1) and the mean and covariance updated by the Kalman filter. The analogy is closer if the mean and covariance are those of a Gaussian posterior probability density function (pdf) of state, as then the confidence ellipsoid defined by x̂_k and P_k is a direct analogue of the state-bounding ellipsoid. Bounding can (but need not) be regarded as a component part of probabilistic state estimation if the bounds are regarded as defining the finite support of a state pdf.

Batch algorithms for ellipsoidal state bounding also exist [7]. They usually produce much tighter bounds than do the recursive algorithms but are, of course, less suited to real-time use. The results in this paper are applicable to ellipsoids generated by either batch or recursive algorithms.

*Email: j.p.norton@bham.ac.uk
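As a minimal numerical sketch of equation (1) (invented numbers, not from the paper), the membership test and the half-axis geometry can be checked directly; a boundary point taken along an eigenvector of P sits exactly on the unit level set:

```python
import numpy as np

def in_ellipsoid(x, xc, P, tol=1e-12):
    """Membership test for E(xc, P) = {x : (x-xc)^T P^{-1} (x-xc) <= 1}."""
    d = x - xc
    return float(d @ np.linalg.solve(P, d)) <= 1.0 + tol

def half_axes(P):
    """Half-axis lengths (sqrt of eigenvalues) and directions (eigenvectors)."""
    w, V = np.linalg.eigh(P)
    return np.sqrt(w), V

xc = np.array([1.0, 3.0])
P = np.array([[4.0, 2.0],
              [2.0, 3.0]])            # symmetric positive definite
lengths, dirs = half_axes(P)
x_bnd = xc + lengths[0] * dirs[:, 0]  # boundary point along one axis
print(in_ellipsoid(x_bnd, xc, P))     # True: exactly on the boundary
print(in_ellipsoid(xc + np.array([3.0, 0.0]), xc, P))  # False: outside
```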
Ellipsoidal state bounds can be used for two purposes for which they are better suited than probabilistic state estimation yet which have received less attention than they deserve. The first is to produce bounds on practically significant, scalar functions of state. Bounds on certain variables may be needed for control or decision support when the state coordinate system does not give them directly, being chosen instead to give a simple model. For example, Cartesian coordinates are often convenient for modelling motion, while polar variables may be preferred for guidance or collision avoidance. Section 2 shows that bounds on scalar functions of state can be found from ellipsoidal state bounds in several important cases. By contrast, the mean and covariance of a nonlinear function of state cannot generally be derived from the estimated state mean and covariance, and are not readily found even if the state pdf is Gaussian. The second potential use of ellipsoidal state bounds is to check whether the future state, subject to uncertainty, can meet some desired condition. Examples are checking whether the future state can definitely be brought into a specified desirable region, true if its feasible set is contained in that region; checking if two objects can be made to meet or to miss in future, as in interception or collision avoidance; and unequivocal fault detection when the feasible set is outside the ‘no fault’ region. It is fairly straightforward to detect whether two ellipsoids intersect or one contains the other. If they do not intersect, the smallest change (translation, expansion or contraction) required to make one ellipsoid intersect or contain the other can also be found readily. This may show, for example, what action is necessary for the whole range of uncertain predicted position of a target to be reachable, ensuring interception. 
Similarly, when one set contains the other, it is easy to find the largest expansion or contraction factor for which that remains true.

The next section gives results for bounds on state-derived scalar quantities. Section 3 considers the intersection and inclusion of ellipsoids, and ellipsoidal reachable sets. Section 4 illustrates their utility by outlining applications in aerospace interception.

2. Bounds on scalar functions of state

2.1 Linear functions

The extremes of a scalar, linear function

f = f^T x, f ∈ R^n    (2)

over the ellipsoid E(x̂, P) are at the points of tangency of the supporting hyperplanes of E(x̂, P) normal to f. The points are readily found by the Lagrange multiplier method:

x = x̂ ± P f / √(f^T P f)    (3)

so the extremes of f are f^T x̂ ± √(f^T P f). As each extreme point is the intersection of a hyperplane and the strictly convex ellipsoid, it is unique.

By use of equation (3), the usual ellipsoid-updating state-bounding algorithms [1 – 6] can be modified to make the state bounds as tight as possible in any nominated direction (e.g. that of a particular crucial state variable). However, this is at the cost of bound looseness in other directions, so a better idea is to find the feasible set as the intersection of a number of ellipsoids, each tight in a different direction. A refinement would be first to find the minimum-volume bounding ellipsoid by the standard algorithm, then to update the ellipsoids tightest in the axis directions of that ellipsoid.

2.2 Ratio of two state variables

This case covers derivation of bounds on azimuth or elevation from joint bounds on Cartesian position components and their derivatives. For instance, for a target bounded by E(x̂, P), with (x_1, x_2) the Cartesian position coordinates of the target in the horizontal plane with respect to an observer at the origin, the azimuth extremes are defined by the extremes of x_2/x_1 over E(x̂, P), as sketched in figure 1. Another example is discrimination between target types, by comparing bounds on the image intensities of the target in two spectral bands, measured by a multispectral infrared sensor, with bounds on their ratio for different types of target.

Figure 1. Extremes of x_2/x_1 over E(x̂, P).

As x_2/x_1 is constant on a hyperplane and is monotonic in the slope of the hyperplane, its extremes are the points of tangency of such hyperplanes with E(x̂, P). The extremes of x_2/x_1 subject to x being on the boundary

(x − x̂)^T P^{−1} (x − x̂) = 1    (4)

of E(x̂, P) are stationary points of the augmented cost function

L = x_2/x_1 + λ[1 − (x − x̂)^T P^{−1} (x − x̂)]    (5)

with respect to x and λ. Differentiating L with respect to x, substituting equation (4) and solving for x,

x − x̂ = (1/d) P [−x_2  x_1  0 ⋯ 0]^T    (6)

where

d = x̂_1 x_2 − x̂_2 x_1.    (7)

Solution of equation (6) for x and substitution into equation (7) gives

d = ±√(p_22 x̂_1² − 2 p_12 x̂_1 x̂_2 + p_11 x̂_2² − (p_11 p_22 − p_12²))    (8)

where

[p_11  p_12; p_21  p_22]

is the top left 2 × 2 partition of P. With d known, the first two rows of equation (6) (linear) are solved for x_1 and x_2, yielding extremes

x_2 / x_1 = (p_12 − d − x̂_1 x̂_2) / (p_11 − x̂_1²).    (9)

Each extreme point is the intersection of a hyperplane, which contains the normal to the x_1 : x_2 plane through the origin and is defined by a constant value of arctan(x_2/x_1), and the strictly convex ellipsoid. The point is thus unique.
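The computations in equations (3) and (8)–(9) can be sketched numerically. This example is invented (a unit disc centred at (2, 1), for which the tangent slopes from the origin are 0 and 4/3); a dense boundary scan serves as an independent check:

```python
import numpy as np

def linear_extremes(xc, P, f):
    """Extremes of the scalar f^T x over E(xc, P), equation (3)."""
    r = float(np.sqrt(f @ P @ f))
    return f @ xc - r, f @ xc + r

def ratio_extremes(xc, P):
    """Extremes of x2/x1 over E(xc, P), via equations (8) and (9).
    Assumes p11 != xc1^2 (the generic case)."""
    p11, p12, p22 = P[0, 0], P[0, 1], P[1, 1]
    rad = (p22*xc[0]**2 - 2.0*p12*xc[0]*xc[1] + p11*xc[1]**2
           - (p11*p22 - p12**2))                        # radicand in (8)
    vals = [(p12 - d - xc[0]*xc[1]) / (p11 - xc[0]**2)  # equation (9)
            for d in (np.sqrt(rad), -np.sqrt(rad))]
    return min(vals), max(vals)

xc, P = np.array([2.0, 1.0]), np.eye(2)   # unit disc centred at (2, 1)
lo, hi = ratio_extremes(xc, P)
print(lo, hi)                             # 0.0 and 4/3

# independent check: sample the boundary densely
th = np.linspace(0.0, 2.0*np.pi, 200001)
bnd = xc[:, None] + np.vstack([np.cos(th), np.sin(th)])
print(linear_extremes(xc, P, np.array([0.0, 1.0])))    # (0.0, 2.0), range of x2
```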
The values of the other state variables at these extremes are easily shown to be

[x_3 ⋯ x_n]^T = x̂_{3:n} − Q_22^{−1} Q_21 [x_1 − x̂_1  x_2 − x̂_2]^T,  where [Q_11  Q_12; Q_21  Q_22] ≜ P^{−1} and Q_11 is 2 × 2.    (10)

2.3 Product of two state variables

This covers, for instance, derivation of bounds on the vertical velocity component of a target from bounds on its velocity and the cosine of the angle of the velocity to the local vertical, convenient state variables in ballistic flight. To find the extremes of x_1 x_2 over E(x̂, P), an analysis along similar lines to that in Section 2.2 gives

x − x̂ = −(1/d) P [x_2  x_1  0 ⋯ 0]^T    (11)

where now

d = x̂_1 x_2 + x̂_2 x_1 − 2 x_1 x_2.    (12)

The small increase in complexity of d leads to a quartic for d in terms of the first two elements of x̂ and the top left 2 × 2 partition of P:

d′⁴ − (α + 2β + γ) d′² + 2(p_12 α + βγ/p_12) d′ − β(α − β + γ) = 0    (13)

where

d′ ≜ d + p_12,  α ≜ p_22 x̂_1² + p_11 x̂_2²,  β ≜ p_11 p_22,  γ ≜ 2 p_12 x̂_1 x̂_2.    (14)

An exact solution of equation (13) is available by several classical methods [9]. The extremes of x_1 x_2 follow by solving the linear equation (11) for x_1 and x_2 in terms of d, then forming x_1 x_2 directly or, to get a slightly simpler expression, using equation (12) again to give

x_1 x_2 = (d/2)[(α − γ d′/p_12)/(β − d′²) − 1].    (15)

The nature of the stationary points of x_1 x_2 on the boundary of E(x̂, P) is seen by examining the values of the Lagrange multiplier λ = −d/2 at the roots of equation (13). They comprise either (i) a complex conjugate pair, a negative, real value yielding the minimum and a positive, real value giving the maximum, or (ii) four real values: a positive value at the global maximum, a negative value at the global minimum, another negative value at a local minimum, and a negative value at a point which is a local maximum with respect to x_1 and x_2 but a minimum with respect to the other state variables.

The global maximum is at the unique point of external tangency of the strictly convex E(x̂, P) and a convex set of the form x_1 x_2 ≥ c. Any minimum is at a point where E(x̂, P) is a tangent from the interior to a set of the form x_1 x_2 ≤ c, so in (ii), the pair of minima may be distinct yet coincide in value, as in figure 2 and the example below.

Figure 2. Points corresponding to roots of equation (13).

Example. The extreme points of x_1 x_2 on the boundary of E(x̂, P) are required, where E(x̂, P) has centre x̂ = [3  1.2  ⋯]^T and describing matrix with top left 2 × 2 partition

[6.2729  −2.0909; −2.0909  1.0036].

In equation (14) α = 18.0655, β = 6.2955, γ = −15.0545, yielding roots d′ = 2.509091, 2.509091, −4.244288 and −0.773894. The corresponding points x = [5  0.2  ⋯], [0.5  2  ⋯], [3.723  1.489  ⋯], [2.277  0.911  ⋯] are respectively two global minima, the global maximum and a local maximum between the minima. □

2.4 Positive-semidefinite quadratic form

The extremes of x^T Q x on the boundary of the ellipsoid E(x̂, P), for Q symmetric and positive-semidefinite or definite, are useful in a number of situations. They include as a special case bounds on the range of a target with Cartesian position components and their derivatives as state variables. More generally, tests for the intersection and inclusion of ellipsoids can be based on these extremes. With Q positive definite, if 0 ∉ E(x̂, P), one can check whether the ellipsoids E(0, Q^{−1}) and E(x̂, P) intersect by examining the minimum of x^T Q x subject to (x − x̂)^T P^{−1} (x − x̂) = 1. If 0 ∈ E(x̂, P), the minimum shows whether E(0, Q^{−1}) ⊆ E(x̂, P). Such tests appear in Section 3. The case where E(x̂, P) contains a point in the null space of Q (such as the position origin in the range-bounding case) is uninteresting, as the minimum of x^T Q x subject to (x − x̂)^T P^{−1} (x − x̂) ≤ 1 is then zero.
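Returning to the Section 2.3 procedure: the quartic (13) is easy to solve numerically instead of by the classical methods of [9]. The sketch below uses an invented, well-conditioned ellipsoid (simple roots only; a repeated root of (13), as in the worked example, makes the 2 × 2 linear system from (11) singular and needs separate treatment). A dense boundary scan checks the extreme products:

```python
import numpy as np

def product_stationary_points(xc, P):
    """Stationary points of x1*x2 on the boundary of E(xc, P): solve the
    quartic (13) for d' = d + p12, then recover x1, x2 from the first two
    rows of equation (11). Assumes simple real roots (p12 != 0)."""
    p11, p12, p22 = P[0, 0], P[0, 1], P[1, 1]
    a = p22*xc[0]**2 + p11*xc[1]**2               # alpha in (14)
    b = p11*p22                                    # beta
    g = 2.0*p12*xc[0]*xc[1]                        # gamma
    quartic = [1.0, 0.0, -(a + 2.0*b + g),
               2.0*(p12*a + b*g/p12), -b*(a - b + g)]
    pts = []
    for dp in np.roots(quartic):
        if abs(dp.imag) > 1e-8:
            continue                               # complex pair carries no point
        dp = dp.real
        d = dp - p12
        den = dp*dp - b                            # determinant of the 2x2 system
        x1 = d*(dp*xc[0] - p11*xc[1]) / den
        x2 = d*(dp*xc[1] - p22*xc[0]) / den
        pts.append((x1, x2))
    return pts

xc = np.array([3.0, 2.0])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
pts = product_stationary_points(xc, P)
print(sorted(round(x1*x2, 3) for x1, x2 in pts))   # min and max of x1*x2
```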
Excluding this case, the extreme points of x^T Q x can be found by setting the derivative ∂L/∂x of the augmented cost function

L ≜ x^T Q x + λ[1 − (x − x̂)^T P^{−1} (x − x̂)]    (16)

to zero, yielding

x = (I − λ^{−1} P Q)^{−1} x̂.    (17)

The inverse in equation (17) exists unless λ is an eigenvalue of PQ. In that case, the origin, the centre x̂ of E(x̂, P) and the point where E(x̂, P) and E(0, Q^{−1}) touch are collinear on the eigenvector associated with λ. Such cases are detectable by noticing that x̂ is an eigenvector of PQ, and the location and value of the extreme of x^T Q x are found from the corresponding eigenvalue.

In the generic case, premultiplying ∂L/∂x = 0 by (x̂ − x)^T and setting (x − x̂)^T P^{−1} (x − x̂) = 1 gives

λ = x^T Q (x − x̂).    (18)

From equation (17), x − x̂ = −(I − λ Q^{−1} P^{−1})^{−1} x̂, so (x − x̂)^T P^{−1} (x − x̂) = 1 becomes

x̂^T (I − λ P^{−1} Q^{−1})^{−1} P^{−1} (I − λ Q^{−1} P^{−1})^{−1} x̂ ≡ x̂^T (QP − λI)^{−1} Q P Q (PQ − λI)^{−1} x̂ = 1.    (19)

This polynomial equation in scalar λ is solved; then x follows from equation (17), in the form x = −λ(PQ − λI)^{−1} x̂, yielding a bound

|λ| ‖(PQ − λI)^{−1} x̂‖_Q    (20)

on ‖x‖_Q ≜ √(x^T Q x). In general, equation (19) is of degree 2n in λ and requires a root-finder if n > 2. Computational cost may inhibit the use of equations (19) and (20) in conjunction with recursive state estimators at high update rates, so economical approximations may be useful. A very cheaply computed one is found by noticing that the terms of degree 2n and 2n − 1 in λ in the polynomial equation obtained by rationalizing equation (19) are those of |PQ − λI|², namely λ^{2n} and −2 tr(PQ) λ^{2n−1}. Hence, for a dominant root |λ| ≫ 1 an approximate solution is λ ≅ 2 tr(PQ).

The points of E(x̂, P) closest to and farthest from a given point x* not in it are found by putting I for Q and x̂ − x* for x̂. They may be approximated by the extreme points

x = x̂ ± P(x̂ − x*) / √((x̂ − x*)^T P (x̂ − x*))    (21)

in direction x̂ − x*, using equation (3). This approximation is clearly good when the distance from x* to x̂ is much longer than the longest half-axis √(λ_max(P)) of E(x̂, P).

Example. The points of E(x̂, P) farthest from and nearest to the origin are required, where E(x̂, P) has centre x̂ = [1  3]^T and describing matrix

P = [4  2; 2  3].

Straightforward calculation gives:

(P − λI)^{−1} = (λ² − 7λ + 8)^{−1} [3 − λ  −2; −2  4 − λ],
x = λ(λI − P)^{−1} x̂ = (λ² − 7λ + 8)^{−1} [λ² + 3λ  3λ² − 10λ]^T,
(P − λI)^{−2} x̂ = (λ² − 7λ + 8)^{−2} [λ² + 6λ − 29  3λ² − 20λ + 46]^T,
P x̂ = [10  11]^T.    (22)

Substituted into equation (19) with Q = I, this gives, after tidying up,

λ⁴ − 14λ³ + 22λ² + 48λ − 152 = 0.

The only real roots are λ = 11.9031 and −2.1247. The positive value indicates that x^T x increases if the bound on (x − x̂)^T P^{−1} (x − x̂) is increased from 1, so it applies at the farthest point of E(x̂, P) from the origin, and the negative root gives the nearest. Through the second part of equation (22), the extremes are, respectively, x = [2.6731  4.6114]^T and [−0.0679  1.2703]^T. The simplest approximation λ ≅ 2 tr(P) = 14 is fairly close to the larger root and gives as the farthest point x = [2.2453  4.2264]^T. The other approximation

x̃ = x̂ ± P x̂ / √(x̂^T P x̂) = [1  3]^T ± [10  11]^T / √43

gives x̃ = [2.5250  4.6775]^T, quite good given that the distance from the origin to x̂ is only 3.162, comparable with the length 2.358 of the longer half-axis of E(x̂, P). The approximation x̃ = [−0.5250  1.3225]^T to the nearest point is less good absolutely as well as relatively. Figure 3 sketches the situation, showing the extreme points and second approximation.

Figure 3. Points of E(x̂, P) nearest to and farthest from origin, and approximations.
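The example above can be reproduced with a short numerical sketch. Recovering the degree-2n polynomial by interpolation is a convenience of this sketch (the paper simply rationalizes equation (19) by hand); np.roots then supplies all candidate λ:

```python
import numpy as np

def norm_extremes(xc, P):
    """Stationary points of x^T x on the boundary of E(xc, P) (Section 2.4
    with Q = I). Equation (19) rationalises to a polynomial of degree 2n in
    lambda; it is recovered here by exact interpolation and solved."""
    n = len(xc)
    I = np.eye(n)
    def h(lam):   # det(P - lam I)^2 * (LHS of (19) - 1): a degree-2n polynomial
        v = np.linalg.solve(P - lam*I, xc)
        return np.linalg.det(P - lam*I)**2 * (v @ P @ v - 1.0)
    s = np.linalg.eigvalsh(P).max() + 1.0 + np.arange(2*n + 1)  # clear of spectrum
    coeffs = np.polyfit(s, [h(t) for t in s], 2*n)
    lams = sorted(l.real for l in np.roots(coeffs) if abs(l.imag) < 1e-7)
    pts = [lam * np.linalg.solve(lam*I - P, xc) for lam in lams]  # equation (17)
    return lams, pts

xc = np.array([1.0, 3.0])       # the paper's example
P = np.array([[4.0, 2.0],
              [2.0, 3.0]])
lams, pts = norm_extremes(xc, P)
for lam, x in zip(lams, pts):
    print(round(lam, 4), np.round(x, 4))   # -2.1247 (nearest), 11.9031 (farthest)

# the second approximation, equation (21) with x* = 0
xt = xc + P @ xc / np.sqrt(xc @ P @ xc)
print(np.round(xt, 4))                     # [2.525, 4.6775]
```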
3. Operations on state-bounding ellipsoids

3.1 Tests for intersection and inclusion of ellipsoids

Results (17), (19) and (20) provide tests for inclusion and intersection of general ellipsoids E(x̂_1, P_1) and E(x̂_2, P_2). They can be used directly, putting P_1, P_2 for P, Q and x̂_1 − x̂_2 for x̂. Alternatively, they can be simplified by an affine transformation x̃ ≜ A_2^{−1}(x − x̂_2), where A_2 A_2^T ≜ P_2, which transforms E(x̂_2, P_2) to the unit ball

E(0, I) ≜ {x̃ : x̃^T x̃ ≤ 1}    (23)

centred at the origin, so Q = I, and transforms E(x̂_1, P_1) to

E(x̃_1, P̃_1) ≜ {x̃ : (x̃ − x̃_1)^T P̃_1^{−1} (x̃ − x̃_1) ≤ 1}    (24)

where P̃_1 ≜ A_2^{−1} P_1 A_2^{−T} > 0 and x̃_1 ≜ A_2^{−1}(x̂_1 − x̂_2) replace P and x̂. This option is adopted here to minimize algebra. The following tests are then obtained.

Test 3.1.1 Does E(x̂_1, P_1) contain E(x̂_2, P_2)? If E(x̂_1, P_1) ⊇ E(x̂_2, P_2), then E(x̃_1, P̃_1) ⊇ E(0, I). This is so if and only if (i) the origin is in E(x̃_1, P̃_1) and (ii) the point of the boundary of E(x̃_1, P̃_1) nearest the origin is at least distance 1 from it. Hence E(x̂_1, P_1) contains E(x̂_2, P_2) iff

i. x̃_1^T P̃_1^{−1} x̃_1 ≤ 1    (25)

and (putting Q = I, P̃_1 for P, x̃_1 for x̂ and μ for λ in equations (17) and (19))

ii. z_min^T z_min ≥ 1, where z_min ≜ (I − μ_min^{−1} P̃_1)^{−1} x̃_1 and μ_min is the root of x̃_1^T (P̃_1 − μI)^{−1} P̃_1 (P̃_1 − μI)^{−1} x̃_1 = 1 minimizing ‖(I − μ^{−1} P̃_1)^{−1} x̃_1‖.    (26)

Moreover, if tests (25) and (26) are passed, the largest factor by which E(0, I) can be expanded and stay contained in E(x̃_1, P̃_1) is √(z_min^T z_min). As ratios of distances in the same direction are unchanged by affine transformation, this is the largest ratio by which E(x̂_2, P_2) can be expanded and still be contained in E(x̂_1, P_1).

The following complementary test to Test 3.1.1 reuses the affine transformation.

Test 3.1.2 Does E(x̂_2, P_2) contain E(x̂_1, P_1)? If E(x̂_1, P_1) ⊆ E(x̂_2, P_2), then E(x̃_1, P̃_1) ⊆ E(0, I). This is so if and only if the distance of the farthest point of the boundary of E(x̃_1, P̃_1) from the origin is no greater than 1. From equations (17) and (20), this is so iff

z_max^T z_max ≤ 1, where z_max ≜ (I − μ_max^{−1} P̃_1)^{−1} x̃_1 and μ_max is the real, positive root of x̃_1^T (P̃_1 − μI)^{−1} P̃_1 (P̃_1 − μI)^{−1} x̃_1 = 1 which maximizes ‖(I − μ^{−1} P̃_1)^{−1} x̃_1‖.    (27)

If Test 3.1.2 is passed, the largest ratio by which E(x̃_1, P̃_1) can be expanded and still be contained in E(0, I) is 1/√(z_max^T z_max), which is the largest ratio by which E(x̂_1, P_1) can be expanded and remain in E(x̂_2, P_2).

Test 3.1.3 Does E(x̂_1, P_1) intersect E(x̂_2, P_2)? If E(x̂_1, P_1) ∩ E(x̂_2, P_2) ≠ ∅, then E(x̃_1, P̃_1) ∩ E(0, I) ≠ ∅. This is so if either (i) the distance of the nearest point of the boundary of E(x̃_1, P̃_1) from the origin is no greater than 1, or (ii) E(x̂_1, P_1) contains E(x̂_2, P_2). The test is thus whether

i. equation (25) is true, or equation (26) is true with ≤ for ≥,    (28)

or

ii. equation (25) is true and equation (26) is true with > for ≥.    (29)

Test 3.1.4 Are E(x̂_1, P_1) and E(x̂_2, P_2) disjoint? If E(x̂_1, P_1) ∩ E(x̂_2, P_2) = ∅, then E(x̃_1, P̃_1) ∩ E(0, I) = ∅. This is so if Test 3.1.3 is failed, but an equivalent test is whether E(x̃_1, P̃_1) does not contain the origin (or any other test point of E(0, I); the origin is the simplest to test), and the distance of the nearest point of the boundary of E(x̃_1, P̃_1) to the origin is greater than 1, i.e.

x̃_1^T P̃_1^{−1} x̃_1 > 1 and equation (26) is true with > for ≥.    (30)

The smallest factor by which E(0, I) must be expanded to intersect E(x̃_1, P̃_1), and hence the smallest ratio by which E(x̂_2, P_2) must be expanded to intersect E(x̂_1, P_1) (but not vice versa), is √(z_min^T z_min).
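These tests can be cross-checked by brute force in two dimensions. The sketch below performs the affine transformation of (23)–(24) but obtains the nearest and farthest boundary distances by dense sampling rather than by solving (26) and (27); all ellipsoids are invented:

```python
import numpy as np

def transform(xc1, P1, xc2, P2):
    """Affine map taking E(xc2, P2) to the unit ball (Section 3.1)."""
    A2 = np.linalg.cholesky(P2)                              # A2 A2^T = P2
    P1t = np.linalg.solve(A2, np.linalg.solve(A2, P1).T).T   # A2^{-1} P1 A2^{-T}
    xt = np.linalg.solve(A2, xc1 - xc2)
    return xt, P1t

def boundary_dists(xt, P1t, m=100001):
    """Min/max distance from origin to the boundary of mapped E(xt, P1t), 2-D."""
    th = np.linspace(0.0, 2.0*np.pi, m)
    A1 = np.linalg.cholesky(P1t)
    pts = xt[:, None] + A1 @ np.vstack([np.cos(th), np.sin(th)])
    r = np.linalg.norm(pts, axis=0)
    return r.min(), r.max()

def contains(xc1, P1, xc2, P2):
    """Test 3.1.1: does E(xc1, P1) contain E(xc2, P2)?"""
    xt, P1t = transform(xc1, P1, xc2, P2)
    rmin, _ = boundary_dists(xt, P1t)
    inside = xt @ np.linalg.solve(P1t, xt) <= 1.0            # condition (25)
    return bool(inside and rmin >= 1.0)                      # condition (26)

def intersects(xc1, P1, xc2, P2):
    """Test 3.1.3: do the two ellipsoids intersect?"""
    xt, P1t = transform(xc1, P1, xc2, P2)
    rmin, _ = boundary_dists(xt, P1t)
    inside = xt @ np.linalg.solve(P1t, xt) <= 1.0
    return bool(inside or rmin <= 1.0)

big = (np.zeros(2), 9.0*np.eye(2))            # radius-3 disc at origin
small = (np.array([1.0, 0.0]), 0.25*np.eye(2))  # radius-0.5 disc at (1, 0)
far = (np.array([10.0, 0.0]), np.eye(2))        # unit disc at (10, 0)
print(contains(*big, *small), contains(*small, *big))  # True False
print(intersects(*big, *far))                          # False
```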
3.2 Largest expanded ellipsoid of given shape in a given ellipsoid

The problem is to find x̂_2 and the greatest scalar s such that E(x̂_2, s² P_2) ⊆ E(x̂_1, P_1), given x̂_1, P_1 and P_2. That is, find the largest scaled and shifted version of an ellipsoid with describing matrix P_2 which will fit into E(x̂_1, P_1).

By symmetry, x̂_2 = x̂_1, and the ellipsoids E(x̂_1, P_1) and E(x̂_1, s² P_2) touch when the ratio √(φ^T P_1 φ / (s² φ^T P_2 φ)) of the distances in direction φ from their joint centre x̂_1 to their boundaries is unity. Now

min over real φ of (φ^T P_1 φ)/(φ^T P_2 φ) = λ_min(P_2^{−1} P_1)    (31)

so

s = λ_min^{1/2}(P̃_1) = λ_min^{1/2}(P_2^{−1} P_1).    (32)

Figure 4 illustrates a two-dimensional case.

Figure 4. Largest shifted and expanded version of ellipsoid E(x̂_2, P_2) inside E(x̂_1, P_1).

Example. Given

P_1 = [4  2  3; 2  3  1; 3  1  5],  P_2 = [3  1  2; 1  6  2; 2  2  4],

the largest scaled version of E(0, P_2) fitting into E(0, P_1) is required. The eigenvalues of

P_2^{−1} P_1 = [5.75  3.25  5.75; 3  2  3; 5.125  2.8875  5.625]

are 12.8146, 0.3814 and 0.1790, so s = λ_min^{1/2}(P_2^{−1} P_1) = 0.4231, and E(0, P_2) has to be shrunk in this proportion to fit into E(0, P_1).

3.3 Worst-case-reachable set

The worst-case-reachable set comprises all future state values reachable, by the exercise of bounded forcing, from every point in the current feasible set. If the effect of forcing is additive (as in any system which is linear in additive forcing), the set is

R ≜ {x_r : ∀ x_s ∈ S, x_r − x_s ∈ F}.    (33)

That is, every point in R is reachable from any starting point in S by adding an increment (due to forcing) from F. It is convenient to approximate S and the bounded forcing by ellipsoids, but the forcing may be of lower dimension than the state. If so, F is E(x̂_f, F) with F positive-semidefinite.
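Before continuing, the Section 3.2 computation (32) is worth a one-function sketch. The matrices here are invented (the co-centred case x̂_2 = x̂_1 = 0); containment is confirmed through the equivalent matrix inequality P_1 − s² P_2 ⪰ 0:

```python
import numpy as np

def largest_inscribed_scale(P1, P2):
    """Largest s with E(xc, s^2 P2) inside E(xc, P1), equation (32):
    s = sqrt(lambda_min(P2^{-1} P1))."""
    lam = np.linalg.eigvals(np.linalg.solve(P2, P1)).real
    return float(np.sqrt(lam.min()))

P1 = np.array([[4.0, 1.0],
               [1.0, 2.0]])
P2 = np.array([[1.0, 0.0],
               [0.0, 3.0]])
s = largest_inscribed_scale(P1, P2)
# containment check: E(0, s^2 P2) inside E(0, P1)  <=>  P1 - s^2 P2 PSD
print(round(s, 4), np.linalg.eigvalsh(P1 - s*s*P2).min() >= -1e-9)
```

At the optimal s the two ellipsoids touch, so P_1 − s² P_2 is singular positive-semidefinite (its smallest eigenvalue is zero to rounding error).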
Whether or not F has full rank, R is generally not an ellipsoid but must be approximated by the largest (maximum-hypervolume) ellipsoid

E(x̂_r, R) ⊆ {x_r : ∀ x_s ∈ E(x̂_s, S), x_r − x_s ∈ E(x̂_f, F)}.    (34)

Denoting the vector sum of sets A and B by A ⊕ B, (34) implies and is implied by

E(x̂_r, R) ⊕ E(x̂_s, S) ⊆ E(x̂_f, F).    (35)

The largest ellipsoid E(x̂_r, R) in the 'vector difference' of E(x̂_f, F) and E(x̂_s, S) can be found as follows, by a derivation much like that for the smallest ellipsoid containing the vector sum of two ellipsoids ([5], appendix A). If x̂_s, x̂_f were zero, the centre of E(x̂_r, R) would be the origin, by symmetry. Non-zero x̂_s, x̂_f simply add x̂_f to every vector in E(x̂_f, F) and subtract x̂_s from every vector in E(x̂_s, S), adding x̂_s + x̂_f to every point of E(x̂_r, R), so for the centre of E(x̂_r, R) choose

x̂_r = x̂_s + x̂_f.    (36)

To permit an analytical solution for E(x̂_r, R), we require R to be linear in S and F:

R = bF − aS.    (37)

The width g_f = √(φ^T F φ) of E(x̂_f, F) in any direction φ must not be less than the sum of the widths g_s, g_r of E(x̂_s, S) and E(x̂_r, R) in that direction, so using equation (37),

g_f² = (g_r² + a g_s²)/b ≥ (g_r + g_s)²    (38)

so

(1/b − 1) g_r² − 2 g_r g_s + (a/b − 1) g_s² ≥ 0    (39)

or, defining ρ ≜ g_r/g_s ≥ 0 (permissible as S > 0),

(1/b − 1) ρ² − 2ρ + (a/b − 1) ≥ 0, so (ρ + 1)² ≤ (ρ² + a)/b.    (40)

With g_s > 0 and g_r ≥ 0, equation (39) requires at least one of a/b and 1/b to be greater than unity. It is easy to confirm that equation (40) is satisfied by

a/b = 1 + 1/p,  1/b = 1 + p, with p > 0,    (41)

closely analogous to the choice in the standard vector-sum algorithm [1, 5]. Hence,

R = F/(1 + p) − S/p.    (42)

The final step is to find the value of p maximizing the hypervolume of E(x̂_r, R), proportional to |R|^{1/2}; |R| (and hence the hypervolume) is maximized where

d|R|/dp = |R| tr(R^{−1} dR/dp) = 0.    (43)

As |R| is not zero, the trace in equation (43) must be zero, giving

tr[(F/(1+p) − S/p)^{−1} (−F/(1+p)² + S/p²)] = 0    (44)

so, after some manipulation,

tr[−I/(1+p) + (F/(1+p) − S/p)^{−1} S/(p²(1+p))] = 0.    (45)

Noting that tr(A + B) = tr(A) + tr(B), this yields

−n/(1+p) + (1/(p²(1+p))) tr[(F/(1+p) − S/p)^{−1} S] = 0    (46)

where n is dim(x). As tr(A^{−1}) = Σ_i λ_i^{−1}(A) and λ_i(aA + γI) = a λ_i(A) + γ, (46) gives

Σ_{i=1}^{n} 1/(p λ_i(S^{−1}F)/(1+p) − 1) = np.    (47)

A root-finder solves this polynomial equation for p; then, equation (42) gives R for each positive root, and the one giving the largest |R| is selected. □

4. Illustrative use of results

4.1 Interception subject to uncertainty in threat trajectory

Terminal-phase guidance laws for ballistic-missile interceptors do not take account of longer-range factors such as target selection or economical use of fuel for manoeuvring. Control schemes with the potential to include these factors in mid-course navigation of the interceptor, through periodically updated constrained optimization of the interceptor's planned trajectory, are approaching computational practicability. In such schemes, the performance of the interceptor may be made less sensitive to uncertainty in the predicted target trajectory by two strategies. One is to rely on the feedback provided by periodic correction, reoptimizing the interceptor trajectory in the light of up-to-date observation of the target and applying each successive optimized control sequence only up to the next observation. The second strategy is to account explicitly for the uncertainty during the optimization. This is desirable if a number of candidate targets must still be considered, weighted by their perceived threat and the uncertainty in their trajectories, or if the uncertainty is large enough to pose a large risk, if ignored, that the control action is inappropriate even in the short term. When uncertainty is included in the optimization, a worst-case optimality criterion suits interception.
The predicted situation closer to interception is to be optimized in the most difficult case over the bounded uncertain future state of the threat. This has some similarity to employing a game-theory formulation of the problem [10], but the connection will not be pursued here. The uncertain threat state prediction may be updated relatively easily, as new sensor measurements arrive, if it is in the form of an ellipsoidal feasible set E(x̂_th, P_th) [5, 7]. If uncertainty in predicting the interceptor state can be neglected, the guidance optimization needs to evaluate some measure of the discrepancy between the future controlled interceptor state and the most unfavourable future threat state, e.g. the largest possible range of the threat from the interceptor. The results of Section 2.4 give this discrepancy so long as it is expressed as the maximum of a weighted quadratic norm of the threat state with respect to interceptor state. The second approximation suggested in Section 2.4 can be employed while E(x̂_th, P_th) covers a small proportional range of the weighted quadratic norm, which will normally be so until late in the engagement, by which time a simple terminal guidance law will have been initiated.

4.2 Interception with uncertainty in trajectories of both threat and interceptor

If the interceptor state is also significantly uncertain, worst-case-optimal guidance requires the maximum difference between the two states over both feasible sets. Generalizing Section 2.4 to find the maximum, over two ellipsoids, of a non-negative-definite quadratic form in the state difference is algebraically and computationally complex. Instead, the simpler strategy of making the predicted interceptor feasible set (usually much the smaller) first intersect that of the threat, then be contained in it, positioned by some subsidiary criterion, is attractive. This requires the tests in Section 3.1.
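The worst-case-reachable construction of Section 3.3, which underpins the strategy discussed next, can be sketched numerically. The bounds below are invented concentric spheres, for which the answer is known in closed form (R = (√4 − √0.25)² I = 2.25 I); a grid-plus-bisection search stands in for the root-finder mentioned after equation (47):

```python
import numpy as np

def worst_case_reachable(S, F):
    """Maximum-volume R with E(0,R) (+) E(0,S) contained in E(0,F):
    R = F/(1+p) - S/p (equation (42)), p a positive root of (47)."""
    n = S.shape[0]
    ell = np.linalg.eigvals(np.linalg.solve(S, F)).real  # eigenvalues of S^{-1}F
    def resid(p):
        return float(np.sum(1.0/(p*ell/(1.0+p) - 1.0)) - n*p)
    ps = np.linspace(1e-4, 50.0, 500001)
    vs = (1.0/(ps[:, None]*ell[None, :]/(1.0+ps[:, None]) - 1.0)).sum(axis=1) - n*ps
    best = None
    for i in np.flatnonzero(np.sign(vs[:-1]) != np.sign(vs[1:])):
        lo, hi = ps[i], ps[i+1]
        for _ in range(80):                   # bisection on the bracket
            mid = 0.5*(lo + hi)
            if resid(lo)*resid(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        p = 0.5*(lo + hi)
        if abs(resid(p)) > 1e-6:
            continue                          # crossing was a pole of (47), not a root
        R = F/(1.0+p) - S/p                   # equation (42)
        if np.linalg.eigvalsh(R).min() > 0 and (best is None or
                np.linalg.det(R) > np.linalg.det(best)):
            best = R
    return best

F = 4.0*np.eye(2)     # invented forcing bound
S = 0.25*np.eye(2)    # invented current feasible set
R = worst_case_reachable(S, F)
print(np.round(R, 6))  # 2.25 * I, i.e. (sqrt(4) - sqrt(0.25))^2 * I
```

The width condition (38) can then be spot-checked in random directions: √(φᵀRφ) + √(φᵀSφ) should not exceed √(φᵀFφ).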
4.3 Gauging of requirements to meet the control objective

The results of Section 3.2 indicate what nominal final state maximizes the permissible uncertainty about this value while still meeting a terminal control condition, expressed as an ellipsoidal set of acceptable final state values. They also give the size of that uncertainty. They could, for instance, be used to indicate the change in nominal future interceptor state and the reduction in its uncertainty required to ensure that the interceptor comes close enough to the threat to guarantee successful interception.

4.4 Interception employing worst-case-reachable set

Computation of the worst-case-reachable set of the interceptor at each update allows the control sequence and choice of target to be optimized in full knowledge of the range of outcomes achievable. In the early stages of an engagement, the control can aim to keep as many objects as possible within reach of the interceptor or to keep specific objects reachable for as long as possible. Once the target is selected, a simple worst-case-optimal strategy is to make the reachable set at the predicted interception time cover, in the subspace of the state variables appearing in the control objective (including position components), as large an expanded version of the threat's feasible set as possible. That is, the range of future threat behaviour (about its predicted, perhaps optimistic, feasible set) over which interception is guaranteed is maximized. Section 3.3 gives the largest ellipsoidal internal approximation to the set reachable in the worst case from an ellipsoidal feasible set by additive, ellipsoidally bounded forcing.

5. Conclusions

A number of results, with some approximations to reduce computational load, have been presented for deriving bounds on scalar state-dependent quantities from the ellipsoidal bounds produced by standard state-bounding algorithms. Tests of intersection and inclusion of ellipsoids have been based on them. An algorithm for computing an ellipsoidal worst-case-reachable set has also been obtained, and some ways in which these results might be employed in interception problems have been discussed to illustrate their potential.

Acknowledgements

This material results in part from work supported by the UK Ministry of Defence. The advice and encouragement provided by Martyn Bennett of dstl, Farnborough, are gratefully acknowledged.

References

[1] Chernous'ko, F.L., 1981, Optimal guaranteed estimates of indeterminacies with the aid of ellipsoids, I, II. Engineering Cybernetics, 18, 1–9.
[2] Chernousko, F.L., 1994, State Estimation for Dynamic Systems (Boca Raton, FL: CRC Press).
[3] Durieu, C., Polyak, B.T. and Walter, E., 1996, Paper presented at the Proceedings of the 13th Triennial IFAC World Congress, San Francisco, CA.
[4] Kurzhanski, A.B. and Valyi, I., 1997, Ellipsoidal Calculus for Estimation and Control (Boston, MA: Birkhäuser).
[5] Maksarov, D.G. and Norton, J.P., 1996, State bounding with ellipsoidal set description of the uncertainty. International Journal of Control, 65, 847–866.
[6] Maksarov, D.G. and Norton, J.P., 2002, Computationally efficient algorithms for state estimation with ellipsoidal approximations. International Journal of Adaptive Control & Signal Processing, 16, 411–434.
[7] Pronzato, L. and Walter, E., 1994, Minimum-volume ellipsoids containing compact sets: application to parameter bounding. Automatica, 30, 1731–1739.
[8] Schweppe, F.C., 1968, Recursive state estimation: unknown but bounded errors and system inputs. IEEE Transactions on Automatic Control, AC-13, 22–28.
[9] O'Connor, J.J. and Robertson, E.F., 1996, Quadratic, cubic and quartic equations. Available online at: http://www-history.mcs.st-andrews.ac.uk/history/HistTopics/Quadratic_etc_equations.html (accessed 16 December 2003).
[10] Anonymous news release, 10 July 2000, New research analyzes the game of missile interception. American Technion Society. Available online at: http://www.ats.org/news.php?id=13 (accessed 10 March 2004).