A General Multivariate Formulation of the Multi-Antenna GNSS Attitude Determination Problem

Global Navigation Satellite System (GNSS) carrier phase ambiguity resolution is the key to high-precision positioning, navigation and attitude determination. In this contribution we present a general formulation for the multi-antenna GNSS attitude determination problem. This multivariate formulation provides a general framework for solving various GNSS attitude determination problems. With the use of this formulation we show how the constrained integer least-squares carrier phase ambiguities and the corresponding attitude matrix can be solved for.

INTRODUCTION

In this contribution we consider the problem of ambiguity resolution for GNSS attitude determination, see e.g. (Peng et al., 1999), (Park and Teunissen, 2003), (Moon and Verhagen, 2006), (Teunissen, 2006). GNSS ambiguity resolution is the process of resolving the unknown cycle ambiguities of the carrier phase data as integers. The sole purpose of ambiguity resolution is to use the integer ambiguity constraints as a means of improving significantly on the precision of the remaining model parameters. Apart from the current Global Positioning System (GPS) models, carrier phase ambiguity resolution also applies to the future modernized GPS and the future European Galileo GNSS. An overview of GNSS models, together with their applications in surveying, navigation, geodesy and geophysics, can be found in textbooks such as (Hofmann-Wellenhof et al., 1997), (Leick, 1995), (Parkinson and Spilker, 1996), (Strang and Borre, 1997) and (Teunissen and Kleusberg, 1998).

Attitude determination based on GNSS is a rich field of current studies, with a wide variety of challenging (terrestrial, air and space) applications. In the present contribution we introduce a general multivariate formulation for the GNSS attitude determination problem and show how the corresponding integer ambiguity resolution problems can be solved. Our formulation applies to an arbitrary multi-antenna situation (2, 3, 4 or more antennas) and it encompasses all GNSS configurations (data type, frequencies, integration of GNSSs). This formulation has the advantage that it provides one general approach to the various individual cases and moreover shows how these individual cases are interrelated.

This contribution is organised as follows. In Section 2, we give a very brief review of the unconstrained integer ambiguity resolution problem. In Section 3, we introduce the multivariate formulation of the GNSS multiple antenna attitude model. In Section 4, we present orthogonal decompositions of the objective function and show how the overall solution can be computed by means of intermediate steps. Finally, in Section 5 we generalize the results to the multi-epoch case.

UNCONSTRAINED INTEGER AMBIGUITY RESOLUTION

The GNSS baseline model

In principle all GNSS baseline models can be cast in the following frame of linear(ized) observation equations,

  E(y) = Az + Gb, \quad D(y) = Q_y    (1)

where E(.) and D(.) denote the expectation and dispersion operator, y is the given GNSS data vector of order m, z and b are the unknown parameter vectors of order n and p, and where A and G are the given design matrices that link the data vector to the unknown parameters. The geometry matrix G contains the unit line-of-sight vectors. The variance matrix of y is given by the positive definite matrix Q_y, which is assumed known.
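Ignoring for the moment that z is integer, (1) is a standard Gauss-Markov model whose weighted least-squares solution is straightforward. The following minimal numpy sketch uses toy dimensions and randomly generated A, G and Q_y; all values are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 10, 4, 3                        # orders of y, z and b
A = rng.standard_normal((m, n))           # toy ambiguity design matrix
G = rng.standard_normal((m, p))           # toy geometry matrix
L = rng.standard_normal((m, m))
Qy = L @ L.T + m * np.eye(m)              # toy positive definite variance matrix

z_true = rng.integers(-5, 6, n).astype(float)
b_true = rng.standard_normal(p)
y = A @ z_true + G @ b_true + 0.01 * rng.standard_normal(m)

# Weighted least squares on the stacked design [A G]: the 'float' solution
H = np.hstack([A, G])
W = np.linalg.inv(Qy)                     # metric Q_y^{-1}
x = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
z_float, b_float = x[:n], x[n:]
```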
The data vector y will usually consist of the 'observed minus computed' single- or multiple-frequency double-difference (DD) phase and/or pseudorange (code) observations accumulated over all observation epochs. The entries of vector z are then the DD carrier phase ambiguities, expressed in units of cycles rather than range. They are known to be integers, z ∈ Z^n. The entries of the vector b will consist of the remaining unknown parameters, such as for instance baseline components (coordinates) and possibly atmospheric delay parameters (troposphere, ionosphere). They are known to be real-valued, b ∈ R^p.

Integer least-squares estimation

When solving the GNSS model (1), one usually applies the least-squares principle. This amounts to solving the following minimization problem,

  \min_{z \in \mathbb{Z}^n,\, b \in \mathbb{R}^p} \|y - Az - Gb\|^2_{Q_y}    (2)

with the weighted squared norm \|\cdot\|^2_{Q_y} = (\cdot)^T Q_y^{-1} (\cdot). This problem has been called a (mixed) integer least-squares (ILS) problem in Teunissen (1993). To solve this problem, we first apply an orthogonal decomposition and write the quadratic form of (2) as a sum of three squares,

  \|y - Az - Gb\|^2_{Q_y} = \|\hat{e}\|^2_{Q_y} + \|\hat{z} - z\|^2_{Q_{\hat{z}}} + \|\hat{b}(z) - b\|^2_{Q_{\hat{b}(z)}}    (3)

with

  \hat{e} = y - A\hat{z} - G\hat{b}
  \hat{z} = (\bar{A}^T Q_y^{-1} \bar{A})^{-1} \bar{A}^T Q_y^{-1} y
  \hat{b} = (\bar{G}^T Q_y^{-1} \bar{G})^{-1} \bar{G}^T Q_y^{-1} y
  \hat{b}(z) = (G^T Q_y^{-1} G)^{-1} G^T Q_y^{-1} (y - Az)    (4)

where \bar{A} = P_G^{\perp} A and \bar{G} = P_A^{\perp} G, with the orthogonal projectors P_G^{\perp} = I - P_G, P_G = G(G^T Q_y^{-1} G)^{-1} G^T Q_y^{-1}, P_A^{\perp} = I - P_A and P_A = A(A^T Q_y^{-1} A)^{-1} A^T Q_y^{-1}. The matrix P_G is the orthogonal projector that projects orthogonally onto the range of G (with respect to the metric of Q_y^{-1}). Similarly, P_A is the orthogonal projector that projects orthogonally onto the range of A. The variance matrices of ẑ, b̂ and b̂(z) are given as

  Q_{\hat{z}} = (\bar{A}^T Q_y^{-1} \bar{A})^{-1}
  Q_{\hat{b}} = (\bar{G}^T Q_y^{-1} \bar{G})^{-1}
  Q_{\hat{b}(z)} = (G^T Q_y^{-1} G)^{-1}    (5)

The vectors ẑ and b̂ are referred to as the float ambiguity solution and the float baseline solution, respectively. They follow when one solves (2) without the integer constraints z ∈ Z^n. The vector ê is the least-squares residual vector that corresponds with this float solution.

In our case we need to take the integer constraints z ∈ Z^n into account. It follows, with (3), from (2) that

  \min_{z \in \mathbb{Z}^n,\, b \in \mathbb{R}^p} \|y - Az - Gb\|^2_{Q_y}
    = \|\hat{e}\|^2_{Q_y} + \min_{z \in \mathbb{Z}^n,\, b \in \mathbb{R}^p} \left( \|\hat{z} - z\|^2_{Q_{\hat{z}}} + \|\hat{b}(z) - b\|^2_{Q_{\hat{b}(z)}} \right)
    = \|\hat{e}\|^2_{Q_y} + \min_{z \in \mathbb{Z}^n} \left( \|\hat{z} - z\|^2_{Q_{\hat{z}}} + \min_{b \in \mathbb{R}^p} \|\hat{b}(z) - b\|^2_{Q_{\hat{b}(z)}} \right)    (6)

Note that the last term can be made zero for any z. Hence, the sought-for solution is given as

  \check{z} = \arg\min_{z \in \mathbb{Z}^n} \|\hat{z} - z\|^2_{Q_{\hat{z}}}, \quad \check{b} = \hat{b}(\check{z})    (7)

The vectors ž and b̌ are often referred to as the fixed ambiguity solution and the fixed baseline solution, respectively.

The LAMBDA method

An integer search is needed to compute ž. Although we will refrain from discussing the computational intricacies of ILS estimation, the conceptual steps of the computational procedure will very briefly be described. The ILS procedure is mechanized in the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method (Teunissen, 1995). The main steps are as follows. One starts by defining the ambiguity search space

  \Omega_z = \{ z \in \mathbb{Z}^n \mid (\hat{z} - z)^T Q_{\hat{z}}^{-1} (\hat{z} - z) \le \chi^2 \}    (8)

with χ² a suitably chosen positive constant. In order for the search space not to contain too many integer vectors, a small value for χ² is required, but one that still guarantees that the search space contains at least one integer grid point. The boundary of the search space Ω_z is ellipsoidal. It is centred at ẑ and its shape is governed by the variance matrix Q_ẑ.
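The projector formulas (4)-(5) and the three-term decomposition (3) are easy to verify numerically. The sketch below continues the toy example above (same A, G, Qy, W, y, m, p and rng) and checks (3) for an arbitrary integer z and real b.

```python
import numpy as np

def proj(X, W):
    """Orthogonal projector onto range(X) w.r.t. the metric W = Qy^{-1}."""
    return X @ np.linalg.solve(X.T @ W @ X, X.T @ W)

I = np.eye(m)
Abar = (I - proj(G, W)) @ A                    # A-bar = P_G-perp A
Gbar = (I - proj(A, W)) @ G                    # G-bar = P_A-perp G

z_hat = np.linalg.solve(Abar.T @ W @ Abar, Abar.T @ W @ y)   # float ambiguities, eq. (4)
b_hat = np.linalg.solve(Gbar.T @ W @ Gbar, Gbar.T @ W @ y)   # float baseline, eq. (4)
e_hat = y - A @ z_hat - G @ b_hat                            # float residual
Qz  = np.linalg.inv(Abar.T @ W @ Abar)                       # Q_z-hat, eq. (5)
Qbz = np.linalg.inv(G.T @ W @ G)                             # Q_b-hat(z), eq. (5)
b_cond = lambda z: Qbz @ (G.T @ W @ (y - A @ z))             # b-hat(z), eq. (4)

sq = lambda v, Qv: v @ np.linalg.solve(Qv, v)                # ||v||^2_Q = v^T Q^{-1} v
z0 = np.round(z_hat); b0 = rng.standard_normal(p)            # any integer z, any real b
lhs = sq(y - A @ z0 - G @ b0, Qy)
rhs = sq(e_hat, Qy) + sq(z_hat - z0, Qz) + sq(b_cond(z0) - b0, Qbz)
assert np.allclose(lhs, rhs)                                 # the sum of three squares, eq. (3)
```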
In the case of GNSS, the search space is usually extremely elongated, due to the high correlations between the ambiguities. Since this extreme elongation usually hinders the computational efficiency of the search, the search space is first transformed to a more spherical shape,

  \Omega_a = \{ a \in \mathbb{Z}^n \mid (\hat{a} - a)^T Q_{\hat{a}}^{-1} (\hat{a} - a) \le \chi^2 \}    (9)

using an admissible ambiguity transformation: a = Tz, \hat{a} = T\hat{z}, Q_{\hat{a}} = T Q_{\hat{z}} T^T. Ambiguity transformations T are said to be admissible when both T and its inverse T^{-1} have integer entries. Such matrices preserve the integer nature of the ambiguities. In order for the transformed search space to become more spherical, the volume-preserving T-transformation is constructed as a transformation that decorrelates the ambiguities as much as possible. Using the triangular decomposition of Q_â, the left-hand side of the quadratic inequality in (9) is then written as a sum of squares:

  \sum_{i=1}^{n} \frac{(\hat{a}_{i|I} - a_i)^2}{\sigma^2_{i|I}} \le \chi^2    (10)

On the left-hand side one recognizes the conditional least-squares estimate â_{i|I} and its variance σ²_{i|I}, which follow when the conditioning takes place on the integers a_1, a_2, ..., a_{i-1} (thus the shorthand notation i|I stands for i|1, 2, ..., i-1, and n|N for n|1, 2, ..., n-1). Using the sum-of-squares structure, one can finally set up the n intervals which are used for the search. These n sequential intervals are given as

  (\hat{a}_1 - a_1)^2 \le \sigma^2_1 \chi^2, \;\ldots,\; (\hat{a}_{n|N} - a_n)^2 \le \sigma^2_{n|N} \left( \chi^2 - \sum_{i=1}^{n-1} \frac{(\hat{a}_{i|I} - a_i)^2}{\sigma^2_{i|I}} \right)    (11)

For more information on the LAMBDA method, we refer to e.g. (Teunissen, 1993), (Teunissen, 1995) and (de Jonge and Tiberius, 1996), or to the textbooks (Hofmann-Wellenhof et al., 1997), (Strang and Borre, 1997), (Teunissen and Kleusberg, 1998), (Misra and Enge, 2006).
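The sum-of-squares structure (10) is what makes the sequential search cheap. A minimal stand-in that exploits the same structure is integer bootstrapping: round the first conditional estimate, adjust the remaining conditional estimates, and repeat. The sketch below is not the LAMBDA method itself (it omits the decorrelating T-transformation and the actual interval search of (11)); it only illustrates the conditional rounding, with toy input values.

```python
import numpy as np

def bootstrap(a_hat, Q):
    """Sequential conditional rounding on the L D L^T factorization of Q.

    Returns the bootstrapped integer vector and the conditional
    variances sigma^2_{i|I} that appear in (10)-(11)."""
    a_hat = np.asarray(a_hat, dtype=float).copy()
    R = np.linalg.cholesky(Q)             # Q = R R^T, R lower triangular
    L = R / np.diag(R)                    # unit lower triangular factor: Q = L D L^T
    d = np.diag(R) ** 2                   # conditional variances sigma^2_{i|I}
    a_fix = np.zeros_like(a_hat)
    for i in range(len(a_hat)):
        a_fix[i] = np.round(a_hat[i])     # fix a_i at its conditional estimate
        # propagate the fixing into the still-unfixed conditional estimates
        a_hat[i+1:] -= L[i+1:, i] * (a_hat[i] - a_fix[i])
    return a_fix, d

# demo on a small, strongly correlated ambiguity variance matrix (toy values)
Q = np.array([[6.290, 5.978, 0.544],
              [5.978, 6.292, 2.340],
              [0.544, 2.340, 6.288]])
a_fix, d = bootstrap([5.450, 3.100, 2.970], Q)
```

On a decorrelated Q_â the bootstrapped solution often coincides with the ILS minimizer; on the raw, highly correlated GNSS ambiguities it generally does not, which is precisely why LAMBDA decorrelates first.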
THE GNSS MULTIPLE ANTENNA ATTITUDE MODEL

A matrix formulation

Let us assume that n + 1 antennas simultaneously track m + 1 GNSS satellites. The DD phase and code data observed by antenna pair i are collected in the vector y_i. We assume that all the data are observed at a single epoch and on a single frequency. The material that follows is easily extended to the multiple frequency case. The multiple epoch case is considered in Section 5. Hence, the vector y_i will be of order 2m (m DD phase observations and m DD code observations). Since there are n + 1 antennas, there are n independent antenna pairs and therefore n data vectors y_i. These data vectors will be collected in the 2m × n data matrix Y = [y_1, ..., y_n]. We assume that the atmospheric delays can be neglected and that the separation between the antennas is such that their line-of-sight vectors to the same satellite are the same. Based on the above assumptions, the linear(ized) GNSS multiple antenna model can be formulated as

  E(Y) = AZ + GB, \quad Z \in \mathbb{Z}^{m \times n}, \; B \in \mathbb{R}^{3 \times n}    (12)

with Z = [z_1, ..., z_n] the m × n matrix of the n unknown DD integer ambiguity vectors z_i, B = [b_1, ..., b_n] the 3 × n matrix of the n unknown (incremental) baseline vectors b_i, G the 2m × 3 geometry matrix that contains the unit line-of-sight vectors, and A the 2m × m matrix that links the DD data to the integer ambiguities. The sets Z^{m×n} and R^{3×n} denote the set of integer matrices of order m × n and the set of real matrices of order 3 × n, respectively. Compare (12) with (1).

The goal of attitude determination is to determine the attitude of a vehicle with respect to a user-defined navigation frame, such as e.g. the ECEF frame (Earth Centred Earth Fixed) or the ENU frame (East North Up). We therefore assume that the baseline vectors b_i are expressed in such a user-defined navigation frame. Additionally, we assume that the n + 1 antennas are rigidly mounted on a vehicle and that their n baselines are known in the body frame of the vehicle. The n baselines, when expressed in the body frame, are denoted as f_i, i = 1, ..., n, and we collect them in the 3 × n matrix F^T = [f_1, ..., f_n] (note: the use of the transpose in F^T is simply a matter of notational convenience for the remaining material). The two different expressions of the n baselines are related by a three-dimensional rotation matrix R (R^T R = I_3, det R = 1). Thus b_i = R f_i, i = 1, ..., n, or B = R F^T. Substitution into (12) gives

  E(Y) = AZ + G R F^T    (13)

The unknowns in this matrix observation equation are Z and R.

A body frame representation

It will be clear that R is underdetermined in the two-antenna or single baseline case (n = 1). This case, however, does still allow us to obtain attitude information, namely heading and elevation (or yaw and pitch). To be able to include the case n < 3, without affecting the generality of our approach, we give some additional structure to the 3 × n matrix F^T. We assume that the first three baselines define the (not necessarily orthogonal) axes of the body frame and that they are represented in this body frame as an upper-triangular matrix,

  [f_1, f_2, f_3] = \begin{bmatrix} f_{11} & f_{21} & f_{31} \\ 0 & f_{22} & f_{32} \\ 0 & 0 & f_{33} \end{bmatrix}    (14)

Hence, if we denote the orthonormal columns of R as r_i, i = 1, 2, 3, then

  For n = 1: R F^T = r_1 f_{11} =: R_1 F_1^T
  For n = 2: R F^T = [r_1, r_2] \begin{bmatrix} f_{11} & f_{21} \\ 0 & f_{22} \end{bmatrix} =: R_2 F_2^T
  For n = 3: R F^T = [r_1, r_2, r_3] \begin{bmatrix} f_{11} & f_{21} & f_{31} \\ 0 & f_{22} & f_{32} \\ 0 & 0 & f_{33} \end{bmatrix} =: R_3 F_3^T
  For n > 3: R F^T = [r_1, r_2, r_3] \begin{bmatrix} f_{11} & f_{21} & f_{31} & f_{41} & \cdots & f_{n1} \\ 0 & f_{22} & f_{32} & f_{42} & \cdots & f_{n2} \\ 0 & 0 & f_{33} & f_{43} & \cdots & f_{n3} \end{bmatrix} =: R_3 F_n^T    (15)

Thus with q = n if n ≤ 3 and q = 3 if n > 3, we have that R_q is a 3 × q matrix with orthonormal columns and that F_n^T is a q × n matrix. With the notation of (15), we can write (13) as

  E(Y) = AZ + G R_q F_n^T, \quad Z \in \mathbb{Z}^{m \times n}, \; R_q \in \mathbb{O}^{3 \times q}    (16)

where O^{3×q} denotes the set of 3 × q matrices that have orthonormal columns. With this formulation, we have eliminated the underdeterminedness that occurs in (13) when n < 3. Both matrices R_q and F_n have full column rank q.

The model in standard form

We can now bring (16) into the form of (1) if we make use of the vec-operator. The vec-operator is a linear transformation which transforms a matrix into a vector by stacking the columns of the matrix one underneath the other. We will make frequent use of the following two properties of the vec-operator: vec(ABC) = (C^T ⊗ A) vec(B) and trace(ABCD) = (vec(D^T))^T (C^T ⊗ A) vec(B), where A, B, C and D are four matrices of appropriate order, and ⊗ is the Kronecker product. Application of the vec-operator to (16) gives

  E(vecY) = (I_n \otimes A)\, vecZ + (F_n \otimes G)\, vecR_q, \quad Z \in \mathbb{Z}^{m \times n}, \; R_q \in \mathbb{O}^{3 \times q}    (17)

If we compare with (1), we have the following correspondence: y ↔ vecY, A ↔ (I_n ⊗ A), z ↔ vecZ, G ↔ (F_n ⊗ G) and b ↔ vecR_q. There is, however, one important difference with (1): in (17) we have the additional constraints R_q ∈ O^{3×q}.

The 2mn equations of (17) constitute the observation equations for our attitude determination problem. However, in order to be able to estimate Z and R_q, we also need to know the dispersion of vecY. It is given as

  D(vecY) = Q_{vecY} = P \otimes Q    (18)

in which P and Q are known matrices of order n × n and 2m × 2m, respectively. Matrix P takes care of the correlation that follows from the fact that the n baselines have one antenna in common, and matrix Q takes care of the precision of the phase and code data, combined with the fact that these DD data have data from the same reference satellite in common (Odijk, 2002).
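Both vec identities used above are easy to sanity-check numerically. The following minimal numpy sketch (toy shapes only) does exactly that, using column-major flattening for vec.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 5))
D = rng.standard_normal((5, 4))

vec = lambda M: M.flatten(order='F')      # stack the columns of M

# vec(ABC) = (C^T kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

# trace(ABCD) = vec(D^T)^T (C^T kron A) vec(B)
assert np.allclose(np.trace(A @ B @ C @ D), vec(D.T) @ np.kron(C.T, A) @ vec(B))
```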
INTEGER LEAST-SQUARES ATTITUDE ESTIMATION

Orthogonal decomposition of the objective function

It is our goal to solve (17), with (18), in a least-squares sense, thereby taking the constraints on Z and R_q into account. In analogy with (6), we have the following minimization problem

  \min_{Z \in \mathbb{Z}^{m \times n},\, R_q \in \mathbb{O}^{3 \times q}} \|vecY - (I_n \otimes A)\, vecZ - (F_n \otimes G)\, vecR_q\|^2_{Q_{vecY}}
    = \|vec\hat{E}\|^2_{Q_{vecY}} + \min_{Z \in \mathbb{Z}^{m \times n},\, R_q \in \mathbb{O}^{3 \times q}} \left( \|vec\hat{Z} - vecZ\|^2_{Q_{vec\hat{Z}}} + \|vec\hat{R}_q(Z) - vecR_q\|^2_{Q_{vec\hat{R}_q(Z)}} \right)
    = \|vec\hat{E}\|^2_{Q_{vecY}} + \min_{Z \in \mathbb{Z}^{m \times n}} \left( \|vec\hat{Z} - vecZ\|^2_{Q_{vec\hat{Z}}} + \min_{R_q \in \mathbb{O}^{3 \times q}} \|vec\hat{R}_q(Z) - vecR_q\|^2_{Q_{vec\hat{R}_q(Z)}} \right)    (19)

where the minimization is taken with respect to the integer matrix Z and the orthonormal matrix R_q. The least-squares residual matrix Ê is given as Ê = Y - AẐ - GR̂_q F_n^T, in which Ẑ and R̂_q are the least-squares estimators of Z and R_q, respectively, without taking the constraints Z ∈ Z^{m×n} and R_q ∈ O^{3×q} into account. The 3 × q matrix R̂_q(Z) is the least-squares estimator of R_q, assuming that Z is known, but without using the constraints R_q ∈ O^{3×q}. Note that, in contrast to (6), the third term on the right-hand side of the last equation of (19) cannot be made zero for any Z. This is due to the constraints on R_q. As we will see, this complicates the integer search for the DD ambiguities, as compared to the integer search needed for solving the GNSS baseline model (1). If we define

  \bar{R}_q(Z) = \arg\min_{R_q \in \mathbb{O}^{3 \times q}} \|vec\hat{R}_q(Z) - vecR_q\|^2_{Q_{vec\hat{R}_q(Z)}}    (20)

then the integer least-squares estimator of Z is given as the minimizer

  \check{Z} = \arg\min_{Z \in \mathbb{Z}^{m \times n}} \left( \|vec\hat{Z} - vecZ\|^2_{Q_{vec\hat{Z}}} + \|vec\hat{R}_q(Z) - vec\bar{R}_q(Z)\|^2_{Q_{vec\hat{R}_q(Z)}} \right)    (21)

from which the final sought-for solution for R_q follows as

  \check{R}_q = \bar{R}_q(\check{Z})    (22)

From this 3 × q matrix the necessary attitude information can be recovered. The computation of Ž can be done efficiently by means of the method described in (Teunissen, 2006) and (Park and Teunissen, 2007). It is based on an integer search, with the search space given as

  \Omega = \left\{ Z \in \mathbb{Z}^{m \times n} \,\middle|\, \|vec\hat{Z} - vecZ\|^2_{Q_{vec\hat{Z}}} + \|vec\hat{R}_q(Z) - vec\bar{R}_q(Z)\|^2_{Q_{vec\hat{R}_q(Z)}} \le \chi^2 \right\}    (23)

where χ² is a suitably chosen positive constant. We will now show how the intermediate solutions and the final attitude solution can be computed.

Intermediate least-squares solutions

We first derive the expressions for vecẐ, vecR̂_q, vecR̂_q(Z) and their variance matrices. To determine vecẐ and vecR̂_q, we need to consider (17) without the integer constraints on Z and without the orthonormality constraints on R_q. The normal equations of the least-squares solution are then given as

  \begin{bmatrix} P^{-1} \otimes A^T Q^{-1} A & P^{-1} F_n \otimes A^T Q^{-1} G \\ F_n^T P^{-1} \otimes G^T Q^{-1} A & F_n^T P^{-1} F_n \otimes G^T Q^{-1} G \end{bmatrix} \begin{bmatrix} vec\hat{Z} \\ vec\hat{R}_q \end{bmatrix} = \begin{bmatrix} (P^{-1} \otimes A^T Q^{-1})\, vecY \\ (F_n^T P^{-1} \otimes G^T Q^{-1})\, vecY \end{bmatrix}    (24)

If we solve for vecR̂_q, and thus for R̂_q, the least-squares solution of R_q follows as

  \hat{R}_q = (\bar{G}^T Q^{-1} \bar{G})^{-1} \bar{G}^T Q^{-1} Y P^{-1} F_n (F_n^T P^{-1} F_n)^{-1}    (25)

with Ḡ = P_A^⊥ G and P_A^⊥ = I_{2m} - A(A^T Q^{-1} A)^{-1} A^T Q^{-1}.
The variance matrix of vecR̂_q is given as

  Q_{vec\hat{R}_q} = (F_n^T P^{-1} F_n)^{-1} \otimes (\bar{G}^T Q^{-1} \bar{G})^{-1}    (26)

Similarly, if Z is assumed known, we find the least-squares solution of R_q as

  \hat{R}_q(Z) = (G^T Q^{-1} G)^{-1} G^T Q^{-1} (Y - AZ)\, P^{-1} F_n (F_n^T P^{-1} F_n)^{-1}    (27)

The variance matrix of vecR̂_q(Z) is given as

  Q_{vec\hat{R}_q(Z)} = (F_n^T P^{-1} F_n)^{-1} \otimes (G^T Q^{-1} G)^{-1}    (28)

The solution vecẐ follows as

  vec\hat{Z} = \left[ I_n \otimes (A^T Q^{-1} A)^{-1} A^T Q^{-1} \right] \left( vecY - (F_n \otimes G)\, vec\hat{R}_q \right)    (29)

Its variance matrix is given as

  Q_{vec\hat{Z}} = \left[ (P^{-1} \otimes A^T Q^{-1})(I - P_{F_n} \otimes P_G)(I_n \otimes A) \right]^{-1}
                 = \left[ P^{-1} \otimes \bar{A}^T Q^{-1} \bar{A} + P^{-1} P_{F_n}^{\perp} \otimes \tilde{A}^T Q^{-1} \tilde{A} \right]^{-1}    (30)

with Ā = P_G^⊥ A, Ã = P_G A, P_G^⊥ = I - P_G, P_{F_n}^⊥ = I - P_{F_n}, P_G = G(G^T Q^{-1} G)^{-1} G^T Q^{-1}, and P_{F_n} = F_n (F_n^T P^{-1} F_n)^{-1} F_n^T P^{-1}.

The case n ≤ 3

The above least-squares solutions show a dependence on F_n, i.e. on the relative geometry of the antenna configuration as expressed in the body frame. This is of practical importance, since it allows one to improve the precision of these estimators by designing an appropriate geometry of the antenna configuration. In the case of four or fewer antennas (n ≤ 3), however, the situation is somewhat different. In that case the matrix F_n is square and invertible, see (15), which implies that P_{F_n} = I, P_{F_n}^⊥ = 0 and (F_n^T P^{-1} F_n)^{-1} F_n^T P^{-1} = F_n^{-1}. The least-squares solutions then reduce to

  \hat{R}_n = (\bar{G}^T Q^{-1} \bar{G})^{-1} \bar{G}^T Q^{-1} Y F_n^{-T}
  \hat{R}_n(Z) = (G^T Q^{-1} G)^{-1} G^T Q^{-1} (Y - AZ) F_n^{-T}    (31)

For the variance matrices describing the precision, we have

  Q_{vec\hat{R}_n} = F_n^{-1} P F_n^{-T} \otimes (\bar{G}^T Q^{-1} \bar{G})^{-1}
  Q_{vec\hat{R}_n(Z)} = F_n^{-1} P F_n^{-T} \otimes (G^T Q^{-1} G)^{-1}
  Q_{vec\hat{Z}} = P \otimes (\bar{A}^T Q^{-1} \bar{A})^{-1}    (32)

This shows that, although the precision of R̂_n and R̂_n(Z) still depends on the choice of antenna configuration, the precision of Ẑ is now independent of this configuration. The well-known fact that longer baselines improve the attitude capability is also clear from the above. For instance, for n = 1 we have Q_{vecR̂_1(Z)} = (P/f_{11}²) ⊗ (G^T Q^{-1} G)^{-1}, with f_{11} the length of the baseline.

Solving for the attitude matrix

Nonlinear least-squares

The computation of R̄_q(Z), see (20), requires for each Z that one solves the minimization problem

  \min_{R_q \in \mathbb{O}^{3 \times q}} \|vec\hat{R}_q(Z) - vecR_q\|^2_{Q_{vec\hat{R}_q(Z)}}    (33)

This minimization problem is a nonlinear least-squares problem due to the nonlinear constraints on vecR_q. Geometrically it amounts to finding the point on a nonlinear or curved manifold which has the smallest distance to the given data vector, which in our case is vecR̂_q(Z). Distance is here measured with respect to the metric of the 3q × 3q variance matrix Q_{vecR̂_q(Z)}. If n = 1, we have one constraint, namely the unit length of R_1, and vecR_1 is of order 3 × 1. Hence, in this case the curved manifold is of dimension 2 and is embedded in the 3-dimensional space. If n = 2, we have three constraints, namely one orthogonality and two unit length constraints on the two columns of R_2, and vecR_2 is of order 6 × 1. Hence, in this case the curved manifold is of dimension 3 and is embedded in the 6-dimensional space. If n ≥ 3, we have six constraints, namely three orthogonality and three unit length constraints on the three columns of R_3, and vecR_3 is of order 9 × 1. Hence, in this case the curved manifold is of dimension 3 and is embedded in the 9-dimensional space. The constraints R_q ∈ O^{3×q} can be taken into account by a suitable parametrization of R_q (e.g. by using one of the different ways in which rotations can be parametrized).
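As a concreteness check, (27) and (28) can be evaluated directly with numpy. Everything below is a toy setup: the wavelength, the noise-free data, and the matrices A, G, P, F_n are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 3                                        # m+1 satellites, n+1 antennas; q = n = 3
lam = 0.19                                         # carrier wavelength in metres (illustrative)
G = rng.standard_normal((2*m, 3))                  # toy geometry matrix
A = np.vstack([lam*np.eye(m), np.zeros((m, m))])   # phase carries the ambiguities, code does not
Q = np.diag([0.003**2]*m + [0.3**2]*m)             # toy phase/code variances
P = np.eye(n) + np.ones((n, n))                    # toy correlation: baselines share an antenna
FnT = np.triu(rng.standard_normal((3, n)))         # body-frame baselines, upper triangular as in (14)
Fn = FnT.T

R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthonormal attitude matrix
Z = rng.integers(-20, 20, (m, n)).astype(float)
Y = A @ Z + G @ R_true @ FnT                       # noise-free data for the check

Qi, Pi = np.linalg.inv(Q), np.linalg.inv(P)
Ninv = np.linalg.inv(Fn.T @ Pi @ Fn)
R_hat_Z = np.linalg.solve(G.T @ Qi @ G, G.T @ Qi @ (Y - A @ Z)) @ Pi @ Fn @ Ninv   # eq. (27)
Q_vecRhatZ = np.kron(Ninv, np.linalg.inv(G.T @ Qi @ G))                            # eq. (28)
assert np.allclose(R_hat_Z, R_true)                # exact recovery on noise-free data
```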
Once this is done, one has an unconstrained nonlinear least-squares problem, which can be solved iteratively by means of the Gauss-Newton method (with a local linear rate of convergence) or the Newton method (with a local quadratic rate of convergence), see e.g. (Teunissen, 1990). These iterations are initialized with an appropriate approximation of R_q (see below).

Using the singular value decomposition

Under certain conditions it is also possible to solve (33) directly by means of a singular value decomposition (SVD). Let M = G^T Q^{-1} G and N = F_n^T P^{-1} F_n. Then Q_{vecR̂_q(Z)} = N^{-1} ⊗ M^{-1} and the objective function of (33) can be written as

  \|vec\hat{R}_q(Z) - vecR_q\|^2_{Q_{vec\hat{R}_q(Z)}} = \mathrm{trace}\left( [\hat{R}_q(Z) - R_q]^T M [\hat{R}_q(Z) - R_q] N \right)
    = \mathrm{trace}\left( \hat{R}_q(Z)^T M \hat{R}_q(Z) N \right) - 2\,\mathrm{trace}\left( N \hat{R}_q(Z)^T M R_q \right) + \mathrm{trace}\left( R_q^T M R_q N \right)    (34)

This function is, in general, quadratic in R_q. It will be linear in R_q if the last term of the above equation becomes independent of R_q.

The quadratic case

With one exception, the quadratic case requires iterative solution techniques such as the ones described above. The exception occurs when n = 1. Recall that matrix N is a q × q matrix, with q = n if n ≤ 3 and q = 3 if n > 3. Thus N is a scalar if n = 1. The above minimization problem can then be written as

  \min_{R_1 \in \mathbb{R}^3} N \|\hat{R}_1(Z) - R_1\|^2_{M^{-1}} \quad \text{subject to} \quad R_1^T R_1 = 1    (35)

This problem can be solved by means of the singular value decomposition (SVD), see e.g. (Golub and Van Loan, 1989). We can also formulate problem (35) as the problem of projecting orthogonally (with respect to the standard Euclidean metric) onto an ellipsoid. To see this, let \hat{\zeta}(Z) = M^{1/2} \hat{R}_1(Z) and \zeta = M^{1/2} R_1. Then (35) is equivalent to

  \min_{\zeta \in \mathbb{R}^3} N \|\hat{\zeta}(Z) - \zeta\|^2_{I_3} \quad \text{subject to} \quad \zeta^T M^{-1} \zeta = 1    (36)

This problem of finding the closest point on a 2-dimensional ellipsoid is equivalent to the problem of computing geodetic coordinates from Cartesian coordinates. The case n = 1 occurs when one wants to determine heading and elevation (or yaw and pitch) from one GNSS baseline (two antennas). This method was followed in (Park and Teunissen, 2003).

The linear case

The objective function becomes linear in R_q when M is a scaled unit matrix, M = αI_3, or when N is a scaled unit matrix, N = βI_q, where α, β are scalars and n ≥ 3. When M = αI_3, we have trace(R_q^T M R_q N) = α trace(N), which is independent of R_q. And when N = βI_q, we have trace(R_q^T M R_q N) = β trace(M R_q R_q^T) = β trace(M), if q = 3 and thus if n ≥ 3. If (34) is linear in R_q, then the minimization of the objective function is equivalent to solving a maximization problem of the form

  \max_{R_q \in \mathbb{O}^{3 \times q}} \mathrm{trace}(T R_q)    (37)

with the q × 3 matrix T given. If M = αI_3, then T = N R̂_q(Z)^T, which is a q × 3 matrix. And when N = βI_q and n ≥ 3, then T = R̂_q(Z)^T M, which is a q × 3 matrix with q = 3. Let T = UΣV^T be the SVD of T, with orthogonal matrices U and V of order q × q and 3 × 3, respectively, and with the q × 3 matrix Σ = (Σ_q, 0), where Σ_q is the diagonal matrix containing the q singular values. Then the solution of (37) is given as

  \bar{R}_q = V \begin{bmatrix} I_q \\ 0 \end{bmatrix} U^T    (38)

To see this, we write trace(T R_q) = trace(UΣV^T R_q) = trace(ΣV^T R_q U) = Σ_{i=1}^{q} σ_i (V^T R_q U)_{ii}, in which the σ_i are the q nonnegative entries of the q × q diagonal matrix Σ_q in Σ = (Σ_q, 0). Note that the q columns of the 3 × q matrix V^T R_q U are orthonormal. Hence, trace(T R_q) is maximal if (V^T R_q U)_{ii} = 1 for i = 1, ..., q, and thus if V^T R_q U = [I_q; 0]. From this the result (38) follows.
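In the linear case, (37)-(38) amount to a single Procrustes-type SVD step. Below is a minimal numpy sketch (function and variable names are illustrative); the self-check takes M = I_3 and N = I_q, so the step recovers the closest orthonormal matrix to a noisy attitude estimate.

```python
import numpy as np

def attitude_from_T(T):
    """Solve max_{R in O^{3xq}} trace(T R) via the SVD, eq. (38).

    T is q x 3; returns the 3 x q solution R = V [I_q; 0] U^T."""
    q = T.shape[0]
    U, s, Vt = np.linalg.svd(T)                  # T = U diag(s) V^T, U: qxq, Vt: 3x3
    Iq0 = np.vstack([np.eye(q), np.zeros((3 - q, q))])
    return Vt.T @ Iq0 @ U.T

# quick self-check: recover a random orthonormal 3x3 matrix from a noisy estimate
rng = np.random.default_rng(3)
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_noisy = R_true + 0.05 * rng.standard_normal((3, 3))   # plays the role of R_q(Z)-hat
R_bar = attitude_from_T(R_noisy.T)               # linear case with N = I_q, M = I_3
assert np.allclose(R_bar.T @ R_bar, np.eye(3), atol=1e-12)   # orthonormal columns
```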
Designing the antenna configuration

In practice one does not have much control over the matrices G and Q in M = G^T Q^{-1} G. Matrix G is dictated by the relative receiver-satellite geometry and matrix Q is dictated by the precision of the observables. Hence, in practice, one cannot expect that M = αI_3 holds true. The corresponding method of solution given above is therefore no option for solving (33). At most one can use a scaled unit matrix as an approximation of G^T Q^{-1} G and thereby compute an approximate solution to (33), useful, for instance, for linearization purposes.

In contrast to M = G^T Q^{-1} G, one can, however, exercise control over N = F_n^T P^{-1} F_n by means of a suitable choice of the antenna configuration, that is, by means of choosing a suitable fixed body frame F_n. Let P = CC^T, with C a lower triangular matrix, be the Cholesky decomposition of P. Then P^{-1} = C^{-T} C^{-1}, with C^{-T} an upper triangular matrix. If we now choose F_n as

  F_n = C \begin{bmatrix} I_q \\ 0 \end{bmatrix}    (39)

then F_n^T P^{-1} F_n = I_q. With this choice for the antenna configuration, we have Q_{vecR̂_q(Z)} = I_q ⊗ (G^T Q^{-1} G)^{-1} and therefore

  \|vec\hat{R}_q(Z) - vecR_q\|^2_{Q_{vec\hat{R}_q(Z)}} = \sum_{i=1}^{q} \|\hat{r}_i(Z) - r_i\|^2_{(G^T Q^{-1} G)^{-1}}    (40)

The constrained minimizer of this objective function can then be found, for q = 1 and q = 3, with the SVD approach as discussed above.
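The design rule (39) is simple enough to verify in a few lines. The sketch below uses a toy correlation matrix P (an assumption, as before) and confirms that the resulting N equals I_q.

```python
import numpy as np

# Pick the body-frame baseline matrix F_n so that N = F_n^T P^{-1} F_n = I_q.
n, q = 5, 3
P = np.eye(n) + np.ones((n, n))          # toy P: baselines sharing one antenna
C = np.linalg.cholesky(P)                # P = C C^T, C lower triangular
Fn = C @ np.vstack([np.eye(q), np.zeros((n - q, q))])   # eq. (39)

N = Fn.T @ np.linalg.solve(P, Fn)        # F_n^T P^{-1} F_n
assert np.allclose(N, np.eye(q))         # the design indeed yields N = I_q
```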
MULTIPLE EPOCH ATTITUDE DETERMINATION

So far we have considered single epoch attitude determination. Now consider the multi-epoch, multi-antenna case. The matrix observation equation for epoch t is then given as

  E(Y_t) = A Z + G_t R_{q,t} F_n^T, \quad t = 1, \ldots, k    (41)

Matrices A and F_n are assumed to be time-invariant. This implies a constant tracking of the same satellites. Matrices G_t and R_{q,t} are assumed to depend on time. The geometry matrix G_t changes over time due to the changing relative receiver-satellite geometry, and the orthonormal matrix R_{q,t} changes over time because of the changing attitude of the vehicle. When we collect all observation equations for the k epochs, we may write them in vector-matrix form as

  E \begin{bmatrix} vecY_1 \\ \vdots \\ vecY_k \end{bmatrix} = \begin{bmatrix} I_n \otimes A & F_n \otimes G_1 & & \\ \vdots & & \ddots & \\ I_n \otimes A & & & F_n \otimes G_k \end{bmatrix} \begin{bmatrix} vecZ \\ vecR_{q,1} \\ \vdots \\ vecR_{q,k} \end{bmatrix}    (42)

The variance matrix is assumed to be block diagonal. Thus with Y = (Y_1, ..., Y_k), we have D(vecY) = blockdiag(P_1 ⊗ Q_1, ..., P_k ⊗ Q_k). In analogy with our earlier orthogonal decomposition of the objective function, we may decompose the objective function for (42) as

  H(Z, R_{q,1}, \ldots, R_{q,k}) = \|vec\hat{E}_k\|^2_{Q_{vecY}} + \|vec\hat{Z}_k - vecZ\|^2_{Q_{vec\hat{Z}_k}} + \sum_{i=1}^{k} \|vec\hat{R}_{q,i}(Z) - vecR_{q,i}\|^2_{Q_{vec\hat{R}_{q,i}(Z)}}    (43)

The unknowns are the m × n integer matrix Z and the 3 × q orthonormal matrices R_{q,i}, i = 1, ..., k. Matrix Ẑ_k is the real-valued least-squares estimator of Z, based on all the data up to and including epoch k, but without taking the integerness of Z and the orthonormality of the R_{q,i} into account. Matrix R̂_{q,i}(Z) is the least-squares estimator of R_{q,i}, again based on all the data up to and including epoch k and without taking the orthonormality constraints into account, but now assuming that Z is known. Note, due to the structure of (42) and the block diagonality of D(vecY), that R̂_{q,i}(Z) only depends on Y_i and not on Y_j for j ≠ i. This is also the reason why the last term in (43) can be written as a summation of k terms. The least-squares solution of (42) follows from minimizing the objective function H(Z, R_{q,1}, ..., R_{q,k}) subject to the integer constraints on Z and the orthonormality constraints on the R_{q,i}. Thus if we define

  \bar{R}_{q,i}(Z) = \arg\min_{R_{q,i} \in \mathbb{O}^{3 \times q}} \|vec\hat{R}_{q,i}(Z) - vecR_{q,i}\|^2_{Q_{vec\hat{R}_{q,i}(Z)}}, \quad i = 1, \ldots, k    (44)

then the integer least-squares estimator of Z is given as

  \check{Z}_k = \arg\min_{Z \in \mathbb{Z}^{m \times n}} \left( \|vec\hat{Z}_k - vecZ\|^2_{Q_{vec\hat{Z}_k}} + \sum_{i=1}^{k} \|vec\hat{R}_{q,i}(Z) - vec\bar{R}_{q,i}(Z)\|^2_{Q_{vec\hat{R}_{q,i}(Z)}} \right)    (45)

The sought-for least-squares estimators of the k orthonormal matrices R_{q,i}, based on all the data up to and including epoch k, are then given as

  \check{R}_{q,i} = \bar{R}_{q,i}(\check{Z}_k), \quad i = 1, \ldots, k    (46)

The integer solution Ž_k is obtained by means of an integer search, similar to the one described in the previous sections.

For practical applications it would be very useful if the above solution could be obtained in a recursive manner. It will be clear that the structure of (42) and D(vecY) is such that the least-squares estimator Ẑ_k can indeed be obtained recursively. It is not difficult to show that Ẑ_k can be obtained from Ẑ_{k-1} and Y_k. However, if we now consider (45), it will be clear that Ž_k cannot be obtained in a recursive manner. This is due to the orthonormality constraints on R_{q,i}, i = 1, ..., k. That is, in order to determine Ž_k, and thus also Ř_{q,k}, it does not suffice to have R̂_{q,k} available; we also need to store R̂_{q,i} for i < k. The conclusion is therefore reached that it is not possible to determine the integer least-squares estimator of R_{q,k} in a recursive manner if the orthonormality constraints on all k matrices R_{q,i} are used. Hence, a rigorous recursive integer least-squares solution of (42) is not possible.

The above analysis suggests, however, that it is possible to construct a recursive procedure, provided one is willing to accept a suboptimal integer solution for Z. We define this suboptimal solution as

  \tilde{Z}_k = \arg\min_{Z \in \mathbb{Z}^{m \times n}} \left( \|vec\hat{Z}_k - vecZ\|^2_{Q_{vec\hat{Z}_k}} + \|vec\hat{R}_{q,k}(Z) - vec\bar{R}_{q,k}(Z)\|^2_{Q_{vec\hat{R}_{q,k}(Z)}} \right)    (47)

This is the integer least-squares estimator of Z that follows when one does not take the orthonormality constraints on the matrices R_{q,i}, i = 1, ..., (k-1), into account. Compare (47) with (45). When use is made of (47), the updating steps in the recursion go as follows. From Ẑ_{k-1} and Y_k, we obtain Ẑ_k. Similarly, Y_k and Z give R̂_{q,k}(Z) and R̄_{q,k}(Z). From this information Z̃_k can be computed, which then finally leads to the output Ř_{q,k} = R̄_{q,k}(Z̃_k). This recursion is thus made possible by only taking into account the orthonormality constraints that pertain to the current epoch k.
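Although a rigorous recursive solution is ruled out, the suboptimal scheme based on (47) has a simple data flow. The following Python skeleton is purely schematic: update_float, attitude_cond, project_orthonormal and search_Z are hypothetical callables standing in for the recursive float update of Ẑ_k, eq. (27), eq. (44) and the integer search of (47); none of them is specified by the paper in code form.

```python
# Schematic recursion for the suboptimal multi-epoch solution (47).
# Only the data flow of the recursion is shown; the four callables
# are placeholders supplied by the user (assumptions, not paper API).
def recursive_attitude(epochs, update_float, attitude_cond,
                       project_orthonormal, search_Z):
    Z_float, Qz = None, None
    for k, Yk in enumerate(epochs, start=1):
        # 1. recursive float update: Z-hat_k from Z-hat_{k-1} and Y_k
        Z_float, Qz = update_float(Z_float, Qz, Yk)
        # 2. conditional attitude for the current epoch only, eq. (27)
        R_hat = lambda Z, Yk=Yk: attitude_cond(Yk, Z)
        # 3. orthonormal projection R-bar_{q,k}(Z), eq. (44)
        R_bar = lambda Z: project_orthonormal(R_hat(Z))
        # 4. suboptimal integer search, eq. (47): epoch-k constraints only
        Z_fix = search_Z(Z_float, Qz, R_hat, R_bar)
        yield k, Z_fix, R_bar(Z_fix)     # output R-check_{q,k} = R-bar_{q,k}(Z-tilde_k)
```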
Acknowledgement

This research was done in the framework of the author's International Linkage Professorial Fellowship of the Australian Research Council, at the Curtin University of Technology, Perth, Australia, with Professor Will Featherstone as his host. This support is gratefully acknowledged.
Artificial Satellites, Volume 42 (2), January 1, 2007
Publisher: de Gruyter
ISSN: 0208-841X; eISSN: 2083-6104
DOI: 10.2478/v10018-008-0002-3
