Distributed-Order Non-Local Optimal Control
Ndaïrou, Faïçal;Torres, Delfim F. M.
2020-10-25 00:00:00
Faïçal Ndaïrou †,‡ and Delfim F. M. Torres *,‡

Center for Research and Development in Mathematics and Applications (CIDMA), Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal; faical@ua.pt
* Correspondence: delfim@ua.pt; Tel.: +351-234-370-668
† This research is part of the first author's Ph.D. project, which is carried out at the University of Aveiro under the Doctoral Program in Applied Mathematics of Universities of Minho, Aveiro, and Porto (MAP-PDMA).
‡ These authors contributed equally to this work.
Received: 9 September 2020; Accepted: 22 October 2020; Published: 25 October 2020

Abstract: Distributed-order fractional non-local operators were introduced and studied by Caputo at the end of the 20th century. They generalize fractional order derivatives/integrals in the sense that such operators are defined by a weighted integral of different orders of differentiation over a certain range. The subject of distributed-order non-local derivatives is currently under strong development due to its applications in modeling some complex real world phenomena. Fractional optimal control theory deals with the optimization of a performance index functional, subject to a fractional control system. One of the most important results in classical and fractional optimal control is the Pontryagin Maximum Principle, which gives a necessary optimality condition that every solution to the optimization problem must verify. In our work, we extend the fractional optimal control theory by considering dynamical system constraints depending on distributed-order fractional derivatives. Precisely, we prove a weak version of Pontryagin's maximum principle and a sufficient optimality condition under appropriate convexity assumptions.

Keywords: distributed-order fractional calculus; basic optimal control problem; Pontryagin extremals

MSC: 26A33; 49K15

1. Introduction

Distributed-order fractional operators were introduced and studied by Caputo at the end of the previous century [1,2]. They can be seen as a kind of generalization of fractional order derivatives/integrals in the sense that these operators are defined by a weighted integral of different orders of differentiation over a certain range. This subject gained more interest at the beginning of the current century by researchers from different mathematical disciplines, through attempts to solve differential equations with distributed-order derivatives [3–6]. Moreover, at the same time, in the domain of applied mathematics, those distributed-order fractional operators have started to be used, in a satisfactory way, to describe some complex phenomena modeling real world problems—see, for instance, works in viscoelasticity [7,8] and in diffusion [9]. Today, the study of distributed-order systems with fractional derivatives is a hot subject—see, e.g., [10–12] and references therein.

Fractional optimal control deals with optimization problems involving fractional differential equations, as well as a performance index functional. One of the most important results is the Pontryagin Maximum Principle, which gives a first-order necessary optimality condition that every solution to the dynamic optimization problem must verify. By applying such a result, it is possible to find and identify candidate solutions to the optimal control problem. For the state of the art on fractional optimal control, we refer the readers to [13–15] and references therein.
Recently, distributed-order fractional problems of the calculus of variations were introduced and investigated in [16]. Here, our main aim is to extend the distributed-order fractional Euler–Lagrange equation of [16] to the Pontryagin setting (see Remark 2). Regarding optimal control for problems with distributed-order fractional operators, the results are rare and reduce to the following two papers: [17,18]. Both works develop numerical methods while, in contrast, here we are interested in analytical results (not in numerical approaches). Moreover, our results are new and bring new insights. Indeed, in [17], the problem is considered with Riemann–Liouville distributed derivatives, while in our case we consider optimal control problems with Caputo distributed derivatives. We must also note an inconsistency in [17]: when one defines the control system with a Riemann–Liouville derivative, then in the adjoint system a Caputo derivative should appear—when one considers optimal control problems with a control system with Caputo derivatives, the adjoint equation should involve a Riemann–Liouville operator—as a consequence of integration by parts (cf. Lemma 1). This inconsistency has been corrected in [18], where optimal control problems with Caputo distributed derivatives (as in this paper) are considered. Unfortunately, there is still an inconsistency in the necessary optimality conditions of both [17,18]: the transversality conditions are written there exactly as in the classical case, with the multiplier vanishing at the end of the interval, while the correct condition, as we prove in our Theorem 1, should involve a distributed integral operator—see condition (3).

The text is organized as follows. We begin by recalling definitions and necessary results of the literature in Section 2 of preliminaries. Our original results are then given in Section 3. More precisely, we consider fractional optimal control problems where the dynamical system constraints depend on distributed-order fractional derivatives. We prove a weak version of Pontryagin's maximum principle for the considered distributed-order fractional problems (see Theorem 1) and investigate a Mangasarian-type sufficient optimality condition (see Theorem 2). An example, illustrating the usefulness of the obtained results, is given (see Examples 1 and 2). We end with Section 4 of conclusions, mentioning also some possibilities of future research.

2. Preliminaries

In this section, we recall necessary results and fix notations. We assume the reader to be familiar with the standard Riemann–Liouville and Caputo fractional calculi [19,20].

Let $\alpha$ be a real number in $[0, 1]$ and let $\psi$ be a non-negative continuous function defined on $[0, 1]$ such that $\int_0^1 \psi(\alpha)\, d\alpha > 0$. This function $\psi$ will act as a distribution of the order of differentiation.

Definition 1 (See [1]). The left and right-sided Riemann–Liouville distributed-order fractional derivatives of a function $x : [a, b] \to \mathbb{R}$ are defined, respectively, by
$$D^{\psi(\cdot)}_{a+} x(t) = \int_0^1 \psi(\alpha)\, D^{\alpha}_{a+} x(t)\, d\alpha \quad \text{and} \quad D^{\psi(\cdot)}_{b-} x(t) = \int_0^1 \psi(\alpha)\, D^{\alpha}_{b-} x(t)\, d\alpha,$$
where $D^{\alpha}_{a+}$ and $D^{\alpha}_{b-}$ are, respectively, the left and right-sided Riemann–Liouville fractional derivatives of order $\alpha$.

Definition 2 (See [1]). The left and right-sided Caputo distributed-order fractional derivatives of a function $x : [a, b] \to \mathbb{R}$ are defined, respectively, by
$${}^{C}D^{\psi(\cdot)}_{a+} x(t) = \int_0^1 \psi(\alpha)\, {}^{C}D^{\alpha}_{a+} x(t)\, d\alpha \quad \text{and} \quad {}^{C}D^{\psi(\cdot)}_{b-} x(t) = \int_0^1 \psi(\alpha)\, {}^{C}D^{\alpha}_{b-} x(t)\, d\alpha,$$
where ${}^{C}D^{\alpha}_{a+}$ and ${}^{C}D^{\alpha}_{b-}$ are, respectively, the left and right-sided Caputo fractional derivatives of order $\alpha$.
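To make Definitions 1 and 2 concrete, the following short numerical sketch (not part of the paper) evaluates the left Caputo distributed-order derivative of $x(t) = t^2$ at a few points, combining the textbook formula ${}^{C}D^{\alpha}_{0+} t^2 = \frac{2}{\Gamma(3-\alpha)}\, t^{2-\alpha}$ with a composite trapezoidal rule for the integral over the order $\alpha$. The weight $\psi(\alpha) = 6\alpha(1-\alpha)$ and all function names are illustrative assumptions.

```python
import numpy as np
from math import gamma

# Order-distribution weight on [0, 1]; psi(alpha) = 6*alpha*(1 - alpha) is only an
# illustrative choice, non-negative with a positive integral over [0, 1].
def psi(alpha):
    return 6.0 * alpha * (1.0 - alpha)

# Textbook closed form of the left Caputo derivative of x(t) = t**2:
#   C_D^{alpha}_{0+} t**2 = 2 * t**(2 - alpha) / Gamma(3 - alpha),  0 <= alpha <= 1.
def caputo_of_t_squared(alpha, t):
    return 2.0 * t ** (2.0 - alpha) / gamma(3.0 - alpha)

def distributed_caputo_of_t_squared(t, n_nodes=2001):
    """Approximate C_D^{psi(.)}_{0+} x(t) for x(t) = t**2 by a composite
    trapezoidal rule applied to the integral over the order alpha."""
    alphas = np.linspace(0.0, 1.0, n_nodes)
    values = psi(alphas) * np.array([caputo_of_t_squared(al, t) for al in alphas])
    step = alphas[1] - alphas[0]
    return step * (values.sum() - 0.5 * (values[0] + values[-1]))

if __name__ == "__main__":
    for t in (0.25, 0.5, 1.0):
        print(f"t = {t:4.2f}:  C_D^psi x(t) ~ {distributed_caputo_of_t_squared(t):.6f}")
```

Refining the grid in $\alpha$, or replacing the trapezoidal rule by a higher-order quadrature, improves the approximation of the order-distribution integral.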
As noted in [16], there is a relation between the Riemann–Liouville and the Caputo distributed-order fractional derivatives:
$${}^{C}D^{\psi(\cdot)}_{a+} x(t) = D^{\psi(\cdot)}_{a+} x(t) - x(a) \int_0^1 \frac{\psi(\alpha)}{\Gamma(1-\alpha)}\, (t-a)^{-\alpha}\, d\alpha$$
and
$${}^{C}D^{\psi(\cdot)}_{b-} x(t) = D^{\psi(\cdot)}_{b-} x(t) - x(b) \int_0^1 \frac{\psi(\alpha)}{\Gamma(1-\alpha)}\, (b-t)^{-\alpha}\, d\alpha.$$

Along the text, we use the notation
$$I^{\psi(\cdot)}_{b-} x(t) = \int_0^1 \psi(\alpha)\, I^{1-\alpha}_{b-} x(t)\, d\alpha,$$
where $I^{1-\alpha}_{b-}$ represents the right Riemann–Liouville fractional integral of order $1-\alpha$.

The next result has an essential role in the proofs of our main results; that is, in the proofs of Theorems 1 and 2.

Lemma 1 (Integration by parts formula [16]). Let $x$ be a continuous function and $y$ a continuously differentiable function. Then,
$$\int_a^b x(t)\, {}^{C}D^{\psi(\cdot)}_{a+} y(t)\, dt = \Big[ y(t)\, I^{\psi(\cdot)}_{b-} x(t) \Big]_{t=a}^{t=b} + \int_a^b y(t)\, D^{\psi(\cdot)}_{b-} x(t)\, dt.$$

Next, we recall the standard notion of concave function, which will be used in Section 3.3.

Definition 3 (See [21]). A function $h : \mathbb{R}^n \to \mathbb{R}$ is concave if
$$h\big(\beta q_1 + (1-\beta) q_2\big) \ge \beta h(q_1) + (1-\beta) h(q_2)$$
for all $\beta \in [0, 1]$ and for all $q_1, q_2 \in \mathbb{R}^n$.

Lemma 2 (See [21]). Let $h : \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable function. Then $h$ is a concave function if and only if it satisfies the so-called gradient inequality:
$$h(q_2) - h(q_1) \le \nabla h(q_1) \cdot (q_2 - q_1)$$
for all $q_1, q_2 \in \mathbb{R}^n$.

Finally, we recall a fractional version of Gronwall's inequality, which will be useful to prove the continuity of solutions in Section 3.1.

Lemma 3 (See [22]). Let $\alpha$ be a positive real number and let $a(\cdot)$, $b(\cdot)$, and $u(\cdot)$ be non-negative continuous functions on $[0, T]$ with $b(\cdot)$ monotonic increasing on $[0, T)$. If
$$u(t) \le a(t) + b(t) \int_0^t (t-s)^{\alpha-1} u(s)\, ds,$$
then
$$u(t) \le a(t) + \int_0^t \left[ \sum_{n=1}^{\infty} \frac{\big(b(t)\Gamma(\alpha)\big)^n}{\Gamma(n\alpha)}\, (t-s)^{n\alpha-1} a(s) \right] ds$$
for all $t \in [0, T)$.

3. Main Results

The basic problem of optimal control we consider in this work, denoted by (BP), consists in finding a piecewise continuous control $u \in PC$ and the corresponding piecewise smooth state trajectory $x \in PC^1$ solution of the distributed-order non-local variational problem
$$\begin{gathered}
J[x(\cdot), u(\cdot)] = \int_a^b L\big(t, x(t), u(t)\big)\, dt \longrightarrow \max, \\
{}^{C}D^{\psi(\cdot)}_{a+} x(t) = f\big(t, x(t), u(t)\big), \quad t \in [a, b], \\
x(\cdot) \in PC^1, \quad u(\cdot) \in PC, \quad x(a) = x_a,
\end{gathered} \tag{BP}$$
where the functions $L$ and $f$, both defined on $[a, b] \times \mathbb{R} \times \mathbb{R}$, are assumed to be continuously differentiable in all their three arguments: $L \in C^1$, $f \in C^1$. Our main contribution is to prove necessary (Section 3.2) and sufficient (Section 3.3) optimality conditions.
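Although the present work is analytical, the dynamical constraint in (BP) can also be simulated numerically, which is convenient for checking candidate extremals on concrete examples. The sketch below is not from the paper: it is a minimal forward simulator that discretizes the order-distribution integral by a midpoint rule in $\alpha$ and each Caputo derivative by the classical L1 scheme, with $f$ evaluated explicitly at the previous grid point. The function names, the weight $\psi(\alpha) = 6\alpha(1-\alpha)$, and the dynamics $f(t,x,u) = -x + u$ are illustrative assumptions.

```python
import numpy as np
from math import gamma

def simulate_distributed_caputo(f, u, psi, x_a, a, b, n_steps=200, n_alpha=40):
    """Forward simulation of  C_D^{psi(.)}_{a+} x(t) = f(t, x(t), u(t)),  x(a) = x_a.

    The integral over the order alpha is approximated by a midpoint rule and each
    Caputo derivative by the L1 scheme; f is evaluated explicitly (previous node).
    """
    t = np.linspace(a, b, n_steps + 1)
    h = (b - a) / n_steps
    # Midpoint nodes and weights for the order-distribution integral on (0, 1).
    alphas = (np.arange(n_alpha) + 0.5) / n_alpha
    w = psi(alphas) / n_alpha
    # c_j = w_j * h**(-alpha_j) / Gamma(2 - alpha_j) multiplies the L1 differences.
    c = w * h ** (-alphas) / np.array([gamma(2.0 - al) for al in alphas])
    A = c.sum()                 # coefficient of the newest increment x_n - x_{n-1}
    x = np.empty(n_steps + 1)
    x[0] = x_a
    dx = np.zeros(n_steps)      # stored increments x_{k+1} - x_k
    for n in range(1, n_steps + 1):
        # History part of the L1 approximation of the distributed-order derivative.
        k = np.arange(n - 1)
        hist = 0.0
        for j, al in enumerate(alphas):
            b_weights = (n - k) ** (1.0 - al) - (n - k - 1.0) ** (1.0 - al)
            hist += c[j] * np.dot(b_weights, dx[: n - 1])
        rhs = f(t[n - 1], x[n - 1], u(t[n - 1]))
        dx[n - 1] = (rhs - hist) / A
        x[n] = x[n - 1] + dx[n - 1]
    return t, x

if __name__ == "__main__":
    psi = lambda alpha: 6.0 * alpha * (1.0 - alpha)   # illustrative weight psi
    f = lambda t, x, u: -x + u                        # illustrative dynamics f
    u = lambda t: np.sin(t)                           # a fixed admissible control
    t, x = simulate_distributed_caputo(f, u, psi, x_a=1.0, a=0.0, b=1.0)
    print("x(b) is approximately", x[-1])
```

Because the L1 history term involves all previous increments, the cost per run grows quadratically with the number of time steps; an implicit variant would instead solve a scalar nonlinear equation for the new state value at each step.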
3.1. Sensitivity Analysis

Before we can prove necessary optimality conditions to problem (BP), we need to establish continuity and differentiability results on the state solutions for any control perturbation (Lemmas 4 and 5), which are then used in Section 3.2. The proof of Lemma 4 makes use of the following mean value theorem for integration, which can be found in any textbook of calculus (see Lemma 1 of [23]): if $F : [0, 1] \to \mathbb{R}$ is a continuous function and $\psi$ is an integrable function that does not change sign on the interval, then there exists a number $\bar{\alpha}$ such that
$$\int_0^1 \psi(\alpha) F(\alpha)\, d\alpha = F(\bar{\alpha}) \int_0^1 \psi(\alpha)\, d\alpha.$$

Lemma 4 (Continuity of solutions). Let $u^{\epsilon}$ be a control perturbation around the optimal control $u$, that is, for all $t \in [a, b]$, $u^{\epsilon}(t) = u(t) + \epsilon h(t)$, where $h(\cdot) \in PC$ is a variation and $\epsilon \in \mathbb{R}$. Denote by $x^{\epsilon}$ its corresponding state trajectory, solution of
$${}^{C}D^{\psi(\cdot)}_{a+} x^{\epsilon}(t) = f\big(t, x^{\epsilon}(t), u^{\epsilon}(t)\big), \quad x^{\epsilon}(a) = x_a.$$
Then, we have that $x^{\epsilon}$ converges to the optimal state trajectory $x$ when $\epsilon$ tends to zero.

Proof. Starting from the definition, we have, for all $t \in [a, b]$, that
$$\Big| {}^{C}D^{\psi(\cdot)}_{a+} x^{\epsilon}(t) - {}^{C}D^{\psi(\cdot)}_{a+} x(t) \Big| = \big| f\big(t, x^{\epsilon}(t), u^{\epsilon}(t)\big) - f\big(t, x(t), u(t)\big) \big|.$$
Then, by linearity,
$$\Big| {}^{C}D^{\psi(\cdot)}_{a+} x^{\epsilon}(t) - {}^{C}D^{\psi(\cdot)}_{a+} x(t) \Big| = \Big| {}^{C}D^{\psi(\cdot)}_{a+} \big(x^{\epsilon}(t) - x(t)\big) \Big| = \big| f\big(t, x^{\epsilon}(t), u^{\epsilon}(t)\big) - f\big(t, x(t), u(t)\big) \big|$$
and it follows, by definition of the distributed operator, that
$$\left| \int_0^1 \psi(\alpha)\, {}^{C}D^{\alpha}_{a+} \big(x^{\epsilon}(t) - x(t)\big)\, d\alpha \right| = \big| f\big(t, x^{\epsilon}(t), u^{\epsilon}(t)\big) - f\big(t, x(t), u(t)\big) \big|.$$
Now, using the mean value theorem for integration, and denoting $m := \int_0^1 \psi(\alpha)\, d\alpha$, we obtain that there exists an $\bar{\alpha}$ such that
$$\Big| {}^{C}D^{\bar{\alpha}}_{a+} \big(x^{\epsilon}(t) - x(t)\big) \Big| = \frac{\big| f\big(t, x^{\epsilon}(t), u^{\epsilon}(t)\big) - f\big(t, x(t), u(t)\big) \big|}{m}.$$
Clearly, one has
$${}^{C}D^{\bar{\alpha}}_{a+} \big(x^{\epsilon}(t) - x(t)\big) \le \Big| {}^{C}D^{\bar{\alpha}}_{a+} \big(x^{\epsilon}(t) - x(t)\big) \Big| = \frac{\big| f\big(t, x^{\epsilon}(t), u^{\epsilon}(t)\big) - f\big(t, x(t), u(t)\big) \big|}{m},$$
which leads to
$$\big| x^{\epsilon}(t) - x(t) \big| \le I^{\bar{\alpha}}_{a+} \frac{\big| f\big(t, x^{\epsilon}(t), u^{\epsilon}(t)\big) - f\big(t, x(t), u(t)\big) \big|}{m}.$$
Moreover, because $f$ is Lipschitz-continuous, we have
$$\big| f(t, x^{\epsilon}, u^{\epsilon}) - f(t, x, u) \big| \le K_1 \big| x^{\epsilon} - x \big| + K_2 \big| u^{\epsilon} - u \big|.$$
By setting $K = \max\{K_1, K_2\}$, it follows that
$$\big| x^{\epsilon}(t) - x(t) \big| \le \frac{K}{m}\, I^{\bar{\alpha}}_{a+} \Big[ \big| x^{\epsilon}(t) - x(t) \big| + \big| \epsilon h(t) \big| \Big] = \frac{K}{m} \Big[ |\epsilon|\, I^{\bar{\alpha}}_{a+} \big| h(t) \big| + I^{\bar{\alpha}}_{a+} \big| x^{\epsilon}(t) - x(t) \big| \Big]$$
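As a small numerical aside, not taken from the paper, the mean value theorem for integration invoked in the proof above can be checked for concrete choices of the weight and of the integrand; the functions $\psi(\alpha) = 6\alpha(1-\alpha)$ and $F(\alpha) = \cos\alpha$ below, as well as all names, are illustrative assumptions (F is monotone here, so the intermediate order $\bar{\alpha}$ can be located by bisection).

```python
import numpy as np
from scipy.optimize import brentq

psi = lambda alpha: 6.0 * alpha * (1.0 - alpha)   # illustrative non-negative weight
F = lambda alpha: np.cos(alpha)                   # illustrative continuous integrand

# Composite trapezoidal rule on a fine grid of orders alpha in [0, 1].
alphas = np.linspace(0.0, 1.0, 100001)
step = alphas[1] - alphas[0]
trapz = lambda vals: step * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

weighted = trapz(psi(alphas) * F(alphas))   # integral of psi * F over [0, 1]
m = trapz(psi(alphas))                      # integral of psi over [0, 1]

# The theorem guarantees some alpha_bar in [0, 1] with F(alpha_bar) * m = weighted.
alpha_bar = brentq(lambda al: float(F(al)) * m - weighted, 0.0, 1.0)
print("alpha_bar =", alpha_bar, " check:", F(alpha_bar) * m, "vs", weighted)
```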