Axioms, Volume 12 (5) – Apr 26, 2023


- Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
- ISSN: 2075-1680
- DOI: 10.3390/axioms12050429

Study of Burgers–Huxley Equation Using Neural Network Method

Ying Wen ¹,*,† and Temuer Chaolu ²,†

¹ College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
² College of Sciences and Arts, Shanghai Maritime University, Shanghai 201306, China; tmchaolu@shmtu.edu.cn
* Correspondence: 201840310002@stu.shmtu.edu.cn
† These authors contributed equally to this work.

Abstract: The study of non-linear partial differential equations is a complex task requiring sophisticated methods and techniques. In this context, we propose a neural network approach based on Lie series in Lie groups of differential equations (symmetry) for solving the Burgers–Huxley nonlinear partial differential equation, considering initial or boundary value terms in the loss functions. The proposed technique yields closed analytic solutions that possess excellent generalization properties. Our approach differs from existing deep neural networks in that it employs only shallow neural networks. This choice significantly reduces the parameter cost while retaining the dynamic behavior and accuracy of the solution. A thorough comparison with the exact solution was carried out to validate the practicality and effectiveness of our proposed method, using vivid graphics and detailed analysis to present the results.

Keywords: Burgers–Huxley equation; optimization; neural network method; Lie groups; Lie series

Citation: Wen, Y.; Chaolu, T. Study of Burgers–Huxley Equation Using Neural Network Method. Axioms 2023, 12, 429. https://doi.org/10.3390/axioms12050429
Academic Editors: Azhar Ali Zafar and Nehad Ali Shah
Received: 15 March 2023; Revised: 20 April 2023; Accepted: 25 April 2023; Published: 26 April 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Partial differential equations (PDEs) are ubiquitous and fundamental to understanding and modeling the complexities of natural phenomena. From mathematics to physics to economics and beyond, PDEs play a critical role in virtually all fields of engineering and science [1–3]. Through their mathematical representation of physical phenomena, PDEs provide a powerful means of gaining insight into complex systems, enabling researchers and engineers to predict behavior and uncover hidden relationships. However, solving PDEs can be a daunting and challenging task. The complexity of these equations often requires sophisticated numerical methods that must balance accuracy and efficiency while solving high-dimensional PDEs. Despite these challenges, PDEs remain a cornerstone of modern science, enabling researchers to unlock discoveries and technological advancements across disciplines.

As numerical and computational techniques continue to develop rapidly, the study of PDEs has become increasingly vital. In recent years, advances in numerical methods and high-performance computing have made it possible to solve complex PDEs more accurately and efficiently than ever before. These new tools can precisely solve specific problems across a broader range of equations while simultaneously computing data faster, reducing the time and cost of solving pending problems. Moreover, these new techniques have allowed researchers to gain deeper insights into the physical meaning behind PDEs, enabling them to revisit natural phenomena from fresh perspectives and explore those that prove challenging to explain by traditional methods. This has led to groundbreaking research discoveries and innovations in various fields of science and engineering.

Machine learning methods [4,5], particularly in the area of artificial neural networks (ANNs) [6,7], have piqued considerable interest in recent years due to their potential to solve differential equations.
ANNs are well known for their exceptional approximation capabilities and have emerged as a promising alternative to traditional algorithms [8]. These methods have a significantly smaller memory footprint and generate numerical solutions that are both closed-form and continuous over the integration domain, without requiring interpolation. ANNs have been applied to differential equations, including ordinary differential equations (ODEs) [9,10], PDEs [11,12], and stochastic differential equations (SDEs) [13,14], making them a valuable tool for researchers and engineers alike. Neural networks have become a powerful and versatile tool for solving differential equations due to their ability to learn intricate mappings from input–output data, further cementing their role as a critical component of the machine learning field.

In recent years, the application of neural networks to solving differential equations has gained significant attention in the scientific community. One prominent model is neural ordinary differential equations, which approximates the derivative of an unknown solution using neural networks, parameterizing the derivatives of the hidden states of the network with the help of the differential equation, thus creating a new type of neural network [15]. Another approach is the deep Galerkin method [16], which uses neural networks to approximate the solution of the differential equation so as to minimize the error. Gorikhovskii et al. [17] introduced a practical approach for solving ODEs using neural networks in the TensorFlow machine-learning framework. In addition, Huang et al. [18] introduced an additive self-attention mechanism for the numerical solution of differential equations, based on the dynamical-system perspective of the residual neural network.
By utilizing neural network functions to approximate the solutions, neural networks have also been used to solve PDEs. The physics-informed neural network (PINN) method uses the underlying physics of the problem to incorporate constraints into the solution of the neural network, resulting in successful applications to various PDEs such as the Burgers and Poisson equations [19]. Compared to traditional numerical methods, PINNs offer several advantages, including higher accuracy and more efficient computation. Berg et al. [20] introduced a new deep-learning-based approach to solve PDEs on complex geometries. They use a feed-forward neural network and an unconstrained gradient-based optimization method to predict PDE solutions. Another exciting development in the field of neural networks and PDEs is the use of convolutional neural networks (CNNs). Ruthotto et al. [21] used a CNN to learn to solve elliptic PDEs and incorporated a residual block structure to improve network performance. Quan et al. [22] presented an innovative approach to the challenge of solving diffusion PDEs by introducing a novel learning method built on the foundation of the extreme learning machine algorithm. By leveraging this technique, the parameters of the neural network are calculated precisely by solving a linear system of equations; furthermore, the loss function is constructed from three crucial components: the PDE, the initial conditions, and the boundary conditions. Tang et al. [23] demonstrated through numerical cases that their deep adaptive sampling (DAS-PINNs) method can be used for solving PDEs. Overall, the advancements made in the domain of neural networks have revolutionized how we approach solving complex PDEs. These developments suggest that neural networks are a promising tool for solving complex PDEs and that there is great potential for further research and innovation in this area.
This paper proposes a novel approach for solving the Burgers–Huxley equation, which uses a neural network based on the Lie series in the Lie groups of differential equations, adding initial or boundary value terms to the loss function to approximate the solution of the equation by minimization. Slavova et al. [24] constructed a cellular neural network model to study the Burgers–Huxley equation. Shagun et al. [25] employed a feed-forward neural network to solve the Burgers–Huxley equation and investigated the impact of the number of training points on the accuracy of the solution. Kumar et al. [26] proposed a deep learning algorithm based on the deep Galerkin method for solving the Burgers–Huxley equation, which outperformed traditional numerical methods. These studies demonstrate the potential of neural networks for solving differential equations. Nonetheless, it is easy to ignore the underlying nature of these equations, in other words, to fail to capture their nonlinear character, which is essential for comprehending the behavior of complex systems. To address this issue, our proposed method approximates the solution of the differential equations by combining the Lie series in Lie groups of differential equations with the power of neural networks. The method accurately simulates the physical behavior of complicated systems: the first part of the constructed solution captures the nonlinear nature of the equation while reducing the parameter cost of the subsequent neural network, and minimizing a loss function that includes the initial or boundary value terms required for an exact approximation makes the solution converge quickly. This work demonstrates the effectiveness of combining neural networks with Lie series to solve differential equations and provides insights into the physical behavior of complex dynamical systems.

The paper is organized as follows.
The basic framework and fundamental theory of neural network algorithms based on Lie series in Lie groups of differential equations are introduced in Section 2. The specific steps of the Lie-series-based neural network method for solving the Burgers–Huxley equation are described in Section 3, where the method is also applied to the Burgers–Fisher equation and the Huxley equation. A summary and outlook are presented in Section 4.

2. Basic Idea of a Lie-Series-Based Neural Network Algorithm

2.1. Differential Forms and Lie Series Solution

Consider the Lie group transformation with parameter ε,

$$u^{*} = T(\varepsilon; u) \in G, \qquad u^{*}(0) = u \tag{1}$$

where G is a Lie group and ε is the group parameter. By a Taylor expansion about a neighborhood of ε = 0,

$$u^{*} = T(\varepsilon; u) = u + \varepsilon\left.\frac{\partial T(\varepsilon; u)}{\partial\varepsilon}\right|_{\varepsilon=0} + O(\varepsilon^{2}). \tag{2}$$

Then $u^{*} = u + \varepsilon\zeta$ is known as the infinitesimal transformation, and $D = \zeta(u)\partial_{u}$ is called the infinitesimal operator, where $\zeta(u) = \left.\frac{\partial T(\varepsilon; u)}{\partial\varepsilon}\right|_{\varepsilon=0}$.

The following differential equation is given:

$$u' = F(x, u), \qquad u(0) = u_{0} \tag{3}$$

where F(x, u) is a differentiable function. If (2) is a symmetry of (3), then the initial value problem (3) has a Lie series solution, which can be written as [27]

$$u = e^{xD}u\big|_{x=0}. \tag{4}$$

2.2. Algorithm of a Lie-Series-Based Neural Network

The idea of Lie groups is based on the study of continuous symmetry, which at first may seem abstract and complex. However, in the realm of solving differential equations, Lie group methods are a unique approach that goes beyond traditional mathematical techniques. Lie series in the Lie transform groups of differential equations can be used to construct approximate solutions of PDEs and to study their symmetries and other properties. Lie series provide a powerful framework for studying the behavior of differential equations and have many important applications in various fields of science and engineering.

From [28], it is known that the operator can be decomposed as

$$D = D_{1} + D_{2} \tag{5}$$

so the solution of (3) can be written as $u = e^{xD}u\big|_{x=0} = e^{x(D_{1}+D_{2})}u\big|_{x=0}$.
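The Lie series solution (4) can be checked symbolically on a simple example. The following snippet is our own illustration (the ODE u' = u², with D = u² d/du, is not one treated in this paper): the truncated Lie series reproduces the Taylor expansion of the exact solution u₀/(1 − u₀x).

```python
import sympy as sp

# Illustrative check of the Lie series solution u = exp(x D) u|_{x=0} (Eq. (4))
# for the toy ODE u' = u^2, u(0) = u0, whose infinitesimal operator is D = u^2 d/du.
x, u = sp.symbols('x u')

def D(expr):
    """Apply the infinitesimal operator D = u^2 * d/du."""
    return u**2 * sp.diff(expr, u)

# Partial sum of the Lie series  sum_k x^k/k! * D^k u
term, series = u, u
for k in range(1, 8):
    term = D(term)
    series += x**k / sp.factorial(k) * term

# Exact solution of u' = u^2, u(0) = u0 is u0/(1 - u0 x); here D^k u = k! u^{k+1},
# so the Lie series is the geometric series u * sum (u x)^k.
exact = sp.series(u / (1 - u*x), x, 0, 8).removeO()
print(sp.simplify(sp.expand(series - exact)))  # → 0
```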
Theorem 1. Let $\bar{u}(x; u) = e^{xD_{1}}u\big|_{x=0}$, $x \in \mathbb{R}$, where $D_{1}$ is the decomposed part of D. The solution of problem (3) belonging to D can be expanded as follows:

$$u = \bar{u}(x; u) + \int_{0}^{x}\left[D_{2}\,e^{(x-t)D}u\right]_{u\to\bar{u}(t;u)}\mathrm{d}t \tag{6}$$

The proof is given below and is detailed in the literature [28].

Proof.

$$u = e^{xD}u = e^{x(D_{1}+D_{2})}u = \sum_{v=0}^{\infty}\frac{x^{v}}{v!}D_{1}^{v}u + \sum_{v=1}^{\infty}\frac{x^{v}}{v!}D_{1}^{v-1}D_{2}u + \sum_{v=2}^{\infty}\frac{x^{v}}{v!}D_{1}^{v-2}D_{2}Du + \ldots + \sum_{v=a}^{\infty}\frac{x^{v}}{v!}D_{1}^{v-a}D_{2}D^{a-1}u + \ldots \tag{7}$$

It is known that

$$\frac{x^{v}}{v!} = \int_{0}^{x}\frac{(x-t)^{a-1}}{(a-1)!}\,\frac{t^{v-a}}{(v-a)!}\,\mathrm{d}t, \qquad (v \geq a \geq 1,\ \text{integers})$$

so Equation (7) is rewritten as

$$u = \bar{u} + \int_{0}^{x}\sum_{v=0}^{\infty}\frac{t^{v}}{v!}D_{1}^{v}D_{2}u\,\mathrm{d}t + \int_{0}^{x}(x-t)\sum_{v=0}^{\infty}\frac{t^{v}}{v!}D_{1}^{v}D_{2}Du\,\mathrm{d}t + \ldots + \int_{0}^{x}\frac{(x-t)^{a-1}}{(a-1)!}\sum_{v=0}^{\infty}\frac{t^{v}}{v!}D_{1}^{v}D_{2}D^{a-1}u\,\mathrm{d}t + \ldots$$

From the form of the series solution [27], it follows that

$$\sum_{v=0}^{\infty}\frac{t^{v}}{v!}D_{1}^{v}\,D_{2}D^{a-1}u = \left[D_{2}D^{a-1}u\right]_{u\to\bar{u}(t;u)}.$$

Hence,

$$u = \bar{u} + \sum_{a=1}^{\infty}\int_{0}^{x}\frac{(x-t)^{a-1}}{(a-1)!}\left[D_{2}D^{a-1}u\right]_{u\to\bar{u}(t;u)}\mathrm{d}t,$$

and after commuting the series and the integral, which is allowed within the circle of absolute convergence, the formula

$$u = \bar{u} + \int_{0}^{x}\sum_{a=0}^{\infty}\frac{(x-t)^{a}}{a!}\left[D_{2}D^{a}u\right]_{u\to\bar{u}(t;u)}\mathrm{d}t \tag{8}$$

is obtained, which may also be written as follows:

$$e^{xD}u = e^{xD_{1}}u + \int_{0}^{x}\left[D_{2}\,e^{(x-t)D}u\right]_{u\to\bar{u}(t;u)}\mathrm{d}t \tag{9}$$

The complexities inherent in the integration of the second component of Equation (6) necessitate a sophisticated approach to computation. To tackle this challenge, as elaborated in reference [29] of our previous work, the functional form of a neural network is utilized to simplify this part and ensure the accuracy of the results. Following [29], $\hat{u} = e^{xD}u\big|_{x=0} = \bar{u} + xN(\theta; x)$. The determination of ū from the equation $\bar{u}' = D_{1}\bar{u}$ is inspired by the idea of the Lie series solution of the first-order ODE, where the initial value $u(0) = \bar{u}(0) = u_{0}$ is kept constant throughout the process, ensuring the reliability and truthfulness of the results.
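Formula (6) can be checked numerically on a toy problem of our own choosing (not an example from the paper): for the logistic ODE u' = u(1 − u) with D = u(1 − u) d/du, split as D₁ = u d/du and D₂ = −u² d/du, both the D₁-flow ū(t; u₀) = u₀eᵗ and the full flow e^{sD}u are known in closed form, so both sides of (6) can be evaluated directly.

```python
import numpy as np

# Numerical sanity check of Eq. (6) on a toy problem (our own example):
# u' = u(1 - u), u(0) = u0, with D = u(1-u) d/du split as D1 = u d/du and
# D2 = -u^2 d/du. Then
#   ubar(t; u0) = u0 * e^t                               (flow of D1),
#   e^{s D} u   = phi_s(u) = u e^s / (1 + u (e^s - 1))   (flow of D),
# and Eq. (6) states u(x) = ubar(x; u0) + int_0^x [D2 phi_{x-t}(u)]_{u -> ubar(t;u0)} dt.
u0, x = 0.5, 1.0

def phi(s, u):                      # exact flow of the logistic ODE
    return u * np.exp(s) / (1.0 + u * (np.exp(s) - 1.0))

def integrand(t):
    ub = u0 * np.exp(t)             # ubar(t; u0)
    s = x - t
    dphi_du = np.exp(s) / (1.0 + ub * (np.exp(s) - 1.0))**2
    return -ub**2 * dphi_du         # D2 applied to phi_s, evaluated at ubar

# Composite Simpson's rule on [0, x]
n = 400
t = np.linspace(0.0, x, n + 1)
w = np.ones(n + 1); w[1:-1:2] = 4.0; w[2:-1:2] = 2.0
integral = (x / n) / 3.0 * np.sum(w * integrand(t))

lhs = phi(x, u0)                    # true u(x) = 1/(1 + e^{-x}) for u0 = 1/2
rhs = u0 * np.exp(x) + integral
print(lhs, rhs)                     # the two sides agree to high accuracy
```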
N(θ; x) is a single-output neural network with a single input x; the parameter θ consists of the weights W and the biases b. Algorithm 1 is described in detail below and illustrated in Figure 1.

Figure 1. Flow chart of the Lie-series-based neural network algorithm.

Algorithm 1: A Lie-series-based neural network algorithm for problem (3)
Require: Determine the operator D according to (3), and solve its decomposed part D₁ to obtain ū.
Begin
1. Consider a uniformly spaced distribution of discrete points x_ℓ (ℓ = 1, 2, ..., l) within the initial condition domain.
2. Determine the structure of the neural network (the number of hidden layers, the number of neurons, and the activation function σ).
3. Initialize the neural network parameters W, b.
4. Form û = ū + xN(θ; x) and substitute it back into (3).
5. Minimize the loss function L(θ).
6. Update the parameter θ so that û approximates the solution u of problem (3).
End

In general, the loss function L(θ) is defined as follows:

$$L(\theta) = L_{F} + L_{I} = \frac{1}{l}\sum_{\ell=1}^{l}\sum_{i=1}^{n}\left[\left.\frac{\partial\hat{u}_{i}(x,\theta)}{\partial x}\right|_{x=x_{\ell}} - F_{i}(\hat{u}_{1},\hat{u}_{2},\ldots,\hat{u}_{n})\right]^{2} + \sum_{\lambda=1}^{p}\sum_{i=1}^{n}\left[\hat{u}_{i}(x,\theta)\big|_{x=x_{\lambda}} - K(x)\big|_{x=x_{\lambda}}\right]^{2} \tag{10}$$

with $K(x_{\lambda})$, λ = 1, 2, ..., p, as initial value or boundary conditions appearing as additional terms. The $L_{F}$ part of the loss function is derived by substituting the network solution û into the mean squared error generated on both sides of problem (3). In addition, the mean squared error generated by the network solution û under the initial or boundary value terms is used to derive the $L_{I}$ component of the loss function. By constructing the components $L_{F}$ and $L_{I}$, we can satisfy both the differential equations and the initial or boundary conditions of the problem under study.
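Steps 1–6 of Algorithm 1 can be sketched end to end. The following is a minimal illustration on a toy initial value problem of our own choosing (u' = u(1 − u), u(0) = 1/2, with the linear part u d/du taken as D₁ so that ū(x) = e^x/2), not the authors' released code; the network size, the collocation grid, and the BFGS settings are arbitrary choices. Note that the construction û = ū + xN makes û(0) = ū(0) = u₀ hold automatically, so here L(θ) reduces to L_F.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal end-to-end sketch of Algorithm 1 on a toy problem (our own example):
# u' = u(1 - u), u(0) = 1/2, D = u(1-u) d/du, with D1 = u d/du, so
# ubar(x) = (1/2) e^x and the trial solution is u_hat(x) = ubar(x) + x N(theta; x),
# where N is a shallow 1-H-1 tanh network as in the paper.
rng = np.random.default_rng(0)
H = 10                                        # hidden neurons (arbitrary)
xs = np.linspace(0.0, 2.0, 50)                # step 1: collocation points

def unpack(theta):
    w1, b1, w2 = theta[:H], theta[H:2*H], theta[2*H:3*H]
    return w1, b1, w2, theta[3*H]

def net_and_grad(theta, x):
    """N(theta; x) and dN/dx for a 1-H-1 tanh network (analytic derivative)."""
    w1, b1, w2, b2 = unpack(theta)
    a = np.tanh(np.outer(x, w1) + b1)         # (len(x), H)
    N = a @ w2 + b2
    dN = ((1.0 - a**2) * w1) @ w2             # chain rule through tanh
    return N, dN

def loss(theta):
    ubar = 0.5 * np.exp(xs)                   # solution of the D1 part
    N, dN = net_and_grad(theta, xs)
    u = ubar + xs * N                         # step 4: trial solution
    du = ubar + N + xs * dN                   # d/dx of ubar + x N
    return np.mean((du - u * (1.0 - u))**2)   # L_F; L_I is enforced by construction

theta0 = 0.1 * rng.standard_normal(3*H + 1)   # step 3: initialize W, b
res = minimize(loss, theta0, method='BFGS',   # step 5: minimize L(theta)
               options={'maxiter': 2000, 'gtol': 1e-10})

u_hat = 0.5 * np.exp(xs) + xs * net_and_grad(res.x, xs)[0]
u_exact = 1.0 / (1.0 + np.exp(-xs))           # exact logistic solution
print(res.fun, np.max(np.abs(u_hat - u_exact)))
```

In this sketch the derivative of û is computed analytically through the tanh layer rather than by automatic differentiation, which keeps the example dependency-free.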
The above algorithm also applies to systems of differential equations $\frac{\mathrm{d}u_{i}}{\mathrm{d}x} = F_{i}(u_{1}, u_{2}, \ldots, u_{n})$, $u_{i}(0) = a_{i} \in \mathbb{R}$, $i = 1, 2, \ldots, n$, where $D = \sum_{i=1}^{n} F_{i}(u)\partial_{u_{i}}$. Higher-order ODEs or PDEs can also be brought into this form with the help of suitable transformations or calculations.

2.3. The General Structure of the Neural Network

As depicted in Figure 2, our study considers a multilayer perceptron with a single input unit, m hidden layers of H neurons each, activation function σ in the hidden layers, and a linear output unit. Specifically, for a given input $x_{\ell}$ (ℓ = 1, 2, ..., l), the output of the network is

$$N = \sum_{i=1}^{H} W_{i}^{m+1}\sigma(Z_{i}^{m}) + b^{m+1}, \qquad Z_{i}^{m} = \sum_{j=1}^{H} w_{ji}^{m}\sigma(Z_{j}^{m-1}) + b_{i}^{m},$$

where $w_{ji}^{m}$ is the weight from the jth neuron in layer m−1 to the ith neuron in layer m, and $b_{i}^{m}$ is the bias of the ith neuron in layer m. It can be seen that $Z_{1}^{1} = w_{11}^{1}x + b_{1}^{1}$. In this paper, the activation function σ is chosen as $\tanh(Z) = \frac{e^{Z}-e^{-Z}}{e^{Z}+e^{-Z}}$.

Figure 2. Neural network structure.

3. Lie-Series-Based Neural Network Algorithm for Solving the Burgers–Huxley Equation

The generalized Burgers–Huxley equation [30] is a nonlinear PDE that describes the propagation of electrical impulses in excitable media, such as nerve and muscle cells. It is a widely used mathematical framework for modeling intricate dynamical phenomena and has been instrumental in advancing research across multiple domains, including physics, biology, economics, and ecology. The equation takes the form

$$\frac{\partial u}{\partial t} + \alpha u^{\delta}\frac{\partial u}{\partial x} = \frac{\partial^{2}u}{\partial x^{2}} + \beta u\left(1-u^{\delta}\right)\left(\eta u^{\delta}-\lambda\right) \tag{11}$$

where α, β, λ, η are constants and δ is a positive constant.
When α = −1, β = 1, λ = 1, η = 1, δ = 1, the Burgers–Huxley equation is as follows:

$$\frac{\partial u}{\partial t} = \frac{\partial^{2}u}{\partial x^{2}} + u\frac{\partial u}{\partial x} + u(1-u)(u-1), \qquad u(0,x) = \frac{1}{2}\left(1-\tanh\frac{x}{4}\right) \tag{12}$$

The exact solution of (12) is $u(t,x) = \frac{1}{2}\left(1-\tanh\left(\frac{x}{4}+\frac{3t}{8}\right)\right)$. Using the traveling wave transform ξ = x − ct, problem (12) is transformed into the ODE $u'' + cu' + uu' + u(1-u)(u-1) = 0$. Naturally, this is rewritten in the form of the following system of ODEs

$$u_{1}' = u_{2}, \qquad u_{2}' = -cu_{2} - u_{1}u_{2} - u_{1}(1-u_{1})(u_{1}-1) \tag{13}$$

with $c = -\frac{3}{2}$, $u_{1}(\xi) = u$, and initial values $u_{1}(0) = \frac{1}{2}$, $u_{2}(0) = -\frac{1}{8}$.

In this study, we address the problem of solving the Burgers–Huxley equation using a Lie-series-based neural network algorithm. Of the operator $D = u_{2}\partial_{u_{1}} + \left(\frac{3}{2}u_{2} - u_{1}u_{2} - u_{1}(1-u_{1})(u_{1}-1)\right)\partial_{u_{2}}$ of (13), the part $D_{1} = u_{2}\partial_{u_{1}} + \frac{3}{2}u_{2}\partial_{u_{2}}$ is chosen, and the solution of the corresponding initial value problem is

$$\bar{u}_{1}(\xi) = \frac{1}{12}\left(7 - \cosh\frac{3\xi}{2} - \sinh\frac{3\xi}{2}\right), \qquad \bar{u}_{2}(\xi) = -\frac{1}{8}\left(\cosh\frac{3\xi}{2} + \sinh\frac{3\xi}{2}\right).$$

This part of the solution already captures the nonlinear nature of the equation within a certain range, as shown in Figure 3. To minimize the loss function L(θ), we employ two structurally identical neural networks and boundary value terms, each network with 30 neurons in a single hidden layer; the input ξ_ℓ (ℓ = 1, 2, ..., 100) consists of 100 training points spaced equally in the interval [−5, 3], making û(ξ) as close as possible to the exact solution u(ξ) of the equation. The generalization ability of the neural network was confirmed on 120 equidistant test points in ξ ∈ [−5, 3.3]. The way the Lie-series-based neural network algorithm solves the Burgers–Huxley equation model is shown in Figure 4. Furthermore, we demonstrate the ability of the neural networks to fit the training and test sets in Figure 5.
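The closed forms of ū₁, ū₂ and the exact solution of (12) stated above can be verified symbolically. The following SymPy snippet is our own sanity check, not part of the paper's code:

```python
import sympy as sp

# Verify (our own check) that the stated D1-part solution of the Burgers-Huxley
# reduction satisfies  ubar1' = ubar2,  ubar2' = (3/2) ubar2,  with
# ubar1(0) = 1/2, ubar2(0) = -1/8, and that u = (1 - tanh(x/4 + 3t/8))/2
# solves the PDE (12).
xi, x, t = sp.symbols('xi x t')

u1 = sp.Rational(1, 12) * (7 - sp.cosh(3*xi/2) - sp.sinh(3*xi/2))
u2 = -sp.Rational(1, 8) * (sp.cosh(3*xi/2) + sp.sinh(3*xi/2))

print(sp.simplify(sp.diff(u1, xi) - u2))                    # 0
print(sp.simplify(sp.diff(u2, xi) - sp.Rational(3, 2)*u2))  # 0
print(u1.subs(xi, 0), u2.subs(xi, 0))                       # 1/2, -1/8

u = (1 - sp.tanh(x/4 + 3*t/8)) / 2
pde = sp.diff(u, t) - sp.diff(u, x, 2) - u*sp.diff(u, x) - u*(1 - u)*(u - 1)
print(sp.simplify(pde.rewrite(sp.exp)))                     # 0
```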
The loss function L(θ) = L_F + L_I is plotted against the number of iterations in Figure 6, where

$$L_{F} = \frac{1}{l}\sum_{\ell=1}^{l}\left[\left(\hat{u}_{1}'(\xi_{\ell}) - \hat{u}_{2}(\xi_{\ell})\right)^{2} + \left(\hat{u}_{2}'(\xi_{\ell}) - \tfrac{3}{2}\hat{u}_{2}(\xi_{\ell}) + \hat{u}_{1}(\xi_{\ell})\hat{u}_{2}(\xi_{\ell}) + \hat{u}_{1}(\xi_{\ell})(1-\hat{u}_{1}(\xi_{\ell}))(\hat{u}_{1}(\xi_{\ell})-1)\right)^{2}\right]$$

and $L_{I} = \left(\hat{u}_{1}(-5) - u(-5)\right)^{2} + \left(\hat{u}_{2}(-5) - u'(-5)\right)^{2}$, with l = 100. After some 1100 iterations, L(θ) had decreased to 3.042 × 10⁻⁷.

Figure 3. Comparison of the ū₁(ξ) solution of the Burgers–Huxley equation with the exact solution u(ξ).

We compare the solution û(t,x) containing the neural network training with the exact solution u(t,x) in the interval t ∈ [0, 1], x ∈ [−5, 2] in the upper panel of Figure 7. Additionally, the lower panel displays the behavior of the solution at t = 0.3, 0.5, 0.8, demonstrating the solitary wave solution of the Burgers–Huxley equation. The contour plots of the solution û(t,x) and the exact solution u(t,x) are shown in Figure 8, further illustrating the accuracy of our proposed algorithm.

Figure 4. Schematic diagram of the Lie-series-based neural network algorithm for solving the Burgers–Huxley equation.

Figure 5. (Left) Comparison of the solution û₁(ξ) with the exact solution u(ξ) = ½(1 − tanh(ξ/4)) of (13) on the training set. (Right) The same comparison on the test set.

Figure 6. Curve of the loss function versus the number of iterations for the Burgers–Huxley equation.
Figure 7. (Top) The true solution u(t,x) = ½(1 − tanh(x/4 + 3t/8)) of the Burgers–Huxley equation is on the left, the predicted solution û(t,x) on the right. (Bottom) Comparison of predicted and exact solutions at times t = 0.3, 0.5, and 0.8. (The dashed blue line indicates the exact solution u(t,x), and the solid red line indicates the predicted solution û(t,x).)

Figure 8. Contour plots of the Burgers–Huxley equation solution û(t,x) and the exact solution u(t,x).

To verify the validity and generality of the proposed method, we applied it to two classical equations, the Burgers–Fisher and the Huxley equations. For this purpose, we performed a thorough analysis and obtained strong results that confirm the validity of our method. Specifically, when α = −1, β = 1, λ = −1, η = 0, δ = 1, the Burgers–Fisher equation is as follows:

$$\frac{\partial u}{\partial t} = \frac{\partial^{2}u}{\partial x^{2}} + u\frac{\partial u}{\partial x} + u(1-u), \qquad u(0,x) = \frac{1}{2}\left(1+\tanh\frac{x}{4}\right) \tag{14}$$

The exact solution of (14) is $u(t,x) = \frac{1}{2}\left(1+\tanh\left(\frac{x}{4}+\frac{5t}{8}\right)\right)$. Similarly, using the traveling wave transform ξ = x − ct, problem (14) is transformed into the ODE $u'' + cu' + uu' + u(1-u) = 0$, with initial values $u(0) = \frac{1}{2}$, $u'(0) = \frac{1}{8}$. This ODE is transformed into the form of a system of differential equations,

$$u_{1}' = u_{2}, \qquad u_{2}' = -cu_{2} - u_{1}u_{2} - u_{1}(1-u_{1}) \tag{15}$$

where $u_{1}(\xi) = u$, $c = -\frac{5}{2}$, and initial values $u_{1}(0) = \frac{1}{2}$, $u_{2}(0) = \frac{1}{8}$.
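The exact solution of (14) and the wave speed c = −5/2 can be verified symbolically; the following SymPy snippet is our own check, not part of the paper:

```python
import sympy as sp

# Check (our own verification) that u = (1 + tanh(x/4 + 5t/8))/2 solves the
# Burgers-Fisher problem (14), and that the wave speed c = -5/2 is consistent
# with the traveling-wave ODE  u'' + c u' + u u' + u(1 - u) = 0.
x, t, xi = sp.symbols('x t xi')

u = (1 + sp.tanh(x/4 + 5*t/8)) / 2
pde = sp.diff(u, t) - sp.diff(u, x, 2) - u*sp.diff(u, x) - u*(1 - u)
print(sp.simplify(pde.rewrite(sp.exp)))          # 0

c = -sp.Rational(5, 2)
v = (1 + sp.tanh(xi/4)) / 2                      # wave profile u(xi), xi = x - c t
ode = sp.diff(v, xi, 2) + c*sp.diff(v, xi) + v*sp.diff(v, xi) + v*(1 - v)
print(sp.simplify(ode.rewrite(sp.exp)))          # 0
```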
Of the operator $D = u_{2}\partial_{u_{1}} + \left(\frac{5}{2}u_{2} - u_{1}u_{2} - u_{1}(1-u_{1})\right)\partial_{u_{2}}$ of (15), the part $D_{1} = u_{2}\partial_{u_{1}} + \left(\frac{5}{2}u_{2} - u_{1}\right)\partial_{u_{2}}$ is chosen, and the predicted solutions are

$$\hat{u}_{1}(\xi_{\ell}) = \frac{1}{12}e^{\xi_{\ell}/2}\left(7 - e^{3\xi_{\ell}/2}\right) + \xi_{\ell}N_{1}, \qquad \hat{u}_{2}(\xi_{\ell}) = \frac{1}{24}\left(7e^{\xi_{\ell}/2} - 4e^{2\xi_{\ell}}\right) + \xi_{\ell}N_{2},$$

where the structure of each neural network is a single hidden layer containing 30 neurons, the inputs are 100 equidistant training points ξ_ℓ ∈ [−5, 2], and the test points are 120 points in the interval [−5, 2.2]; the training results are shown in Figure 9. As shown in Figure 10, our method achieves an impressive performance, with the loss function L(θ) reaching 8.861 × 10⁻⁷ in about 700 iterations. This result again illustrates that the solution of the D₁ part of our proposed method captures the nonlinear nature of the solution, thereby reducing the computational cost associated with additional parameters, which is evident from Figure 11. In addition, we provide a three-dimensional representation of the dynamics of the predicted solution û(t,x) together with the exact solution u(t,x) in the interval t ∈ [0, 1] and x ∈ [−5, 2], as shown in Figure 12.

Figure 9. (Left) Comparison of the solution û₁(ξ) with the exact solution u(ξ) = ½(1 + tanh(ξ/4)) of (15) on the training set. (Right) The same comparison on the test set.

Figure 10. Curve of the loss function versus the number of iterations for the Burgers–Fisher equation.

Figure 11. Comparison of the ū₁(ξ) solution of the Burgers–Fisher equation with the exact solution u(ξ).
Figure 12. (Top) The true solution u(t,x) = ½(1 + tanh(x/4 + 5t/8)) of the Burgers–Fisher equation is on the left, the predicted solution û(t,x) on the right. (Bottom) Comparison of predicted and exact solutions at times t = 0.3, 0.5, and 0.8. (The dashed blue line indicates the exact solution u(t,x), and the solid red line indicates the predicted solution û(t,x).)

We investigate the Huxley equation under the conditions α = 0, β = 1, λ = 1, η = 1, δ = 1. The equation is as follows:

$$\frac{\partial u}{\partial t} = \frac{\partial^{2}u}{\partial x^{2}} + u(1-u)(u-1), \qquad u(0,x) = \frac{1}{2}\left(1+\tanh\frac{\sqrt{2}\,x}{4}\right) \tag{16}$$

The exact solution of (16) is $u(t,x) = \frac{1}{2}\left(1+\tanh\left(\frac{\sqrt{2}\,x}{4}-\frac{t}{4}\right)\right)$. Similarly, using the traveling wave transform ξ = x − ct, problem (16) is transformed into the ODE $u'' + cu' + u(1-u)(u-1) = 0$, which in system form reads

$$u_{1}' = u_{2}, \qquad u_{2}' = -cu_{2} - u_{1}(1-u_{1})(u_{1}-1) \tag{17}$$

where the initial values are $u_{1}(0) = \frac{1}{2}$, $u_{2}(0) = \frac{\sqrt{2}}{8}$, and $c = \frac{\sqrt{2}}{2}$; it is clear that $u_{1}(\xi) = u(\xi)$ and $u_{2}(\xi) = u'(\xi)$. In the case of $D_{1} = u_{2}\partial_{u_{1}} - \frac{\sqrt{2}}{2}u_{2}\partial_{u_{2}}$, the system of differential equations is $\bar{u}_{1}' = \bar{u}_{2}$, $\bar{u}_{2}' = -\frac{\sqrt{2}}{2}\bar{u}_{2}$, with initial values $\bar{u}_{1}(0) = \frac{1}{2}$ and $\bar{u}_{2}(0) = \frac{\sqrt{2}}{8}$; this time

$$\bar{u}_{1}(\xi) = \frac{3}{4} - \frac{1}{4}e^{-\xi/\sqrt{2}}, \qquad \bar{u}_{2}(\xi) = \frac{1}{4\sqrt{2}}e^{-\xi/\sqrt{2}}.$$

For predicting the solutions û₁(ξ) and û₂(ξ), two neural networks with the same structure, each with a single hidden layer containing 30 neurons, are trained by the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization technique, which minimizes the loss function L(θ). The input ξ consists of 100 equidistantly spaced points in the interval [−2, 7]. The test set is 150 points in the interval [−2, 7.5]. As shown in Figure 13, our proposed method produced excellent predictions for both the trained predicted and exact solutions.
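As for the previous two models, the stated exact solution of (16) and the D₁-part solution can be verified symbolically; the following SymPy snippet is our own check, not part of the paper:

```python
import sympy as sp

# Check (our own verification) that u = (1 + tanh(sqrt(2)x/4 - t/4))/2 solves the
# Huxley problem (16), and that the stated D1-part solution satisfies
# ubar1' = ubar2, ubar2' = -(sqrt(2)/2) ubar2, ubar1(0) = 1/2, ubar2(0) = sqrt(2)/8.
x, t, xi = sp.symbols('x t xi')

u = (1 + sp.tanh(sp.sqrt(2)*x/4 - t/4)) / 2
pde = sp.diff(u, t) - sp.diff(u, x, 2) - u*(1 - u)*(u - 1)
print(sp.simplify(pde.rewrite(sp.exp)))                              # 0

ub1 = sp.Rational(3, 4) - sp.exp(-xi/sp.sqrt(2)) / 4
ub2 = sp.exp(-xi/sp.sqrt(2)) / (4*sp.sqrt(2))
print(sp.simplify(sp.diff(ub1, xi) - ub2))                           # 0
print(sp.simplify(sp.diff(ub2, xi) + sp.sqrt(2)/2*ub2))              # 0
print(ub1.subs(xi, 0), sp.simplify(ub2.subs(xi, 0) - sp.sqrt(2)/8))  # 1/2, 0
```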
The variation of the loss function throughout the process is depicted in Figure 14, and it can be observed that the loss function decreased remarkably during training. Figure 15 shows the dynamics of û(t,x) together with the exact solution u(t,x), obtained when ξ = x − ct is substituted into û₁(ξ), and compares the predicted solution û(t,x) with the exact solution u(t,x) at t = 0.3, 0.5, 0.8. The contour plot in Figure 16 provides a clearer visualization of the network solution û(t,x) compared to the exact solution u(t,x).

Figure 13. (Left) Comparison of the solution û₁(ξ) with the exact solution u(ξ) = ½(1 + tanh(√2ξ/4)) of (17) on the training set. (Right) The same comparison on the test set.

Figure 14. Curve of the loss function versus the number of iterations for the Huxley equation.

Figure 15. (Top) The true solution u(t,x) = ½(1 + tanh(√2x/4 − t/4)) of the Huxley equation is on the left, the predicted solution û(t,x) on the right. (Bottom) Comparison of predicted and exact solutions at times t = 0.3, 0.5, and 0.8. (The dashed blue line indicates the exact solution u(t,x), and the solid red line indicates the predicted solution û(t,x).)

Figure 16. Contour plots of the Huxley equation solution û(t,x) and the exact solution u(t,x).

4.
Discussion and Conclusions

The exponential growth of information data has made limited data a significant issue in various fields, especially in data-driven applications. Addressing this challenge has become a critical area of research in recent times. To contribute towards finding solutions to this problem, this paper proposes a novel method for solving the Burgers–Huxley equation using a neural network based on Lie series in Lie groups of differential equations, an emerging field with great potential for solving complex problems. To the best of our knowledge, this study represents the first time the Burgers–Huxley equation has been solved using a Lie-series-based neural network algorithm. In physics, engineering, and biology, the Burgers–Huxley equation is a well-known and frequently utilized mathematical model. Our approach offers a unique perspective on solving this equation by adding boundary or initial value terms to the loss function, which leads to more accurate predictions and a better understanding of the underlying system. This research opens up new avenues for further exploration of the Lie-series-based neural network algorithm, specifically regarding its applications to other complex models beyond the Burgers–Huxley equation.

In this study, we present a novel method for obtaining a differentiable closed analytical form that provides an effective foundation for further research. The proposed approach is straightforward to use and evaluate. To verify the effectiveness of the suggested method, we applied it to two classic models, the Burgers–Fisher and Huxley equations, which have well-known exact solutions. The proposed algorithm exhibits remarkable potential in capturing the nonlinear nature of equations and accelerating the computation process of neural networks.
The performance of our method is demonstrated in Figures 3 and 11, which show how the proposed algorithm captures the nonlinear behavior of the equations effectively and speeds up the computation of the subsequent neural networks. To further evaluate the effectiveness of the proposed technique, we plotted the relationship between the loss function and the number of iterations in Figures 6, 10 and 14. Our results indicate that, under the influence of the Lie series in Lie groups of differential equations, our algorithm converges quickly and achieves more precise solutions with less data. Moreover, the accuracy of the obtained solutions is significant, and the generalization ability of the neural network is demonstrated by its ability to maintain high accuracy even outside the training domain, as shown in Figures 5, 9 and 13. We compared the performance of each neural network using few parameters (60 weight parameters and 31 bias parameters) with the exact solution of the problem. Our results highlight that the addition of the Lie series in Lie groups of differential equations remarkably enhances the ability of the neural network to solve a given equation.

Undoubtedly, the proposed method has several limitations that need to be considered carefully. Firstly, the method requires the transformation of PDEs into ODEs before applying the suggested algorithm. Although the results obtained after this transformation are preliminary, they provide useful insights for researchers. Additionally, an inverse transformation must be employed to produce the final solution û(t,x), taking into account the range of values of the various variables. The choice of the decomposed operator D₁ may also influence the outcomes. Secondly, the current study only addresses nonlinear diffusion problems of the type $u_{t} = -\alpha u^{\delta}u_{x} + u_{xx} + \beta u\left(1-u^{\delta}\right)\left(\eta u^{\delta}-\lambda\right)$, and the suitability of the technique was assessed via the computation of the loss function.
Therefore, the applicability of the method to other types of nonlinear PDEs remains to be investigated, and further adjustments may be required to accommodate such problems. Despite these inherent challenges, our work offers a promising strategy for solving complex mathematical models using neural network algorithms based on Lie series. The computational performance of the proposed algorithm is noteworthy, achieving high solution accuracy at a relatively low time and parameter cost. In light of these findings, it is worth considering the prospect of applying the algorithm to financial modeling, where accurate predictions can have a significant impact.

Moving forward, there is ample scope for extending and improving the proposed algorithm. Future research could explore how to optimize its performance by addressing its limitations and weaknesses for nonlinear PDE problems. For example, adopting a different network architecture, such as a convolutional or recurrent neural network, may improve the efficiency and accuracy of the method. Additionally, expanding the method's applicability beyond nonlinear diffusion problems may yield valuable insights into other areas of mathematical modeling. In summary, we believe that our work presents an exciting avenue for future research. By building upon our findings and addressing the limitations of the proposed algorithm, more sophisticated techniques can be developed for solving complex mathematical models in finance and other areas. Solving the above problems is the main goal of our next research work.

Author Contributions: Conceptualization, Y.W. and T.C.; methodology, Y.W.; software, Y.W.; validation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W. and T.C. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National Natural Science Foundation of China, grant number 11571008.
Data Availability Statement: The data used to support the findings of this study are included within the article. The code is available at https://github.com/yingWWen/Study-of-Burger-Huxley-Equation-using-neural-network-method (accessed on 14 March 2023).

Acknowledgments: The authors thank the support of the National Natural Science Foundation of China [grant number 11571008].

Conflicts of Interest: As far as we know, there are no conflicts of interest, financial or otherwise. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References
1. Ockendon, J.R.; Howison, S.; Lacey, A.; Movchan, A. Applied Partial Differential Equations; Oxford University Press on Demand: Oxford, UK, 2003.
2. Mattheij, R.M.; Rienstra, S.W.; Boonkkamp, J.T.T. Partial Differential Equations: Modeling, Analysis, Computation; SIAM: Philadelphia, PA, USA, 2005.
3. Duffy, D.J. Finite Difference Methods in Financial Engineering: A Partial Differential Equation Approach; John Wiley & Sons: Hoboken, NJ, USA, 2013.
4. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [CrossRef]
5. Mahesh, B. Machine learning algorithms: A review. Int. J. Sci. Res. (IJSR) 2020, 9, 381–386.
6. Yegnanarayana, B. Artificial Neural Networks; PHI Learning Pvt. Ltd.: New Delhi, India, 2009.
7. Zou, J.; Han, Y.; So, S.S. Overview of artificial neural networks. In Artificial Neural Networks: Methods and Applications; Humana Press: Totowa, NJ, USA, 2009; pp. 14–22.
8. Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Netw. 1991, 4, 251–257. [CrossRef]
9. Lagaris, I.E.; Likas, A.; Fotiadis, D.I. Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. 1998, 9, 987–1000. [CrossRef] [PubMed]
10. Chakraverty, S.; Mall, S.
Artificial Neural Networks for Engineers and Scientists: Solving Ordinary Differential Equations; CRC Press: Boca Raton, FL, USA, 2017.
11. Yang, L.; Meng, X.; Karniadakis, G.E. B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. J. Comput. Phys. 2021, 425, 109913. [CrossRef]
12. Blechschmidt, J.; Ernst, O.G. Three ways to solve partial differential equations with neural networks: A review. GAMM-Mitteilungen 2021, 44, e202100006. [CrossRef]
13. Han, J.; Jentzen, A. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Commun. Math. Stat. 2017, 5, 349–380.
14. Nabian, M.A.; Meidani, H. A deep learning solution approach for high-dimensional random differential equations. Probabilistic Eng. Mech. 2019, 57, 14–25. [CrossRef]
15. Chen, R.T.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D.K. Neural ordinary differential equations. Adv. Neural Inf. Process. Syst. 2018, 31, 6572–6583.
16. Sirignano, J.; Spiliopoulos, K. DGM: A deep learning algorithm for solving partial differential equations. J. Comput. Phys. 2018, 375, 1339–1364. [CrossRef]
17. Gorikhovskii, V.; Evdokimova, T.; Poletansky, V. Neural networks in solving differential equations. J. Phys. Conf. Ser. 2022, 2308, 012008. [CrossRef]
18. Huang, Z.; Liang, M.; Lin, L. On Robust Numerical Solver for ODE via Self-Attention Mechanism. arXiv 2023, arXiv:2302.10184.
19. Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A deep learning library for solving differential equations. SIAM Rev. 2021, 63, 208–228. [CrossRef]
20. Berg, J.; Nyström, K. A unified deep artificial neural network approach to partial differential equations in complex geometries. Neurocomputing 2018, 317, 28–41. [CrossRef]
21. Ruthotto, L.; Haber, E. Deep neural networks motivated by partial differential equations. J. Math. Imaging Vis. 2020, 62, 352–364. [CrossRef]
22. Quan, H.D.; Huynh, H.T.
Solving partial differential equation based on extreme learning machine. Math. Comput. Simul. 2023, 205, 697–708. [CrossRef]
23. Tang, K.; Wan, X.; Yang, C. DAS-PINNs: A deep adaptive sampling method for solving high-dimensional partial differential equations. J. Comput. Phys. 2023, 476, 111868. [CrossRef]
24. Slavova, A.; Zecc, P. Travelling wave solution of polynomial cellular neural network model for Burgers–Huxley equation. C. R. l'Acad. Bulg. Sci. 2012, 65, 1335–1342.
25. Panghal, S.; Kumar, M. Approximate analytic solution of Burger Huxley equation using feed-forward artificial neural network. Neural Process. Lett. 2021, 53, 2147–2163. [CrossRef]
26. Kumar, H.; Yadav, N.; Nagar, A.K. Numerical solution of Generalized Burger–Huxley & Huxley's equation using Deep Galerkin neural network method. Eng. Appl. Artif. Intell. 2022, 115, 105289.
27. Olver, P.J. Applications of Lie Groups to Differential Equations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1993; Volume 107.
28. Gröbner, W.; Knapp, H. Contributions to the Method of Lie Series; Bibliographisches Institut Mannheim: Mannheim, Germany, 1967; Volume 802.
29. Wen, Y.; Chaolu, T.; Wang, X. Solving the initial value problem of ordinary differential equations by Lie group based neural network method. PLoS ONE 2022, 17, e0265992. [CrossRef] [PubMed]
30. Wang, X.; Zhu, Z.; Lu, Y. Solitary wave solutions of the generalised Burgers–Huxley equation. J. Phys. A Math. Gen. 1990, 23, 271. [CrossRef]
