MATHEMATICAL AND COMPUTER MODELLING OF DYNAMICAL SYSTEMS 2020, VOL. 26, NO. 2, 144–168
https://doi.org/10.1080/13873954.2019.1710715

ARTICLE

On the combination of kernel principal component analysis and neural networks for process indirect control

A. Errachdi, S. Slama and M. Benrejeb
Automation Research Laboratory, Tunis El Manar University, Tunis, Tunisia
CONTACT: A. Errachdi, errachdi_ayachi@yahoo.fr, Automation Research Laboratory, Tunis El Manar University, BP 37, le Belvédère, Tunis 1002, Tunisia

ABSTRACT
A new adaptive kernel principal component analysis (KPCA) scheme for non-linear discrete system control is proposed. The proposed approach can be viewed as a new data pre-processing technique: the input vector of the neural network controller is pre-processed by the KPCA method, and the resulting reduced neural network controller is applied in indirect adaptive control. The influence of this input data pre-processing on the accuracy of the neural network controller is examined on numerical examples, namely a single-input single-output non-linear discrete system with time-varying parameters and a multi-input multi-output system. Using the KPCA method, a significant reduction in both the control error and the identification error is obtained. The lowest mean squared error and mean absolute error show that the KPCA neural network with the sigmoid kernel function performs best.

ARTICLE HISTORY: Received 4 March 2019; Accepted 28 December 2019
KEYWORDS: Neural networks; modelling; indirect control; KPCA; reduction; non-linear system

1. Introduction

This work is concerned with adaptive control of non-linear discrete systems using neural networks. The indirect adaptive control structure is based on two neural network blocks, one identifying the dynamic behaviour of the system and one acting as the system controller [1–6]. The size of the neural network model or of the neural network controller can, however, accelerate or slow down the training phase. This problem of reducing the high dimension of a neural network has been widely discussed and addressed by different techniques [7–37].

The first step in a reduction method is feature selection (new features are selected from the original inputs) or feature extraction (new features are transformed from the original inputs). In modelling, all available indicators can be used, but correlated or irrelevant features may deteriorate the generalization performance of any model [7–18].

Many linear dimensionality reduction techniques have been proposed. For instance, Kohonen self-organizing feature maps provide a way of representing multidimensional data in much lower-dimensional spaces [19]; curvilinear component analysis [20] and curvilinear distance analysis [21] have been proposed to shrink the original dimension of face images and of data for classification in medical imaging [22]; and principal component analysis (PCA) has been widely used for reducing high dimension in many applications [16–18,23–25].

PCA is a well-known method for feature extraction [23,24]. By calculating the eigenvectors of the covariance matrix of the original inputs, PCA linearly transforms the original high-dimensional input vector into a new low-dimensional one whose components are uncorrelated.
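To make this transformation concrete, the following is a minimal sketch of the eigendecomposition-based reduction just described, assuming a NumPy environment; the names `pca_reduce`, `X` and `n_components` are illustrative, not from the paper.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the leading principal components."""
    Xc = X - X.mean(axis=0)                 # centre the original inputs
    C = np.cov(Xc, rowvar=False)            # covariance matrix of the inputs
    eigvals, eigvecs = np.linalg.eigh(C)    # eigh, since C is symmetric
    order = np.argsort(eigvals)[::-1]       # decreasing eigenvalue order
    P = eigvecs[:, order[:n_components]]    # retained eigenvectors
    return Xc @ P                           # uncorrelated low-dimensional features
```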
The basis function orders of PCA, as a typical approach, are the lowest in the sense of model dimension reduction [16–18,23–25]. PCA has been used in many other applications. In the study by Zhang et al. [15], a hybrid modelling strategy combines a decoupled non-linear radial basis function neural network model based on PCA with a linear autoregressive exogenous model. PCA reduces the cross-validation time required to identify optimal model hyper-parameters [25]. In the study by Seerapu and Srinivas [26], it was combined with linear discriminant analysis to improve the reduction. In the study by Peleato et al. [27], fluorescence data coupled with PCA-based neural networks were investigated for improved predictability of drinking water disinfection by-products. In the study by Qinshu et al. [14], PCA was used for feature selection, together with a grid search and k-fold cross-validation approach for parameter optimization in a support vector machine. Finally, other linear dimensionality reduction techniques, such as multidimensional scaling and probabilistic PCA, have been applied to user authentication using keystroke dynamics [28] and to other problems [29].

However, PCA is a linear time/space separation method and cannot be directly applied to non-linear systems [30]. Non-linear PCA has therefore been developed using different algorithms. Kernel principal component analysis (KPCA) is a non-linear PCA built on the kernel method. The kernel method was originally used for the support vector machine (SVM) and was later generalized to many algorithms expressed in terms of dot products, such as PCA. Specifically, KPCA first maps the original inputs into a high-dimensional feature space using the kernel method and then performs PCA in that feature space. Linear PCA in the high-dimensional feature space corresponds to a non-linear PCA in the original input space.

More recently, another linear transformation method, independent component analysis (ICA), has been developed. Instead of producing uncorrelated components, ICA attempts to obtain statistically independent components in the transformed vectors. ICA was originally developed for blind source separation and was later generalized to feature extraction [7].

KPCA is an effective method for tackling non-linear data [31]. In the study by Chakour et al. [32], an adaptive KPCA algorithm is proposed for dynamic process monitoring, combining two existing algorithms: recursive weighted PCA and moving-window KPCA. Fault detection of non-linear systems using the KPCA method to extract a reduced number of measurements from the training data is studied in [33]. In the study by Xiao and He [34], a neural-network-based fault diagnosis approach for analog circuits is developed, using maximal-class-separability KPCA as a preprocessor to reduce the dimensionality of candidate features, so that the optimal features with maximal class separability feed the neural networks. In the study by Reddy and Ravi [36], a differential evolution (DE)-trained kernel principal component wavelet neural network and DE-trained kernel binary quantile regression are proposed for classification. In the proposed DE-KPCWNN technique, KPCA is applied to the input data to obtain kernel principal components, on which the WNN operates.
In the study by Klevecka and Lelis [37], an algorithm for preprocessing neural network input data is designed around the specific aspects of teletraffic and the properties of neural networks. Its practical application to forecasting telecommunication data sequences shows that data preprocessing decreases the learning time and increases the plausibility and accuracy of the forecasts.

In this paper, an indirect adaptive control scheme based on neural networks is used. The neural network relies on an adaptive learning rate and a reduced derivative of the activation function. The weights of the neural network model and of the neural network controller are updated from the identification error and the control error, respectively, and are used to generate the appropriate control. On the one hand, in various studies [1,2,5,6,15,38,39], the authors developed algorithms for adaptive indirect control without any preprocessing and did not take the high dimension of the neural network into account. On the other hand, in the study by Errachdi and Benrejeb [4], an algorithm was developed to accelerate the training phase of adaptive indirect control based on a neural network controller, using a variable learning rate and a Taylor expansion of the derivative of the activation function, but it did not address the high dimension either. That is why, in this paper, we propose a new algorithm that reduces the input vector of the neural controller in the control system using KPCA. The proposed data preprocessing scheme decreases the learning time and increases the accuracy of the system control.

The paper is organized as follows. After this introduction, Section 2 reviews the proposed KPCA method for system control and develops the proposed KPCA-based neural network controller. Section 3 details the proposed algorithm. Section 4 presents examples of non-linear systems illustrating the efficiency of the proposed method. Section 5 concludes the paper.

2. The proposed KPCA neural network controller approach

On the basis of its input and output relations, a discrete non-linear system can be expressed by a NARMA (Non-linear Autoregressive Moving Average) model [4,35]:

$$ y(k+1) = f\big(y(k), \ldots, y(k-n_y),\; u(k), \ldots, u(k-n_u)\big) \qquad (1) $$

where f(·) is the non-linear mapping specified by the model, y(k) and u(k) are the output and the input of the system, respectively, k is the discrete time, and n_y and n_u are the numbers of past output and input samples required for prediction.

The aim of this paper is to find a control law u(k) for the non-linear system given by Equation (1), based on the KPCA approach, such that the system output y(k) tracks, where possible, the desired value r(k).
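A minimal sketch of rolling a NARMA model of the form (1) forward, assuming NumPy; the map `f` in the example is a placeholder non-linearity, not the paper's plant.

```python
import numpy as np

def simulate_narma(f, u, n_y, n_u, y0=0.0):
    """Iterate Eq. (1): y(k+1) = f(y(k),...,y(k-n_y), u(k),...,u(k-n_u))."""
    N = len(u)
    y = np.full(N, y0)
    for k in range(max(n_y, n_u), N - 1):
        past_y = y[k - n_y:k + 1][::-1]      # y(k), ..., y(k - n_y)
        past_u = u[k - n_u:k + 1][::-1]      # u(k), ..., u(k - n_u)
        y[k + 1] = f(past_y, past_u)
    return y

# Toy example (placeholder dynamics):
f = lambda ys, us: 0.5 * np.tanh(ys[0]) + 0.3 * us[0]
y = simulate_narma(f, np.sin(0.1 * np.arange(200)), n_y=2, n_u=1)
```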
The indirect control architecture is shown in Figure 1. The weights of the neural network model and of the neural network controller are trained by different errors: e(k) is the identification error, ê_c(k) is the estimated tracking error and e_c(k) is the tracking error [4].

Figure 1. The architecture of indirect neural control.

The architecture shown in Figure 1 involves two neural blocks. Indeed, the weights of the neural model are adjusted by the identification error e(k), whereas the weights of the neural controller are trained by the tracking error e_c(k) [4]. A multi-layer perceptron is used both in the neural model and in the neural controller. Each block consists of three layers, and the sigmoid activation function s(·) is used for all neurons [4].

2.1. The neural network model

The principle of the neural network model is given in Figure 2. The j-th output of the hidden layer is described as follows:

$$ h_j = \sum_{i=1}^{n_1} w_{ji}\, x_i, \qquad j = 1, 2, \ldots, n_2 \qquad (2) $$

where n_1 is the number of nodes of the input layer, n_2 is the number of nodes of the hidden layer and w_{ji} is a hidden weight. The input vector of the neural network model is

$$ x = [\, u(k),\; u(k-1),\; u(k-2),\; \ldots\,]^T \qquad (3) $$

where u(k) is the neural network controller output.

Figure 2. The principle of neural network model.

The output of the neural network model is given by the following equation:

$$ yr(k+1) = \lambda\, s\Big( \sum_{j=1}^{n_2} w_{1j}\, s(h_j) \Big) \qquad (4) $$

where λ is a scaling coefficient and w_{1j} is an output weight. The compact form of the output is

$$ yr(k+1) = \lambda\, s(h_1) = \lambda\, s\big[ w_1^T S(Wx) \big] \qquad (5) $$

with x = [x_i]^T, i = 1, ..., n_1; W = [w_{ji}], i = 1, ..., n_1, j = 1, ..., n_2; S(Wx) = [s(h_j)]^T, j = 1, ..., n_2; and w_1 = [w_{1j}]^T, j = 1, ..., n_2.

The identification error e(k) is given by

$$ e(k) = y(k) - yr(k) \qquad (6) $$

The cost function is

$$ E = \frac{1}{2} \sum_{k=1}^{N} e^2(k) \qquad (7) $$

where N is the number of observations. The output weights are updated by

$$ w_{1j}(k+1) = w_{1j}(k) + \Delta w_{1j}(k) \qquad (8) $$

where Δw_{1j}, j = 1, ..., n_2, is obtained by minimizing the cost function:

$$ \Delta w_{1j} = -\eta(k) \frac{\partial E(k)}{\partial w_{1j}} = -\eta(k) \frac{\partial E(k)}{\partial e(k)} \frac{\partial e(k)}{\partial h_1} \frac{\partial h_1}{\partial w_{1j}} = \lambda\, \eta(k)\, e(k)\, s'(h_1)\, S(Wx) \qquad (9) $$

Here η(k) is the variable learning rate for the weights of the neural network model, 0 ≤ η(k) ≤ 1, given by

$$ \eta(k) = \frac{1}{\lambda^2\, s'^2(h_1) \big[ S^T(Wx)\, S(Wx) + w_1^T\, S'(Wx)\, S'^T(Wx)\, w_1\; x^T x \big]} \qquad (10) $$

and s'(h_1) is the derivative of s(h_1):

$$ s'(h_1) = s(h_1)\big(1 - s(h_1)\big) = \frac{e^{-h_1}}{(1 + e^{-h_1})^2} \approx \frac{1}{4} - \frac{h_1^2}{16} + O(h_1^4) \qquad (11) $$

The hidden weights are updated by

$$ w_{ji}(k+1) = w_{ji}(k) + \Delta w_{ji}(k) \qquad (12) $$

where Δw_{ji} is given by

$$ \Delta w_{ji} = -\eta(k) \frac{\partial E(k)}{\partial w_{ji}} = -\eta(k) \frac{\partial E(k)}{\partial e(k)} \frac{\partial e(k)}{\partial h_1} \frac{\partial h_1}{\partial h_j} \frac{\partial h_j}{\partial w_{ji}} = \lambda\, \eta(k)\, e(k)\, s'(h_1)\, S'(Wx)\, w_1\, x^T \qquad (13) $$

with S'(Wx) = diag[s'(h_j)], j = 1, ..., n_2.

For the stability of the neural network model, a Lyapunov analysis is carried out. Let us define the discrete Lyapunov function

$$ V(k) = E(k) = \frac{1}{2} e^2(k) \qquad (14) $$

where e(k) is the identification error given by Equation (6). The change in the Lyapunov function is

$$ \Delta V(k) = V(k+1) - V(k) = \frac{1}{2}\big( e^2(k+1) - e^2(k) \big) \qquad (15) $$

The identification error difference can be represented by

$$ \Delta e(k) = e(k+1) - e(k) \approx -\eta(k) \left\| \frac{\partial yr(k)}{\partial w_i(k)} \right\|^2 e(k) \qquad (16) $$

where w_i(k) denotes the synaptic weights of the neural network identifier (w_{1j}(k), w_{ji}(k)). Using Equation (16), the identification error becomes

$$ e(k+1) = e(k) - \eta(k)\, \Theta(k)\, e(k) \qquad (17) $$

with

$$ \Theta(k) = \lambda^2\, s'^2(h_1) \big[ S^T(Wx)\, S(Wx) + w_1^T\, S'(Wx)\, S'^T(Wx)\, w_1\; x^T x \big] \qquad (18) $$

From Equations (17) and (18), the convergence of the identification error e(k), i.e. lim_{k→+∞} e(k) = 0, is guaranteed if 0 < η(k) < 2/Θ(k), with V(k) > 0 from Equation (14). A suitable online algorithm is obtained by taking the variable learning rate η(k) as 1/Θ(k), as in Equation (10).
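A minimal sketch of one online update of this model, implementing Equations (2)–(13) with the adaptive learning rate (10) as reconstructed above, assuming NumPy; shapes, initialization and the name `model_step` are illustrative.

```python
import numpy as np

def s(h):
    return 1.0 / (1.0 + np.exp(-h))              # sigmoid activation

def model_step(x, y_true, W, w1, lam=1.0):
    """One forward pass and weight update of the neural network model."""
    h = W @ x                                    # Eq. (2): hidden pre-activations
    Sh = s(h)                                    # S(Wx)
    h1 = w1 @ Sh
    yr = lam * s(h1)                             # Eq. (5): model output
    e = y_true - yr                              # Eq. (6): identification error
    ds1 = s(h1) * (1.0 - s(h1))                  # s'(h1)
    dS = Sh * (1.0 - Sh)                         # diagonal of S'(Wx)
    theta = lam**2 * ds1**2 * (Sh @ Sh + (w1 * dS) @ (w1 * dS) * (x @ x))
    eta = 1.0 / theta                            # Eq. (10): variable learning rate
    dw1 = lam * eta * e * ds1 * Sh               # Eq. (9)
    dW = lam * eta * e * ds1 * np.outer(w1 * dS, x)  # Eq. (13)
    return W + dW, w1 + dw1, yr, e
```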
2.2. The KPCA neural network controller

The PCA technique is a lower-dimensional projection method that can be used in multivariate data mining [25,30–32,40]. The main idea behind PCA is to represent multidimensional data with a smaller number of variables retaining the main features of the data. It is inevitable that by reducing dimensionality some features of the data will be lost. PCA projects multidimensional data into a lower-dimensional space, retaining as much of the variability of the data as possible [4,25,30–32,40]. However, PCA is a linear technique and cannot capture the non-linear structure in a data set. For this reason, a non-linear generalization has been proposed using the kernel method, introduced for computing the principal components of a data set mapped non-linearly into some high-dimensional feature space. Because the sample data are implicitly mapped from the input space to a higher-dimensional feature space ζ, KPCA is implemented efficiently by virtue of the kernel trick and can be solved as an eigenvalue problem of its kernel matrix.

In this section, we propose to reduce the input vector of the neural network controller in the adaptive indirect control structure. The new architecture of the adaptive indirect KPCA neural control is given in Figure 3.

Figure 3. The new architecture of indirect neural control.

We recall that the input vector of the neural network controller is

$$ z = [\, r(k),\; r(k-1),\; r(k-2),\; \ldots\,]^T \qquad (19) $$

where r(k) is the desired value. For the input data {z_j}, j = 1, ..., l, φ represents the non-linear mapping into ζ. The covariance matrix of the projected features is defined as

$$ C = \frac{1}{l} \sum_{j=1}^{l} \phi(z_j)\, \phi(z_j)^T \qquad (20) $$

Its eigenvalues and eigenvectors are given by

$$ C p_k = \lambda_k\, p_k, \qquad k = 1, \ldots, l \qquad (21) $$

From Equation (20), Equation (21) may be written as

$$ \frac{1}{l} \sum_{j=1}^{l} \phi(z_j) \big( \phi(z_j)^T p_k \big) = \lambda_k\, p_k \qquad (22) $$

The eigenvector p_k can be rewritten as

$$ p_k = \sum_{j=1}^{l} \alpha_j\, \phi(z_j) \qquad (23) $$

with α_j, j = 1, ..., l, the expansion coefficients. Equation (21) can then be rewritten as

$$ \frac{1}{l} \sum_{j=1}^{l} \phi(z_j) \Big( \phi(z_j)^T \sum_{i=1}^{l} \alpha_i\, \phi(z_i) \Big) = \lambda_k \sum_{i=1}^{l} \alpha_i\, \phi(z_i) \qquad (24) $$

The kernel function kr(z_i, z_j) is defined as

$$ kr(z_i, z_j) = \phi(z_i)^T \phi(z_j) \qquad (25) $$

Multiplying Equation (24) on the left by φ(z_d)^T gives

$$ \frac{1}{l} \sum_{j=1}^{l} \phi(z_d)^T \phi(z_j) \Big( \phi(z_j)^T \sum_{i=1}^{l} \alpha_i\, \phi(z_i) \Big) = \lambda_k \sum_{i=1}^{l} \alpha_i\, \phi(z_d)^T \phi(z_i) \qquad (26) $$

which, using Equation (25), becomes

$$ \frac{1}{l} \sum_{j=1}^{l} kr(z_d, z_j) \sum_{i=1}^{l} \alpha_i\, kr(z_j, z_i) = \lambda_k \sum_{i=1}^{l} \alpha_i\, kr(z_d, z_i) \qquad (27) $$

with kr(z_d, z_i) = φ(z_d)^T φ(z_i). The resulting kernel principal components can be calculated as

$$ x_r(k) = \phi(z)^T p_k = \sum_{i=1}^{l} \alpha_i\, kr(z, z_i) \qquad (28) $$

The reduced signal given by Equation (28) constitutes the input vector of the neural network controller. This dimensionality reduction technique is employed on the feature vectors before they are fed to the controller as the input

$$ x_1 = [\, x_r(k),\; x_r(k-1),\; x_r(k-2),\; \ldots\,]^T \qquad (29) $$

The primary purpose of data pre-processing is to modify the input variables so that they better match the predicted output. The main purpose of this transformation is to modify the distribution of the network input parameters without losing much information.
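A minimal sketch of this KPCA feature extraction, Equations (20)–(28), using the kernel-matrix centring of Equation (40) below, assuming NumPy; the names `kpca_fit`, `kpca_project` and `Z` are illustrative.

```python
import numpy as np

def kpca_fit(Z, kernel, s_comp):
    """Return the s_comp leading centred-kernel eigenvectors (alpha)."""
    l = len(Z)
    K = np.array([[kernel(zi, zj) for zj in Z] for zi in Z])    # Eq. (25)
    one_l = np.full((l, l), 1.0 / l)
    Kc = K - one_l @ K - K @ one_l + one_l @ K @ one_l          # Eq. (40): centring
    lam, alpha = np.linalg.eigh(Kc)
    order = np.argsort(lam)[::-1][:s_comp]                      # leading eigenpairs
    alpha = alpha[:, order] / np.sqrt(np.maximum(lam[order], 1e-12))
    return alpha

def kpca_project(z, Z, alpha, kernel):
    """Eq. (28): x_r = sum_i alpha_i kr(z, z_i), one value per component."""
    kz = np.array([kernel(z, zi) for zi in Z])
    return kz @ alpha
```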
Using the reduced input vector x_1, the j-th output of the hidden layer of the controller is described as follows:

$$ h_{cj} = \sum_{i=1}^{n_3} v_{ji}\, x_{1i}, \qquad j = 1, \ldots, n_4 \qquad (30) $$

where n_3 is the number of nodes of the input layer and v_{ji} is a hidden weight. Similarly, the output of the neural controller is given by the following equation:

$$ u(k) = \lambda_c\, s\Big( \sum_{j=1}^{n_4} v_{1j}\, s(h_{cj}) \Big) = \lambda_c\, s\Big( \sum_{j=1}^{n_4} v_{1j}\, s\Big( \sum_{i=1}^{n_3} v_{ji}\, x_{1i} \Big) \Big) \qquad (31) $$

where n_4 is the number of nodes of the hidden layer, λ_c is a scaling coefficient and v_{1j} is an output weight. The compact form of the control input to the system is

$$ u(k) = \lambda_c\, s(h_{c1}) = \lambda_c\, s\big[ v_1^T S(V x_1) \big] \qquad (32) $$

with x_1 = [x_{1i}]^T, i = 1, ..., n_3; V = [v_{ji}], i = 1, ..., n_3, j = 1, ..., n_4; S(Vx_1) = [s(h_{cj})]^T, j = 1, ..., n_4; and v_1 = [v_{1j}]^T, j = 1, ..., n_4.

The tracking error e_c(k) is given by

$$ e_c(k) = y(k) - r(k) \qquad (33) $$

where r(k) is the desired output. The updated weights of the neural controller are obtained by minimizing the cost function

$$ E_c = \frac{1}{2} \sum_{k=1}^{N} e_c^2(k) \qquad (34) $$

where N is the number of observations. The output weights are updated by

$$ v_{1j}(k+1) = v_{1j}(k) + \Delta v_{1j}(k) \qquad (35) $$

with Δv_{1j}, j = 1, ..., n_4, the incremental change of the output weights:

$$ \Delta v_{1j} = -\eta_c(k) \frac{\partial E_c(k)}{\partial v_{1j}} = \eta_c(k)\, \lambda_c\, e_c(k)\, s'(h_1)\, w_{1j}\, S'(Wx)\, w_{ji}\, s'(h_{c1})\, S(V x_1) \qquad (36) $$

where η_c(k) is the variable learning rate for the weights of the neural network controller, 0 ≤ η_c(k) ≤ 1, given by

$$ \eta_c(k) = \frac{1}{\lambda_c^2\, s'^2(h_{c1})\, s'^2(h_1)\, w_{1j}^2\, w_{ji}^2\, S'^2(Wx) \big[ S^T(Vx_1)\, S(Vx_1) + v_1^T\, S'(Vx_1)\, S'^T(Vx_1)\, v_1\; x_1^T x_1 \big]} \qquad (37) $$

Concerning the hidden weights, they are updated by

$$ v_{ji}(k+1) = v_{ji}(k) + \Delta v_{ji}(k) \qquad (38) $$

where Δv_{ji} is given by

$$ \Delta v_{ji} = -\eta_c(k) \frac{\partial E_c(k)}{\partial v_{ji}} = \eta_c(k)\, \lambda_c\, e_c(k)\, s'(h_1)\, w_{1j}\, S'(Wx)\, w_{ji}\, s'(h_{c1})\, v_{1j}\, S'(V x_1)\, x_1^T \qquad (39) $$

with S'(Vx_1) = diag[s'(h_{cj})], j = 1, ..., n_4.

Let Ψ = [φ(z_1), ..., φ(z_l)], 1_l = (1/l)_{l×l} and Γ = Ψ^T Ψ. The centred kernel matrix Γ̃ is defined as

$$ \tilde{\Gamma} = \Gamma - 1_l\, \Gamma - \Gamma\, 1_l + 1_l\, \Gamma\, 1_l \qquad (40) $$

with Γ_{ij} = φ(z_i)^T φ(z_j) = kr(z_i, z_j). In this paper, different kernel functions are used, as defined in Table 1.

Table 1. The usual kernel functions.
Function                        Kernel
Radial basis function kernel    kr(z_i, z_j) = exp( -||z_i - z_j||^2 / (2σ^2) )
Polynomial kernel               kr(z_i, z_j) = (a z_i^T z_j + b)^n
Linear kernel                   kr(z_i, z_j) = z_i^T z_j
Sigmoid kernel                  kr(z_i, z_j) = tanh(a z_i^T z_j + b)

The principal components are the first s eigenvectors associated with the highest eigenvalues and are often sufficient to describe the structure of the data. The number s satisfies the Inertia Percentage Criterion (IPC) [25]:

$$ s = \arg\big( \mathrm{IPC} \geq 99 \big) \qquad (41) $$

with

$$ \mathrm{IPC} = 100\; \frac{\sum_{i=1}^{s} \lambda_i}{\sum_{i=1}^{l} \lambda_i} \qquad (42) $$

We have thus developed a neural network controller based on a reduced input vector and a variable learning rate. Consequently, this approach increases the training speed.
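A minimal sketch of the component-count rule of Equations (41)–(42), assuming the eigenvalues come from the centred kernel matrix and NumPy is available; the name `retained_components` is illustrative.

```python
import numpy as np

def retained_components(eigvals, threshold=99.0):
    """Smallest s whose Inertia Percentage Criterion reaches the threshold."""
    lam = np.sort(np.asarray(eigvals))[::-1]         # decreasing eigenvalues
    ipc = 100.0 * np.cumsum(lam) / np.sum(lam)       # Eq. (42), for each s
    return int(np.searchsorted(ipc, threshold) + 1)  # Eq. (41)
```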
For the stability of the neural network controller, a Lyapunov analysis is carried out as well. Let us define the discrete Lyapunov function

$$ V_c(k) = E_c(k) = \frac{1}{2} e_c^2(k) \qquad (43) $$

where e_c(k) is the control error. The change in the Lyapunov function is

$$ \Delta V_c(k) = V_c(k+1) - V_c(k) = \frac{1}{2}\big( e_c^2(k+1) - e_c^2(k) \big) \qquad (44) $$

The control error difference can be represented by

$$ \Delta e_c(k) = e_c(k+1) - e_c(k) \approx -\eta_c(k) \Big( \frac{\partial e_c(k)}{\partial v_c(k)} \Big)^T \frac{\partial y(k)}{\partial u_c(k)} \frac{\partial u_c(k)}{\partial v_c(k)}\, e_c(k) \qquad (45) $$

where v_c(k) denotes the synaptic weights of the neural network controller (v_{1j}(k) and v_{ji}(k)). Using Equation (45), the control error becomes

$$ e_c(k+1) = e_c(k) - \eta_c(k)\, \Theta_c(k)\, e_c(k) \qquad (46) $$

with

$$ \Theta_c(k) = \lambda_c^2\, s'^2(h_{c1})\, s'^2(h_1)\, w_{1j}^2\, w_{ji}^2\, S'^2(Wx) \big[ S^T(Vx_1)\, S(Vx_1) + v_1^T\, S'(Vx_1)\, S'^T(Vx_1)\, v_1\; x_1^T x_1 \big] \qquad (47) $$

From Equations (46) and (47), the convergence of the control error e_c(k), i.e. lim_{k→+∞} e_c(k) = 0, is guaranteed if 0 < η_c(k) < 2/Θ_c(k), with V_c(k) > 0 from Equation (43). A suitable online algorithm for real-time applications is obtained by taking the variable learning rate η_c(k) as 1/Θ_c(k).

3. The proposed algorithm

In this section, a summary of the proposed algorithm of the online kernel principal component analysis neural network controller is presented.

Offline phase
(1) Initialize the neural network parameters (v_{1j}, v_{ji}, w_{1j}, w_{ji}) using M observations (M ≤ N).
(2) Determine the matrix C, centre the data and perform the eigenvalue decomposition.
(3) Determine the orthogonal eigenvalues and the eigenvectors of the covariance matrix.
(4) Order the eigenvectors by decreasing order of the corresponding eigenvalues.
(5) Choose x_r(k) satisfying Equation (28), using the s retained principal components given by Equations (41) and (42).

Online phase
(1) At time instant (k+1), a new data pair (u(k+1), y(k+1)) is available. Using the obtained input vector x_1, if the condition e(k+1) < ε_1 is satisfied, where ε_1 > 0 is a given small constant, then the neural network model given by Equation (5) approaches the behaviour of the system sufficiently.
(2) If the condition e_c(k+1) < ε_2 is satisfied, where ε_2 > 0 is a given small constant, then the reduced neural network controller provides a sufficient control law u(k).
(3) If e(k+1) < ε_1 is not satisfied, the synaptic weights of the neural network model are updated using Equations (8) and (12).
(4) If e_c(k+1) < ε_2 is not satisfied, the synaptic weights of the neural network controller are updated using Equations (35) and (38).
(5) End.

4. Simulation results

In this section, two non-linear discrete systems are used: the first is a single-input single-output non-linear time-varying system and the second is a multi-input multi-output (MIMO) system.

4.1. Example of time-varying system

The time-varying non-linear system is described by the following input–output model [41]:

$$ y(k+1) = \frac{y(k)\, y(k-1)\, y(k-2)\, u(k-1)\, \big( y(k-2) - 1 \big) + u(k)}{a_0(k) + a_1(k)\, y^2(k-1) + a_2(k)\, y^2(k-2)} \qquad (48) $$

where y(k) and u(k) are, respectively, the output and the input of the time-varying non-linear system at instant k, and a_0(k), a_1(k) and a_2(k) are given by

$$ a_0(k) = 1, \qquad a_1(k) = 1 + 0.2\cos(k), \qquad a_2(k) = 1 + 0.2\sin(k) \qquad (49) $$

The trajectories of a_1(k) and a_2(k) are shown in Figure 4.

Figure 4. a_1(k) and a_2(k) trajectories.
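A minimal sketch of simulating this plant, Equations (48)–(49), assuming NumPy; the history-indexing convention is illustrative.

```python
import numpy as np

def plant_step(k, y, u):
    """Return y(k+1) from Eq. (48); y and u hold past samples by index."""
    a0 = 1.0
    a1 = 1.0 + 0.2 * np.cos(k)                   # Eq. (49)
    a2 = 1.0 + 0.2 * np.sin(k)
    num = y[k] * y[k-1] * y[k-2] * u[k-1] * (y[k-2] - 1.0) + u[k]
    den = a0 + a1 * y[k-1] ** 2 + a2 * y[k-2] ** 2
    return num / den
```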
In order to examine the effectiveness of the proposed dimensionality reduction algorithm, different performance criteria are used. The mean squared identification error (MSE_e) and the mean absolute identification error (MAE_e) are, respectively, given by

$$ \mathrm{MSE}_e = \frac{1}{N} \sum_{k=1}^{N} \big( y(k) - yr(k) \big)^2 \qquad (50) $$

$$ \mathrm{MAE}_e = \frac{1}{N} \sum_{k=1}^{N} \big| y(k) - yr(k) \big| \qquad (51) $$

where y(k) is the time-varying system output, yr(k) is the neural network model output and the number of observations used is N = 100. The mean squared tracking error (MSE_{e_c}) and the mean absolute tracking error (MAE_{e_c}) are, respectively, given by

$$ \mathrm{MSE}_{e_c} = \frac{1}{N} \sum_{k=1}^{N} \big( y(k) - r(k) \big)^2 \qquad (52) $$

$$ \mathrm{MAE}_{e_c} = \frac{1}{N} \sum_{k=1}^{N} \big| y(k) - r(k) \big| \qquad (53) $$

where r(k) is the desired value.

We now examine the effectiveness of the proposed dimensionality reduction of the neural network controller input vector in the adaptive indirect control system. In the offline phase, a reduced number of observations (M = 3) is used to find both the initialization of the neural network parameters (w_{1j}, w_{ji}, v_{1j}, v_{ji}) and the KPCA parameters: the matrix C, the eigenvalues, the eigenvectors and, finally, the reduced input vector x_r(k) given by Equation (28) based on the s retained principal components given by Equations (41)–(42). In the online phase, at instant (k+1), the input vector of the neural network controller x_1 = [x_r(k), x_r(k-1), x_r(k-2), x_r(k-3), x_r(k-4)]^T is used.

In this case, both the neural network model and the pre-processing neural network controller consist of a single input, one hidden layer with 8 nodes and a single output node, identically, with variable learning rates η(k) for the neural network model and η_c(k) for the neural network controller. The scaling coefficients are λ = λ_c = 1 and ε_1 = ε_2 = 10^-….

Table 2. The comparison results of the used kernel functions in the identification error.
Kernel function                            MSE_e
RBF kernel, σ = 6                          4.6252 × 10^-7
Polynomial kernel, a = 1, b = 1, n = 3     4.6854 × 10^-7
Linear kernel                              4.6829 × 10^-7
Sigmoid kernel, a = 1, b = 1               4.0955 × 10^-7

To choose a suitable kernel function, the simulation results show that the sigmoid kernel, compared to the other kernel functions defined in Table 1, gives the lowest MSE_e, indicating that the sigmoid kernel function is the most reliable.

For comparison, the features are also fed directly to a multilayer perceptron (MLP) neural network as inputs, without any preprocessing by KPCA, and the online MLP neural network model output and the plant output are obtained. The input vector of this MLP neural network is [r(k), r(k-1), r(k-2), r(k-3), r(k-4), r(k-5)]^T, with one hidden layer of 23 nodes and a variable learning rate. From Figure 5, an excellent concordance between the plant output and the desired value is observed, with a mean square error equal to 6.9269 × 10^-….

Figure 5. The pre-processing control system output and the desired values.

In Figure 5, the output of the reduced online MLP neural network controller and the desired values are presented. In this case, the KPCA method is combined with the multilayer perceptron neural network: the KPCA technique is used as a preprocessing method to reduce the feature dimension, and the obtained reduced vector is fed to the online multilayer perceptron neural network, with one hidden layer and variable learning rates. A concordance between the desired values and the plant output is noticed in Figure 6. To assess the efficiency of this combination further, several kernel functions are tested and the results are presented in Table 2.
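A minimal sketch of the four kernels of Table 1 with the parameter values reported in Table 2, as Python callables; plugging each into the KPCA preprocessing (for example the `kpca_fit` sketch above) and comparing the resulting identification MSE mirrors the comparison of Table 2.

```python
import numpy as np

sigma, a, b, n = 6.0, 1.0, 1.0, 3        # parameter values from Table 2

kernels = {
    "rbf":        lambda zi, zj: np.exp(-np.sum((zi - zj) ** 2) / (2 * sigma ** 2)),
    "polynomial": lambda zi, zj: (a * np.dot(zi, zj) + b) ** n,
    "linear":     lambda zi, zj: np.dot(zi, zj),
    "sigmoid":    lambda zi, zj: np.tanh(a * np.dot(zi, zj) + b),
}
```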
As established in Table 2, we use the sigmoid function as the kernel function in the KPCA technique. The tracking control aim for this system is to follow the reference signal as closely as possible using the proposed pre-processing neural network controller. In this simulation, the desired value r(k) is given by

$$ r(k) = \begin{cases} 0.45 & \text{for } k \leq 25 \\ 0.20 & \text{for } 26 \leq k \leq 50 \\ 0.45 & \text{for } 51 \leq k \leq 75 \\ 0.20 & \text{for } k > 75 \end{cases} \qquad (54) $$

We examine the influence of the dimensionality reduction of the neural network controller input vector on the identification error in Table 3 and on the control error in Table 4.

Table 3. The influence of the dimensionality reduction in the identification error.
            NN model          KPCA NN model
η           Variable          Variable
MSE_e       4.6252 × 10^-7    4.0955 × 10^-7
MAE_e       5.4126 × 10^-4    4.9423 × 10^-4
max(e)      9.9953 × 10^-4    9.9666 × 10^-4
time (s)    16.2345           1.0987

Table 4. The influence of the dimensionality reduction in the control error.
            NN controller     KPCA NN controller
η           Variable          Variable
MSE_{e_c}   0.0074            0.0027
MAE_{e_c}   0.0225            0.0166
max(e_c)    0.5000            0.3000
time (s)    47.4502           3.1092

From Tables 3 and 4, we observe that by using KPCA as a pre-processing phase to reduce the input vector of the neural network controller, the neural network KPCA controller attains the smallest performance criteria for both the identification error e(k) and the control error e_c(k). These results are shown in Figures 5, 6 and 7. Indeed, Figure 5 presents the pre-processing control system output and the desired values; here the KPCA method is combined with a multilayer perceptron neural network controller, the KPCA technique being used as a preprocessing method to reduce the feature dimension, with the obtained reduced vector fed to the neural network controller. A concordance between the desired values and the control system output is noticed, although the parameters vary over time. Figures 6 and 7 present, respectively, the control law and the control error.

Figure 6. The control law.

Figure 7. The control error.

These figures reveal that the NN controller using KPCA as a pre-processing technique has smaller errors than the controller without pre-processing.

Another desired value r(k), given by Equation (55), is used to examine the effectiveness of the proposed dimensionality reduction algorithm in the adaptive indirect control system for the time-varying non-linear system. Both the neural network model and the neural network controller consist of a single input, one hidden layer with 23 nodes and a single output node, identically. The scaling coefficients are λ = λ_c = 1 and ε_1 = ε_2 = 10^-…. In this simulation, the desired value r(k) is given by

$$ r(k) = \begin{cases} 0.45 & \text{for } k \leq 25 \\ 0.20 & \text{for } 26 \leq k \leq 30 \\ 0.40 & \text{for } 31 \leq k \leq 35 \\ 0.30 & \text{for } 36 \leq k \leq 80 \\ 0.20 & \text{for } k > 80 \end{cases} \qquad (55) $$
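A minimal sketch of generating such a piecewise-constant reference signal, here Equation (54); the name `desired_value` is illustrative.

```python
import numpy as np

def desired_value(k):
    """r(k) of Eq. (54)."""
    if k <= 25:
        return 0.45
    if k <= 50:
        return 0.20
    if k <= 75:
        return 0.45
    return 0.20

r = np.array([desired_value(k) for k in range(1, 101)])   # N = 100 samples
```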
Figure 8 presents the pre-processing control system output and the desired values. In this case, the KPCA method is combined with a multilayer perceptron neural network controller. A concordance between the desired values and the control system output is noticed despite the time-varying parameters. Figures 9 and 10 present, respectively, the control law and the control error. These figures again reveal that the NN controller using KPCA as a pre-processing technique has smaller errors than the controller without pre-processing.

Figure 8. The pre-processing control system output and the desired values.

Figure 9. The control law.

Figure 10. The control error.

Tables 5 and 6 present the influence of the dimensionality reduction on the identification error and on the control error. From Tables 5 and 6, we observe that by using KPCA as a pre-processing phase to reduce the input vector of the neural network controller, the neural network KPCA controller attains the smallest performance criteria for both the identification error e(k) and the control error e_c(k). These results are shown in Figures 8, 9 and 10.

Table 5. The influence of the dimensionality reduction in the identification error.
            NN model          KPCA NN model
η           Variable          Variable
MSE_e       4.4236 × 10^-7    4.2343 × 10^-7
max(e)      9.9904 × 10^-4    9.9793 × 10^-4
MAE_e       5.1255 × 10^-4    4.9632 × 10^-4
time (s)    41.3576           1.8007

Table 6. The influence of the dimensionality reduction in the control error.
            NN controller     KPCA NN controller
η           Variable          Variable
MSE_{e_c}   0.0073            0.0028
MAE_{e_c}   0.0221            0.0166
max(e_c)    0.5000            0.3073
time (s)    22.3142           3.1241

4.2. Effect of disturbances

An additive noise v(k) is injected at the output of the time-varying non-linear system given by Equation (48), in order to test the effectiveness of the pre-processing neural network controller. To measure the correspondence between the system output and the desired value, a signal-to-noise ratio (SNR) is computed as

$$ \mathrm{SNR} = \frac{\sum_{k=0}^{N} \big( y(k) - \bar{y} \big)^2}{\sum_{k=0}^{N} \big( v(k) - \bar{v} \big)^2} \qquad (56) $$

where v(k) is a measurement noise with symmetric bound δ, v(k) ∈ [-δ, δ], and ȳ and v̄ are the output average value and the noise average value, respectively. In this paper, the SNR is taken as 5%.
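A minimal sketch of the disturbance test: uniform noise v(k) ∈ [-δ, δ] added at the plant output, with the SNR of Equation (56), assuming NumPy; the value of δ and the seed are illustrative.

```python
import numpy as np

def snr(y, v):
    """Eq. (56): centred signal energy over centred noise energy."""
    y, v = np.asarray(y), np.asarray(v)
    return np.sum((y - y.mean()) ** 2) / np.sum((v - v.mean()) ** 2)

rng = np.random.default_rng(0)
delta = 0.01                                  # illustrative noise bound
v = rng.uniform(-delta, delta, size=100)      # v(k) in [-delta, delta]
# y_noisy = y + v                             # noise injected at the plant output
```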
Using the first desired value r(k), the sensitivity of the proposed pre-processing neural network controller is examined in Tables 7 and 8, respectively. From these tables, we observe that by using KPCA as a pre-processing phase to reduce the input vector of the neural network controller, the neural network KPCA controller attains the smallest performance criteria for both the identification error and the control error.

Table 7. The influence of the dimensionality reduction in the identification error.
            NN model          KPCA NN model
η           Variable          Variable
MSE_e       4.2044 × 10^-8    3.5275 × 10^-8
max(e)      9.8520 × 10^-4    9.7021 × 10^-4
MAE_e       1.2436 × 10^-4    1.1753 × 10^-4
time (s)    58.2112           3.2024

Table 8. The influence of the dimensionality reduction in the control error.
            NN controller     KPCA NN controller
η           Variable          Variable
MSE_{e_c}   0.0074            0.0027
MAE_{e_c}   0.0223            0.0165
max(e_c)    0.5000            0.3000
time (s)    62.2231           2.3321

Using the second desired value, the sensitivity of the proposed pre-processing neural network controller is examined in Tables 9 and 10, respectively.

Table 9. The influence of the dimensionality reduction in the identification error.
            NN model          KPCA NN model
η           Variable          Variable
MSE_e       2.8501 × 10^-8    1.6541 × 10^-8
max(e)      9.4522 × 10^-4    8.9819 × 10^-4
MAE_e       1.1140 × 10^-4    1.0016 × 10^-4
time (s)    17.5536           2.0092

Table 10. The influence of the dimensionality reduction in the control error.
            NN controller     KPCA NN controller
η           Variable          Variable
MSE_{e_c}   0.0074            0.0028
MAE_{e_c}   0.0233            0.0175
max(e_c)    0.5000            0.3073
time (s)    38.4532           3.00941

According to the obtained simulation results, despite the presence of a disturbance at the system output and the time-varying parameters, the lowest MSE_e, MAE_e and max(e_c) are obtained using the combination of the neural network controller and the KPCA technique.

4.3. Example of multi-input multi-output system

In order to examine the effectiveness of the proposed dimensionality reduction algorithm, the multi-input multi-output (MIMO) non-linear system given by the following equation is used:

$$ y_1(k+1) = \frac{y_1(k)}{1 + y_2^2(k)} + u_1(k), \qquad y_2(k+1) = \frac{y_1(k)\, y_2(k)}{1 + y_2^2(k)} + u_2(k) \qquad (57) $$

where y_i(k) and u_i(k), i = 1, 2, are, respectively, the outputs and the inputs of the MIMO non-linear system at instant k. The reference signals r_1(k) and r_2(k) are given by

$$ r_1(k) = \sin\Big( \frac{2k\pi}{\ldots} \Big), \qquad r_2(k) = \begin{cases} 0.8 & \text{for } k \leq 50 \\ 0.4 & \text{for } 51 \leq k \leq 100 \\ 0.8 & \text{for } 101 \leq k \leq 150 \\ 0.4 & \text{for } 151 \leq k \leq 200 \end{cases} \qquad (58) $$
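A minimal sketch of one step of the MIMO plant in Equation (57); the function name is illustrative.

```python
def mimo_plant_step(y1, y2, u1, u2):
    """Return (y1(k+1), y2(k+1)) from Eq. (57)."""
    den = 1.0 + y2 ** 2
    return y1 / den + u1, (y1 * y2) / den + u2
```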
The control system outputs, the desired values and the control errors are presented in Figure 11, while Figure 12 presents the control law u_1 and u_2 trajectories. These figures reveal that a NN controller combined with KPCA as a pre-processing technique gives an excellent concordance between the system outputs and the desired outputs, with smaller control errors.

Figure 11. The control system output, the desired values and the control error.

Figure 12. The control law u_1 and u_2 trajectories.

In this case, both the neural network model and the pre-processing neural network controller consist of a single input, one hidden layer with 28 nodes and two output nodes, identically, with variable learning rates η_i(k) for the neural network model and η_{ic}(k) for the neural network controller. The scaling coefficients are λ_i = λ_{ic} = 1 and ε_i = ε_{ic} = 10^-…, i = 1, 2. The input vector of the neural network controller is x_1 = [x_{r1}(k), x_{r1}(k-1), x_{r1}(k-2), x_{r2}(k), x_{r2}(k-1), x_{r2}(k-2)]^T. The influence of the dimensionality reduction on the model error and on the control error is shown in Tables 11 and 12.

Table 11. The influence of the dimensionality reduction in the model error.
                  KPCA NN model
η_i, i = 1, 2     Variable
MSE_{e_1}         5.4430 × 10^-…
MSE_{e_2}         8.1510 × 10^-…
MAE_{e_1}         2.4000 × 10^-…
MAE_{e_2}         1.3635 × 10^-…
max(e_1)          0.0023
max(e_2)          0.0053
time (s)          13.8162

Table 12. The influence of the dimensionality reduction in the control error.
                  KPCA NN controller
η_{ic}            Variable
MSE_{e_{1c}}      5.4430 × 10^-…
MSE_{e_{2c}}      8.1510 × 10^-…
MAE_{e_{1c}}      0.0057
MAE_{e_{2c}}      0.0091
max(e_{1c})       0.4251
max(e_{2c})       0.5000
time (s)          34.7886

5. Conclusion

In this paper, an online combination of a neural network controller and the KPCA method is proposed and successfully applied to indirect adaptive control. Different kernel functions are tested; the lowest MSE_e, MAE_e, max(e), MSE_{e_c}, MAE_{e_c} and max(e_c) are obtained with the sigmoid kernel function, which proves to be the best. The effectiveness of the proposed algorithm is demonstrated, first, on a single-input single-output system, with and without disturbances, where it proves its robustness in rejecting disturbances and in accelerating the learning phase of the neural model and the neural controller. Second, it is applied to a MIMO system, where it also gives good results.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

[1] O. Mohareri, R. Dhaouadi, and A.B. Rad, Indirect adaptive tracking control of a nonholonomic mobile robot via neural networks, Neurocomputing 88 (2012), pp. 54–66. doi:10.1016/j.neucom.2011.06.035.
[2] A.A. Bohari, W.M. Utomo, Z.A. Haron, N.M. Zin, S.Y. Sim, and R.M. Ariff, Speed tracking of indirect field oriented control induction motor using neural network, Procedia Technol. 11 (2013), pp. 141–146. doi:10.1016/j.protcy.2013.12.173.
[3] S. Slama, A. Errachdi, and M. Benrejeb, Adaptive PID controller based on neural networks for MIMO nonlinear systems, J. Theor. Appl. Inf. Technol. 97 (2) (2019), pp. 361–371.
[4] A. Errachdi and M. Benrejeb, Performance comparison of neural network training approaches in indirect adaptive control, Int. J. Control Autom. Syst. 16 (3) (2018), pp. 1448–1458. doi:10.1007/s12555-017-0085-3.
[5] N. Ben, W. Ding, D.A. Naif, and E.A. Fuad, Adaptive neural state-feedback tracking control of stochastic nonlinear switched systems: An average dwell-time method, IEEE Trans. Neural Networks Learn. Syst. 30 (4) (2018), pp. 1076–1087. doi:10.1109/TNNLS.2018.2860944.
[6] N. Ben, L. Yanjun, Z. Wanlu, L. Haitao, D. Peiyong, and L. Junqing, Multiple Lyapunov functions for adaptive neural tracking control of switched nonlinear non-lower-triangular systems, IEEE Trans. Cybern. 99 (2019). doi:10.1109/TCYB.2019.2906372.
[7] P.O. Hoyer and A. Hyvärinen, Independent component analysis applied to feature extraction from colour and stereo images, Network 11 (3) (2000), pp. 191–210. doi:10.1088/0954-898X_11_3_302.
[8] L.J. Cao, K.S. Chua, W.K. Chong, H.P. Lee, and Q.M. Gu, A comparison of PCA, KPCA and ICA for dimensionality reduction in support vector machine, Neurocomputing 55 (1–2) (2003), pp. 321–336. doi:10.1016/S0925-2312(03)00433-8.
[9] I. Guyon and A. Elisseeff, An introduction to variable and feature selection, J. Mach. Learn. Res. 3 (2003), pp. 1157–1182.
[10] J. Weston, S. Mukherjee, O. Chapelle, M. Pontil, T. Poggio, and V.N. Vapnik, Feature selection for SVMs, Adv. Neural Inform. Process. Syst. 13 (2001), pp. 668–674.
[11] F.E.H. Tay and L.J. Cao, Saliency analysis of support vector machines for feature selection, Neural Network World 2 (1) (2001), pp. 153–166.
[12] F.E.H. Tay and L.J. Cao, A comparative study of saliency analysis and genetic algorithm for feature selection in support vector machines, Intell. Data Anal. 5 (3) (2001), pp. 191–209. doi:10.3233/IDA-2001-5302.
[13] K. Lee and V. Estivill-Castro, Feature extraction and gating techniques for ultrasonic shaft signal classification, Appl. Soft Comput. 7 (2007), pp. 156–165. doi:10.1016/j.asoc.2005.05.003.
[14] H. Qinshu, L. Xinen, and X. Shifu, Comparison of PCA and model optimization algorithms for system identification using limited data, J. Appl. Sci. 13 (11) (2013), pp. 2082–2086. doi:10.3923/jas.2013.2082.2086.
[15] R. Zhang, J. Tao, R. Lu, and Q. Jin, Decoupled ARX and RBF neural network modeling using PCA and GA optimization for nonlinear distributed parameter systems, IEEE Trans. Neural Networks Learn. Syst. 29 (2) (2018), pp. 457–469. doi:10.1109/TNNLS.2016.2631481.
[16] M.L. Wang, X.D. Yan, and H.B. Shi, Spatiotemporal prediction for nonlinear parabolic distributed parameter system using an artificial neural network trained by group search optimization, Neurocomputing 113 (2013), pp. 234–240. doi:10.1016/j.neucom.2013.01.037.
[17] S. Yin, S.X. Ding, A.H. Abandan Sari, and H.Y. Hao, Data-driven monitoring for stochastic systems and its application on batch process, Int. J. Syst. Sci. 44 (7) (2013), pp. 1366–1376. doi:10.1080/00207721.2012.659708.
[18] E. Aggelogiannaki and H. Sarimveis, Nonlinear model predictive control for distributed parameter systems using data driven artificial neural network models, Comput. Chem. Eng. 32 (6) (2008), pp. 1225–1237. doi:10.1016/j.compchemeng.2007.05.002.
[19] M. Madhusmita and H.S. Behera, Kohonen self organizing map with modified K-means clustering for high dimensional data set, Int. J. Appl. Inf. Syst. 2 (3) (2012), pp. 34–39 (Foundation of Computer Science, New York, USA).
[20] S. Buchala, N. Davey, T.M. Gale, and R.J. Frank, Analysis of linear and nonlinear dimensionality reduction methods for gender classification of face images, Int. J. Syst. Sci. 36 (14) (2005), pp. 931–942. doi:10.1080/00207720500381573.
[21] M. Lennon, G. Mercier, M.C. Mouchot, and L. Hubert-Moy, Curvilinear component analysis for nonlinear dimensionality reduction of hyperspectral images, Proc. SPIE Image Signal Process. Remote Sens. VII 4541 (2001), pp. 157–168.
[22] N.K. Batmanghelich, B. Taskar, and C. Davatzikos, Generative-discriminative basis learning for medical imaging, IEEE Trans. Med. Imaging 31 (2012), pp. 51–69. doi:10.1109/TMI.2011.2162961.
[23] L. Van der Maaten, E. Postma, and J. Van den Herik, Dimensionality reduction: A comparative review, Tilburg Centre for Creative Computing, Tilburg University, Tilburg, The Netherlands, 2009.
[24] K. Kuzniar and M. Zajac, Data pre-processing in the neural network identification of the modified walls natural frequencies, Proceedings of the 19th International Conference on Computer Methods in Mechanics (CMM-2011), Warsaw, 9–12 May 2011, pp. 295–296.
[25] V.M. Janakiraman, X. Nguyen, and D. Assanis, Nonlinear identification of a gasoline HCCI engine using neural networks coupled with principal component analysis, Appl. Soft Comput. 13 (2013), pp. 2375–2389. doi:10.1016/j.asoc.2013.01.006.
[26] K. Seerapu and R. Srinivas, Face recognition using robust PCA and radial basis function network, Int. J. Comput. Sci. Commun. Networks 2 (5) (2012), pp. 584–589.
[27] N.M. Peleato, R.L. Legge, and R.C. Andrews, Neural networks for dimensionality reduction of fluorescence spectra and prediction of drinking water disinfection by-products, Water Res. 136 (2018), pp. 84–94. doi:10.1016/j.watres.2018.02.052.
[28] S. Chauhan and K.V. Prema, Effect of dimensionality reduction on performance in artificial neural network for user authentication, 3rd IEEE International Advance Computing Conference (IACC), Ghaziabad, India, 2013.
[29] G.E. Hinton and R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science 313 (2006), pp. 504–507. doi:10.1126/science.1127647.
[30] Q. Zhu and C. Li, Dimensionality reduction with input training neural network and its application in chemical process modelling, Chinese J. Chem. Eng. 14 (5) (2006), pp. 597–603. doi:10.1016/S1004-9541(06)60121-3.
[31] C.-Y. Cheng, C.-C. Hsu, and M.-C. Chen, Adaptive kernel principal component analysis (KPCA) for monitoring small disturbances of nonlinear processes, Ind. Eng. Chem. Res. 49 (2010), pp. 2254–2262. doi:10.1021/ie900521b.
[32] C. Chakour, M.F. Harkat, and M. Djeghaba, New adaptive kernel principal component analysis for nonlinear dynamic process monitoring, Appl. Math. Inf. Sci. 9 (4) (2015), pp. 1833–1845.
[33] R. Fezai, M. Mansouri, O. Taouali, M.F. Harkat, and N. Bouguila, Online reduced kernel principal component analysis for process monitoring, J. Process Control 61 (2018), pp. 1–11. doi:10.1016/j.jprocont.2017.10.010.
[34] Y. Xiao and Y. He, A novel approach for analog fault diagnosis based on neural networks and improved kernel PCA, Neurocomputing 74 (2011), pp. 1102–1115. doi:10.1016/j.neucom.2010.12.003.
[35] A. Errachdi and M. Benrejeb, On-line identification using radial basis function neural network coupled with KPCA, Int. J. Gen. Syst. 45 (7) (2016), pp. 1–15.
[36] K.N. Reddy and V. Ravi, Differential evolution trained kernel principal component WNN and kernel binary quantile regression: Application to banking, Knowledge-Based Syst. 39 (2013), pp. 45–56. doi:10.1016/j.knosys.2012.10.003.
[37] I. Klevecka and J. Lelis, Pre-processing of input data of neural networks: The case of forecasting telecommunication network traffic, Telektronikk 104 (3/4) (2008), pp. 168–178.
[38] M. Shirzadeh, A. Amirkhani, A. Jalali, and M.R. Mosavi, An indirect adaptive neural control of a visual-based quadrotor robot for pursuing a moving target, ISA Trans. 59 (2015), pp. 290–302. doi:10.1016/j.isatra.2015.10.011.
[39] S.J. Yoo, J.B. Park, and Y.H. Choi, Indirect adaptive control of nonlinear dynamic systems using self recurrent wavelet neural networks via adaptive learning rates, Inf. Sci. 177 (2007), pp. 3074–3098. doi:10.1016/j.ins.2007.02.009.
[40] B. Scholkopf and A. Smola, Learning with Kernels, MIT Press, Cambridge, 2002.
[41] K.S. Narendra and K. Parthasarathy, Identification and control of dynamical systems using neural networks, IEEE Trans. Neural Networks 1 (1) (1990), pp. 4–27. doi:10.1109/72.80202.