Hindawi Journal of Robotics, Volume 2021, Article ID 1129872, 14 pages. https://doi.org/10.1155/2021/1129872

Research Article

Mobile Robot Obstacle Avoidance Based on Neural Network with a Standardization Technique

Karoline Kamil A. Farag (1,2), Hussein Hamdy Shehata (3,4), and Hesham M. El-Batsh (2,5)

1 Mechatronics Engineering Department, Alexandria Higher Institute of Engineering and Technology (AIET), Alexandria, Egypt
2 Mechanical Engineering, Faculty of Engineering, Benha University, Benha, Egypt
3 Benha Faculty of Engineering, Benha University, Benha, Egypt
4 Mechatronics Systems Engineering Department, Faculty of Engineering, MSA University, 6th of October City, Egypt
5 High Institute of Engineering and Technology, Mahala El-Kobra, Egypt

Correspondence should be addressed to Karoline Kamil A. Farag; caroline.kamil@aiet.edu.eg

Received 2 July 2021; Accepted 17 September 2021; Published 3 November 2021

Academic Editor: L. Fortuna

Copyright © 2021 Karoline Kamil A. Farag et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A reactive algorithm in an unknown environment is very useful for dealing with dynamic obstacles that may change quickly and unexpectedly, because the workspace is dynamic in real-life applications; this work focuses on the dynamic, unknown environment by updating data online at each step toward a specific goal. Sensing and avoiding the obstacles that come across the robot's way toward the target, by training it to take the corrective action for every possible offset, is one of the most challenging problems in the field of robotics. This problem is addressed by proposing an Artificial Intelligence System (AIS), which works on the behaviour of Intelligent Autonomous Vehicles (IAVs), acting like humans in recognition, learning, decision making, and action.
First, using the AIS and navigation methods based on Artificial Neural Networks (ANNs) to train the datasets produced a high Mean Square Error (MSE) during training with the MATLAB Simulink tool. A standardization technique was therefore used to improve the performance of the training results. When it comes to knowledge-based systems, ANNs can be adapted in an appropriate form; this adaptation is related to the learning capacity, since the network can consider and respond to new constraints and data from the external environment.

1. Introduction

Navigation is an important challenge for autonomous mobile robots [1]. The problem is divided into positioning and path planning: the major purpose of using a mobile robot is to follow the shortest path from an initial point to a final point (target) in minimum time with high accuracy.

Real devices usually operate in systems that are far from ideal, and although defects cannot be avoided in real production processes, the devices still work. This is often associated with the fact that defects produce hidden dynamics which, appropriately evoked, have an overall positive effect on the device [2]. Throughout, we consider elegant but incomplete electromechanical structures that can be thought of as models of incomplete systems. Electrical and mechanical interactions within the structure generate complex patterns of vibration that can prevent the system from reaching the right working conditions. We discuss an impact strategy to ensure that optimal working conditions support triggering the hidden dynamics of defects, characterizing their impact by referring to the characteristics of the control signals and the facility providing the structure.

A variety of articles on machine learning have been published over the past few years, including how it has been applied to help mobile robots develop their operating capabilities. Navigation is one of the most critical problems in designing and improving an intelligent mobile robot. It concerns a mobile robot's ability to plan and execute collision-free motions within its environment. Robots need to be able to understand the environment's structure: they must be equipped with the potential for vision, data analysis, understanding, listening, logic, comprehension, decision, and response, in order to reach their goals without collisions. The reproduction of this kind of intellect is, up to now, a human achievement in the creation and advancement of smart machines and, particularly, autonomous mobile robots. Sensing and understanding are two fundamental criteria for achieving a fair degree of autonomy. Obstacle avoidance is one of the most important problems for any realistic robotics design, and there are several strategies in the literature to deal with it.

The main aim of the study is to solve the problem of motion planning in robotics. Given an object with an initial location and orientation, a goal location and orientation, and several obstacles placed in the workspace, the problem is finding a stable pathway from the initial location to the target location that prevents clashes with obstacles along the way. In other words, path planning is classified into two subproblems: finding space and finding a stable pathway.

For more than half a century, robotics has been a feature of modern technology. Such devices are primarily used for entertainment, security, and intelligence purposes as robots and their peripheral hardware grow more advanced, stable, and remotely controlled. Every robot remotely operated to produce images/videos for different purposes is designed as a remotely controlled sensing robot, and remotely operated robots are subject to significant recovery and security regulations.

In this research, a mobile robot is trained by the neural network technique by entering all available paths with the highest probability for an environment the robot may encounter on the way to a goal, for an unknown environment and a movable goal. This reaches approximately 235,000 states, using the ANOVA test, to train the mobile robot to act on any change in the environment. The standardization technique was also used to improve training performance and the regression rate for reaching the goal. The system is programmed in MATLAB connected to an Arduino, taking into consideration criteria such as robot size, robot step size, and safety margin. The robot must plan its motion not only with the right positions but also with a reasonable velocity and with respect to the relative position of the dynamic virtual obstacle [3].

The ANN training datasets initially produced a high Mean Square Error (MSE) when trained with the MATLAB Simulink tool. Searching for the reason for this large error, we found large differences between the values in the datasets, so we rescaled the cost function of the hypothesis. Based on these characteristics, and by using standardization techniques to put all dataset values in the range from −1 to +1, the performance of the training results in MATLAB Simulink is improved.

2. System Requirement, Assumption, and Precondition

This proposal does not require predefinition of the speed and position of the robot, the goal, and the obstacles; instead, they are detected during the execution of the action. No more than exact online measurements can be achieved by using the IP camera finder and the ultrasonic range finder. The analysis assumes the following:

(i) The obstacles are very small and red, and we can accurately measure their location online.
(ii) Six Ultrasonic (US) sensors are used to cover 180° around the mobile robot; each of them is paired with a red colour sensor to identify only obstacles in the environment.
(iii) The IP camera is used to identify the target coordinate by training on GoogLeNet or AlexNet. Experimental comparison with a traditional machine learning technique indicates that target detection efficiency can be obtained by the proposed process.
(iv) The mobile robot can pass smoothly on the way to the goal; i.e., the steering angle is not restricted.
(v) The speed of the goal is equal to or slower than the speed of the robot movement.

3. Kinematics of the Mobile Robot

This section presents the kinematics of the mobile robot, which was taken into consideration when designing the robot. The kinematic model of the mobile robot is shown in Figure 1. It consists of a vehicle frame mounted on the same axis with two driving wheels and a sliding shield at the front point. To achieve motion and orientation, the two driving wheels are independently driven by two motors [4, 5].

A strategy for the navigation of the mobile robot is proposed. The low-level inverse neural controller, which controls the mobile robot's dynamics, is the main component of the motion controller.

v_t = v_r + v_l,
ω_t = v_r − v_l,        (1)
with v_r = rω_r and v_l = rω_l.
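The wheel-to-body velocity mapping around equation (1) can be sketched as follows. This is a hedged illustration, not the paper's code: it uses the conventional differential-drive relations, which include the factors 1/2 and 1/W (wheel separation, Figure 1) that the printed form of equation (1) omits, and the wheel radius and wheelbase values are illustrative assumptions.

```python
def wheel_to_body(omega_r, omega_l, r=0.03, W=0.15):
    """Map wheel angular speeds [rad/s] to body velocities for a
    differential-drive robot using the quantities of Figure 1.
    r (wheel radius, m) and W (wheel separation, m) are assumed values."""
    v_r = r * omega_r          # linear speed of the right wheel
    v_l = r * omega_l          # linear speed of the left wheel
    v_t = (v_r + v_l) / 2.0    # tangential (forward) speed of point P
    w_t = (v_r - v_l) / W      # angular (yaw) rate of the frame
    return v_t, w_t
```

Equal wheel speeds give pure translation (zero yaw rate), and unequal speeds give the turning motion the steering controller exploits.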
The position of the robot is defined in the [O, X, Y] coordinate notation as follows:

q = [x_c  y_p  θ]^T,        (2)

where
(i) x_c and y_p are the coordinates of point P in the global coordinate frame,
(ii) θ is the orientation of the local coordinate frame attached to the robot platform, measured from the horizontal axis,
(iii) [x_c, y_p, θ] are the three generalized coordinates that describe the configuration of the robot, as in equation (2).

The mobile robot framework considered here is a rigid frame, and the wheels roll purely and without slippage. This means the robot can only drive along the axis of the driving wheels in the usual direction: the velocity component at the contact point with the ground, orthogonal to the wheel's plane, is zero:

ẏ_p cos θ − ẋ_c sin θ − dθ̇ = 0.        (3)

All kinematic constraints are independent of time and can be expressed as

A(q)q̇ = 0,        (4)

where A(q) is the input transformation matrix associated with the constraints, and

C^T(q) A^T(q) = 0,        (5)

where C(q) is the full-rank matrix formed by a set of independent vector fields spanning the null space of A^T(q). From equations (4) and (5), it is possible to find an auxiliary vector time function V(t) for all time t such that

q̇ = C(q) V(t).        (6)

The constraint matrix in equation (4) for the mobile robot is given by

A(q) = [−sin θ   cos θ   −d].        (7)

The C(q) matrix is given by

C(q) = [cos θ   −d sin θ;
        sin θ    d cos θ;
        0        1].        (8)

V(t) = [v  ω]^T,        (9)

where v is the linear velocity of point P along the robot axis and ω is the angular velocity. Therefore, the kinematics equation in (6) can be written as

q̇ = [ẋ_c;  ẏ_p;  θ̇] = [cos θ  −d sin θ;  sin θ  d cos θ;  0  1] [v;  ω].        (10)

Equation (10) is considered the vehicle's steering mechanism; the main aim is a desired reference trajectory that can control the track of the system. The control laws are designed to generate sufficient left and right wheel speeds to move the mobile robot along the necessary reference trajectories.

Figure 1: Kinematic model of the mobile robot, where 2r is the diameter of the two wheels; W is the distance between the two driving wheels; C is the mobile robot's centre of gravity (COG); d is the distance between points P and C; P is located at the intersection of a straight line passing through the centre of the vehicle and a line passing through the axis of the two wheels; v_r and v_l are the linear right and left wheel velocities; v_t is the linear tangential velocity; ω_r and ω_l are the angular right and left wheel velocities; and ω_t is the angular tangential velocity.
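A minimal sketch of the steering kinematics of equation (10), q̇ = C(q)V(t), with a simple explicit-Euler pose update. The offset d and the 0.5 s time step (the control step assumed later in Section 8) are stated here as assumptions for illustration.

```python
import math

def pose_rate(theta, v, omega, d=0.05):
    """Equation (10): qdot = C(q) V(t), with V(t) = [v, omega]^T.
    d (offset between P and C, m) is an assumed value."""
    x_dot = v * math.cos(theta) - d * math.sin(theta) * omega
    y_dot = v * math.sin(theta) + d * math.cos(theta) * omega
    theta_dot = omega
    return x_dot, y_dot, theta_dot

def step(q, v, omega, dt=0.5, d=0.05):
    """One explicit-Euler integration step of the kinematics;
    dt = 0.5 s matches the control time step assumed in Section 8."""
    x, y, th = q
    xd, yd, thd = pose_rate(th, v, omega, d)
    return (x + xd * dt, y + yd * dt, th + thd * dt)
```

Driving straight (ω = 0) advances the pose along the heading, while a nonzero ω rotates the frame, which is exactly the steering behaviour the controller commands through the wheel speeds.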
4. Neural Network Concepts

A selection of samples, connected with the control system, is constructed using the ANN; these systems transform signals in a nonlinear way. Neural networks are nonparametric estimation methods that can construct discrete functions from given case studies of inputs and outputs [5].

A three-layer classifier network is constructed in this research; the number of layers was set experimentally during training. There are eight neurons in the Input Layer (IL): six for detecting distance values from obstacles (i.e., over the 180° in front of the robot) and two for detecting the goal distance and angle. The inputs to the eighth neuron are set to "zero" if no target is detected. A single neuron in the Output Layer (OL) produces the steering angle of the robot's motion direction with a linear activation function, and the Hidden Layer (HL) has 120 neurons with a sigmoid activation function:

f(x) = 1 / (1 + e^(−x)).        (11)

These hidden neurons, with their input and output signals, are shown in Figure 2. The neural network is trained to navigate by describing approximately 235,000 representative cases. The overall output can be calculated by the following equations:

sop = [i/p_1  i/p_2  i/p_3 ... i/p_8  1]_(1×9) ×
      [w_1,1   w_2,1   ...  w_120,1;
       w_1,2   w_2,2   ...  w_120,2;
       w_1,3   w_2,3   ...  w_120,3;
       ⋮       ⋮       ...  ⋮;
       w_1,8   w_2,8   ...  w_120,8;
       B_1,9   B_2,9   ...  B_120,9]_(9×120),        (12)

sop = [sop_1  sop_2  sop_3 ... sop_120]_(1×120),        (13)

where i is the index of the neurons in the HL, running from 1 to 120; j is the index of the neurons in the IL, running from 1 to 8; A is the single neuron in the OL; sop_i is the sum of products for neuron i in the HL; w_i,j is the weight between neuron i in the HL and neuron j in the IL; and B_i,9 is the bias, i.e., the additional weight for each neuron in the HL.

o/p = [o/p_1  o/p_2  o/p_3 ... o/p_120]_(1×120),        (14)

where o/p_i is the output of each sop_i after applying the activation function of equation (11). Finally, the overall output o/p_overall is calculated by

o/p_overall = [w_A,1  w_A,2 ... w_A,120  B_A,121]_(1×121) × [o/p_1;  o/p_2;  o/p_3;  ⋮;  o/p_120;  1]_(121×1),        (15)

o/p_overall = [output]_(1×1),        (16)

where w_A,i is the weight between neuron A in the OL and neuron i in the HL, and B_A,121 is the output bias.
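The forward pass of equations (12)-(16) can be sketched as follows, using the sigmoid of equation (11) for the hidden layer and a linear output neuron. This is a shape-level sketch: the weight arrays are placeholders with the paper's dimensions, not the trained values.

```python
import numpy as np

def logsig(x):
    # Equation (11): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def forward(ip, W_h, w_o):
    """Forward pass of the 8-120-1 network in equations (12)-(16).
    ip:  shape (8,)   encoded sensor/goal inputs
    W_h: shape (9, 120) hidden weights; last row holds the biases B_i,9
    w_o: shape (121,)   output weights; last entry is the output bias.
    Weight values would come from the trained network; only the shapes
    follow the paper."""
    aug = np.append(ip, 1.0)        # [i/p_1 ... i/p_8  1], as in eq. (12)
    sop = aug @ W_h                 # sum of products per HL neuron, eq. (13)
    op = logsig(sop)                # eq. (14), activation of eq. (11)
    return float(np.append(op, 1.0) @ w_o)  # eqs. (15)-(16): linear OL
```

With all-zero hidden weights, every hidden activation is logsig(0) = 0.5, so the sketch is easy to check by hand before plugging in trained weights.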
5. Methodology and Network Architecture

This section presents the mobile robot construction and the configuration of each component in the experimental datasets, as well as the desired ANN dataset and the training method used to produce the robot controller [6, 7]. Figure 3 shows the designed workspace of the arranged environment used in this work.

Figure 2: Three-layer neural network for robot navigation (neural network architecture).
Figure 3: Workspace of the arranged environment's data.

The workspace in front of the mobile robot is divided into 6 sectors with respect to the angle, as shown in Table 1, and into 3 ranges with respect to the distance from the mobile robot, as shown in Table 2.

5.1. Datasets

5.1.1. Inputs to the Trained ANN. There are 8 inputs to the Arduino MEGA controller, divided into the following:

(a) Obstacle avoidance situation: 6 inputs indicate obstacles found within 180° of the workspace around the mobile robot at each step, via the connected US sensors; each sensor covers 30° and a 1 m distance, as in [8].
(i) In the designed ANN, each sensor has 3 values, shown in Table 3, indicating the obstacle's distance from the mobile robot.

(b) Goal position situation: 2 inputs from the IP camera, which measures the goal coordinate (x, y), to specify the following:
(i) In the designed ANN, the goal location has 3 values, shown in Table 4, indicating the goal distance from the mobile robot within the predefined ranges (Table 2):

Goal location: r = sqrt(x^2 + y^2).        (17)

(ii) The goal angle has 6 values, shown in Table 5, indicating the goal sector within the predefined sectors of Table 1:

Goal angle: θ = tan^(−1)(y/x).        (18)

5.1.2. Output from Training. There is only one output from the trained ANN, which specifies the movement (steering) angle of the mobile robot. In the designed ANN, this output drives a rotating servo motor and has 6 values, shown in Table 6, indicating the action of the mobile robot.

5.2. Description Example of a Possible Environmental Situation (Figure 4). An example of input situations in the real workspace is shown in Table 7, together with the corresponding output to the servo motor of the mobile robot.

6. ANN Learning

The mobile robot adapts to an environment without human interaction, so we must provide it with the capacity to extract environmental information. It is therefore recommended to use a US range finder that scans the workspace with a 30° field of view and an operational range of 4 meters. The environment model is constantly modified after each scan, and the target is fixed. The US sensors measure the distance from the obstacles, and the IP camera measures the target coordinate.

The protocol of our algorithm is described in Figure 5: once the six US sensors are mounted on the robot front covering 180°, the following algorithm is implemented.
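The discretization of Section 5.1, i.e., equations (17)-(18) combined with the distance ranges of Table 2 and the angular sectors of Table 1, can be sketched as follows. The function names and the exact boundary handling are illustrative assumptions.

```python
import math

def range_code(dist_cm):
    """Table 2 / Tables 3-4 coding of a distance (cm):
    range 1 (2-30 cm) -> 1, range 2 (31-60 cm) -> 2, range 3 (>= 61 cm) -> 0."""
    if dist_cm <= 30:
        return 1
    if dist_cm <= 60:
        return 2
    return 0

def sector(angle_deg):
    """Table 1: six 30-degree sectors covering the front half-plane,
    0-30 deg -> sector 1, ..., 151-180 deg -> sector 6."""
    return max(1, min(6, int(math.ceil(angle_deg / 30.0))))

def goal_inputs(x_cm, y_cm):
    """Equations (17)-(18): goal distance and bearing from the camera
    coordinates, then discretized into the table codes; assumes the goal
    lies in the front half-plane (angle between 0 and 180 degrees)."""
    r = math.hypot(x_cm, y_cm)                    # eq. (17)
    theta = math.degrees(math.atan2(y_cm, x_cm))  # eq. (18)
    return range_code(r), sector(theta)
```

A goal at (30, 40) cm, for example, is 50 cm away at about 53°, so it is encoded as range 2, sector 2.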
Table 1: Division of the forward workspace by angle from the position of the mobile robot.
Sector 1: 0° ≤ θ ≤ 30°
Sector 2: 31° ≤ θ ≤ 60°
Sector 3: 61° ≤ θ ≤ 90°
Sector 4: 91° ≤ θ ≤ 120°
Sector 5: 121° ≤ θ ≤ 150°
Sector 6: 151° ≤ θ ≤ 180°

Table 2: Division of the workspace by distance from the position of the mobile robot.
Range 1: 2 cm ≤ l ≤ 30 cm
Range 2: 31 cm ≤ l ≤ 60 cm
Range 3: l ≥ 61 cm

Table 3: Sample input values from the US sensors used in the ANN to identify obstacle locations, referring to the ranges defined in Table 2.
Value 1: obstacle at range 1
Value 2: obstacle at range 2
Value 0: obstacle at range 3

Table 4: Sample input values from the IP camera used in the ANN to identify the goal location with respect to the current mobile robot location, referring to the ranges defined in Table 2.
Magnitude 1: the goal at range 1
Magnitude 2: the goal at range 2
Magnitude 0: the goal at range 3

Table 5: Sample input values from the IP camera used in the ANN to identify the goal angle with respect to the current mobile robot location, referring to the sectors defined in Table 1.
Angle 15°: goal at sector 1; 45°: sector 2; 65°: sector 3; 105°: sector 4; 135°: sector 5; 165°: sector 6.

Table 6: Sample output values to the servo motor used in the ANN to move the mobile robot by a correction angle, referring to the sectors defined in Table 1.
Output 15°: sector 1; 45°: sector 2; 65°: sector 3; 105°: sector 4; 135°: sector 5; 165°: sector 6.

Figure 4: Example of case studies and their outputs. O: the position of obstacles in all states; A, B, C, D, E, F: the different positions of the goal.

Table 7: Situations in the real environment. Each row lists the six US inputs, the goal location and goal angle from the IP camera, and the output to the servo motor.
A: US = (2, 0, 2, 1, 0, 0), goal location 0, goal angle 15° → output 45°
B: US = (2, 0, 2, 1, 0, 0), goal location 0, goal angle 45° → output 45°
C: US = (2, 0, 2, 1, 0, 0), goal location 0, goal angle 75° → output 45°
D: US = (2, 0, 2, 1, 0, 0), goal location 2, goal angle 105° → output 135°
E: US = (2, 0, 2, 1, 0, 0), goal location 2, goal angle 135° → output 135°
F: US = (2, 0, 2, 1, 0, 0), goal location 0, goal angle 165° → output 165°

7. Training the Desired ANN and Improving Results Using Data Normalization and Standardization

The program exhibits cognition like that of a human being to solve problems, and machine learning is one of the subfields of the AIS. There are many types of machine learning, such as classification, regression, clustering, dimensionality reduction analysis, and so on. The initial state of the backpropagation algorithm is always a point in the region of the origin of the coordinate space, while the scale of the search space greatly shifts the distance to the desired minimum. Therefore, transformations that shrink the search space toward a unitary geometry compress the distance to the minimum, essentially speeding up the backpropagation algorithm.

It is important to keep in mind that the neural network typically initializes weights to random values in the range of −1 to 1, which explains why proper data normalization increases neural network training speed and efficiency.

The implemented datasets were trained many times using the backpropagation network tool in MATLAB. Various networks were developed and tested with random initial weights, activation functions, and different numbers of epochs; Table 8 shows the results of this training on the given input and output datasets.

7.1. MATLAB Training Results for the Database Using the Neural Network Toolbox. The experimental results have a very high mean square error (MSE) and take a long training time, as shown in Figure 6, so we use the data normalization and standardization technique.

7.2. Data Normalization and Standardization Technique. It is found that if the features are on the same scale during gradient descent, the algorithm appears to perform better than when features of comparable magnitude are not properly scaled. The plot in Figure 7 shows the influence of feature scaling on the contour plot of the cost function of the hypothesis based on these characteristics.

As shown in Figure 7, learning takes longer to converge if the contours are skewed, as the steps are more sensitive to oscillatory behaviour. If the characteristics are correctly scaled, the contour plot is uniformly distributed and the gradient descent steps have a stronger convergence profile.

By dividing the characteristics by their maximum, feature scaling between 0 and 1 is implemented; this helps to hold all the attributes within acceptable ranges. The aim is to keep the characteristics preferably within the range of −1 to 1.

The terms normalization and standardization are often used interchangeably, but generally they refer to different operations: normalization usually involves scaling a value to between 0 and 1, whereas standardization translates the data to a mean of zero and a standard deviation of one, mostly between −1 and 1 [9, 10]. With the following formula, each value in the datasets can be standardized:

x_new = (x_old − μ) / σ,        (19)

I = [i/p_1 − μ_1   i/p_2 − μ_2  ...  i/p_8 − μ_8]_(1×8) × Diag(1/σ_1, 1/σ_2, ..., 1/σ_8)_(8×8),        (20)

where x_new is the value after standardization, x_old is the original value in the dataset, μ_j is the mean value for each neuron j in the IL, σ_j is the corresponding standard deviation, and I is the standardized input.

We use equation (20) in 1×8 matrix form to obtain new values for the 8 inputs in the datasets and substitute these new values into equations (12)-(16) to find the overall output after standardization, o/p_overall.stand.

7.3. MATLAB Training Results after Standardizing the Datasets Using the Neural Network Toolbox. The experimental results in Figure 8 show a high improvement in efficiency, with the shortest training time, for the neural network's data classification model based on the data standardization technique.
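The column-wise standardization of equation (19), in the diagonal matrix form of equation (20), can be sketched as follows; this is a generic z-score sketch, not the paper's MATLAB code.

```python
import numpy as np

def standardize(X):
    """Equation (19): x_new = (x_old - mu) / sigma, applied per input
    column, matching the Diag(1/sigma_j) form of equation (20).
    X: (n_samples, 8) matrix of raw ANN inputs."""
    mu = X.mean(axis=0)      # mu_j, mean of each input neuron's values
    sigma = X.std(axis=0)    # sigma_j, standard deviation per column
    return (X - mu) / sigma
```

Each standardized column then has mean 0 and unit standard deviation, so the input scale matches the initial random weights in [−1, 1], which is the mechanism behind the training speed-up reported in Section 7.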
Figure 5: The flowchart of the proposed system. The robot starts from the home position; reads the obstacle distances and goal coordinates; identifies the obstacles and the goal according to the workspace's defined ranges and sectors; processes the inputs with the trained ANN; rotates the servo motor by the angle determined by the ANN; and moves the robot r cm, repeating until r ≤ 60 cm.

Table 8: ANN training results using the backpropagation algorithm (MATLAB nntool).
Hidden layer activation: sigmoid (LOGSIG); output layer: linear (PURELIN); training algorithm: TRAINBR; number of neurons: 120; number of epochs: 1,000,000; performance (MSE): 69.1268; iterations: 7,594; regression: 0.98664; training time: 68:22:13.

8. Simulation and Experimental Software Results

It would be ideal to test the performance of the proposed method on real-world data [11]. The dataset is the specific environment dataset, which contains various robot observations of several domestic environments in the form of arranged sensor and IP camera readings, collected from a high number of situations in different environments. The complete implementation of our algorithm was carried out in a MATLAB environment, by putting the robot and the goal in various environments, testing the robot's motion behavior in different situations, and demonstrating the good performance of the proposed method.

To investigate the methods of normalization and data standardization for improving the backpropagation network, the model was applied and the simulations were carried out using MATLAB. Various networks with random initial weights were successfully tested. Each network was trained ten times, and the efficiency objective was achieved at various epochs. The experimental effects can be analyzed by taking the best of the ten runs and calculating the classification accuracy. Table 9 indicates that when the standardization process is used, the accuracy is increased.

Table 9: Results of training the datasets using MATLAB based on the data standardization methods (nntool).
Hidden layer activation: sigmoid (LOGSIG); output layer: linear (PURELIN); training algorithm: TRAINBR; number of neurons: 120; number of epochs: 1,000,000; performance (MSE): 0.00178; iterations: 1,445; regression: 0.98884; training time: 6:22:05.

Figure 6: ANN characteristics output from backpropagation training (regression plots: training R = 0.98664, test R = 0.98588, all R = 0.98653).
Figure 7: Feature scaling and contour plot of the cost function J(θ).
Figure 8: Results of training the datasets using MATLAB based on the data standardization methods (regression plots: training R = 0.98884, test R = 0.98812, all R = 0.98874).

Performance plots for training with the standardization and rescaling techniques are depicted in Figures 9(a) and 9(b), respectively. As can be seen from the figures, the performance plots for the training, validation, and test sets show similar characteristics, which means that the networks are very well trained and have good generalization performance.

As discussed earlier, the focus of the study is mainly on obstacle avoidance with a low mean square error and high regression. Thus, it is better to predefine ranges and sectors in the environment around the mobile robot using the US sensors and IP camera. To make the dataset applicable for system design, data preprocessing must be performed before going into the navigation simulation of the robot. After preprocessing, the sensor field of view is limited to 180° and the sensor range equals 4 m. The dimensions of the map are 500 × 500 cm. The obstacles can be static or dynamic, and the robot has no prior knowledge of the environment. The assumed control time step is 0.5 s, and the robot moves with a maximum speed of 0.5 m/s towards the target. As can be seen from the simulation results, producing the steering angle leads to the generation of a smooth and safe path, after standardization of the input and output datasets, from the start point to the target position.

For the concept of evaluation, the d-infinite measure and the largest Lyapunov exponent [12] were used.
A nonlinear analysis method was used to review the complex dynamics of air bubbles carried by water and flowing through a microfluidic snake channel. The experimental observation of the bubbles' motion shows a rich variety of nonlinear dynamics and flow patterns. Schembri proposed a set of dimensionless parameters to classify the nonlinearity of the method, showing also its sensitivity to input flow variations.

The system discussed here can also be applied in a real "dynamic and unknown" environment, for example, transporting shipments or heavy weights inside a factory to a specific location without hitting any object or offset; it can thus affect industrial movement, and so on.

Figure 9: (a, b) Large difference in training performance after using data standardization (best training performance 69.1268 at epoch 7594 before standardization; best validation performance 0.002587 at epoch 362 after standardization).

9. Conclusions and Future Work

This work illustrates a mobile robot obstacle avoidance technique using the ANN algorithm; a large MSE was found when there were large differences between the values of the trained datasets. We therefore used a data classification model based on data standardization methods to rescale all data into a specific range according to a standard formula, which shows a high improvement in training performance, from an MSE of 69.1268 to 0.00178, with the shortest training time, in two states with the same parameters, assumptions, and activation function.

In future work, this study will be applied and proven in hardware experiments, together with simulation in the real environment, which is confirmed theoretically in this paper using MATLAB training.

Data Availability

The training dataset used to support this study's findings is included within the supplementary information files.

Conflicts of Interest

The authors declare no conflicts of interest.

Supplementary Materials

The supplementary files contain samples of the described datasets used in this work, which include case studies and their outputs: six inputs from Ultrasonic Sensors (USs) to determine the obstacles' distance, two inputs from the IP camera coordinate to determine the goal position (angle and distance), and one output to the servo motor to determine the mobile robot's movement angle. (Supplementary Materials)

References

[1] A. Medina-Santiago, J. L. Camas-Anzueto, J. A. Vazquez-Feijoo, H. R. Hernandez-de Leon, and R. Mota-Grajales, "Neural control system in obstacle avoidance in mobile robots using ultrasonic sensors," Journal of Applied Research and Technology, vol. 12, no. 1, pp. 104–110, 2014.
[2] M. Bucolo, A. Buscarino, C. Famoso, L. Fortuna, and M. Frasca, "Control of imperfect dynamical systems," Nonlinear Dynamics, vol. 98, no. 4, pp. 2989–2999, 2019.
[3] H. H. Shehata and J. Schlattmann, "Reactive algorithm for mobile robot path planning among moving target/obstacles by means of dynamic virtual obstacle concept," in Proceedings of the 22nd International Conference on Flexible Automation and Intelligent Manufacturing (FAIM 2012), Helsinki, Finland, vol. 49, pp. 563–574, June 2012.
[4] M. K. Singh and D. R. Parhi, "Path optimisation of a mobile robot using an artificial neural network controller," International Journal of Systems Science, vol. 42, no. 1, pp. 107–120.
[5] D. R. Parhi and M. K. Singh, "Real-time navigational control of mobile robots using an artificial neural network," Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, vol. 223, no. 7, pp. 1713–1725, 2009.
[6] D. Parhi and M. Singh, "Real-time navigational control of mobile robots using an artificial neural network," Journal of Mechanical Engineering Science, vol. 223, pp. 1713–1725, 2009.
[7] O. Azouaoui, N. Ouadah, S. Aouana, and D. Chabi, Neural-Based Navigation Approach for a Bi-Steerable Mobile Robot, IntechOpen, London, UK, 2008.
[8] V. A. Zhmud, N. O. Kondratiev, K. A. Kuznetsov, V. G. Trubin, and L. V. Dimitrov, "Application of ultrasonic sensor for measuring distances in robotics," Journal of Physics: Conference Series, vol. 1015, 2018.
[9] T. Jayalakshmi and A. Santhakumaran, "Statistical normalization and back propagation for classification," International Journal of Computer Theory and Engineering, vol. 3, pp. 89–93.
[10] P. J. M. Ali and R. H. Faraj, "Data normalization and standardization: a technical report," Machine Learning Technical Reports, vol. 1, no. 1, pp. 1–6, 2014.
[11] F. Shamsfakhr and B. Sadeghibigham, "A neural network approach to navigation of a mobile robot and obstacle avoidance in dynamic and unknown environments," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 25, pp. 1629–1642, 2017.
[12] F. Schembri, F. Sapuppo, and M. Bucolo, "Experimental classification of nonlinear dynamics in microfluidic bubbles' flow," Nonlinear Dynamics, vol. 67, no. 4, pp. 2807–2819.