Get 20M+ Full-Text Papers For Less Than $1.50/day. Start a 14-Day Trial for You or Your Team.

Learn More →

Recognition and Localization of Target Images for Robot Vision Navigation Control

Recognition and Localization of Target Images for Robot Vision Navigation Control Hindawi Journal of Robotics Volume 2022, Article ID 8565913, 12 pages https://doi.org/10.1155/2022/8565913 Research Article Recognition and Localization of Target Images for Robot Vision Navigation Control Muji Chen College of Information Engineering, Henan Vocational College of Agriculture, Zhengzhou, Henan 451450, China Correspondence should be addressed to Muji Chen; 2004110217@hnca.edu.cn Received 20 January 2022; Accepted 5 March 2022; Published 24 March 2022 Academic Editor: Shan Zhong Copyright © 2022 Muji Chen. (is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. (is paper focuses on a visual navigation control system for mobile robots, recognizing target images and intelligent algorithms for the navigation system’s path tracking and localization techniques. (is paper examines the recognition and localization of target images based on the visual navigation control of mobile robots. It proposes an efficient marking line method for recognizing and localization target images. Meanwhile, a fuzzy control method with smooth filtering and high efficiency is designed to improve the stability of robot operation, and the feasibility is verified in different scenarios. (e corresponding image acquisition system is developed according to the characteristics of the experimental environment, and the acquired images are preprocessed to obtain corrected grayscale images. (en, target image recognition and linear fitting are performed to obtain target image positioning. (e system calculates the angle and distance of the mobile robot, offsetting the target image in real time, adjusting the output signal, and controlling the mobile robot to realize path tracking. (e comparison of sensor data and path tracking algorithm results during the experiment shows that the path tracking algorithm achieves good results with an angular deviation of ±1.5 . (e application of RANSAC algorithm and improved Hough algorithm was analyzed in visual navigation control, and the two navigation line detection algorithms based on the image characteristics of the target image were improved in the optical detection area of the navigation line for the shortcomings of the two algorithms in visual navigation control, and the algorithms before and after the improvement were compared. introduced. (e intelligence level of robots has been im- 1.Introduction proving. In the 21st century, attention is on the robot’s (e mobile robot is an essential branch of robotics. It is an perception of the external environment and autonomy. (e intelligent robot control system capable of detecting and new direction of robotics is bound to develop toward sensing the environment through various sensors and car- practicality and intelligence. Mobile robots have been widely rying out independent analysis, planning, and decision- used in traditional industry and agriculture and will be making based on environmental information and its state further expanded to new sectors, services, defense and se- [1]. (e research field of mobile robotics involves many curity, and medical services and will be commonly used in kinds of interdisciplinary theories and technologies, in- unsuitable and dangerous situations, such as deep sea and cluding computer vision, sensor information technology, space. (erefore, the broad application prospect of mobile communication technology, motion control theory, and robots has made the research in this field receive widespread mechanical engineering. (e current hot wave of artificial attention worldwide. intelligence also affects the research progress of mobile Autonomous mobility, which gives robots the ability to robotics [2]. With the rapid development of information explore their environment more fully, dramatically increases technology, computer microelectronics, and network tech- the complexity of the tasks they can accomplish. State es- nology, mobile robotics has also developed rapidly, and timation during movement is a constant topic in mobile more and more new robots with special functions have been robotics research [3]. (e primary consideration in 2 Journal of Robotics upgrading in the industrial field. More and more intelligent designing a reasonable and efficient state estimation method is the type of sensor the robot is equipped with and the robots are needed to liberate labor, improve production efficiency, save energy consumption, etc. Visual inspection characteristics of the data acquired by that type of sensor, i.e., the construction of a sensor observation model. (e in- area image recognition is the basis of navigation line ex- formation that a mobile robot carries about itself and its traction. (e quality of image segmentation affects the environment is the source of all information in the subse- navigation line extraction and the size of the error in the quent navigation process and determines the form of in- measurement results of navigation parameters. In the formation processing in the following global positioning and navigation line region established by ultrasonic measure- attitude tracking, map building, environment understand- ment, the navigation line visual detection region is set as the target operation domain for a series of image processing ing, path planning, and motion control, and task execution. How to deal with the uncertainty contained in the perceptual algorithms, and the detection region is dynamically tracked and set based on the detection results of adjacent frames; information and how to design efficient cognitive methods to deal with the mental uncertainty based on the environ- preprocessing image algorithms such as inverse color transformation and histogram equalization are specifically mental information contained in the perceptual report are the significant challenges for building mobile navigation analyzed to enhance the different target images in the de- systems and must also be predicated on the construction of tection region differentially. mobile robot observation models [4]. At the same time, the estimation of its positional attitude is the basis and pre- 2.Related Works requisite for performing other processes during the exe- cution of tasks by mobile robots [5]. (erefore, the (rough the continuous development of electronic hard- performance of the state estimation method will significantly ware technology and control disciplines, by the 1960s, some affect the performance of the whole navigation system. (us, European countries already had various forms of mobile the mobile robot observation model and the underlying state robots. With the rapid growth of processors in the 70s and estimation model are introduced for two sensors, laser 80s, mobile robots have made significant flexibility and sensor, and RGB-D vision camera, respectively, to illustrate stability. However, the main application scenarios are still the robot state estimation process under different obser- the warehousing industry and logistics and transportation vation information and its uncertainty expression form systems [9]. In the 1990s, the degree of intelligence and further elaborates its problems. Specifically, the observation automation of mobile robots was further improved with the model is constructed for the laser sensor for the mobile robot rapid development of computers, electronics, communica- equipped with the laser sensor [6]. Based on the observation tions, and image processing technologies, and mobile robots model, various forms of observation similarity measures are adapted to various working environments were born, which given, and the characteristics of each form are analyzed. have been widely used in the material assembly, home Based on this, a general model of the global localization appliance production, chemical industry, food, and many process of the robot on the raster map is given. Finally, since other industries. (e vision-based mobile robot navigation global localization results are often uncertain and multi- technology has been a new research boom in recent years hypothesis, the probability-based state tracking model is and is one of the essential directions of mobile robot introduced in this chapter. For the visual observation model, guidance technology research work. Research laboratories in the camera projection model is described, the projection universities in the countries that first researched visual model of spatial points to the camera plane is described, and navigation technology for mobile robots have achieved the method to recover the spatial position of pixels is given. significant research results [10]. Many of the results have Based on this, a technique for global positional estimation been applied to actual industrial production and even to the based on the current observation and the feature matching daily lives of the general public. results in the worldwide map is described. Due to the bias of (e Robot Vision Laboratory was the first to develop a feature observation, there is uncertainty in the global po- vision-guided mobile robot based on map construction, sitional estimation results. which is accomplished through scene reconstruction using Robotics has been rapidly improved, thanks to the rapid vision sensors to capture photos of the scene [11]. (e development and maturity of microcomputer technology, laboratory at Purdue University has developed an active sensors, and other related technologies. Intelligent robots binocular stereo-based vision-guided mobile robot, Peter, have been popularized and applied in various fields such as which acquires 3D information of the operating environ- civil, military, and scientific research. (e research results of ment and path obstacles, and combines 8 radar scanners, 24 intelligent robots are more prominent in many developed ultrasonic sensors, 8 infrared distance sensors, and a passive countries and intelligent. Highly automated intelligent ro- infrared motion sensor to achieve flexible operation. (e bots have been put into many fields such as aerospace, Intelligent Robotics Laboratory at Osaka University has geological exploration, scientific exploration, rescue and conducted in-depth research on vision navigation and de- disaster relief, such as China’s lunar rover “Moon Rabbit”. veloped a mobile robot based on monocular vision navi- Some low-cost, clever robots are also coming into daily life gation, which can detect the surrounding environment and are used in many indoor environments in homes or extensively by rotating its vision sensors and obtaining the offices, such as floor cleaning robots [7, 8]. We are now in a positioning information, travel distance, and turning angle critical period of the modern manufacturing industry of the mobile robot by a rotary encoder and potentiometer Journal of Robotics 3 [12]. (e vision navigation technology is widely used in module, the navigation decision module, the information advanced countries, such as autonomous lawnmowers, Mars communication module, and the information storage landing vehicles, driverless vehicles, and Kiva robots for module. (e information acquisition module is the basis and Amazon’s unmanned warehouses. preparation of the visual navigation. Its primary function is Global localization of mobile robots is the basis of to obtain an accurate image with distortion correction after navigation. (e localization process is based on observations the calibration of the camera [15]. (e IMU (Inertial to form feature representations and perform feature retrieval Measurement Unit) detects the pitch angle of the camera in and area inference in a global map. Lasers and vision are two real time to correct the deviation of the camera pitch angle standard sensors used in the indoor navigation process [13]. due to the vibration of the robot during the navigation Lasers can provide stable distance sensing information, process. (e image processing module is the core part of which is advantageous in obstacle avoidance and motion visual navigation. (e main functions of the image pro- planning tasks. However, it can only perceive flat infor- cessing algorithm are image ROI construction, detection mation and is relatively homogeneous. Vision can perceive based on a deep learning model, detection frame clustering, image grayscale and smoothing filtering, corner point fea- richer data, but its geometric perception range is lower, while laser has a more extended and more stable perception ture extraction in the detection frame, and navigation line range. In the global localization process, there are multiple fitting. (e navigation decision module controls the motion possible regions of positional distribution in the environ- of the robot based on the navigation information obtained ment due to the existence of similar areas and the incom- after image processing. (is study uses the fuzzy control pleteness of the perceived information. How to eliminate the method developed by our group to control the robot’s spatial perceptual ambiguity to make an accurate estimation movement. (e primary process of the navigation decision of the current poses is essentially a global optimization module is to extract the navigation lines in the ROI after problem. (e robot measures the degree of information image processing, then to find out the dominant route, solve consistency between the recent observation and the expected the position deviation and angle deviation of the robot, then observation corresponding to the estimated position based input the navigation deviation parameters into the fuzzy control decision, and finally derive the control command of on the constructed objective function. (ere is considerable international work on visual navigation motion control from the robot. (e function of the communication module is to image analysis to driving command generation [14]. In realize the serial communication between the visual navi- image segmentation, since the opening of the ImageNet gation upper computer software and the lower computer challenge, there has been significant work based on the hardware control system. (rough the configuration of the traditional feature point segmentation method to the cur- serial parameters and the development of the data trans- rently popular deep learning methods: the use of deep mission protocol, the control information is finally con- learning to achieve semantic recognition of images and verted so that the digital signal output by the navigation segmentation methods, with a very high number of refer- software is converted to the level signal for controlling the ences to achieve autonomous driving using images, or au- robot motion [16]. (e information saving module is re- tonomous driving using reinforcement learning methods, sponsible for saving the camera calibration parameters to the mainstream research in the field of autonomous driving. avoid tedious, repetitive calibration work content, and (ere is a large body of literature on autonomous driving, saving the position deviation and angle deviation during the including the use of road segmentation and the use of en- robot autonomous navigation process, which is used for the vironmental features for autonomous driving. quantitative analysis of the accuracy of the image processing algorithm. It is divided into six parts: image processing display, camera parameter setting, camera calibration set- 3.Model Design of the Target Image ting, serial port setting, production and saving of data, and robot motion control. Recognition and Localization System Based on Robot Vision Navigation Control (1) (e image processing display section displays the processing effects of the main stages of image pro- 3.1. Robot Vision Navigation Control System Construction. cessing in real time. (e processing details of the (e visual navigation software is mainly used to process the image algorithm can be visually observed and ana- images obtained from the camera in real time, then the lyzed for any problems that exist. navigation lines are extracted, navigation decisions are made (2) (e camera parameter setting section is used for after calculating the navigation parameters of the robot, and camera ID selection, camera resolution setting, and finally control signals are sent to the lower computer to image correction by calibrating internal camera control the weeding robot for automatic navigation. (e reference. visual navigation software in this paper is written on the (3) (e camera calibration section is used to set the Visual Studio 2015 development platform based on MFC calibration board parameters, calibrate the camera, with Open CV, where the algorithms are implemented in and save the calibration parameters. (e calibration C++ and C languages. (e functional framework of the parameters saved locally can be read directly in the visual navigation software is shown in Figure 1. subsequent image correction operation, repeatedly (e visual navigation software comprises five parts: the avoiding tedious camera calibration work. information acquisition module, the image processing 4 Journal of Robotics e Information e Image Navigation Acquisition Module Processing Module e Navigation e Information e Information Acquisition Decision Module Communication Module Storage Module e Visual e Navigation Primary Distortion Calibration Real-Time Navigation Process e Image Detection Frame Processing Algorithm Construction Learning Model Processing Module Clustering Navigation Line Smoothing Corner Point Feature Extraction Detection Frame Image Grayscale Fitting Filtering Navigation Decision e Serial Information Image Processing Movement Communication Module Parameters e Control Information Camera Calibration Camera Parameter Image Processing e Development Information Saving Module Setting Setting Display Figure 1: Functional framework of visual navigation software. (4) (e serial port selection section selects and config- and right wheels are the driving wheels, and the front and ures the communication serial port between the rear wheels are the driven wheels. (e wheeled mobile robot upper and lower computers. can accomplish a variety of motions, mainly by controlling the rotational speed of its left and right drive wheels, re- (5) (e data display and saving section is used to display spectively. (erefore, to effectively manage the movement of the pitch angle of the camera measured by the IMU the mobile robot, its kinematic model must be analyzed first. in real time, to solve the position and angle deviation (e position state of the mobile robot at two adjacent of the robot, to store the position and angle deviation moments, with the x-axis forward as the robot’s forward of the weeding robot during automatic navigation in direction, where v and v are the velocities of the robot’s left txt file format, and to select the size of the filter r t and right drive wheels, respectively, the angle the robot has kernel. turned at the adjacent moment t, r is the distance between (6) Robot motion control is divided into manual control the left and right drive wheels, and r is the radius of the mode and automatic navigation mode. (e manual circular arc motion at the adjoining moment where the control mode is used to regulate the robot’s position forward velocity of the mobile robot is equal to the average and adjust the part of the welding robot in the water rate of its left and right wheels, assuming that the steering field. (e automatic navigation mode is for the robot angle is slight, the formula can be obtained as follows: to track forward along the seedling navigation line v − 1 according to the control command of fuzzy decision. ⎧ ⎪ ⎪ √���� v � , ⎪ t (e robot vision navigation flow chart is shown in lw/2 Figure 2. (1) v + 1 Visual navigation of mobile robots is to collect road √���� v � . 2/lw information through a camera to identify marking lines and guide the robot. (erefore, to accurately show the robot along the desired path, the body model must be built first to Before the visual navigation line improvement RANSAC algorithm extraction, each pixel point in the optical de- realize the conversion between the image space coordinate system corresponding to the camera and the world coor- tection area of the navigation line needs to be calculated in u , the directional gradient I and v , directional dinate system centered on the robot’s body. (e mobile IPM upm IPM 2 2 robot body structure is fixed, and its Kinect camera optical gradient I , and the product I I and sum I I of vPM u v u v ipm iPM ipm ipm the two gradient directions are calculated. (en, the axis is parallel to the road surface, 34 cm high from the ground, and its maximum adequate vertical view is a + b Gaussian filtering is performed, and the template parameters are normalized; then, the corner point response value P of because there are shields above and below the camera, i.e., the bottom of the image taken by the camera is the road each image element is calculated, and the P value smaller than the threshold is set to zero. Finally, within the 3 × 3 surface 66 cm in front of it. (e Tourtellot mobile robot in neighborhood, the local nonmaximum values are this paper adopts a four-wheel structure, in which the left Journal of Robotics 5 Right Drive Navigation Respectively Corresponding Wheels Eecti ff vely Respectively Parallel Accurately Two Adjacent Movement Moments Surface Coordinate Identify Marking Corresponding System Lines Right Drive Kinematic Wheels surface Guide The Robot Body System At The Adjacent Camera Respectively Moment motions Conversion Below The Maximum Camera The Body Model The Conversion Surface Left Wheels Right Drive Wheels Between The Adjoining The Camera Accomplish Moment Figure 2: Flow chart of robot vision navigation. suppressed, and the remaining local maximum values are Step 5: Repeat the hypothesis and judge the mathe- output as corner points. (e specific steps of the improved matical model to find the ingroup points step N by RANSAC algorithm for visual navigation line detection are step, compare the ingroup point set U with the SMax as follows: most significant number of ingroup points, and output its corresponding linear model M to get the navigation Step 1: A minimum of two data points is required for path line L . each random sampling. (e number of samples in the data set You for each corner point needs to be guar- (e essence of the Hough transform is to map the image anteed to avoid misfitting the harvesting navigation line to its parameter space, which requires the computation of all due to too few corner points. HM edge points and requires a large amount of memory space and operations. (e improved Hough transform Step 2: Initial estimation of two randomly selected data 1 1 2 2 processes m (m < M ) only one edge point in the input H H H points (u , v ) and (u , v ) models from the IPM Ipm IPM Ipm image, and the selection of m this edge point is somewhat corner point data set U to obtain a linear model M. H random. In addition, the enhanced Hough transform al- 􏽱��������� 2 1 v + v gorithm can obtain two endpoints of a straight line in the ipm ipm (2) v � . ip detection image and accurately locate the detected target 2 1 u + u straight line. (e specific detection processes are as follows: Step 3: For the remaining data in the data set U, cal- (1) In the image of the detection area, an edge point is culate the pixel distances d to the linear model M in randomly selected (u , v ) and mapped to the IPM Ipm turn, if the distance threshold A is d satisfied, but the m polar coordinate system to obtain a family of point into the set U as an ingroup point together with straight lines through the edge point. Suppose the the extracted sample points, and the other points as edge point has been marked on a straight line. In outgroup points. that case, the random selection is continued among the remaining issues, and the polar coordinate Step 4: Count the number of ingroup points in the set U of ingroup points s. If s satisfies the threshold S of equation where the family of straight lines through s T the edge point lies is obtained until all the edge the number of ingroup points, refit the ingroup points points are randomly selected. (e polar coordinates U in the ingroup points using the least-squares of the lines passing through the edge point are the method, and update the linear model M. If it does not opposite equation. satisfy, discard this linear model. 6 Journal of Robotics displayed in the color image. (e principle of target image u . cos θ ipm 􏽱�������� � r � . visual localization is shown in Figure 3. (3) v . sin θ ipm (e amount of data in the image is also three times the number of pixels. When an image is grayed out, the in- (2) (e Hough transform of randomly selected edge formation contained in the image becomes one-third of the points and calculation of the cumulative sum. original image [17]. (e image pixels differ only in brightness and are all displayed in gray. (e various colors in (3) In the Hough space, select the edge point that reaches a color image are composed of the three base colors R, G, and the maximum value and continue to the next step B. In digital images, if the R, G, and B base colors are finely when the issue is greater than the threshold, oth- divided, the more colorful the image can be and the more erwise return to step1. information the image contains. (e R, G, and B base colors (4) (e edge point where the maximum value reached by are red grayscale, green grayscale, and blue grayscale. step3 is taken as the starting coordinate point. (e Weighted value method: according to different indicators to displacement is carried out along the straight-line the original color image R, G, and B components multiplied direction. (e two endpoints of the line are detected by the corresponding weight to find the weighting, the b b to have coordinates of (u , v ) and IPM IPM expression is as follows: e e (u , v ), respectively. IPM IPM 􏽲�������� 􏽱��������� (5) When the length of the line obtained from the de- W W G B (4) gray � 􏽘 W + R × + . tection reaches a particular threshold value, the line R G B is output as a result, and the detection is continued Image enhancement is also critical to the overall bin- by returning to the initial step. ocular vision system and is necessary for processing images. Compared with the standard Hough transform algo- Image enhancement highlights essential information in the rithm, the improved Hough transform has significantly image to meet the system’s requirements and eliminate or improved memory consumption and computation. (e weaken redundant details irrelevant to the system. Enhanced improved probabilistic Hough algorithm can effectively image processing makes the image more compatible with avoid the interference of non-harvesting dividing lines and human visual habits and is intended for specific application achieve accurate detection of visual navigation lines. In the purposes. After the image is enhanced, only the ability to complete detection process of optical navigation lines, the distinguish information increases, while the background average processing time of a single frame for navigation line information is not added. (e improved picture is more detection based on the original probabilistic Hough trans- suitable for the application than the original image in specific form algorithm is 77.4 Ms. In comparison, the average scenarios. (e standard methods for image enhancement are processing time of a single frame based on the improved segmented linear transform enhancement and histogram probabilistic Hough algorithm is 54.6 Ms. (e enhanced equalization. (e segmented linear transformation en- algorithm also improves the processing speed compared hancement: Suppose the gray map function before the en- with the original algorithm. hancement transformation is f(r, c) and the grayscale range is [O, M ], after the segmented linear transformation en- hancement, the gray map function is g(r, c) and the gray- 3.2. Target Image Recognition and Localization Model Design. scale range is [O, M ] and the formula. (e process of digital image acquisition and transmission can be disturbed by many factors, which can cause differ- 􏽲���� � m − d d − c ences between the digital image and the real object scene and g(r, c) � 􏽘 + . (5) can affect the image processing of the vision system in the b − a m − b later stage. Preprocessing operations such as grayscale, image enhancement, and filtering must be performed on the A grayscale histogram is a statistical graph of the dis- original image captured by the camera at the beginning of tribution of gray levels, representing the proportion of each the vision system. (en, the features of the preprocessed gray level pixel in the total number of pixels in a digital image are extracted, and then the parts are matched with image. (e histogram can describe the general situation of a those of the template image to identify the target strip of grayscale image, such as the degree of contrast between light smoke. (e surface of the identification target in this paper and dark, the frequency of each gray level, the distribution of has robust texture features. (e images captured by the gray levels in the picture, etc. (e gray histogram is a camera are generally colorful. (ey contain much infor- function of the gray level, with the gray value as the hori- mation, extending the time of processing images by the zontal coordinate and the number of pixels as the vertical binocular vision system and considering the efficiency re- coordinate [18]. (e gray histogram of an image has the quirements of recognition and localization technology to following properties: the gray histogram is a statistical result improve the efficiency of the binocular vision system. (e of the number of occurrences of the gray value of all pixels in color images are to be converted into grayscale maps [17]. the picture, which does not reflect the specific position of the (e R, G, and B components in the color image are con- gray value pixels in the image, but only the number of occurrences of different gray values; a pair of images only verted into the same values. Since each pixel has a different value for the R, G, and B components, other colors can be corresponds to a couple of histograms. (e histogram of a Journal of Robotics 7 e Process Of Digital Image Acquisition And Surface Of e Identification Transmission Can Be Disturbed By Target In is Paper Has Many Factors Robust Texture Features Reprocessing Operations Such As Grayscale Enhancement Vision System Image Enhancement Smoke Binocular Vision System Features e Binocular Vision System Localization Technology To Preprocessing Improve Grayscale Differences Robust Texture Features Colorful Considering e Efficiency Processing Grayscale Requirements Of Efficiency Recognition Color Image Object Figure 3: Target image visual localization principle. team of images corresponds to only one histogram, but one 4.Analysis of Results histogram can be reversed to correspond to different ideas; 4.1. Analysis of the Robot Vision Navigation Control System. the grayscale histogram counts the number of pixels with the In the process of autonomous navigation, the robot calcu- same gray value in an image, so the grayscale histogram of a lates the position deviation and angle deviation of the robot pair of images is equal to the sum of the histograms of all relative to the navigation line based on the seedling navi- parts of the picture. (e histogram with a [O, L − 1] range of gation line extracted by the vision system and continuously gray values is a discrete function. corrects the heading based on the variation during the forward motion [19]. (erefore, the positioning error of the (6) S � 􏽚 P + (w + dw). weeding robot concerning the navigation line directly affects the navigation control process of the robot and must be (e robot uses a trained feature point regression model measured and analyzed. (e angular deviation error was and a topological structure regression model to estimate the measured by fixing the deviation of the robot’s center on the robot’s poses online. First, a pyramid is constructed for the centerline of the seedling row and rotating the robot. (e acquired image, and SURF features and descriptors are angular deviation between the robot’s centerline and the extracted. Let F � 􏼈(p , v ), i � 1􏼉 be the location of the i i °° centerline of the seedling row changed from [15–15 ] to extracted feature points and their descriptors. Based on the °° [−15 ]. When the centerline of the robot is parallel to the depth map of the current observation, the coordinates centerline of the seedling column, the angular deviation is corresponding to the feature points in the camera coordinate defined as positive when the weeding robot is turned system can be obtained. Let S � s ∈ R , i � 1 denote the set 􏼈 􏼉 counterclockwise, i.e., the centerline of the weeding robot of points corresponding to the features. (en, the topo- deviates to the left concerning the centerline of the seedling, logical positions corresponding to the current image are the angular deviation is defined as unfavorable when the predicted using the topological position regression model to weeding robot is turned clockwise, i.e., the centerline of the obtain multiple candidate topological positions. At the same weeding robot deviates to the right concerning the centerline time, the spatial coordinates in the world coordinate system of the seedling column. (e measured and calculated values are indicated for the features in F using the feature point of angular deviation were recorded every 5 as a set of data. regression model. Let M � 􏼈v � m + m , i � 1􏼉 denote the i 1 u (e experiment was repeated thrice for each group to im- set of prediction results for feature points. M contains only prove the reliability of the experimental data, as shown in the prediction results located at that topological node for Figure 4. (e mean error of the angular deviation was each topological location. Due to the existence of erroneous ° ° calculated to be 0.11 , and the standard deviation was 0.04 . predictions and the fact that each point has multiple pro- To verify the feasibility and reliability of the visual jections, the global positional estimation problem of the navigation and path tracking designed in this paper, the camera can be expressed. following scenes are set up: straight line, turning path, and obstacle occlusion. In the experiment, its forward speed is set t � arg min 􏽘(min ∈ |m − ts|). (7) 1 as V � 0.2 m/s. (e angular velocity w is in the 8 Journal of Robotics 0 2 4 6 8 1012141618202224 Number of experiments SVM-1 SVM-3 SVM-2 SVM-4 Figure 4: Experimental test comparison chart. [−1.0 rad/s, 1.0 rad/s] range, in each frame after processing, 900 the recognized center line is displayed with a straight blue line, and if a digital road sign is detected, the connection between the presighting point and the center of the digital road sign is displayed with a red straight line. (e initial position of the Tourtellot robot is slight to the right of the path in the straight section at startup, and the identification line recognition results under several moments at intervals 500 during operation. During the process of the robot, the angle of the centerline of the marking line obtained through image processing and recognition is plotted against the distance from the pre-sighting point to the centerline, as shown in Figure 5, where the dark blue curve is after the Kalman pre filtering. (e light blue curve is the angle θ without the Kalman filtering. It can be seen that θ the fluctuation is 0 5 10152025303540 relatively stable during the robot’s operation without sig- T (s) pre nificant abrupt changes. In contrast, the θ instability of the angular pre-angular unfiltered red curve in the figure is the distance deviation d distance pre-distance after Kalman filtering, while the purple curve is the unfil- pre tered distance deviation d . (e result of the distance Figure 5: Running process identification lines θ and d curves. deviation depends on the angle to a certain extent, so the pre fluctuation θ leads to a more significant distance deviation from the pre-sighting point to the marker line also varies industrial CCD camera, and the execution component Turtle more. Bot2, and enable the wireless network access function after (e lane lines were laid in the lab, and the corresponding checking and confirming the connection of each hardware of QR codes were set at each node location to place the mobile the lower computer, setting the network parameters, and robot at any initial node location. First, we start the mobile accessing the LAN established by the wireless server; the robot management system and wireless communication upper computer reads the map file and reads the infor- server, set the relevant network parameters, establish the mation to the vehicle controller according to the readings. wireless LAN server, and wait for the lower computer to (e upper computer reads the map file, reads the data from access. We start the embedded development board TX2, the the vehicle controller, initializes the coordinates of the node Time (ms) Running process Journal of Robotics 9 where the mobile robot is located according to the read -2.0 information, and displays its position in the map in real 60 -1.5 time; the management system will automatically number the tasks after receiving the task command, and then compare -1.0 the current mobile robot location information with the target location information to get the shortest path infor- -0.5 mation by path planning and transmit the path information 0.0 to the lower computer through wireless communication. (e path information will be transferred to the lower computer 0.5 through wireless communication. After receiving the path information, the lower computer will execute the task and 1.0 walk on the specified path through lane line tracking. (e 1.5 QR code recognition will be used for positioning correction and steering guidance, and the real-time data will be 0 2.0 uploaded to the upper computer [20]. To verify the accuracy 0 2 4 6 8 10 12 14 16 18 20 22 of the path tracking algorithm based on visual navigation, we T (s) randomly select a time point in the mobile robot operation Composite index test, extract the measured angle data of the gyroscope in the Growth rate actuator at this time point, and compare it with the data Figure 6: Deviation angle change. obtained from the path tracking algorithm at the corre- sponding time. (e deviation of the gyroscope angle data from the angle data obtained from image processing is calculated, and the relationship between the selected time the factors affecting the angle deviation measurement error nodes and the angle deviation is plotted as shown in Fig- are mainly: due to the influence of factors such as the in- ure 6. From the results, the deviation value between the path fluence of driving vision, the manually operated intelligent tracking algorithm and the actual angle is ±1.5 , indicating rice, and wheat harvester cannot always keep the straight line that this paper’s path tracking algorithm can achieve high where the left-hand divider is located parallel to the harvest accuracy. navigation line, i.e., the actual reference value floats around 0 ; the incomplete segmentation of harvested and unharvested areas in the image causes the detected navigation line and the 4.2. Results of Target Image Recognition and Localization straight line where the left-hand divider is located to addi- Analysis. (e validation criteria for the measurement ac- tional angle exists. curacy of visual navigation parameters were first established, (e training process of the detection algorithm includes followed by the analysis of the overall structure of the optical data set preparation, formal training, and network gener- navigation detection system in the field and the experimental alization error evaluation. (en, the deep learning detection locations and experimental methods. (en, experiments on algorithm is compared with the traditional SVM algorithm navigation line detection were conducted for the improved for classification experiments. (e results surface that al- RANSAC algorithm and the enhanced probabilistic Hough though the deep learning detection check-all rate is slightly transforms algorithm under different harvesting environ- lower than the SVM algorithm, its check-accuracy rate and ments. (e success rate of detection and image processing accuracy rate are higher than the SVM algorithm. And the operation speed were statistically analyzed. Finally, experi- detection rate of the deep learning detection algorithm has a ments are conducted to verify the success rate of navigation more significant advantage, which can meet the basic re- line detection for the image pyramid optical flow tracking quirements of real-time QR code detection for mobile ro- algorithm under different environments, and multiple sets of bots. Finally, the operation experiment of the whole system displacement deviation and angle deviation benchmark values is carried out to verify the feasibility. Moreover, comparing are set to verify the measurement accuracy of navigation the gyroscope angle data with the path tracking algorithm parameters according to the visual navigation parameter data can be obtained. Various experiments were designed measurement accuracy verification standard. (e measure- according to different given algorithm parameters to in- ment accuracy of angular deviation of navigation parameters fluence the environment model rasterization resolution of the image pyramid optical flow tracking algorithm is parameter on the confidence occupancy meter. (e test verified. (e experiments on the error measurement of the image size varies. (e algorithm automatically transforms angular variation of the navigation parameters of the image the original test map to 448 × 448 resolution size before pyramid optical flow tracking algorithm were performed in 6 input. (e output result of the positioning frame will also be groups in turn, with the base value of the angular deviation altered accordingly. (e partial detection results of the deep being 0 . As shown in Figure 7, the average value of the learning detection algorithm are shown in Figure 8. (e maximum error of the actual measurement value of the results show that the deep learning detection algorithm angular deviation of the harvester was 10.57 , the average performs well for QR codes with simple backgrounds, value of the mean error was 3.73 , and the average value of the complex backgrounds, vestiges, small deformations, partial standard deviation was 2.98 . In the actual detection process, occlusions, or multiple QR codes in a single image. Deviation angle 10 Journal of Robotics velocity 2.0 Aoccleration 1.5 1.0 Positon 0.5 0.0 24 6 8 10 12 14 16 18 20 T (s) Figure 7: Component trajectories of the rotational path, velocity, and acceleration. computer reads the map file, reads the information from the 19% 23% 35% 24% vehicle controller, initializes the coordinates of the node 21% 35% 19% 25% where the mobile robot is located according to the read 8 information, and displays its position in the map in real 30% 36% 9% 25% time. (e gyroscope is corrected once after the QR code, so it 7 24% 12% 30% 34% is assumed that the gyroscope data is the actual angle. (e 6 43% 9% 35% 13% deviation of the gyroscope angle data from the angle data 5 27% 23% 13% 37% obtained from image processing is calculated, and the re- 4 23% 12% 32% 32% lationship between the selected time nodes and the angle 3 26% 16% 21% 37% deviation is plotted. From the results, the variation of the 2 25% 42% 11% 23% path tracking algorithm from the actual angle is ±1.5 , which 1 14% 25% 27% 34% indicates that the path tracking algorithm in this paper can achieve high accuracy. 0 20 40 60 80 100 Testing time (s) 5.Conclusion Full masking Small deformation With the continuous development and progress of science Partially obscured defective and technology, the intelligent level requirements are Figure 8: Comparison results of testing indicators. gradually increasing, and the automatic control of mobile robots has become an important direction in the develop- ment of robot systems, whose visual navigation system is one Among the detection results, the deep learning detection of the hotspots of research today. In this paper, an in-depth algorithm can make accurate judgments on the images of the study is conducted on the problem of robot optical navi- experimental scenes of the mobile robot in this paper. (e gation path detection. For the problem that the RANSAC accuracy and rapidity of the integrated detection algorithm algorithm first establishes the linear mathematical model of can meet the basic requirements of real-time detection of QR the path in the navigation line detection and then performs codes during the execution of the mobile robot tasks in this the remaining corner point model verification, which leads paper’s experiments. Moreover, the corresponding QR codes to more iterations of the algorithm and more considerable are set at each node location, and the mobile robot is placed computation, model verification criteria are added to avoid at any initial node location. First, we start the mobile robot the problem of time-consuming and detection errors caused management system and wireless communication server, set by continuing iterative verification in the case of model errors. By limiting the range of edge point probability ex- the relevant network parameters and establish the wireless LAN server, and wait for the lower computer to access; then, traction and setting the success criterion of straight line we start the embedded development board TX2, the in- detection, the improved Hough transform algorithm can dustrial CCD camera, and the execution component Turtle effectively solve the problem of fast and accurate identifi- Bot2 and enable the wireless network access function after cation of navigation lines caused by the probability ex- each hardware connection of the lower computer is checked traction of edge points in the whole detection area; finally, and confirmed, set the network parameters and access the the image pyramid optical flow tracking algorithm is used to LAN established by the wireless server; the upper computer realize the tracking detection of robot visual navigation and reads the QR code and reads the QR code. (e upper tracking measurement of visual navigation parameters. (e Testing index Component trajectory Journal of Robotics 11 IEEE Transactions on Intelligent Transportation Systems, robot is experimented with setting straight, turning, and vol. 21, no. 8, pp. 3409–3422, 2019. obstacle occlusion scenes. From the trajectory graphs ob- [3] L. Qiu, C. Li, and H. Ren, “Real-time surgical instrument tained from the experimental simulation, the robot can tracking in robot-assisted surgery using multi-domain con- better fit the marking line operation during the operation. volutional neural network,” Healthcare Technology Letters, (e charts of parameters and d show that the bit-posture vol. 6, no. 6, pp. 159–164, 2019. relationship of the robot body relative to the marking line is [4] A. Devo, G. Mezzetti, G. Costante, M. L. Fravolini, and closed during the process, which also illustrates the effec- P. Valigi, “Towards generalization in target-driven visual tiveness and accuracy of target image recognition and navigation by using deep reinforcement learning,” IEEE localization. Transactions on Robotics, vol. 36, no. 5, pp. 1546–1561, 2020. Based on the research work carried out in this paper, we [5] Y. Xiong, Y. Ge, L. Grimstad, and P. J. From, “An autonomous briefly analyze the potential research points with further strawberry-harvesting robot: design, development, integra- depth in this study, taking into account the current trends in tion, and field evaluation,” Journal of Field Robotics, vol. 37, computer vision and robotics. no. 2, pp. 202–224, 2020. (e introduction of deep neural network descriptors as [6] P. Neubert, S. Schubert, and P. Protzel, “A neurologically the front-end data matching for VSLAM to achieve visual inspired sequence processing model for mobile robot place data matching with illumination and viewpoint invariance is recognition,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3200–3207, 2019. a current research hotspot and trend in VSLAM-related [7] W. J. Heerink, S. J. S. Ruiter, J. P. Pennings et al., “Robotic work. A considerable amount of work has been carried out versus freehand needle positioning in CT-guided ablation of in this area, including the extraction of intermediate de- liver tumors: a randomized controlled trial,” Radiology, scriptors using existing network models and the design of a vol. 290, no. 3, pp. 826–832, 2019. network structure of visual feature descriptors specifically [8] S. G. Mathisen, F. S. Leira, H. H. Helgesen, K. Gryte, and for VSLAM. However, the gap between the current research T. A. Johansen, “Autonomous ballistic airdrop of objects from in this area and the application of VSLAM lies in the visual a small fixed-wing unmanned aerial vehicle,” Autonomous feature point extraction and the real-time nature of the Robots, vol. 44, no. 5, pp. 859–875, 2020. generation. In addition, deep neural network descriptors [9] K. M. Abughalieh, B. H. Sababha, and N. A. Rawashdeh, “A tend to have higher dimensionality and take longer to video-based object detection and tracking system for weight perform feature point matching and distance calculation. sensitive UAVs,” Multimedia Tools and Applications, vol. 78, (erefore, it is not easy to guarantee the online performance no. 7, pp. 9149–9167, 2019. of deep vision descriptors for system applications with high [10] K. Lee, J. Gibson, and E. A. (eodorou, “Aggressive per- real-time requirements such as VSLAM. (erefore, there is ception-aware navigation using deep optical flow dynamics work to be done on a downscale and speed up the depth and PixelMPC,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1207–1214, 2020. vision descriptors to meet the demand of VSLAM real-time [11] C. H. G. Li and Y. M. Chang, “Automated visual positioning applications. and precision placement of a workpiece using deep learning,” 1e International Journal of Advanced Manufacturing Tech- Data Availability nology, vol. 104, no. 9, pp. 4527–4538, 2019. [12] D. Fielding and M. Oki, “Technologies for targeting the pe- (e data used to support the findings of this study are ripheral pulmonary nodule including robotics,” Respirology, available from the corresponding author upon request. vol. 25, no. 9, pp. 914–923, 2020. [13] P. M. Kumar, U. Gandhi, R. Varatharajan, G. Manogaran, Conflicts of Interest R. Jidhesh, and T. Vadivel, “Intelligent face recognition and navigation system using neural learning for smart security in (e author declares no conflicts of interest. internet of things,” Cluster Computing, vol. 22, no. 4, pp. 7733–7744, 2019. Acknowledgments [14] V. Vasilopoulos, G. Pavlakos, S. L. Bowman et al., “Reactive semantic planning in unexplored semantic environments (is work was supported by the 2021 school level scientific using deep perceptual feedback,” IEEE Robotics and Auto- research and innovation team project, Innovation and En- mation Letters, vol. 5, no. 3, pp. 4455–4462, 2020. trepreneurship Education Scientific Research and Innova- [15] V. Vasilopoulos, G. Pavlakos, S. L. Bowman et al., “Reactive tion Team, (No. HNACKT-2021-01) and 2021 Research semantic planning in unexplored semantic environments using deep perceptual feedback,” IEEE Robotics and Auto- Projects of Educational Science (No. HNACJY-2021-15). mation Letters, vol. 5, no. 3, pp. 4455–4462, 2020. [16] J. W. Martin, B. Scaglioni, J. C. Norton et al., “Enabling the References future of colonoscopy with intelligent and autonomous magnetic manipulation,” Nature machine intelligence, vol. 2, [1] C. Sampedro, A. Rodriguez-Ramos, H. Bavle, A. Carrio, no. 10, pp. 595–606, 2020. P. de la Puente, and P. Campoy, “A fully-autonomous aerial [17] M. Ma, H. Li, X. Gao et al., “Target orientation detection based robot for search and rescue applications in indoor environ- on a neural network with a bionic bee-like compound eye,” ments using learning-based techniques,” Journal of Intelligent and Robotic Systems, vol. 95, no. 2, pp. 601–627, 2019. Optics Express, vol. 28, no. 8, pp. 10794–10805, 2020. [18] J. Yang, C. Wang, B. Jiang, H. Song, and Q. Meng, “Visual [2] S. Wang, F. Jiang, B. Zhang, R. Ma, and Q. Hao, “Develop- ment of UAV-based target tracking and recognition systems,” perception enabled industry intelligence: state of the art, 12 Journal of Robotics challenges and prospects,” IEEE Transactions on Industrial Informatics, vol. 17, no. 3, pp. 2204–2219, 2020. [19] W.-H. Su, “Advanced machine learning in point spectros- copy, RGB- and hyperspectral-imaging for automatic dis- criminations of crops and weeds: a review,” Smart Cities, vol. 3, no. 3, pp. 767–792, 2020. [20] A. A. Zhilenkov, S. G. Chernyi, S. S. Sokolov, and A. P. Nyrkov, “Intelligent autonomous navigation system for UAV in randomly changing environmental conditions,” Journal of Intelligent and Fuzzy Systems, vol. 38, no. 5, pp. 6619–6625, 2020. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Journal of Robotics Hindawi Publishing Corporation

Recognition and Localization of Target Images for Robot Vision Navigation Control

Journal of Robotics , Volume 2022 – Mar 24, 2022

Loading next page...
 
/lp/hindawi-publishing-corporation/recognition-and-localization-of-target-images-for-robot-vision-qfda0rG1Q6

References

References for this paper are not available at this time. We will be adding them shortly, thank you for your patience.

Publisher
Hindawi Publishing Corporation
Copyright
Copyright © 2022 Muji Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ISSN
1687-9600
eISSN
1687-9619
DOI
10.1155/2022/8565913
Publisher site
See Article on Publisher Site

Abstract

Hindawi Journal of Robotics Volume 2022, Article ID 8565913, 12 pages https://doi.org/10.1155/2022/8565913 Research Article Recognition and Localization of Target Images for Robot Vision Navigation Control Muji Chen College of Information Engineering, Henan Vocational College of Agriculture, Zhengzhou, Henan 451450, China Correspondence should be addressed to Muji Chen; 2004110217@hnca.edu.cn Received 20 January 2022; Accepted 5 March 2022; Published 24 March 2022 Academic Editor: Shan Zhong Copyright © 2022 Muji Chen. (is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. (is paper focuses on a visual navigation control system for mobile robots, recognizing target images and intelligent algorithms for the navigation system’s path tracking and localization techniques. (is paper examines the recognition and localization of target images based on the visual navigation control of mobile robots. It proposes an efficient marking line method for recognizing and localization target images. Meanwhile, a fuzzy control method with smooth filtering and high efficiency is designed to improve the stability of robot operation, and the feasibility is verified in different scenarios. (e corresponding image acquisition system is developed according to the characteristics of the experimental environment, and the acquired images are preprocessed to obtain corrected grayscale images. (en, target image recognition and linear fitting are performed to obtain target image positioning. (e system calculates the angle and distance of the mobile robot, offsetting the target image in real time, adjusting the output signal, and controlling the mobile robot to realize path tracking. (e comparison of sensor data and path tracking algorithm results during the experiment shows that the path tracking algorithm achieves good results with an angular deviation of ±1.5 . (e application of RANSAC algorithm and improved Hough algorithm was analyzed in visual navigation control, and the two navigation line detection algorithms based on the image characteristics of the target image were improved in the optical detection area of the navigation line for the shortcomings of the two algorithms in visual navigation control, and the algorithms before and after the improvement were compared. introduced. (e intelligence level of robots has been im- 1.Introduction proving. In the 21st century, attention is on the robot’s (e mobile robot is an essential branch of robotics. It is an perception of the external environment and autonomy. (e intelligent robot control system capable of detecting and new direction of robotics is bound to develop toward sensing the environment through various sensors and car- practicality and intelligence. Mobile robots have been widely rying out independent analysis, planning, and decision- used in traditional industry and agriculture and will be making based on environmental information and its state further expanded to new sectors, services, defense and se- [1]. (e research field of mobile robotics involves many curity, and medical services and will be commonly used in kinds of interdisciplinary theories and technologies, in- unsuitable and dangerous situations, such as deep sea and cluding computer vision, sensor information technology, space. (erefore, the broad application prospect of mobile communication technology, motion control theory, and robots has made the research in this field receive widespread mechanical engineering. (e current hot wave of artificial attention worldwide. intelligence also affects the research progress of mobile Autonomous mobility, which gives robots the ability to robotics [2]. With the rapid development of information explore their environment more fully, dramatically increases technology, computer microelectronics, and network tech- the complexity of the tasks they can accomplish. State es- nology, mobile robotics has also developed rapidly, and timation during movement is a constant topic in mobile more and more new robots with special functions have been robotics research [3]. (e primary consideration in 2 Journal of Robotics upgrading in the industrial field. More and more intelligent designing a reasonable and efficient state estimation method is the type of sensor the robot is equipped with and the robots are needed to liberate labor, improve production efficiency, save energy consumption, etc. Visual inspection characteristics of the data acquired by that type of sensor, i.e., the construction of a sensor observation model. (e in- area image recognition is the basis of navigation line ex- formation that a mobile robot carries about itself and its traction. (e quality of image segmentation affects the environment is the source of all information in the subse- navigation line extraction and the size of the error in the quent navigation process and determines the form of in- measurement results of navigation parameters. In the formation processing in the following global positioning and navigation line region established by ultrasonic measure- attitude tracking, map building, environment understand- ment, the navigation line visual detection region is set as the target operation domain for a series of image processing ing, path planning, and motion control, and task execution. How to deal with the uncertainty contained in the perceptual algorithms, and the detection region is dynamically tracked and set based on the detection results of adjacent frames; information and how to design efficient cognitive methods to deal with the mental uncertainty based on the environ- preprocessing image algorithms such as inverse color transformation and histogram equalization are specifically mental information contained in the perceptual report are the significant challenges for building mobile navigation analyzed to enhance the different target images in the de- systems and must also be predicated on the construction of tection region differentially. mobile robot observation models [4]. At the same time, the estimation of its positional attitude is the basis and pre- 2.Related Works requisite for performing other processes during the exe- cution of tasks by mobile robots [5]. (erefore, the (rough the continuous development of electronic hard- performance of the state estimation method will significantly ware technology and control disciplines, by the 1960s, some affect the performance of the whole navigation system. (us, European countries already had various forms of mobile the mobile robot observation model and the underlying state robots. With the rapid growth of processors in the 70s and estimation model are introduced for two sensors, laser 80s, mobile robots have made significant flexibility and sensor, and RGB-D vision camera, respectively, to illustrate stability. However, the main application scenarios are still the robot state estimation process under different obser- the warehousing industry and logistics and transportation vation information and its uncertainty expression form systems [9]. In the 1990s, the degree of intelligence and further elaborates its problems. Specifically, the observation automation of mobile robots was further improved with the model is constructed for the laser sensor for the mobile robot rapid development of computers, electronics, communica- equipped with the laser sensor [6]. Based on the observation tions, and image processing technologies, and mobile robots model, various forms of observation similarity measures are adapted to various working environments were born, which given, and the characteristics of each form are analyzed. have been widely used in the material assembly, home Based on this, a general model of the global localization appliance production, chemical industry, food, and many process of the robot on the raster map is given. Finally, since other industries. (e vision-based mobile robot navigation global localization results are often uncertain and multi- technology has been a new research boom in recent years hypothesis, the probability-based state tracking model is and is one of the essential directions of mobile robot introduced in this chapter. For the visual observation model, guidance technology research work. Research laboratories in the camera projection model is described, the projection universities in the countries that first researched visual model of spatial points to the camera plane is described, and navigation technology for mobile robots have achieved the method to recover the spatial position of pixels is given. significant research results [10]. Many of the results have Based on this, a technique for global positional estimation been applied to actual industrial production and even to the based on the current observation and the feature matching daily lives of the general public. results in the worldwide map is described. Due to the bias of (e Robot Vision Laboratory was the first to develop a feature observation, there is uncertainty in the global po- vision-guided mobile robot based on map construction, sitional estimation results. which is accomplished through scene reconstruction using Robotics has been rapidly improved, thanks to the rapid vision sensors to capture photos of the scene [11]. (e development and maturity of microcomputer technology, laboratory at Purdue University has developed an active sensors, and other related technologies. Intelligent robots binocular stereo-based vision-guided mobile robot, Peter, have been popularized and applied in various fields such as which acquires 3D information of the operating environ- civil, military, and scientific research. (e research results of ment and path obstacles, and combines 8 radar scanners, 24 intelligent robots are more prominent in many developed ultrasonic sensors, 8 infrared distance sensors, and a passive countries and intelligent. Highly automated intelligent ro- infrared motion sensor to achieve flexible operation. (e bots have been put into many fields such as aerospace, Intelligent Robotics Laboratory at Osaka University has geological exploration, scientific exploration, rescue and conducted in-depth research on vision navigation and de- disaster relief, such as China’s lunar rover “Moon Rabbit”. veloped a mobile robot based on monocular vision navi- Some low-cost, clever robots are also coming into daily life gation, which can detect the surrounding environment and are used in many indoor environments in homes or extensively by rotating its vision sensors and obtaining the offices, such as floor cleaning robots [7, 8]. We are now in a positioning information, travel distance, and turning angle critical period of the modern manufacturing industry of the mobile robot by a rotary encoder and potentiometer Journal of Robotics 3 [12]. (e vision navigation technology is widely used in module, the navigation decision module, the information advanced countries, such as autonomous lawnmowers, Mars communication module, and the information storage landing vehicles, driverless vehicles, and Kiva robots for module. (e information acquisition module is the basis and Amazon’s unmanned warehouses. preparation of the visual navigation. Its primary function is Global localization of mobile robots is the basis of to obtain an accurate image with distortion correction after navigation. (e localization process is based on observations the calibration of the camera [15]. (e IMU (Inertial to form feature representations and perform feature retrieval Measurement Unit) detects the pitch angle of the camera in and area inference in a global map. Lasers and vision are two real time to correct the deviation of the camera pitch angle standard sensors used in the indoor navigation process [13]. due to the vibration of the robot during the navigation Lasers can provide stable distance sensing information, process. (e image processing module is the core part of which is advantageous in obstacle avoidance and motion visual navigation. (e main functions of the image pro- planning tasks. However, it can only perceive flat infor- cessing algorithm are image ROI construction, detection mation and is relatively homogeneous. Vision can perceive based on a deep learning model, detection frame clustering, image grayscale and smoothing filtering, corner point fea- richer data, but its geometric perception range is lower, while laser has a more extended and more stable perception ture extraction in the detection frame, and navigation line range. In the global localization process, there are multiple fitting. (e navigation decision module controls the motion possible regions of positional distribution in the environ- of the robot based on the navigation information obtained ment due to the existence of similar areas and the incom- after image processing. (is study uses the fuzzy control pleteness of the perceived information. How to eliminate the method developed by our group to control the robot’s spatial perceptual ambiguity to make an accurate estimation movement. (e primary process of the navigation decision of the current poses is essentially a global optimization module is to extract the navigation lines in the ROI after problem. (e robot measures the degree of information image processing, then to find out the dominant route, solve consistency between the recent observation and the expected the position deviation and angle deviation of the robot, then observation corresponding to the estimated position based input the navigation deviation parameters into the fuzzy control decision, and finally derive the control command of on the constructed objective function. (ere is considerable international work on visual navigation motion control from the robot. (e function of the communication module is to image analysis to driving command generation [14]. In realize the serial communication between the visual navi- image segmentation, since the opening of the ImageNet gation upper computer software and the lower computer challenge, there has been significant work based on the hardware control system. (rough the configuration of the traditional feature point segmentation method to the cur- serial parameters and the development of the data trans- rently popular deep learning methods: the use of deep mission protocol, the control information is finally con- learning to achieve semantic recognition of images and verted so that the digital signal output by the navigation segmentation methods, with a very high number of refer- software is converted to the level signal for controlling the ences to achieve autonomous driving using images, or au- robot motion [16]. (e information saving module is re- tonomous driving using reinforcement learning methods, sponsible for saving the camera calibration parameters to the mainstream research in the field of autonomous driving. avoid tedious, repetitive calibration work content, and (ere is a large body of literature on autonomous driving, saving the position deviation and angle deviation during the including the use of road segmentation and the use of en- robot autonomous navigation process, which is used for the vironmental features for autonomous driving. quantitative analysis of the accuracy of the image processing algorithm. It is divided into six parts: image processing display, camera parameter setting, camera calibration set- 3.Model Design of the Target Image ting, serial port setting, production and saving of data, and robot motion control. Recognition and Localization System Based on Robot Vision Navigation Control (1) (e image processing display section displays the processing effects of the main stages of image pro- 3.1. Robot Vision Navigation Control System Construction. cessing in real time. (e processing details of the (e visual navigation software is mainly used to process the image algorithm can be visually observed and ana- images obtained from the camera in real time, then the lyzed for any problems that exist. navigation lines are extracted, navigation decisions are made (2) (e camera parameter setting section is used for after calculating the navigation parameters of the robot, and camera ID selection, camera resolution setting, and finally control signals are sent to the lower computer to image correction by calibrating internal camera control the weeding robot for automatic navigation. (e reference. visual navigation software in this paper is written on the (3) (e camera calibration section is used to set the Visual Studio 2015 development platform based on MFC calibration board parameters, calibrate the camera, with Open CV, where the algorithms are implemented in and save the calibration parameters. (e calibration C++ and C languages. (e functional framework of the parameters saved locally can be read directly in the visual navigation software is shown in Figure 1. subsequent image correction operation, repeatedly (e visual navigation software comprises five parts: the avoiding tedious camera calibration work. information acquisition module, the image processing 4 Journal of Robotics e Information e Image Navigation Acquisition Module Processing Module e Navigation e Information e Information Acquisition Decision Module Communication Module Storage Module e Visual e Navigation Primary Distortion Calibration Real-Time Navigation Process e Image Detection Frame Processing Algorithm Construction Learning Model Processing Module Clustering Navigation Line Smoothing Corner Point Feature Extraction Detection Frame Image Grayscale Fitting Filtering Navigation Decision e Serial Information Image Processing Movement Communication Module Parameters e Control Information Camera Calibration Camera Parameter Image Processing e Development Information Saving Module Setting Setting Display Figure 1: Functional framework of visual navigation software. (4) (e serial port selection section selects and config- and right wheels are the driving wheels, and the front and ures the communication serial port between the rear wheels are the driven wheels. (e wheeled mobile robot upper and lower computers. can accomplish a variety of motions, mainly by controlling the rotational speed of its left and right drive wheels, re- (5) (e data display and saving section is used to display spectively. (erefore, to effectively manage the movement of the pitch angle of the camera measured by the IMU the mobile robot, its kinematic model must be analyzed first. in real time, to solve the position and angle deviation (e position state of the mobile robot at two adjacent of the robot, to store the position and angle deviation moments, with the x-axis forward as the robot’s forward of the weeding robot during automatic navigation in direction, where v and v are the velocities of the robot’s left txt file format, and to select the size of the filter r t and right drive wheels, respectively, the angle the robot has kernel. turned at the adjacent moment t, r is the distance between (6) Robot motion control is divided into manual control the left and right drive wheels, and r is the radius of the mode and automatic navigation mode. (e manual circular arc motion at the adjoining moment where the control mode is used to regulate the robot’s position forward velocity of the mobile robot is equal to the average and adjust the part of the welding robot in the water rate of its left and right wheels, assuming that the steering field. (e automatic navigation mode is for the robot angle is slight, the formula can be obtained as follows: to track forward along the seedling navigation line v − 1 according to the control command of fuzzy decision. ⎧ ⎪ ⎪ √���� v � , ⎪ t (e robot vision navigation flow chart is shown in lw/2 Figure 2. (1) v + 1 Visual navigation of mobile robots is to collect road √���� v � . 2/lw information through a camera to identify marking lines and guide the robot. (erefore, to accurately show the robot along the desired path, the body model must be built first to Before the visual navigation line improvement RANSAC algorithm extraction, each pixel point in the optical de- realize the conversion between the image space coordinate system corresponding to the camera and the world coor- tection area of the navigation line needs to be calculated in u , the directional gradient I and v , directional dinate system centered on the robot’s body. (e mobile IPM upm IPM 2 2 robot body structure is fixed, and its Kinect camera optical gradient I , and the product I I and sum I I of vPM u v u v ipm iPM ipm ipm the two gradient directions are calculated. (en, the axis is parallel to the road surface, 34 cm high from the ground, and its maximum adequate vertical view is a + b Gaussian filtering is performed, and the template parameters are normalized; then, the corner point response value P of because there are shields above and below the camera, i.e., the bottom of the image taken by the camera is the road each image element is calculated, and the P value smaller than the threshold is set to zero. Finally, within the 3 × 3 surface 66 cm in front of it. (e Tourtellot mobile robot in neighborhood, the local nonmaximum values are this paper adopts a four-wheel structure, in which the left Journal of Robotics 5 Right Drive Navigation Respectively Corresponding Wheels Eecti ff vely Respectively Parallel Accurately Two Adjacent Movement Moments Surface Coordinate Identify Marking Corresponding System Lines Right Drive Kinematic Wheels surface Guide The Robot Body System At The Adjacent Camera Respectively Moment motions Conversion Below The Maximum Camera The Body Model The Conversion Surface Left Wheels Right Drive Wheels Between The Adjoining The Camera Accomplish Moment Figure 2: Flow chart of robot vision navigation. suppressed, and the remaining local maximum values are Step 5: Repeat the hypothesis and judge the mathe- output as corner points. (e specific steps of the improved matical model to find the ingroup points step N by RANSAC algorithm for visual navigation line detection are step, compare the ingroup point set U with the SMax as follows: most significant number of ingroup points, and output its corresponding linear model M to get the navigation Step 1: A minimum of two data points is required for path line L . each random sampling. (e number of samples in the data set You for each corner point needs to be guar- (e essence of the Hough transform is to map the image anteed to avoid misfitting the harvesting navigation line to its parameter space, which requires the computation of all due to too few corner points. HM edge points and requires a large amount of memory space and operations. (e improved Hough transform Step 2: Initial estimation of two randomly selected data 1 1 2 2 processes m (m < M ) only one edge point in the input H H H points (u , v ) and (u , v ) models from the IPM Ipm IPM Ipm image, and the selection of m this edge point is somewhat corner point data set U to obtain a linear model M. H random. In addition, the enhanced Hough transform al- 􏽱��������� 2 1 v + v gorithm can obtain two endpoints of a straight line in the ipm ipm (2) v � . ip detection image and accurately locate the detected target 2 1 u + u straight line. (e specific detection processes are as follows: Step 3: For the remaining data in the data set U, cal- (1) In the image of the detection area, an edge point is culate the pixel distances d to the linear model M in randomly selected (u , v ) and mapped to the IPM Ipm turn, if the distance threshold A is d satisfied, but the m polar coordinate system to obtain a family of point into the set U as an ingroup point together with straight lines through the edge point. Suppose the the extracted sample points, and the other points as edge point has been marked on a straight line. In outgroup points. that case, the random selection is continued among the remaining issues, and the polar coordinate Step 4: Count the number of ingroup points in the set U of ingroup points s. If s satisfies the threshold S of equation where the family of straight lines through s T the edge point lies is obtained until all the edge the number of ingroup points, refit the ingroup points points are randomly selected. (e polar coordinates U in the ingroup points using the least-squares of the lines passing through the edge point are the method, and update the linear model M. If it does not opposite equation. satisfy, discard this linear model. 6 Journal of Robotics displayed in the color image. (e principle of target image u . cos θ ipm 􏽱�������� � r � . visual localization is shown in Figure 3. (3) v . sin θ ipm (e amount of data in the image is also three times the number of pixels. When an image is grayed out, the in- (2) (e Hough transform of randomly selected edge formation contained in the image becomes one-third of the points and calculation of the cumulative sum. original image [17]. (e image pixels differ only in brightness and are all displayed in gray. (e various colors in (3) In the Hough space, select the edge point that reaches a color image are composed of the three base colors R, G, and the maximum value and continue to the next step B. In digital images, if the R, G, and B base colors are finely when the issue is greater than the threshold, oth- divided, the more colorful the image can be and the more erwise return to step1. information the image contains. (e R, G, and B base colors (4) (e edge point where the maximum value reached by are red grayscale, green grayscale, and blue grayscale. step3 is taken as the starting coordinate point. (e Weighted value method: according to different indicators to displacement is carried out along the straight-line the original color image R, G, and B components multiplied direction. (e two endpoints of the line are detected by the corresponding weight to find the weighting, the b b to have coordinates of (u , v ) and IPM IPM expression is as follows: e e (u , v ), respectively. IPM IPM 􏽲�������� 􏽱��������� (5) When the length of the line obtained from the de- W W G B (4) gray � 􏽘 W + R × + . tection reaches a particular threshold value, the line R G B is output as a result, and the detection is continued Image enhancement is also critical to the overall bin- by returning to the initial step. ocular vision system and is necessary for processing images. Compared with the standard Hough transform algo- Image enhancement highlights essential information in the rithm, the improved Hough transform has significantly image to meet the system’s requirements and eliminate or improved memory consumption and computation. (e weaken redundant details irrelevant to the system. Enhanced improved probabilistic Hough algorithm can effectively image processing makes the image more compatible with avoid the interference of non-harvesting dividing lines and human visual habits and is intended for specific application achieve accurate detection of visual navigation lines. In the purposes. After the image is enhanced, only the ability to complete detection process of optical navigation lines, the distinguish information increases, while the background average processing time of a single frame for navigation line information is not added. (e improved picture is more detection based on the original probabilistic Hough trans- suitable for the application than the original image in specific form algorithm is 77.4 Ms. In comparison, the average scenarios. (e standard methods for image enhancement are processing time of a single frame based on the improved segmented linear transform enhancement and histogram probabilistic Hough algorithm is 54.6 Ms. (e enhanced equalization. (e segmented linear transformation en- algorithm also improves the processing speed compared hancement: Suppose the gray map function before the en- with the original algorithm. hancement transformation is f(r, c) and the grayscale range is [O, M ], after the segmented linear transformation en- hancement, the gray map function is g(r, c) and the gray- 3.2. Target Image Recognition and Localization Model Design. scale range is [O, M ] and the formula. (e process of digital image acquisition and transmission can be disturbed by many factors, which can cause differ- 􏽲���� � m − d d − c ences between the digital image and the real object scene and g(r, c) � 􏽘 + . (5) can affect the image processing of the vision system in the b − a m − b later stage. Preprocessing operations such as grayscale, image enhancement, and filtering must be performed on the A grayscale histogram is a statistical graph of the dis- original image captured by the camera at the beginning of tribution of gray levels, representing the proportion of each the vision system. (en, the features of the preprocessed gray level pixel in the total number of pixels in a digital image are extracted, and then the parts are matched with image. (e histogram can describe the general situation of a those of the template image to identify the target strip of grayscale image, such as the degree of contrast between light smoke. (e surface of the identification target in this paper and dark, the frequency of each gray level, the distribution of has robust texture features. (e images captured by the gray levels in the picture, etc. (e gray histogram is a camera are generally colorful. (ey contain much infor- function of the gray level, with the gray value as the hori- mation, extending the time of processing images by the zontal coordinate and the number of pixels as the vertical binocular vision system and considering the efficiency re- coordinate [18]. (e gray histogram of an image has the quirements of recognition and localization technology to following properties: the gray histogram is a statistical result improve the efficiency of the binocular vision system. (e of the number of occurrences of the gray value of all pixels in color images are to be converted into grayscale maps [17]. the picture, which does not reflect the specific position of the (e R, G, and B components in the color image are con- gray value pixels in the image, but only the number of occurrences of different gray values; a pair of images only verted into the same values. Since each pixel has a different value for the R, G, and B components, other colors can be corresponds to a couple of histograms. (e histogram of a Journal of Robotics 7 e Process Of Digital Image Acquisition And Surface Of e Identification Transmission Can Be Disturbed By Target In is Paper Has Many Factors Robust Texture Features Reprocessing Operations Such As Grayscale Enhancement Vision System Image Enhancement Smoke Binocular Vision System Features e Binocular Vision System Localization Technology To Preprocessing Improve Grayscale Differences Robust Texture Features Colorful Considering e Efficiency Processing Grayscale Requirements Of Efficiency Recognition Color Image Object Figure 3: Target image visual localization principle. team of images corresponds to only one histogram, but one 4.Analysis of Results histogram can be reversed to correspond to different ideas; 4.1. Analysis of the Robot Vision Navigation Control System. the grayscale histogram counts the number of pixels with the In the process of autonomous navigation, the robot calcu- same gray value in an image, so the grayscale histogram of a lates the position deviation and angle deviation of the robot pair of images is equal to the sum of the histograms of all relative to the navigation line based on the seedling navi- parts of the picture. (e histogram with a [O, L − 1] range of gation line extracted by the vision system and continuously gray values is a discrete function. corrects the heading based on the variation during the forward motion [19]. (erefore, the positioning error of the (6) S � 􏽚 P + (w + dw). weeding robot concerning the navigation line directly affects the navigation control process of the robot and must be (e robot uses a trained feature point regression model measured and analyzed. (e angular deviation error was and a topological structure regression model to estimate the measured by fixing the deviation of the robot’s center on the robot’s poses online. First, a pyramid is constructed for the centerline of the seedling row and rotating the robot. (e acquired image, and SURF features and descriptors are angular deviation between the robot’s centerline and the extracted. Let F � 􏼈(p , v ), i � 1􏼉 be the location of the i i °° centerline of the seedling row changed from [15–15 ] to extracted feature points and their descriptors. Based on the °° [−15 ]. When the centerline of the robot is parallel to the depth map of the current observation, the coordinates centerline of the seedling column, the angular deviation is corresponding to the feature points in the camera coordinate defined as positive when the weeding robot is turned system can be obtained. Let S � s ∈ R , i � 1 denote the set 􏼈 􏼉 counterclockwise, i.e., the centerline of the weeding robot of points corresponding to the features. (en, the topo- deviates to the left concerning the centerline of the seedling, logical positions corresponding to the current image are the angular deviation is defined as unfavorable when the predicted using the topological position regression model to weeding robot is turned clockwise, i.e., the centerline of the obtain multiple candidate topological positions. At the same weeding robot deviates to the right concerning the centerline time, the spatial coordinates in the world coordinate system of the seedling column. (e measured and calculated values are indicated for the features in F using the feature point of angular deviation were recorded every 5 as a set of data. regression model. Let M � 􏼈v � m + m , i � 1􏼉 denote the i 1 u (e experiment was repeated thrice for each group to im- set of prediction results for feature points. M contains only prove the reliability of the experimental data, as shown in the prediction results located at that topological node for Figure 4. (e mean error of the angular deviation was each topological location. Due to the existence of erroneous ° ° calculated to be 0.11 , and the standard deviation was 0.04 . predictions and the fact that each point has multiple pro- To verify the feasibility and reliability of the visual jections, the global positional estimation problem of the navigation and path tracking designed in this paper, the camera can be expressed. following scenes are set up: straight line, turning path, and obstacle occlusion. In the experiment, its forward speed is set t � arg min 􏽘(min ∈ |m − ts|). (7) 1 as V � 0.2 m/s. (e angular velocity w is in the 8 Journal of Robotics 0 2 4 6 8 1012141618202224 Number of experiments SVM-1 SVM-3 SVM-2 SVM-4 Figure 4: Experimental test comparison chart. [−1.0 rad/s, 1.0 rad/s] range, in each frame after processing, 900 the recognized center line is displayed with a straight blue line, and if a digital road sign is detected, the connection between the presighting point and the center of the digital road sign is displayed with a red straight line. (e initial position of the Tourtellot robot is slight to the right of the path in the straight section at startup, and the identification line recognition results under several moments at intervals 500 during operation. During the process of the robot, the angle of the centerline of the marking line obtained through image processing and recognition is plotted against the distance from the pre-sighting point to the centerline, as shown in Figure 5, where the dark blue curve is after the Kalman pre filtering. (e light blue curve is the angle θ without the Kalman filtering. It can be seen that θ the fluctuation is 0 5 10152025303540 relatively stable during the robot’s operation without sig- T (s) pre nificant abrupt changes. In contrast, the θ instability of the angular pre-angular unfiltered red curve in the figure is the distance deviation d distance pre-distance after Kalman filtering, while the purple curve is the unfil- pre tered distance deviation d . (e result of the distance Figure 5: Running process identification lines θ and d curves. deviation depends on the angle to a certain extent, so the pre fluctuation θ leads to a more significant distance deviation from the pre-sighting point to the marker line also varies industrial CCD camera, and the execution component Turtle more. Bot2, and enable the wireless network access function after (e lane lines were laid in the lab, and the corresponding checking and confirming the connection of each hardware of QR codes were set at each node location to place the mobile the lower computer, setting the network parameters, and robot at any initial node location. First, we start the mobile accessing the LAN established by the wireless server; the robot management system and wireless communication upper computer reads the map file and reads the infor- server, set the relevant network parameters, establish the mation to the vehicle controller according to the readings. wireless LAN server, and wait for the lower computer to (e upper computer reads the map file, reads the data from access. We start the embedded development board TX2, the the vehicle controller, initializes the coordinates of the node Time (ms) Running process Journal of Robotics 9 where the mobile robot is located according to the read -2.0 information, and displays its position in the map in real 60 -1.5 time; the management system will automatically number the tasks after receiving the task command, and then compare -1.0 the current mobile robot location information with the target location information to get the shortest path infor- -0.5 mation by path planning and transmit the path information 0.0 to the lower computer through wireless communication. (e path information will be transferred to the lower computer 0.5 through wireless communication. After receiving the path information, the lower computer will execute the task and 1.0 walk on the specified path through lane line tracking. (e 1.5 QR code recognition will be used for positioning correction and steering guidance, and the real-time data will be 0 2.0 uploaded to the upper computer [20]. To verify the accuracy 0 2 4 6 8 10 12 14 16 18 20 22 of the path tracking algorithm based on visual navigation, we T (s) randomly select a time point in the mobile robot operation Composite index test, extract the measured angle data of the gyroscope in the Growth rate actuator at this time point, and compare it with the data Figure 6: Deviation angle change. obtained from the path tracking algorithm at the corre- sponding time. (e deviation of the gyroscope angle data from the angle data obtained from image processing is calculated, and the relationship between the selected time the factors affecting the angle deviation measurement error nodes and the angle deviation is plotted as shown in Fig- are mainly: due to the influence of factors such as the in- ure 6. From the results, the deviation value between the path fluence of driving vision, the manually operated intelligent tracking algorithm and the actual angle is ±1.5 , indicating rice, and wheat harvester cannot always keep the straight line that this paper’s path tracking algorithm can achieve high where the left-hand divider is located parallel to the harvest accuracy. navigation line, i.e., the actual reference value floats around 0 ; the incomplete segmentation of harvested and unharvested areas in the image causes the detected navigation line and the 4.2. Results of Target Image Recognition and Localization straight line where the left-hand divider is located to addi- Analysis. (e validation criteria for the measurement ac- tional angle exists. curacy of visual navigation parameters were first established, (e training process of the detection algorithm includes followed by the analysis of the overall structure of the optical data set preparation, formal training, and network gener- navigation detection system in the field and the experimental alization error evaluation. (en, the deep learning detection locations and experimental methods. (en, experiments on algorithm is compared with the traditional SVM algorithm navigation line detection were conducted for the improved for classification experiments. (e results surface that al- RANSAC algorithm and the enhanced probabilistic Hough though the deep learning detection check-all rate is slightly transforms algorithm under different harvesting environ- lower than the SVM algorithm, its check-accuracy rate and ments. (e success rate of detection and image processing accuracy rate are higher than the SVM algorithm. And the operation speed were statistically analyzed. Finally, experi- detection rate of the deep learning detection algorithm has a ments are conducted to verify the success rate of navigation more significant advantage, which can meet the basic re- line detection for the image pyramid optical flow tracking quirements of real-time QR code detection for mobile ro- algorithm under different environments, and multiple sets of bots. Finally, the operation experiment of the whole system displacement deviation and angle deviation benchmark values is carried out to verify the feasibility. Moreover, comparing are set to verify the measurement accuracy of navigation the gyroscope angle data with the path tracking algorithm parameters according to the visual navigation parameter data can be obtained. Various experiments were designed measurement accuracy verification standard. (e measure- according to different given algorithm parameters to in- ment accuracy of angular deviation of navigation parameters fluence the environment model rasterization resolution of the image pyramid optical flow tracking algorithm is parameter on the confidence occupancy meter. (e test verified. (e experiments on the error measurement of the image size varies. (e algorithm automatically transforms angular variation of the navigation parameters of the image the original test map to 448 × 448 resolution size before pyramid optical flow tracking algorithm were performed in 6 input. (e output result of the positioning frame will also be groups in turn, with the base value of the angular deviation altered accordingly. (e partial detection results of the deep being 0 . As shown in Figure 7, the average value of the learning detection algorithm are shown in Figure 8. (e maximum error of the actual measurement value of the results show that the deep learning detection algorithm angular deviation of the harvester was 10.57 , the average performs well for QR codes with simple backgrounds, value of the mean error was 3.73 , and the average value of the complex backgrounds, vestiges, small deformations, partial standard deviation was 2.98 . In the actual detection process, occlusions, or multiple QR codes in a single image. Deviation angle 10 Journal of Robotics velocity 2.0 Aoccleration 1.5 1.0 Positon 0.5 0.0 24 6 8 10 12 14 16 18 20 T (s) Figure 7: Component trajectories of the rotational path, velocity, and acceleration. computer reads the map file, reads the information from the 19% 23% 35% 24% vehicle controller, initializes the coordinates of the node 21% 35% 19% 25% where the mobile robot is located according to the read 8 information, and displays its position in the map in real 30% 36% 9% 25% time. (e gyroscope is corrected once after the QR code, so it 7 24% 12% 30% 34% is assumed that the gyroscope data is the actual angle. (e 6 43% 9% 35% 13% deviation of the gyroscope angle data from the angle data 5 27% 23% 13% 37% obtained from image processing is calculated, and the re- 4 23% 12% 32% 32% lationship between the selected time nodes and the angle 3 26% 16% 21% 37% deviation is plotted. From the results, the variation of the 2 25% 42% 11% 23% path tracking algorithm from the actual angle is ±1.5 , which 1 14% 25% 27% 34% indicates that the path tracking algorithm in this paper can achieve high accuracy. 0 20 40 60 80 100 Testing time (s) 5.Conclusion Full masking Small deformation With the continuous development and progress of science Partially obscured defective and technology, the intelligent level requirements are Figure 8: Comparison results of testing indicators. gradually increasing, and the automatic control of mobile robots has become an important direction in the develop- ment of robot systems, whose visual navigation system is one Among the detection results, the deep learning detection of the hotspots of research today. In this paper, an in-depth algorithm can make accurate judgments on the images of the study is conducted on the problem of robot optical navi- experimental scenes of the mobile robot in this paper. (e gation path detection. For the problem that the RANSAC accuracy and rapidity of the integrated detection algorithm algorithm first establishes the linear mathematical model of can meet the basic requirements of real-time detection of QR the path in the navigation line detection and then performs codes during the execution of the mobile robot tasks in this the remaining corner point model verification, which leads paper’s experiments. Moreover, the corresponding QR codes to more iterations of the algorithm and more considerable are set at each node location, and the mobile robot is placed computation, model verification criteria are added to avoid at any initial node location. First, we start the mobile robot the problem of time-consuming and detection errors caused management system and wireless communication server, set by continuing iterative verification in the case of model errors. By limiting the range of edge point probability ex- the relevant network parameters and establish the wireless LAN server, and wait for the lower computer to access; then, traction and setting the success criterion of straight line we start the embedded development board TX2, the in- detection, the improved Hough transform algorithm can dustrial CCD camera, and the execution component Turtle effectively solve the problem of fast and accurate identifi- Bot2 and enable the wireless network access function after cation of navigation lines caused by the probability ex- each hardware connection of the lower computer is checked traction of edge points in the whole detection area; finally, and confirmed, set the network parameters and access the the image pyramid optical flow tracking algorithm is used to LAN established by the wireless server; the upper computer realize the tracking detection of robot visual navigation and reads the QR code and reads the QR code. (e upper tracking measurement of visual navigation parameters. (e Testing index Component trajectory Journal of Robotics 11 IEEE Transactions on Intelligent Transportation Systems, robot is experimented with setting straight, turning, and vol. 21, no. 8, pp. 3409–3422, 2019. obstacle occlusion scenes. From the trajectory graphs ob- [3] L. Qiu, C. Li, and H. Ren, “Real-time surgical instrument tained from the experimental simulation, the robot can tracking in robot-assisted surgery using multi-domain con- better fit the marking line operation during the operation. volutional neural network,” Healthcare Technology Letters, (e charts of parameters and d show that the bit-posture vol. 6, no. 6, pp. 159–164, 2019. relationship of the robot body relative to the marking line is [4] A. Devo, G. Mezzetti, G. Costante, M. L. Fravolini, and closed during the process, which also illustrates the effec- P. Valigi, “Towards generalization in target-driven visual tiveness and accuracy of target image recognition and navigation by using deep reinforcement learning,” IEEE localization. Transactions on Robotics, vol. 36, no. 5, pp. 1546–1561, 2020. Based on the research work carried out in this paper, we [5] Y. Xiong, Y. Ge, L. Grimstad, and P. J. From, “An autonomous briefly analyze the potential research points with further strawberry-harvesting robot: design, development, integra- depth in this study, taking into account the current trends in tion, and field evaluation,” Journal of Field Robotics, vol. 37, computer vision and robotics. no. 2, pp. 202–224, 2020. (e introduction of deep neural network descriptors as [6] P. Neubert, S. Schubert, and P. Protzel, “A neurologically the front-end data matching for VSLAM to achieve visual inspired sequence processing model for mobile robot place data matching with illumination and viewpoint invariance is recognition,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3200–3207, 2019. a current research hotspot and trend in VSLAM-related [7] W. J. Heerink, S. J. S. Ruiter, J. P. Pennings et al., “Robotic work. A considerable amount of work has been carried out versus freehand needle positioning in CT-guided ablation of in this area, including the extraction of intermediate de- liver tumors: a randomized controlled trial,” Radiology, scriptors using existing network models and the design of a vol. 290, no. 3, pp. 826–832, 2019. network structure of visual feature descriptors specifically [8] S. G. Mathisen, F. S. Leira, H. H. Helgesen, K. Gryte, and for VSLAM. However, the gap between the current research T. A. Johansen, “Autonomous ballistic airdrop of objects from in this area and the application of VSLAM lies in the visual a small fixed-wing unmanned aerial vehicle,” Autonomous feature point extraction and the real-time nature of the Robots, vol. 44, no. 5, pp. 859–875, 2020. generation. In addition, deep neural network descriptors [9] K. M. Abughalieh, B. H. Sababha, and N. A. Rawashdeh, “A tend to have higher dimensionality and take longer to video-based object detection and tracking system for weight perform feature point matching and distance calculation. sensitive UAVs,” Multimedia Tools and Applications, vol. 78, (erefore, it is not easy to guarantee the online performance no. 7, pp. 9149–9167, 2019. of deep vision descriptors for system applications with high [10] K. Lee, J. Gibson, and E. A. (eodorou, “Aggressive per- real-time requirements such as VSLAM. (erefore, there is ception-aware navigation using deep optical flow dynamics work to be done on a downscale and speed up the depth and PixelMPC,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1207–1214, 2020. vision descriptors to meet the demand of VSLAM real-time [11] C. H. G. Li and Y. M. Chang, “Automated visual positioning applications. and precision placement of a workpiece using deep learning,” 1e International Journal of Advanced Manufacturing Tech- Data Availability nology, vol. 104, no. 9, pp. 4527–4538, 2019. [12] D. Fielding and M. Oki, “Technologies for targeting the pe- (e data used to support the findings of this study are ripheral pulmonary nodule including robotics,” Respirology, available from the corresponding author upon request. vol. 25, no. 9, pp. 914–923, 2020. [13] P. M. Kumar, U. Gandhi, R. Varatharajan, G. Manogaran, Conflicts of Interest R. Jidhesh, and T. Vadivel, “Intelligent face recognition and navigation system using neural learning for smart security in (e author declares no conflicts of interest. internet of things,” Cluster Computing, vol. 22, no. 4, pp. 7733–7744, 2019. Acknowledgments [14] V. Vasilopoulos, G. Pavlakos, S. L. Bowman et al., “Reactive semantic planning in unexplored semantic environments (is work was supported by the 2021 school level scientific using deep perceptual feedback,” IEEE Robotics and Auto- research and innovation team project, Innovation and En- mation Letters, vol. 5, no. 3, pp. 4455–4462, 2020. trepreneurship Education Scientific Research and Innova- [15] V. Vasilopoulos, G. Pavlakos, S. L. Bowman et al., “Reactive tion Team, (No. HNACKT-2021-01) and 2021 Research semantic planning in unexplored semantic environments using deep perceptual feedback,” IEEE Robotics and Auto- Projects of Educational Science (No. HNACJY-2021-15). mation Letters, vol. 5, no. 3, pp. 4455–4462, 2020. [16] J. W. Martin, B. Scaglioni, J. C. Norton et al., “Enabling the References future of colonoscopy with intelligent and autonomous magnetic manipulation,” Nature machine intelligence, vol. 2, [1] C. Sampedro, A. Rodriguez-Ramos, H. Bavle, A. Carrio, no. 10, pp. 595–606, 2020. P. de la Puente, and P. Campoy, “A fully-autonomous aerial [17] M. Ma, H. Li, X. Gao et al., “Target orientation detection based robot for search and rescue applications in indoor environ- on a neural network with a bionic bee-like compound eye,” ments using learning-based techniques,” Journal of Intelligent and Robotic Systems, vol. 95, no. 2, pp. 601–627, 2019. Optics Express, vol. 28, no. 8, pp. 10794–10805, 2020. [18] J. Yang, C. Wang, B. Jiang, H. Song, and Q. Meng, “Visual [2] S. Wang, F. Jiang, B. Zhang, R. Ma, and Q. Hao, “Develop- ment of UAV-based target tracking and recognition systems,” perception enabled industry intelligence: state of the art, 12 Journal of Robotics challenges and prospects,” IEEE Transactions on Industrial Informatics, vol. 17, no. 3, pp. 2204–2219, 2020. [19] W.-H. Su, “Advanced machine learning in point spectros- copy, RGB- and hyperspectral-imaging for automatic dis- criminations of crops and weeds: a review,” Smart Cities, vol. 3, no. 3, pp. 767–792, 2020. [20] A. A. Zhilenkov, S. G. Chernyi, S. S. Sokolov, and A. P. Nyrkov, “Intelligent autonomous navigation system for UAV in randomly changing environmental conditions,” Journal of Intelligent and Fuzzy Systems, vol. 38, no. 5, pp. 6619–6625, 2020.

Journal

Journal of RoboticsHindawi Publishing Corporation

Published: Mar 24, 2022

References