Journal of Robotics, Volume 2017, Article ID 3148202, 10 pages
https://doi.org/10.1155/2017/3148202

Research Article

A Crowd Avoidance Method Using Circular Avoidance Path for Robust Person Following

Kohei Morishita,1 Yutaka Hiroi,1 and Akinori Ito2

1 Osaka Institute of Technology, 5-16-1 Omiya, Asahi-ku, Osaka 535-8585, Japan
2 Tohoku University, 6-6-5 Aramaki Aza Aoba, Aoba-ku, Sendai 980-8579, Japan

Correspondence should be addressed to Akinori Ito.

Received 4 November 2016; Accepted 26 January 2017; Published 19 February 2017

Academic Editor: Tao Liu

Copyright © 2017 Kohei Morishita et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A life-support service robot must avoid both static and dynamic obstacles to work in a real environment. Here, a static obstacle means an obstacle that does not move, and a dynamic obstacle is one that moves.
Assuming the robot is following a target person, we discuss how the robot can avoid a crowd through which the target person passes and still arrive at the target position. The purpose of this paper is to propose a crowd avoidance method that enables a robot to avoid both static and dynamic obstacles. The method uses the surface points of the obstacles to form an avoidance region, and the robot moves along the edge of that region. We conducted experiments assuming various situations, such as the robot being blocked, a wide gap existing in the crowd, or a person in the crowd yielding so that the robot could pass through. The experimental results confirmed that the robot could avoid the crowd even when the obstacles were aligned in an "inverted wedge" shape.

1. Introduction

Many research works have been conducted on developing life-support service robots [1]. The robots we focus on here include autonomous mobile robots that perform tasks such as bringing a beverage to the user on request [2, 3]. A life-support service robot differs from industrial robots because it must work in an environment where humans and robots coexist. For this purpose, a life-support service robot must have several abilities such as manipulation [4, 5], vision [6, 7], and mobility [8, 9]. In this paper, we focus on mobility.

Two technologies, path planning and path following, are important for realizing autonomous navigation, and many research works have addressed these issues [10–17]. Path planning and path following are based either on a prepared map of the environment or on on-the-fly measurement of the environment. We focus on the latter case so that the robot can move around a new and dynamic environment in which the obstacles (mainly humans) move. Our interest in this paper is robot navigation based on human-following behavior, where the robot moves through an environment by following a specific person. Since the robot's final goal is to follow the preceding person without colliding with any obstacles, the robot does not need a static map of the environment. Rather, the robot needs to measure the person to follow and the obstacles (walls, objects, other persons, etc.) continuously and determine its trajectory quickly and robustly. The person-following behavior of mobile robots has been actively researched [18].

We have developed the life-support service robot ASAHI [19], which helps carry baggage in places such as shopping malls or hospitals. Person-following behavior is essential for this task. We developed a method to track a person using a single laser range finder (LRF) [20], which enabled robust person following in an environment where other persons exist. We also developed an obstacle avoidance method that works while following a person using an LRF. This method can track the target person even when the target is temporarily lost. However, person tracking fails when the target person is completely lost in a crowd. Therefore, we aim to develop a novel method that can continue tracking the target person even when he/she is lost in a crowd. As part of this development, this paper describes the crowd avoidance method.

2. Related Works

A number of methods for obstacle avoidance have been proposed so far [10, 12, 13, 21–24].
Usually, the purpose of path planning and dynamic obstacle avoidance is to navigate a robot between a start point and a goal point, both known beforehand on an internal map, while avoiding the obstacles that are not on the map [11]. In contrast, the purpose of our robot is not to navigate to a fixed goal but to follow a person, which can be considered a "moving goal."

The potential-based path planning method [23] is a widely used technique for path planning in an environment with obstacles. However, it has several problems. First, most of the techniques are suitable for a static environment, where the obstacles do not move [12]. Cao et al. proposed an obstacle avoidance method for dynamic environments based on the potential method [13]. Their method assumed that the position and velocity of each individual obstacle can be obtained, but this is not always true in a crowded environment, where it is sometimes difficult to identify individual obstacles because two or more persons move or stay in clusters. Another problem is that the robot sometimes fails to reach the target point when the attractive and repulsive forces balance [24]. A further reason why the potential-based method is difficult to apply is that the positions of the obstacles are not known to the robot before measuring them. In this paper, the robot is assumed to measure the environment, including the target person and the obstacles, using the LRF and to behave only on the basis of those measurements. To calculate the potential accurately, we would need to consider the effect of all obstacles, including those that cannot be measured directly by the LRF. Therefore, it is impossible to use an accurate potential under the current assumption.

3. The Crowd Avoidance Algorithm

3.1. Overview

The potential method is a general framework in which any path can be considered given an appropriate potential field. However, as stated in the previous section, it is not always easy to calculate an appropriate field under the situation assumed in this paper, where the target person is completely lost, there are clusters of obstacles around the robot, and the obstacles are moving. Thus we made a simple assumption about the situation and solved the problem with a simpler method rather than calculating the potential field around the robot.

Let us consider a robot following the target person in an environment with many other persons, such as a shopping mall, as shown in Figure 1. The target person then passes through the blocking persons as in Figure 1(b). If there is a gap in the crowd, the robot can go through the gap; if there is no gap to pass through, the robot should avoid the crowd.

Figure 1: Supposed environment.

Suppose the target person walks through the crowd and the robot is blocked by the crowd. Figure 2 shows an overview of the assumed situation and the proposed algorithm. We set an "avoidance region" around the obstacles, such that the robot is assured not to collide with an obstacle if the robot's center is on or outside the edge of the region. When the robot judges that it has completely lost the target person and that its path is blocked by the crowd, it generates a hypothetical target line along the robot's current direction (which faces the position where the target person was last measured) and sets a tentative target on that line beyond the crowd. The distance between the robot's current position and the tentative target is determined based on the average walking speed of the target person; in the later experiments, this distance is fixed to 4000 mm. A sketch of this step is given below.
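As a concrete illustration of this step, the short sketch below extends the line from the robot through the last measured target position and places the tentative target a fixed distance ahead. This is only a minimal sketch under our own naming (Point, tentative_target, lookahead_mm); it is not the authors' implementation, and the 4000 mm default simply mirrors the value used in the experiments.

```python
import math
from typing import Tuple

Point = Tuple[float, float]  # (x, y) in millimeters

def tentative_target(robot_pos: Point,
                     last_target_pos: Point,
                     lookahead_mm: float = 4000.0) -> Tuple[Point, Point]:
    """Return (unit direction of the hypothetical target line, tentative target point).

    The hypothetical target line starts at the robot's current position and points
    toward the position where the target person was last measured. The tentative
    target is placed on that line, lookahead_mm ahead of the robot.
    """
    dx = last_target_pos[0] - robot_pos[0]
    dy = last_target_pos[1] - robot_pos[1]
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        raise ValueError("robot and last target positions coincide")
    u = (dx / norm, dy / norm)                      # unit direction of the line
    target = (robot_pos[0] + lookahead_mm * u[0],
              robot_pos[1] + lookahead_mm * u[1])   # point beyond the crowd
    return u, target
```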
Next, the robot moves along the hypothetical target line toward the crowd, measuring the distance between itself and the nearest surface point of the obstacles. If this distance becomes shorter than a predefined threshold, the robot moves from the hypothetical target line to the outer edge of the avoidance region. Since the edge of the avoidance region is assured to be far enough from the obstacles, the robot can avoid them by moving along the edge.

Figure 2: Overview of the algorithm.

The crowd avoidance algorithm proceeds as follows.
(1) Follow the target person.
(2) Recognize that the robot has lost the target person.
(3) Generate the hypothetical target line and the tentative target point.
(4) Generate the avoidance region and move along the hypothetical target line.
(5) Measure the surface points of the obstacle and determine the nearest point.
(6) If the distance to the nearest point is longer than the threshold, go to (4).
(7) Measure the nearest surface point of the obstacle.
(8) Follow the circle centered at the nearest point.
(9) If the distance to the hypothetical target line is longer than the threshold, go to (7).
(10) Follow the hypothetical target line until arriving at the tentative target point.
(11) If the distance between the robot and the tentative target is shorter than the threshold, stop the robot.
(12) Go to (10).

Because this method uses the avoidance region and the robot moves along its edge, the path always stays at almost the same distance from the obstacles, even when the robot could keep a larger distance from them. However, the proposed algorithm assures that the robot avoids the obstacles while keeping a sufficient distance. The algorithm is also very easy to implement because, in contrast to the potential-based method, it uses only local information (i.e., information about the nearest obstacle).

3.2. The Avoidance Region

First, we describe the avoidance region. The avoidance region is the union of circular areas, each centered at a surface point of an obstacle measured by the LRF. The radius of each circle is 500 mm, which was determined considering the size of the robot. Figure 3(a) shows an example of such a circular area. Since the LRF measures all the visible surface points of the obstacle, there are many circular areas, one per surface point, and their union is the avoidance region. The outer edge of the avoidance region is therefore the envelope of the circular areas. Figure 3(b) shows an example of the avoidance region.

Figure 3: Generation of the avoidance region.

3.3. Crowd Avoidance Using the Avoidance Region

This section describes the crowd avoidance method in detail. First, assume the situation where the target person has passed through the crowd, as shown in Figure 4(a). At this point, the robot loses the target person because the measurement of the target person is blocked by the crowd. The robot then assumes that the target exists ahead on the hypothetical target line, which is generated by extending the line segment between the center of the robot and the last position where the target person was measured. The robot follows this line (Figure 4(b)) and moves toward the crowd; a sketch of the nearest-point test that triggers the switch to circle following is given below.
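The sketch below illustrates how the switching decision can be made from a single LRF scan: each measured surface point is treated as the center of a 500 mm avoidance circle, and only the nearest point is needed to decide when to leave the target line. The function names (nearest_obstacle_point, should_start_avoidance) and the scan representation are our own assumptions; the 500 mm radius and the 700 mm switching threshold are the values reported in the paper.

```python
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]  # (x, y) in millimeters, in a common planar frame

AVOIDANCE_RADIUS_MM = 500.0   # radius of each avoidance circle (Section 3.2)
SWITCH_THRESHOLD_MM = 700.0   # distance at which circle following starts (Section 3.3)

def nearest_obstacle_point(scan_points: List[Point],
                           robot_pos: Point) -> Optional[Tuple[Point, float]]:
    """Return the nearest measured surface point and its distance to the robot.

    Each scan point is the center of one avoidance circle; the union of these
    circles is the avoidance region. Only the nearest point is needed for the
    switching decision.
    """
    if not scan_points:
        return None
    best = min(scan_points,
               key=lambda p: math.hypot(p[0] - robot_pos[0], p[1] - robot_pos[1]))
    dist = math.hypot(best[0] - robot_pos[0], best[1] - robot_pos[1])
    return best, dist

def should_start_avoidance(scan_points: List[Point], robot_pos: Point) -> bool:
    """True if the robot should leave the target line and follow the avoidance edge."""
    found = nearest_obstacle_point(scan_points, robot_pos)
    return found is not None and found[1] < SWITCH_THRESHOLD_MM
```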
When the distance between the robot and the nearest point of the crowd becomes less than 700 mm, the robot starts to follow a piece of the circle (Figure 4(c)); the threshold value of 700 mm was determined empirically. The robot then moves along the edge of the avoidance region and goes behind the crowd. If the robot enters the region near the line (the green region in Figure 4(c)), it starts to follow the line again (Figure 4(d)). The robot can avoid the crowd even when the crowd is moving, because the center point of the circle to follow (i.e., the nearest surface point of the obstacle) is continuously updated while following the circle. The details of the methods for following a line and a circle are described in the following sections.

Figure 4: The crowd avoidance using the avoidance region.

3.4. Following a Line

Next, the method for following a line is described [20]. Figure 5 shows the line-following method.

Figure 5: The line-following method.

Let $\theta$ be the angular difference between the robot's moving direction and the line to follow, as shown in Figure 5, and let $d$ be the distance between the center position of the robot and the line to follow. The velocities of the left and right wheels, $v_L$ and $v_R$, are controlled so that $\theta$ and $d$ become zero. Given two points $(x_1, y_1)$ and $(x_2, y_2)$ on the line to follow, the angle of the line is
$$\phi = \arctan\frac{y_2 - y_1}{x_2 - x_1},$$
and the angular difference is obtained from the robot's heading $\theta_r$ as $\theta = \phi - \theta_r$; $d$ is the perpendicular distance from the robot's center to the line. To drive $\theta$ and $d$ to zero, we control $\delta$, defined as half of the velocity difference between the left and right wheels, using four parameters: two of them control the constraint that decreases $\theta$ and $d$, and the other two control the convergence of $\theta$ and $d$. These parameters were determined empirically; the values used in the later "line following" experiments are 0.26 s⁻¹, 0.26, 0.3 mm/s·deg, and 0.3 mm/deg.

3.5. Following a Circle

Next, the method for following a circle is described. Figure 6 shows the circle-following method.

Figure 6: The circle-following method.

The circle-following method is basically the same as the line-following method, with the tangent lines of the circle used as the lines to follow. Let $P_r$ be the center of the robot and $P_c$ the center of the circle (the nearest surface point of the obstacle), and let $Q$ be the intersection of the circle with the line segment between $P_r$ and $P_c$; the line to follow is the tangent to the circle at $Q$. The distance between the robot and the center of the circle is
$$L = \lVert P_r - P_c \rVert \ \text{[mm]},$$
and the unit vector directed from the circle toward the robot along this segment is
$$\mathbf{u} = \frac{P_r - P_c}{L},$$
so that the intersection point is
$$Q = P_c + r\,\mathbf{u},$$
where $r$ is the radius of the circle. The tangent line at $Q$ is the line through $Q$ perpendicular to $\mathbf{u}$, and the robot follows it using the algorithm explained in the previous section. The center point of the circle is updated at every LRF measurement (as the coordinates of the nearest surface point of the obstacle), so the tangent also changes according to the positions of the crowd and the robot; a sketch combining this tangent construction with the line-following errors is given below.
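To make the geometry of Sections 3.4 and 3.5 concrete, the sketch below computes the tangent line at the intersection point Q and the two error quantities (the angular difference and the distance to the line) that the line-following controller drives to zero. It is an illustrative reconstruction under assumed names (tangent_line_at_circle, line_errors); the gain-based wheel-velocity update itself is not reproduced here.

```python
import math
from typing import Tuple

Point = Tuple[float, float]  # (x, y) in millimeters

def tangent_line_at_circle(robot_pos: Point, circle_center: Point,
                           radius_mm: float = 500.0) -> Tuple[Point, Point]:
    """Return two points defining the tangent line to follow.

    The circle is centered at the nearest obstacle surface point; the tangent
    is taken at the intersection Q of the circle with the segment from the
    circle center to the robot, and is perpendicular to that segment.
    """
    dx = robot_pos[0] - circle_center[0]
    dy = robot_pos[1] - circle_center[1]
    dist = math.hypot(dx, dy)                  # L: distance robot -> circle center
    ux, uy = dx / dist, dy / dist              # unit vector u, circle -> robot
    qx = circle_center[0] + radius_mm * ux     # intersection point Q = Pc + r*u
    qy = circle_center[1] + radius_mm * uy
    # Second point along the tangent direction (perpendicular to u).
    return (qx, qy), (qx - uy, qy + ux)

def line_errors(robot_pos: Point, robot_heading_rad: float,
                p1: Point, p2: Point) -> Tuple[float, float]:
    """Angular difference theta and signed distance d between the robot and line p1-p2."""
    phi = math.atan2(p2[1] - p1[1], p2[0] - p1[0])          # angle of the line
    theta = math.atan2(math.sin(phi - robot_heading_rad),   # wrapped to (-pi, pi]
                       math.cos(phi - robot_heading_rad))
    length = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    cross = ((p2[0] - p1[0]) * (robot_pos[1] - p1[1])
             - (p2[1] - p1[1]) * (robot_pos[0] - p1[0]))
    d = cross / length                                       # signed perpendicular distance
    return theta, d
```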
4. Experiments of the Crowd Avoidance

4.1. Overview

We conducted experiments to confirm whether the robot could actually avoid a crowd. In the experiments, three or four persons blocked the robot, and the robot moved to avoid them. We first conducted an experiment in which the crowd did not move (the static case) and then another in which one person in the crowd moved (the dynamic case).

4.2. Hardware

We used ASAHI2015, a modified version of ASAHI [19], as the robot. The robot is equipped with three LRFs at its middle and bottom; in the experiments, only the middle LRF (Hokuyo UTM-30LX) was used. A Pioneer 3-DX was used as the mobile base. Figure 7 shows the robot.

Figure 7: The mobile robot ASAHI2015.

4.3. Experimental Conditions

When the robot moved toward a person in the crowd, we assumed that the person either stayed still or moved to avoid the robot; that is, the person did not actively block the robot. Figure 8 shows the experimental environment. The positions where persons stood were marked with curing tape. First, the participants stood in front of the robot in the seven patterns shown in Figure 9. Second, the participants stood in the two patterns shown in Figure 10, and one person in the crowd moved during the experiment.

Figure 8: Experimental environment.
Figure 9: Patterns of the crowd (static case).
Figure 10: Patterns of the crowd (dynamic case).

In Figure 9, patterns 1 to 4 were used to check whether the proposed method worked for different arrangements of the crowd. Patterns 5 and 7 were used to check whether the robot could pass through the crowd when there was a gap wide enough for the robot to go through. Pattern 6 was used to check whether the robot could avoid the crowd when that wide gap was blocked. Figure 10 shows the dynamic case, where a person in the crowd moved to avoid the robot: in pattern 8, four persons first stood in a line and one person stepped aside to avoid the robot; in pattern 9, a person stepped behind another person to create a gap. In these experiments, the point of losing the target person was assumed to be known and was set 700 mm in front of the crowd. We also assumed that the robot moves counterclockwise to avoid the crowd.

4.4. Results

The robot avoided the crowd in all the static and dynamic patterns. Figure 11 shows the trajectories of the robot for the static patterns shown in Figure 9; the blue points are the points measured by the LRF, and the red triangles show the trajectories of the robot. Figures 11(a)–11(g) confirm that the robot avoided the crowd properly. In particular, in patterns 5 and 7, the robot passed through the gap, because the 1500 mm gap between persons was wide enough for the robot. Pattern 4 (Figure 11(d)) was difficult to avoid because two persons formed a gap at the center and the gap was blocked by another person. As shown in Figure 11(d), the robot avoided the crowd without collision even in this "inverted wedge" formation. In this case, the robot started avoiding the crowd before going between the two persons standing at the front. This was possible because the depth of the center person behind the two front persons was 500 mm, comparable to the radius of the avoidance circle; if that distance were larger, the robot would enter between the two persons and then back out to avoid the crowd.

Figure 11: Experimental results.

As for the dynamic patterns shown in Figure 10 (patterns 8 and 9), the robot avoided the moving crowd properly by passing through the gap created by the person's movement.
Here, we describe the motion of the robot in pattern 9 in detail. Figure 12 shows the robot's behavior in pattern 9. First, as shown in Figures 12(a) and 12(b), the robot moved toward the crowd along the hypothetical target line. Figure 13(a) shows the sensor measurements at this stage, where the red circles show the circular areas used for avoidance. The robot focused on the blue circle in front of it, whose center was the nearest point from the robot. The orange point is the center of the robot, and the green line is the line segment between the robot and the nearest point of the obstacles. Next, as shown in Figure 12(c), the robot started to avoid the nearest person. At this point, the second person from the right yielded to the robot. The robot then recognized the gap, as shown in Figure 12(d), and passed through it, as shown in Figure 12(e). The sensor signal is shown in Figure 13(b), where the blue circle is the circle along which the robot was moving. Finally, the robot moved toward the tentative target point, as shown in Figure 12(f). Figure 13(c) shows the sensor signal at this phase.

Figure 12: The avoidance behavior of the robot under pattern 9.
Figure 13: Measurement examples of sensor signals obtained during the robot's avoidance behavior.

4.5. Discussion

There are several limitations to the proposed method. First, the method assumes that the tentative target point is known; if another crowd were at that point, the robot could not arrive there unless that crowd yielded. Another limitation is that the method does not work when the persons move faster than the scanning speed of the LRF or the moving speed of the robot. One advantage of the method is that it is based on measurements obtained successively, without using a fixed map; because of this, the robot could pass through the crowd in dynamic environments such as patterns 8 and 9. Another advantage is that the proposed method is simple to implement because it is based on following simple figures such as a straight line and a circle.

5. Conclusion

In this paper, we proposed a crowd avoidance method that uses an avoidance region. Using this method, the robot could avoid the crowd even when the crowd formed an "inverted wedge" and the persons in the crowd were moving. In addition, the robot could pass through the crowd when there was a sufficiently wide gap. In the experiments, we did not use the person-following method before the avoidance behavior because we wanted to check whether the proposed algorithm worked correctly. As future work, we will implement the person-following method on the robot and conduct experiments that combine person following and crowd avoidance in static and dynamic environments. Another issue is that we need a method to rediscover the lost target person after avoiding the crowd.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

Part of this work was supported by JSPS KAKENHI JP16K00363.

References

[1] K. Doelling, J. Shin, and D. O. Popa, "Service robotics for the home: a state of the art review," in Proceedings of the 7th International Conference on Pervasive Technologies Related to Assistive Environments (PETRA '14), pp. 1–8, Rhodes, Greece, May 2014.
[2] D. Holz, L. Iocchi, and T. van der Zant, "Benchmarking intelligent service robots through scientific competitions: The RoboCup@Home approach," in Proceedings of the AAAI Spring Symposium: Designing Intelligent Robots 2013, pp. 27–32, Stanford, Calif, USA, March 2013.
[3] K. Hashimoto, F. Saito, T. Yamamoto, and K. Ikeda, "A field study of the human support robot in the home environment," in Proceedings of the IEEE Workshop on Advanced Robotics and Its Social Impacts (ARSO '13), pp. 143–150, Tokyo, Japan, November 2013.
[4] J. Stückler, D. Droeschel, K. Gräve et al., "Increasing flexibility of mobile manipulation and intuitive human-robot interaction in RoboCup@Home," in RoboCup 2013: Robot World Cup XVII, S. Behnke, M. Veloso, A. Visser, and R. Xiong, Eds., Springer, 2014.
[5] J. Stückler, D. Holz, and S. Behnke, "RoboCup@Home: demonstrating everyday manipulation skills in RoboCup@Home," IEEE Robotics & Automation Magazine, vol. 19, no. 2, pp. 34–42, 2012.
[6] G. Medioni, A. R. J. François, M. Siddiqui, K. Kim, and H. Yoon, "Robust real-time vision for a personal service robot," Computer Vision and Image Understanding, vol. 108, no. 1-2, pp. 196–203, 2007.
[7] B. Graf, U. Reiser, M. Hägele, K. Mauz, and P. Klein, "Robotic home assistant Care-O-bot® 3—product vision and innovation platform," in Proceedings of the IEEE Workshop on Advanced Robotics and Its Social Impacts (ARSO '09), pp. 139–144, Tokyo, Japan, November 2009.
[8] H.-J. Kwak and G.-T. Park, "Study on the mobility of service robots," International Journal of Engineering and Technology Innovation, vol. 2, no. 2, pp. 97–112, 2012.
[9] S. Yoon, K. S. Roh, and Y. Shim, "Vision-based obstacle detection and avoidance: application to robust indoor navigation of mobile robots," Advanced Robotics, vol. 22, no. 4, pp. 477–492, 2008.
[10] N. Sariff and N. Buniyamin, "An overview of autonomous mobile robot path planning algorithms," in Proceedings of the 4th Student Conference on Research and Development (SCOReD '06), pp. 183–188, Selangor, Malaysia, June 2006.
[11] V. Kunchev, L. Jain, V. Ivancevic, and A. Finn, "Path planning and obstacle avoidance for autonomous mobile robots: a review," in KES 2006, Part II, B. Gabrys, R. J. Howlett, and L. C. Jain, Eds., vol. 4252 of Lecture Notes in Computer Science, pp. 537–544, Springer, 2006.
[12] I. Al-Taharwa, A. Sheta, and M. Al-Weshah, "A mobile robot path planning using genetic algorithm in static environment," Journal of Computer Science, vol. 4, no. 4, pp. 341–344, 2008.
[13] Q. Cao, Y. Huang, and J. Zhou, "An evolutionary artificial potential field algorithm for dynamic path planning of mobile robot," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3331–3336, Beijing, China, October 2006.
[14] L. Lapierre, R. Zapata, and P. Lepinay, "Combined path-following and obstacle avoidance control of a wheeled robot," International Journal of Robotics Research, vol. 26, no. 4, pp. 361–375, 2007.
[15] D. Soetanto, L. Lapierre, and A. Pascoal, "Adaptive, non-singular path-following control of dynamic wheeled robots," in Proceedings of the 42nd IEEE International Conference on Decision and Control, vol. 2, pp. 1765–1770, Maui, Hawaii, USA, December 2003.
[16] G. Antonelli, S. Chiaverini, and G. Fusco, "A fuzzy-logic-based approach for mobile robot path tracking," IEEE Transactions on Fuzzy Systems, vol. 15, no. 2, pp. 211–221, 2007.
[17] P. Coelho and U. Nunes, "Path-following control of mobile robots in presence of uncertainties," IEEE Transactions on Robotics, vol. 21, no. 2, pp. 252–261, 2005.
[18] T. Yoshimi, M. Nishiyama, T. Sonoura et al., "Development of a person following robot with vision based target detection," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 5286–5291, Beijing, China, October 2006.
[19] Y. Hiroi and A. Ito, "ASAHI: OK for failure: a robot for supporting daily life, equipped with a robot avatar," in Proceedings of the International Conference on Human-Robot Interaction, pp. 141–142, Tokyo, Japan, March 2013.
[20] Y. Hiroi, S. Matsunaka, and A. Ito, "Mobile robot system with semi-autonomous navigation using simple and robust person following behavior," Journal of Man, Machine and Technology, vol. 1, no. 1, pp. 44–62, 2012.
[21] L. Huang, "Velocity planning for a mobile robot to track a moving target—a potential field approach," Robotics and Autonomous Systems, vol. 57, no. 1, pp. 55–63, 2009.
[22] M. Deng, A. Inoue, K. Sekiguchi, and L. Jiang, "Two-wheeled mobile robot motion control in dynamic environments," Robotics and Computer-Integrated Manufacturing, vol. 26, no. 3, pp. 268–272, 2010.
[23] J. Borenstein and Y. Koren, "Real-time obstacle avoidance for fast mobile robots," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 5, pp. 1179–1187, 1989.
[24] S. S. Ge and Y. J. Cui, "New potential functions for mobile robot path planning," IEEE Transactions on Robotics and Automation, vol. 16, no. 5, pp. 615–620, 2000.