Hindawi Publishing Corporation
Journal of Robotics, Volume 2009, Article ID 986207, 13 pages
doi:10.1155/2009/986207

Research Article

Collaborator: A Nonholonomic Multiagent Team for Tasks in a Dynamic Environment

Jing Ren and Mark Green
Faculty of Engineering and Applied Science, University of Ontario Institute of Technology, Oshawa, ON, Canada L1G 7K4
Correspondence should be addressed to Jing Ren, jing.ren@uoit.ca

Received 19 January 2009; Revised 1 September 2009; Accepted 16 October 2009
Recommended by Jorge Manuel Dias

In our previous work, we proposed a potential field-based hybrid path planning scheme for robot navigation that achieves complete coverage in various tasks. This paper extends that work into a multiagent framework, Collaborator, which integrates a high-level negotiation-based task allocation protocol with a low-level path planning method that takes into consideration several real-world robot limitations, such as nonholonomic constraints. Specifically, the proposed framework focuses on a class of complex motion planning problems in which robots need to cover the whole workspace, coordinate the accomplishment of a task, and dynamically change their roles to best fit the task. Applications in this class of problems include bomb detection and removal as well as rescuing survivors from accidents or disasters. We have tested the framework in simulations of several tasks and have shown that Collaborator can satisfy nonholonomic constraints and cooperatively accomplish given tasks in an initially unknown dynamic environment while avoiding collision with other team members. Finally, we prove that the proposed control laws are stable using Lyapunov stability theory.

Copyright © 2009 J. Ren and M. Green. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction
Motion planning is a fundamental and important issue in robotics. A special type of path planning includes tasks such as demining [1] and searching for victims after a tragedy, which are extremely crucial jobs and require complete coverage in the process of exploration. If time allows, robot teams should perform a thorough search and cover every square foot of the workspace. While covering too much area is not a concern, insufficient coverage could result in disastrous consequences. Similar criteria apply to many other robotic applications, such as vacuum robots and land mine detection. In some more complex scenarios, robots are often interrupted by other tasks. For example, in jobs such as bomb detection and removal [2], or rescuing survivors from accidents or disasters [3], search is the typical mode. However, when a robot discovers an object, such as a survivor, its search phase is paused while it attends to its rescue task. It can only resume searching after the rescue task is finished. These tasks require some sort of role change, such as the transformation from rescue to search, to successfully complete their mission. Moreover, when the task cannot be accomplished by a single robot, coordination or task allocation issues arise. In this instance, robots may negotiate to resolve the issues and distribute the tasks among themselves.

The robotics literature contains a number of motion planning strategies for this class of tasks, using neural networks, fuzzy logic, approximate cellular decomposition, exact cellular decomposition, and artificial potential fields. Most prior work focuses on either high-level task allocation or low-level task execution; few works combine task allocation with potential field-based motion planning to provide a holistic solution. Parker [2] provides a behavior-based architecture that facilitates fault tolerance; its effectiveness is demonstrated through a team of mobile robots performing a laboratory version of hazardous waste cleanup. Gerkey and Mataric [4] present an auction-based task allocation scheme, which can achieve a distributed approximation to the global optimum for resource usage; the primary contribution of that work is to empirically demonstrate that the distributed negotiation mechanism is viable and effective for coordinating a physical multirobot team. [5] showed that the coordination of reactive robots can be obtained through the exploitation of local interactions; however, it basically remains at the level of task allocation and does not propagate to the level of task execution.

In our previous work, a novel hybrid navigation scheme was proposed for a multiagent team, aiming to bridge the gap between high-level task coordination and low-level path planning [6]. That work did not focus on low-level path planning, assumed ideal working conditions, and did not take into account moving obstacles or nonholonomic constraints. Those aspects of low-level path planning, however, are critical to a physical implementation. In a real-life work environment, moving obstacles are often inevitable; for instance, cleaning robots often find themselves in situations where they need to avoid collision with people going about their own business. Moreover, unlike simulated robots, many physical robots can only move forward and backward, subject to nonholonomic constraints similar to those of vehicles. In this paper, we extend our previous results to a more realistic environment, considering each single constraint and both constraints simultaneously.

For a complex system such as the one previously described, the central objective is to achieve system stability. Systems are considered stable when convergence to the global minimum has occurred, which can only be achieved when the entire task is completed. Robots often need to work in an environment which is dynamic, noisy, and unpredictable. Nevertheless, they should be reliable in their ability to finish the task despite all of the distractions, including the intervention of moving obstacles. This guarantee of convergence is necessary for critical tasks such as rescuing victims after a disaster. Stability for single-agent navigation in a static environment can be achieved by constructing a simple Lyapunov function. However, stability for multiagent team navigation in a dynamic environment is inherently more difficult, due primarily to the interaction among agents and the presence of moving obstacles. In this paper, we propose a cooperative hybrid navigation scheme and construct a mode-specific team Lyapunov function, which shows that the system is stable at all times, using arguments from hybrid systems theory and Lyapunov stability theory.

The rest of the paper is divided into a number of sections that address individual aspects of this problem. In Section 2, we describe the hierarchical agent design that is the basis for this work. Section 3 presents a potential field-based hybrid navigation scheme. In Section 4, a motion control law is derived, and Section 5 extends the framework to environments with moving obstacles. Section 6 incorporates nonholonomic constraints into the agent design, while Section 7 considers robots with both nonholonomic constraints and moving obstacles. Section 8 presents simulation results for a multirobot system subject to nonholonomic constraints in a dynamic environment. In Section 9, team stability is analyzed. Finally, in Section 10, we make concluding remarks and discuss future research directions.

Figure 1: Agent-based navigation software architecture. The architecture of a single agent is structured in four hierarchical layers: DecisionMaking (DM), KnowledgeBase (KBase), Interaction and Communication. Local and remote sensor information is integrated into the knowledge base, and no distinction is made between them during motion planning.

2. Agent Design

The multilayer agent architecture used in this work is depicted in Figure 1. Each of the robots in the team uses the same architecture, and there is no centralized control for the team. The highest level is the task coordination or knowledge level; at this level of abstraction, agents analyze and manage tasks using an inherent coordination mechanism. The middle level entails path planning, where agents choose the suitable navigation function to pursue objectives based on the accumulation of knowledge or team decisions. Lastly, the lowest level interacts with the actuator and sensors to govern motion control.

Our agent team is completely decentralized: there is no leader or coordinator for the team, and each agent analyzes and makes navigation and coordination decisions autonomously, based on its own knowledge of the world. This design is inherently fault tolerant and easy to reconfigure in the event that the task requirements change [6].

Our previous work describes the agent architecture in detail and how it is used to complete high-level tasks [6]. Here we provide a brief description of its main components and how they assist with low-level path planning. The architecture of a single agent consists of four hierarchical layers: DecisionMaking (DM), KnowledgeBase (KBase), Interaction and Communication. In the following paragraphs, we define the functionality of each layer as well as the interfaces between the layers. For each of these layers, the control software is multithreaded, with each main object running in its own thread.

The Decision Making (DM) layer is the main source of intelligence in the system. It performs navigation planning based on the information stored in the Knowledge Base, using the hybrid approach described in [6]. At any point in time, a robot can be in one of a finite set of modes, which represent high-level navigation goals. A rule-based system is used to transition between the modes based on the current state of the task. The modes define the navigation goals for the low-level path planning system.

The Knowledge Base contains the agent's most recent knowledge about the world map and the "group knowledge," as well as the inference rules for coordination. This information includes the locations of all known obstacles and any goals that have been located. The information in the Knowledge Base is built up from the robot's own experience, plus that of the other robots it is communicating with.
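To make the layered design concrete, here is a minimal Python sketch of the division of labor described above. It is an illustration only, not the authors' implementation: all class, mode, and method names (KnowledgeBase, DecisionMaking, "SEARCH", "RETRIEVE") are hypothetical, and the switching rule is a stand-in for the rule-based system.

```python
import threading

class KnowledgeBase:
    """Thread-safe store for the agent's world model: known obstacles,
    located goals, and knowledge reported by teammates."""
    def __init__(self):
        self._lock = threading.Lock()
        self.obstacles, self.goals = [], []

    def add_obstacle(self, pos):
        with self._lock:
            self.obstacles.append(pos)

    def add_goal(self, pos):
        with self._lock:
            self.goals.append(pos)

    def snapshot(self):
        with self._lock:
            return list(self.obstacles), list(self.goals)

class DecisionMaking:
    """Rule-based mode switching: each mode is a high-level navigation
    goal that parameterizes the low-level planner."""
    def __init__(self, kbase):
        self.kbase = kbase
        self.mode = "SEARCH"

    def step(self):
        _obstacles, goals = self.kbase.snapshot()
        # Hypothetical rule standing in for the full rule set:
        # once a goal object has been located, stop searching and retrieve it.
        if goals and self.mode == "SEARCH":
            self.mode = "RETRIEVE"
        return self.mode
```

In the full architecture, the Interaction and Communication layers would write teammate reports into the same KnowledgeBase from their own threads, which is why the store is guarded by a lock.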
Devices at the Coordination layer, discussed in [7], separate the interaction tasks from the main periodic motion planning task. The interaction between agents is performed using an unambiguous communication protocol [7]. We define interfaces between the interaction devices and both the Knowledge Base and Communication layers. At the Interaction layer, devices perform coordination as well as other interaction tasks and store the results in the Knowledge Base, without interfering with regular decision making. As a result, it is a simple matter to add new interaction devices if necessary for new applications.

In summary, the higher levels of the architecture provide the locations of the obstacles and goals, which are the input to the low-level path planning system. The higher level ensures that the whole space is searched and all the intended tasks are completed, while the lower level deals with navigating robots from their current locations to goal locations.

3. Potential Field-Based Navigation Scheme

A local-minimum-free potential field can be constructed using generalized spherical potential functions or harmonic functions [8]. In this paper we use a two-dimensional generalized Gaussian function, which is much easier to construct and more effective for obstacles whose protective areas can be superscribed by circles without overlapping.

Attractive objects centered at (a_x, a_y) are represented by the negative Gaussian attractor function

    f(x, y) = 1 - e^{-((x - a_x)^2 + (y - a_y)^2)/(2\sigma^2)}.   (1)

Repulsive obstacles and other agents centered at (r_x, r_y) are modeled with the circular, two-dimensional generalized Gaussian repulsor function

    f(x, y) = e^{-(1/2)(((x - r_x)^2 + (y - r_y)^2)/\sigma^2)^C}   (2)

for some positive integer C. The variance \sigma is a measure of the size of the obstacle. The parameter C determines the effective range (steepness) of the obstacle and can be varied to modify the obstacle's repulsive effect. We represent obstacle shapes with a superscribed circle.

By specifying a minimum clearance between obstacles and highly localizing the influence of the obstacles through modifications to the parameter C, generalized Gaussian function modeling can be used to construct potential fields that are free of undesired local minima. Figure 2(a) uses a small C, and there are local minima among the obstacles; by changing C to a larger value, and thereby highly localizing the effects of the obstacles, the local minima disappear in Figure 2(b).

Figure 2: Effect of parameter C.

In [6], we proposed a mode-switching technique in which mode switches are based on events such as the number of unsearched sectors decreasing and the number of found objects/obstacles increasing. Using the technique of artificial potential fields, we construct a navigation function V_i^X(q) for agent i in mode X_i (where X_i can be any of the modes defined above). To construct the navigation function for a given mode, we use the three-part formula

    V_i^X(q) = VA_i^X(q_i) + VO_i^X(q_i) + \sum_{j \neq i} VR(q_i, q_j).   (3)

In (3), VA_i^X(q_i) represents the sum of the effects on agent i of all the N_A attractors in the system during mode X_i, and thus

    VA_i^X(q_i) = \sum_{k=1}^{N_A} ( 1 - e^{-((x_i - (a_k)_x)^2 + (y_i - (a_k)_y)^2)/(2\sigma^2)} ).   (4)

VO_i^X(q_i) represents the sum of the effects on agent i of all the known static obstacles in the system during mode X_i. The number of known static obstacles is N_O, so we can state this as

    VO_i^X(q_i) = \sum_{k=1}^{N_O} e^{-(1/2)(((x_i - (r_k)_x)^2 + (y_i - (r_k)_y)^2)/\sigma^2)^C}.   (5)

Finally, the functions VR(q_i, q_j) represent the repulsor functions between pairs of agents i and j. Note that VR is not mode dependent, since the number of agents is assumed to be constant. Thus, in general,

    VR(q_i, q_j) = e^{-(1/2)(((x_i - x_j)^2 + (y_i - y_j)^2)/\sigma^2)^C}.   (6)

4. Motion Control Law

Using the technique of artificial potential fields, we construct a navigation function V_i^X(q) for each agent i in mode X_i (where X_i can be any of the modes defined above). The (mode-dependent) kinematics of each agent are then given by

    \dot{q}_i = -\alpha (\partial V_i^X / \partial q_i) / \| \partial V_i^X / \partial q_i \|.   (7)

In (7), the operator \partial V_i^X(q)/\partial q_i represents the gradient of V_i^X(q) with respect to only q_i. Thus, we use a gradient descent method to generate a unit vector direction for \dot{q}_i, and the constant velocity parameter \alpha to determine agent speed. We use a unit gradient since our Gaussian-like potential functions decay very rapidly.
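As a concrete illustration of (1)-(7), the sketch below (an illustration, not from the paper; parameter values are arbitrary and finite differences stand in for the analytic gradient) sums the Gaussian attractor and generalized Gaussian repulsor terms and takes one constant-speed step along the negative unit gradient:

```python
import math

def navigation_potential(q, attractors, obstacles, others, sigma=1.0, C=2):
    """V(q): Gaussian attractor terms as in Eq. (4), plus generalized
    Gaussian repulsors for static obstacles (Eq. (5)) and other agents
    (Eq. (6))."""
    x, y = q
    V = 0.0
    for ax, ay in attractors:
        V += 1.0 - math.exp(-((x - ax) ** 2 + (y - ay) ** 2) / (2 * sigma ** 2))
    for rx, ry in list(obstacles) + list(others):
        V += math.exp(-0.5 * (((x - rx) ** 2 + (y - ry) ** 2) / sigma ** 2) ** C)
    return V

def unit_gradient_step(q, V, alpha=0.05, h=1e-5):
    """Eq. (7): constant speed alpha along the negative unit gradient,
    with the gradient estimated by central finite differences."""
    x, y = q
    gx = (V((x + h, y)) - V((x - h, y))) / (2 * h)
    gy = (V((x, y + h)) - V((x, y - h))) / (2 * h)
    norm = math.hypot(gx, gy)
    return (x - alpha * gx / norm, y - alpha * gy / norm)
```

With C = 2 and sigma = 1, repeated calls to unit_gradient_step move an agent toward the attractor while the sharply localized repulsor terms deflect it around obstacles.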
Note that the gradient descent method gives the direction of motion towards the goal, but the speed of this motion, represented by \alpha, is a function of the robot design. That is, each robot has a maximum speed that must be accounted for by the navigation system.

5. Extension to Environments with Moving Obstacles

Moving obstacle avoidance is of great importance in robotics research and is inherently harder than dealing with only static obstacles. Many works have been devoted to this problem [9-12]. The most closely related work in the literature is presented by Esposito and Kumar in [12]. In that work, Esposito and Kumar propose a nonlinear programming method for computing optimal feasible directions for the mobile robots. By moving along feasible directions instead of the negative gradient direction, robots can successfully avoid all the moving obstacles while retaining a stable trajectory. In other words, the optimal direction decreases the potential function value most rapidly within the feasible direction set; it is not optimal in terms of time or shortest distance, though. Although that method is effective in many cases, it may fail to converge for some steps, especially in cases where the number of moving obstacles increases (i.e., more nonlinear constraints need to be considered). Also, this method is computationally expensive due to the iterations in the optimization process (please refer to [13]).

Our algorithm uses geometric transformations to solve the moving obstacle avoidance problem. Similar to the work of Esposito and Kumar, this algorithm achieves moving obstacle avoidance by finding optimal feasible solutions. However, there is no approximation in the proposed method, and only several geometric transformations are involved even for the most complex scenarios, so it can achieve accurate solutions with less computation. More importantly, the proposed method is guaranteed to find an optimal solution when there is one.

Another advantage of the proposed method is that it is easier to incorporate the algorithm into various motion planning algorithms. For example, we will show in Section 7 that the proposed algorithm can be easily incorporated into the nonholonomic robot's motion planning [14] to achieve moving obstacle avoidance.

We associate with each moving obstacle a dynamic constraint [12] of the following form:

    g_j(q_i) = \| q_i - P_j \| - D_obs \geq 0,   (8)

where P_j is the position of the moving obstacle j, and D_obs is the minimum safe distance between the robot and the moving obstacle.

When moving obstacles are far away from robots, they are ignored by the planning algorithm. When the robot moves within the threshold distance D_obs of a moving obstacle, the constraints become activated, and robots move along the feasible motion directions. If the path is totally blocked by obstacles (static and moving), the robot will simply halt until the path is cleared, which is the same strategy as is used in [15].

In the context of moving obstacle avoidance, feasible motion directions, denoted by d, are the directions that can stabilize the system and move robots away from the activated moving obstacles, and thus satisfy the following conditions:

    d \cdot d_ngi > 0,   (9)
    d \cdot dg_j \geq 0,  j = 1, 2, ..., m,   (10)

where dg_j = \partial g_j / \partial q_i represents the gradient of the constraint g_j, and d_ngi = -(\partial V_i^X(q)/\partial q_i) / \| \partial V_i^X(q)/\partial q_i \| is the negative gradient direction of the navigation function V_i^X(q) of robot i. In (9), we require any control which can stabilize the system to have a positive projection on d_ngi (moving towards the goal). In (10), we require any control which can move the robot away from the moving obstacle to have a positive projection on dg_j (moving away from the moving obstacles).

Theorem 1. The feasible motion direction set can be constructed using only the robot gradients of the navigation function and the constraints for moving obstacles. (Please see the appendix for the proof.)

In our setting, the robot will move along the optimal feasible direction calculated from the constructed feasible set, where the direction is optimal in the sense that it is closest to the negative gradient direction. We will show later in this section that the resulting motion directions are the same as those of the optimization method in [12] when the optimization method converges. Recall that [\Theta_min, \Theta_max] is the set of feasible directions, expressed as rotation angles with respect to the negative gradient direction d_ngi. From basic geometry (refer to Figure 13), we can see that there will be no feasible solution if \Theta_min > \Theta_max. If there exists a feasible solution and d_ngi is in the feasible set, then d_ngi is the optimal feasible gradient direction; otherwise, the optimal feasible gradient direction can be obtained by rotating d_ngi into the feasible direction set with the smallest rotation.

If we use \Theta_ri to denote the optimal rotation angle, the following formula for \Theta_ri is given:

    \Theta_ri = 0 if \Theta_min \leq 0 and \Theta_max \geq 0;  \Theta_min if \Theta_min > 0;  \Theta_max if \Theta_max < 0.   (11)

We define the optimal feasible gradient direction d_fngi to be

    d_fngi = d_ngi B_ri(q),   (12)

where the rotation matrix B_ri(q) is of the following form:

    B_ri(q) = [ cos \Theta_ri  -sin \Theta_ri ; sin \Theta_ri  cos \Theta_ri ].   (13)

The control law is then given by

    \dot{q}_i = k \cdot d_fngi.   (14)

In Figure 3, we illustrate this algorithm in a typical goal-reaching task. The robot is required to reach the goal while avoiding the moving obstacle.

Figure 3: The darker line is the trajectory of a robot trying to reach the goal (@ at the top of the figure). It must achieve this goal while avoiding the moving obstacle (lighter line).

In Figure 4, we show that the results obtained from this method are the same as those from the optimization method in [12] when the optimization method converges. Figure 4 shows the histogram of the trajectory differences between the optimization method and the proposed method in both the x and y directions; we can see that the differences are of the order of 10^{-15}, which is not significant.

Figure 4: Histogram of position differences over 350 steps in both x and y directions, showing that the trajectories generated by the two algorithms are the same.

The proposed algorithm is much more computationally efficient than the optimization method. We use moving obstacles with randomly generated positions and run the program 50 times (350 steps in total) for the simple scenario illustrated in Figure 3; the average number of iterations is above 5 for the optimization method, and for more complex scenarios the number of iterations will increase substantially. The proposed algorithm, however, always has a computational complexity of O(1), so it is at least five times faster than the optimization method on average.
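The construction in (11)-(13) reduces to clipping a rotation angle into the feasible cone and rotating the unit negative-gradient direction. It can be sketched as follows (an illustrative reimplementation, not the paper's code; angles are in radians, measured counterclockwise from d_ngi, and the function names are ours):

```python
import math

def optimal_rotation(theta_min, theta_max):
    """Eq. (11): the smallest rotation taking the negative gradient
    direction into the feasible cone [theta_min, theta_max], where the
    angles are measured relative to d_ngi."""
    if theta_min > theta_max:
        return None                  # empty feasible set: the robot halts
    if theta_min <= 0.0 <= theta_max:
        return 0.0                   # d_ngi itself is feasible
    return theta_min if theta_min > 0.0 else theta_max

def feasible_direction(d_ng, theta_min, theta_max):
    """Eqs. (12)-(13): rotate the unit negative-gradient direction d_ng
    by the optimal angle; returns (0, 0) when the path is blocked."""
    theta = optimal_rotation(theta_min, theta_max)
    if theta is None:
        return (0.0, 0.0)
    c, s = math.cos(theta), math.sin(theta)
    return (c * d_ng[0] - s * d_ng[1], s * d_ng[0] + c * d_ng[1])
```

The O(1) cost claimed above is visible here: once the cone [\Theta_min, \Theta_max] has been formed, the step is a comparison and a single rotation, independent of the number of active constraints.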
6. Nonholonomic Robots

The nonholonomic robots we consider in this paper are Hilare-type mobile robots, which have the most typical nonslip constraint. This robot has two parallel wheels which can be controlled independently. By commanding the same velocity to both wheels, the robot moves in a straight line. By commanding velocities with the same magnitude but opposite directions, the robot pivots about its axis. The nonslip constraint forces the mobile robot to move only forward or backward. We resolve the constraint by selecting the forward or backward direction based on whichever one has a positive projection on the gradient descent direction, and then updating the heading as fast as possible to match the gradient descent heading. If we assume it is a kinematic system, the inputs are the linear and angular velocities.

We define the angle corresponding to the negative gradient of the navigation function, \theta_ngi, as

    \theta_ngi = arctan( -\partial V_i^X / \partial y_i, -\partial V_i^X / \partial x_i ),   (15)

and define the angle between the negative gradient and the heading to be \theta_ni = min{ (\theta_ngi, \theta_i), (\theta_ngi, -\theta_i) }. \theta_ni is in the set [-\pi/2, \pi/2] by construction (please refer to Figure 5). The angular velocity control law is given by

    \dot{\theta}_i = \beta (\theta_ngi - \theta_i) / | \theta_ngi - \theta_i |,   (16)

and the linear velocity control law is given by

    \dot{q}_i = -\alpha ( (\partial V_i^X / \partial q_i) B_ni )^T / \| \partial V_i^X / \partial q_i \|,   (17)

where \alpha and \beta are constant speed parameters and

    B_ni = [ cos \theta_ni  -sin \theta_ni ; sin \theta_ni  cos \theta_ni ].   (18)

Figure 5: (a) The robot forward direction has a positive projection on the negative gradient direction; the robot will move forward to guarantee that \theta_ni is in the set [-\pi/2, \pi/2]. (b) The robot backward direction has a positive projection on the negative gradient direction; the robot will move backward to guarantee that \theta_ni is in the set [-\pi/2, \pi/2].

Figure 6(a) shows the effectiveness of the algorithm in a multiple obstacle avoidance task. A nonholonomic robot successfully reaches the target after traversing a workspace which is packed with randomly generated obstacles of different sizes. Figure 6(b) is a close-up view of Figure 6(a), showing fairly smooth motion of the robot around the obstacles.

Figure 6: Motion planning for nonholonomic robots with multiple obstacles.

7. Both Nonholonomic Constraints and Moving Obstacles

In this section, we consider the motion control problem subject to both nonholonomic constraints and moving obstacles. We will first find a feasible direction set such that all directions in this set can move the robot towards the goal and, at the same time, away from the moving obstacles. We define the optimal feasible direction as the direction that is in the feasible set and closest to the gradient descent direction. If the robot's forward direction is in the feasible set, the robot will move forward; if the robot's backward direction is in the feasible set, the robot will move backward (note that the backward direction and forward direction cannot be in the feasible set at the same time). When neither of these directions is in the feasible set, the robot will not generate linear velocity but will rotate towards the optimal feasible direction. In this way, the robot can move towards the goal (taking both position and orientation into account) and avoid moving obstacles at the same time.

If we use \Theta_oi to denote the optimal rotation angle, the following formula for \Theta_oi is given:

    \Theta_oi = 0 if \Theta_min \leq 0 and \Theta_max \geq 0;  \Theta_min if \Theta_min > 0;  \Theta_max if \Theta_max < 0   (19)

(refer to Figure 13), and we define the feasible steepest descent direction d_i to be

    d_i = d_ngi B_oi(q),   (20)

where B_oi(q) is defined as follows:

    B_oi(q) = [ cos \Theta_oi  -sin \Theta_oi ; sin \Theta_oi  cos \Theta_oi ].   (21)

Motion Control Law. The (mode-dependent) kinematics of each agent are then given by

    \dot{\theta}_i = \beta \theta_roi / | \theta_roi |,   (22)

where \theta_roi is defined as the angle from the orientation \theta_i to the optimal feasible direction d_i, and

    \dot{q}_i = -\alpha ( (\partial V_i^X / \partial q_i) B_ai(q_i) )^T / \| \partial V_i^X / \partial q_i \|,   (23)

where the rotation matrix B_ai(q_i) is defined as

    B_ai(q) = [ cos \Theta_ai  -sin \Theta_ai ; sin \Theta_ai  cos \Theta_ai ],   (24)

where \Theta_ai is defined as the rotation angle from the negative gradient direction to the actual motion direction (forward or backward). Note that we require \Theta_ai to be in the set [-\pi/2, \pi/2]; the robot will move forward if the forward direction is a feasible direction, and move backward if the backward direction is a feasible direction. B_ai is set to the zero matrix when neither the forward direction nor the backward direction is in the feasible set. In this case, the robot will not have any linear velocity.

Figure 7 shows a comparison of two scenarios. The first scenario is the goal-reaching task with no obstacles. The second scenario is the goal-reaching task with one moving obstacle that is always in the way of the robot at each step of its motion. From the figures, it is clear that the robot can reach the goal in 6 steps if the route is clear, while it needs 80 steps under the influence of the moving obstacle.
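The forward/backward resolution of Sections 6 and 7 can be sketched compactly. The following is an illustrative fragment, not the paper's code; the bang-bang turn rate mirrors (16), and the small deadband is our addition to avoid chattering at zero error:

```python
import math

def wrap(a):
    """Wrap an angle into (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def heading_command(theta_i, grad_V, beta=0.5, deadband=1e-9):
    """Pick forward (+1) or backward (-1) motion, whichever has a positive
    projection on the negative gradient direction (Eq. (15)), and return a
    bang-bang turn rate toward it (Eq. (16))."""
    theta_ng = math.atan2(-grad_V[1], -grad_V[0])
    err_fwd = wrap(theta_ng - theta_i)               # turn needed to drive forward
    err_bwd = wrap(theta_ng - (theta_i + math.pi))   # turn needed to reverse
    direction = 1.0 if abs(err_fwd) <= abs(err_bwd) else -1.0
    err = err_fwd if direction > 0 else err_bwd
    omega = beta * (err / abs(err)) if abs(err) > deadband else 0.0
    return direction, omega
```

Choosing the better of the two directions keeps the tracked error within [-\pi/2, \pi/2], which is exactly what makes the cos \theta_ni factor in the stability analysis of Section 9 nonnegative.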
Figure 8 shows the difference in robot motions with and without moving obstacles in a static obstacle avoidance task. The first figure shows that the robot can get around the static obstacle in 60 steps; with a moving obstacle, however, the robot in the second figure needs 90 steps.

Figure 7: (a) Robot motion with nonholonomic constraints and no obstacle. (b) Robot motion with nonholonomic constraints and a random moving obstacle; note that, to fully test our algorithm, we keep the moving obstacle activated at each step.

Figure 8: (a) Robot motion with nonholonomic constraints, with a static obstacle at (55, 40). (b) Robot motion with nonholonomic constraints, with a moving obstacle moving around (55, 40).

8. Simulation Results

In this section, we show how this framework succeeds in a typical navigation and coordination task: search-and-transport. We have performed simulations of agent teams in several environments. Figure 9 shows the results of a typical simulation, played out on a 10 x 10 grid containing 6 static obstacles (represented by the # symbol) and 2 moving obstacles (represented by the M symbol), with a team of 3 agents trying to find 3 small objects (represented by the @ symbol) and 3 big objects (represented by the O symbol). The O symbols also represent positions where the big objects are picked up, and the @ symbols also represent positions where the small objects are picked up. The solid lines near the moving obstacles are the trajectories of the moving obstacles. Robot R1 starts at position (1, 0), robot R2 starts at position (3, 0), and robot R3 starts at position (5, 0). The small object collection location is (9, 0) and the big object collection location is (1, 0).

In Figure 9, we illustrate the search-and-transport task. Only R3's trajectory is shown (represented by the dashed line) in Figure 9, to reduce clutter.

Figure 9: This figure is the snapshot after the task is completed. All three big objects have been returned to the big object collection position, and all three small objects to the small object collection position. The robots are back at their home positions. The dashed line represents the trajectory of agent R3.

Figures 10(a)-10(c) show part of the simulation. We use them to illustrate coordination among agents by showing the paths followed by the three agents and the moving obstacles. The trajectories of agents R1, R2, and R3 are represented by the solid line, dash-dot line, and dashed line, respectively. From this figure, we can see that R2 found a big object at the "O" point, R3 came to help, and together they returned the big object to the big object collection position. R1 found a small object at the "@" point and is carrying it back to the small object collection position. The figure also illustrates how all agents avoid both static and moving obstacles at all times.

Figure 10: This figure shows the agent team coordination in a big-object-carrying task. In (a), R2 finds a big object, stops searching, and calls for help; R3 comes to help after receiving the confirm message. In (b), R3 arrives at the big object, and R2 and R3 carry the big object home together. Finally, in (c), R2 and R3 have reached the object home position.

Figure 11 illustrates our moving obstacle avoidance technique. When R1 spots a moving obstacle, represented by the M symbol, the moving obstacle avoidance algorithm is incorporated into the navigation function. By always moving along the feasible direction, R1 can make progress towards the goal and avoid collision with the obstacle at the same time.

In Figure 12, we illustrate how our method works in a more complex scenario. The environment is enlarged from 10 by 10 to 50 by 50. The workspace is more cluttered, and the obstacles are larger and assume different shapes. It is shown that the algorithms presented in this paper are not affected by these changes: the three robots can still coordinate and accomplish the search-and-transport task. The new workspace contains eight closely placed static obstacles of different shapes and two moving obstacles (represented by the M symbol). The number of large objects remains three, and the number of small objects is increased to four. As in the last scenario, the solid lines near the moving obstacles are the trajectories of the moving obstacles. Robot R1 starts at position (5, 0), robot R2 starts at position (15, 0), and robot R3 starts at position (25, 0). The small object collection location is (45, 0) and the big object collection location is (5, 0).

9. Team Stability Analysis

With the mode-switching technique defined in [6], we can guarantee that mode switches occur in finite time, as follows. Although the order of mode switches cannot be predicted in advance, our switching technique guarantees that the mode-transition graph will be acyclic, since mode switches only occur when the team has made progress towards solving the overall task, and there is never a situation in which a given mode is re-entered. Therefore, it is straightforward to show that our hybrid system is one of the switched systems defined in [16]. Now we only need to show that the low-level path planning scheme presented here is stable.
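The acyclicity property that this switching argument relies on can be checked mechanically for any concrete rule set. A small illustrative checker (ours, not the paper's; the mode names in the test are hypothetical):

```python
def is_acyclic(graph):
    """Depth-first cycle check: the mode-transition graph must be a DAG
    for the switched-system stability argument to apply. `graph` maps
    each mode to the list of modes it may switch into."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {m: WHITE for m in graph}

    def visit(m):
        color[m] = GRAY
        for n in graph.get(m, []):
            if color.get(n, WHITE) == GRAY:
                return False          # back edge: some mode is re-entered
            if color.get(n, WHITE) == WHITE and not visit(n):
                return False
        color[m] = BLACK
        return True

    return all(visit(m) for m in graph if color[m] == WHITE)
```

Because every switch is triggered by irreversible progress (a sector searched, an object found), each edge points forward and no mode can be re-entered, which is what the check confirms for a given transition table.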
The definition of our modal navigation functions and of our mode-switching rules allows us to show that the system (the entire agent team) is globally stable at all times and in all states [16].

Figure 9: Snapshot after the task is completed. All three big objects have been returned to the big object collection position, and all three small objects have been returned to the small object collection position. The robots are back in their home positions. The dashed line represents the trajectory of agent R3.

Figure 10: Agent team coordination in a big object carrying task. In (a), R2 finds a big object, stops searching, and calls for help; R3 comes to help after receiving the confirm message. In (b), R3 arrives at the big object, and R2 and R3 carry the big object home together. Finally, in (c), R2 and R3 have reached the object home.

9.1. Stability without Constraints. Within each mode, we can define a mode-specific Lyapunov function of the following form:

V(q) = \sum_{i=1}^{Q} V^{X_i}(q) - \sum_{i=1}^{Q} \sum_{j=i+1}^{Q} VR(q_i, q_j)    (25)

     = \sum_{i=1}^{Q} \left[ VA_i(q_i) + VO_i(q_i) \right] + \sum_{i=1}^{Q} \sum_{j=i+1}^{Q} VR(q_i, q_j),    (26)

where the step from (25) to (26) is justified by the fact that VR(q_i, q_j) = VR(q_j, q_i). In these equations, Q is the number of robots.

To show intramodal Lyapunov stability, we are required to show that V(q) \ge 0 for all q and \dot{V}(q) < 0 for all q, t. V(q) \ge 0 follows naturally from the definition of V(q) in (26). To show that V(q) is always decreasing, we begin from (25). For convenience, in the derivations that follow, we replace terms of the form VR(q_i, q_j) with the short form VR_{ij}:

\dot{V}(q) = \sum_{i=1}^{Q} \left( \frac{\partial V^{X_i}(q)}{\partial q_i} \right)^{T} \dot{q}_i + \sum_{i=1}^{Q} \sum_{j \neq i} \left( \frac{\partial V^{X_i}(q)}{\partial q_j} \right)^{T} \dot{q}_j - \sum_{i=1}^{Q} \sum_{j=i+1}^{Q} \left[ \left( \frac{\partial VR_{ij}}{\partial q_i} \right)^{T} \dot{q}_i + \left( \frac{\partial VR_{ij}}{\partial q_j} \right)^{T} \dot{q}_j \right].    (27)

Now, observe that since q_j appears in V^{X_i}(q) only through the VR_{ij} terms, the second term of (27) can be rewritten as

\sum_{i=1}^{Q} \sum_{j \neq i} \left( \frac{\partial V^{X_i}(q)}{\partial q_j} \right)^{T} \dot{q}_j = \sum_{i=1}^{Q} \sum_{j \neq i} \left( \frac{\partial VR_{ij}}{\partial q_j} \right)^{T} \dot{q}_j
= \sum_{i=1}^{Q} \sum_{j=i+1}^{Q} \left( \frac{\partial VR_{ij}}{\partial q_j} \right)^{T} \dot{q}_j + \sum_{i=1}^{Q} \sum_{j=1}^{i-1} \left( \frac{\partial VR_{ij}}{\partial q_j} \right)^{T} \dot{q}_j
= \sum_{i=1}^{Q} \sum_{j=i+1}^{Q} \left( \frac{\partial VR_{ij}}{\partial q_j} \right)^{T} \dot{q}_j + \sum_{p=1}^{Q} \sum_{s=p+1}^{Q} \left( \frac{\partial VR_{ps}}{\partial q_p} \right)^{T} \dot{q}_p
= \sum_{i=1}^{Q} \sum_{j=i+1}^{Q} \left[ \left( \frac{\partial VR_{ij}}{\partial q_i} \right)^{T} \dot{q}_i + \left( \frac{\partial VR_{ij}}{\partial q_j} \right)^{T} \dot{q}_j \right].    (28)

Thus, the second and third terms in (27) cancel, and we have

\dot{V}(q) = \sum_{i=1}^{Q} \left( \frac{\partial V^{X_i}(q)}{\partial q_i} \right)^{T} \dot{q}_i.    (29)

If we substitute for \dot{q}_i using the dynamics defined in (7), we obtain

\dot{V}(q) = -\alpha \sum_{i=1}^{Q} \left( \frac{\partial V^{X_i}(q)}{\partial q_i} \right)^{T} \frac{\partial V^{X_i}/\partial q_i}{\left\| \partial V^{X_i}/\partial q_i \right\|} = -\alpha \sum_{i=1}^{Q} \left\| \frac{\partial V^{X_i}(q)}{\partial q_i} \right\|,    (30)

and we have \dot{V}(q) < 0 for all t, q, as required.

Remark 1. The constructed Lyapunov function V(q) is a generalized energy function. The first term \sum_{i=1}^{Q} VA_i(q_i) decreasing means that the agents get closer to the attractors. The second term \sum_{i=1}^{Q} VO_i(q_i) decreasing means that the agents move away from the static obstacles. The third term \sum_{i=1}^{Q} \sum_{j=i+1}^{Q} VR(q_i, q_j) decreasing means that the agents try to stay away from the other team members.

9.2. Stability Subject to Nonholonomic Constraints. If we substitute into (29) for \dot{q}_i using the dynamics defined in (17), we obtain

\dot{V}(q) = -\alpha \sum_{i=1}^{Q} \left( \frac{\partial V^{X_i}(q)}{\partial q_i} \right)^{T} B_{ni}(q_i) \frac{\partial V^{X_i}/\partial q_i}{\left\| \partial V^{X_i}/\partial q_i \right\|}.    (31)

If we denote v_{xi} = -\partial V^{X_i}(q)/\partial x_i and v_{yi} = -\partial V^{X_i}(q)/\partial y_i,

\left( \frac{\partial V^{X_i}(q)}{\partial q_i} \right)^{T} B_{ni}(q_i) \frac{\partial V^{X_i}(q)}{\partial q_i} = \begin{bmatrix} v_{xi} \\ v_{yi} \end{bmatrix}^{T} B_{ni} \begin{bmatrix} v_{xi} \\ v_{yi} \end{bmatrix} = \begin{bmatrix} v_{xi} \\ v_{yi} \end{bmatrix}^{T} \begin{bmatrix} \cos\theta_{ni} & -\sin\theta_{ni} \\ \sin\theta_{ni} & \cos\theta_{ni} \end{bmatrix} \begin{bmatrix} v_{xi} \\ v_{yi} \end{bmatrix} = v_{xi}^{2} \cos\theta_{ni} + v_{yi}^{2} \cos\theta_{ni}.    (32)

Note that, by construction, \theta_{ni} \in [-\pi/2, \pi/2]. Therefore we have \dot{V}(q) < 0 for all t, q, as required.

9.3. With Moving Obstacles. From (29), we have

\dot{V}(q) = \sum_{i=1}^{Q} \left( \frac{\partial V^{X_i}(q)}{\partial q_i} \right)^{T} \dot{q}_i.    (33)

If we substitute for \dot{q}_i using the dynamics defined in (14), we obtain

\dot{V}(q) = \sum_{i=1}^{Q} \left( \frac{\partial V^{X_i}(q)}{\partial q_i} \right)^{T} k\, d_{fng_i}.    (34)

According to the definition of feasible directions (refer to (9)), we have \dot{V}(q) < 0 for all t, q, as required.

9.4. With Moving Obstacles and Nonholonomic Constraints. If we substitute for \dot{q}_i using the dynamics defined in (23), we obtain

\dot{V}(q) = \sum_{i=1}^{Q} (-\alpha) \left( \frac{\partial V^{X_i}(q)}{\partial q_i} \right)^{T} B_{ai}(q_i) \frac{\partial V^{X_i}/\partial q_i}{\left\| \partial V^{X_i}/\partial q_i \right\|}.    (35)

If we denote v_{xi} = -\partial V^{X_i}(q)/\partial x_i and v_{yi} = -\partial V^{X_i}(q)/\partial y_i,

\left( \frac{\partial V^{X_i}(q)}{\partial q_i} \right)^{T} B_{ai}(q_i) \frac{\partial V^{X_i}(q)}{\partial q_i} = \begin{bmatrix} v_{xi} \\ v_{yi} \end{bmatrix}^{T} B_{ai} \begin{bmatrix} v_{xi} \\ v_{yi} \end{bmatrix} = \begin{bmatrix} v_{xi} \\ v_{yi} \end{bmatrix}^{T} \begin{bmatrix} \cos\theta_{ai} & -\sin\theta_{ai} \\ \sin\theta_{ai} & \cos\theta_{ai} \end{bmatrix} \begin{bmatrix} v_{xi} \\ v_{yi} \end{bmatrix} = v_{xi}^{2} \cos\theta_{ai} + v_{yi}^{2} \cos\theta_{ai}.    (36)

Note that, by construction, \theta_{ai} \in [-\pi/2, \pi/2]. Therefore we have \dot{V}(q) < 0 for all t, q, as required.

Figure 11: (a) R1 spots a moving obstacle in its way and starts the moving obstacle avoidance algorithm for its navigation. (b)-(e) show that, at each step, the robot moves away from the obstacle, thereby avoiding collision; meanwhile, the total attractive "force" from the unsearched cells and the objects draws the robot toward the goal.

10. Conclusions and Future Work

In our previous work, we proposed a software framework and a hybrid navigation scheme for multiple-agent navigation and coordination. However, there were several issues that remained unresolved in the previous work. Specifically, the robots and their environment were abstractions of the real world. For instance, the robots had the ability to move in all directions and the environment contained only fixed obstacles. In this paper, we extend our work by eliminating these ideal characteristics and considering the challenges
that robots are likely to encounter in the real world. In particular, we propose geometric-based methods to avoid moving obstacles and satisfy nonholonomic constraints. Our methods are suitable for fast online calculation, and they are easily combined with one another to solve more complex problems, such as nonholonomic constraints with moving obstacles. Furthermore, our control framework applies to a wide range of scenarios. We have demonstrated search-and-transport problems with interdependencies for a multiagent team, and this framework is applicable to a much wider range of tasks than were discussed in this paper, due to our hierarchical design and fully distributed organization and because our stability proof makes no assumptions about the nature of the team members.

Figure 12: Simulation results with a more complex environment. Obstacles are larger and take on different shapes, and the workspace is more crowded. This figure is the snapshot after the task is completed. All three big objects are returned to the big object collection position, and all four small objects are returned to the small object collection position.

Figure 13: This figure illustrates our optimal feasible direction scheme. Note that the filled area is the resulting feasible area.

Appendix

Proof. (For simplicity, we drop the subscript "i", which refers to the ith robot, in this proof.) First we denote the angle corresponding to the gradient of constraint g_j by \theta_{vj} and define it as follows:

\theta_{vj} = \arctan\left( \frac{\partial g_j}{\partial y}, \frac{\partial g_j}{\partial x} \right),    (A.1)

where -\pi < \theta_{vj} \le \pi. To better illustrate our method, we represent all directions as rotation angles with respect to d_{ng}. \Theta_j is defined as the angle between d_{ng} and dg_j; we define this angle to be positive if a counterclockwise rotation takes d_{ng} to dg_j and negative otherwise:

\Theta_j = \theta_{vj} - \theta_{ng}.    (A.2)

The resultant \Theta_j falls in the interval [-2\pi, 2\pi], and we transform it to the interval [-\pi, \pi] by the following formula:

\Theta_j = \begin{cases} \Theta_j & \text{if } -\pi \le \Theta_j \le \pi, \\ \Theta_j - 2\pi & \text{if } \Theta_j > \pi, \\ \Theta_j + 2\pi & \text{if } \Theta_j < -\pi. \end{cases}    (A.3)

Next we construct a feasible direction set [\Theta_{j\min}, \Theta_{j\max}] for each of the activated moving obstacles:

\Theta_{j\min} = \begin{cases} \Theta_j - \pi/2 & \text{if } \Theta_j \ge 0, \\ -\pi/2 & \text{if } \Theta_j < 0, \end{cases} \qquad \Theta_{j\max} = \begin{cases} \Theta_j + \pi/2 & \text{if } \Theta_j \le 0, \\ \pi/2 & \text{if } \Theta_j > 0. \end{cases}    (A.4)

Note that the feasible direction set [\Theta_{j\min}, \Theta_{j\max}] for each activated moving obstacle constraint is a subset of [-\pi/2, \pi/2], because all feasible directions must lie in the same half tangent plane as d_{ng}. Next, we find the intersection \theta_f = [\Theta_{\min}, \Theta_{\max}] of the feasible direction sets [\Theta_{j\min}, \Theta_{j\max}] that works for all moving obstacles:

\Theta_{\min} = \max\{ \Theta_{1\min}, \Theta_{2\min}, \ldots, \Theta_{m\min} \}, \qquad \Theta_{\max} = \min\{ \Theta_{1\max}, \Theta_{2\max}, \ldots, \Theta_{m\max} \}.    (A.5)

By construction, we know that \theta_f is also a subset of [-\pi/2, \pi/2].

References

[1] S. Hirose, S. Yokota, A. Torii, et al., "Quadruped walking robot centered demining system—development of TITAN-IX and its operation," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '05), pp. 1284–1290, 2005.
[2] L. E. Parker, "ALLIANCE: an architecture for fault tolerant multirobot cooperation," IEEE Transactions on Robotics and Automation, vol. 14, no. 2, pp. 220–240, 1998.
[3] A. Davids, "Urban search and rescue robots: from tragedy to technology," IEEE Intelligent Systems, vol. 17, no. 2, pp. 81–83, 2002.
[4] B. P. Gerkey and M. J. Mataric, "Sold!: auction methods for multirobot coordination," IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 758–768, 2002.
[5] A. J. Ijspeert, A. Martinoli, A. Billard, and L. M. Gambardella, "Collaboration through the exploitation of local interactions in autonomous collective robotics: the stick pulling experiment," Autonomous Robots, vol. 11, no. 2, pp. 149–171, 2001.
[6] J. Ren, K. A. McIsaac, and R. V. Patel, "A novel hybrid navigation scheme for reconfigurable multi-agent teams," International Journal of Robotics and Automation, vol. 21, no. 2, pp. 100–109, 2006.
[7] H. Ghenniwa and M. Kamel, "Interaction devices for coordinating cooperative distributed systems," Intelligent Automation and Soft Computing, vol. 6, no. 3, pp. 173–184, 2000.
[8] J.-O. Kim and P. K. Khosla, "Real-time obstacle avoidance using harmonic potential functions," IEEE Transactions on Robotics and Automation, vol. 8, no. 3, pp. 338–349, 1992.
[9] R. Kimmel, N. Kiryati, and A. M. Bruckstein, "Multivalued distance maps for motion planning on surfaces with moving obstacles," IEEE Transactions on Robotics and Automation, vol. 14, no. 3, pp. 427–436, 1998.
[10] K. Fujimura and H. Samet, "Hierarchical strategy for path planning among moving obstacles," IEEE Transactions on Robotics and Automation, vol. 5, no. 1, pp. 61–69, 1989.
[11] R. A. Conn and M. Kam, "Robot motion planning on N-dimensional star worlds among moving obstacles," IEEE Transactions on Robotics and Automation, vol. 14, no. 2, pp. 320–325, 1998.
[12] J. M. Esposito and V. Kumar, "A method for modifying closed-loop motion plans to satisfy unpredictable dynamic constraints at run-time," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '02), pp. 1691–1696, Washington, DC, USA, 2002.
[13] M. Bazaraa, H. Sherali, and C. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, New York, NY, USA, 1995.
[14] H.-S. Shim and Y.-G. Sung, "Stability and four-posture control for nonholonomic mobile robots," IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 148–154, 2004.
[15] J. Evans, B. Krishnamurthy, B. Barrows, T. Skewis, and V. Lumelsky, "Handling real-world motion planning: a hospital transport robot," IEEE Control Systems Magazine, vol. 12, no. 1, pp. 15–19, 1992.
[16] M. S. Branicky, "Multiple Lyapunov functions and other analysis tools for switched and hybrid systems," IEEE Transactions on Automatic Control, vol. 43, no. 4, pp. 475–482, 1998.
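As a supplement, the Appendix construction (A.1)–(A.5) can be prototyped directly. The sketch below is ours, not the paper's implementation: it assumes the activated constraint gradients are given as 2-D world-frame vectors together with the angle of d_ng, and the function names (`wrap_to_pi`, `feasible_direction_set`) are illustrative.

```python
import math

def wrap_to_pi(angle):
    """Map an angle in [-2*pi, 2*pi] into [-pi, pi] (cf. (A.3))."""
    if angle > math.pi:
        return angle - 2 * math.pi
    if angle < -math.pi:
        return angle + 2 * math.pi
    return angle

def feasible_direction_set(theta_ng, constraint_gradients):
    """Intersect the per-obstacle feasible direction sets (A.4)-(A.5).

    theta_ng: world-frame angle of the direction d_ng.
    constraint_gradients: list of (dg/dx, dg/dy) for the activated
    moving-obstacle constraints.
    Returns (theta_min, theta_max) relative to d_ng, or None if the
    intersection is empty.
    """
    theta_min, theta_max = -math.pi / 2, math.pi / 2
    for gx, gy in constraint_gradients:
        theta_vj = math.atan2(gy, gx)                 # (A.1)
        Theta_j = wrap_to_pi(theta_vj - theta_ng)     # (A.2)-(A.3)
        # (A.4): each set lies in the same half tangent plane as d_ng.
        j_min = Theta_j - math.pi / 2 if Theta_j >= 0 else -math.pi / 2
        j_max = Theta_j + math.pi / 2 if Theta_j <= 0 else math.pi / 2
        # (A.5): running intersection over all activated obstacles.
        theta_min, theta_max = max(theta_min, j_min), min(theta_max, j_max)
    if theta_min > theta_max:
        return None                                   # no common feasible direction
    return theta_min, theta_max
```

With no activated constraints the set is the full half plane [-pi/2, pi/2]; each additional obstacle can only shrink it, which mirrors the subset property noted after (A.5).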
Journal of Robotics – Hindawi Publishing Corporation
Published: Feb 1, 2010