Design, Implementation, and Performance Evaluation of a Web-Based Multiple Robot Control System
Rajapaksha, U. U. Samantha; Jayawardena, Chandimal; MacDonald, Bruce A.
2022-05-30
Hindawi Journal of Robotics, Volume 2022, Article ID 9289625, 24 pages. https://doi.org/10.1155/2022/9289625

Research Article

U. U. Samantha Rajapaksha (1), Chandimal Jayawardena (2), and Bruce A. MacDonald (3)

(1) Department of Information Technology, Faculty of Computing, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka
(2) Department of Computer System Engineering, Faculty of Computing, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka
(3) Department of Electrical, Computer and Software Engineering, The University of Auckland, Auckland, New Zealand

Correspondence should be addressed to U. U. Samantha Rajapaksha; samantha.r@sliit.lk

Received 24 March 2022; Accepted 11 May 2022; Published 30 May 2022

Academic Editor: L. Fortuna

Copyright © 2022 U. U. Samantha Rajapaksha et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract: Heterogeneous multiple robots are currently being used in smart homes and industries for different purposes. The authors have developed a Web interface to control and interact with multiple robots with autonomous robot registration. The robot registration engine (RRE) was developed to register all robots with their relevant ROS topics, and a ROS topic identification algorithm was developed to identify the relevant ROS topics for publication and subscription. The Gazebo simulator spawns all robots to interact with a user. The initial experiments were conducted with simple instructions and then extended to manage multiple instructions using a state transition diagram. The number of robots was increased to evaluate the system's performance by measuring the robots' start and stop response times.
The authors have conducted experiments to work with the semantic interpretation of user instructions. Mathematical equations for the delay in response time have been derived by considering each experiment's inputs and the system characteristics, and Big O notation is used to analyze the running time complexity of the algorithms developed. The experimental results indicated that the autonomous robot registration was successful and that communication performance through the Web decreased gradually with the number of robots registered.

1. Introduction

Autonomous robot registration and control is one of the complex tasks in robotic application development. ROS was developed to improve interoperability and reduce the complexity of programming heterogeneous multiple robots. ROS is a kind of middleware used by developers in robotic applications to reuse most of the existing software developed by different researchers. There are different nodes, topics, and message formats for different robots in ROS. An algorithm was developed to find the related topics to control different robots in ROS. Therefore, in our system, the main component is the robot registration engine (RRE), which is developed to register multiple heterogeneous robots by getting all related rostopics. The Web interface was developed to interact with robots and users using the ROS bridge server, which worked as an interface between the ROS environment and the Web interface. We have developed different Web interfaces, described below as Web interfaces I to V, to interact with the user in the different types of experiments in our research.

Web interfaces I to IV were developed to work with instructions such as moving the robot to a specific location and working with multiple instructions sequentially. Web interface V was developed to work with instructions with semantics. We have used the Gazebo simulator for our experiments. The robot actions and the initial position changed with time. Therefore, we created a schedule for each robot to complete movement or navigation in the experiment with Web interface V. Then, we identified the relevant ROS topic in the corresponding nodes to subscribe and publish the corresponding command values from the user command. The command publishing engine (CPE) is responsible for publishing the ROS command for each action defined in the given user-level instruction.

There have been many research projects with multiple robots, but our work is unique because of its autonomous robot registration with a Web interface, its performance evaluation, and its use of heterogeneous robots. We have conducted experiments with Web interfaces I to V with different inputs. The state transition system works with multiple instructions when the user issues several commands sequentially. We have derived mathematical equations for each experiment for the delay time in response to the inputs and system characteristics. The running time of each algorithm was expressed using Big O notation, representing its time complexity.

The following sections are grouped as follows. Section 2 presents a literature survey with background reading and related research work. The methodology, with the algorithms and main components of the design, is presented in Section 3. The experiments and evaluation of the research project, with results, are described in Section 4. Finally, Section 5 describes the conclusion with future work.

2. Background Studies

There are many research works related to heterogeneous multiple robot control and communication. Therefore, we have categorized the background reading as multiple robot controls, Web interfaces for robot control, and robot programming and control interfaces with user instructions.

2.1. Multiple Robot Controls. Some research groups have implemented heterogeneous multiple robot control with the help of a human. Seohyun et al. have developed a layered architecture to manage and control multiple robots with the intervention of humans. They have designed the interface to separate the autonomous and manual parts and proposed an architecture to control multiple robots with human intervention, separating the manual part and the mechanical part. They have enhanced multiple robot control with human intervention [2].

Alberri et al. have developed an architecture to connect multi-robot heterogeneous systems in a hierarchical system that is mainly based on ROS. A layered architecture was used in this development: the lower layers were implemented in C and C++, while complex computations were performed by the upper layer and an intermediate level. They have used three different devices (an autonomous quadcopter, an autonomous mobile robot, and an autonomous vehicle) to complete the testing of the system [3].

A system was developed where personal computers work as servers and robots work as nodes. Again, the hybrid architecture based on ROS with multiple robot systems was used. The server processed all complex computation and visualization, and each node in the robots was used to process the real-time tasks [1]. Different architectures have been used to design heterogeneous multiple robot systems, including centralized, distributed, and hybrid modes [1]. Our solution is based on the centralized server architecture, as shown in Figure 1.

2.2. Web Interface for Robot Control. Costa et al. have introduced a Web-based interface for multiple robot communication using ROS. Two services were implemented, named monitor and control. In addition, they have implemented operations such as move forward, move to the right, move to the left, and move backward. The main contribution was to let laypeople manage heterogeneous robots with the help of ROS [4].

Penmetcha et al. have implemented a system to manage both ROS-based and non-ROS robots with cloud technologies. The robotic applications were executed with machine learning algorithms based on JavaScript libraries. CPU utilization and latency were measured, and an average latency of 35 milliseconds was achieved. In addition, the innovative cloud was developed using Amazon Web Services [5].

Singhal et al. have developed a fleet management system with autonomous mobile robots using a single master and a cloud-based configuration. In addition, autonomous navigation was used with a global planner. The authors have identified the critical limitations and issues with cloud robotics [6].

Beetz et al. have developed a service named openEASE to work with the available research based on cloud technology. openEASE is a Web-based knowledge service that robotic researchers can access remotely to retrieve semantically annotated data from real-world scenarios [7].

Casañ et al. have implemented a tool with a Web browser interface for online robot programming. It provides an interface with a text box for scripting. MATLAB remote programming environments were used to implement the system [8].

Even though there are many projects with Web interfaces for robot control, our work is different since we have implemented the interface to register and control heterogeneous robots and to work with multiple instructions sequentially.

Rajapaksha et al. have implemented a system that takes user-level instructions with uncertain words for a drone and converts them to a machine-understandable executable format using an ontology [9, 10].

Rajapaksha et al. have developed a system to control and communicate with robots using user instructions with uncertain terms. They used an ontology to represent the knowledge of the robot for uncertain terms. The developed system is able to understand commands such as "go fast" and "go very fast." They have developed a user-friendly environment to interact with the robots [11, 12].

Rajapaksha et al. have developed a GUI-based system to program and control robots with a Web interface [13].

Figure 1: High-level system diagram (a Web interface and command interpreter, backed by an ontology and schedule management for multiple robots, translate a generic command such as "Move to your allocated location and start the work" into TurtleBot-, Husky-, and TIAGo-specific commands).

Rajapaksha et al. have implemented a heterogeneous multiple robot control system by registering robots autonomously with high-level user instructions [14, 15].

Buscarino et al. have proposed a methodology to control a group of robots without central coordination. They have proved that the performance of a system subject to noise can be improved by including long-range connections between the robots. They have modeled the network as a dynamic network [16].

2.3. Robot Programming and Control Interface with User Instructions. Tiddi et al. have developed a system to help nonexpert users in robotics with robotic application development, using an ontology in the ROS environment. The main focus was to reduce the time for robot programming for a specific task using the ontology representation. The nonexpert user needs to configure the system to complete different tasks with the robot [17].

Tiddi et al. have developed an interface that allows nonexperts to use a robot as a development platform. The system provides high-level commands with the help of a fundamental ontology. These ontologies map the high-level capabilities onto the robot's low-level capabilities (e.g., communication and synchronization). They have used ROS as the middleware [18].

Pomarlan and Bateman have implemented a system that translates a "semantic specification" in a natural language instruction into a program that a simulated robot can execute. For example, the system can interpret a sentence into a program that allows the robot to understand the sentence. The main task was to cover a set of basic action concepts from an ontology [19].

Amaratunga et al. have developed an interface that allows novice programmers to program easily. These ideas can be used for robot programming interface development [20].

Muthugala et al. have reviewed service robot communication in which robots can work with information having uncertainty in natural language instructions. They have implemented a system to identify the issues in working with qualitative information in the given user instruction. They have indicated that the quantitative value of information with uncertain terms can depend on the environment, previous experience, and the current context [21].

Sutherland and MacDonald have created a domain-specific language to work with text, named RoboLang, which works with existing programming tools. In addition, the program code can be executed on other platforms with minor modifications [22].

Datta et al. developed an integrated development environment for visual programming with an abstract textual domain-specific language. It provides a program development environment to build robotic applications quickly and simply from user requirements [23]. Jayawardena et al. developed a new concept named the coach-player model to learn from user commands [24, 25].

Gayashini et al. have developed a navigation model for an unknown area with obstacles; they have developed a reverse navigation model based on previous knowledge [26]. Panagoda et al. have developed a similar system with a potential field graph. They have developed a recovery behavior algorithm to find an alternative path if the current path has any obstacle [27].

Jayawardena et al.
have implemented a system to produce software for a given robotic programming scenario within a minimum amount of time. Less coding is needed to create software for the given scenario, and the software can be modified quickly and without errors. The behavior execution engine (BEE) was used to integrate the subsystems [28].

Datta et al. have developed a system with an environment to develop programs for robots with interactive behaviors. Moreover, it is a visual programming tool: subject matter experts (SMEs) can take part in service robot application development, which makes post-software deployment easy [29].

Kim et al. have developed a system to understand qualitative information in commands for service robots using an ontology. They have used lexicon semantic pattern matching to get the most relevant keywords from the user instruction. They developed an interpretation system as a prototype, and it was tested with many commands. Standard vocabulary and semantics were defined in an ontology that intelligent agents can use [30].

Scibilia et al. have reviewed motor control theory and sensory feedback applications performed in parallel. Optimal control models were developed to represent humans' ability to behave optimally after a certain level of training. The advantages of the structural model and Hosman's descriptive model are discussed in this review [31].

Bucolo et al. have worked on a complex and imperfect electromechanical structure that can be used as a paradigm for imperfect systems. They have indicated that the electrical and mechanical interactions generate complex patterns because the imperfections prevent the system from reaching the correct conditions [32]. Our solution may likewise not be perfect in terms of performance characteristics.

Rashid et al. have developed an algorithm named cluster matching to get the orientation and localization of robots. Each robot could estimate the relative orientation of neighbor robots within its transmission range. It is able to get the absolute positions and orientations of the team robots without knowing the IDs of the other robots [33].

Ali et al. have developed a multi-robot navigation model for dynamic environments named shortest distance. A collision-free trajectory is computed using the current orientations and positions of the other robots. This algorithm is based on the concept of reciprocal orientation, which guarantees smooth trajectories and collision-free paths [34].

According to the above background studies, some research is similar to our system, but our system provides an automated robot registration engine that is not available in any other system. Furthermore, our semantic analysis is based on algorithms that are optimized compared with the existing techniques used by other researchers.

3. Methodology

The authors have implemented a Web interface to interact with the robots and users. The Web interfaces were developed to interact with the different types of experiments in our research. Web interfaces I to IV were developed to work with simple instructions such as moving the robot forward, moving the robot in a circle, and getting the robot's current position. Web interface V was developed to work with instructions with semantics. We have used the Gazebo simulator for our experiments. The standard ROS JavaScript Library provided by the ROS Web Tools (http://robotwebtools.org/) was used to connect ROS with the Web interface. In the last experiment, the user can issue an instruction like "Move to the Room 3" to all robots placed at different positions. Figure 2 represents the system architecture of our system.

3.1. Robot Registration Engine. The algorithm that we have developed to register multiple heterogeneous robots, with human intervention, is represented in Figure 3. We initially created a node called "regRobot" to execute the remaining steps of the algorithm. IP addresses were extracted from the given IP address list, named "ipList." The IP address is used to connect all heterogeneous service robots in the Gazebo environment. Next, ROS commands were executed to collect the software specification, using the execl() system call from the ROS node created earlier. Finally, an ontology named "Registration Ontology" is created to represent the available ROS details.

3.2. Command Interpreter. When a user issues a high-level user instruction on the Web interface provided by the system, the instruction is analyzed by the command interpreter to separate the action, subject, object, and constraint, as shown in Figure 4. First, the instruction is sent for synonym and semantic processing. Then, the relevant ROS nodes and the ROS topics for subscription and publication are found with the algorithm shown in Figure 3.

The system handles multiple instructions, issued one by one by the user, using a state transition diagram with the states described in Figure 5. The robot state is saved in a ROS topic so that it can be retrieved from time to time. When the robot is ready, it accepts the user's instruction and completes the assigned work accordingly.

When a user issues multiple instructions to the robot through the Web interface, the related flowchart with the state transitions is shown in Figure 6. Initially, a robot must register with the robot registration engine and update its state to ready in the ROS topic. Then, the robot can work according to the instruction given by the user. While the first instruction is being processed, the user can issue another instruction, and the robot must then be interrupted to handle the second instruction. Based on the priority of the instruction, the robot must decide whether to continue the current work or start the second instruction. The work state has the highest priority, the motion state has the second highest priority, the dialog state has the third priority, and the ready state has the lowest priority. Each robot will exit from the system if no instructions are received within the defined timeout.

Figure 2: System architecture diagram (a user instruction passes from the Web interface to the command interpreter, whose synonym and semantic analyses read the ontology; the robot registration engine and the command publishing engine, with movement, schedule, navigation, ontology, and communication management, drive TurtleBot, TIAGo, and Husky).

Figure 3: Robot registration algorithm.

Figure 4: Initial interpretation process.

Figure 5: State transition diagram. The states are S0: Starting, S1: Registered, S2: Ready, S3: Move, S4: Working, S5: Dialog, and S6: Exit. S0 → S1 when the robot has registered; S1 → S2 when the robot is ready for inputs; S2 → S3 when a move command is received; S2 → S4 when a work command is received; S2 → S5 when a dialog command is received; S4 → S2 when an interrupt command is received; and a timeout takes the robot to S6.

3.3. Movement Management. The most critical component of our experiments is the movement of the robots using different instructions through different interfaces. Once a robot is registered with the RRE, it uses the ROS topic identification algorithm to identify the corresponding ROS topic for the movement. In experiment 01, the authors used teleoperation to move robots forward and in a circle in an open environment in Gazebo. In experiments 02, 03, and 04, the authors used the Web-based interface to move multiple robots forward and in a circle in an open environment in Gazebo. Finally, in experiment 05, the robot was moved to a specific location using the algorithm given in Figure 7. The notations used in the flowchart are described in Table 1.

3.4. Synonym Analysis. Users can enter different types of instructions, as described in Table 2 and based on the command interpreter outputs, and the system accepts only commands and commands with a condition. There can be commands with different verbs that have the same meaning, called synonyms. Robots are not able to understand synonyms unless this is appropriately programmed. Therefore, we implemented an ontology using the Web Ontology Language property "sameAs" to find the synonyms in the given instruction. We have used the "owl:sameAs" statement to identify two uniform resource identifiers that refer to the same "identity." For example, synonyms for the instruction "move" are "shift," "go," "proceed," "walk," and "advance."

Figure 6: Flowchart for multiple instruction handling.
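The priority-based interrupt handling shown in Figure 6 can be sketched as a small state machine. This is a minimal illustration, not the authors' implementation; the class and method names are ours, and only the state names and their priority order come from the paper.

```python
# Sketch of the priority-based instruction handling of Figures 5 and 6.
# Priorities follow the paper: work > motion (move) > dialog > ready.
PRIORITY = {"work": 4, "move": 3, "dialog": 2, "ready": 1}

class RobotStateMachine:
    """Hypothetical per-robot state holder; in the real system the state
    is kept in a ROS topic, not in a Python object."""

    def __init__(self):
        self.state = "ready"  # state after registering with the RRE

    def handle(self, instruction):
        """Accept a new instruction; interrupt the current one only if the
        new instruction has strictly higher priority."""
        new_p = PRIORITY.get(instruction, 0)
        cur_p = PRIORITY.get(self.state, 0)
        if self.state == "ready" or new_p > cur_p:
            self.state = instruction
            return True   # robot interrupted, new instruction started
        return False      # robot keeps executing the current instruction

    def finish(self):
        """Return to the ready state once the assigned work is complete."""
        self.state = "ready"
```

For example, a robot executing a move instruction ignores a later dialog request (lower priority) but is preempted by a work request (higher priority), matching the priority ordering described above.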
Figure 7: Flowchart for moving the robot to a specific goal (x0, y0). After creating a ROS node and subscribing to the odometry topic, the robot reads its current position (xg, yg) and orientation θ (converted from quaternion to Euler form), computes xd = x0 − xg, yd = y0 − yg, and θd = atan(yd/xd), and then publishes either a pure rotation (Ux = 0.0, ωz = 0.5) while |θd − θ| > 0.1 or a forward motion (Ux = 1.5, ωz = 0.0), repeating until xd == 0 and yd == 0.

Table 1: Notations used in the flowchart and experiments.

Notation   Description
U_x^s      Linear speed of the robot in the x-direction at the start, in m s^-1
ω_z^s      Angular speed of the robot about the z-axis at the start, in rad s^-1
ω_z^e      Angular speed of the robot about the z-axis at the stop, in rad s^-1
θ          Current robot orientation in quaternion form
θ_d        Difference between the current robot orientation and the goal orientation in quaternion form

Table 2: General goal and task scheduling table.

Robot   Time slot 1 (t0–t1)     Time slot 2 (t1–t2)     Time slot 3 (t2–t3)     Time slot 4 (t3–t4)
R1      Goal(1,1) + Task(1,1)   Goal(1,2) + Task(1,2)   Goal(1,3) + Task(1,3)   Goal(1,4) + Task(1,4)
R2      Goal(2,1) + Task(2,1)   Goal(2,2) + Task(2,2)   Goal(2,3) + Task(2,3)   Goal(2,4) + Task(2,4)
R3      Goal(3,1) + Task(3,1)   Goal(3,2) + Task(3,2)   Goal(3,3) + Task(3,3)   Goal(3,4) + Task(3,4)

Users can update the ontology manually. Synonym identification is used in the ROS topic identification algorithm for publishing commands. Different heterogeneous service robots can use different ROS topics; therefore, we need to find the correct ROS topic on which to publish the commands.

3.5. Semantic Analysis. The semantic meaning of the command is one of the main tasks in interpreting user-level instructions. If a robot can detect a semantic error in the given user-level instruction, that better realizes the robot's intelligence. For example, when a user issues a user-level instruction with the verb "go," we can guarantee that the next part should be a location or destination. The semantic analysis algorithm is described in Figure 8.

The ontology code has a property that restricts all robots from moving to a specific position. "owl:allValuesFrom" is the property that can be used to define the class with all possible values of the property given by "owl:onProperty." If the object is not in the restricted value list, it is considered an invalid command and user intervention is requested.

3.6. Ontology. An ontology is a model used to represent concepts and the relationships among them; for example, with a robot ontology we can represent all concepts in the robot domain and the relationships among all concepts related to robots [35–37]. Finding concepts in the ontology is the step that takes the most time, because the running time complexity of the search algorithm is O(n), where n is the number of classes in the given ontology. Part of the ontology that we have created is shown in Figure 9.

3.7. Command Publishing Engine. According to the user-level instruction issued, the command interpreter identifies the action (move, navigate, identify), subject, constraint, and object defined in the user instruction. The command publishing engine needs to identify the ROS topics relevant to the action, to publish and subscribe for initiation of the action. For example, if we want to move the robot to a specific location, we can publish the command on ROS topics such as cmd_vel, cmd_vel_mux, or cmd_vel_mux/input/navi. These ROS topics vary from robot to robot in heterogeneous environments. The possible ROS topics for the movement and the ROS topic for the initial pose are shown in Figure 10.

When a user enters an instruction to all heterogeneous service robots, we need to initiate the action for each robot. This task is completed by the command publishing engine (CPE), which publishes the action on the corresponding ROS topic. Initially, the CPE locates the current position of each robot using the optimized algorithm; the get-initial-position algorithm for each robot is defined in Figure 11. The algorithm uses the IP address and the updated ontology to get the initial position and orientation. We have created a node in ROS called "initPos," which is responsible for running the remaining steps of the defined algorithm. In addition, this node can find the relevant ROS topics related to the initial position and orientation of the robot.

Each robot may have a different ROS topic to subscribe and publish to for different operations. Therefore, we need to identify these topics before executing any commands on each robot. The ROS topic identification algorithm is described in Figure 12. Initially, the system uses the given IP address list and port list to connect with all robots. The ROS topics in the ontology, which the RRE generated previously to create a shared file named rtList, are used. Then, the GetROSTopic() algorithm is called to get the corresponding ROS topic for each action defined in the user instruction. For example, if the action is to move the robot from one location to another, then we need to find the corresponding ROS topic from the identified list, such as "cmd," "vel," "cmd_vel," "velocity," "speed," "travel," and "run." If the identified ROS topics list does not match the ROS topics received from the RRE, we call GetUncertainROSTopic() to find the ROS topics using synonyms of the action based on the ontology. If we can find one, we use that topic for subscribing or publishing the action; otherwise, we need to get user input to resolve the problem.

Figure 8: Semantic analysis algorithm.
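The keyword-then-synonym matching described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the keyword list and the "move" synonyms come from the text, but the function bodies, the substring-matching strategy, and the dictionary standing in for the ontology are our assumptions.

```python
# Sketch of the ROS topic identification step: match the topics a robot
# advertises against keywords for the action, then fall back to synonyms.
MOVE_KEYWORDS = ["cmd_vel", "cmd", "vel", "velocity", "speed", "travel", "run"]

# Stand-in for the ontology's owl:sameAs synonym relations (from the text).
SYNONYMS = {"move": ["shift", "go", "proceed", "walk", "advance"]}

def get_ros_topic(advertised_topics, keywords):
    """Return the first advertised topic whose name contains a keyword,
    or None if nothing matches (mirrors GetROSTopic() in spirit)."""
    for topic in advertised_topics:
        if any(keyword in topic for keyword in keywords):
            return topic
    return None

def get_uncertain_ros_topic(advertised_topics, action):
    """Retry the match using the ontology synonyms of the action
    (mirrors GetUncertainROSTopic() in spirit)."""
    return get_ros_topic(advertised_topics, SYNONYMS.get(action, []))

# Example: a TurtleBot-like robot advertising these topics.
topics = ["/odom", "/scan", "/cmd_vel_mux/input/navi"]
match = get_ros_topic(topics, MOVE_KEYWORDS)  # matches the cmd_vel_mux topic
```

If both lookups return None, the sketch corresponds to the fallback in the text: ask the user to resolve the topic manually.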
ROSPackage /move_base_simple /goal /cmd ROSNode ROSTopic /odem ROSServices owl:ing Cameras /map Hardware Robot /cmd_vel Scanners ROS Robot 2 Soware Robot 3 Robot 1 Figure 9: Fragment of the ontology. 3.8. Schedule Management. In our solution, we have robot has given a specific goal (G ) or position to move with i,j assigned scheduled work and location for each robot for a specific allocated work (T ) based on the given time al- i,j given time slot. (e robot can execute user instruction only if location as shown in Table 2. According to the given time it is a free time slot; otherwise, the robot needs to complete slot, the location to move (goal) and task to be completed for the allocated task. (e CPE can publish or subscribe to the each robot are displayed in the goal and task scheduling relevant values for each ROS topic. Each heterogeneous table. 10 Journal of Robotics /cmd_vel Move /*../*../ cmd_vel Navigate /cmd_vel_mux Action Identify /*../*../ cmd_vel_mux ……………. /cmd_vel_mux/*../ /odem */odem /cmd_vel_mux /odemetry ……………. /command_velocity ROS Topic for Initial Pose Figure 10: ROS topics for the movement. Figure 11: Get initial position algorithm. 3.9. Navigation Management. Autonomous navigation of another location easily by hiding most of the complex tasks the robot is one of the main research areas in robotic in autonomous robot navigation. Navigation can be programming. ROS is implemented to work with the nav- implemented using the ROS topics, message formats, and igation stack that is used to navigate from one location to shapes of footprint of the robot and selecting the relevant Journal of Robotics 11 Figure 12: ROS topic identification algorithm. values for the ROS topics for each robot. Odometry and localization. 
For example, ROStopiccm d vel, ROStopicgoal, sensor information were used as main inputs for the ROS ROStopico de m, ROStopiclocal plan, ROStopicglobal navigational stack, and then, it generated the corresponding plan, and ROStopicfootprint were used for remapping the velocity for the mobile base. According to the ROS speci- ROStopicmove base node for each robot. fication, we can find that the mobile base is controlled by xisvelocity, yisvelocity, an dt hetaisvelocity, and a 2D 3.10. %read Management. Since we need to control and planner laser is mounted on the mobile base. (e navigation coordinate multiple robots simultaneously, threads can be is exactly successful on the square-shaped robots. used to complete the task efficiently. Furthermore, a thread is (e map server was used to store the created map file. All a lightweight process inside a process. (erefore, concur- heterogeneous service robots used the map stored in the map rency can be developed using the threads quickly. server to navigate obstacles from one location to another. amcl (AdaptiveMonteCarloLocalization)fileand move basefilefor each robot were maintained as launch files to localize and move 4. Experiment and Results the robot in the given environment. For example, ROSscan, ROSo do metry, ROSinitialpose, and ROSparticleclou d We have conducted the experiments with Web interfaces I to topics were used in the amcl launch file for each robot for the V for simple instructions and measured the response time of 12 Journal of Robotics Table 3: Single robot average start/stop response time without Web interface. 
The initial experiment was conducted without the Web interface. The notation used in our experiments is listed in Table 1.

4.1. Experiment 01: Single Robot Interaction with Simple Instruction without Using the Web Interface. Initially, the authors completed the experiment with a single robot, without the Web interface, in the Gazebo simulator with a TurtleBot3. Instructions to move the robot forward and to move in a circle were issued from the terminal with the rostopic pub command. We evaluated the average response time of the robot for start and stop instructions, conducting the experiments with different linear and angular speeds. The results are shown in Table 3, and the interaction with the TurtleBot3 through the terminal, without a Web interface, is shown in Figure 13.

Table 3: Single robot average start/stop response time without Web interface.

                     U_x = 0.5 m/s   U_x = 1.0 m/s   U_x = 1.5 m/s
Start response (s)
  ω_z = 0.0             0.871           0.807           0.787
  ω_z = 0.5             0.657           0.541           0.531
  ω_z = 1.0             0.561           0.512           0.499
  ω_z = 1.5             0.511           0.501           0.476
Stop response (s)
  ω_z = 0.0             1.211           1.728           2.161
  ω_z = 0.5             1.039           1.631           1.981
  ω_z = 1.0             1.001           1.431           1.871
  ω_z = 1.5             0.988           1.181           1.761

Figure 13: Single robot interaction without Web interface.

The response delays for the start and stop of the robot are represented by equations (1) and (2):

R_{s,d}^{start} = \tau_{d,os} + \tau_{d,ROS} + \frac{c_1}{U_x^s + \omega_z^s},   (1)

R_{s,d}^{stop} = \tau_{d,os} + \tau_{d,ROS} + c_2 (U_x^s + \omega_z^s),   (2)

where R_{s,d}^{start} and R_{s,d}^{stop} represent the single-robot delay at start and stop, respectively, \tau_{d,os} represents the delay of system call execution in the operating system, \tau_{d,ROS} represents the delay of communicating with ROS topics, and c_1 and c_2 are constants.

Figure 14 represents the average start and stop response times of the robot for each instruction. The average start response time gradually decreases as the linear and angular speeds increase, while the average stop time increases as the linear and angular speeds increase.

4.2. Experiment 02: Single Robot Interaction with Simple Instruction with Web Interface without Autonomous Robot Registration. The authors developed the Web interface to interact with the robot through the rosbridge server. Instructions to move the robot forward and to move in a circle were issued using the buttons provided in the Web interface. We evaluated the average response time of the robot for start and stop instructions, again with different linear and angular speeds. The results are shown in Table 4, and the interaction with the TurtleBot3 through the Web interface is shown in Figure 15.

Table 4: Single robot average start/stop response time with Web interface.

                     U_x = 0.5 m/s   U_x = 1.0 m/s   U_x = 1.5 m/s
Start response (s)
  ω_z = 0.0             0.811           0.789           0.766
  ω_z = 0.5             0.753           0.732           0.699
  ω_z = 1.0             0.611           0.601           0.544
  ω_z = 1.5             0.571           0.577           0.501
Stop response (s)
  ω_z = 0.0             1.031           1.402           1.981
  ω_z = 0.5             1.001           1.267           1.812
  ω_z = 1.0             0.981           1.101           1.602
  ω_z = 1.5             0.911           0.999           1.201

The response delays for the start and stop of the robot are represented by equations (3) and (4):

R_{s,d}^{start} = \tau_{d,web} + \tau_{d,ROS} + \frac{c_1}{U_x^s + \omega_z^s},   (3)

R_{s,d}^{stop} = \tau_{d,web} + \tau_{d,ROS} + c_2 (U_x^s + \omega_z^s),   (4)

where R_{s,d}^{start} and R_{s,d}^{stop} represent the single-robot delay at start and stop, respectively, \tau_{d,web} represents the delay of communication through the Web interface, \tau_{d,ROS} represents the delay of communicating with ROS topics, and c_1 and c_2 are constants.

Figure 16 represents the average start and stop response times of the robot for each instruction. The average start response time gradually decreases as the linear and angular speeds increase, while the average stop time increases as the linear and angular speeds increase. According to the analysis, the authors have identified that Web communication is slightly faster than communication through the terminal.
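The delay models in equations (1)–(4) can be written as small helper functions to check the reported trends (start delay falls and stop delay rises as speed grows). This is a sketch; the parameter values passed in are illustrative, not the paper's fitted constants.

```python
def start_delay(tau_comm, tau_ros, c1, u_x, omega_z):
    # Start model: fixed communication delays plus a term that
    # shrinks as the commanded linear/angular speed grows.
    return tau_comm + tau_ros + c1 / (u_x + omega_z)

def stop_delay(tau_comm, tau_ros, c2, u_x, omega_z):
    # Stop model: stopping takes longer the faster the robot moves.
    return tau_comm + tau_ros + c2 * (u_x + omega_z)
```

With any positive constants, start_delay is monotonically decreasing and stop_delay monotonically increasing in the combined speed U_x + ω_z, matching the trends in Tables 3 and 4.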
Figure 14: Single robot interaction without Web interface.

Figure 15: Single robot interaction with Web interface.

Figure 16: Single robot interaction with Web interface.

4.3. Experiment 03: Single Robot Interaction with Simple Instruction with a Web Interface with Autonomous Robot Registration. The robot registration engine was developed to collect all robot details, including all ROS topics necessary for subscribing and publishing. The ROS topic identification algorithm was developed to select the relevant ROS topics for each action defined in the user instruction. We evaluated the average response time of the robot for start and stop instructions, with different linear and angular speeds. The results are shown in Table 5, and the interaction with the TurtleBot3 through the Web interface with auto-registration is shown in Figure 17.

Table 5: Single robot average start/stop response time with Web interface and autonomous registration.

                     U_x = 0.5 m/s   U_x = 1.0 m/s   U_x = 1.5 m/s
Start response (s)
  ω_z = 0.0             1.011           1.001           0.981
  ω_z = 0.5             1.001           0.987           0.956
  ω_z = 1.0             0.987           0.872           0.789
  ω_z = 1.5             0.861           0.761           0.712
Stop response (s)
  ω_z = 0.0             1.345           1.765           2.552
  ω_z = 0.5             1.241           1.451           2.222
  ω_z = 1.0             1.109           1.431           1.988
  ω_z = 1.5             1.011           1.344           1.765

The response delays for the start and stop of the robot are represented by equations (5) and (6):

R_{s,d}^{start} = \tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + \frac{c_1}{U_x^s + \omega_z^s},   (5)

R_{s,d}^{stop} = \tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + c_2 (U_x^s + \omega_z^s),   (6)

where R_{s,d}^{start} and R_{s,d}^{stop} represent the single-robot delay at start and stop, respectively, \tau_{d,web} represents the delay of communication through the Web interface, \tau_{d,ROS} the delay of communicating with ROS topics, \tau_{d,RT} the delay of ROS topic identification, and c_1 and c_2 are constants.

Figure 18 represents the average start and stop response times of the robot for each instruction. The average start response time gradually decreases as the linear and angular speeds increase, while the average stop time increases as the linear and angular speeds increase. According to the analysis, the authors have identified that communication with autonomous robot registration is slightly slower than communication through the Web without autonomous registration.

4.4. Experiment 04: Homogeneous Multiple Robot Interaction with Simple Instruction with a Web Interface with Autonomous Robot Registration. The authors developed a launch file to create multiple robots in the same Gazebo environment. Initially, two TurtleBot robots were spawned in the empty Gazebo world at two different locations. Simple move instructions were issued to both robots simultaneously, and the average response time for the start and stop instructions was evaluated. Separate namespaces were used to identify the ROS topics of each robot; the first robot was named robot 1, and the second one robot 2. The interaction with the two TurtleBots through the Web interface with auto-registration is shown in Figure 19.

The response delays for the start and stop of the robots are represented by equations (7) and (8):

R_{m,d}^{start} = \alpha\tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + \frac{c_1}{U_x^s + \omega_z^s},   (7)

R_{m,d}^{stop} = \beta\tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + c_2 (U_x^s + \omega_z^s),   (8)

where R_{m,d}^{start} and R_{m,d}^{stop} represent the multiple robots' delay at start and stop, respectively, \tau_{d,web} represents the delay of communication through the Web interface, \tau_{d,ROS} the delay of communicating with ROS topics, \tau_{d,RT} the delay of ROS topic identification, and c_1, c_2, \alpha, and \beta are constants.

Secondly, the authors spawned another four robots in the same Gazebo environment for the experiment. Separate namespaces were given to each robot to avoid conflicts between identical ROS topics. Simple move instructions were issued to all robots simultaneously, and the average response time for the start and stop instructions was evaluated. The results are shown in Table 6, and the interaction with the four TurtleBots through the Web interface is shown in Figure 20.
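The per-robot namespacing described above can be sketched as a helper that derives each robot's topic names. The robot_i naming follows the paper; the particular topic list here is illustrative.

```python
def namespaced_topics(robot_count, topics=("cmd_vel", "odom")):
    """Give every spawned robot its own namespace so identical
    ROS topics do not conflict, e.g. /robot_1/cmd_vel."""
    return {f"robot_{i}": [f"/robot_{i}/{t}" for t in topics]
            for i in range(1, robot_count + 1)}
```

With separate namespaces, a command published on /robot_1/cmd_vel reaches only the first robot, so both robots can be driven simultaneously from the same Web interface.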
Figure 17: Single robot interaction with Web interface auto-registration.

Figure 18: Single robot interaction with Web interface auto-registration.

Figure 19: Multiple two robots' interaction with Web interface auto-registration.

Figure 20: Multiple four robots' interaction with Web interface auto-registration.

Table 6: Multiple robots average start/stop response time with Web interface and autonomous registration.

                   U_x = 0.5 m/s   U_x = 1.0 m/s   U_x = 1.5 m/s
Start response (s)
  Single robot        1.011           1.001           0.981
  Two robots          1.129           1.078           1.016
  Four robots         1.456           1.241           1.112
Stop response (s)
  Single robot        1.345           1.765           2.552
  Two robots          1.674           1.987           2.987
  Four robots         1.987           2.134           2.456

Figure 21 represents the average start and stop response times for the single robot, two robots, and four robots for each instruction, where the linear speed is changed but the angular speed is kept constant to avoid collisions among the robots. Both the average start response time and the average stop response time gradually increase as the number of robots increases.

Figure 21: Multi-robot interaction with Web interface.

4.5. Experiment 05: Move the Robots to a Specific Location with a Web Interface with Autonomous Robot Registration. The authors completed the experiment of moving the robots (a single robot, two robots, and four robots) to a given target location by an instruction issued through the Web interface. The robots were placed at different positions such that they all move the same distance on average. The map in Figure 22 represents the initial positions and target locations of the two and four robots.

The authors conducted the experiments with a single robot, two robots, and four robots, with a single instruction to move each robot to a specific location given by (x, y) coordinates. The average time taken by the robots to reach the location was measured and is presented in Table 7. The average move time increases with the number of robots and with the distance, as shown in Figure 23. The delays for moving a single robot and multiple robots are represented by equations (9) and (10):

R_{s,d}^{move} = \tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + \tau_{d,pos} + \frac{c_1}{U_x^s + \omega_z^s},   (9)

R_{m,d}^{move} = \beta(\tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + \tau_{d,pos}) + c_2 (U_x^s + \omega_z^s),   (10)

where R_{s,d}^{move} and R_{m,d}^{move} represent the single and multiple robots' delay in moving to a specific location, respectively, \tau_{d,web} represents the delay of communication through the Web interface, \tau_{d,ROS} the delay of communicating with ROS topics, \tau_{d,RT} the delay of ROS topic identification, \tau_{d,pos} the delay of getting the current position and orientation of the robot, and c_1, c_2, and \beta are constants.

4.6. Experiment 06: Robot Interaction with Multiple Instructions with a Web Interface with Autonomous Robot Registration. We completed the experiment with multiple instructions issued sequentially by the user, using a state transition diagram. A sample interaction between the user instructions issued through the Web interface and the robot is shown in Figure 24; the diagram represents three user instructions issued to control the robot. The experiment was conducted with three instructions that move the robot to three different locations, represented as (x_0, y_0), (x_1, y_1), and (x_2, y_2) and selected so that all robots move an equal distance on average.
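The state transitions driven by sequential user instructions can be sketched as a small state machine. The state and event names below are paraphrased from Figure 24 (Starting, Registered, Ready, Move, Work, Exit) and are assumptions about its exact labels, not a reproduction of the authors' implementation.

```python
# Transitions the Web interface drives: (current_state, event) -> next_state.
TRANSITIONS = {
    ("starting", "registered_with_rre"): "registered",
    ("registered", "ready"): "ready",
    ("ready", "move"): "move",
    ("move", "move"): "move",        # a new goal interrupts the current move
    ("move", "work"): "work",
    ("work", "no_instruction"): "exit",
}

class RobotStateMachine:
    def __init__(self):
        self.state = "starting"
        self.history = [self.state]  # saved so the state can be restored later

    def handle(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = nxt
        self.history.append(nxt)
        return nxt
```

In the paper's system each state is saved to and retrieved from a ROS topic, which is the source of the S_delta and R_delta terms in the delay equations below; here the history list stands in for that persistence.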
Figure 22: Initial position and target locations: (a) two robots and (b) four robots.

Table 7: Average moving time for multiple robots with a single instruction.

Average move time (s)   Move to (x_0, y_0)   Move to (x_0, y_0) and (x_1, y_1)   Move to (x_0, y_0), (x_1, y_1), and (x_2, y_2)
Single robot                  2.01                    2.22                                    3.01
Two robots                    2.24                    3.01                                    3.34
Four robots                   3.05                    3.21                                    4.01

Figure 23: Average move time for moving a robot to a specific location.

Figure 24: Multiple instructions and robot interaction.

The initial robot positions of the two robots and the four robots are represented on the map in Figure 25. The robots were initially placed with respect to the target locations so that each robot must move the same distance. The blue circles represent the initial robot positions, and the green squares represent the target locations given by the user instructions. The target locations were identified to ensure that all robots travel equal distances on average.

The equations that represent the delay caused by multiple instructions issued by the user were developed using the following mathematical notation. We use \delta_{ij} as the state transition time from state i to state j, S_\delta as the time taken to save the state in a ROS topic, R_\delta as the time taken to retrieve the state from a ROS topic, and \epsilon_n as the transition delay for n instructions, where n \in \{1, 2, 3, \ldots, l\}. The total state transition delay for a single instruction (n = 1) is shown in equation (11), and the total state transition delay for multiple instructions (n = 1, 2, 3, \ldots, l) is shown in equation (12):

\epsilon_1 = \delta_{01} + \delta_{12} + \delta_{2j} + \delta_{j6} + 7S_\delta + 7R_\delta, \quad \forall j \in \{3, 4, 5\},   (11)

\epsilon_l = \sum_{n=1}^{l} \left( \delta_{01} + \delta_{12} + \delta_{2j} + \delta_{j6} \right) + (3 + l)\left(S_\delta + R_\delta\right), \quad \forall j \in \{3, 4, 5\}.   (12)

The delays for moving a single robot and multiple robots to specific locations with multiple sequential instructions are represented by equations (13) and (14), where R_{s,d}^{mIns} and R_{m,d}^{mIns} represent the single and multiple robots' delay in moving to a specific location, respectively:

R_{s,d}^{mIns} = \epsilon_n + \tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + \tau_{d,pos} + \frac{c_1}{U_x^s + \omega_z^s},   (13)

R_{m,d}^{mIns} = \beta\epsilon_n^m + \tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + \tau_{d,pos} + c_2 (U_x^s + \omega_z^s).   (14)

The experiment was conducted with multiple instructions for a single robot, two robots, and four robots. In each instruction, all robots were given target locations that make them travel the same distance on average, so that the completion times are comparable. The average completion times are given in Table 8, and the relationship between the average completion time and the number of instructions is shown in Figure 26.

Figure 25: (a) Initial positions of two robots; (b) initial positions of four robots.

Table 8: Average completion time for multiple robots with multiple instructions.

Average completion time (s)   Single instruction   Two instructions   Three instructions
Single robot                        2.32                 2.98               3.56
Two robots                          2.59                 3.24               3.98
Four robots                         3.23                 3.57               4.62

Figure 26: Average completion time with multiple instructions with state transition.

4.7. Experiment 07: Heterogeneous Multiple Robot Interaction with Semantic Instruction with a Web Interface with Autonomous Robot Registration. We evaluated our system in the Gazebo environment using three robots: a TurtleBot, a Husky, and a TiaGo. The built-in Python HTTP server (python -m http.server) was executed to serve the necessary Web pages, with JavaScript implementing the Web interface.
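The rosbridge server exchanges JSON operations with non-ROS clients over a websocket; a "publish" operation wraps a ROS message. Below is a minimal sketch of building such a frame for a geometry_msgs/Twist velocity command. The topic name is hypothetical; sending the frame over an actual websocket connection is omitted.

```python
import json

def make_publish_op(topic, linear_x, angular_z):
    """Build a rosbridge 'publish' operation carrying a
    geometry_msgs/Twist, ready to send over the websocket."""
    msg = {
        "linear": {"x": linear_x, "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
    }
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Frame that would drive the first robot forward at 0.5 m/s.
frame = make_publish_op("/robot_1/cmd_vel", 0.5, 0.0)
```

The JavaScript in the Web pages builds the same kind of frame on each button press, which is how the browser-side interface reaches the ROS topics without being a ROS node itself.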
We used the rosbridge server as an interface between ROS and non-ROS clients. The user adds instructions on the Web interface provided by the system to interact with the multiple robots. The instruction types used to test our system are shown in Table 9. Type I is a general instruction with no synonym or semantic issue. A synonym is introduced in instruction type II, which is processed by the synonym analysis algorithm. The semantics of the instruction are unclear in instruction type III. Instruction type IV has both synonym and semantic issues. For instruction type V, the synonym and the semantics are not programmed, so the user has to resolve the synonym and semantic issues. The system was tested with many instructions of types I to V.

Table 9: Instruction types used for testing.

Instruction type   Description                                                             Example
Type I             Instruction without synonym or semantic issue                           Move to A and clean
Type II            Instruction with synonym                                                Shift to B and clean
Type III           Instruction with semantic issue                                         Move to roof and clean
Type IV            Instruction with synonym and semantic issue                             Shift to sky and clean
Type V             Instruction with synonym and semantic issue (not programmed;            Proceed to sea and clean
                   user involvement is needed)

The identification of the synonym and semantic issues was performed accurately by our algorithms. Furthermore, we completed a time complexity analysis of our algorithms to measure the system's performance using Big O notation. The time complexities of all algorithms are shown in Table 10. The time complexity is calculated from the number of loops used by each algorithm, where n is the input size. The graph of the time complexity of all algorithms is shown in Figure 27. According to the time complexity analysis, the Robot Registration Algorithm() and the ROS Topic Identification Algorithm() have the poorest performance because their time complexity is O(n^4).

Table 10: Time complexity of algorithms.

Algorithm name                               Time complexity in Big O notation
Robot Registration Algorithm()               O(n^4)
Synonym Analysis Algorithm()                 O(n^2)
Semantic Analysis Algorithm()                O(n^3)
Get Position and Orientation Algorithm()     O(n)
ROS Topic Identification Algorithm()         O(n^4)

Figure 27: Graph of the time complexity of all algorithms.

The time complexity analysis for each type of instruction is shown in Table 11. The command interpreter uses the Synonym Analysis Algorithm() and the Semantic Analysis Algorithm(), which take O(n^2) and O(n^3) running time, respectively, based on asymptotic notation in algorithm analysis. Therefore, instruction type III performs poorly compared with instruction type II. Instruction type V is the worst because user interaction is needed to resolve the synonym and semantic issues in the instruction, since those synonyms and semantics are not programmed.

Table 11: Instruction types with time complexity.

Type       Algorithms used in command interpreter                          Time complexity   Algorithms used in robot registration and command publishing engine   Time complexity
Type I     Analysis algorithm is not needed                                O(1)              RR Algorithm() + ROS TI Algorithm()                                   O(n^4)
Type II    Synonym Analysis Algorithm()                                    O(n^2)            RR Algorithm() + ROS TI Algorithm()                                   O(n^4)
Type III   Semantic Analysis Algorithm()                                   O(n^3)            RR Algorithm() + ROS TI Algorithm()                                   O(n^4)
Type IV    Synonym Analysis Algorithm() + Semantic Analysis Algorithm()    O(n^3)            RR Algorithm() + ROS TI Algorithm()                                   O(n^4)
Type V     Synonym Analysis Algorithm() + Semantic Analysis Algorithm()    O(n^3)            RR Algorithm() + ROS TI Algorithm()                                   O(n^4)
           + human intervention is needed
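A dictionary-based sketch of the synonym normalization idea follows. The synonym table is hypothetical and the paper's Synonym Analysis Algorithm() is not reproduced here; the example only illustrates mapping words such as "shift" to the programmed action "move", as in Table 9.

```python
# Hypothetical synonym table mapping unprogrammed words to
# canonical action words the command interpreter understands.
SYNONYMS = {"shift": "move", "proceed": "move", "go": "move"}

def normalize_instruction(instruction):
    """Replace known synonyms with canonical action words so the
    command interpreter sees only programmed vocabulary."""
    words = instruction.lower().split()
    return " ".join(SYNONYMS.get(w, w) for w in words)
```

A type II instruction such as "Shift to B and clean" is normalized to a type I instruction before ROS topic identification; words absent from the table are left untouched, which is where human intervention becomes necessary for type V instructions.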
In addition to the time complexity analysis for instruction types I to V, we conducted two types of experiments in the Gazebo environment with the TurtleBot, Husky, and TiaGo robots. In the first experiment type, we moved all heterogeneous robots to a given goal in an open world in Gazebo; in the second type, we navigated all heterogeneous robots to a given goal among obstacles. The three robots (TurtleBot, Husky, and TiaGo) in an open world in Gazebo are shown in Figure 28. The experiments were conducted with the system above on multiple robots, using movement and navigation with 20 type IV instructions. Users can update the goal and task assigned to each robot for the different schedules in Table 12. We added a self-rotation for each robot to simulate the task completed by the robot in its scheduled slot. We found some errors in the Robot Registration Algorithm() and the ROS Topic Identification Algorithm() for movements and navigation; navigation required more ROS topic settings than movement in an open world.

Figure 28: Husky, TurtleBot, and TiaGo robots in an empty world.

Table 12: Goal and task scheduling table.

Robot name   Time slot 1 (t_0–t_1)   Time slot 2 (t_1–t_2)   Time slot 3 (t_2–t_3)   Time slot 4 (t_3–t_4)
Turtlebot    A(2, 2) + rotate(5)     Free time               B(−2, 2) + rotate(5)    C(2, −2) + rotate(5)
Husky        D(5, 5) + rotate(10)    E(5, −5) + rotate(10)   Free time               F(0, 5) + rotate(10)
TiaGo        Free time               G(1, −1) + rotate(15)   H(0, 1) + rotate(15)    I(−1, 1) + rotate(15)

The results of the experiments are presented for the three robots (Turtlebot, Husky, and TiaGo), where each goal was tested 20 times at four different time slots: 8.00–10.00 am, 10.00–12.00 noon, 12.00–2.00 pm, and 2.00–4.00 pm. We received different ontology searching errors, robot registration errors, ROS topic identification errors, and command publishing errors in each time slot. Therefore, we gradually minimized the errors with the experience gained from each timed experiment.

The results of experiment type 01 (without navigation) are shown in Table 13. According to the analysis, we have identified that the TurtleBot has a higher success rate than the other robots, as shown in Figure 29.

Table 13: Experiment results for goal without navigation.

Robot       Goal 01 success rate   Goal 02 success rate   Goal 03 success rate   Goal 04 success rate
            (08.00–10.00)          (10.00–12.00)          (12.00–02.00)          (02.00–04.00)
Turtlebot        0.65                   0.85                   0.90                   0.95
Husky            0.50                   0.65                   0.70                   0.80
TiaGo            0.45                   0.55                   0.65                   0.85

The results of experiment type 02 (with navigation) are shown in Table 14. The success rate also increases over the time slots, similar to experiment type 01, as shown in Figure 30.

The running time of the Robot Registration Algorithm and the ROS Topic Identification Algorithm is O(n^4), where n is the number of actions defined in the user instruction; these two algorithms had the highest time complexity among the algorithms developed in our system.

In general, the delay in the start response time decreased when the linear and angular speeds were increased, while the delay in the stop response time increased. Delay occurred when the robot was controlled without the Web interface because of the system call execution through the operating system and the communication with ROS functions. When a robot was controlled through the Web without auto-registration, the delay occurred in the communication through the Web and the communication with ROS through the rosbridge server.
The success rate is measured over the 20 tests: it is the number of successful tests without errors out of the 20 tests for each robot in each type of experiment.

Table 14: Experiment results for goal with navigation.

Robot       Goal 01 success rate   Goal 02 success rate   Goal 03 success rate   Goal 04 success rate
            (08.00–10.00)          (10.00–12.00)          (12.00–02.00)          (02.00–04.00)
Turtlebot        0.40                   0.55                   0.75                   0.80
Husky            0.35                   0.40                   0.55                   0.70
TiaGo            0.30                   0.45                   0.60                   0.75

Figure 29: Experiment without navigation success rate.

Figure 30: Experiment with navigation success rate.

When auto-registration was added to the system, the delay taken by the ROS topic identification algorithm must also be added. It is evident that the delay time increases as the number of robots increases. When a robot is sent to a specific location, the time taken to get the current position and orientation must be added to the delay time. When a robot is controlled by multiple instructions, a state transition system is used, so the time taken by the state transition system to save and retrieve the state must be added to the delay time to obtain more accurate results. According to the analysis, the authors have identified that Web communication is slightly faster than communication through the terminal.

5. Conclusion and Future Works

This research study has developed a system that issues instructions through a Web interface and controls multiple robots. Initially, all robots need to register with the robot registration engine. The autonomous robot registration and autonomous ROS topic identification algorithms were implemented successfully, although the delay time increased with the introduction of these algorithms. We have derived mathematical equations for each delay time, which vary based on the inputs and system characteristics. The experiment results indicated that the autonomous robot registration was successful and that the communication performance through the Web decreased gradually with the number of registered robots. The running time of the robot registration algorithm and the ROS topic identification algorithm is O(n^4). We have not implemented access control for the multiple robots in the same environment; we will implement access control and synchronization for all robots in our future work.

Data Availability

There are no data involved in this research.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Sri Lanka Institute of Information Technology under grant number FGSR/RG/FC/2021/05. The authors thank SLIIT (Sri Lanka Institute of Information Technology) for the support given towards this research project.
Fortuna, “Multi-robot localization and orientation estimation using robotic cluster matching algorithm,” Robotics and Autono- mous Systems, vol. 63, pp. 108–121, 2015. [34] A. A. Ali, A. T. Rashid, M. Frasca, and L. Fortuna, “An al- gorithm for multi-robot collision-free navigation based on shortest distance,” Robotics and Autonomous Systems, vol. 75, pp. 119–128, 2016. [35] S. K. Rajapaksha and N. Kodagoda, “Internal structure and semantic web link structure based ontology ranking,” in Proceedings of the 2008 4th International Conference on In- formation and Automation for Sustainability, pp. 86–90, December 2008. [36] U. Rajapaksha and H. Fernando, “Ontology matching and ranking: issues and research challenges in semantic web application development,” in Proceedings of the ITRU Re- search Symposium, Sri Lanka, 2009. [37] S. Rajapaksha and C. Jayasekara, “Ontology based semantic file search assistant,” in Proceedings of the 2021 10th Inter- national Conference on Information and Automation for Sustainability (ICIAfS), pp. 310–315, Galle, Sri Lanka, August