Teleoperation Robot Control of a Hybrid EEG-Based BCI Arm Manipulator Using ROS

Hindawi Journal of Robotics, Volume 2022, Article ID 5335523, 14 pages. https://doi.org/10.1155/2022/5335523

Research Article

Vidya Nandikolla and Daniel A. Medina Portilla
Mechanical Engineering, College of Engineering and Computer Science, California State University Northridge, Northridge, CA 91330, USA
Correspondence should be addressed to Vidya Nandikolla; vidya.nandikolla@csun.edu

Received 19 April 2022; Accepted 14 May 2022; Published 24 May 2022
Academic Editor: L. Fortuna

Copyright © 2022 Vidya Nandikolla and Daniel A. Medina Portilla. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The development of assistive robots is gaining momentum in the robotic and biomedical fields. This paper presents an assistive robotic system for object manipulation to aid people with physical disabilities. The robotic arm design is imported to a simulated environment and tested in a virtual world. This research includes the development of a versatile design and testing platform for robotic applications with joint torque requirements, workspace restrictions, and control tuning parameters. Live user inputs and camera feeds are used to test the movement of the robot in the virtual environment. The environment and user interface are built with the Robot Operating System (ROS). Live brain-computer interface (BCI) commands from a trained user are successfully harvested and used as an input signal to pick a goal point from 3D point cloud data and to calculate the goal position of the robot's mobile base, placing the goal point in the robot arm's workspace. The platform created allows for quick design iterations to meet different application criteria and tuning of controllers for desired motion.

1. Introduction

The use of assistive robots for object manipulation admits many different types of user-controlled inputs. One type of input is brain signal data from a noninvasive electroencephalogram (EEG) cap. This has the benefit of being applicable to anyone regardless of physical disability, at the cost of intensive training requirements. To increase the accessibility of assistive robots with brain signal inputs, the robot must be designed to minimize the amount of training while keeping its complex object manipulation functionality.

An assistive robot is usually composed of an arm with multiple joints to manipulate objects, a mobile platform to travel in an environment, and a head with a stereo-vision system to recognize human faces and localize manipulable objects [1, 2]. One type of manipulator control can be referred to as end-effector control, where a user specifies a goal position of the end-effector and path-solving algorithms automatically move the joints [3]. Control methods for robotic arms using BCI can reach over 95% accuracy but require long-term training [3].

Once a robotic manipulator has a goal position, there are different analytical and numerical methods to solve for the joint angles [4]. To achieve the accuracy required to grasp an object, a semiautonomous control approach can be used, where the human-robot interface employs a vision system to control robot motion for part of the positioning task [5]. The vision system requires software or algorithms to interpret the incoming data; these can be object recognition software or other forms of image processing that may require calibration.
By lowering the number of user inputs available, more complexity is added to the automated subsystems.

BCI involves the brain and a device sharing an interface to enable a communication channel between the brain and an externally controlled object [6]. To obtain EEG data, a cap is used with electrodes mapped to specific head locations, as in the example seen in Figure 1.

Figure 1: Sample of electrode mapping using an EEG cap [6].

Due to backlash, eccentric joints, and the influence that the flexibility of driving axles has on the accuracy and repeat accuracy of arm positioning, a compensation method should be used for good results [7]. The use of joint torque sensing technologies on robotic manipulators adds complexity and may require the addition of compliant members to the robot [8]. Another method of compensation can use the vision system, which requires the arm poses to be in the scope of the camera. Using a vision system for robot arm position feedback simplifies the hardware components required for the design but adds initial calibration and physical markers to the robotic arm. For safe manipulation of a humanoid robot arm, self-loads and external loads are verified before an object is manipulated [9].

2. Robot Manipulator

The design of the robot arm is based on a person's hanging arm, with the intent of making the user input intuitive. To keep the first design iteration simple, the arm has a planar workspace with three joints: the shoulder, elbow, and wrist joints. The arm is rigidly attached to an omnidirectional three-wheel mobile robot, allowing the arm's planar workspace to translate and rotate. The end effector is attached at the end of the wrist joint through a triangular attachment 3D printed for this application, as shown in Figure 2.

Figure 2: Robot arm 3D model.

The planned mounting of all components for the assistive robot system is shown in Figure 3. The robotic arm, battery pack, main computer, motor drivers, and D435 camera are mounted on top of the mobile base. Other sensors and electrical components used in the mobile base are placed inside the robot. This design allows for a cover that hides all main components except the robotic arm and the charging port. A slot for the camera is still required, but an infrared-transparent material can be used for protection. The manipulator used is a Brunel Hand 2.0, an open-source robotic end-effector. The manipulator and its built-in controller are shown in Figure 4.

Figure 3: Populated mobile base.
Figure 4: Brunel Hand 2.0 and modified control board.

3. Kinematics

The mathematical model of the robot arm is shown in Figure 5. With this schematic, constraints on the joint angles are added to avoid collisions and to define the arm's workspace. Initial torque requirements and link lengths are calculated for a desired payload placed on the manipulator frame, located at the end of this open-loop kinematic chain. The initial constraints of the arm workspace are as follows:

-30^\circ \le \theta_s \le 180^\circ,   (1)

-90^\circ \le \theta_e \le 180^\circ,   (2)

L_s c_s + (L_e + h_\mathrm{offset}) c_{se} \le H,   (3)

where c_s = \cos\theta_s and c_{se} = \cos(\theta_s + \theta_e).

Figure 5: Robot arm schematic for a mobile platform.

To solve the forward and inverse kinematics, a model representing each relevant frame is created, as seen in Figure 6. The location of the camera, where the point cloud data are measured, is considered frame zero. Frame 1 represents the shoulder joint, Frame 2 the elbow joint, Frame 3 the wrist joint, and Frame T the tool, or hand, frame.

Figure 6: Location of frames.

The distances and angles between each frame are measured with the values shown in Figure 7 and are summarized in Table 1. These measurements came from the existing SolidWorks model, and any design iteration that changes these distances will affect the calculated transformation matrices. The transformation matrices translate distances from the camera frame to any other frame on the robot.

Figure 7: Distances from robot center.

Table 1: Values of distances between frames.
Variable   Value (meters)
d0         0.0436
d1         0.15036
d2         0.1467
d3         0.035
d4         0.115
d5         0.1255
d6         0.09

To derive the translation between each frame, the X-Y-Z fixed angles method is used [10]. Using values from Table 1, the transformation matrix {}^0_T T and its inverse are given in equations (4) and (5):

{}^0_T T = \begin{bmatrix} -0.5 & -0.8660 & 0 & -0.0261 \\ 0.8660 & -0.5 & 0 & -0.1807 \\ 0 & 0 & 1 & 0.2472 \\ 0 & 0 & 0 & 1 \end{bmatrix},   (4)

{}^0_T T^{-1} = \begin{bmatrix} -0.5 & 0.8660 & 0 & 0.1434 \\ -0.8660 & -0.5 & 0 & -0.1129 \\ 0 & 0 & 1 & -0.2472 \\ 0 & 0 & 0 & 1 \end{bmatrix}.   (5)
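The transform in (4) can be applied directly in code. The minimal NumPy sketch below is our illustration, not the authors' implementation: it inverts the homogeneous transform analytically, reproducing (5), and maps a hypothetical point between the camera and hand frames (the direction of the mapping depends on the convention adopted for {}^0_T T, which the text does not state explicitly).

```python
import numpy as np

# Homogeneous transform between the camera frame (0) and the tool/hand frame (T),
# using the numeric values reported in equation (4).
T_0T = np.array([
    [-0.5,    -0.8660, 0.0, -0.0261],
    [ 0.8660, -0.5,    0.0, -0.1807],
    [ 0.0,     0.0,    1.0,  0.2472],
    [ 0.0,     0.0,    0.0,  1.0   ],
])

def invert_homogeneous(T):
    """Invert a 4x4 homogeneous transform as [R^T, -R^T t; 0 0 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

T_0T_inv = invert_homogeneous(T_0T)   # matches equation (5) up to rounding

# Example: map a point expressed in one frame into the other frame.
p_camera = np.array([0.30, 0.10, 0.25, 1.0])   # hypothetical point, homogeneous coordinates
p_hand = T_0T_inv @ p_camera
print(p_hand[:3])
```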
The goal point 0P comes from the camera frame and is selected by the user through the point cloud camera feed. This point is translated to the hand frame to obtain TP. Based on the desired grabbing orientation, a grabbing point TG is calculated from the desired arm position. The grabbing point is translated to the camera frame as 0G, and the difference between 0P and 0G is used to move the robot to a new location. Once the robot moves to the desired location, the goal point is in the arm's workspace and the grabbing operation starts.

Since the arm has a planar movement defined by the z-y axes of the hand frame, the x component of TG will always be zero. The z component is fixed, meaning we assume the robot moves in a flat environment. This means that the x and z components of TG are known from the input TP. To calculate the y component of TG, a grabbing pose is picked to determine the bounding equations. The schematic of a sample point and grabbing at the edge of the workspace is shown in Figure 8. The purple vector is the grabbing point measured from the hand frame before actuation. The orange vector is the z-offset between the hand frame and the axis of rotation of the shoulder joint; it is shown as a number in the figure but is defined by the variables introduced earlier in Figure 7.

Figure 8: Grabbing at the edge of workspace.

For the grabbing point to be at the edge of the workspace (Figure 8), the Pythagorean theorem is applied to the right triangle created using \theta_s, as shown in (6). Equations (7) and (8) solve for R_Y and \theta_s:

({}^TP_z + 0.1005)^2 + R_Y^2 = (L_s + L_e + h_\mathrm{offset})^2,   (6)

R_Y = \sqrt{(L_s + L_e + h_\mathrm{offset})^2 - ({}^TP_z + 0.1005)^2},   (7)

\theta_s = \operatorname{atan}\!\left(\frac{R_Y}{{}^TP_z + 0.1005}\right).   (8)

Since the mobile base can move in the x and y directions, the arm will only label a point unreachable if the z component is outside the workspace. This restriction is shown in (9):

{}^TP_z \le (L_s + L_e + h_\mathrm{offset}) - 0.1005.   (9)

The schematic for grabbing with a horizontal pose is shown in Figure 9. For this pose to be valid, equations (10), (11), and (12) must hold; they are used to solve for the joint angles and to determine whether the point is reachable:

L_s \cos\theta_s = {}^TP_z + 0.1005,   (10)

\theta_s + \theta_e = 90^\circ,   (11)

{}^TP_Y = L_s \sin\theta_s + (L_e + h_\mathrm{offset}) + R_Y.   (12)

Figure 9: Grabbing horizontally.

With these equations, the arm joint angles and the trajectory vector for the mobile base can be solved for an input goal point. Since each grabbing pose has a different set of inverse kinematic equations, the desired grabbing pose must be selected before, or as a part of, the user input. The tests in this research use the grabbing-at-the-edge-of-workspace pose.
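The edge-of-workspace pose reduces to a closed-form computation. The sketch below is a simplified reading of equations (6)-(9): the 0.1005 m wrist offset comes from the text, the link values in the example call are arbitrary placeholders, and atan2 is used in place of atan for quadrant safety.

```python
import math

def edge_of_workspace_ik(tp_z, L_s, L_e, h_offset, wrist_offset=0.1005):
    """Solve equations (6)-(9) for the edge-of-workspace grabbing pose.

    tp_z        : z component of the goal point in the hand frame, TP_z (meters)
    L_s, L_e    : shoulder and elbow link lengths (meters)
    h_offset    : hand-frame offset defined in Figure 7 (meters)
    wrist_offset: fixed 0.1005 m offset appearing in equations (6)-(10)
    Returns (theta_s in degrees, R_Y in meters), or None if the point is unreachable.
    """
    reach = L_s + L_e + h_offset
    # Reachability check from equation (9): only the z component can make a point
    # unreachable, since the mobile base covers the x and y directions.
    if tp_z > reach - wrist_offset:
        return None
    # Equation (7): Pythagorean relation for the right triangle in Figure 8.
    r_y = math.sqrt(reach**2 - (tp_z + wrist_offset)**2)
    # Equation (8): shoulder angle from the same triangle.
    theta_s = math.degrees(math.atan2(r_y, tp_z + wrist_offset))
    return theta_s, r_y

# Hypothetical link values for illustration only (not the paper's measurements).
print(edge_of_workspace_ik(tp_z=0.10, L_s=0.15, L_e=0.15, h_offset=0.09))
```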
4. Simulation Environment

The software used consists of ROS and Gazebo. ROS is a framework used to build robots, with useful tools to create control diagrams and graphical interfaces for a robot. Gazebo is robot simulation software that uses a physics engine for dynamic simulation and provides simulated sensor data to test robots. With these open-source programs, we can import the existing design to test and tune different control algorithms. We can also use real sensors and live data to provide a simulation command and modify the design based on performance goals.

The visual model, shown in Figure 10, is imported from SolidWorks using an STL file format. This model was created before planning to simulate the robot and is used as a reference to design the collision and inertial models. The simulation environment uses the collision and inertial models to run the physics engine, while the visual model is only a skin for the robot. Because of this, a robot design process should start with the collision and inertial models and end with a visual model.

Figure 10: Visual model.

The collision model in this project is shown in Figure 11. In this model, the locations of all joints and links are defined. The origin of each link and joint is referenced to the previous joint. This model uses simple geometry to define the collision surfaces of the robot. Once a robot design is finalized, the simple geometry can be replaced with the visual model to obtain more detailed collision points, but at that point it would be more beneficial to finalize all testing in a real-world environment.

Figure 11: Collision model.

The inertial model defines the center of mass and the rotational inertias in all directions for a link. This is done by defining a 3D inertia tensor I for each link, shown in (13). This equation is valid for a solid cuboid of width w, height h, depth d, and mass m:

I = \frac{m}{12}\begin{bmatrix} h^2 + d^2 & 0 & 0 \\ 0 & w^2 + d^2 & 0 \\ 0 & 0 & w^2 + h^2 \end{bmatrix}.   (13)

By estimating the masses of the links, the inertial model is defined for the entire robot and is shown in Figure 12. These values are used to calculate the torques on the joints at any time during the motion of the arm.

Figure 12: Inertial model.
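Equation (13) is straightforward to encode when generating the inertial entries of a robot description. The short helper below is our illustration, not the authors' tooling; the link mass and dimensions in the example are hypothetical, and the emitted element follows the standard URDF <inertia> attribute names.

```python
def cuboid_inertia(mass, w, h, d):
    """Diagonal inertia tensor of a solid cuboid (equation (13)),
    with width w, height h, depth d in meters and mass in kg."""
    ixx = mass * (h**2 + d**2) / 12.0
    iyy = mass * (w**2 + d**2) / 12.0
    izz = mass * (w**2 + h**2) / 12.0
    return ixx, iyy, izz

# Example: emit a URDF <inertia> element for a hypothetical 0.5 kg link.
ixx, iyy, izz = cuboid_inertia(mass=0.5, w=0.04, h=0.15, d=0.04)
print(f'<inertia ixx="{ixx:.6f}" ixy="0" ixz="0" '
      f'iyy="{iyy:.6f}" iyz="0" izz="{izz:.6f}"/>')
```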
5. Hybrid Control

ROS uses a control block diagram to communicate between different subsystems. Each node in the control block diagram can represent a robot model, a controller, an input or output, and so on. These are usually stand-alone programs that are connected using topics. A topic is a message that a node can publish or subscribe to. Figure 13 shows part of the control block diagram used on the simulated model. The omnibot_arm node subscribes to the clicked_point topic, which publishes commands from the user interface. The user publishes a message to clicked_point using BCI commands through the user interface.

Figure 13: Partial control block diagram for an assistive robot system.

The testing done focuses on a semiautonomous end-effector control mode, but the system allows for direct joint control through BCI for different applications. The omnibot_arm node solves the inverse kinematic equations to create a goal angle for each joint of the robot arm. The goal angle message is published to each motor's controller. The controller uses tuned PID values to move each joint, and the output is sent to Gazebo for visualization. The output of omnibot_arm can also write to the Jetson Xavier's communication pins to send commands to external hardware. The trajectory_markers_node also solves the inverse kinematic equations and adds visualization vectors to the user interface to show the location of the selected point, the grabbing point, and the goal position of the mobile base.

Figure 14 shows the user interface used to select 0P, a goal point from a 3D point cloud feed. This interface is made up of two displays: the visualization display on the left and the user-input screen on the right. The point cloud data feed is shown in both displays. The visualization display is a 3D environment used to verify selected points, visualize markers, and give the user a different perspective of the environment. The user-input screen is a 2D camera feed with the point cloud data as an overlay. A BCI-controlled cursor is bound to stay within the user-input display during operation.

Figure 14: User interface for BCI-controlled motion.

Four of a user's BCI commands map to up, down, left, and right movements of the BCI-controlled cursor. The fifth BCI command maps to grab, while the last command maps to confirming the action. This mapping is summarized in Table 2. If the grabbing action is confirmed, a new point selected on the user-input screen becomes a release action, where the robot will move to the desired goal point and place the object. A confirmation command is used here as well. The grabbing and releasing modes switch back and forth as each action is confirmed. The confirmation is time limited, meaning the user must confirm the action within a timeout period or the action is invalidated.

Table 2: Modes of operation.
Input value   Mode 1 (move robot base)   Mode 2 (object manipulation)
1             Forward                    Move cursor down
2             Rotate left                Move cursor up
3             Rotate right               Move cursor right
4             (none)                     Move cursor left
5             (none)                     Select point to grab/release
6             Switch to mode 2           Confirm and switch to mode 1
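To make the data flow described above concrete, the following minimal rospy sketch mirrors the omnibot_arm pattern: a clicked point comes in, an inverse-kinematics routine runs, and goal angles go out to joint position controllers. This is our reconstruction, not the authors' source; the elbow and wrist command topics appear in the Figure 13 node graph, while the shoulder topic name and the IK placeholder are assumptions.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PointStamped
from std_msgs.msg import Float64

def solve_arm_ik(point):
    # Placeholder for the inverse-kinematics equations (6)-(12);
    # returns hypothetical shoulder/elbow/wrist goal angles in radians.
    return 0.5, -0.3, 0.0

class OmnibotArmNode(object):
    """Sketch of the omnibot_arm pattern: clicked point in, joint goals out."""
    def __init__(self):
        # Controller command topics follow the ros_control convention
        # (e.g. /omni/elbow_controller/command in Figure 13); the shoulder
        # topic name is assumed by analogy.
        self.pub_shoulder = rospy.Publisher('/omni/shoulder_controller/command', Float64, queue_size=1)
        self.pub_elbow = rospy.Publisher('/omni/elbow_controller/command', Float64, queue_size=1)
        self.pub_wrist = rospy.Publisher('/omni/wrist_controller/command', Float64, queue_size=1)
        rospy.Subscriber('/clicked_point', PointStamped, self.on_clicked_point)

    def on_clicked_point(self, msg):
        theta_s, theta_e, theta_w = solve_arm_ik(msg.point)
        self.pub_shoulder.publish(Float64(theta_s))
        self.pub_elbow.publish(Float64(theta_e))
        self.pub_wrist.publish(Float64(theta_w))

if __name__ == '__main__':
    rospy.init_node('omnibot_arm_sketch')
    OmnibotArmNode()
    rospy.spin()
```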
To tune the arm's joint controllers, a ROS tool called rqt is used. A PID position controller is used to generate the output to all arm joints. The values of the PID gains can be changed dynamically, making the tuning process quick. The interface for changing the PID values of different joints is shown in Figure 15.

Figure 15: Initial PID values for different joints.

A command is sent to a joint using a message publisher. The tuning process can be used to tune a single joint or to tune for more complex motions by moving multiple joints at once. The message can be a constant value or a time-dependent function. Both are shown in Figure 16, where the shoulder moves to a set angle while the elbow moves sinusoidally.

Figure 16: Message publisher used for tuning of joint controllers.

Gazebo is used to simulate the robot's movement and verify the desired motion. The entire tuning setup is shown in Figure 17. The effects of gravity are seen in the actual movement of the shoulder joint, noted by the dark blue line. The joint lags on an upward motion but does not struggle to move downward with the given goal signal, noted by the light blue line. This is used to tune the motors, but it can also be used to size up the motor, decrease the workspace reach, or limit the motion paths and joint speeds of the arm. Changing each parameter is an iterative process that ends when all acceptable functionality is met.

Figure 17: Tuning environment setup.
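The constant and time-dependent tuning commands of Figure 16 can be reproduced with a small publisher like the one below. It is a sketch under the same assumed controller topic names as in the previous listing; the set angle, amplitude, and frequency are arbitrary example values, not the values used in the paper.

```python
#!/usr/bin/env python
import math
import rospy
from std_msgs.msg import Float64

# Drive the shoulder to a constant set angle while the elbow follows a sinusoid,
# mirroring the tuning experiment described around Figures 16-18.
rospy.init_node('joint_tuning_publisher')
pub_shoulder = rospy.Publisher('/omni/shoulder_controller/command', Float64, queue_size=1)
pub_elbow = rospy.Publisher('/omni/elbow_controller/command', Float64, queue_size=1)

set_angle = 1.0               # rad, example step command for the shoulder
amplitude, freq = 0.5, 0.25   # rad and Hz, example sinusoid for the elbow
rate = rospy.Rate(50)
t0 = rospy.get_time()
while not rospy.is_shutdown():
    t = rospy.get_time() - t0
    pub_shoulder.publish(Float64(set_angle))
    pub_elbow.publish(Float64(amplitude * math.sin(2.0 * math.pi * freq * t)))
    rate.sleep()
```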
6. Test Data and Results

Each arm motor is tuned; due to the torque limits on each joint and the inertia of the arm, small changes in the PID parameters did not affect the output motion. This tool is useful in choosing the torque ratings of each motor if certain speeds or load capacities are the design constraints. The final PID gains used are stored in a file read by the robot definition source code.

Figure 18 shows the response of the shoulder joint to a goal angle command while the elbow joint undergoes sinusoidal motion. The dark blue line represents the motion of the shoulder joint, which rises to the goal angle with disturbances due to the oscillating mass. The disturbances on the shoulder joint can be used to calculate the torque on the joint. Since stepper motors only move in discrete steps, the real joint would not experience any motion until the holding torque value is exceeded; other DC motors may experience the disturbance motion, depending on the motor characteristics. Figure 18 can also be used to determine rise time, overshoot, steady-state error, and other control-based design constraints used to tune the joints for any application. The black lines show the peaks of the disturbance, and the centroid of the disturbances is shown with the dashed orange line. This dashed orange line also shows that a steady-state error occurs due to the dynamic load present.

Figure 18: Response of the shoulder joint with disturbance.

The kinematics of the robot were verified through the user interface created using Rviz. By selecting a point from the point cloud, visualization vectors were added to the environment to show the robot's actions. Figure 19 shows the markers created when a point is selected. The green vector points to the selected goal point from the camera frame. The red vector points to the same goal point from the hand frame, verifying the derived transformation matrix. The grabbing point is derived from the hand frame. This grabbing point is then translated to the camera frame, shown as the blue vector, using the inverse transformation matrix. The difference between the goal point and the grabbing point is calculated and shown as the white vector. The white vector is the goal position command sent to the mobile base. The movement of the base places the goal point at the grabbing point in the arm's workspace. The goal position command should always have a zero z-component, as we assume the robot cannot climb or descend in elevation.

Figure 19: Sample point selected and associated visual markers.
Figure 20: Sample unreachable point.

Figures 19 and 20 show sample points manually selected to verify distances and visualization markers. Figure 21 shows a stream of BCI data that moved a cursor and selected grabbing points. The BCI data stream was recorded by a user who underwent the BCI training process and was able to generate commands.

Figure 21: BCI stream selected points.

Verifying the inverse kinematics in the visualization environment ensures the simulations work correctly. To verify the motion of the real robot, the values selected through the camera feed must be calibrated. A simple test is performed due to the lack of access to a physical robot and is shown in Figures 22 and 23. In this test, the object is selected from a known distance. The goal x and goal y distances the mobile base must traverse to place the goal point at the grabbing point are read from the output of the visualization markers program. The physical camera is then manually moved by those goal distances. Figures 24 and 25 show the camera and visualization environment after this movement. The original goal point is read again to find the error in each direction. The values for this test are summarized in Table 3. With access to the physical robot, this test would be run multiple times to verify calibration values and to ensure the camera is leveled to the ground.

Figure 22: Point selection verification physical setup.
Figure 23: Visualization environment for point verification.
Figure 24: Camera moved based on inverse kinematics.
Figure 25: Visual environment of a moved camera.

Table 3: Camera input values.
Axis                    Distance (cm)
Mobile base X-goal      51.66
Mobile base Y-goal      13.63
X error after moving    0.017
Y error after moving    4.24

The Brunel hand has been previously tested with different gripping styles depending on the size and shape of an object, as shown in Figure 26. These grabbing modes are selected based on the object but have not been brought to the simulated environment. Bringing these grabbing motions to the simulated environment changes the inverse kinematics of the end-effector. In order to grab an object, the grabbing point 0G must be offset from the goal point 0P, as this goal point is on the surface of the object. The offset is defined by the limits of the manipulator.

Figure 26: Gripping modes of Brunel Hand based on the object [11].
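As a compact summary of the base goal command used in these tests (the white vector of Figure 19), the sketch below computes the planar difference between the selected goal point and the grabbing point. It is our illustration only; both points are assumed to already be expressed in the camera frame, and the example coordinates are hypothetical.

```python
import numpy as np

def base_goal_command(p_goal_cam, g_grab_cam):
    """Planar goal for the mobile base: the difference 0P - 0G with its
    z component forced to zero, since the base cannot change elevation."""
    delta = np.asarray(p_goal_cam, dtype=float) - np.asarray(g_grab_cam, dtype=float)
    delta[2] = 0.0
    return delta

# Hypothetical camera-frame points (meters) for illustration.
print(base_goal_command([0.52, 0.14, 0.30], [0.00, 0.00, 0.30]))   # -> [0.52 0.14 0.]
```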
7. Conclusion

As pandemic waves dominate the world, remote diagnosis is becoming the new trend in modern medicine [12]. The new aspect of robotic applications includes the human-machine interface for remote applications. A human operator controlling an assistive robotic device demands reliable and efficient operation in order to work in remote conditions. The main objective of our research study in teleoperation was to improve the end-user experience by providing a higher level of information for remote operation [13]. In the study, the BCI system was processed and the neural signals were translated to operate a robotic system. The applications of such a system are heavily studied to allow people with motor disabilities to control external devices through their brain waves [14].

The assistive robot design in this study was successfully imported into a working simulated environment, where the design was optimized by changing geometry, upgrading electrical components, and tuning motor controllers for specific applications. A live stream of BCI commands was used to select points on a user interface and generate goal position movements to perform a grabbing or releasing operation using end-effector control. The user interface provides an environment that allows a few input commands to generate arm, hand, and mobile base movements. These commands can also be used for direct joint control of the robotic arm, but such an application would need a wider range of inputs and more user training.

Physical tests on the robot were not possible due to the pandemic restrictions; therefore, a simulation environment was developed using ROS and Gazebo. The errors in movement shown in Table 3 were attributed to camera calibration, measurement errors, and the leveling of the camera. The test will be refined and expanded once the camera is mounted on the mobile platform to check for these errors. The next step in this project is to add a complete assistive robot test. This includes a BCI-selected goal point, movement of the mobile base to place the goal point at the grabbing point, and confirmation of a successful grab or release.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
References

[1] P. M. Kebria, S. Al-Wais, H. Abdi, and S. Nahavandi, "Kinematic and dynamic modelling of UR5 manipulator," in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 4229–4234, Budapest, Hungary, 2016.
[2] Y. Chen, J. Zhu, M. Xu, H. Zhang, X. Tang, and E. Dong, "Application of haptic virtual fixtures on hot-line work robot-assisted manipulation," Intelligent Robotics and Applications, vol. 11743, p. 221, 2019.
[3] J. Tang, Z. Zhou, and Y. Yu, "A hybrid brain computer interface for robot arm control," in Proceedings of the 8th International Conference on Information Technology in Medicine and Education, pp. 365–369, Fuzhou, China, 2016.
[4] S. Li, Z. Wang, Q. Zhang, and F. Han, "Solving inverse kinematics model for 7-DoF robot arms based on space vector," in Proceedings of the International Conference on Control and Robotics (ICCR), pp. 1–5, Hong Kong, China.
[5] J. Kofman, X. Wu, T. J. Luu, and S. Verma, "Teleoperation of a robot manipulator using a vision-based human-robot interface," IEEE Transactions on Industrial Electronics, vol. 52, no. 5, pp. 1206–1219, 2005.
[6] P. K. Pattnaik and J. Sarraf, "Brain computer interface issues on hand movement," Journal of King Saud University - Computer and Information Sciences, vol. 30, pp. 18–24, 2018.
[7] J. T. Zou and D. H. Tu, "The development of six D.O.F robot arm for intelligent robot," in Proceedings of the 2011 8th Asian Control Conference (ASCC), pp. 976–981, Kaohsiung, Taiwan, 2011.
[8] M. Hashimoto, T. Hattori, M. Horiuchi, and T. Kamata, "Development of a torque sensing arm for interactive communication," in Proceedings of the 2002 IEEE International Workshop on Robot and Human Interactive Communication, pp. 344–349, Berlin, Germany, 2002.
[9] B.-H. Kim, "Torque characteristics of shoulder and elbow joints of assistive robotic arms handling an object," in Proceedings of the 6th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), pp. 1346–1351, Singapore, 2016.
[10] J. Craig, Introduction to Robotics, Pearson Education, Inc., Boston, MA, USA, 2005.
[11] B. Landavazo and V. Nandikolla, "Brain-computer interface application in robotic gripper control," in Proceedings of the ASME 2018 International Mechanical Engineering Congress and Exposition, Pittsburgh, PA, USA, November 2018.
[12] M. Bucolo, G. Bucolo, A. Buscarino, A. Fiumara, L. Fortuna, and S. Gagliano, "Remote ultrasound scan procedures with medical robots: towards new perspectives between medicine and engineering," Applied Bionics and Biomechanics, vol. 2022, Article ID 1072642, 12 pages, 2022.
[13] M. Bucolo, A. Buscarino, L. Fortuna, and S. Gagliano, "Force feedback assistance in remote ultrasound scan procedures," Energies, vol. 13, no. 13, p. 3376, 2020.
[14] P. Belluomo, M. Bucolo, L. Fortuna, and M. Frasca, "Robot control through brain computer interface for patterns generation," AIP Conference Proceedings, vol. 1389, 2011.
