Get 20M+ Full-Text Papers For Less Than $1.50/day. Start a 14-Day Trial for You or Your Team.

Learn More →

Hobbit: Providing Fall Detection and Prevention for the Elderly in the Real World

Hobbit: Providing Fall Detection and Prevention for the Elderly in the Real World Hindawi Journal of Robotics Volume 2018, Article ID 1754657, 20 pages https://doi.org/10.1155/2018/1754657 Research Article Hobbit: Providing Fall Detection and Prevention for the Elderly in the Real World 1 1 1 1 1 Markus Bajones , David Fischinger, Astrid Weiss, Daniel Wolf, Markus Vincze, 2 3 3 4 Paloma de la Puente, Tobias Körtner, Markus Weninger, Konstantinos Papoutsakis, 4 4 4 4 Damien Michel, Ammar Qammaz, Paschalis Panteleris, Michalis Foukarakis, 4 4 4 4 4 Ilia Adami, Danai Ioannidi, Asterios Leonidis, Margherita Antona, Antonis Argyros, 5 5 6 6 Peter Mayer, Paul Panek, Håkan Eftring, and Susanne Frennert Automation and Control Institute (ACIN), TU Wien, Vienna, Austria Universidad Politecn ´ ica de Madrid, Madrid, Spain Akademie fur ¨ Altersforschung am Haus der Barmherzigkeit, Vienna, Austria Institute of Computer Science, FORTH, Heraklion, Greece Institute for Design and Assessment of Technology, TU Wien, Vienna, Austria Department of Design Sciences, Lund University, Lund, Sweden Correspondence should be addressed to Markus Bajones; markus.bajones@tuwien.ac.at Received 10 November 2017; Revised 13 February 2018; Accepted 19 March 2018; Published 3 June 2018 Academic Editor: Brady King Copyright © 2018 Markus Bajones et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We present the robot developed within the Hobbit project, a socially assistive service robot aiming at the challenge of enabling prolonged independent living of elderly people in their own homes. We present the second prototype (Hobbit PT2) in terms of hardware and functionality improvements following first user studies. Our main contribution lies within the description of all components developed within the Hobbit project, leading to autonomous operation of 371 days during field trials in Austria, Greece, and Sweden. In these field trials, we studied how 18 elderly users (aged 75 years and older) lived with the autonomously interacting service robot over multiple weeks. To the best of our knowledge, this is the first time a multifunctional, low-cost service robot equipped with a manipulator was studied and evaluated for several weeks under real-world conditions. We show that Hobbit’s adaptive approach towards the user increasingly eased the interaction between the users and Hobbit. We provide lessons learned regarding the need for adaptive behavior coordination, support during emergency situations, and clear communication of robotic actions and their consequences for fellow researchers who are developing an autonomous, low-cost service robot designed to interact with their users in domestic contexts. Our trials show the necessity to move out into actual user homes, as only there can we encounter issues such as misinterpretation of actions during unscripted human-robot interaction. 1. Introduction (3) sentientmachinesthatoeff rmultimodalcommunica- tion channels and are context-aware and trustable. Whilesociallyassistiverobotsareconsideredtobepotentially More and more companies and research teams present useful for society, they can provide the highest value to older service robots with the aim of assisting older adults (e.g., adults and homebound people. As reported in [1], future Giraff (http://www.giraff.org), Care-O-Bot (http://www.care- robot companions are expected to be o-bot.de), and Kompai (https://kompai.com)) with services (1) strong machines that can take over burdensome tasks such as entertainment, medicine reminders, and video tele- for the user, phony. Requirement studies on needs and expectations of (2) graceful and soft machines that will move smoothly older adults towards socially assistive robots [2] indicate and express immediate responses to their users, that they expect them to help with household chores (e.g., 2 Journal of Robotics cleaning the kitchen, bath, and toilet), lifting heavy objects, into an implementable behavior concept. Section 5 presents reaching for and picking up objects, delivering objects, and an overview on the efi ld trials. Lessons learned from the so forth. However, most of these tasks cannot satisfyingly development and testing of Hobbit and a summary and be performed by state-of-the-art robotic platforms; hardly conclusions are provided in Sections 6 and 7. any companion robot fulfills the requirements mentioned above and only very few robots entered private homes of 2. Related Work older adults so far. One of the biggest challenges is oer ff ing sufficient useful and social functionalities in an autonomous Moving towards autonomous service robots, behavior coor- and safe manner to achieve the ultimate goal of prolonging dination systems constitute an important building block to fulfill therequirementsofactionplanning,safetaskexecu- independent living at home. The ability of a robot to interact autonomously with a human requires sophisticated cognitive tion, and integration of human-robot interaction. HAMMER from Demiris and Khadhouri [5] is built upon the concept abilities including perception, navigation, decision-making, and learning. However, research on planners and cognitive of using multiple forward/backward control loops, which can architectures still faces the challenge of enabling flexibil- be used to predictthe outcomeofanactionand comparethis against the actual result of the action. Through this design, it ity and adaptation towards different users, situations, and environments while simultaneously being safe and robust. is possible to choose the action with the highest probability Toourconviction, forsuccessfullong-term human-robot of reaching the desired outcome, which has successfully been used in a collaboratively controlled wheelchair system [6], interaction with people in their private homes, robotic behavior needs to be above all safe, stable, and predictable. in order to correct the user’s input to avoid an erroneous During our eld fi trials, this became increasingly evident, as situation. Cashmore et al. [7] introduced ROSPlan, a frame- the users failed to understand the robot’s behavior during work that uses a temporal planning strategy for planning and some interaction scenarios. dispatching robotic actions. Depending on the needs, a cost function can be optimized for planning in a certain manner In this article, we present the Hobbit PT2 platform, referred to in the remainder of this article as Hobbit. A (e.g., time- or energy-optimized). However, the constructed former version of Hobbit has been presented in detail in plan is up until now only available as a sequence of executed actions and observed events, but no direct focus is put on [3]. Hobbit is a socially assistive robot that oer ff s useful personal and social functionalities to enable independent the human, besides modeling the user as means to acquire living at home for seniors. To the best of our knowledge, some event(e.g.,movinganobjectfromone location to another). Mansouri and Pecora [8] incorporate temporal the Hobbit trials mark the first time a social service robot offering multifunctional services was placed in users’ homes, andspatialreasoninginarobottaskedwithpickandplace operated autonomously and whose usage was not restricted in environments suited for users. In the context of ALIAS, by ascheduleoranyothermeans.Themaincontribution Goetze et al. [9] designed their dialogue manager for the of this paper is twofold. First, we give a description of the tasks of emergency call, a game, e-ticket event booking, and the navigation as state-machines. However, there are still hardware that is based on improvements derived from the rfi st user trials on the previous version of Hobbit. Second, signicfi ant research challenges regarding how to incorporate we describe the implemented functionality and its integration humans into the planning stages and decide when the robot needs to adapt to the user instead of staying with the planned into the behavior coordination system. The building blocks of the behavior coordination system are based on a set of task. hierarchical state-machines implemented using the SMACH Most of those behavior coordination and planning sys- framework [4]. Each behavior was built upon simpler build- tems treat the human as an essential part of the system ing blocks, each responsible for one specific task (e.g., speech [6] (e.g., for command input) and rely on the user to execute actions planned by the coordination system [10]. and text output, arm movements, and navigation) to add up to the complex functionalities presented in Sections 3.3 and Such systems only work under the precondition that the robot 4. Finally, we present the lessons learned from the eld fi trials will execute a given task for the user independently of the user input [8]. A crucial aspect, however, to successfully integrate in order to support fellow researchers in their developments of autonomous service robots for the domestic environment. a multifunctional service robot into a domestic environment We evaluated Hobbit during 371 days of field trials with vfi e is that itneedsnot onlytoreacttousercommandsbutalsoto proactively oeff r interaction and adapt to user needs (e.g., the platforms with older adults in their private homes in Austria, Greece, and Sweden. However, details on the eld fi trials will user wanting a break from the robot or a proactive suggestion be published elsewhere. for an activity they could perform together). Our proposed The paper proceeds as follows. Section 2 reflects on solution is based on state-machines, which reflect turn-taking relevant related work on behavior coordination for social in the interaction, providing adaptations within certain states (e.g., voice dialogues) or situations (e.g., user approach). We service robots and on studies of such robots outside of the laboratory environment. In Section 3, we give an overview on integrated the possibility not only to handle robot-driven the project vision for Hobbit and its historical development actions on a purely scheduled basis but also to adapt this schedulingandactions basedontheuser’scommands. uptotheHobbitPT2platform,followedbyadetaileddescrip- tion of its hardware and interaction modalities. Section 4 presents the behavior coordination system. We outline how 2.1. State of the Art: Robotic Platforms. According to a study we developed the interaction scenarios and transferred them conducted by Georgia Tech’s Healthcare Robotics Lab, people Journal of Robotics 3 with motor impairment drop items on average 5.5 times a currently running the Socially Assistive Robotics project day. Their small tele-operated Dusty (http://pwp.gatech.edu/ (https://www.nsf.gov/awardsearch/showAward?AWD ID= hrl/project dusty/) robots are developed for that purpose: 1139078) with partners Yale, University of Southern Cal- picking up objects from the floor, which they achieve with ifornia, MIT, Stanford, Tusft , and Willow Garage. eTh ir a scoop-like manipulator. Cody, a robotic nurse assistant, focusisonrobotsthatencouragesocial,emotional, can autonomously perform bed (sponge) baths. Current and cognitive growth in children, including those with work focuses on GATSBII (http://www.robotics.gatech.edu), social or cognitive deficits. eTh elder care robot Sil-Bot a willow Garage PR2, as a generic aid for older adults at (http://www.roboticstoday.com/robots/sil-bot) developed at home. eTh Care-O-Bot research platforms developed at the the Center for Intelligent Robotics (CIR) in Korea is devised Fraunhofer Institute (IPA) are designed as general purpose mainly as an entertainment robot to oer ff interactive games robotic butlers, with a repertoire from fetching items to that have been codeveloped with Seoul National University detecting emergency situations, such as a fallen person. Also Medical Center specifically to help prevent Alzheimer’s from Fraunhofer is Mobina (https://www.ipa.fraunhofer.de/ disease and dementia. Sil-Bot is meant to be a companion de/referenzprojekte/MobiNa.html), a small (vacuum-sized) that helps encourage an active, healthy body and mind. Its robot specicfi ally performing fallen person detection and shortflipper-like armsdo notallowforactualmanipulation. video calls in emergency. Carnegie Mellon University’s Another public-private partnership is the EC-funded HERB (https://personalrobotics.ri.cmu.edu/) is another CompanionAble project (http://www.companionable.net/), general-purpose robotic butler. It serves as the main research which created a robotic assistant for the elderly called Hector. platform at the Personal Robotics Lab, which is part of the eTh project integrates Hector to work collaboratively with a QualityofLifeTechnology(QoLT)Center. KAISTinKorea smart home and remote control center to provide the most has been developing their Intelligent Sweet Home (ISH) comprehensive and cost-efficient support for older people smart home technology including intelligent wheelchairs, living at home. intelligent beds, and robotic hoists [11]. Their system Hoaloha Robotics (http://www.hoaloharobotics.com/) also employs the bimanual mobile robot Joy to act as an in the United States are planning to bring their elder care intermediary between these systems and the end user. robottomarketsoon.Basedonafairlystandard mobile Robotdalen (http://www.robotdalen.se), a Swedish public- platform oer ff ing safety and entertainment, they focus on private consortium, has funded the development of needed an application framework that will provide integration robotic products such as Bestic (http://www.camanio.com/ of discrete technological solutions like biometric devices, en/products/bestic/), an eating device for those who cannot remote doctor visits, monitoring and emergency call services, feed themselves; Giraff, a remote-controlled mobile robot medication dispensers, online services, and the increasing with a camera and monitor providing remote assistance and number of other products and applications already emerging security; or TrainiTest, a rehabilitation robot that measures for the assistive care market. Japan started a national andevaluates thecapacityofmuscles andthensetsthe initiative in 2013 to foster development of nursing care resistance in the robot to adapt to the users’ individual robots and to spread their use. The program supports 24 training needs. Remote presence robots have recently turned companies in developing and marketing their elderly care up in a variety of forms, from simple Skype video chats technologies, such as the 40 cm tall PALRO conversation on a mobility platform (Double Robotics (https://www robot (https://palro.jp/) that offers recreation services by .doublerobotics.com/)) to serious medical assistance remote playing games, singing, and dancing together with residents presence robots such as those provided by the partnership of a care facility. Another example is the helper robot by between iRobot and InTouch Health (https://www.intouch- Toyota, which is mostly remotely controlled from a tablet PC. health.com/about/press-room/2012/InTouch-Health-and-iR- Going specifically beyond entertainment capabilities, Waseda obot-to-Unveil-the-RP-VITA-Telemedicine-Robot.html), University’s Twendy One (http://www.twendyone.com) is Gira,ff and VGo Communications’ postop pediatric at-home asophisticatedbimanual robotthatprovides human robots (http://www.vgocom.com/) for communication with safety assistance, dexterous manipulation, and human- parents, nurses, doctors, and patients. friendly communication. It can also support a human to lift Another class of robots aims more specicfi ally at well- themselves from a bed or chair. Going even further, the RIBA- being of older adults. eTh recently completed FP7 project M II robot (http://rtc.nagoya.riken.jp/RIBA/index-e.html) by obiserv (https://cordis.europa.eu/project/rcn/93537 en.html) RIKEN-TRI Collaboration Center for Human-Interactive aimed to develop solutions to support independent living Robot Research (RTC) can lift patients of up to 80 kg of older adults as long as possible, in their home or in from abedtoawheelchairand back.ThePepperrobot various degrees of institutionalization, with a focus on (https://www.ald.softbankrobotics.com/en/robots/pepper) health, nutrition, well-being, and safety. es Th e solutions from Softbank Robotics (Aldebaran) is used in a growing encompass smart clothes for monitoring vital signs, a smart number of projects focusing on human-robot interaction home environment to monitor behavioral patterns (e.g., scenarios. Some ADL (activities of daily living) tasks eating) and detect dangerous events, and a companion are directly addressed by walking aids, for example [12], robot. eTh robot’s main role is to generally activate, and cognitive manipulation training, for example, using stimulate, and offer structure during the day. It also reminds exoskeletons [13, 14]. its user of meals, medication, and appointments and The short overview indicates that individually many ADL encourages social contacts via video calls. The US NSF is tasks are approached. However, they all require different types 4 Journal of Robotics of robots. eTh goal of grasping objects from the floor, while themaincharacteristics ofthe Mutual Care mode were the at thesametimekeeping therobot aoff rdable,has ledusto following: (1) return of favor: Hobbit asked if it could return design and build the custom Hobbit platform. Moreover, the thefavor aeft r situations wherethe user hadhelpedHobbit robotshouldoeff reverydaylifesuitabletasks inasocially to carry out a task, (2) communication style: Hobbit used the interactive manner to be sustainably used by the older adults. user’s name in the dialogue and was more human-like such as responding to a reward from the user by saying You are welcome instead of Reward has been received,(3)proactivity: 3. The Hobbit Robot Hobbit was more proactive and initiated interactions with Hobbit isable toprovideanumberofsafety andenter- the user, and (4) presence: Hobbit stayed in the room where tainment functions with low-cost components. The ability the last interaction has taken place for at least 30 minutes to provide many functions with sometimes contradictory instead of heading directly back to the charging station. In requirements for the hardware design creates a demanding order to avoid potential biases, users were not told about the challengeonitsown.Tothebestofourknowledge, we arethe behavioral change of the robot beforehand. rst fi to present a robot that operates in users’ homes in a fully autonomous fashion for a duration of 21 days per user, while 3.3. Development Steps Leading to Hobbit. To gain insight providing an extensive set of functionalities like manipulation into the needs of elderly living alone, we invited primary of objects with an included arm. users (PU), aged 75 years and older and living alone, and secondary users (SU), who are in regular contact with the 3.1. General Vision. The motivation for Hobbit’s development primary users, to workshops in Austria (8 PU and 10 SU) was to create a low-cost, social robot to enable older adults and Sweden (25 PU). A questionnaire survey with 113 PU in to independently live longer in their own homes. One reason Austria, Greece, and Sweden and qualitative interviews with for the elderly to move into care facilities is the risk of falling 38 PU and 18 SU were conducted. This iterative process [16] and eventually inflicted injuries. To reduce this risk, the not only resulted in the user requirements but also influenced “must-haves” for the Hobbit robot are emergency detection the design and material decisions, which were incorporated (the robot patrolling autonomously through the flat after into the development of the Hobbit robots as seen in Figure 1. three hours without any user activity and checking if the Based on these requirements and laboratory studies with user is well and did not suffer a fall), emergency handling the PT1 platform [17] with 49 users (Austria, Greece, and (automatic calls to relatives or emergency services), and fall Sweden), the following main functionalities for Hobbit were prevention (searching and bringing known objects to the selected: user and picking up objects from the floor pointed to by (1) Call Hobbit:summonthe robottoaposition linkedto the user and a basic tfi ness program to enhance the user’s battery-less call buttons overall tfi ness). Hobbit also provides a safety check feature that informs the user about possible risks in specific rooms (e.g., (2) Emergency: call relatives or an ambulance service. wet floor in the bathroom and slippery carpets on wooden This can be triggered by the user from emergency floors) and explains how to reduce such risks. buttons and gesture commands or by the robot during In science fiction, social robots are oeft n depicted as a patrolling butler, a fact that guides the expectations towards such robots. (3) Safety check:guidetheuserthrough alistofcommon However, as state-of-the-art technology is not yet able to risk sources and provide information on how to fulfill these expectations, Hobbit was designed to incorporate reduce them the Mutual Care interaction paradigm [15] to overcome the robot’s downfalls by creating an emotional bond between the (4) Pick up objects: objects lying on the floor are picked users and the robot. The Mutual Care concept envisioned that up by the robot with no distinction between known the user and the robot provide help in a reciprocal manner or unknown objects to each other, therefore creating an emotional bond between (5) Learn and bring objects:visuallearningofuser’s them, so that the robot not only provides useful assistance objectstoenabletherobottosearchandfindthem but also acts as a companion. The resulting system complexity within the environment based on the multifunctionality was considered as acceptable (6) Reminders: deliver reminders for drinking water and to fulfill the main criteria ( emergency detection and handling, appointments directly to the user fall prevention,and providing a feeling of safety). (7) Transport objects: reduce the physical stress on the user by placing objects on to the robot and letting it 3.2. Mutual Care as Underlying Interaction Paradigm. The transport them to a commanded location Mutual Care concept was implemented through two different social roles, one that enforces this concept and one that does (8) Go recharging: autonomously, or by a user command, not. Hobbit started in the Mutual Care-disabled mode during move to the charging station for recharging the el fi d trials and changed after 11 days to the Mutual Care (9) Break: put the robot on break when the user leaves the mode. eTh differences between these two modes or social flat or when the user takes a nap roles of the robot were mainly in its dialogues, proactivity, andthe proximityinwhich therobot wouldremainwhen (10) Fitness: guided exercises that increase the overall the user stops interacting with the robot. In more detail, fitness of the user Journal of Robotics 5 (a) (b) (c) Figure 1: (a–c) First mock-ups designed by secondary users: the first (PT1) and second generation of Hobbit as used during the field trials. (11) Entertainment: brain training games, e-books, and 3.4.1. Visual Perception System Using RGB-D Cameras. For music the visual perception system, Hobbit is equipped with two Asus Xtion Pro RGB-D sensors. eTh head camera is mounted 3.4. Robot Platform and Sensor Setup. The mobile platform of insidetheheadandusedforobstacleavoidance,objectlearn- the Hobbit robot has been developed and built by MetraLabs ing and recognition, user detection, and gesture recognition (http://www.metralabs.com). It moves using a two-wheeled andtodetectobjectstopickup.Since theheadcanperform dieff rential drive, mounted close to the front side in driving pan and tilt movements, the viewing angle of this camera direction. For stability, an additional castor wheel is located canbedynamicallyadaptedtoaparticular task at hand.In closetothe back.Tofitall thebuilt-insystemcomponents, contrast, the bottom camera, used for localization, mapping, the robot has a rectangular footprint with a width of 48 cm anduserfollowing,ismounted atafixedpositionataheight and a length of 55 cm. For safety reasons, a bumper sensor of 35 cm in the front of the robot’s body, facing forward. This surrounds the base plate, protecting the hull and blocking setup is a trade-off between the cost of the sensor setup (in the motors when being pressed. This ensures that the robot terms of computational power and money) and the necessary stops immediately if navigation fails and an obstacle is hit. data for safe usage and feature completeness, which we found An additional bumper sensor is mounted below the tablet PC, to be most suitable for the variety of different tasks that which provides the graphical user interface. During situations require visual perception. in which the user might not be able to reach the tablet PC The cameras, which only cost a fraction of laser range (e.g. the person has fallen), a hardware emergency button is sensors commonly used for navigation in robotics, offer a located on the bottom front side. resolution of 640× 480 pixels of RGB-D data and deliver On its right side, the robot is equipped with a 6-DoF useful data in a range of approximately 50 cm to 400 cm. arm with a two-finger n-ra fi y gripper, such that objects lying eTh refore,our system hastobeabletocopewithablind on thefloorcan bepickedupandplacedinatray on top spot in front of the robot. Furthermore, the quality of data of the robot’s body. Furthermore, the arm can grasp a small acquiredwiththeheadcamerafromanobservedobjectvaries turntablestoredontherightsideofthebody,which isused depending on the task. For example, in the learning task, an to teach the robot unknown objects. object that is placed on the robot’s turntable is very close to the eTh robot’s head, together with the neck joint with motors head camera, just above the lower range limit. In the pickup for pan and tilt movements, has been developed by Blue task, on the contrary, the object detection method needs Danube Robotics (http://www.bluedanuberobotics.com). It to be able to detect objects at the upper range limit of the contains two speakers for audio output, two Raspberry Pis camera, where data points are already severely influenced by with one display each for the eyes of the robot, a temperature noise. sensor, and a RGB-D sensor. This sensor, referred to in the Because two of the main goals for the final system remainder of the paper as head camera,isusedfor obsta- were affordability and robustness, we avoided incorporating cle avoidance, for object and gesture recognition, and—in additional cameras, for example, for visual servoing with conjunction with the temperature sensor—for user and fall the robot’s hand. For further details and advantages of detection. Similar to the previous prototype of the robot oursensorsetup fornavigation, we referthereader to [3, 18], the visual sensor setup is completed by a second RGB- [18]. Dsensor,mountedinthe robot’sbodyataheight of35cm facing forward. This sensor, referred to in the remainder of the paper as bottom camera, is used for localization, 3.4.2. Head and Neck. Besides the head camera, the head contains an infrared camera for distance temperature mea- mapping, and user following. Figure 2 shows an overview of the Hobbit hardware; a more detailed explanation of the surement, two speakers for audio output, and two Raspberry single components is given in the following sections. Pis with displays showing the robot’s eyes. Through its eyes, 6 Journal of Robotics Head Temperature sensor Head camera Speakers Neck joint (pan/tilt) Eyes showing emotions Neck Tray for personal belongings Tray where Hobbit puts objects Tablet PC with graphical UI Bumper sensor Stored turntable Gripper Bottom camera 6-DoF arm Water bottle holder Emergency help button Bumper sensor Figure 2: Hardware setup of the Hobbit platform. Very tired Sleeping Happy Very happy Wondering Concerned Sad Tired Figure 3: List of emotions shown by Hobbit’s eyes. the robot is able to communicate a set of different emotions unsupervised actions was required to minimize the risk of totheuser, whichare showninFigure3.Theneckjoint breakage. contains two servo motors, controlling the horizontal and vertical movement of the head. 4. Behavior Coordination 3.4.3. Arm and Gripper. To be able to pick up objects from As Hobbit’s goal directly called for an autonomous system the floor or to grab its built-in turntable, Hobbit is equipped running for several weeks, providing interactions on an with a 6-DoF IGUS arm and a two-finger fin-ray gripper. As irregular schedule and on-demand basis, the behavior coor- a cost-eecti ff ve solution, the arm joints are moved by stepper dination of the Hobbit robots was designed and implemented motors via Bowden cables; the used n fi -ray gripper offers inamultistagedevelopmentprocess. Basedontheworkshops oneDoF andisdesignedtoallowform-adaptable grasps. with PU and SU and the user study with Hobbit PT1, While an additional DoF would increase flexibility and elderly care specialists designed the specific scenarios. eTh y lower the need for accurate self-positioning to successfully designed detailed scripts for the 11 scenarios (see Section 3.3) grasp objects, for the sake of overall system robustness and therobothadtoperform.Those11scenarios were subse- low hardware costs, the 6-DoF version was the model of quently planned in a flowchart-like fashion, which eased the choice for the arm. eTh arm is not compliant; therefore, transition from the design process to the implementation cautious behavior implementation with reduced velocities for stage. Journal of Robotics 7 Other ROS ROS Smach behavior state machine nodes LocateUser GoTo Recharge FindObject CallRobot PickUp Other goal pose action dist/ang commands actions feedback docking on/off action other data Skeleton detection, top interfaces_mira gestures recognition scan obstacle scan localization pose head position local map and path RGB-D data tasks feedback low level information MMUI bottom scan arm virtual scans navigation tasks RGB-D distance mode commands (dist/ang) data docking on/off tasks localization reset Figure 4: Hobbit behavior architecture. In thefollowing,wediscusstheoverall behavior coordi- be able to bring it into a position in which it would be safe nation architecture and how the Mutual Care concept was to perform other tasks. eTh movement of the robot within implemented and go into detail of some of the building blocks theenvironment wouldhavebeenunsafeifthearmwould necessary to construct the 11 scenarios. We further present still stick out of the footprint of the robot itself. The priorities the methods we developed to realize the goals of the project of the commands were defined with respect to the safety of while respecting the limits set by the low-cost approach of our the user, so that emergency situations can always preempt robots. a possibly running state-machine, regardless of the state the system is currently in. 4.1.BehaviorCoordinationArchitecture. Following the sce- nario descriptions, as defined by our specialists in elderly 4.2. RGB-D Based Navigation in Home Environments. Au- care, their implementation and the execution followed a tonomous navigation in user’s homes, especially with low- script-based approach. A state-machine framework, SMACH cost RGB-D sensors, is a challenging aspect of care mobile (http://wiki.ros.org/smach), was therefore chosen to handle robots. es Th e RGB-D sensors pose additional challenges the behavior execution for all high-level codes. forsafenavigation[18,20–22]. er Th educedfieldofview, An overview of the implemented architecture is shown the blind detection area, and the short maximum range in Figure 4. eTh top structure in this architecture is the Pup- of this kind of sensors provides limited information about petMaster, which handles the decision-making outside of any the robot’s surroundings. If the robot, for example, turns scenario execution, where it can start, preempt, and restart around in a narrow corridor, it might happen that the any sub-state-machines. For this, it collects the input from wallsare alreadytoo closetobeobservedwhile turning, those ROS nodes that handle gesture and speech recognition, leading to increased localization uncertainty. In order to text input via touchscreen, emergency detection (fallen and prevent such cases, we defined no-go areas around walls falling person detection, emergency button on the robot itself, in narrow passages, preventing the robot from navigating and emergency gesture), and scheduled commands that need too close to walls in the first place. For obstacle avoidance, to be executed at a specicfi time of the day. eTh PuppetMaster the head is tilted down during navigation, so that the head delegates the actual scenario behavior execution to the sub- camera partially compensates for the blind spot of the bottom state-machines, which only rely on the input data needed camera. If obstacles are detected, they are remembered for for the current scenario. Each of these sub-state-machines a certain time in the robot’s local map. However, a suitable corresponds to one of the scenarios designed to assist the trade-off had to be found for the decay rate. On one hand, users in their daily lives. As we needed to deal with many therobot mustbeable toavoidpersistingobstacles,but, different commands with different execution priorities, it was on the other hand, it should not be blocked for too long necessary to ensure that every part of the execution of the when an obstacle in front of it (e.g., a walking person) is state-machines can safely be interrupted without the risk removed. oflingeringinanundenfi edstate.Particularlyinsituations While localization methods generally assume that fea- when the arm of the robot was moving, it was necessary to tures of the environment can be detected, this assumption localization scan gestures Other MIRA authorities user pose gestures gestures touch screen commands voice commands AAL call button commands 8 Journal of Robotics Figure 5: Risky areas to be avoided. Obstacles like high shelves or stairs may not be perceived by Hobbit’s sensor setup. Figure 6: Examples of installed ramps to overcome door thresholds. does notholdfortheusedRGB-D cameraswithlimitedrange only useful as long as overall localization is precise enough. and long corridors. In this situation, according to the detected Other challenging situations were caused by thresholds and features, the robot could be anywhere along the parallel bumps on the floor and carpets. To overcome thresholds, we walls, which can cause problems in cases where the robot tested commercial and homemade ramps (Figure 6). After should enter a room aer ft driving along in such a corridor. testing different configurations and finding proper incline When entering a room, it is especially important that the limits, the robot was usually able to pass thresholds. Problems robotbecorrectly localizedinthe transversaldirection to with standard planning methods, for example, when a new the doorway and that the doorway be approached from the plan caused the robot to turn while driving on a ramp, front, so accurately driving through doors located on one were observed. A situation-dependent direct motion control side of a corridor is much more dicffi ult than through doors instead of a plan-based approach can reduce the risk during located at the beginning or at the end of a corridor. In order such situations. to approach doors from the front, avoiding getting too close In order to facilitate the tasks to be carried out in the to the corner sides, a useful strategy for wide enough places home environment, the concept of using rooms and labeled is adding no-go areas at sides of a doorway entrance or at places inside the rooms (locations) was applied. eTh rooms sharp corners. This way, it is possible to have safer navigation are manually den fi ed, such that spatial ambiguity is not behavior in wide areas while keeping the ability to go through a problem. Also, the geometry of the defined rooms does narrower areas. This provides more flexibility than meth- nothavetobeveryprecise with respecttothe map, as ods with xe fi d security margins for the whole operational long as the rooms contain all the places of interest that the area. user wants to label. Places are learned by tele-operating the No-go areas were also useful to avoid potentially dan- robot to specicfi locations and the subsequent association of gerous andrestrictedareas androoms.Afewexamplesare places to rooms operates automatically, based on the crossing shown in Figure 5. Areas with cables and thin obstacles on number algorithm to detect whether a point lies inside a the floor and very narrow rooms (usually kitchens), where a generic polygon [23]. Figure 7 shows several examples of nonholonomic robot as Hobbit cannot maneuver, were also rooms and places defined in the user trials for different avoided. However, it is worth noting that no-go areas are tasks. Journal of Robotics 9 Figure 7: Rooms and places defined in two real apartments in Vienna. Robot Commands Pick up Follow me Go to point SOS Help! Learn object Bring me ... Go recharging Break Figure 8: GUI of Hobbit showing one of the menu pages for robot commands. eTh strike-through hand on the right side indicates that the gesture input modality is disabled currently. A similar indicator was used for the speech input. 4.3. Multimodal Interaction Between the User and the Robot. the robot is close enough for the user to interact via the The Hobbit robot deploys an improved version of the multi- touchscreen, while at the same time does not invade the modal user interface (MMUI) used on Hobbit PT1. Generally personal space of the user (limiting her/his movement space speaking, the MMUI is a framework containing the following or restricting other activities such as watching TV). Hobbit main building blocks: a Graphical User Interface (GUI) makes use of the MMUI to combine the advantages of the with touch, Automatic Speech Recognition (ASR), Text to various user interaction modalities [25]. eTh touchscreen has Speech (TTS), and Gesture Recognition Interface (GRI). eTh strengths such as intuitiveness, reliability, and flexibility for MMUI provides emergency call features, web services (e.g., multiple users in different sitting positions but requires a weather, news, RSS feed, and social media), control of robotic rather narrow distance between user and robot (Figure 9). functions, and entertainment features. Compared to PT1, the ASR allows a larger distance and can also be used when no graphical design of the GUI (Figure 8) was modified to better freehandsare available, butithasthedisadvantageofbeing meet the user’s needs. Graphical indicators on the GUI for influenced by the ambient noise level, which may reduce showing current availability of GRI and ASR were iteratively recognition performance significantly. GRI allows a wider improved. distance between the robot and user and also works in noisy During PT1trials, we foundthatmostoftheusersdid environments, but it only succeeds when the user is in the not use the option of extending the MMUI to a comfortable efi ld of view of the robot. eTh interaction with Hobbit always ergonomic position for them. er Th efore the mounting of depends on the distance between the user and Hobbit. It can the touchscreen was changed to a fixed position on Hobbit. be done through a wireless call button (far from other rooms), Additionally, while the PT1 robot approached the user from ASR and GRI (2 m to 3 m), and touchscreen (arm length, see the front, the Hobbit robot approaches the user from the right Figure 9). or left side while seated, which is more positively experienced The ASR of Hobbit is speaker-independent, continuous, by the user [24]. This oeff rs the additional advantage that and available in four languages: English, German, Swedish, 10 Journal of Robotics and the appearance variability of the tracked person. A YES challenging aspect of the problem in Hobbit-related scenarios is that elderly users spend a considerable amount of time sitting in various types of chairs or couches. er Th efore, human detection and tracking should consider human body figures that do not stand out from their background. On the contrary, they may interact with cluttered scenes, exhibiting severe partial occlusions. Additionally, the method needs to be capable of detecting a user’s body while standing or walking Figure 9: Different interaction distances between user and Hobbit based on frontal, back, or side views. seen from a ceiling camera. Short range: touch; middle range: speech The adopted solution [31] enables 3D part-based, and gesture; long range: wireless call button. full/upper body detection and tracking of multiple humans based on the depth data acquired by the RGB-D sensor. eTh 3D positions and orientations for all joints of the skeletal and Greek. Contemporary ASR systems work well for differ- ent applications, as long as the microphone is not moved far model (full or upper body) relative to the depth sensor from the speaker’s mouth. The latter case is called distant or are computed for each time stamp. A conventional face detection algorithm [32] is also integrated using the color far-field ASR and shows a significant drop in performance, which is mainly due to three different types of distortion [26]: data stream of the sensor to facilitate human detection in case thefaceofthe user isvisiblebythesensor.Theproposed (a) background noise, (b) echo and reverberation, and (c) method has a number of beneficial properties that are other types of distortions, for example, room modes or the orientation of the speaker’s head. For distant ASR, currently summarized as follows: (1) performs accurate markerless 3D tracking of the human body that requires no training no off-the-shelf solution exists, but acceptable error rates can be achieved fordistancesup to 3m by carefultuningofthe data, (2) requires simple inexpensive sensory apparatus audio components and the ASR engine [27]. An interface to a (RGB-D camera), (3) exhibits robustness in a number of challenging conditions (illumination changes, environment cloud based calendar was introduced, allowing PU and SU of Hobbit to access and partly to also edit events and reminders. clutter, camera motion, etc.), (4) has a high tolerance with Despite the known difficulties with speech recognition respect to variations in human body dimensions, clothing, and so forth, (5) performs automatic human detection and in the far field and the local dialects of the users, the ASR of Hobbit worked as expected. eTh ASR was activated all automatic tracking initialization, thus recovering easily from possible tracking failures, (6) handles self-occlusions over theHobbitusertrials, buttheperformanceratewas commented on by users as necessary to be improved. The among body parts or occlusions due to obstacles/furniture same was observed for the GRI. Eventually, the touchscreen and so forth, and (7) achieves real-time performance on a conventional computer. Indicative results of the method are as input modality was used most oen ft by the majority of users, followed by speech and gesture. Touch was used more illustrated in Figure 10. than twice as oeft n as it was the case with ASR. Additionally, many users did not wait until the robot had completed 4.5. Gesture Recognition. A vision-based gestural interface its own speech output before starting to give a speech was developed to enrich the multimodal user interface of Hobbit in addition to speech and touch modalities. This command which reduced the recognition rate. Considering these lessons learned, the aims for future work on the ASR enables natural interaction between the user and the robot are twofold: improving the performance of the ASR and by recognizing a predefined set of gestures performed by the user using her/his hands and arms. Gestures can be of varying providing better indication when the MMUI is listening to spoken commands and when it is not. eTh aspect of using two complexity and their recognition is also aeff cted by the scene different variants for text messages from the robot to the user context, actions that are performed in the foreground or the was taken over from Hobbit PT1. Based on other researches, it background at the same time, and by preceding and/or fol- can be concluded that using different text variants does have lowing actions. Moreover, gestures are oen ft culture-specific, an influence, for example, by increasing users’ impression of providing additional evidence to substantiate the interesting interacting with a (more) vivid system. Some users demanded as well as challenging nature of the problem. additional ASR commands, for example, right, left, forward, For Hobbit, existing upper body gestures/postures as used on PT1 had to be replaced with more intuitive hand/finger- reverse,and stop in addition to come closer, as they would like to position (move) the robot with the help of voice commands based gestures that can be performed more easily by elderly or a remote control. users while sitting or standing. We redesigned the gestural vocabularyforHobbitthatnowconsists ofsixhandgestures 4.4. Person Detection and Tracking. To serve as building that convey messages of fundamental importance in the block for components like activity recognition [28] and context of human-robot dialogue. Aiming at natural, easy-to- natural human-robot communication [19, 29] as well as memorize means of interaction, users have identified gestures specialized functions like the tfi ness application [30], we consisting of both static and dynamic hand configurations developed a human body detection and tracking solution. that involve different scales of observation (from arms to Person detection and tracking in home environments is fingers) and exhibit intrinsic ambiguities. Recognition needs a challenging problem because of its high dimensionality to be performed in continuous video streams containing Journal of Robotics 11 (a) (b) (c) (d) (e) Figure 10: Qualitative results of the 3D skeletal model-based person detection and tracking method. (a) Full model of a standing user. (b) Upper body (including hands and fingers) of a sitting user. (c) Full model of a sitting user. ((d) and (e)) Hand and finger detection supporting the gesture recognition framework (see Section 4.5). other irrelevant actions. All the above need to be achieved by 4.6. Fall Detection. According to the assessed user needs and analyzing information acquired by a possibly moving RGB- the results of PT1 laboratory studies [17], a top-priority and D camera in cluttered environments with considerable light prominent functionality of Hobbit regards fall prevention and variations. fall detection. We hereby describe a relevant vision-based eTh proposed framework for gesture recognition [19, 29] component that enables a patrolling robot to (a) perform fall consists of a complete system that detects and tracks arms, detection and (b) detect a user lying on the floor. We focused hands, and ngers fi and performs spatiotemporal segmenta- mostly on the second scenario, as observing a user falling tion and recognition of the set of preden fi ed gestures, based in the eld fi of view of an autonomous assistive robot is of on data acquired by the head camera of the robot. u Th s, the very low probability. eTh proposed vision-based emergency gesture recognition component is integrated with the human detection mechanism consists of three modes, each of which detection and tracking module (see Section 4.4). At a higher initiates an emergency handling routine upon successful level, hand posture models are defined and serve as building recognition of the emergency situation: blocks to recognize gestures based on the temporal evolution (1) Detection of a falling user in case the fall occurs while of the detected postures. The 3D detection and tracking of thebodyisobservablebytheheadcameraoftherobot hands and n fi gers relies on depth data acquired by the head camera of Hobbit, geometrical primitives, and minimum (2) Detection of a fallen user who is lying on the floor spanning tree features of the observed structure of the scene while the robot is navigating/patrolling in order to classify foreground and background and further (3) Recognition of the emergency (help) gesture that can discriminate between hand and nonhand structures in the be performed by a sitting or standing user via the foreground. Upon detection of the hand (palm and n fi gers), gesturerecognition interfaceofHobbit(seeFigure11, the trajectories of their 3D positions across time are analyzed middle) to achieve recognition of hand postures and gestures (Table 1). The last column describes the assignment of the chosen The methodology for (1) regards a simple classifier trained on physical movements to robot commands. eTh performance the statistics of the 3D position and velocity of the observed of the developed method has been tested not only by users human body joints acquired by the person detection and acquainted with technology but also by elderly users [19] tracking component. For (2), once the general assumption, (see Figure 11). os Th e tests formed a very good basis for thefactthatthehuman’sheadisabove therestofthebody, fine-tuning several algorithmic details towards delivering a does no longer hold true, an alternative, simple, yet effective robust and ecffi ient hand gesture recognition component. The approach to the problem has been adopted. This capitalizes performance of the final component was tested during field on calibrated depth and thermal visual data acquired from trials achieving high performance according to the evaluation two different sensors that are available on the head of Hobbit. results. More specicall fi y, depth data from both cameras of the robot 12 Journal of Robotics Table 1: Set of hand/arm postures/gestures considered for the gestural interface of Hobbit. User command Upper body gesture/posture Robot command Related scenarios/tasks Positive response to confirmation All (1 m to 2 m distance to Yes um Th b up-palm closed dialogues. YES gesture robot) Negative response to confirmation All (1 m to 2 m distance to No Close palm, waving with index finger up dialogues robot) Bend the elbow of one arm repeatedly towards the Reposition the platform closer to the All (1 m to 2 m distance to Come closer platform and the body sitting user robot) Terminate an on-going robot Cancel task Both open palms towards the robot All behavior/task Extend one arm and point in 3D space towards an Detect and grasp the object of interest Pick up an (unknown) Pointing object (lying on the floor) towards the pointing 3D direction object from the floor Open palm facing towards the robot and circular Rewards the robot for an accomplished Reward Approach the user movement (at least one complete circle is needed) action/task Emergency detection, initiated by the Emergency Cross hands pose (normal-range interaction) Emergency detection user Figure 11: Snapshots of Hobbit users performing gestures during lab trials. eTh recognition results are superimposed as text and a circle on the images indicating the location and the name of the recognized gesture (taken from [19]). (head and base) are acquired and analyzed while observing communicates to the robot whether it should move even the floor area in front of the robot. Figure 12 illustrates sample closer or not in any of the three available modes (speech, results of the fallen user detection component. In Figure 12(a), touch, or gesture). Finally, the robot moves closer by a fixed the upper part illustrates the color frame captured by the head distance of 0.15 m for a maximum of three times if the camera of the robot that is titled down towards the floor, while user wishes. This gives the users more control over final navigating. In the bottom image, the viewpoint of the bottom distance adjustments. A more detailed description of this camera is illustrated, aer ft the estimation of the 3D floor plane novel approach will be published elsewhere. has been performed. The methodology for vision-based emergency detection 4.8. User Following. As theheadcameraisnot availablefor of case (3) refers to successful recognition of the emergency observing the full body of a user during navigation (obstacle “Help me,” based on the gesture and posture recognition detection), we designed a new approach [33] to localize a user module, as described in Section 4.5. eTh developed com- by observing its lower body part, mainly the legs, based on ponent is constantly running in the background within the RGB-D sensory data acquired by the bottom camera of the robot’s behavior coordination framework, while the robot is platform. active during all robot tasks, except from object detection and The proposed method is able to track moving objects recognition tasks. such as humans, estimate camera ego-motion, and perform map construction based on visual input provided by a single 4.7. Approaching the User. Specific behavior coordination RGB-D camera that is rigidly attached to a moving platform. was developed so that the robot could approach the user The moving objects in the environment are assumed to in a more flexible and effective way compared to standard move on a planar floor. The first step is to segment the existing methods. Using fixed predefined positions can be static background from the moving foreground by selecting sufficient in certain scenarios, but it oen ft presents limitations a small number of points of interest whose 3D positions in real-world conditions [22]. eTh approach we developed are estimated directly from the sensory information. The incorporates user detection and interaction (Section 4.4), camera motion is computed by tfi ting those points to a remembered obstacles and discrete motion for coming closer progressively built model of the environment. A 3D point to the user with better, and adaptive positioning. may not match the current version of the map either because First, a safe position to move to is obtained from the it is a noise contaminated observation or because it belongs local map and the robot moves there. Secondly, the user to a moving object or because it belongs to a structure Journal of Robotics 13 (a) (b) (c) Figure 12: Vision-based emergency detection of a fallen user lying on the floor. eTh upper and lower middle images show the captured frame from theheadandbottomcameras,respectively.Thegreendotsmark afoundskeletonwithinthesearcharea(greenandbluerectangles). (a–c) No human, no detection; person lying on the floor, correct detection; volumetric data from the head’s depth and temperature sensor are in conflict with the volumetric date provided by the bottom depth sensor. (a) (b) (c) (d) Figure 13: (a–d) User points to an object on the floor and Hobbit drives to a point from where it can be picked up and moves the arm to a position to grasp it. The object is lifted and the check if grasp was successful is performed: the object is moved forward to check if something has changed at the previous position of the object on the floor. If successful, the object is placed on the tray on top of the robot. attached to the static environment that is observed for the 4.9. Pick Up Objects from the Floor. To reduce the risk of first time. A classification mechanism is used to perform falling, Hobbit was designed to be able to pick up unknown this disambiguation. Additionally, the method estimates the objectsfromthefloor.Figure13shows thesteps ofthe“Pick camera (ego) motion and the motion of the tracked objects in up object” task. The user starts the command and points at a coordinate system that is attached to the static environment the object on the floor. If the pointing gesture is recognized, (robotic platform). In essence, our hypothesis is that a pair the robot navigates to a position from where it could observe of segmented and tracked objects of specific size/width that the object. At this position, the robot looks at the approximate move independently side-by-side at the same distance and position of the object. Hobbit then makes n fi e adjustments to direction in the field of view of a moving RGB-D camera position itself at a location from where grasping is possible. correspond to user’s legs being followed by the robot with If it is safe to grasp the object, the robot executes the arm high probability. The method provides the 3D position of trajectory and subsequently checks if the grasp was successful user’s legs with respect to the moving or static robotic plat- andwilltry to do so asecondtimeifitwas not. form. Other moving objects in the environment are filtered Several autonomous mobile robots have been developed outorcan beprovidedtoan obstacle avoidancemechanism to fetch and deliver objects to people [34–38]. None of these as moving obstacles, thus facilitating safe navigation of the publications evaluate their robot grasping from floor, and robot. none evaluate the process of approaching an object and 14 Journal of Robotics Trainer You Trainer You x0 x0 We will start with your right arm. Follow me. Try to move both arms at the same time. (a) (b) Figure 14: (a) Avatar mirroring the trainer’s movement proved easier for users to follow. (b) Correction suggested by the system to the user. grasping it as a combined action. Detection of the user and interface was allocated for the instructions at the beginning recognition of a pointing gesture were performed using the of each exercise and also for any feedback and guidance to work presented in [19, 31]. Checks are performed to rule out the user when needed. eTh design and development of the unintentional or wrong pointing gestures and to enhance the tfi ness application are described in more detail in [30]. The accuracy of the detected pointing gesture. tfi ness application was explained to the participants of the A plausibility check tests if the pointing gesture is point- trials by the facilitator at the initial introduction of the system ing towards the floor. To guarantee an exact position of the during the installation day. eTh participants could access the robot to bring the arm in a position where the gripper can application if desired at any time. Almost all users tried the approach the object in a straight line before closing, the tfi ness application at least once with some using it multiple accurate movement to the grasping position can be done as times during the three-week evaluation period. From the a relative movement to the object instead of using the global comments received during the mid-term and end-of-trial navigation. This is a crucial step as the region, in which the interviews, it can be concluded that the overall concept of head camera is able to perceive objects and where the 6- having the tfi ness program as a feature of the robot received DoF arm is able to perform a movement straight down to positive marksbymanyofusers as farasitsusefulness and the floor without changing gripper orientation, is limited importance are concerned. However, most users who tried it outsaidthattheywouldhave likedittobemorechallenging to 15× 10 cm . For calculating grasps, we use the method and to offer a larger variety of exercise routines with various of Height Accumulated Features [39]. eTh se features reduce challenging levels to choose from. the complexity of a perceived point cloud input, increase the value of given information, and hence enable the use of machine learning for grasp detection of unknown objects in 5. Field Trials cluttered and noncluttered scenes. We conducted eld fi trials in the households of 18 PU with 4.10. Fitness Application. The tfi ness application was intro- 5 Hobbit robots in Austria, Greece, and Sweden. eTh trials duced as a feature to the Hobbit robot after the PT1 trials lasted∼21 days for each household, resulting in a total of 371 and was made available during the PT2 trials for evaluation. days.Duringthistime,therobotswereplacedinthehomesof The motivation behind this application comes from the fact 18 older adults living on their own, where users could use and that physical activity can have a significant positive impact explore the robot on a 24/7 basis. Detailed results of the trials on the maintenance or even on the improvement of motor will be published elsewhere; preliminary results can be found skills, balance, and general physical well-being of elderly in [40] (a rfi st analysis only of the robot log data without any people, which in turn can lower the risk of falls in the long cross-analysis to the other data collected) and in [41] (a rfi st run. Basedonfeedback fromtheCommunity andActive overview on the methodological challenges faced during the Ageing Center of the municipality of Heraklion, Greece, the field trails). following requirements were produced. The exercises must (1) The trial sample consisted of 16 female and 2 male PU; be easy to learn, (2) target different joints and muscles, (3) their age ranged from 75 to 90 years (𝑀 = 79.67). All PU provide appropriate feedback to the user, (4) keep the user were living alone, either in flats (13 participants) or in houses. engaged while providing enough breaks, and (5) be designed In adherence with inclusion criteria set by the research to be performed from a seated position. consortium, all participants had fallen in the last two years Based on these requirements and feedback from test or were worried about falling and had moderate impairments users, we developed an application including three difficulty in at least one of the areas of mobility, vision, and hearing. 15 levels and seven different exercises. The user interface con- PU had some form of multiple impairments. Furthermore, all sisted of a split view of a video recording of the actual trainer participants had sufficient mental capacity to understand the performing each exercise on the left side and an avatar figure project and give consent. In terms of technology experience, depicting the user’s movement while executing the instructed 50.0% of the PU stated that they were using a computer every exercise on the right side as shown in Figure 14. This side- day, 44.45% stated that they were never using a computer or to-side viewing setup allowed the user to compare his or her used it less than once a week, and only one participant used a movements to those of the trainer. eTh bottom part of the computer twotothree timesaweek. Journal of Robotics 15 Before the actual trials, the PU were surveyed to make 6. Lessons Learned sure that they matched the criteria for inclusion and to discuss Based on all the insights gained from developing and testing possible necessary changes to their home environments for Hobbit in the efi ld, we can summarize the following rec- the trials (e.g., removing carpets and covering mirrors). After ommendations for fellow researchers in the area of socially an informed consent was signed, the robot was brought into assistive robots for enabling independent living for older the home and the technical setup took place. Aeft r this setup, adults in domestic environments. a representative from the elderly care facility explained the study procedure and the robot functionalities to the PU in 6.1. Robot Behavior Coordination. The developed behavior an individual open-ended manner. Aeft rwards, a manual was control based on a state-machine proved to be very useful and leftwithinthehouseholdincaseparticipantswantedtolook allowed us to implement many extensions in a short time. A up a functionality during the 21 days. All users experienced close interconnection with the user was therefore helpful. In two behavioral roles of the robot. eTh robot was set to device- the following, we present our main lessons learned regarding mode until day 11 when it was switched to companion-mode theimplementationofthe robotbehavior. (i.e., Mutual Care). The real-world environment in which the el fi d tests took place bears certain challenges, such as 6.1.1. Transparency. Actions and their eects ff need to be unforeseen changes in the environment and uncontrollable communicated in a clear fashion so that the robot’s pre- settings. Assessment by means of qualitative interviews and sented functionality can be fully understood by the user. questionnaires took place at four stages of each trial: before Users reported missing or nonworking functionality (e.g., trial, midterm, end of trial, and aer ft trial (i.e., one week aer ft reminders not being delivered to them and patrol not being the trial had ended). Moreover, log data was automatically executed). Most of these reported issues were caused by recorded by the robot during the whole trial duration. eTh the fact that the users did not understand the technical field trial methodology is comparable to similar studies (e.g. interdependencies between robot functions. For example, if [42]). acommand wasnot availableduetoacertain internal state The efi ld trials revealed that several functions of the of the robot, the user was not aware of this and did not robot lack stability over time. os Th e technical issues certainly understandtheshownbehavioroftherobot.Thesefunctional influenced the evaluation of the system because a reliable relations need to be made explicit and stated more clearly to working technical system is a prerequisite for positive user the users. experience. We tried to minimize potential negative feelings due to potential malfunctioning by informing our users that 6.1.2. Legibility. The log data and conversations with par- aprototype of arobot is averycomplextechnicalsystemthat ticipantsrevealedthatthe robotneedstocommunicateits might malfunction. Additionally, they were given the phone intentions. For instance, when the robot proactively moved number of the facilitator who was available for them around out of its charging station, the user was not always aware what the clock, 7 days per week, for immediate support. However, was going to happen next. When they did not understand malfunctions certainly had an influence on subjects’ answers whattherobot wasdoing,theycanceledtherobot’saction, during the assessments and may have attracted attention with eeff ctively stopping part of the robot’s benefit to them. To theresultthatthesubtlebehavioralchangesintroducedbythe work around this, a robot needs to clearly state the reason switch from device-mode to companion-mode may have been of its action and which goal it is trying to achieve when shifted out of the attentional focus. Availability of commands performing an autonomously started task. wasequallydistributedacrossthetwophasesof Mutual Care as canbeseeninTable2.Pleasenote that unavailability or 6.1.3. Contradictory Commands. Log data presented an inter- malfunctioning of functions in one but not the other mode esting effect while interacting with the touchscreen. When (unequal distribution of functionality) would have led to moving the hand towards the touchscreen on the robot, a bias within the evaluation. Table 2 gives an overview of the gesture recognition system detected the movement of the functional status across all PU during the efi ld trials. the hand as the come closer gesture, shortly followed by It is based on the combination of (i) a check of the robot’s a command from the touch input on the GUI. We could features by the facilitator during the preassessment, midterm replicate this behavior later on in our internal tests in the lab. assessment, and end-of-trial assessments, (ii) protocols of the A simple solution for such contradictions of commands is to calls of the users because they had a problem with the robot, simply wait forashortperiodoftime(less than0.2seconds) and (iii) analysis of the log data by technical partners. before a gesture close to the robot is processed by the behavior eH Th obbit efi ldtrials marked thefirsttimeanauton- coordination system to wait for a possibly following touch omous, multifunctional service robot, able to manipulate input. objects, was put into the domestic environment of older adults for a duration of multiple weeks. Our field trials provided insight into how the elderly used the Hobbit robot 6.1.4. Transparency of Task Interdependencies. The interviews and which functionalities they deemed useful for themselves revealed that the interdependencies between the tasks were andhowtherobotinufl encedtheirdailylife.Furthermore,we not clear to the user; the best example was the learn-and- couldshowthatitisinprincipal feasibletosupportelderly bring-object task. As described, for the bring-object task, the with a low-cost, autonomous service robot controlled by a objectfirsthadtobelearned so that it canbefound in the rather simple behavior coordination system. apartment. However, this fact needs to be remembered by 16 Journal of Robotics ff ff Table 2: System reliability across 18 PU. (a) Call Come Stop Pick up Teach a new Bring object Calendar Move to muc mode Statistics Emergency Follow me Hobbit closer Hobbit object object to user reminders location Days total 226 226 226 226 226 226 226 226 226 226 Days of introduction 31 31 31 31 31 31 31 31 31 31 Days switched o 55 55 55 55 55 55 55 55 55 55 Days in use 140 140 140 140 140 140 140 140 140 140 Device Days when feature was not 14 13 13 19 84 12 79 49 105 15 working Days when feature was only 23 20 22 44 47 116 62 83 13 32 partially working Days total 148 148 148 148 148 148 148 148 148 148 Days switched o 20 20 20 20 20 20 20 20 20 20 Days in use 128 128 128 128 128 128 128 128 128 128 Days when feature was not Companion 14 10 8 12 83 17 85 54 92 9 working Days when feature was only 25 16 17 38 40 95 43 64 18 29 partially working Device Working over days in use 81.79% 83.57% 82.86% 70.71% 23.21% 50.00% 21.43% 35.36% 20.36% 77.86% Companion Working over days in use 79.30% 85.94% 87.11% 75.78% 19.53% 49.61% 16.80% 32.81% 21.09% 81.64% (b) Go Take a Surprise Entertainment Entertainment muc mode Statistics Telephone Information Entertainment games Reward recharge break me audio fitness Days in total 226 226 226 226 226 226 226 226 226 Days of introduction 31 31 31 31 31 31 31 31 31 Days feature was disabled 55 55 55 55 55 55 55 55 55 Days of feature in use 140 140 140 140 140 140 140 140 140 Device Days when feature was not 19 16 19 11 11 11 23 20 11 working Days when feature was only 20 6 22 9 23 7 20 27 6 partially working Days in total 148 148 148 148 148 148 148 148 148 Days feature was disabled 20 20 20 20 20 20 20 20 20 Days of feature in use 128 128 128 128 128 128 128 128 128 Days when feature was not Companion 10 8 22 8 8 7 26 19 14 working Days when feature was only 33 9 16 7 23 8 11 23 7 partially working Device Working over days in use 79.29% 86.43% 78.57% 88.93% 83.93% 89.64% 76.43% 76.07% 90.00% Companion Working over days in use 79.30% 90.23% 76.56% 91.02% 84.77% 91.41% 75.39% 76.17% 86.33% Journal of Robotics 17 the user and as this is oeft n not the case, users wanted to ask 6.2.1. Internet Connectivity. Internet connectivity was not Hobbittobringthemanobjecteventhoughithadnotlearned reliable depending on location and time. While in most any objects before. In this specific case, the problem could be countries Internet (line-based or mobile) coverage is no easily fixed by only oer ff ing the task ”bring object” when an problem in general, local availability and quality vary sig- object was actually learned beforehand (e.g., the task could be nificantly, which makes Internet-based services difficult to greyed out in the MMUI). implement for technically unaware users. The integration of rich Internet-based content into the interaction therefore 6.1.5. Full Integration without External Programs. The han- lacks usability in case of intermittent connectivity. dling of user input and output must be fully integrated with the rest of the robot’s software architecture to be able to handle 6.2.2. Graphical User Interface. The GUI could be person- interruptions and continuations of interaction between the alized by the user for increased comfort during interaction. user and the robot. The user interface on the tablet computer This, however, shows the need for localized content to be (MMUI) incorporated multiple external programs (e.g., Flash available. As the setup phase during the trials showed that games, speech recognition, and the tfi ness functionality). As PU are likely not aware what content is available, some those were not directly integrated, the behavior coordination (remote) support and knowledge from SU are necessary for was not aware about their current state, leading to multiple the configuration of the user interface. interaction issues with users. For example, a game would be exiting when a command with higher priority (e.g., 6.2.3. Speech Recognition. Field trials showed that speech emergency from fall detection) would start the emergency recognition is still not working well for many users. Despite scenario. External programs need to be included in a way that the overall acceptable recognition rate that varies largely from makesitpossibletosuspend andresumetheirexecutionat user to user and from language to language and that is based any time. on the environment and distance, users often do not support the needs of current ASR technology for clearly expressed 6.1.6. Avoiding Loops. Reviewing the log data revealed that and separated commands in normal voice. eTh Sweet-Home the behavior coordination system could be trapped in a projectoncemoreemphasizesthe nfi dingsfromtheDiRHA loop without a way to continue the desired behavior execu- 2 project that practical speech recognition for old people in tion. eTh behavior coordination needs to provide a fallback the home environment is still a major challenge by itself [43]. solution in case of a seemingly endless loop in any part However, our ASR provided a positively experienced natural of the behavior. eTh behavior coordination communicates input channel when used in a multimodal HRI, where the with theMMUIinawaythatdoesnot provideimmediate touchscreen with its GUI provides a reliably working base. feedback over thesamechannelsofcommunication.Dueto timing issues, it occurred that a reply was lost between the 6.2.4. Smarthome Integration. eTh setup phase during the communicating partners (i.e., the fact that the robot stopped efi ld trials showed that the integration into smarthome envi- speech output). From there on, the behavior coordination ronments can be beneficial. Field trials showed that context was in a state that should not be reached and was unable awareness and adaptations highly impact the acceptance of to continue program execution in the desired manner. us, Th therobot.Imaginedfeaturescould be automaticon/offof the the communication structures should always have a fallback light or the stove or adjusting the proactively level of the robot solution to continue execution as well as the feedback based on the user’s mood. data on the same channels to prevent such a stop in a scenario. 6.2.5. Remote End User Control. Reflecting on the field trial indicates that a potential valuable extension of the interaction 6.2. Human-Robot Interaction with the MMUI. The inter- modalities would be a remote control of the robot, for action with the user was based on a multimodal user instance, on a smartphone enabling PU but also maybe SU interface that was perceived as easy to use during our to control the robot from outside the home. Potential useful field trials. While touch input turned out to be the most scenarios could be to send the robot to the docking station or reliable modality, speech and gesture interaction was highly to patrol the flat and search for an object or the PU or the SU welcome. Many of the entertainment functions of the MMUI video calling the PU. relied on Internet connectivity. Many users either were not interested in some UI features which therefore should be 6.3. Implementation of Mutual Care Behavior. In the begin- removed or asked for special configuration of the preferred ning ofthetrials, we implementedMutualCareinsuch features (e.g., selection of entertainment). The main way a fashion that in the companion mode the robot offers to theuserwas able tocommunicateremotelywithHobbit return the favor aer ft every interaction with the user. This was with the use of physical switches (call buttons) placed was done in order to guarantee that the users would notice at several xe fi d places inside the house of the user. The the difference between the modes during the interaction. eTh user had to physically go to the designated switch spot positive fact was that users noticed the changes. However, andpress theswitchfor therobot to approach her/him. they were soon very annoyed by the robot. Consequently, A smartphone/tablet application could be developed to we changed this implementation during the running trials. allow a better remote communication experience with the eTh return of favor frequency was reduced; it was no longer robot. oer ff ed aer ft the commands Recharge batteries, Go to, Call 18 Journal of Robotics button,and Surprise. Further feedback from the second and To conclude, we believe that methods, results, and lessons third Austrian and the second and third Swedish users led to learned presented in this article constitute valuable knowl- further reduction of the return a favor frequency to offering edge for fellow researchers in the eld fi of assistive service it only aeft r the following three commands: robotics and serve as a stepping stone towards developing affordable care robots for the aging population. (1) Pick up command (favor: Hobbit oeff rs music: I’d like to return the favor. I like music. Shall I play some music Conflicts of Interest for you?) (2) Learn object command (favor: Hobbit oer ff s to play eTh authors declare that there are no conflicts of interest. a game (suitable because the user is already sitting down): I’dliketoreturnthefavor. Doyouwanttoplay Acknowledgments agame?) (3) Reward command (favor: Hobbit oeff rs to surprise This research has received funding from the European Com- the user: I’dliketoreturnthefavor. Ilikesurprises.Do munity’s Seventh Framework Programme (FP7/2007–2013) you want a surprise?) (Grant Agreement no. 288146, Hobbit). However, as the interviews showed, these behavioral changes were no longer recognized by the users. Similarly, the differ- References encesinproactivityandpresencewerenotreflectivelynoticed [1] P. Dario, P. F. M. J. Verschure, T. Prescott et al., “Robot by the users, but the changes in dialogue were noticed. companions for citizens,” Procedia Computer Science,vol.7,pp. 47–51, 2011. 6.3.1. Help Situations. For the development of Mutual Care [2] J.M.Beer, C.-A.Smarr,T.L.Chenetal.,“ed Th omesticated behavior in completely autonomous scenarios, which helping robot: Design guidelines for assisting older adults to age in situations the robot can really identify in order to ask for place,” in Proceedings of the 7th Annual ACM/IEEE International help and how the robot can notice that it actively recovered Conference on Human-Robot Interaction, (HRI ’12), pp. 335–342, through the help have to be considered. USA, March 2012. [3] D. Fischinger, P. Einramhof, K. Papoutsakis et al., “Hobbit - The Mutual Care Robot,” in Assistance and Service Robotics in 6.3.2. Design of Neediness. In the interviews, PU reflected a Human Environment Workshop in conjunction with IEEE/RSJ that they did not really recognize that the robot needed their International Conference on Intelligent Robots and Systems,2013. input to continue its task. For Mutual Care, the need of help [4] J.Bohren,R. B.Rusu,E.G.Jonesetal.,“Towards autonomous seems to be essential. For future version of the robot, how robotic butlers: Lessons learned with the PR2,” in Proceedings to design the neediness needs to be considered. This could of the 2011 IEEE International Conference on Robotics and be achieved with facial expressions, sounds, or movements. Automation, (ICRA ’11), pp. 5568–5575, China, May 2011. Also for behaviors such as presence and proactivity, the robot [5] Y. Demiris and B. Khadhouri, “Hierarchical attentive multiple could say aer ft an interaction: “I would prefer staying with you models for execution and recognition of actions,” Robotics and in your room” or proactivity (e.g., “I would like to spend more Autonomous Systems,vol.54, no.5,pp.361–369,2006. time with you” before offering an activity). This would give a [6] T. Carlson and Y. Demiris, “Collaborative control for a robotic better explanation of the robot’s behavior to the user and an wheelchair: evaluation of performance, attention, and work- expected raise of acceptance. load,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 876–888, 2012. [7] M. Cashmore, M. Fox, D. Long et al., “Rosplan: Planning in the 7. Conclusions robot operating system,” in Proceedings of the 25th International Conference on Automated Planning and Scheduling, (ICAPS ’15), In this article, we presented the second prototypical imple- pp. 333–341, June 2015. mentation of the Hobbit robot, a socially assistive service [8] M.MansouriandF.Pecora,“More knowledgeonthe table: robot. We presented the main functionality it provided, as Planning with space, time and resources for robots,” in Proceed- well as the behavior coordination that enabled autonomous ings of the 2014 IEEE International Conference on Robotics and interaction with the robot in real private homes. Hobbit Automation, (ICRA ’14), pp. 647–654, China, June 2014. is designed especially for fall detection and prevention, [9] S.Goetze, S. Fischer,N. Moritz,J.E.Appell,andF.Wall- providing various tasks (e.g., picking up objects from the hoff, “Multimodal Human-Machine Interaction for Service floor, patrolling through the flat, and employing reminder Robots in Home-Care Environments,” in Proceedings of the 1st functionalities), and supports multimodal interaction for Workshop on Speechand Multimodal InteractioninAssistive dieff rent impairment levels. We focused on the development Environments,2012. of a service robot for older adults, which has the potential [10] H. S. Koppula, A. Jain, and A. Saxena, “Anticipatory planning to promote aging in the home and to postpone the need to for human-robot teams,” Springer Tracts in Advanced Robotics, move to a care facility. Within the field trials, we reached vol. 109, pp. 453–470, 2016. the desirable long-term goal that a mobile service robot [11] P. Kwang-Hyun and Z. Zenn Bien, “Intelligent sweet home with manipulation capabilities enters real homes of older for assisting the elderly and the handicapped,” in Independent adults and showed its usefulness and potential to support Living for Persons with Disabilities and Elderly People: ICOST,p. independent living for elderly users. 151, 2003. Journal of Robotics 19 [12] T. Fukuda, P. Di, F. Chen et al., “Advanced service robotics [26] M. Wolf ¨ el and J. McDonough, Distant Speech Recognition,John for human assistance and support,” in Proceedings of the Wiley & Sons, 2009. International Conference on Advanced Computer Science and [27] P. Panek and P. Mayer, “Challenges in adopting speech control Information Systems, (ICACSIS ’11), pp. 25–30, December 2011. for assistive robots,” in Ambient Assisted Living,AdvancedTech- nologies and Societal Change, pp. 3–14, Springer International [13] D.-J. Kim, R. Lovelett, and A. Behal, “An empirical study with simulated ADL tasks using a vision-guided assistive robot arm,” Publishing, Berlin, Germany, 2015. in Proceedings of the 2009 IEEE International Conference on [28] D. I. Kosmopoulos, K. Papoutsakis, and A. A. Argyros, “Online Rehabilitation Robotics, (ICORR ’09), pp. 504–509, Japan, June segmentation and classification of modeled actions performed in the context of unmodeled ones,” in Proceedings of the [14] M. B. Hong, S. J. Kim, T. Um, and K. Kim, “KULEX: An 25th British Machine Vision Conference, (BMVC ’14),pp. 1–12, September 2014. ADL power-assistance demonstration,” in Proceedings of the 10th International Conference on Ubiquitous obots and Ambient [29] D. Michel and K. Papoutsakis, “Gesture Recognition Appara- Intelligence, (URAI ’13), pp. 542–544, November 2013. tuses,” MethodsandSystemsforHuman-MachineInteraction, [15] L. Lammer, A. Huber, A. Weiss, and M. Vincze, “Mutual Care: How older adults react when they should help their care [30] M. Foukarakis, I. Adami, D. Ioannidi et al., “A robot-based Robot,” in Proceedings of the 3rd international symposim on New application for physical exercise training,” in Proceedings of the Frontiers in Human-Robot interaction,April 2014. 2nd International Conference on Information and Communica- tion Technologies for Ageing Well and e-Health, (ICT4AWE ’16), [16] T. Kor ¨ tner, A. Schmid, D. Batko-Klein et al., “How social robots pp.45–52,April2016. make older users really feel well - A method to assess users’ concepts of a social robotic assistant,” in Proceedings of the [31] D. Michel and A. Argyros, “Apparatuses, methods and systems InternationalConferenceonSocialRobotics,vol.7621, pp.138– for recovering a 3-dimensional skeletal model of the human 147, Springer, 2012. body, 2016”. [17] D. Fischinger, P. Einramhof, K. Papoutsakis et al., “Hobbit, a [32] P. Viola and M. J. Jones, “Robust real-time face detection,” care robot supporting independent living at home: First pro- International Journal of Computer Vision,vol.57, no.2,pp. 137– totype and lessons learned,” Robotics and Autonomous Systems, 154, 2004. vol. 75, pp. 60–78, 2016. [33] P.Panteleris andA.A.Argyros,“Vision-based SLAMand [18] P. DeLaPuente,M.Bajones,P.Einramhof, D.Wolf,D. moving objects tracking for the perceptual support of a smart Fischinger, and M. Vincze, “RGB-D sensor setup for multiple walker platform,” Computer Vision-ECCV 2014 Workshops,vol. tasksofhomerobotsandexperimental results,”in Proceedings 8927, pp. 407–423, 2014. of the IEEE/RSJ International Conference on Intelligent Robots [34] A. Remazeilles, C. Leroux, and G. Chalubert, “SAM: A robotic and Systems, (IROS ’14), pp. 2587–2594, USA, September 2014. butler for handicapped people,” in Proceedings of the 17th [19] D. Michel, K. Papoutsakis, and A. A. Argyros, “Gesture recogni- IEEE International Symposium on Robot and Human Interactive tion supporting the interaction of humans with socially assistive Communication, RO-MAN, pp. 315–321, Germany, August 2008. robots,” in Advances in Visual Computing, vol. 8887 of Lecture [35] S. S. Srinivasa, D. Ferguson, C. J. Helfrich et al., “HERB: A home Notes in Computer Science, pp. 793–804, Springer International exploring robotic butler,” Autonomous Robots,vol.28,no.1,pp. Publishing, Berlin, Germany, 2014. 5–20, 2010. ´ ´ [20] J. Gonzalez-Jimenez, J. Ruiz-Sarmiento, and C. Galindo, [36] B. Graf, U. Reiser, M. Hag ¨ ele, K. Mauz, and P. Klein, “Robotic “Improving 2D Reactive Navigators with Kinect,” in Proceeding home assistant care-O-bot 3—product vision and innovation of the 10th International Conference on Informatics in Control, platform,” in Proceedings of the IEEE Workshop on Advanced Automation and Robotics (ICINCO ’13),2013. Robotics and Its Social Impacts (ARSO ’09), pp. 139–144, Tokyo, Japan, November 2009. [21] M. Jalobeanu, G. Shirakyan, G. Parent, H. Kikkeri, B. Peasley, and A. Feniello, “Reliable kinect-based navigation in large [37] D. Fischinger and M. Vincze, “Empty the basket - A shape indoor environments,” in Proceedings of the 2015 IEEE Interna- based learning approach for grasping piles of unknown objects,” tional Conference on Robotics and Automation, (ICRA ’15),pp. in Proceedings of the 25th IEEE/RSJ International Conference 495–502, USA, May 2015. on Robotics and Intelligent Systems, (IROS ’12),pp.2051–2057, Portugal, October 2012. [22] P. De La Puente, M. Bajones, Reuther. C., D. Fischinger, Wolf. D., and M. Vincze, “Experiences with rgb-d based navigation in [38] D. Fischinger and M. Vincze, “Shape based learning for grasping real home robotic trials,” in Austrian Robotics Workshop (ARW), novel objects in cluttered scenes,” in Proceedings of the 10th IFAC Symposium on Robot Control, (SYROCO ’12),pp. 787–792, Croatia, September 2012. [23] M. Shimrat, “Algorithm 112: Position of point relative to poly- gon,” Communications of the ACM,vol.5,no.8, p.434, 1962. [39] D. Fischinger, A. Weiss, and M. Vincze, “Learning grasps with topographic features,” International Journal of Robotics [24] M. L. Walters, K. Dautenhahn, S. N. Woods, and K. L. Koay, Research,vol.34, no.9,pp. 1167–1194, 2015. “Robotic etiquette: Results from user studies involving a fetch and carry task,” in Proceedings of the HRI 2007: 2007 ACM/IEEE [40] M. Bajones, A. Weiss, and M. Vincze, “Log data analysis of Conference on Human-Robot Interaction - Robot as Team Mem- long-term household trials: Lessons learned and pitfalls. 2016. ber,pp. 317–324, USA,March2007. Workshop on Challenges and best practices to study HRI in natural interaction settings,” in Proceeding of the International [25] P. Mayer, C. Beck, and P. Panek, “Examples of multimodal user Conference on Human-Robot Interaction,2016. interfaces for socially assistive robots in ambient assisted living environments,” in Proceedings of the 3rd IEEE International [41] J. Prip,fl T. K or ¨ tner, D. Batko-Klein et al., “Results of a real Conference on Cognitive Infocommunications, (CogInfoCom ’12), world trial with a mobile social service robot for older adults,” pp. 401–406, Slovakia, December 2012. in Proceedings of the 11th Annual ACM/IEEE International 20 Journal of Robotics Conference on Human-Robot Interaction, (HRI ’16), pp. 497-498, IEEE Press, New Zealand, March 2016. [42] F. Palumbo, D. La Rosa, E. Ferro et al., “Reliability and human factors in ambient assisted living environments,” Journal of Reliable Intelligent Environments,vol.3,no. 3,pp.139–157,2017. [43] M. Vacher, F. Portet, A. Fleury, and N. Noury, “Development of audio sensing technology for ambient assisted living: Appli- cations and challenges,” International Journal of E-Health and Medical Communications,vol.2,no.1, pp.35–54,2011. International Journal of Advances in Rotating Machinery Multimedia Journal of The Scientific Journal of Engineering World Journal Sensors Hindawi Hindawi Publishing Corporation Hindawi Hindawi Hindawi Hindawi www.hindawi.com Volume 2018 http://www www.hindawi.com .hindawi.com V Volume 2018 olume 2013 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 Journal of Control Science and Engineering Advances in Civil Engineering Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 Submit your manuscripts at www.hindawi.com Journal of Journal of Electrical and Computer Robotics Engineering Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 VLSI Design Advances in OptoElectronics International Journal of Modelling & Aerospace International Journal of Simulation Navigation and in Engineering Engineering Observation Hindawi Hindawi Hindawi Hindawi Volume 2018 Volume 2018 Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com www.hindawi.com www.hindawi.com Volume 2018 International Journal of Active and Passive International Journal of Antennas and Advances in Chemical Engineering Propagation Electronic Components Shock and Vibration Acoustics and Vibration Hindawi Hindawi Hindawi Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Journal of Robotics Hindawi Publishing Corporation

Loading next page...
 
/lp/hindawi-publishing-corporation/hobbit-providing-fall-detection-and-prevention-for-the-elderly-in-the-U9zyz5061C

References (55)

Publisher
Hindawi Publishing Corporation
Copyright
Copyright © 2018 Markus Bajones et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ISSN
1687-9600
eISSN
1687-9619
DOI
10.1155/2018/1754657
Publisher site
See Article on Publisher Site

Abstract

Hindawi Journal of Robotics Volume 2018, Article ID 1754657, 20 pages https://doi.org/10.1155/2018/1754657 Research Article Hobbit: Providing Fall Detection and Prevention for the Elderly in the Real World 1 1 1 1 1 Markus Bajones , David Fischinger, Astrid Weiss, Daniel Wolf, Markus Vincze, 2 3 3 4 Paloma de la Puente, Tobias Körtner, Markus Weninger, Konstantinos Papoutsakis, 4 4 4 4 Damien Michel, Ammar Qammaz, Paschalis Panteleris, Michalis Foukarakis, 4 4 4 4 4 Ilia Adami, Danai Ioannidi, Asterios Leonidis, Margherita Antona, Antonis Argyros, 5 5 6 6 Peter Mayer, Paul Panek, Håkan Eftring, and Susanne Frennert Automation and Control Institute (ACIN), TU Wien, Vienna, Austria Universidad Politecn ´ ica de Madrid, Madrid, Spain Akademie fur ¨ Altersforschung am Haus der Barmherzigkeit, Vienna, Austria Institute of Computer Science, FORTH, Heraklion, Greece Institute for Design and Assessment of Technology, TU Wien, Vienna, Austria Department of Design Sciences, Lund University, Lund, Sweden Correspondence should be addressed to Markus Bajones; markus.bajones@tuwien.ac.at Received 10 November 2017; Revised 13 February 2018; Accepted 19 March 2018; Published 3 June 2018 Academic Editor: Brady King Copyright © 2018 Markus Bajones et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We present the robot developed within the Hobbit project, a socially assistive service robot aiming at the challenge of enabling prolonged independent living of elderly people in their own homes. We present the second prototype (Hobbit PT2) in terms of hardware and functionality improvements following first user studies. Our main contribution lies within the description of all components developed within the Hobbit project, leading to autonomous operation of 371 days during field trials in Austria, Greece, and Sweden. In these field trials, we studied how 18 elderly users (aged 75 years and older) lived with the autonomously interacting service robot over multiple weeks. To the best of our knowledge, this is the first time a multifunctional, low-cost service robot equipped with a manipulator was studied and evaluated for several weeks under real-world conditions. We show that Hobbit’s adaptive approach towards the user increasingly eased the interaction between the users and Hobbit. We provide lessons learned regarding the need for adaptive behavior coordination, support during emergency situations, and clear communication of robotic actions and their consequences for fellow researchers who are developing an autonomous, low-cost service robot designed to interact with their users in domestic contexts. Our trials show the necessity to move out into actual user homes, as only there can we encounter issues such as misinterpretation of actions during unscripted human-robot interaction. 1. Introduction (3) sentientmachinesthatoeff rmultimodalcommunica- tion channels and are context-aware and trustable. Whilesociallyassistiverobotsareconsideredtobepotentially More and more companies and research teams present useful for society, they can provide the highest value to older service robots with the aim of assisting older adults (e.g., adults and homebound people. As reported in [1], future Giraff (http://www.giraff.org), Care-O-Bot (http://www.care- robot companions are expected to be o-bot.de), and Kompai (https://kompai.com)) with services (1) strong machines that can take over burdensome tasks such as entertainment, medicine reminders, and video tele- for the user, phony. Requirement studies on needs and expectations of (2) graceful and soft machines that will move smoothly older adults towards socially assistive robots [2] indicate and express immediate responses to their users, that they expect them to help with household chores (e.g., 2 Journal of Robotics cleaning the kitchen, bath, and toilet), lifting heavy objects, into an implementable behavior concept. Section 5 presents reaching for and picking up objects, delivering objects, and an overview on the efi ld trials. Lessons learned from the so forth. However, most of these tasks cannot satisfyingly development and testing of Hobbit and a summary and be performed by state-of-the-art robotic platforms; hardly conclusions are provided in Sections 6 and 7. any companion robot fulfills the requirements mentioned above and only very few robots entered private homes of 2. Related Work older adults so far. One of the biggest challenges is oer ff ing sufficient useful and social functionalities in an autonomous Moving towards autonomous service robots, behavior coor- and safe manner to achieve the ultimate goal of prolonging dination systems constitute an important building block to fulfill therequirementsofactionplanning,safetaskexecu- independent living at home. The ability of a robot to interact autonomously with a human requires sophisticated cognitive tion, and integration of human-robot interaction. HAMMER from Demiris and Khadhouri [5] is built upon the concept abilities including perception, navigation, decision-making, and learning. However, research on planners and cognitive of using multiple forward/backward control loops, which can architectures still faces the challenge of enabling flexibil- be used to predictthe outcomeofanactionand comparethis against the actual result of the action. Through this design, it ity and adaptation towards different users, situations, and environments while simultaneously being safe and robust. is possible to choose the action with the highest probability Toourconviction, forsuccessfullong-term human-robot of reaching the desired outcome, which has successfully been used in a collaboratively controlled wheelchair system [6], interaction with people in their private homes, robotic behavior needs to be above all safe, stable, and predictable. in order to correct the user’s input to avoid an erroneous During our eld fi trials, this became increasingly evident, as situation. Cashmore et al. [7] introduced ROSPlan, a frame- the users failed to understand the robot’s behavior during work that uses a temporal planning strategy for planning and some interaction scenarios. dispatching robotic actions. Depending on the needs, a cost function can be optimized for planning in a certain manner In this article, we present the Hobbit PT2 platform, referred to in the remainder of this article as Hobbit. A (e.g., time- or energy-optimized). However, the constructed former version of Hobbit has been presented in detail in plan is up until now only available as a sequence of executed actions and observed events, but no direct focus is put on [3]. Hobbit is a socially assistive robot that oer ff s useful personal and social functionalities to enable independent the human, besides modeling the user as means to acquire living at home for seniors. To the best of our knowledge, some event(e.g.,movinganobjectfromone location to another). Mansouri and Pecora [8] incorporate temporal the Hobbit trials mark the first time a social service robot offering multifunctional services was placed in users’ homes, andspatialreasoninginarobottaskedwithpickandplace operated autonomously and whose usage was not restricted in environments suited for users. In the context of ALIAS, by ascheduleoranyothermeans.Themaincontribution Goetze et al. [9] designed their dialogue manager for the of this paper is twofold. First, we give a description of the tasks of emergency call, a game, e-ticket event booking, and the navigation as state-machines. However, there are still hardware that is based on improvements derived from the rfi st user trials on the previous version of Hobbit. Second, signicfi ant research challenges regarding how to incorporate we describe the implemented functionality and its integration humans into the planning stages and decide when the robot needs to adapt to the user instead of staying with the planned into the behavior coordination system. The building blocks of the behavior coordination system are based on a set of task. hierarchical state-machines implemented using the SMACH Most of those behavior coordination and planning sys- framework [4]. Each behavior was built upon simpler build- tems treat the human as an essential part of the system ing blocks, each responsible for one specific task (e.g., speech [6] (e.g., for command input) and rely on the user to execute actions planned by the coordination system [10]. and text output, arm movements, and navigation) to add up to the complex functionalities presented in Sections 3.3 and Such systems only work under the precondition that the robot 4. Finally, we present the lessons learned from the eld fi trials will execute a given task for the user independently of the user input [8]. A crucial aspect, however, to successfully integrate in order to support fellow researchers in their developments of autonomous service robots for the domestic environment. a multifunctional service robot into a domestic environment We evaluated Hobbit during 371 days of field trials with vfi e is that itneedsnot onlytoreacttousercommandsbutalsoto proactively oeff r interaction and adapt to user needs (e.g., the platforms with older adults in their private homes in Austria, Greece, and Sweden. However, details on the eld fi trials will user wanting a break from the robot or a proactive suggestion be published elsewhere. for an activity they could perform together). Our proposed The paper proceeds as follows. Section 2 reflects on solution is based on state-machines, which reflect turn-taking relevant related work on behavior coordination for social in the interaction, providing adaptations within certain states (e.g., voice dialogues) or situations (e.g., user approach). We service robots and on studies of such robots outside of the laboratory environment. In Section 3, we give an overview on integrated the possibility not only to handle robot-driven the project vision for Hobbit and its historical development actions on a purely scheduled basis but also to adapt this schedulingandactions basedontheuser’scommands. uptotheHobbitPT2platform,followedbyadetaileddescrip- tion of its hardware and interaction modalities. Section 4 presents the behavior coordination system. We outline how 2.1. State of the Art: Robotic Platforms. According to a study we developed the interaction scenarios and transferred them conducted by Georgia Tech’s Healthcare Robotics Lab, people Journal of Robotics 3 with motor impairment drop items on average 5.5 times a currently running the Socially Assistive Robotics project day. Their small tele-operated Dusty (http://pwp.gatech.edu/ (https://www.nsf.gov/awardsearch/showAward?AWD ID= hrl/project dusty/) robots are developed for that purpose: 1139078) with partners Yale, University of Southern Cal- picking up objects from the floor, which they achieve with ifornia, MIT, Stanford, Tusft , and Willow Garage. eTh ir a scoop-like manipulator. Cody, a robotic nurse assistant, focusisonrobotsthatencouragesocial,emotional, can autonomously perform bed (sponge) baths. Current and cognitive growth in children, including those with work focuses on GATSBII (http://www.robotics.gatech.edu), social or cognitive deficits. eTh elder care robot Sil-Bot a willow Garage PR2, as a generic aid for older adults at (http://www.roboticstoday.com/robots/sil-bot) developed at home. eTh Care-O-Bot research platforms developed at the the Center for Intelligent Robotics (CIR) in Korea is devised Fraunhofer Institute (IPA) are designed as general purpose mainly as an entertainment robot to oer ff interactive games robotic butlers, with a repertoire from fetching items to that have been codeveloped with Seoul National University detecting emergency situations, such as a fallen person. Also Medical Center specifically to help prevent Alzheimer’s from Fraunhofer is Mobina (https://www.ipa.fraunhofer.de/ disease and dementia. Sil-Bot is meant to be a companion de/referenzprojekte/MobiNa.html), a small (vacuum-sized) that helps encourage an active, healthy body and mind. Its robot specicfi ally performing fallen person detection and shortflipper-like armsdo notallowforactualmanipulation. video calls in emergency. Carnegie Mellon University’s Another public-private partnership is the EC-funded HERB (https://personalrobotics.ri.cmu.edu/) is another CompanionAble project (http://www.companionable.net/), general-purpose robotic butler. It serves as the main research which created a robotic assistant for the elderly called Hector. platform at the Personal Robotics Lab, which is part of the eTh project integrates Hector to work collaboratively with a QualityofLifeTechnology(QoLT)Center. KAISTinKorea smart home and remote control center to provide the most has been developing their Intelligent Sweet Home (ISH) comprehensive and cost-efficient support for older people smart home technology including intelligent wheelchairs, living at home. intelligent beds, and robotic hoists [11]. Their system Hoaloha Robotics (http://www.hoaloharobotics.com/) also employs the bimanual mobile robot Joy to act as an in the United States are planning to bring their elder care intermediary between these systems and the end user. robottomarketsoon.Basedonafairlystandard mobile Robotdalen (http://www.robotdalen.se), a Swedish public- platform oer ff ing safety and entertainment, they focus on private consortium, has funded the development of needed an application framework that will provide integration robotic products such as Bestic (http://www.camanio.com/ of discrete technological solutions like biometric devices, en/products/bestic/), an eating device for those who cannot remote doctor visits, monitoring and emergency call services, feed themselves; Giraff, a remote-controlled mobile robot medication dispensers, online services, and the increasing with a camera and monitor providing remote assistance and number of other products and applications already emerging security; or TrainiTest, a rehabilitation robot that measures for the assistive care market. Japan started a national andevaluates thecapacityofmuscles andthensetsthe initiative in 2013 to foster development of nursing care resistance in the robot to adapt to the users’ individual robots and to spread their use. The program supports 24 training needs. Remote presence robots have recently turned companies in developing and marketing their elderly care up in a variety of forms, from simple Skype video chats technologies, such as the 40 cm tall PALRO conversation on a mobility platform (Double Robotics (https://www robot (https://palro.jp/) that offers recreation services by .doublerobotics.com/)) to serious medical assistance remote playing games, singing, and dancing together with residents presence robots such as those provided by the partnership of a care facility. Another example is the helper robot by between iRobot and InTouch Health (https://www.intouch- Toyota, which is mostly remotely controlled from a tablet PC. health.com/about/press-room/2012/InTouch-Health-and-iR- Going specifically beyond entertainment capabilities, Waseda obot-to-Unveil-the-RP-VITA-Telemedicine-Robot.html), University’s Twendy One (http://www.twendyone.com) is Gira,ff and VGo Communications’ postop pediatric at-home asophisticatedbimanual robotthatprovides human robots (http://www.vgocom.com/) for communication with safety assistance, dexterous manipulation, and human- parents, nurses, doctors, and patients. friendly communication. It can also support a human to lift Another class of robots aims more specicfi ally at well- themselves from a bed or chair. Going even further, the RIBA- being of older adults. eTh recently completed FP7 project M II robot (http://rtc.nagoya.riken.jp/RIBA/index-e.html) by obiserv (https://cordis.europa.eu/project/rcn/93537 en.html) RIKEN-TRI Collaboration Center for Human-Interactive aimed to develop solutions to support independent living Robot Research (RTC) can lift patients of up to 80 kg of older adults as long as possible, in their home or in from abedtoawheelchairand back.ThePepperrobot various degrees of institutionalization, with a focus on (https://www.ald.softbankrobotics.com/en/robots/pepper) health, nutrition, well-being, and safety. es Th e solutions from Softbank Robotics (Aldebaran) is used in a growing encompass smart clothes for monitoring vital signs, a smart number of projects focusing on human-robot interaction home environment to monitor behavioral patterns (e.g., scenarios. Some ADL (activities of daily living) tasks eating) and detect dangerous events, and a companion are directly addressed by walking aids, for example [12], robot. eTh robot’s main role is to generally activate, and cognitive manipulation training, for example, using stimulate, and offer structure during the day. It also reminds exoskeletons [13, 14]. its user of meals, medication, and appointments and The short overview indicates that individually many ADL encourages social contacts via video calls. The US NSF is tasks are approached. However, they all require different types 4 Journal of Robotics of robots. eTh goal of grasping objects from the floor, while themaincharacteristics ofthe Mutual Care mode were the at thesametimekeeping therobot aoff rdable,has ledusto following: (1) return of favor: Hobbit asked if it could return design and build the custom Hobbit platform. Moreover, the thefavor aeft r situations wherethe user hadhelpedHobbit robotshouldoeff reverydaylifesuitabletasks inasocially to carry out a task, (2) communication style: Hobbit used the interactive manner to be sustainably used by the older adults. user’s name in the dialogue and was more human-like such as responding to a reward from the user by saying You are welcome instead of Reward has been received,(3)proactivity: 3. The Hobbit Robot Hobbit was more proactive and initiated interactions with Hobbit isable toprovideanumberofsafety andenter- the user, and (4) presence: Hobbit stayed in the room where tainment functions with low-cost components. The ability the last interaction has taken place for at least 30 minutes to provide many functions with sometimes contradictory instead of heading directly back to the charging station. In requirements for the hardware design creates a demanding order to avoid potential biases, users were not told about the challengeonitsown.Tothebestofourknowledge, we arethe behavioral change of the robot beforehand. rst fi to present a robot that operates in users’ homes in a fully autonomous fashion for a duration of 21 days per user, while 3.3. Development Steps Leading to Hobbit. To gain insight providing an extensive set of functionalities like manipulation into the needs of elderly living alone, we invited primary of objects with an included arm. users (PU), aged 75 years and older and living alone, and secondary users (SU), who are in regular contact with the 3.1. General Vision. The motivation for Hobbit’s development primary users, to workshops in Austria (8 PU and 10 SU) was to create a low-cost, social robot to enable older adults and Sweden (25 PU). A questionnaire survey with 113 PU in to independently live longer in their own homes. One reason Austria, Greece, and Sweden and qualitative interviews with for the elderly to move into care facilities is the risk of falling 38 PU and 18 SU were conducted. This iterative process [16] and eventually inflicted injuries. To reduce this risk, the not only resulted in the user requirements but also influenced “must-haves” for the Hobbit robot are emergency detection the design and material decisions, which were incorporated (the robot patrolling autonomously through the flat after into the development of the Hobbit robots as seen in Figure 1. three hours without any user activity and checking if the Based on these requirements and laboratory studies with user is well and did not suffer a fall), emergency handling the PT1 platform [17] with 49 users (Austria, Greece, and (automatic calls to relatives or emergency services), and fall Sweden), the following main functionalities for Hobbit were prevention (searching and bringing known objects to the selected: user and picking up objects from the floor pointed to by (1) Call Hobbit:summonthe robottoaposition linkedto the user and a basic tfi ness program to enhance the user’s battery-less call buttons overall tfi ness). Hobbit also provides a safety check feature that informs the user about possible risks in specific rooms (e.g., (2) Emergency: call relatives or an ambulance service. wet floor in the bathroom and slippery carpets on wooden This can be triggered by the user from emergency floors) and explains how to reduce such risks. buttons and gesture commands or by the robot during In science fiction, social robots are oeft n depicted as a patrolling butler, a fact that guides the expectations towards such robots. (3) Safety check:guidetheuserthrough alistofcommon However, as state-of-the-art technology is not yet able to risk sources and provide information on how to fulfill these expectations, Hobbit was designed to incorporate reduce them the Mutual Care interaction paradigm [15] to overcome the robot’s downfalls by creating an emotional bond between the (4) Pick up objects: objects lying on the floor are picked users and the robot. The Mutual Care concept envisioned that up by the robot with no distinction between known the user and the robot provide help in a reciprocal manner or unknown objects to each other, therefore creating an emotional bond between (5) Learn and bring objects:visuallearningofuser’s them, so that the robot not only provides useful assistance objectstoenabletherobottosearchandfindthem but also acts as a companion. The resulting system complexity within the environment based on the multifunctionality was considered as acceptable (6) Reminders: deliver reminders for drinking water and to fulfill the main criteria ( emergency detection and handling, appointments directly to the user fall prevention,and providing a feeling of safety). (7) Transport objects: reduce the physical stress on the user by placing objects on to the robot and letting it 3.2. Mutual Care as Underlying Interaction Paradigm. The transport them to a commanded location Mutual Care concept was implemented through two different social roles, one that enforces this concept and one that does (8) Go recharging: autonomously, or by a user command, not. Hobbit started in the Mutual Care-disabled mode during move to the charging station for recharging the el fi d trials and changed after 11 days to the Mutual Care (9) Break: put the robot on break when the user leaves the mode. eTh differences between these two modes or social flat or when the user takes a nap roles of the robot were mainly in its dialogues, proactivity, andthe proximityinwhich therobot wouldremainwhen (10) Fitness: guided exercises that increase the overall the user stops interacting with the robot. In more detail, fitness of the user Journal of Robotics 5 (a) (b) (c) Figure 1: (a–c) First mock-ups designed by secondary users: the first (PT1) and second generation of Hobbit as used during the field trials. (11) Entertainment: brain training games, e-books, and 3.4.1. Visual Perception System Using RGB-D Cameras. For music the visual perception system, Hobbit is equipped with two Asus Xtion Pro RGB-D sensors. eTh head camera is mounted 3.4. Robot Platform and Sensor Setup. The mobile platform of insidetheheadandusedforobstacleavoidance,objectlearn- the Hobbit robot has been developed and built by MetraLabs ing and recognition, user detection, and gesture recognition (http://www.metralabs.com). It moves using a two-wheeled andtodetectobjectstopickup.Since theheadcanperform dieff rential drive, mounted close to the front side in driving pan and tilt movements, the viewing angle of this camera direction. For stability, an additional castor wheel is located canbedynamicallyadaptedtoaparticular task at hand.In closetothe back.Tofitall thebuilt-insystemcomponents, contrast, the bottom camera, used for localization, mapping, the robot has a rectangular footprint with a width of 48 cm anduserfollowing,ismounted atafixedpositionataheight and a length of 55 cm. For safety reasons, a bumper sensor of 35 cm in the front of the robot’s body, facing forward. This surrounds the base plate, protecting the hull and blocking setup is a trade-off between the cost of the sensor setup (in the motors when being pressed. This ensures that the robot terms of computational power and money) and the necessary stops immediately if navigation fails and an obstacle is hit. data for safe usage and feature completeness, which we found An additional bumper sensor is mounted below the tablet PC, to be most suitable for the variety of different tasks that which provides the graphical user interface. During situations require visual perception. in which the user might not be able to reach the tablet PC The cameras, which only cost a fraction of laser range (e.g. the person has fallen), a hardware emergency button is sensors commonly used for navigation in robotics, offer a located on the bottom front side. resolution of 640× 480 pixels of RGB-D data and deliver On its right side, the robot is equipped with a 6-DoF useful data in a range of approximately 50 cm to 400 cm. arm with a two-finger n-ra fi y gripper, such that objects lying eTh refore,our system hastobeabletocopewithablind on thefloorcan bepickedupandplacedinatray on top spot in front of the robot. Furthermore, the quality of data of the robot’s body. Furthermore, the arm can grasp a small acquiredwiththeheadcamerafromanobservedobjectvaries turntablestoredontherightsideofthebody,which isused depending on the task. For example, in the learning task, an to teach the robot unknown objects. object that is placed on the robot’s turntable is very close to the eTh robot’s head, together with the neck joint with motors head camera, just above the lower range limit. In the pickup for pan and tilt movements, has been developed by Blue task, on the contrary, the object detection method needs Danube Robotics (http://www.bluedanuberobotics.com). It to be able to detect objects at the upper range limit of the contains two speakers for audio output, two Raspberry Pis camera, where data points are already severely influenced by with one display each for the eyes of the robot, a temperature noise. sensor, and a RGB-D sensor. This sensor, referred to in the Because two of the main goals for the final system remainder of the paper as head camera,isusedfor obsta- were affordability and robustness, we avoided incorporating cle avoidance, for object and gesture recognition, and—in additional cameras, for example, for visual servoing with conjunction with the temperature sensor—for user and fall the robot’s hand. For further details and advantages of detection. Similar to the previous prototype of the robot oursensorsetup fornavigation, we referthereader to [3, 18], the visual sensor setup is completed by a second RGB- [18]. Dsensor,mountedinthe robot’sbodyataheight of35cm facing forward. This sensor, referred to in the remainder of the paper as bottom camera, is used for localization, 3.4.2. Head and Neck. Besides the head camera, the head contains an infrared camera for distance temperature mea- mapping, and user following. Figure 2 shows an overview of the Hobbit hardware; a more detailed explanation of the surement, two speakers for audio output, and two Raspberry single components is given in the following sections. Pis with displays showing the robot’s eyes. Through its eyes, 6 Journal of Robotics Head Temperature sensor Head camera Speakers Neck joint (pan/tilt) Eyes showing emotions Neck Tray for personal belongings Tray where Hobbit puts objects Tablet PC with graphical UI Bumper sensor Stored turntable Gripper Bottom camera 6-DoF arm Water bottle holder Emergency help button Bumper sensor Figure 2: Hardware setup of the Hobbit platform. Very tired Sleeping Happy Very happy Wondering Concerned Sad Tired Figure 3: List of emotions shown by Hobbit’s eyes. the robot is able to communicate a set of different emotions unsupervised actions was required to minimize the risk of totheuser, whichare showninFigure3.Theneckjoint breakage. contains two servo motors, controlling the horizontal and vertical movement of the head. 4. Behavior Coordination 3.4.3. Arm and Gripper. To be able to pick up objects from As Hobbit’s goal directly called for an autonomous system the floor or to grab its built-in turntable, Hobbit is equipped running for several weeks, providing interactions on an with a 6-DoF IGUS arm and a two-finger fin-ray gripper. As irregular schedule and on-demand basis, the behavior coor- a cost-eecti ff ve solution, the arm joints are moved by stepper dination of the Hobbit robots was designed and implemented motors via Bowden cables; the used n fi -ray gripper offers inamultistagedevelopmentprocess. Basedontheworkshops oneDoF andisdesignedtoallowform-adaptable grasps. with PU and SU and the user study with Hobbit PT1, While an additional DoF would increase flexibility and elderly care specialists designed the specific scenarios. eTh y lower the need for accurate self-positioning to successfully designed detailed scripts for the 11 scenarios (see Section 3.3) grasp objects, for the sake of overall system robustness and therobothadtoperform.Those11scenarios were subse- low hardware costs, the 6-DoF version was the model of quently planned in a flowchart-like fashion, which eased the choice for the arm. eTh arm is not compliant; therefore, transition from the design process to the implementation cautious behavior implementation with reduced velocities for stage. Journal of Robotics 7 Other ROS ROS Smach behavior state machine nodes LocateUser GoTo Recharge FindObject CallRobot PickUp Other goal pose action dist/ang commands actions feedback docking on/off action other data Skeleton detection, top interfaces_mira gestures recognition scan obstacle scan localization pose head position local map and path RGB-D data tasks feedback low level information MMUI bottom scan arm virtual scans navigation tasks RGB-D distance mode commands (dist/ang) data docking on/off tasks localization reset Figure 4: Hobbit behavior architecture. In thefollowing,wediscusstheoverall behavior coordi- be able to bring it into a position in which it would be safe nation architecture and how the Mutual Care concept was to perform other tasks. eTh movement of the robot within implemented and go into detail of some of the building blocks theenvironment wouldhavebeenunsafeifthearmwould necessary to construct the 11 scenarios. We further present still stick out of the footprint of the robot itself. The priorities the methods we developed to realize the goals of the project of the commands were defined with respect to the safety of while respecting the limits set by the low-cost approach of our the user, so that emergency situations can always preempt robots. a possibly running state-machine, regardless of the state the system is currently in. 4.1.BehaviorCoordinationArchitecture. Following the sce- nario descriptions, as defined by our specialists in elderly 4.2. RGB-D Based Navigation in Home Environments. Au- care, their implementation and the execution followed a tonomous navigation in user’s homes, especially with low- script-based approach. A state-machine framework, SMACH cost RGB-D sensors, is a challenging aspect of care mobile (http://wiki.ros.org/smach), was therefore chosen to handle robots. es Th e RGB-D sensors pose additional challenges the behavior execution for all high-level codes. forsafenavigation[18,20–22]. er Th educedfieldofview, An overview of the implemented architecture is shown the blind detection area, and the short maximum range in Figure 4. eTh top structure in this architecture is the Pup- of this kind of sensors provides limited information about petMaster, which handles the decision-making outside of any the robot’s surroundings. If the robot, for example, turns scenario execution, where it can start, preempt, and restart around in a narrow corridor, it might happen that the any sub-state-machines. For this, it collects the input from wallsare alreadytoo closetobeobservedwhile turning, those ROS nodes that handle gesture and speech recognition, leading to increased localization uncertainty. In order to text input via touchscreen, emergency detection (fallen and prevent such cases, we defined no-go areas around walls falling person detection, emergency button on the robot itself, in narrow passages, preventing the robot from navigating and emergency gesture), and scheduled commands that need too close to walls in the first place. For obstacle avoidance, to be executed at a specicfi time of the day. eTh PuppetMaster the head is tilted down during navigation, so that the head delegates the actual scenario behavior execution to the sub- camera partially compensates for the blind spot of the bottom state-machines, which only rely on the input data needed camera. If obstacles are detected, they are remembered for for the current scenario. Each of these sub-state-machines a certain time in the robot’s local map. However, a suitable corresponds to one of the scenarios designed to assist the trade-off had to be found for the decay rate. On one hand, users in their daily lives. As we needed to deal with many therobot mustbeable toavoidpersistingobstacles,but, different commands with different execution priorities, it was on the other hand, it should not be blocked for too long necessary to ensure that every part of the execution of the when an obstacle in front of it (e.g., a walking person) is state-machines can safely be interrupted without the risk removed. oflingeringinanundenfi edstate.Particularlyinsituations While localization methods generally assume that fea- when the arm of the robot was moving, it was necessary to tures of the environment can be detected, this assumption localization scan gestures Other MIRA authorities user pose gestures gestures touch screen commands voice commands AAL call button commands 8 Journal of Robotics Figure 5: Risky areas to be avoided. Obstacles like high shelves or stairs may not be perceived by Hobbit’s sensor setup. Figure 6: Examples of installed ramps to overcome door thresholds. does notholdfortheusedRGB-D cameraswithlimitedrange only useful as long as overall localization is precise enough. and long corridors. In this situation, according to the detected Other challenging situations were caused by thresholds and features, the robot could be anywhere along the parallel bumps on the floor and carpets. To overcome thresholds, we walls, which can cause problems in cases where the robot tested commercial and homemade ramps (Figure 6). After should enter a room aer ft driving along in such a corridor. testing different configurations and finding proper incline When entering a room, it is especially important that the limits, the robot was usually able to pass thresholds. Problems robotbecorrectly localizedinthe transversaldirection to with standard planning methods, for example, when a new the doorway and that the doorway be approached from the plan caused the robot to turn while driving on a ramp, front, so accurately driving through doors located on one were observed. A situation-dependent direct motion control side of a corridor is much more dicffi ult than through doors instead of a plan-based approach can reduce the risk during located at the beginning or at the end of a corridor. In order such situations. to approach doors from the front, avoiding getting too close In order to facilitate the tasks to be carried out in the to the corner sides, a useful strategy for wide enough places home environment, the concept of using rooms and labeled is adding no-go areas at sides of a doorway entrance or at places inside the rooms (locations) was applied. eTh rooms sharp corners. This way, it is possible to have safer navigation are manually den fi ed, such that spatial ambiguity is not behavior in wide areas while keeping the ability to go through a problem. Also, the geometry of the defined rooms does narrower areas. This provides more flexibility than meth- nothavetobeveryprecise with respecttothe map, as ods with xe fi d security margins for the whole operational long as the rooms contain all the places of interest that the area. user wants to label. Places are learned by tele-operating the No-go areas were also useful to avoid potentially dan- robot to specicfi locations and the subsequent association of gerous andrestrictedareas androoms.Afewexamplesare places to rooms operates automatically, based on the crossing shown in Figure 5. Areas with cables and thin obstacles on number algorithm to detect whether a point lies inside a the floor and very narrow rooms (usually kitchens), where a generic polygon [23]. Figure 7 shows several examples of nonholonomic robot as Hobbit cannot maneuver, were also rooms and places defined in the user trials for different avoided. However, it is worth noting that no-go areas are tasks. Journal of Robotics 9 Figure 7: Rooms and places defined in two real apartments in Vienna. Robot Commands Pick up Follow me Go to point SOS Help! Learn object Bring me ... Go recharging Break Figure 8: GUI of Hobbit showing one of the menu pages for robot commands. eTh strike-through hand on the right side indicates that the gesture input modality is disabled currently. A similar indicator was used for the speech input. 4.3. Multimodal Interaction Between the User and the Robot. the robot is close enough for the user to interact via the The Hobbit robot deploys an improved version of the multi- touchscreen, while at the same time does not invade the modal user interface (MMUI) used on Hobbit PT1. Generally personal space of the user (limiting her/his movement space speaking, the MMUI is a framework containing the following or restricting other activities such as watching TV). Hobbit main building blocks: a Graphical User Interface (GUI) makes use of the MMUI to combine the advantages of the with touch, Automatic Speech Recognition (ASR), Text to various user interaction modalities [25]. eTh touchscreen has Speech (TTS), and Gesture Recognition Interface (GRI). eTh strengths such as intuitiveness, reliability, and flexibility for MMUI provides emergency call features, web services (e.g., multiple users in different sitting positions but requires a weather, news, RSS feed, and social media), control of robotic rather narrow distance between user and robot (Figure 9). functions, and entertainment features. Compared to PT1, the ASR allows a larger distance and can also be used when no graphical design of the GUI (Figure 8) was modified to better freehandsare available, butithasthedisadvantageofbeing meet the user’s needs. Graphical indicators on the GUI for influenced by the ambient noise level, which may reduce showing current availability of GRI and ASR were iteratively recognition performance significantly. GRI allows a wider improved. distance between the robot and user and also works in noisy During PT1trials, we foundthatmostoftheusersdid environments, but it only succeeds when the user is in the not use the option of extending the MMUI to a comfortable efi ld of view of the robot. eTh interaction with Hobbit always ergonomic position for them. er Th efore the mounting of depends on the distance between the user and Hobbit. It can the touchscreen was changed to a fixed position on Hobbit. be done through a wireless call button (far from other rooms), Additionally, while the PT1 robot approached the user from ASR and GRI (2 m to 3 m), and touchscreen (arm length, see the front, the Hobbit robot approaches the user from the right Figure 9). or left side while seated, which is more positively experienced The ASR of Hobbit is speaker-independent, continuous, by the user [24]. This oeff rs the additional advantage that and available in four languages: English, German, Swedish, 10 Journal of Robotics and the appearance variability of the tracked person. A YES challenging aspect of the problem in Hobbit-related scenarios is that elderly users spend a considerable amount of time sitting in various types of chairs or couches. er Th efore, human detection and tracking should consider human body figures that do not stand out from their background. On the contrary, they may interact with cluttered scenes, exhibiting severe partial occlusions. Additionally, the method needs to be capable of detecting a user’s body while standing or walking Figure 9: Different interaction distances between user and Hobbit based on frontal, back, or side views. seen from a ceiling camera. Short range: touch; middle range: speech The adopted solution [31] enables 3D part-based, and gesture; long range: wireless call button. full/upper body detection and tracking of multiple humans based on the depth data acquired by the RGB-D sensor. eTh 3D positions and orientations for all joints of the skeletal and Greek. Contemporary ASR systems work well for differ- ent applications, as long as the microphone is not moved far model (full or upper body) relative to the depth sensor from the speaker’s mouth. The latter case is called distant or are computed for each time stamp. A conventional face detection algorithm [32] is also integrated using the color far-field ASR and shows a significant drop in performance, which is mainly due to three different types of distortion [26]: data stream of the sensor to facilitate human detection in case thefaceofthe user isvisiblebythesensor.Theproposed (a) background noise, (b) echo and reverberation, and (c) method has a number of beneficial properties that are other types of distortions, for example, room modes or the orientation of the speaker’s head. For distant ASR, currently summarized as follows: (1) performs accurate markerless 3D tracking of the human body that requires no training no off-the-shelf solution exists, but acceptable error rates can be achieved fordistancesup to 3m by carefultuningofthe data, (2) requires simple inexpensive sensory apparatus audio components and the ASR engine [27]. An interface to a (RGB-D camera), (3) exhibits robustness in a number of challenging conditions (illumination changes, environment cloud based calendar was introduced, allowing PU and SU of Hobbit to access and partly to also edit events and reminders. clutter, camera motion, etc.), (4) has a high tolerance with Despite the known difficulties with speech recognition respect to variations in human body dimensions, clothing, and so forth, (5) performs automatic human detection and in the far field and the local dialects of the users, the ASR of Hobbit worked as expected. eTh ASR was activated all automatic tracking initialization, thus recovering easily from possible tracking failures, (6) handles self-occlusions over theHobbitusertrials, buttheperformanceratewas commented on by users as necessary to be improved. The among body parts or occlusions due to obstacles/furniture same was observed for the GRI. Eventually, the touchscreen and so forth, and (7) achieves real-time performance on a conventional computer. Indicative results of the method are as input modality was used most oen ft by the majority of users, followed by speech and gesture. Touch was used more illustrated in Figure 10. than twice as oeft n as it was the case with ASR. Additionally, many users did not wait until the robot had completed 4.5. Gesture Recognition. A vision-based gestural interface its own speech output before starting to give a speech was developed to enrich the multimodal user interface of Hobbit in addition to speech and touch modalities. This command which reduced the recognition rate. Considering these lessons learned, the aims for future work on the ASR enables natural interaction between the user and the robot are twofold: improving the performance of the ASR and by recognizing a predefined set of gestures performed by the user using her/his hands and arms. Gestures can be of varying providing better indication when the MMUI is listening to spoken commands and when it is not. eTh aspect of using two complexity and their recognition is also aeff cted by the scene different variants for text messages from the robot to the user context, actions that are performed in the foreground or the was taken over from Hobbit PT1. Based on other researches, it background at the same time, and by preceding and/or fol- can be concluded that using different text variants does have lowing actions. Moreover, gestures are oen ft culture-specific, an influence, for example, by increasing users’ impression of providing additional evidence to substantiate the interesting interacting with a (more) vivid system. Some users demanded as well as challenging nature of the problem. additional ASR commands, for example, right, left, forward, For Hobbit, existing upper body gestures/postures as used on PT1 had to be replaced with more intuitive hand/finger- reverse,and stop in addition to come closer, as they would like to position (move) the robot with the help of voice commands based gestures that can be performed more easily by elderly or a remote control. users while sitting or standing. We redesigned the gestural vocabularyforHobbitthatnowconsists ofsixhandgestures 4.4. Person Detection and Tracking. To serve as building that convey messages of fundamental importance in the block for components like activity recognition [28] and context of human-robot dialogue. Aiming at natural, easy-to- natural human-robot communication [19, 29] as well as memorize means of interaction, users have identified gestures specialized functions like the tfi ness application [30], we consisting of both static and dynamic hand configurations developed a human body detection and tracking solution. that involve different scales of observation (from arms to Person detection and tracking in home environments is fingers) and exhibit intrinsic ambiguities. Recognition needs a challenging problem because of its high dimensionality to be performed in continuous video streams containing Journal of Robotics 11 (a) (b) (c) (d) (e) Figure 10: Qualitative results of the 3D skeletal model-based person detection and tracking method. (a) Full model of a standing user. (b) Upper body (including hands and fingers) of a sitting user. (c) Full model of a sitting user. ((d) and (e)) Hand and finger detection supporting the gesture recognition framework (see Section 4.5). other irrelevant actions. All the above need to be achieved by 4.6. Fall Detection. According to the assessed user needs and analyzing information acquired by a possibly moving RGB- the results of PT1 laboratory studies [17], a top-priority and D camera in cluttered environments with considerable light prominent functionality of Hobbit regards fall prevention and variations. fall detection. We hereby describe a relevant vision-based eTh proposed framework for gesture recognition [19, 29] component that enables a patrolling robot to (a) perform fall consists of a complete system that detects and tracks arms, detection and (b) detect a user lying on the floor. We focused hands, and ngers fi and performs spatiotemporal segmenta- mostly on the second scenario, as observing a user falling tion and recognition of the set of preden fi ed gestures, based in the eld fi of view of an autonomous assistive robot is of on data acquired by the head camera of the robot. u Th s, the very low probability. eTh proposed vision-based emergency gesture recognition component is integrated with the human detection mechanism consists of three modes, each of which detection and tracking module (see Section 4.4). At a higher initiates an emergency handling routine upon successful level, hand posture models are defined and serve as building recognition of the emergency situation: blocks to recognize gestures based on the temporal evolution (1) Detection of a falling user in case the fall occurs while of the detected postures. The 3D detection and tracking of thebodyisobservablebytheheadcameraoftherobot hands and n fi gers relies on depth data acquired by the head camera of Hobbit, geometrical primitives, and minimum (2) Detection of a fallen user who is lying on the floor spanning tree features of the observed structure of the scene while the robot is navigating/patrolling in order to classify foreground and background and further (3) Recognition of the emergency (help) gesture that can discriminate between hand and nonhand structures in the be performed by a sitting or standing user via the foreground. Upon detection of the hand (palm and n fi gers), gesturerecognition interfaceofHobbit(seeFigure11, the trajectories of their 3D positions across time are analyzed middle) to achieve recognition of hand postures and gestures (Table 1). The last column describes the assignment of the chosen The methodology for (1) regards a simple classifier trained on physical movements to robot commands. eTh performance the statistics of the 3D position and velocity of the observed of the developed method has been tested not only by users human body joints acquired by the person detection and acquainted with technology but also by elderly users [19] tracking component. For (2), once the general assumption, (see Figure 11). os Th e tests formed a very good basis for thefactthatthehuman’sheadisabove therestofthebody, fine-tuning several algorithmic details towards delivering a does no longer hold true, an alternative, simple, yet effective robust and ecffi ient hand gesture recognition component. The approach to the problem has been adopted. This capitalizes performance of the final component was tested during field on calibrated depth and thermal visual data acquired from trials achieving high performance according to the evaluation two different sensors that are available on the head of Hobbit. results. More specicall fi y, depth data from both cameras of the robot 12 Journal of Robotics Table 1: Set of hand/arm postures/gestures considered for the gestural interface of Hobbit. User command Upper body gesture/posture Robot command Related scenarios/tasks Positive response to confirmation All (1 m to 2 m distance to Yes um Th b up-palm closed dialogues. YES gesture robot) Negative response to confirmation All (1 m to 2 m distance to No Close palm, waving with index finger up dialogues robot) Bend the elbow of one arm repeatedly towards the Reposition the platform closer to the All (1 m to 2 m distance to Come closer platform and the body sitting user robot) Terminate an on-going robot Cancel task Both open palms towards the robot All behavior/task Extend one arm and point in 3D space towards an Detect and grasp the object of interest Pick up an (unknown) Pointing object (lying on the floor) towards the pointing 3D direction object from the floor Open palm facing towards the robot and circular Rewards the robot for an accomplished Reward Approach the user movement (at least one complete circle is needed) action/task Emergency detection, initiated by the Emergency Cross hands pose (normal-range interaction) Emergency detection user Figure 11: Snapshots of Hobbit users performing gestures during lab trials. eTh recognition results are superimposed as text and a circle on the images indicating the location and the name of the recognized gesture (taken from [19]). (head and base) are acquired and analyzed while observing communicates to the robot whether it should move even the floor area in front of the robot. Figure 12 illustrates sample closer or not in any of the three available modes (speech, results of the fallen user detection component. In Figure 12(a), touch, or gesture). Finally, the robot moves closer by a fixed the upper part illustrates the color frame captured by the head distance of 0.15 m for a maximum of three times if the camera of the robot that is titled down towards the floor, while user wishes. This gives the users more control over final navigating. In the bottom image, the viewpoint of the bottom distance adjustments. A more detailed description of this camera is illustrated, aer ft the estimation of the 3D floor plane novel approach will be published elsewhere. has been performed. The methodology for vision-based emergency detection 4.8. User Following. As theheadcameraisnot availablefor of case (3) refers to successful recognition of the emergency observing the full body of a user during navigation (obstacle “Help me,” based on the gesture and posture recognition detection), we designed a new approach [33] to localize a user module, as described in Section 4.5. eTh developed com- by observing its lower body part, mainly the legs, based on ponent is constantly running in the background within the RGB-D sensory data acquired by the bottom camera of the robot’s behavior coordination framework, while the robot is platform. active during all robot tasks, except from object detection and The proposed method is able to track moving objects recognition tasks. such as humans, estimate camera ego-motion, and perform map construction based on visual input provided by a single 4.7. Approaching the User. Specific behavior coordination RGB-D camera that is rigidly attached to a moving platform. was developed so that the robot could approach the user The moving objects in the environment are assumed to in a more flexible and effective way compared to standard move on a planar floor. The first step is to segment the existing methods. Using fixed predefined positions can be static background from the moving foreground by selecting sufficient in certain scenarios, but it oen ft presents limitations a small number of points of interest whose 3D positions in real-world conditions [22]. eTh approach we developed are estimated directly from the sensory information. The incorporates user detection and interaction (Section 4.4), camera motion is computed by tfi ting those points to a remembered obstacles and discrete motion for coming closer progressively built model of the environment. A 3D point to the user with better, and adaptive positioning. may not match the current version of the map either because First, a safe position to move to is obtained from the it is a noise contaminated observation or because it belongs local map and the robot moves there. Secondly, the user to a moving object or because it belongs to a structure Journal of Robotics 13 (a) (b) (c) Figure 12: Vision-based emergency detection of a fallen user lying on the floor. eTh upper and lower middle images show the captured frame from theheadandbottomcameras,respectively.Thegreendotsmark afoundskeletonwithinthesearcharea(greenandbluerectangles). (a–c) No human, no detection; person lying on the floor, correct detection; volumetric data from the head’s depth and temperature sensor are in conflict with the volumetric date provided by the bottom depth sensor. (a) (b) (c) (d) Figure 13: (a–d) User points to an object on the floor and Hobbit drives to a point from where it can be picked up and moves the arm to a position to grasp it. The object is lifted and the check if grasp was successful is performed: the object is moved forward to check if something has changed at the previous position of the object on the floor. If successful, the object is placed on the tray on top of the robot. attached to the static environment that is observed for the 4.9. Pick Up Objects from the Floor. To reduce the risk of first time. A classification mechanism is used to perform falling, Hobbit was designed to be able to pick up unknown this disambiguation. Additionally, the method estimates the objectsfromthefloor.Figure13shows thesteps ofthe“Pick camera (ego) motion and the motion of the tracked objects in up object” task. The user starts the command and points at a coordinate system that is attached to the static environment the object on the floor. If the pointing gesture is recognized, (robotic platform). In essence, our hypothesis is that a pair the robot navigates to a position from where it could observe of segmented and tracked objects of specific size/width that the object. At this position, the robot looks at the approximate move independently side-by-side at the same distance and position of the object. Hobbit then makes n fi e adjustments to direction in the field of view of a moving RGB-D camera position itself at a location from where grasping is possible. correspond to user’s legs being followed by the robot with If it is safe to grasp the object, the robot executes the arm high probability. The method provides the 3D position of trajectory and subsequently checks if the grasp was successful user’s legs with respect to the moving or static robotic plat- andwilltry to do so asecondtimeifitwas not. form. Other moving objects in the environment are filtered Several autonomous mobile robots have been developed outorcan beprovidedtoan obstacle avoidancemechanism to fetch and deliver objects to people [34–38]. None of these as moving obstacles, thus facilitating safe navigation of the publications evaluate their robot grasping from floor, and robot. none evaluate the process of approaching an object and 14 Journal of Robotics Trainer You Trainer You x0 x0 We will start with your right arm. Follow me. Try to move both arms at the same time. (a) (b) Figure 14: (a) Avatar mirroring the trainer’s movement proved easier for users to follow. (b) Correction suggested by the system to the user. grasping it as a combined action. Detection of the user and interface was allocated for the instructions at the beginning recognition of a pointing gesture were performed using the of each exercise and also for any feedback and guidance to work presented in [19, 31]. Checks are performed to rule out the user when needed. eTh design and development of the unintentional or wrong pointing gestures and to enhance the tfi ness application are described in more detail in [30]. The accuracy of the detected pointing gesture. tfi ness application was explained to the participants of the A plausibility check tests if the pointing gesture is point- trials by the facilitator at the initial introduction of the system ing towards the floor. To guarantee an exact position of the during the installation day. eTh participants could access the robot to bring the arm in a position where the gripper can application if desired at any time. Almost all users tried the approach the object in a straight line before closing, the tfi ness application at least once with some using it multiple accurate movement to the grasping position can be done as times during the three-week evaluation period. From the a relative movement to the object instead of using the global comments received during the mid-term and end-of-trial navigation. This is a crucial step as the region, in which the interviews, it can be concluded that the overall concept of head camera is able to perceive objects and where the 6- having the tfi ness program as a feature of the robot received DoF arm is able to perform a movement straight down to positive marksbymanyofusers as farasitsusefulness and the floor without changing gripper orientation, is limited importance are concerned. However, most users who tried it outsaidthattheywouldhave likedittobemorechallenging to 15× 10 cm . For calculating grasps, we use the method and to offer a larger variety of exercise routines with various of Height Accumulated Features [39]. eTh se features reduce challenging levels to choose from. the complexity of a perceived point cloud input, increase the value of given information, and hence enable the use of machine learning for grasp detection of unknown objects in 5. Field Trials cluttered and noncluttered scenes. We conducted eld fi trials in the households of 18 PU with 4.10. Fitness Application. The tfi ness application was intro- 5 Hobbit robots in Austria, Greece, and Sweden. eTh trials duced as a feature to the Hobbit robot after the PT1 trials lasted∼21 days for each household, resulting in a total of 371 and was made available during the PT2 trials for evaluation. days.Duringthistime,therobotswereplacedinthehomesof The motivation behind this application comes from the fact 18 older adults living on their own, where users could use and that physical activity can have a significant positive impact explore the robot on a 24/7 basis. Detailed results of the trials on the maintenance or even on the improvement of motor will be published elsewhere; preliminary results can be found skills, balance, and general physical well-being of elderly in [40] (a rfi st analysis only of the robot log data without any people, which in turn can lower the risk of falls in the long cross-analysis to the other data collected) and in [41] (a rfi st run. Basedonfeedback fromtheCommunity andActive overview on the methodological challenges faced during the Ageing Center of the municipality of Heraklion, Greece, the field trails). following requirements were produced. The exercises must (1) The trial sample consisted of 16 female and 2 male PU; be easy to learn, (2) target different joints and muscles, (3) their age ranged from 75 to 90 years (𝑀 = 79.67). All PU provide appropriate feedback to the user, (4) keep the user were living alone, either in flats (13 participants) or in houses. engaged while providing enough breaks, and (5) be designed In adherence with inclusion criteria set by the research to be performed from a seated position. consortium, all participants had fallen in the last two years Based on these requirements and feedback from test or were worried about falling and had moderate impairments users, we developed an application including three difficulty in at least one of the areas of mobility, vision, and hearing. 15 levels and seven different exercises. The user interface con- PU had some form of multiple impairments. Furthermore, all sisted of a split view of a video recording of the actual trainer participants had sufficient mental capacity to understand the performing each exercise on the left side and an avatar figure project and give consent. In terms of technology experience, depicting the user’s movement while executing the instructed 50.0% of the PU stated that they were using a computer every exercise on the right side as shown in Figure 14. This side- day, 44.45% stated that they were never using a computer or to-side viewing setup allowed the user to compare his or her used it less than once a week, and only one participant used a movements to those of the trainer. eTh bottom part of the computer twotothree timesaweek. Journal of Robotics 15 Before the actual trials, the PU were surveyed to make 6. Lessons Learned sure that they matched the criteria for inclusion and to discuss Based on all the insights gained from developing and testing possible necessary changes to their home environments for Hobbit in the efi ld, we can summarize the following rec- the trials (e.g., removing carpets and covering mirrors). After ommendations for fellow researchers in the area of socially an informed consent was signed, the robot was brought into assistive robots for enabling independent living for older the home and the technical setup took place. Aeft r this setup, adults in domestic environments. a representative from the elderly care facility explained the study procedure and the robot functionalities to the PU in 6.1. Robot Behavior Coordination. The developed behavior an individual open-ended manner. Aeft rwards, a manual was control based on a state-machine proved to be very useful and leftwithinthehouseholdincaseparticipantswantedtolook allowed us to implement many extensions in a short time. A up a functionality during the 21 days. All users experienced close interconnection with the user was therefore helpful. In two behavioral roles of the robot. eTh robot was set to device- the following, we present our main lessons learned regarding mode until day 11 when it was switched to companion-mode theimplementationofthe robotbehavior. (i.e., Mutual Care). The real-world environment in which the el fi d tests took place bears certain challenges, such as 6.1.1. Transparency. Actions and their eects ff need to be unforeseen changes in the environment and uncontrollable communicated in a clear fashion so that the robot’s pre- settings. Assessment by means of qualitative interviews and sented functionality can be fully understood by the user. questionnaires took place at four stages of each trial: before Users reported missing or nonworking functionality (e.g., trial, midterm, end of trial, and aer ft trial (i.e., one week aer ft reminders not being delivered to them and patrol not being the trial had ended). Moreover, log data was automatically executed). Most of these reported issues were caused by recorded by the robot during the whole trial duration. eTh the fact that the users did not understand the technical field trial methodology is comparable to similar studies (e.g. interdependencies between robot functions. For example, if [42]). acommand wasnot availableduetoacertain internal state The efi ld trials revealed that several functions of the of the robot, the user was not aware of this and did not robot lack stability over time. os Th e technical issues certainly understandtheshownbehavioroftherobot.Thesefunctional influenced the evaluation of the system because a reliable relations need to be made explicit and stated more clearly to working technical system is a prerequisite for positive user the users. experience. We tried to minimize potential negative feelings due to potential malfunctioning by informing our users that 6.1.2. Legibility. The log data and conversations with par- aprototype of arobot is averycomplextechnicalsystemthat ticipantsrevealedthatthe robotneedstocommunicateits might malfunction. Additionally, they were given the phone intentions. For instance, when the robot proactively moved number of the facilitator who was available for them around out of its charging station, the user was not always aware what the clock, 7 days per week, for immediate support. However, was going to happen next. When they did not understand malfunctions certainly had an influence on subjects’ answers whattherobot wasdoing,theycanceledtherobot’saction, during the assessments and may have attracted attention with eeff ctively stopping part of the robot’s benefit to them. To theresultthatthesubtlebehavioralchangesintroducedbythe work around this, a robot needs to clearly state the reason switch from device-mode to companion-mode may have been of its action and which goal it is trying to achieve when shifted out of the attentional focus. Availability of commands performing an autonomously started task. wasequallydistributedacrossthetwophasesof Mutual Care as canbeseeninTable2.Pleasenote that unavailability or 6.1.3. Contradictory Commands. Log data presented an inter- malfunctioning of functions in one but not the other mode esting effect while interacting with the touchscreen. When (unequal distribution of functionality) would have led to moving the hand towards the touchscreen on the robot, a bias within the evaluation. Table 2 gives an overview of the gesture recognition system detected the movement of the functional status across all PU during the efi ld trials. the hand as the come closer gesture, shortly followed by It is based on the combination of (i) a check of the robot’s a command from the touch input on the GUI. We could features by the facilitator during the preassessment, midterm replicate this behavior later on in our internal tests in the lab. assessment, and end-of-trial assessments, (ii) protocols of the A simple solution for such contradictions of commands is to calls of the users because they had a problem with the robot, simply wait forashortperiodoftime(less than0.2seconds) and (iii) analysis of the log data by technical partners. before a gesture close to the robot is processed by the behavior eH Th obbit efi ldtrials marked thefirsttimeanauton- coordination system to wait for a possibly following touch omous, multifunctional service robot, able to manipulate input. objects, was put into the domestic environment of older adults for a duration of multiple weeks. Our field trials provided insight into how the elderly used the Hobbit robot 6.1.4. Transparency of Task Interdependencies. The interviews and which functionalities they deemed useful for themselves revealed that the interdependencies between the tasks were andhowtherobotinufl encedtheirdailylife.Furthermore,we not clear to the user; the best example was the learn-and- couldshowthatitisinprincipal feasibletosupportelderly bring-object task. As described, for the bring-object task, the with a low-cost, autonomous service robot controlled by a objectfirsthadtobelearned so that it canbefound in the rather simple behavior coordination system. apartment. However, this fact needs to be remembered by 16 Journal of Robotics ff ff Table 2: System reliability across 18 PU. (a) Call Come Stop Pick up Teach a new Bring object Calendar Move to muc mode Statistics Emergency Follow me Hobbit closer Hobbit object object to user reminders location Days total 226 226 226 226 226 226 226 226 226 226 Days of introduction 31 31 31 31 31 31 31 31 31 31 Days switched o 55 55 55 55 55 55 55 55 55 55 Days in use 140 140 140 140 140 140 140 140 140 140 Device Days when feature was not 14 13 13 19 84 12 79 49 105 15 working Days when feature was only 23 20 22 44 47 116 62 83 13 32 partially working Days total 148 148 148 148 148 148 148 148 148 148 Days switched o 20 20 20 20 20 20 20 20 20 20 Days in use 128 128 128 128 128 128 128 128 128 128 Days when feature was not Companion 14 10 8 12 83 17 85 54 92 9 working Days when feature was only 25 16 17 38 40 95 43 64 18 29 partially working Device Working over days in use 81.79% 83.57% 82.86% 70.71% 23.21% 50.00% 21.43% 35.36% 20.36% 77.86% Companion Working over days in use 79.30% 85.94% 87.11% 75.78% 19.53% 49.61% 16.80% 32.81% 21.09% 81.64% (b) Go Take a Surprise Entertainment Entertainment muc mode Statistics Telephone Information Entertainment games Reward recharge break me audio fitness Days in total 226 226 226 226 226 226 226 226 226 Days of introduction 31 31 31 31 31 31 31 31 31 Days feature was disabled 55 55 55 55 55 55 55 55 55 Days of feature in use 140 140 140 140 140 140 140 140 140 Device Days when feature was not 19 16 19 11 11 11 23 20 11 working Days when feature was only 20 6 22 9 23 7 20 27 6 partially working Days in total 148 148 148 148 148 148 148 148 148 Days feature was disabled 20 20 20 20 20 20 20 20 20 Days of feature in use 128 128 128 128 128 128 128 128 128 Days when feature was not Companion 10 8 22 8 8 7 26 19 14 working Days when feature was only 33 9 16 7 23 8 11 23 7 partially working Device Working over days in use 79.29% 86.43% 78.57% 88.93% 83.93% 89.64% 76.43% 76.07% 90.00% Companion Working over days in use 79.30% 90.23% 76.56% 91.02% 84.77% 91.41% 75.39% 76.17% 86.33% Journal of Robotics 17 the user and as this is oeft n not the case, users wanted to ask 6.2.1. Internet Connectivity. Internet connectivity was not Hobbittobringthemanobjecteventhoughithadnotlearned reliable depending on location and time. While in most any objects before. In this specific case, the problem could be countries Internet (line-based or mobile) coverage is no easily fixed by only oer ff ing the task ”bring object” when an problem in general, local availability and quality vary sig- object was actually learned beforehand (e.g., the task could be nificantly, which makes Internet-based services difficult to greyed out in the MMUI). implement for technically unaware users. The integration of rich Internet-based content into the interaction therefore 6.1.5. Full Integration without External Programs. The han- lacks usability in case of intermittent connectivity. dling of user input and output must be fully integrated with the rest of the robot’s software architecture to be able to handle 6.2.2. Graphical User Interface. The GUI could be person- interruptions and continuations of interaction between the alized by the user for increased comfort during interaction. user and the robot. The user interface on the tablet computer This, however, shows the need for localized content to be (MMUI) incorporated multiple external programs (e.g., Flash available. As the setup phase during the trials showed that games, speech recognition, and the tfi ness functionality). As PU are likely not aware what content is available, some those were not directly integrated, the behavior coordination (remote) support and knowledge from SU are necessary for was not aware about their current state, leading to multiple the configuration of the user interface. interaction issues with users. For example, a game would be exiting when a command with higher priority (e.g., 6.2.3. Speech Recognition. Field trials showed that speech emergency from fall detection) would start the emergency recognition is still not working well for many users. Despite scenario. External programs need to be included in a way that the overall acceptable recognition rate that varies largely from makesitpossibletosuspend andresumetheirexecutionat user to user and from language to language and that is based any time. on the environment and distance, users often do not support the needs of current ASR technology for clearly expressed 6.1.6. Avoiding Loops. Reviewing the log data revealed that and separated commands in normal voice. eTh Sweet-Home the behavior coordination system could be trapped in a projectoncemoreemphasizesthe nfi dingsfromtheDiRHA loop without a way to continue the desired behavior execu- 2 project that practical speech recognition for old people in tion. eTh behavior coordination needs to provide a fallback the home environment is still a major challenge by itself [43]. solution in case of a seemingly endless loop in any part However, our ASR provided a positively experienced natural of the behavior. eTh behavior coordination communicates input channel when used in a multimodal HRI, where the with theMMUIinawaythatdoesnot provideimmediate touchscreen with its GUI provides a reliably working base. feedback over thesamechannelsofcommunication.Dueto timing issues, it occurred that a reply was lost between the 6.2.4. Smarthome Integration. eTh setup phase during the communicating partners (i.e., the fact that the robot stopped efi ld trials showed that the integration into smarthome envi- speech output). From there on, the behavior coordination ronments can be beneficial. Field trials showed that context was in a state that should not be reached and was unable awareness and adaptations highly impact the acceptance of to continue program execution in the desired manner. us, Th therobot.Imaginedfeaturescould be automaticon/offof the the communication structures should always have a fallback light or the stove or adjusting the proactively level of the robot solution to continue execution as well as the feedback based on the user’s mood. data on the same channels to prevent such a stop in a scenario. 6.2.5. Remote End User Control. Reflecting on the field trial indicates that a potential valuable extension of the interaction 6.2. Human-Robot Interaction with the MMUI. The inter- modalities would be a remote control of the robot, for action with the user was based on a multimodal user instance, on a smartphone enabling PU but also maybe SU interface that was perceived as easy to use during our to control the robot from outside the home. Potential useful field trials. While touch input turned out to be the most scenarios could be to send the robot to the docking station or reliable modality, speech and gesture interaction was highly to patrol the flat and search for an object or the PU or the SU welcome. Many of the entertainment functions of the MMUI video calling the PU. relied on Internet connectivity. Many users either were not interested in some UI features which therefore should be 6.3. Implementation of Mutual Care Behavior. In the begin- removed or asked for special configuration of the preferred ning ofthetrials, we implementedMutualCareinsuch features (e.g., selection of entertainment). The main way a fashion that in the companion mode the robot offers to theuserwas able tocommunicateremotelywithHobbit return the favor aer ft every interaction with the user. This was with the use of physical switches (call buttons) placed was done in order to guarantee that the users would notice at several xe fi d places inside the house of the user. The the difference between the modes during the interaction. eTh user had to physically go to the designated switch spot positive fact was that users noticed the changes. However, andpress theswitchfor therobot to approach her/him. they were soon very annoyed by the robot. Consequently, A smartphone/tablet application could be developed to we changed this implementation during the running trials. allow a better remote communication experience with the eTh return of favor frequency was reduced; it was no longer robot. oer ff ed aer ft the commands Recharge batteries, Go to, Call 18 Journal of Robotics button,and Surprise. Further feedback from the second and To conclude, we believe that methods, results, and lessons third Austrian and the second and third Swedish users led to learned presented in this article constitute valuable knowl- further reduction of the return a favor frequency to offering edge for fellow researchers in the eld fi of assistive service it only aeft r the following three commands: robotics and serve as a stepping stone towards developing affordable care robots for the aging population. (1) Pick up command (favor: Hobbit oeff rs music: I’d like to return the favor. I like music. Shall I play some music Conflicts of Interest for you?) (2) Learn object command (favor: Hobbit oer ff s to play eTh authors declare that there are no conflicts of interest. a game (suitable because the user is already sitting down): I’dliketoreturnthefavor. Doyouwanttoplay Acknowledgments agame?) (3) Reward command (favor: Hobbit oeff rs to surprise This research has received funding from the European Com- the user: I’dliketoreturnthefavor. Ilikesurprises.Do munity’s Seventh Framework Programme (FP7/2007–2013) you want a surprise?) (Grant Agreement no. 288146, Hobbit). However, as the interviews showed, these behavioral changes were no longer recognized by the users. Similarly, the differ- References encesinproactivityandpresencewerenotreflectivelynoticed [1] P. Dario, P. F. M. J. Verschure, T. Prescott et al., “Robot by the users, but the changes in dialogue were noticed. companions for citizens,” Procedia Computer Science,vol.7,pp. 47–51, 2011. 6.3.1. Help Situations. For the development of Mutual Care [2] J.M.Beer, C.-A.Smarr,T.L.Chenetal.,“ed Th omesticated behavior in completely autonomous scenarios, which helping robot: Design guidelines for assisting older adults to age in situations the robot can really identify in order to ask for place,” in Proceedings of the 7th Annual ACM/IEEE International help and how the robot can notice that it actively recovered Conference on Human-Robot Interaction, (HRI ’12), pp. 335–342, through the help have to be considered. USA, March 2012. [3] D. Fischinger, P. Einramhof, K. Papoutsakis et al., “Hobbit - The Mutual Care Robot,” in Assistance and Service Robotics in 6.3.2. Design of Neediness. In the interviews, PU reflected a Human Environment Workshop in conjunction with IEEE/RSJ that they did not really recognize that the robot needed their International Conference on Intelligent Robots and Systems,2013. input to continue its task. For Mutual Care, the need of help [4] J.Bohren,R. B.Rusu,E.G.Jonesetal.,“Towards autonomous seems to be essential. For future version of the robot, how robotic butlers: Lessons learned with the PR2,” in Proceedings to design the neediness needs to be considered. This could of the 2011 IEEE International Conference on Robotics and be achieved with facial expressions, sounds, or movements. Automation, (ICRA ’11), pp. 5568–5575, China, May 2011. Also for behaviors such as presence and proactivity, the robot [5] Y. Demiris and B. Khadhouri, “Hierarchical attentive multiple could say aer ft an interaction: “I would prefer staying with you models for execution and recognition of actions,” Robotics and in your room” or proactivity (e.g., “I would like to spend more Autonomous Systems,vol.54, no.5,pp.361–369,2006. time with you” before offering an activity). This would give a [6] T. Carlson and Y. Demiris, “Collaborative control for a robotic better explanation of the robot’s behavior to the user and an wheelchair: evaluation of performance, attention, and work- expected raise of acceptance. load,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 876–888, 2012. [7] M. Cashmore, M. Fox, D. Long et al., “Rosplan: Planning in the 7. Conclusions robot operating system,” in Proceedings of the 25th International Conference on Automated Planning and Scheduling, (ICAPS ’15), In this article, we presented the second prototypical imple- pp. 333–341, June 2015. mentation of the Hobbit robot, a socially assistive service [8] M.MansouriandF.Pecora,“More knowledgeonthe table: robot. We presented the main functionality it provided, as Planning with space, time and resources for robots,” in Proceed- well as the behavior coordination that enabled autonomous ings of the 2014 IEEE International Conference on Robotics and interaction with the robot in real private homes. Hobbit Automation, (ICRA ’14), pp. 647–654, China, June 2014. is designed especially for fall detection and prevention, [9] S.Goetze, S. Fischer,N. Moritz,J.E.Appell,andF.Wall- providing various tasks (e.g., picking up objects from the hoff, “Multimodal Human-Machine Interaction for Service floor, patrolling through the flat, and employing reminder Robots in Home-Care Environments,” in Proceedings of the 1st functionalities), and supports multimodal interaction for Workshop on Speechand Multimodal InteractioninAssistive dieff rent impairment levels. We focused on the development Environments,2012. of a service robot for older adults, which has the potential [10] H. S. Koppula, A. Jain, and A. Saxena, “Anticipatory planning to promote aging in the home and to postpone the need to for human-robot teams,” Springer Tracts in Advanced Robotics, move to a care facility. Within the field trials, we reached vol. 109, pp. 453–470, 2016. the desirable long-term goal that a mobile service robot [11] P. Kwang-Hyun and Z. Zenn Bien, “Intelligent sweet home with manipulation capabilities enters real homes of older for assisting the elderly and the handicapped,” in Independent adults and showed its usefulness and potential to support Living for Persons with Disabilities and Elderly People: ICOST,p. independent living for elderly users. 151, 2003. Journal of Robotics 19 [12] T. Fukuda, P. Di, F. Chen et al., “Advanced service robotics [26] M. Wolf ¨ el and J. McDonough, Distant Speech Recognition,John for human assistance and support,” in Proceedings of the Wiley & Sons, 2009. International Conference on Advanced Computer Science and [27] P. Panek and P. Mayer, “Challenges in adopting speech control Information Systems, (ICACSIS ’11), pp. 25–30, December 2011. for assistive robots,” in Ambient Assisted Living,AdvancedTech- nologies and Societal Change, pp. 3–14, Springer International [13] D.-J. Kim, R. Lovelett, and A. Behal, “An empirical study with simulated ADL tasks using a vision-guided assistive robot arm,” Publishing, Berlin, Germany, 2015. in Proceedings of the 2009 IEEE International Conference on [28] D. I. Kosmopoulos, K. Papoutsakis, and A. A. Argyros, “Online Rehabilitation Robotics, (ICORR ’09), pp. 504–509, Japan, June segmentation and classification of modeled actions performed in the context of unmodeled ones,” in Proceedings of the [14] M. B. Hong, S. J. Kim, T. Um, and K. Kim, “KULEX: An 25th British Machine Vision Conference, (BMVC ’14),pp. 1–12, September 2014. ADL power-assistance demonstration,” in Proceedings of the 10th International Conference on Ubiquitous obots and Ambient [29] D. Michel and K. Papoutsakis, “Gesture Recognition Appara- Intelligence, (URAI ’13), pp. 542–544, November 2013. tuses,” MethodsandSystemsforHuman-MachineInteraction, [15] L. Lammer, A. Huber, A. Weiss, and M. Vincze, “Mutual Care: How older adults react when they should help their care [30] M. Foukarakis, I. Adami, D. Ioannidi et al., “A robot-based Robot,” in Proceedings of the 3rd international symposim on New application for physical exercise training,” in Proceedings of the Frontiers in Human-Robot interaction,April 2014. 2nd International Conference on Information and Communica- tion Technologies for Ageing Well and e-Health, (ICT4AWE ’16), [16] T. Kor ¨ tner, A. Schmid, D. Batko-Klein et al., “How social robots pp.45–52,April2016. make older users really feel well - A method to assess users’ concepts of a social robotic assistant,” in Proceedings of the [31] D. Michel and A. Argyros, “Apparatuses, methods and systems InternationalConferenceonSocialRobotics,vol.7621, pp.138– for recovering a 3-dimensional skeletal model of the human 147, Springer, 2012. body, 2016”. [17] D. Fischinger, P. Einramhof, K. Papoutsakis et al., “Hobbit, a [32] P. Viola and M. J. Jones, “Robust real-time face detection,” care robot supporting independent living at home: First pro- International Journal of Computer Vision,vol.57, no.2,pp. 137– totype and lessons learned,” Robotics and Autonomous Systems, 154, 2004. vol. 75, pp. 60–78, 2016. [33] P.Panteleris andA.A.Argyros,“Vision-based SLAMand [18] P. DeLaPuente,M.Bajones,P.Einramhof, D.Wolf,D. moving objects tracking for the perceptual support of a smart Fischinger, and M. Vincze, “RGB-D sensor setup for multiple walker platform,” Computer Vision-ECCV 2014 Workshops,vol. tasksofhomerobotsandexperimental results,”in Proceedings 8927, pp. 407–423, 2014. of the IEEE/RSJ International Conference on Intelligent Robots [34] A. Remazeilles, C. Leroux, and G. Chalubert, “SAM: A robotic and Systems, (IROS ’14), pp. 2587–2594, USA, September 2014. butler for handicapped people,” in Proceedings of the 17th [19] D. Michel, K. Papoutsakis, and A. A. Argyros, “Gesture recogni- IEEE International Symposium on Robot and Human Interactive tion supporting the interaction of humans with socially assistive Communication, RO-MAN, pp. 315–321, Germany, August 2008. robots,” in Advances in Visual Computing, vol. 8887 of Lecture [35] S. S. Srinivasa, D. Ferguson, C. J. Helfrich et al., “HERB: A home Notes in Computer Science, pp. 793–804, Springer International exploring robotic butler,” Autonomous Robots,vol.28,no.1,pp. Publishing, Berlin, Germany, 2014. 5–20, 2010. ´ ´ [20] J. Gonzalez-Jimenez, J. Ruiz-Sarmiento, and C. Galindo, [36] B. Graf, U. Reiser, M. Hag ¨ ele, K. Mauz, and P. Klein, “Robotic “Improving 2D Reactive Navigators with Kinect,” in Proceeding home assistant care-O-bot 3—product vision and innovation of the 10th International Conference on Informatics in Control, platform,” in Proceedings of the IEEE Workshop on Advanced Automation and Robotics (ICINCO ’13),2013. Robotics and Its Social Impacts (ARSO ’09), pp. 139–144, Tokyo, Japan, November 2009. [21] M. Jalobeanu, G. Shirakyan, G. Parent, H. Kikkeri, B. Peasley, and A. Feniello, “Reliable kinect-based navigation in large [37] D. Fischinger and M. Vincze, “Empty the basket - A shape indoor environments,” in Proceedings of the 2015 IEEE Interna- based learning approach for grasping piles of unknown objects,” tional Conference on Robotics and Automation, (ICRA ’15),pp. in Proceedings of the 25th IEEE/RSJ International Conference 495–502, USA, May 2015. on Robotics and Intelligent Systems, (IROS ’12),pp.2051–2057, Portugal, October 2012. [22] P. De La Puente, M. Bajones, Reuther. C., D. Fischinger, Wolf. D., and M. Vincze, “Experiences with rgb-d based navigation in [38] D. Fischinger and M. Vincze, “Shape based learning for grasping real home robotic trials,” in Austrian Robotics Workshop (ARW), novel objects in cluttered scenes,” in Proceedings of the 10th IFAC Symposium on Robot Control, (SYROCO ’12),pp. 787–792, Croatia, September 2012. [23] M. Shimrat, “Algorithm 112: Position of point relative to poly- gon,” Communications of the ACM,vol.5,no.8, p.434, 1962. [39] D. Fischinger, A. Weiss, and M. Vincze, “Learning grasps with topographic features,” International Journal of Robotics [24] M. L. Walters, K. Dautenhahn, S. N. Woods, and K. L. Koay, Research,vol.34, no.9,pp. 1167–1194, 2015. “Robotic etiquette: Results from user studies involving a fetch and carry task,” in Proceedings of the HRI 2007: 2007 ACM/IEEE [40] M. Bajones, A. Weiss, and M. Vincze, “Log data analysis of Conference on Human-Robot Interaction - Robot as Team Mem- long-term household trials: Lessons learned and pitfalls. 2016. ber,pp. 317–324, USA,March2007. Workshop on Challenges and best practices to study HRI in natural interaction settings,” in Proceeding of the International [25] P. Mayer, C. Beck, and P. Panek, “Examples of multimodal user Conference on Human-Robot Interaction,2016. interfaces for socially assistive robots in ambient assisted living environments,” in Proceedings of the 3rd IEEE International [41] J. Prip,fl T. K or ¨ tner, D. Batko-Klein et al., “Results of a real Conference on Cognitive Infocommunications, (CogInfoCom ’12), world trial with a mobile social service robot for older adults,” pp. 401–406, Slovakia, December 2012. in Proceedings of the 11th Annual ACM/IEEE International 20 Journal of Robotics Conference on Human-Robot Interaction, (HRI ’16), pp. 497-498, IEEE Press, New Zealand, March 2016. [42] F. Palumbo, D. La Rosa, E. Ferro et al., “Reliability and human factors in ambient assisted living environments,” Journal of Reliable Intelligent Environments,vol.3,no. 3,pp.139–157,2017. [43] M. Vacher, F. Portet, A. Fleury, and N. Noury, “Development of audio sensing technology for ambient assisted living: Appli- cations and challenges,” International Journal of E-Health and Medical Communications,vol.2,no.1, pp.35–54,2011. International Journal of Advances in Rotating Machinery Multimedia Journal of The Scientific Journal of Engineering World Journal Sensors Hindawi Hindawi Publishing Corporation Hindawi Hindawi Hindawi Hindawi www.hindawi.com Volume 2018 http://www www.hindawi.com .hindawi.com V Volume 2018 olume 2013 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 Journal of Control Science and Engineering Advances in Civil Engineering Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 Submit your manuscripts at www.hindawi.com Journal of Journal of Electrical and Computer Robotics Engineering Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 VLSI Design Advances in OptoElectronics International Journal of Modelling & Aerospace International Journal of Simulation Navigation and in Engineering Engineering Observation Hindawi Hindawi Hindawi Hindawi Volume 2018 Volume 2018 Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com www.hindawi.com www.hindawi.com Volume 2018 International Journal of Active and Passive International Journal of Antennas and Advances in Chemical Engineering Propagation Electronic Components Shock and Vibration Acoustics and Vibration Hindawi Hindawi Hindawi Hindawi Hindawi www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018 www.hindawi.com Volume 2018

Journal

Journal of RoboticsHindawi Publishing Corporation

Published: Jun 3, 2018

There are no references for this article.