
3D surround local sensing system H/W for intelligent excavation robot (IES)

JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING, 2019, VOL. 18, NO. 5, 439–456. https://doi.org/10.1080/13467581.2019.1679148
CONSTRUCTION MANAGEMENT

Dong-Jun Yeom (a), Hyun-Seok Yoo (b) and Young-Suk Kim (a)
(a) Department of Architectural Engineering, Inha University, Incheon, Korea; (b) Department of Technology Education, Korea National University of Education, Cheongju, Korea

ARTICLE HISTORY: Received 26 August 2019; Accepted 30 September 2019
KEYWORDS: 3D surround modeling; truck recognition; intelligent excavation robot; automated excavation
CONTACT: Young-Suk Kim, youngsuk@inha.ac.kr, Department of Architectural Engineering, Inha University, Incheon 22212, Korea
© 2019 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group on behalf of the Architectural Institute of Japan, Architectural Institute of Korea and Architectural Society of China. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

ABSTRACT
The recently developed intelligent excavation robot in Korea is a fully automated excavator equipped with global 3D modeling capabilities for an entire earthwork site and an intelligent task planning system. The intelligent excavation robot includes features such as autonomous driving, 3D surround modeling, autonomous excavation, loading, etc. An intelligent excavation robot features technology that allows for accurate recognition of objects near the excavator, including the terrain of surrounding environments, the location of obstacles in the excavator's path, and any approaching trucks and moving people. Such technology is critical to ensuring work quality and safety. In this study, we develop the hardware for a 3D surround laser sensing system that enables 3D image modeling of the terrain surrounding an intelligent excavation robot. By mounting the sensor onto an intelligent excavation robot, we conducted performance tests to determine the robot's 3D modeling capabilities for the terrains and obstacles at an actual earthwork site. The experimental results are applied to an object recognition system for detecting the properties of the terrain of the workspace around the excavator, any approaching people, trucks, obstacles, etc. The proposed hardware has a wide range of applications in the development of future automated construction equipment.

1. Introduction

1.1. Backgrounds and purpose

Excavators are some of the most notable construction equipment used for earthworks. Excavators perform a wide range of tasks such as cutting, banking, gathering, loading, leveling, and grading earth. The number of registered excavators worldwide is on the rise, and several attachments to excavators are under continuous development, making excavators more effective for earthworks.

Research on automated excavation first began with unmanned, remote-controlled excavators developed by Japan in the 1980s, and some of the prominent studies that followed include the autonomous loading system (Cannon 1999; Singh and Simmons 1992; Stentz et al. 1999) developed by Carnegie Mellon University in the 1990s; the autonomous hydraulic excavator (Yamamoto et al. 2009) developed by the Public Works Research Institute (PWRI) in 2008; and the intelligent excavation robot (IES) developed by Korea in 2011. Among these, the intelligent excavation robot recently developed by Korea is a fully automated excavation robot that is capable of autonomous driving, 3D modeling, excavation task planning, autonomous excavation, loading, etc. IES features global 3D modeling capabilities of the entire work site and an intelligent task planning system. To develop such an intelligent excavation robot, the technology that allows for accurate recognition of objects near the excavator, including the terrain of surrounding environments, the location of obstacles in the excavator's path, and any approaching trucks and moving people, forms the core technology that is critical to ensuring work quality and safety.

To create three-dimensional models of the surrounding terrains and approaching objects around an excavator, Stentz et al. (1999) utilized two-axis laser scanners mounted on the excavator to scan and recognize trucks, whereas Yamamoto (2008) used GPS (global positioning system) and a direction sensor installed on a truck that was designed to operate in harsh environments to accomplish excavation operations. In the study by Stentz et al. (1999), unless the excavator is situated on top of an elevated bench and a truck stops at a designated location, it can be difficult to recognize either the truck or any other nearby object. In addition, Yamamoto's (2006) study involved multiple loading trucks that were outfitted with GPS and direction sensors, which are both expensive, and any errors in the sensor data would result in a collision between the bucket of the excavator and the truck, posing another serious problem.

Therefore, to enable 360-degree 3D modeling of the surroundings around an intelligent excavation robot, this study develops hardware for a new type of 3D surround sensing system that minimizes blind spots and offers prompt, accurate modeling of surrounding objects in a wide variety of terrains. In addition, we install the developed sensor onto an intelligent excavation robot to test its performance using actual earthwork site terrains and obstacles. It is anticipated that the sensing system proposed in this study will provide the necessary support for accurate, fully automated excavation operations, presenting wide applications in the development of automated construction equipment.
1.2. Study scope and methodology

3D surround sensing technology refers to technology that uses geometric data on the terrain of the local area surrounding an intelligent excavation robot and nearby objects. The data is used to create three-dimensional models, and the three-dimensional geometric information captured from the excavator surroundings is then fed into object recognition algorithms to recognize terrains, people, trucks, obstacles, etc. The geometric data is ultimately used to create paths for loading trucks and to avoid obstacles. The scope of this research is limited to developing hardware for the 3D surround sensing system, which will be used to create three-dimensional models of local terrains and objects surrounding the excavation robot. Real-time data processing for the 3D surround sensing system, including object recognition algorithms that separately recognize terrains and objects from three-dimensional geometric data, will be discussed in subsequent research. The methodology used in this research is as follows.

1.2.1. Analysis of prior work and considerations for system hardware design
In this study, we examine the research and development status of domestic and overseas 3D surround modeling technologies by focusing on prior research work on automated earthmoving equipment and analyzing their problems. Based on the problems identified, we identify some of the considerations that are required to develop a 3D surround sensing system for the intelligent excavation robot.

1.2.2. Analysis of the optimal location of the 3D surround sensor
Excavation operations mostly involve soil digging, and 3D image modeling of a digging area below the ground without any blind spots requires an analysis of where the sensor should be located on the intelligent excavation robot. In this study, we identify the location of the sensor that minimizes blind spots in the creation of three-dimensional models of the local area around the intelligent excavation robot.

1.2.3. Hardware design of the 3D surround sensor
Our prior research identified a 2D laser scanner as an ideal sensor for the intelligent excavation robot. Employing this sensor, we analyze its sensing area, instrument movement direction, location of the rotational axis, sensor rotation method, etc. and design a layout of the sensing system. We also review the driving part, center of gravity for rotation, scanning area, sensor installation method, etc. in a detailed design.

1.2.4. 3D surround sensor installation and field test
In this study, we build the hardware for a 3D surround sensor, install it onto the intelligent excavation robot, and execute tests. We also develop data processing software for the 3D surround sensing system and execute performance tests at an actual earthwork site to verify its 3D terrain modeling capabilities.
2. Analysis of prior work on 3D modeling technology for intelligent excavation robot

2.1. Definition of 3D surround modeling technology

The productivity and quality of earthmoving work can be significantly affected by the degree of rationality of task planning. Effective task planning in earthmoving operations must be based on a sound understanding of the terrain and ground characteristics at a particular job site, rather than solely focusing on the experience and skills of an equipment operator. Rational task planning for automated excavation equipment operating in an earthmoving environment requires the creation of a virtual environment (world model) that mimics the real environment based on three-dimensional data and the ability to update real-time changes to the workspace terrain (local model) in three dimensions as the work progresses. In addition, based on such a 3D virtual environment, an optimized earthwork task plan must be formulated through region segmentation, optimal platform positioning, and sequencing tasks (Seo, Park, and Jang 2007).

To build a three-dimensional environment of an earthwork site, we create a three-dimensional model of the overall topology of the job site, which is referred to as "global modeling." Moreover, in this research, we define "local modeling" as the creation of real-time 3D models of changing terrains surrounding an automated earthmoving machine, thereby updating the global model. In global modeling, a terrestrial 3D laser scanner is used to generate 3D topographic data, which is then compared to the design drawing so as to generate information on the scope of a working area and the amount of work required. Local modeling, on the other hand, focuses on the creation of a 3D model of a relative working area that changes as the automated earthmoving machine moves. As shown in Figure 1, the working area within an eight-meter radius of the intelligent excavation robot and the loading area, which is situated within a 20-meter radius, are subject to 3D modeling.

Figure 1. Global area and local area (Yoo, Kwon, and Kim 2013).

3D surround modeling technology involves using data on the terrain of the local area surrounding the excavator, including the geometry of nearby objects, and then creating 3D representations. 3D surround modeling is ultimately used to create paths for the intelligent excavation robot to carry away earth, load trucks, or avoid obstacles by sensing nearby trucks, people, and obstacles. The object recognition data is then sent to the remote station. The results of 3D surround modeling presented in this study can be used as important information to execute path planning for the boom, arm, and bucket of the intelligent excavation robot. In addition, the results are used to determine the amount of earth that must be excavated by comparing the volumes before and after each excavation operation. The 3D surround modeling results can also be used to develop an excavation quality inspection process to inspect and decide whether the excavation operation at the current location of the platform can be achieved by comparing against 3D design information. In addition, the results of the earthmoving amount calculation process can be used to estimate the amount that must be loaded onto the truck.

If the 3D surround modeling technology is coupled with object recognition technology to recognize loading trucks, obstacles, people, etc., the combined technologies can serve as critical information for setting a travel path and an alternative path for the intelligent excavation robot, which can be used to make emergency stops while avoiding mobile obstacles. Moreover, to develop a collaboration system for coordinating with multiple automated earthmoving systems, including the man-machine interface (MMI) system, an object recognition system is used to reposition the automated machine for collision avoidance and path identification.
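As a concrete reading of these radii, the small helper below (our own illustration, not part of the IES software) buckets a measured point by its horizontal distance from the robot's rotation axis into the working area (within 8 m), the loading area (within 20 m), or the region outside the local model.

```python
import math

WORKING_RADIUS_M = 8.0    # working-area radius around the robot (Figure 1)
LOADING_RADIUS_M = 20.0   # loading-area radius around the robot (Figure 1)

def classify_local_point(x_m, y_m):
    """Label a point, given robot-centred horizontal coordinates, by local-model zone."""
    r = math.hypot(x_m, y_m)
    if r <= WORKING_RADIUS_M:
        return "working area"
    if r <= LOADING_RADIUS_M:
        return "loading area"
    return "outside local model"

print(classify_local_point(5.0, 3.0))    # -> working area
print(classify_local_point(12.0, 9.0))   # -> loading area
```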
2.2. Analysis of 3D surround modeling technology development status

Research on a fully automated excavation robot can be traced back to earlier research on an autonomous loading system ("ALS") conducted by Carnegie Mellon University in 1999. As shown in Figure 2, Stentz et al. (1999) conducted research on modeling a surrounding workspace in three dimensions by mounting 2-axis laser scanners on both sides of the ALS. In this system, the scanner on the right side performs 3D modeling of the workspace in front of the machine, while the left scanner detects trucks or objects that are approaching the machine, with the horizontal scanning angle for each scanner being 120 degrees. The two 2-axis laser scanners used in the ALS research comprise a laser distance measuring sensor fitted with two rotational axes and a reflector, which is basically the same as a 3D laser scanner in terms of structure and is capable of measuring terrain points at a rate of 12 kHz. As the ALS was designed to recognize a truck using its left laser scanner, it can only perform loading operations when the excavator is located on top of an elevated bench with a truck parked at a designated location that can be approached from the left. In other words, in cases where the excavator and the truck are at the same level or the truck is approaching from behind the excavator platform, the loading operation becomes impossible. Given that excavators exhibit variable working patterns, the excavator should be able to perform a proper loading operation regardless of the position of the excavator platform.

Figure 2. Autonomous loading system (CMU) (Stentz et al. 1999): (a) laser scanners mounted on excavator in ALS; (b) excavator loading a truck in ALS.
Meanwhile, the autonomous hydraulic excavator ("AHE") developed by the Public Works Research Institute of Japan ("PWRI") in 2006, which is shown in Figure 3(a), is equipped with GPS, a direction sensor, a gyro sensor, and an azimuth sensor with an independent task planning system. The AHE employs a stereo vision camera to enable 3D modeling of the front workspace and used a 2D laser scanner for 360-degree surrounding area modeling (Yamamoto 2008). Unlike the ALS, to recognize where a truck is located, the study adopted an approach of equipping the truck with GPS and direction sensors, as illustrated in Figure 3(b). Such a system requires high-precision GPS and direction sensors, and the trucks used for testing were quite different from the large dump trucks that typically operate on actual earthwork sites. As the truck loading system used by the PWRI of Japan does not involve direct modeling and recognition of the truck, its software configuration can be very simple. The problem, however, is the high cost involved in implementing high-precision sensors, and in some scenarios, an estimation of the truck bed area may not be entirely reliable. In fact, for a GPS system that supports errors at the centimeter level, the per-unit cost is more than $8,500. In a solid communication environment, it operates within a margin of error of 1 cm, but when communication with the base is poor, errors of more than 10 cm can occur, indicating that performing a fully automated loading operation is highly likely to result in collision accidents.

In 2008, the PWRI performed research on 3D modeling of a 360-degree area using a 2D laser scanner. As shown in Figure 3(c), a sensor was mounted on top of the cab of the excavator to create a three-dimensional representation of the surrounding terrain by slowly rotating the swing body through a 360-degree angle. In such a method, to obtain one set of topographic data, the excavator must stop its work and slowly rotate itself at constant speed, and if the rotation speed fluctuates, it is difficult to obtain an even distribution of points (Figure 3(d)). Another limitation is that 3D modeling based on data from a 2D laser scanner requires a precise horizontal rotation angle, which means that an additional sensor must be installed on the rotational axis of the excavator.

Figure 3. Autonomous hydraulic excavator (PWRI): (a) sensors mounted on excavator in PWRI (Yamamoto 2008); (b) truck used in PWRI (Yamamoto et al. 2009); (c) 2D laser scanner in PWRI (Yamamoto et al. 2006); (d) results of 360° modeling (Yamamoto et al. 2006).
Also, recently there have been similar studies in Japan using a 2D laser scanner and a stereo vision sensor mounted on top of a wheel loader to detect a dump truck and a pile of soil. In their study, Sarata et al. (2007) used an algorithm that identifies a truck by detecting the ground plane and then a vertical straight line from the ground (Figure 4). Later, Sarata, Koyachi, and Sugawara (2008) conducted research on detecting the location and shape of a pile of soil by measuring the height of the pile from the ground using stereo vision cameras installed on top of a wheel loader (Figure 5). However, these methods have some limitations, as the typical terrain on actual earthwork sites tends to be much more rough and uneven.

Corke, Roberts, and Winstanley (1999) developed a mining robot equipped with a stereo vision camera and conducted research on 3D modeling of rocks in an underground mine by projecting illumination (Figure 6). Whiteborn, Debrunner, and Steele (2003) conducted research on building 3D models of underground mining operations by mounting stereo vision equipment and headlights onto an LHD vehicle specifically developed for underground mining operations.

Figure 4. Detection of the location of a truck using 2D laser sensor (Sarata et al. 2007).
Figure 5. Detection of soil mound using stereo vision (Sarata, Koyachi, and Sugawara 2008).
Figure 6. LHD vehicle (Whiteborn, Debrunner, and Steele 2003): (a) LHD automatic vehicle; (b) stereo matching results.

2.3. Considerations for 3D surround modeling sensor hardware

To design the hardware for a 3D surround modeling sensor, this study analyzed some of the problems of previous research and identified the following considerations.
2.3.1. Minimizing blind spots for the surrounding terrain
In the studies performed by both CMU and PWRI, the sensor systems of the fully automated excavation system focus on 3D modeling of the terrain of the work area in front of the excavator. This is because a fully automated excavation system typically travels to its work location and then stops its platform to scan the initial work area terrain, after which, with each digging operation, the system rescans the terrain to monitor the digging progress. Previously developed automated excavation systems are characterized by terrain modeling sensor systems mounted on the top of the cab facing downward, which is closely related to the shapes of the excavated terrain. As both stereo vision camera systems and laser scanning systems are likely to create blind spots, these approaches must be considered with a view to minimizing blind spots in the excavated terrain as much as possible, thereby precisely modeling the changing shape of the terrain after each bucket pass. In particular, when a sensing system is located over the top of the excavated terrain, perpendicular to the ground, blind spots can be minimized and the most even density modeling is made possible. Therefore, a 3D surround modeling sensor system must be located at a level that is as high as possible, pointing downward to allow for sensing of the terrain below.

2.3.2. 3D modeling of the 360° work area around the excavator
CMU's fully automated excavation system uses the left scanner, among its two 2-axis scanners, to detect a truck, which is supposed to stop at a designated location within the 120-degree sensing range. If the truck is outside of this range, unless the excavator's swing body rotates, it is impossible to detect the truck bed, let alone identify whether the truck is anywhere near the excavator. This also has close implications for a possible collision between the excavator and the truck. Given that excavators exhibit diverse operation patterns, they should be able to promptly and accurately recognize the truck bed from whichever direction the truck approaches within 360 degrees. In the CMU system, however, accurately recognizing the truck bed area requires the excavator to be positioned higher than the truck. Because the truck bed is as high as approximately three meters from the ground, blind spots will be created as the truck is scanned if the truck and excavator stay at the same level, which also means that the truck bed area cannot be fully covered for 3D modeling. In typical earthmoving scenarios, the excavator is often positioned on top of an elevated bench during the earthmoving operations, but the excavator and the truck also frequently operate at the same altitude. To address this problem, a sensing system that rotates 360 degrees at constant speed must be designed to be located at a significantly high level above the ground while being as close as possible to the rotational axis of the excavator.
2.3.3. Implementing 3D topographic data-based object (trucks, site personnel, etc.) recognition
Because a fully automated excavation system does not involve any operator throughout the entire operation process, possible collisions with nearby obstacles or objects can be a critical issue. While the CMU research partially experimented with truck and object estimation, there is little research on the object recognition rate and truck bed recognition. In PWRI's fully automated excavation system research, with its primary focus on 3D modeling of the front workspace, only the truck bed area was subject to estimation using GPS and direction sensors, and no work was done on object detection. For an intelligent excavation system to be able to accurately and promptly recognize any obstacles or objects in the excavator's path, a highly sophisticated object recognition algorithm is required to discern an object from 3D data on the local area in a prompt and accurate manner and to estimate the type of object, including its moving direction. Recognizing objects, such as loading trucks and site personnel, in atypical terrains requires 3D topographic data with significantly even and accurate density, which must be considered while designing the hardware for the 3D surround sensor.

3. Design and development of 3D surround sensor hardware

3.1. Analysis of the optimal location of a 3D surround sensor

Existing research on automated excavation focuses on performing 3D modeling of changing workspace terrains and objects around an excavation robot, and most studies use a stereo vision camera, a 2D laser scanner, or a 3D laser scanner as the sensor interface device for 3D modeling and object detection. A stereo vision camera can rapidly acquire video images, but it exhibits a considerable amount of noise and its accuracy is very low, which means that the camera can take quite a long time to process the data; a terrestrial 3D laser scanner, meanwhile, is expensive and susceptible to vibrations in external environments, and so modifying the laser scanner to suit excavation operations can be a challenge. Therefore, in our previous research (Yoo, Kwon, and Kim 2013), we identified five factors (economic feasibility, data acquisition speed and scope, accuracy, ease of installation, and durability) and computed the weight for each factor using the analytic hierarchy process (AHP) method to determine the most suitable 3D modeling sensor for an intelligent excavation robot. The resulting weighted preference calculation determined that, among a terrestrial 3D laser scanner, a 2D laser scanner, a stereo vision camera, and a structured light sensor, a 2D laser scanner is the best fit for the intelligent excavation robot. Therefore, we use a 2D laser scanner to design the hardware for the 3D surround sensor.

As the purpose of the 3D surround sensor of the intelligent excavation robot is to model the 360° area surrounding the excavator, definitions concerning the specifications of the intelligent excavation robot and the working area to be excavated must first be provided. In this research, we used Doosan Infracore's DX-140LC, an excavator with a height of 2.8 meters, boom and arm lengths of 4.0 meters and 1.9 meters, respectively, a maximum excavation distance of 8.2 meters, and a maximum excavation depth of 5.0 meters from the ground.
The ultimate goal of the 3D surround sensor is to recognize and localize trucks or objects that are approaching the excavator, which requires an analysis of where the excavation system and the loading truck are located. In earthmoving operations, excavation (cutting) involves either piling up soil near the platform or loading it onto a truck that is in proximity to the platform. This means that the cutting area and the banking area typically exist independently of one another. Similarly, in the case of banking, the same applies to the loading area, but it is preferable for the excavator to be positioned at a higher location than the truck during the loading operation, because this offers an unobstructed view of the rectangular loading area to an operator, minimizing potential accidents where the excavator's arm or bucket might collide with the truck bed area. However, on actual earthwork sites, the loading formation may be flexible depending on the condition of the terrain of the construction site, while in drainage construction, the truck and the excavator often operate at the same altitude. Therefore, a truck must be allowed to approach in whatever direction possible within 360 degrees from the rotational axis of the excavator. In addition, with or without an elevated bench, the three-dimensional coordinates of the truck bed area must be accurately recognized.

Because the 3D surround sensor of the intelligent excavation robot must model an area 360° around the excavator, the closer it is located to the center of the excavator on the XY plane, the better it is for evenly modeling the entire area, as illustrated in Figure 7(a). In actual scenarios, aside from the excavation work area (30°–150°) around the center of the excavator platform, a truck can enter the remaining area (151°–360°) from any direction, and in some cases, a truck might even enter the front work area. Therefore, the area within 7.5 meters from the excavator's rotational center must be set as a loadable area. Moreover, to predict whether a truck or object is approaching the excavation system, the look-ahead distance for determining whether anything is approaching should be 15 meters from the excavator's center.

Meanwhile, the 3D surround sensor of the intelligent excavation robot creates blind spots, as shown in Figure 7(b,c), no matter where the sensor is positioned, because of the shape of the excavator itself. To minimize such blind spots, the sensor must be located as high as possible while also remaining close to the excavator's central axis. Figure 7(b) illustrates the blind spots that can be created when the sensor is located at 4.7 meters above the ground, with the front blind space being 1.8 meters and the rear being 0.95 meters. To even out the front and rear blind space, it might be better to position the sensor at the center of the rotational axis or slightly to the right, but if the sensor is mounted on top of the cab, problems might arise while the vehicle is in motion. Moreover, collisions with the boom or even bigger blind spots caused by the boom are all possible concerns. Therefore, we decided that the optimal location of the sensor should be slightly to the left of the excavator's central axis. Figure 7(c) illustrates the blind spots created by the sensor when viewed from the rear of the excavator. Because the cab is positioned on the left side, the sensor in the figure should be located slightly to the right of the rotational axis of the excavator to minimize the blind space on the left and right sides. As shown below, with the sensor positioned at 4.7 m above the ground, slightly to the left of the excavator's central axis, the left and right blind spots are 1.15 m and 0.98 m, respectively.

In conclusion, if the sensor is positioned closer to the excavator's central axis, the blind spots created by the excavator will be minimized and most evenly distributed, and given the height of the truck (3 m), the higher the sensor is positioned above the ground, the more of the truck bed it can scan, thereby minimizing the blind spot for the truck bed. However, to determine the sensor's height, the mobility of the intelligent excavation robot must be considered, as must the fact that as the sensor gets higher, the impact caused by vibrations becomes even greater. In this research, we considered the height of the cab and the extent of the blind spots that can be created by the cab and boom, and finally determined the back part of the excavator's cab to be the optimal location for the sensor. We also designed the sensor such that it can be positioned at 4.7 meters above the ground.

Figure 7. Loading sensing area and blind spots caused by sensor location: (a) loading sensing area (Yoo, Kwon, and Kim 2013); (b) 3D surround modeling sensor location (side view); (c) 3D surround modeling sensor location (back view).
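The blind-space figures above follow from simple ray geometry: an edge of the machine body at height h and horizontal offset w from the sensor axis shadows the ground out to a distance d, with d = w·H/(H − h) for a sensor at height H over flat ground. The helper below is a generic illustration of that relationship only; it is not the authors' calculation, and the example dimensions are placeholders rather than the DX-140LC outline used for Figure 7.

```python
def ground_blind_distance(sensor_height_m, edge_offset_m, edge_height_m):
    """Horizontal distance from the sensor axis to where flat ground becomes
    visible again past an obstructing body edge.

    Similar triangles: the ray grazing the edge reaches the ground where
    d / sensor_height = edge_offset / (sensor_height - edge_height).
    """
    if edge_height_m >= sensor_height_m:
        raise ValueError("edge must be below the sensor")
    return edge_offset_m * sensor_height_m / (sensor_height_m - edge_height_m)

# Placeholder numbers only: a 4.7 m sensor looking past a 2.0 m-high body edge
# located 1.0 m from the rotation axis shadows the ground out to about 1.74 m.
print(round(ground_blind_distance(4.7, 1.0, 2.0), 2))
```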
3.2. 3D surround sensor layout design

The 2D laser scanner (LMS-151, SICK) selected for terrain modeling in our prior research on intelligent excavation robots exhibits a sensing range of 270° around the Z-axis, as shown in Figure 8(a). It scans 270° on the XY plane at a rate of 50 Hz, with one scan measuring 541 points at an angular resolution of 0.5°, or 27,050 points per second. A 2D laser scanner, unlike a 3D laser sensor with two rotational axes, is a one-axis sensor. This means that acquiring 3D geometric data using a 2D laser scanner must involve either the sensor moving horizontally at constant speed or rotating around either the X-axis or the Y-axis. In this study, as shown in Figure 8(b), we set up the 2D laser sensor such that its Y-axis points vertically downward while the sensor is rotated directly about that Y-axis. This allows the sensor to simultaneously scan the right and left sides of the intelligent excavation robot, making it two times faster to create a 3D model of the full 360° workspace.

Figure 8. 2D laser scanner orientation: (a) scanning range of 2D laser scanner; (b) 2D laser scanner placement and rotation direction.

As such, to rotate the 2D laser sensor about its Y-axis while positioning it vertically pointing downward, the sensor's rotary component should be designed such that the Y-axis of the sensor coincides with the rotational axis of the instrument and there are no blind spots caused by the rotary unit within the sensor's scanning range. In other words, because the sensor attached to the rotary unit must have a field of view of 20.1° on the right and left sides of the Y-axis, the total height of the rotary unit should be at least 0.4 m, as shown in Figure 9(a), assuming that the width and length of the driving part of the rotary unit are 0.14 m each. Such a layout allows the 2D laser scanner of the rotary unit to exhibit a 93.9° scanning range on each side. Triggered by a start signal for the automated excavation operation, the 3D surround sensor of the intelligent excavation robot begins to sense the surrounding 360° area and continues to operate while the excavation robot performs its operation. In addition, as the sensor continuously scans the surrounding 360° area in one direction, slip rings must be used, and because the sensor scans both the right and left sides of the vehicle, the 0–180° area and the 180–360° area must be segmented and processed separately in the data processing stage.

Figure 9. Rotary unit layout and rotation direction: (a) rotary unit layout design; (b) rotary unit sensor rotation direction.
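To make the scanning geometry concrete, the sketch below shows how one 2D scan line, combined with the instrument's horizontal rotation angle, could be converted into 3D points in an excavator-fixed frame, and how the two hemispheres (0–180° and 180–360°) could be separated for the data processing stage. This is our own illustration rather than the authors' software; the angle conventions, the scan start angle, and the frame definition are assumptions.

```python
import math

def scan_to_3d(ranges_m, instrument_angle_deg,
               scan_start_deg=-135.0, scan_res_deg=0.5, sensor_height_m=4.7):
    """Convert one 2D laser scan into 3D points in an excavator-fixed frame.

    Assumptions (ours, not the paper's): the scan plane is vertical and contains
    the rotation axis, beam angles are measured from horizontal within that plane
    (-135 deg to +135 deg for a 270 deg scan), and instrument_angle_deg is the
    horizontal angle reported by the rotary unit's encoder.
    """
    yaw = math.radians(instrument_angle_deg)
    points = []
    for i, r in enumerate(ranges_m):
        beam = math.radians(scan_start_deg + i * scan_res_deg)
        horiz = r * math.cos(beam)      # signed offset within the scan plane
        vert = r * math.sin(beam)       # height of the hit relative to the sensor
        x = horiz * math.cos(yaw)       # rotate the scan plane about the vertical axis
        y = horiz * math.sin(yaw)
        z = sensor_height_m + vert
        # Beams with a negative in-plane offset lie on the opposite side of the
        # vehicle, i.e. the half that is segmented and processed separately.
        side = "0-180" if horiz >= 0 else "180-360"
        points.append((x, y, z, side))
    return points

# Example: a synthetic 541-point scan taken when the instrument reads 45 degrees.
cloud = scan_to_3d([10.0] * 541, instrument_angle_deg=45.0)
print(len(cloud), cloud[0])
```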
As discussed previously, the 3D surround sensor should be located at 4.7 m above the ground to be able to model the entire area around the intelligent excavation robot. If the sensor is designed to be fixed like a tower, it can cause problems when the excavation robot leaves the earthwork site for another location. Given that a typical Caterpillar-type excavator is loaded onto a truck for relocation, as it cannot be driven on highways, the maximum overall height, including the height of the truck bed, must not exceed 4.2 m, which is the maximum limit for highways. Therefore, the top rotary unit of the sensor should be designed to enable an up-and-down motion, with the height of the sensor top in the most reduced (lowest) state being 2.7 m or less above the ground. As the back part of the cab of the intelligent excavation robot is 2.0 m tall, the height of the fixed-type sensor should be less than 0.7 m. Assuming that the sensor's Y-axis coincides with the central rotational axis, its rotary unit is 0.4 m high, and the total instrument height is 2.7 m, a 2-stage movable unit is estimated to be as long as 1.55 m in its most reduced state, as shown in Figure 10(a), and a 3-stage movable unit as long as 1.16 m, as shown in Figure 10(b). In conclusion, with the rotary unit being at least 0.4 m high, the 0.7 m limit cannot be satisfied, regardless of the extent to which the minimum height is reduced by adding more stages. To address this limitation, we devised a hinge and clamp mechanism at the bottom of the instrument so that the instrument can be rotated down. With these additional hinges at the bottom of the instrument, the intelligent excavation robot will be able to travel long distances in a folded position free of the height limit, as the instrument's maximum height will then be 0.4 m and the overall height from the ground will be 2.4 m.

Figure 10. Sensor movable unit layout design: (a) 2-stage movable unit; (b) 3-stage movable unit; (c) 2-stage unit with hinge.

Another important consideration at the layout design stage is the average horizontal rotation speed of the sensor. In 3D modeling of the local area, there is a trade-off between rotation speed and modeling quality (point density). While a slow rotation speed of the 3D surround sensor does not directly affect the time in which the intelligent excavation robot executes its operation, it can have implications for the 3D modeling data-based object recognition process, as that process is triggered only when the sensor's rotational axis passes the 0° and 180° points, meaning the sensor's rotation speed determines the rate at which the object recognition results are updated. In addition, if the automated excavation process includes a loading operation, the intelligent excavation robot will wait at a designated location until it senses a truck and detects the vertexes of the truck bed, and the slower the detection process, the more delay it will incur in the excavation work process. On the other hand, if the horizontal rotation speed of the 3D surround sensor is too fast, the number of three-dimensional terrain and object points retrieved will be reduced, which leads to lower quality terrain models, ultimately degrading the performance of the object recognition process. Therefore, the optimal rotation speed of the 3D surround modeling sensor should be set at a level that does not undermine the performance of the object recognition process, and from the results of several tests, we determined that approximately 60° per second is the most effective speed.
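As a back-of-the-envelope check of this trade-off, the short calculation below uses the LMS-151 figures quoted above (50 scan lines per second, 541 points per scan line) to estimate how the azimuthal spacing between scan planes, the time per complete surround model, and the point count change with rotation speed. The arithmetic is ours; it assumes that, because both sides of the vehicle are scanned at once, a full 360° model requires only 180° of mechanical rotation, in which case 60° per second corresponds to roughly three seconds per model.

```python
SCAN_RATE_HZ = 50        # LMS-151 scan lines per second (from the paper)
POINTS_PER_SCAN = 541    # points per 270-degree scan line (from the paper)

def surround_model_budget(rotation_speed_deg_s):
    """Estimate update time, azimuthal spacing and point count for one full
    surround model (assumes both sides are scanned simultaneously, so 180 deg
    of mechanical rotation covers 360 deg around the machine)."""
    time_per_model_s = 180.0 / rotation_speed_deg_s
    azimuth_spacing_deg = rotation_speed_deg_s / SCAN_RATE_HZ
    total_points = int(SCAN_RATE_HZ * time_per_model_s * POINTS_PER_SCAN)
    return time_per_model_s, azimuth_spacing_deg, total_points

for speed in (30, 45, 60, 90):
    t, spacing, n = surround_model_budget(speed)
    print(f"{speed:3d} deg/s -> {t:4.1f} s per model, "
          f"{spacing:4.2f} deg between scan planes, {n:,} points")
```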
3.3. Detailed design of a 3D surround sensor

Based on the sensor layout design discussed in the previous section, we used SolidWorks 2008 to create a detailed design of the 3D surround sensor. As shown in Figure 11(a), the 3D surround sensor comprises a rotary unit, a movable unit, and a fixed unit, with the overall height being 2.71 m when fully stretched and 1.82 m when reduced the most.

The sensor's rotary unit, as shown in Figure 11(b), essentially uses a DC motor, reducer, encoder, and controller, and input and output pulleys and a belt are used to drive the rotating shaft. As discussed previously, because the Y-axis points downward, the smaller the size of the rotary driving part, the greater the scanning coverage that can be achieved. In terms of size, the rotary driving part can benefit from a smaller motor with low starting current and high speed. In this research, we used a 12 W EC-max 22 DC motor (part no. 283840), and as a reducer for the motor, a φ22 mm planetary gearhead (GP22C, part no. 144003, 690:1) was used. An incremental encoder that issues 128 pulses per revolution was used. Given that the motor's gear reduction ratio is 690:1, the reduction ratio of the belt and pulleys is 34:18, and the number of encoder pulses per revolution is 128, the number of pulses that the sensing instrument outputs per revolution is 166,827, with the resulting resolution per pulse being about 0.002°. As shown in Figure 11(c), the final power realized through the motor and reducer is transmitted to the rotational axis with a reduction ratio of 34:18, thereby rotating the 2D laser scanning sensor around the pipe shaft.
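The encoder arithmetic above can be reproduced directly. The snippet below (our own check, not the instrument's firmware) combines the 128-pulse incremental encoder, the 690:1 planetary gearhead, and the 34:18 belt stage, and shows how an accumulated count since the index signal could be mapped to a horizontal angle; the zeroing-at-index convention is an assumption.

```python
ENCODER_PULSES_PER_MOTOR_REV = 128
GEAR_RATIO = 690          # planetary gearhead, 690:1
BELT_RATIO = 34 / 18      # pulley reduction between gearhead and pipe shaft

PULSES_PER_SHAFT_REV = ENCODER_PULSES_PER_MOTOR_REV * GEAR_RATIO * BELT_RATIO
DEG_PER_PULSE = 360.0 / PULSES_PER_SHAFT_REV

def counts_to_angle_deg(counts_since_index):
    """Horizontal instrument angle measured from the index position.

    Assumes the counter is zeroed each time the index signal fires, which is
    the usual way an incremental encoder is referenced."""
    return (counts_since_index * DEG_PER_PULSE) % 360.0

print(round(PULSES_PER_SHAFT_REV))   # ~166,827 pulses per instrument revolution
print(round(DEG_PER_PULSE, 5))       # ~0.00216 deg per pulse (about 0.002 deg)
```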
3D surround sensor is designed such that it is As the 3D surround sensor stands as high as 4.7 m 2-tiered and its top sensor part can move up and above the ground with a 2D laser sensor on its top, it can down, driven by a DC motor and a ball screw. As the be greatly affected by vibrations from the excavator. In motor used for the movable unit only produces an this study, we installed a base block between the lower up-and-down movement, it does not control the base and the instrument case to securely fix the lower position and consists of a motor, reducer, and base, and by installing a tension roller onto the upper brake. As for the DC motor, we chose a 60w, large base, we ensure that tension can be imposed even 450 D.-J. YEOM ET AL. (a) Movable Unit Structure (c) Sensor Encloser (b) Movable Unit Power Transmission Structure Figure 12. 3D detailed design of 3D surround sensor movable unit. when the instrument is fully stretched. In addition, 4. 3D surround modeling system assembly because excavation operations are susceptible to dust, and field test humidity, and rain, we put in place an enclosure, as 4.1. 3D surround sensor hardware assembly and shown in Figure 12(c), to completely block the instru- installation ment from rain or dust in a fully stretched state. As illustrated in Figure 13, we reviewed the scan- In this study, we developed the hardware for a 3D ning range and the angles of blind spots when the 3D surround sensor in the following order: materials pro- surround sensor is mounted on the intelligent excava- cessing, trial assembly, surface treatment, and final tion robot. When viewed from the rear of the excava- assembly. Each component of the 3D surround sensor tor, the sensing instrument is 4.7 m tall, and based on that was designed using the SolidWorks software in the Z-axis the minimum blind spot is measured at 17.2° the detailed design phase was built of duralumin 7075 while the maximum at 34.6°, and with the blind spot of and precisely cut, and as shown in Figure 14(a–c), a trial the rotary unit on the Z-axis designed to be 16.8°, there assembly was performed to verify the design of the will be no blind spot caused by the instrument. The instrument and to ensure that all parts fit together maximum non-modeling region because of the shape properly. Moreover at this stage, control cables for of the excavator is 1.13 m on the left and 1.01 m on the the 2D laser sensor and the sensing instrument were right. When viewed from the side, the minimum blind put in place to verify that the instrument moves prop- spot is 30.3° based on the Z-axis while the maximum is erly and that sensor data is well received. Following the 45.0°, and the maximum non-modeling region is pre-assembly test, parts that must be exposed exter- 0.92 m in the rear and 1.64 m in the front. nally were painted in the same color as the excavator (a) Scanning range in fully stretched mode (b) scanning range in fully stretched mode (back view) (side view) Figure 13. Scanning range of 3D surround sensor. JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 451 (a) Trial assembly of rotary unit (b) Trial assembly of movable unit (e) Movable/rotary (d) Sensor data test (c) Trial assembly of fixed unit unit test Figure 14. 3D surround sensor trial assembly test. and the surfaces of internal parts were anodized. 
4. 3D surround modeling system assembly and field test

4.1. 3D surround sensor hardware assembly and installation

In this study, we developed the hardware for the 3D surround sensor in the following order: materials processing, trial assembly, surface treatment, and final assembly. Each component of the 3D surround sensor that was designed using the SolidWorks software in the detailed design phase was built of duralumin 7075 and precisely cut, and as shown in Figure 14(a–c), a trial assembly was performed to verify the design of the instrument and to ensure that all parts fit together properly. Moreover, at this stage, control cables for the 2D laser sensor and the sensing instrument were put in place to verify that the instrument moves properly and that sensor data is well received. Following the pre-assembly test, parts that must be exposed externally were painted in the same color as the excavator and the surfaces of internal parts were anodized.

Figure 14. 3D surround sensor trial assembly test: (a) trial assembly of rotary unit; (b) trial assembly of movable unit; (c) trial assembly of fixed unit; (d) sensor data test; (e) movable/rotary unit test.

Figure 15 illustrates the sensing instrument installed on the intelligent excavation robot after all surface-processed components were finally put together on site. The 3D surround sensing system hardware was installed on the rear part of the cab, and for the following six months, several tests were performed to verify terrain data acquisition, object recognition, and truck localization on an actual earthmoving site.

Figure 15. Final assembly of 3D surround sensor and field test: (a) assembly completed before installation; (b) installed on excavator (folded position); (c) installed on excavator (standing position); (d) sensor's rotary unit moving up; (e) field test (side); (f) field test (back).

The control system of the 3D surround sensor is intended to control the motion of the sensor instrument, process 2D laser sensor data in real-time, and transmit sensing results to the remote station. Therefore, the control system is essentially responsible for controls of the sensor instrument, sensor data processing, results transmission, etc. In this research, we built a control board for controlling the instrument as shown in Figure 16(a), and the enclosure for the control system, which is made of duralumin, was anodized. The control system was placed on a shelf affixed with brackets in the back of the cab, along with an encoder pulse counter to its right side for transmitting the motor's rotation angles to the data processing board.

Figure 16. Local sensor control system installed: (a) control board; (b) control system (top view); (c) control system (front view); (d) control system (side view).
4.2. Field test of the 3D surround sensor

The data processing software for the 3D surround sensor proposed in this research was developed using Microsoft Visual C++ 2010 as a Win32-based application (Figure 17). The software is based on OpenGL, an open library, and allows the user to simultaneously view the data processing results and the instrument controls of the sensor.

Figure 17. User interface for a 3D surround modeling module.

The 3D surround modeling software runs on the control system (Figure 16(b)) installed inside the cab of the intelligent excavation robot, and it has been designed such that the same screen can be displayed on the monitor at the remote station via wireless TCP/IP data communication. All modules of the 3D surround modeling software have been designed to operate in a fully automatic manner except for some initial default values.

As illustrated in Figure 18, the field test of the 3D surround sensor focused on verifying that the sensor can accurately create three-dimensional models of the loading truck, including the truck bed area, in the same environment as an actual earthwork site. The test was performed assuming various scenarios that can occur on site, without deliberately controlling any surrounding objects such as piles of earth, vehicles, people, etc. at the test site. A dump truck that is used for actual loading operations was used for the test.

Figure 18. Test environment for loading area recognition: (a) excavator on a slope; (b) excavator on level ground.

An analysis of the 3D surround modeling data measured at the actual earthwork site demonstrates that the 3D surround scanning quality is outstanding, with no particular noise detected (Figure 19). In addition, the shapes of people and trucks were captured clearly, which indicates no significant impact from the vibrations that might have been caused by the excavator. We also analyzed the accuracy of the 3D modeling results using a total station, and the results exhibit an average error of 3 mm within a 20-meter range (Yoo, Kwon, and Kim 2013). If the rotation speed of the sensor is increased, the scanning time for the front workspace can be reduced, while simultaneously exhibiting lower point density. A slower rotation speed, on the other hand, can translate into higher-density geometric data, but the scanning time per rotation becomes longer. Overall, the test results demonstrated that the ideal rotation speed should be set at three seconds per rotation.

Figure 19. Results of a 3D surround sensor's local area modeling: (a, b) excavator and site personnel modeling; (c, d) truck and soil mounds modeling.

The primary purpose of this paper is to design hardware for a sensor system and enable the creation of a high-quality 3D terrain map in a prompt and accurate manner with the minimum blind space. Identifying the location of a truck or site staff surrounding the excavator based on 3D terrain modeling will be addressed in subsequent work. Therefore, the primary focus of this paper is placed on verifying the accuracy of the sensing system proposed.

To test the accuracy of the sensing system developed, we first set up 9 circular targets in the terrain surrounding the excavator and obtained 3D data by rotating the sensor mounted on the back of the cab at constant speed, and then constructed a 3D terrain model using the software developed in this study. Afterwards, we measured the distances between the sensor's origin and the 9 targets with a total station, a laser surveying instrument used in civil engineering and earthworks, and calculated errors by comparing the physical distances with the computed distances in the 3D terrain model (Figure 20). The same test was conducted twice, so a total of 18 distances were compared (Table 1). The test found that the average error was 0.003 m (SD = 0.0082 m) and the maximum was 0.023 m, indicating that the accuracy is comparatively superior to that of a stereo vision camera. It was also found that, as is the case for a stereo vision camera, the longer the distance from the 2D laser scanner to the target, the greater the error becomes, with the magnitude of the error being proportional to the distance.

Figure 20. 2D laser scanner based 3D terrain modeling.

Table 1. A: calculated distance in the 3D model (m); B: actual distance measured by total station (m).

Image 1
Target no.   1      2      3       4      5       6       7       8       9
A            4.462  5.189  6.842   8.759  10.485  12.568  14.896  18.963  22.369
B            4.459  5.186  6.848   8.750  10.489  12.565  14.906  18.971  22.346
Difference   0.003  0.003  -0.006  0.009  -0.004  0.003   -0.010  -0.008  0.023

Image 2
Target no.   1      2      3       4      5       6       7       8       9
A            3.956  4.661  6.624   7.845  9.966   11.987  13.855  17.915  20.112
B            3.950  4.656  6.625   7.840  9.956   11.978  13.853  17.922  20.100
Difference   0.006  0.005  -0.001  0.005  0.010   0.009   0.002   -0.007  0.012

Descriptive statistics of the differences: mean 0.0030 (std. error 0.0011); 95% confidence interval for the mean -0.0011 to 0.0071; variance 0.0000; standard deviation 0.0082; minimum -0.0100; maximum 0.0230; significance probability 0.041.
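The summary statistics reported for Table 1 can be reproduced from the 18 listed differences; the snippet below is our own re-computation (not the authors' analysis code) of the mean, sample standard deviation, and extremes.

```python
import statistics

# Differences (A - B, in metres) from Table 1, images 1 and 2.
errors = [0.003, 0.003, -0.006, 0.009, -0.004, 0.003, -0.010, -0.008, 0.023,
          0.006, 0.005, -0.001, 0.005, 0.010, 0.009, 0.002, -0.007, 0.012]

mean = statistics.mean(errors)     # ~0.0030 m
stdev = statistics.stdev(errors)   # ~0.0082 m (sample standard deviation)
print(f"mean = {mean:.4f} m, sd = {stdev:.4f} m, "
      f"min = {min(errors):.4f} m, max = {max(errors):.4f} m")
```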
5. Conclusions

In this paper, we developed hardware for the 3D surround sensing system to construct 3D models of the 360° workspace surrounding the intelligent excavation robot. We conducted tests to create such three-dimensional models of the terrains and obstacles on an actual earthwork site. The conclusions that we arrived at are as follows:

(1) From our analysis of the optimal location of the 3D surround sensor, we determined that the sensor should be located on the right of the rotational axis of the excavator to minimize blind spots on the right and left sides of the sensor, with the most appropriate place being on top of the air compressor inlet on the back of the cab. In addition, given the blind spots caused by the height of the loading truck and the shape of the excavator, the sensor must be located at least 4.7 m above the ground.

(2) Upon analyzing the direction in which the 2D laser sensor must be installed and rotated, we determined that to allow for easy control of constant speed and data processing, the 3D surround sensor should rotate 360° indefinitely in one direction, with a rotation speed of 30–40 degrees per second. In addition, the Y-axis of the sensor should be perpendicular to the ground and coincide with the horizontal rotational axis, which is the most effective way of simultaneously scanning the right and left sides of the sensor.

(3) We designed a hardware layout for the 3D surround sensor, which comprises a rotary unit, movable unit, and fixed unit. Our analysis demonstrated that the rotary unit should be at least 0.4 m in height to gain a scanning angle of 20.1 degrees around the rotational axis during its one-direction, infinite rotation movement. The rotary unit used a DC motor as its power source, along with a belt and pulleys for power transmission to drive the rotating shaft. As for the movable unit, a ball screw was used to enable the up-and-down movement, while a belt and pulleys were utilized to transmit power to the ball screw.

(4) We developed Windows 32-based 3D surround modeling software to process data from the 3D surround sensor. Field tests were conducted at an actual earthwork site, with the results exhibiting outstanding 3D scanning quality for the 360° workspace and zero noise. In addition, the resulting images clearly exhibit the shapes of people and other objects like trucks, such that there is no significant impact caused by vibrations from the excavator.

The 3D surround sensor and 3D terrain modeling technology presented in this paper can be used for a system that detects and recognizes objects approaching the intelligent excavation robot from any direction within 360 degrees, and when compared with the previous technologies for unmanned excavators by Stentz et al. (1999), Yamamoto (2008), Yamamoto et al. (2006), and Yamamoto et al. (2009), it is considered to be the most advanced technology in terms of its expanded sensing range, reduced blind spots, and finer modeling resolution due to improved precision. The technology can be further developed to be used for planning travel paths for the intelligent excavation robot; avoiding people and preventing potential collisions; determining whether to stop work or not; and recognizing the truck bed area of a truck. If the sensor's performance is further enhanced with continuous research and development efforts based on the results of this study, it should contribute to developing a new sensor not only for the intelligent excavation robot, but also for other types of unmanned earthmoving machines. We expect that this technology can be used for many other types of automated construction and earthmoving equipment and will have wide applications to automated construction equipment going forward.
Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2016R1A2B2013985).

Notes on contributors

Dong-Jun Yeom earned his Ph.D. in Construction Management in the Department of Architectural Engineering in 2018 from Inha University. He has given a series of lectures on computer-aided design, computer programming for engineering applications, construction IT, etc. at Inha University since 2015. He currently serves as a postdoctoral research engineer in the Industrial Science and Technology Research Institute at Inha University.

Hyun-Seok Yoo earned his Ph.D. in Construction Management in the Department of Architectural Engineering in 2012 from Inha University. He currently serves as a vice professor in the Department of Technology Education at Korea National University of Education. His research interests are in the areas of construction information technologies, automation in construction, etc. He has conducted various research projects in terms of automation in construction: an Automated Pavement Crack Sealing Machine, an Intelligent Excavating System, and various applications of information technologies and automation in construction.

Young-Suk Kim earned his Ph.D. in Construction Engineering and Project Management in 1997 from the University of Texas at Austin. He has given a series of lectures on execution of building work, construction management, time management, cost management, contract management, construction information technology, automation in construction, etc. at Inha University since 1999. He currently serves as a professor in the Department of Architectural Engineering at Inha University and as chairman of the University Development Commission at the Korea Institute of Construction Engineering and Management. His research interests are in the areas of sustainable construction, cost and time management, engineering education, and automation in construction. He has conducted various research projects in terms of automation in construction: an Automated Pavement Crack Sealing Machine, a Tele-operated Concrete Pipe Laying Manipulator in the Trenches, an Automated Controller for Checking Verticality, an Intelligent Excavating System, etc.
He has conducted various research Yamamoto, H. 2008. “Research on Automatic Control projects in terms of automation in construction: an Technology of Excavation Work by Hydraulic Shovel.” Automated Pavement Crack Sealing Machine, Tele-operated Public Works Research Institute. https://www.pwri.go.jp/ Concrete Pipe Laying Manipulator in the Trenches, jpn/results/report/report-project/2007/pdf/2007-sen-3.pdf Automated Controller for Checking Verticality and Yamamoto, H., M. Moteki, H. Shao, T. Ootuki, Automated, an Intelligent Excavating System, and etc. H. Kanazawa, and Y. Tanaka. 2009. “Basic Technology toward Autonomous Hydraulic Excavator.” paper pre- sented at the annual meeting for society of the ISARC, References Austin, U.S.A. Yamamoto, H., Y. Ishimatsu, S. Ageishi, N. Ikeda, K. Endo, Cannon, H. 1999. “Extended Earthmoving with an Autonomous M. Masuda, M. Uchida, and H. Yamaguchi. 2006. Excavator.” Master’s Thesis, Carnegie Mellon University, “Example of Experimental Use of 3D Measurement Pittsburgh, U.S.A. System for Construction Robot Based on Component Corke, P., J. Roberts, and G. Winstanley. 1999. “3D Perception for Design Concept.” paper presented at the annual meeting Mining Robotics.” paper presented at the annual meeting for for society of the ISARC, Tokyo, Japan. society of the Field and Service Robotics, Pittsburgh, U.S.A. Yoo, H., S. Kwon, and Y. Kim. 2013. “A Study on the Selection Sarata, S., N. Koyachi, H. Kuniyoshi, T. Tsubouchi, and and Applicability Analysis of 3D Terrain Modeling Sensor K. Sugawara. 2007. “Detection of Dump Truck for for Intelligent Excavation Robot.” Journal of the Korean Loading Operation by Loader.” paper presented at the Society of Civil Engineers 33 (6): 2551–2562. doi:10.12652/ annual meeting for society of the ISARC, Kochi, India. Ksce.2013.33.6.2551. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Journal of Asian Architecture and Building Engineering Taylor & Francis

3D surround local sensing system H/W for intelligent excavation robot (IES)

3D surround local sensing system H/W for intelligent excavation robot (IES)

Abstract

The recently developed intelligent excavation robot in Korea is a fully automated excavator equipped with global 3D modeling capabilities for an entire earthwork site and an intelligent task planning system. The intelligent excavation robot includes features such as autonomous driving, 3D surround modeling, autonomous excavation, loading, etc. An intelligent excavation robot features technology that allows for accurate recognition of objects near the excavator, including the terrain of...
Loading next page...
 
/lp/taylor-francis/3d-surround-local-sensing-system-h-w-for-intelligent-excavation-robot-C4KM2QImWJ
Publisher
Taylor & Francis
Copyright
© 2019 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group on behalf of the Architectural Institute of Japan, Architectural Institute of Korea and Architectural Society of China.
ISSN
1347-2852
eISSN
1346-7581
DOI
10.1080/13467581.2019.1679148
Publisher site
See Article on Publisher Site

Abstract

JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 2019, VOL. 18, NO. 5, 439–456 https://doi.org/10.1080/13467581.2019.1679148 CONSTRUCTION MANAGEMENT a b a Dong-Jun Yeom , Hyun-Seok Yoo and Young-Suk Kim a b Department of Architectural Engineering, Inha University, Incheon, Korea; Department of Technology Education, Korea National University of Education, Cheongju, Korea ABSTRACT ARTICLE HISTORY Received 26 August 2019 The recently developed intelligent excavation robot in Korea is a fully automated excavator Accepted 30 September 2019 equipped with global 3D modeling capabilities for an entire earthwork site and an intelligent task planning system. The intelligent excavation robot includes features such as autonomous KEYWORDS driving, 3D surround modeling, autonomous excavation, loading, etc. An intelligent excavation 3D surround modeling; truck robot features technology that allows for accurate recognition of objects near the excavator, recognition; intelligent including the terrain of surrounding environments, location of obstacles in the excavator’s excavation robot; automated path, and any approaching trucks and moving people. Such technology is critical to ensuring excavation work quality and safety. In this study, we develop the hardware for a 3D surround laser sensing system that enables 3D image modeling of the terrain surrounding an intelligent excavation robot. By mounting a sensor onto an intelligent excavation robot, we conducted performance tests to determine the robot’s 3D modeling capabilities of the terrains and obstacles at an actual earthwork site. The experimental results are applied to an object recognition system for detecting the properties of the terrain of the workspace around the excavator, any approach- ing people, trucks, obstacles, etc. The proposed hardware includes a wide range of applications in the development of future automated construction equipment. 1. Introduction excavation robot, the technology that allows for accurate recognition of objects near the excavator, 1.1. Backgrounds and purpose including the terrain of surrounding environments, Excavators are some of the most notable construction the location of obstacles in the excavator’s path, any equipment used for earthworks. Excavators perform approaching trucks and moving people, forms the a wide range of tasks such as cutting, banking, gather- core technology that is critical to ensuring work ing, loading, leveling, and grading earth. The number quality and safety. of registered excavators worldwide is increasingly on To create three dimensional models of the sur- the rise, and there are several attachments to excava- rounding terrains and approaching objects around tors that are under continuous development, making an excavator, Stentz et al. (1999)utilizedtwo-axis excavators more effective for earthworks. laser scanners mounted on the excavator to scan Research on automated excavation first began andrecognizetrucks;whereas,Yamamoto(2008) with unmanned, remote-controlled excavators used GPS (global positioning system) and developed by Japan in the 1980s, and some of the a direction sensor installed on a truck that was prominent studies that followed include the auton- designed to operate in harsh environments to omous loading system (Cannon 1999;Singh and accomplish excavation operations. In the study by Simmons 1992;Stentzet al. 1999) developed by Stentz et al. 
(1999), unless an excavator is situated on Carnegie Mellon University in the 1990s; the auton- top of an elevated bench and a truck stops at omous hydraulic excavator (Yamamoto et al. 2009) a designated location, it can be difficult to recognize developed by the Public Works Research Institute either the truck or any other nearby object. In addi- (PWRI) in 2008, and the intelligent excavation robot tion, Yamamoto’s(2006) study involved multiple (IES) developed by Korea in 2011. Among these, the loading trucks that were outfitted with GPS and intelligent excavation robot (IES) recently developed direction sensors, which are both expensive, and by Korea is a fully automated excavation robot that is any errors in the sensor data would result in capable of autonomous driving, 3D modeling, exca- a collision between the bucket of the excavator and vation task planning, autonomous excavation, load- the truck, posing another serious problem. ing, etc. IES features global 3D modeling capabilities Therefore, to enable 360 degrees 3D modeling of oftheentireworksiteandan intelligenttaskplan- the surroundings around an intelligent excavation ning system. To develop such an intelligent robot, this study develops hardware for a new type of CONTACT Young-Suk Kim youngsuk@inha.ac.kr Department of Architectural Engineering, Inha University, Incheon 22212, Korea © 2019 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group on behalf of the Architectural Institute of Japan, Architectural Institute of Korea and Architectural Society of China. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 440 D.-J. YEOM ET AL. 3D surround sensing system that minimizes blind 1.2.3. Hardware design of the 3D surround sensor spots and offers prompt, accurate modeling of sur- Our prior research employed a 2D laser scanner as an rounding objects in a wide variety of terrains. In addi- ideal sensor for the intelligent excavation robot. As we tion, we install the developed sensor onto an employ the sensor, we perform an analysis of its sensing intelligent excavation robot to test its performance area, instrument movement direction, location of rota- using actual earthwork site terrains and obstacles. It tional axis, sensor rotation method, etc. and design is anticipated that the sensing system proposed in this a layout of the sensing system. We also review the driving study will provide the necessary support for accurate, part, center of gravity for rotation, scanning area, sensor fully automated excavation operations, presenting installation method, etc. in a detailed design. wide applications in the development of automated construction equipment. 1.2.4. 3D surround sensor installation and field test In this study, we build the hardware for a 3D surround sensor; install the hardware onto the intelligent exca- 1.2. Study scope and methodology vation robot; and execute tests. We also develop data 3D surround sensing technology refers to technology processing software for the 3D surround sensing sys- that uses geometric data on the terrain of the local area tem and execute performance tests at an actual earth- surrounding an intelligent excavation robot and nearby work site to verify its 3D terrain modeling capabilities. objects. 
The data is used to create three-dimensional models, and the three-dimensional geometric informa- 2. Analysis of prior work on 3D modeling tion captured from the excavator surroundings is then technology for intelligent excavation robot fed into object recognition algorithms to recognize ter- rains, people, trucks, obstacles, etc. The geometric data is 2.1. Definition of 3D surround modeling ultimately used to create paths for loading trucks and to technology avoid obstacles. The scope of this research is only limited The productivity and quality of earthmoving work can to developing hardware for the 3D surround sensing be significantly affected by the degree of rationality of system, which will be used to create three-dimensional task planning. Effective task planning in earthmoving models of local terrains and objects surrounding the operations must be based on a sound understanding excavation robot. Real-time data processing of the 3D of the terrain and ground characteristics at a particular surround sensing system, including object recognition job site, rather than solely focusing on the experience algorithms that separately recognize terrains and objects and skills of an equipment operator. Rational task plan- from three-dimensional geometric data, will be later dis- ning for automated excavation equipment operating cussed in subsequent research. The methodology used in in an earthmoving environment requires the creation this research is as follows: of a virtual environment (world model) that mimics the real environment based on three-dimensional data 1.2.1. Analysis of prior work and considerations for and the ability to update real-time changes to the system hardware design workspace terrain (local model) in three dimensions In this study, we examine the research and develop- as the work progresses. In addition, based on such ment status of domestic and overseas 3D surround a 3D virtual environment, an optimized earthwork modeling technologies by focusing on prior research task plan must be formulated through region segmen- work on automated earthmoving equipment and ana- tation, optimal platform positioning, and sequencing lyzing their problems. Based on the problems identi- tasks (Seo, Park, and Jang 2007). fied, we identify some of the considerations that are To build a three-dimensional environment of an required to develop a 3D surround sensing system for earthwork site, we create a three dimensional model the intelligent excavation robot. of the overall topology of the job site, which is referred to as “global modeling.” Moreover, in this research, we 1.2.2. Analysis of the optimal location of the 3D define “local modeling” as the creation of real-time 3D surround sensor models of changing terrains surrounding an auto- Excavation operations mostly involve soil digging, and mated earthmoving machine, thereby updating the 3D image modeling of a digging area below the global model. In global modeling, a terrestrial 3D ground without any blind spots requires an analysis laser scanner is used to generate 3D topographic of where the sensor should be located on the intelli- data, which is then compared to the design drawing gent excavation robot. In this study, we identify the so as to generate information on the scope of right location of the sensor that minimizes blind spots a working area and the amount of work required. in the creation of three dimensional models of the local Local modeling, on the other hand, focuses on the area around the intelligent excavation robot. 
creation of a 3D model of a relative working area that JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 441 Figure 1. Global area and local area (Yoo, Kwon, and Kim 2013). coordinating with multiple automated earthmoving changes as the automated earthmoving machine systems, including the man-machine interface (MMI) moves. As shown in Figure 1, the working area within an eight-meter radius of the intelligent excavation system, an object recognition system is used to reposi- tion the automated machine for collision avoidance robot and the loading area, which is situated within and path identification. a 20-meter radius, are subject to 3D modeling. 3D surround modeling technology involves using data on the terrain of the local area surrounding the 2.2. Analysis of 3D surround modeling technology excavator, including the geometry of nearby objects, development status and then creating 3D representations. 3D surround modeling is ultimately used to create paths for the Research on a fully automated excavation robot can be intelligent excavation robot to carry away earth, load traced back to earlier research on an autonomous trucks, or avoid obstacles by sensing nearby trucks, loading system (“ALS”), which was conducted by people, and obstacles. The object recognition data is Carnegie Mellon University in 1999. As shown in then sent to the remote station. The results of 3D sur- Figure 2, Stentz et al. (1999) conducted research on round modeling presented in this study can be used as modeling a surrounding workspace in three dimen- important information to execute path planning for the sions by mounting 2-axis laser scanners on both sides boom, arm, and bucket of the intelligent excavation of the ALS. In this system, the scanner on the right side robot. In addition, the results are used to determine performs 3D modeling of the workspace in front of the the amount of earth that must be excavated by compar- machine, while the left scanner detects trucks or ing the volumes before and after each excavation opera- objects that are approaching the machine, with the tion. The 3D surround modeling results can also be used horizontal scanning angle for each scanner being 120 to develop an excavation quality inspection process to degrees. The two 2-axis laser scanners obtained from inspect and decide whether the excavation operation at the ALS research comprise a laser distance measuring the current location of the platform can be achieved by sensor fitted with two rotational axes and a reflector, comparing against 3D design information. In addition, which are basically the same as a 3D laser scanner in the results of the earthmoving amount calculation pro- terms of structure, that are capable of measuring ter- cess can be used to estimate the amount that must be rain points at a rate of 12 kHz. As the ALS was designed loaded onto the truck. to recognize a truck using its left laser scanner, it can If the 3D surround modeling technology is coupled only perform loading operations when the excavator is with object recognition technology to recognize load- located on top of an elevated bench with a truck ing trucks, obstacles, people, etc., the combined tech- parked at a designated location, which can be nologies can serve as critical information for setting approached from left. 
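The before-and-after volume comparison mentioned above can be illustrated with a simple 2.5D height-grid differencing scheme. This is only a minimal sketch of the idea, not the authors' implementation: the function names, the 0.2 m cell size, and the assumption that both scans are already registered in a common excavator-centred frame are ours.

```python
import numpy as np

def height_grid(points, cell=0.2, bounds=(-8.0, 8.0, -8.0, 8.0)):
    """Rasterize an (N, 3) point cloud into a 2.5D height grid.

    points : ndarray of [x, y, z] rows, already registered in a common
             excavator-centred frame (assumption).
    cell   : grid cell size in metres (illustrative value).
    bounds : (xmin, xmax, ymin, ymax); here the 8 m working radius is
             simply squared off for brevity.
    """
    xmin, xmax, ymin, ymax = bounds
    nx = int(round((xmax - xmin) / cell))
    ny = int(round((ymax - ymin) / cell))
    grid = np.full((nx, ny), np.nan)
    ix = np.clip(((points[:, 0] - xmin) / cell).astype(int), 0, nx - 1)
    iy = np.clip(((points[:, 1] - ymin) / cell).astype(int), 0, ny - 1)
    for i, j, z in zip(ix, iy, points[:, 2]):
        # keep the highest return per cell (a common simplification)
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid

def excavated_volume(grid_before, grid_after, cell=0.2):
    """Approximate removed volume (m^3) as the summed height drop per cell."""
    drop = grid_before - grid_after
    drop = np.where(np.isnan(drop), 0.0, np.clip(drop, 0.0, None))
    return float(drop.sum() * cell * cell)
```

In practice the two scans would be differenced only after both are transformed into the same platform pose; that registration step is outside this sketch.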
In other words, in case where a travel path and an alternative path for the intelligent the excavator and the truck are at the same level or the excavation robot, which can be used to make emer- truck is approaching from behind of the excavator gency stops while avoiding mobile obstacles. platform, the loading operation becomes impossible. Moreover, to develop a collaboration system for Given that excavators exhibit variable working 442 D.-J. YEOM ET AL. (a) Laser scanners mounted on excavator in ALS (b) Excavator loading a truck in ALS Figure 2. Autonomous loading system (CMU) (Stentz et al. 1999). patterns, regardless of the position of the excavator a system requires high-precision GPS and direction sen- platform, the excavator should be able to perform sors, and the trucks used for testing were quite different a proper loading operation. from large dump trucks that typically operate on actual Meanwhile, the autonomous hydraulic excavator earthwork sites. As the truck loading system used by the (“AHE”) developed by the Public Works Research PWRI of Japan does not involve direct modeling and Institute of Japan (“PWRI”) in 2006, which is shown in recognition of the truck, its software configuration can Figure 3(a), is equipped with GPS, a direction sensor, be very simple. The problem, however, is the high costs gyro sensor, and an azimuth sensor with an indepen- involved in implementing high-precision sensors and in dent task planning system. The AHE employs a stereo some scenarios, an estimation of the truck bed area may vision camera to enable 3D modeling of the front work- not be entirely reliable. In fact, for a GPS system that space and used a 2D laser scanner for 360-degree sur- supports errors at the centimeter level, the per-unit cost rounding area modeling (Yamamoto 2008). Unlike the is more than $8,500. In a solid communication environ- ALS, to recognize where a truck is located, the study ment, it operates within a margin of error of 1 cm, but adopted an approach of equipping the truck with GPS when communication with the base is poor, errors of and direction sensors, as illustrated in Figure 3(b).Such more than 10 cm can occur, indicating that performing (b) Truck used in PWRI (a) Sensors mounted on excavator in PWRI (Yamamoto 2008) (Yamamoto et al. 2009) (c) 2D laser scanner in PWRI (d) Results of 360° modeling (Yamamoto et al. 2006) (Yamamoto et al. 2006) Figure 3. Autonomous hydraulic excavator (PWRI). JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 443 a fully automated loading operation is highly likely to Later Sarata, Koyachi, and Sugawara (2008)con- result in collision accidents. ducted research on detecting the location and In 2008, the PWRI performed research on 3D shape of a pile of soil by measuring the height of modeling of a 360-degree area using a 2D laser the pile from the ground using stereo vision cam- scanner. As shown in Figure 3(c),asensor was eras installed on top of a wheel loader (Figure 5). mounted on top of the cab of the excavator to However, these methods have some limitations as create a three-dimensional representation of the the typical terrain on actual earthwork sites tends surrounding terrain by slowly rotating the swing to be much more rough and uneven. body in a 360 degrees angle. 
In such a method, to Corke, Roberts, and Winstanley (1999) devel- obtain one page of topographic data, an excavator oped a mining robot equipped with a stereo vision must stop its work and slowly rotate itself at con- camera and conducted research on 3D modeling of stant speed, and if the rotation speed fluctuates, it rocks in an underground mine by projecting illu- is difficult to obtain an even distribution of points mination (Figure 6). Whiteborn, Debrunner, and (Figure 3(d)). Another limitation is that 3D modeling Steele (2003) conducted research on building 3D based on data from a 2D laser scanner requires models of underground mining operations by a precise horizontal rotation angle, which means mounting stereo vision equipment and headlights that an additional sensor must be installed on the onto an LHD vehicle specifically developed for rotational axis of the excavator. underground mining operations. Also recently, there have been similar studies in Japan using a 2D laser scanner and stereo vision 2.3. Considerations for 3D surround modeling sensor mounted on top of a wheel loader to detect sensor hardware a dump truck and a pile of soil. In his study, Sarata et al. (2007) used an algorithm that identifies To design hardware for a 3D surround modeling sensor, a truck by detecting the ground plane and then this study analyzes some of the problems of previous a vertical straight line from the ground (Figure 4). research and identified the following considerations: Figure 4. Detection of the location of a truck using 2D laser sensor (Sarata et al. 2007). Figure 5. Detection of soil mound using stereo vision (Sarata, Koyachi, and Sugawara 2008). 444 D.-J. YEOM ET AL. (a) LHD automatic vehicle (b) Stereo matching results Figure 6. LHD vehicle (Whiteborn, Debrunner, and Steele 2003). accurately recognize the truck bed from whichever 2.3.1. Minimizing blind spots for the surrounding direction the truck approaches within 360 degrees. In terrain the CMU system; however, accurately recognizing the In both studies performed by CMU and PWRI, the truck bed area requires the excavator to be positioned sensor systems of the fully automated excavation sys- higher than the truck. Because the truck bed is as high tem focus on 3D modeling of the terrain of the work as approximately three meters from the ground, blind area in front of the excavator. This is because a fully spots will be created as the truck is scanned if the truck automated excavation system typically travels to its and excavator stay at the same level, which also means work location and then stops its platform to scan the that the truck bed area cannot be fully covered for 3D initial work area terrain after which with each digging modeling. In typical earthmoving scenarios, the exca- operation, the system rescans the terrain to monitor vator is often positioned on top of an elevated bench the digging progress. Previously developed automated during the earthmoving operations, and the excavator excavation systems are characterized by terrain mod- and the truck frequently operate at the same altitude. eling sensor systems mounted on the top of the cab To address this problem, a sensing system that rotates facing downward, which is closely related to the 360 degrees at constant speed must be designed to be shapes of the excavated terrain. 
As the stereo vision located at a significantly high level from the ground camera system and the laser scanning system are likely while being closest as possible to the rotational axis of to create blind spots, these approaches are considered the excavator. for minimizing blind spots in the excavated terrain, as much as possible, thereby precisely modeling the changing shape of the terrain after each bucket pass. 2.3.3. Implementing 3D topographic data-based In particular, when a sensing system is located over the object (trucks, site personnel, etc.) recognition top of the excavated terrain, perpendicular to the Because a fully automated excavation system does not ground, blind spots can be minimized and the most involve any operator throughout the entire operation even density modeling is made possible. Therefore, process, possible collisions with nearby obstacles or a 3D surround modeling sensor system must be objects can be a critical issue. While the CMU research located at a level that is as high as possible, pointing partially experimented with truck and object estima- downward to allow for sensing of the terrain below. tion, there is little research on the object recognition rate and truck bed recognition. In PWRI’s fully auto- 2.3.2. 3D modeling of the 360° work area around mated excavation system research, with its primary the excavator focus on 3D modeling of the front workspace, only CMU’s fully automated excavation system uses a left the truck bed area was subject to estimation using scanner, among its two 2-axis scanners, to detect GPS and direction sensors, and none was done on a truck, which is supposed to stop at a designated object detection. For an intelligent excavation system location within the 120 degree sensing range. If the to be able to accurately and promptly recognize any truck is outside of this range, unless the excavator’s obstacles or objects in the excavator’s path, a highly swing body rotates, it is impossible to detect the truck sophisticated object recognition algorithm is required bed, let alone identify whether the tuck is anywhere to discern an object from 3D data on the local area in near the excavator. This also has close implications for a prompt and accurate manner and to estimate the a possible collision between the excavator and the type of object, including its moving direction. truck. Given that the excavators exhibit diverse opera- Recognizing objects, such as loading trucks and site tion patterns, they should be able to promptly and personnel, in atypical terrains requires 3D topographic JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 445 data with significantly even and accurate density, another. Similarly, in the case of banking, the same which must be considered while designing the hard- applies to the loading area, but it is preferable for the ware for the 3D surround sensor. excavator to be positioned at a higher location com- pared to the truck during the loading operation because this offers an unobstructed view to an opera- 3. Design and development of 3D surround tor of the rectangular loading area, minimizing poten- sensor hardware tial accidents where the excavator’s arm or bucket might collide with the truck bed area. However, in 3.1. 
Analysis of the optimal location of a 3D actual earthwork sites, loading formation may be flex- surround sensor ible depending on the condition of the terrain of the Existing research on automated excavation focus on construction site, while in drainage construction, the performing 3D modeling of changing workspace ter- truck and the excavator often operate at the same rains and objects around an excavation robot, and altitude. Therefore, a truck must be allowed to most research use a stereo vision camera, 2D laser approach in whatever direction possible within 360 scanner, or 3D laser scanner as a sensor interface degrees from the rotational axis of the excavator. In device for 3D modeling and object detection. addition, with or without an elevated bench, the three- A stereo vision camera can rapidly acquire video dimensional coordinates of the truck bead area can be images but it exhibits a considerable amount of noise accurately recognized. and its accuracy is very low, which means that the Because the 3D surround sensor of the intelligent camera can take quite a long time to process the excavation robot must model an area 360° around the data; whereas a terrestrial 3D laser scanner is expensive excavator, the closer it is located to the center of the and susceptible to vibrations in the external environ- excavator on the XY plane, the better it is to evenly ments, and so modifying the laser scanner to suit the model the entire area, as illustrated in Figure 7(a).In excavation operations can be a challenge. Therefore, in actual scenarios, aside from the excavation work area our previous research by Yoo, Kwon, and Kim (2013) (30°-150°) around the center of the excavator platform, we identified five factors–economic feasibility, data a truck can enter the remaining area (151°-360°) from acquisition speed and scope, accuracy, ease of installa- any direction, and in some cases, a truck might even tion, durability–and computed the weight for each enter the front work area. Therefore, the area within factor using the analytic hierarchy process (AHP) 7.5 meters from the excavator’s rotational center must method to determine the most ideal 3D modelling be set as a loadable area. Moreover, to predict whether sensor for an intelligent excavation robot. The result- a truck or object is approaching the excavation system, ing weighted preference calculation determined that the look-ahead distance for determining whether any- among a terrestrial 3D laser scanner, 2D laser scanner, thing is approaching should be 15 meters from the stereo vision camera, and structured light sensor, a 2D excavator’s center. laser scanner is the best fit for the intelligent excava- Meanwhile, the 3D surround sensor of the intelli- tion robot. Therefore, we use a 2D laser scanner to gent excavation robot creates blind spots, as shown in design hardware for a 3D surround sensor. Figure 7(b,c), no matter where the sensor is positioned As the purpose of the 3D surround sensor of the because of the shape of the excavator itself. To mini- intelligent excavation robot is to model the 360° area mize such blind spots, the sensor must be located as surrounding the excavator, definitions concerning the high as possible while also remaining close to the specifications of the intelligent excavation robot and excavator’s central axis. Figure 7(b) illustrates the the working area to be excavated must first be pro- blind spots that can be created when the sensor is vided. 
In this research, we used Doosan Infracore’s DX- located at 4.7 meters above the ground, with the 140LC, an excavator with a height of 2.8 meters, boom front blind space being 1.8 meters and the rear being and arm length of 4.0 meters and 1.9 meters, respec- 0.95 meters. To even out the front and rear blind space, tively, maximum excavation distance of 8.2 meters, it might be better to position the sensor at the center and maximum excavation depth of 5.0 meters from of the rotational axis or slightly to the right, but if the the ground. sensor is mounted on top of the cab, problems might The ultimate goal of the 3D surround sensor is to arise while the vehicle is in motion. Moreover, colli- recognize and localize trucks or objects that are sions with the boom or even bigger blind spots caused approaching the excavator, which requires an analysis by the boom are all possible concerns. Therefore, we of where the excavation system and the loading truck decided that the optimal location of the sensor should are located. In earthmoving operations, excavation be slightly to the left from the excavator’s central axis. (cutting) involves either piling up soil near the platform Figure 7(c) illustrates the blind spots created by the or loading it onto a truck that is in proximity of the sensor when viewed from the rear of the excavator platform. This means that the cutting area and the because the cab is positioned on the left side. The banking area typically exist independently of one sensor in the figure should be located slightly to the 446 D.-J. YEOM ET AL. (a) Loading sensing area (Yoo, Kwon, and Kim 2013) (b) 3D surround modeling sensor location (side view) (c) 3D surround modeling sensor location (back view) Figure 7. Loading sensing area and blind spots caused by loading sensing area and sensor location. right from the rotational axis of the excavator to mini- determined the back part of the excavator’scab to be mize the blind space on the left and right sides. As the optimal location for the sensor. We also designed shown below, with the sensor positioned at 4.7 m the sensor such that it can be positioned at 4.7 meters above the ground, slightly to the left from the excava- above the ground. tor’s central axis, the left and right blind spots are 1.15 m and 0.98 m, respectively. 3.2. 3D surround sensor layout design In conclusion, if the sensor is positioned closer to the excavator’s central axis, the blind spots created by the The 2D laser scanner (LMS-151, SICK) used for terrain excavator will be minimized and most evenly distribu- modeling in our prior research on intelligent excava- ted, and given the height of the truck (3m), the higher tion robots exhibits a sensing range of 270° around the the sensor is positioned above the ground, the more it Z-axis, as shown in Figure 8(a), measuring 27,050 can scan the truck bed, thereby minimizing the blind points per second with an angular resolution of 0.5°. spot for the truck bed. However, to determine the sen- The 2D laser scanner scans 270° on the XY plane at sor’s height, mobility of the intelligent excavation robot a rate of 50 Hz with one scan measuring 541 points must be considered. In addition, the fact that as the with an angular resolution of 0.5°. A 2D laser scanner, sensor gets higher, the impact caused by vibrations unlike a 3D laser sensor with two rotational axes, is can become even greater. In this research, we consid- a one-axis sensor. 
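The 7.5-meter loadable area and 15-meter look-ahead distance defined in the preceding discussion reduce, at the data-processing level, to a simple range test around the excavator's rotational centre. The sketch below is illustrative only: the zone labels and the excavator-centred coordinate frame are our assumptions, and the 30°–150° work-sector test is omitted because its angular reference depends on the figure.

```python
import math

# Radii taken from the text; the zone labels are our own wording.
LOADABLE_RADIUS_M = 7.5     # trucks may approach from any direction within this radius
LOOKAHEAD_RADIUS_M = 15.0   # approaching objects are watched out to this distance

def classify_zone(x, y):
    """Classify a detected object position (metres, excavator-centred frame)."""
    r = math.hypot(x, y)
    if r <= LOADABLE_RADIUS_M:
        return "loadable_area"
    if r <= LOOKAHEAD_RADIUS_M:
        return "approach_monitoring"
    return "outside_area_of_interest"

# Example: a truck detected about 12 m to the rear-left of the platform.
print(classify_zone(-8.0, 9.0))   # -> approach_monitoring
```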
This means that acquiring 3D geo- ered the height of the cab, the amount of blind spots metric data using a 2D laser scanner must involve that can be created by the cab and boom and finally either the sensor moving horizontally at constant JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 447 (a) Scanning range of 2D laser scanner (b) 2D laser scanner placement and rotation direction Figure 8. 2D laser scanner orientation. speed or rotating around either the X-axis or the Y-axis. 180–360° area must be segmented and processed In this study, as shown in Figure 8(b), we set up the separately in the data processing stage. Y-axis of the 2D laser sensor such that it is positioned As discussed previously, the 3D surround sensor vertically downward while simultaneously directly should be located at 4.7 m above the ground to be rotating the Y-axis. This allows the sensor to simulta- able to model the entire area around the intelligent neously scan the right and left sides of the intelligent excavation robot. If the sensor is designed to be fixed excavation robot, making it two times faster to create like a tower, it can cause some problems when the a 3D model of the full 360° workspace. excavation robot goes off the earthwork site to As such, to rotate the Y-axis of the 2D laser sensor another location. Given the fact that a typical while positioning it vertically pointing downward, the Caterpillar-type excavator is loaded onto a truck for sensor’s rotary component design should be such that relocation as it cannot be driven on the highways, the Y-axis of the sensor coincides with the rotational the maximum overall height, including the height of axis of the instrument and there should be no blind the truck bed, must not exceed 4.2 m, which is the spots caused by the rotary unit within the sensor’s maximum limit for highways. Therefore, the top rotary scanning range. In other words, because the sensor unit of the sensor should be designed to enable an up- attached to the rotary unit must have a field of view and-down motion, with the height of the sensor top in of 20.1° on the right and left sides of the Y-axis, the the most reduced (the lowest) state of 2.7 m tall or less total height of the rotary unit should be at least 0.4 m, above the ground. As the back part of the cab of the as shown in Figure 9(a), assuming that the width and intelligent excavation robot is actually 2.0 m tall, the length of the driving part of the rotary unit are 0.14 m height of the fixed-type sensor should be less than each. Such a layout allows the 2D laser scanner of the 0.7 m. Assuming that the sensor’s Y-axis coincides rotary unit to exhibit a 93.9° scanning range on each with the central rotational axis; its rotary unit is 0.4 m side. Using a start signal for the automated excavation high; and the total instrument height is 2.7 m, as operation, the 3D surround sensor of the intelligent shown in Figure 10(a). A 2-stage movable unit is esti- excavation robot simultaneously begins to sense the mated to be as long as 1.55 m in the most reduced surrounding 360° area and continuous to operate state, and a 3-stage movable unit to be as long as 1.16 while the excavation robot performs the operation. In me, as shown in Figure 10(b). 
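Because the LMS-151 is a one-axis scanner, every 3D point must be reconstructed from the in-plane scan angle, the measured range, and the instrument's horizontal rotation angle. The sketch below assumes the vertically oriented scan plane rotating about the vertical axis described above, with the scan angle measured from straight down; the real sign conventions and mounting offsets would come from calibration and are not taken from the paper.

```python
import math

def scan_return_to_xyz(range_m, scan_angle_deg, rotation_deg, sensor_height_m=4.7):
    """Convert one 2D-scanner return into excavator-centred coordinates.

    range_m         : distance measured along the beam.
    scan_angle_deg  : angle inside the vertical scan plane, 0 = straight down
                      (assumed convention, not taken from the paper).
    rotation_deg    : horizontal rotation of the instrument about the vertical
                      axis, read from the encoder.
    sensor_height_m : sensor origin above the ground (4.7 m per the design).
    """
    a = math.radians(scan_angle_deg)
    t = math.radians(rotation_deg)
    horiz = range_m * math.sin(a)                 # offset within the scan plane
    x = horiz * math.cos(t)
    y = horiz * math.sin(t)
    z = sensor_height_m - range_m * math.cos(a)   # height above the ground
    return x, y, z

# A return 4.7 m straight down lands at ground level directly under the sensor.
print(scan_return_to_xyz(4.7, 0.0, 0.0))   # -> (0.0, 0.0, 0.0)
```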
In conclusion, with the addition, as the sensor continuously scans the sur- rotary unit being at least 0.4 m high, regardless of the rounding 360° area in one direction, slip rings must extent to which the minimum height is reduced by be used, and because the sensor scans both the right adding more stages, you cannot satisfy the 0.7 m and left sides of the vehicle, the 0–180° area and the limit. To address this limitation, we have devised (b) Rotary unit sensor rotation direction (a) Rotary unit layout design Figure 9. Rotary unit layout and rotation direction. 448 D.-J. YEOM ET AL. (a) 2-stage movable (b) 3-stage movable (c) 2-stage with hinge unit unit Figure 10. Sensor movable unit layout design. a way to add a hinge and clamp mechanism at the 3.3. Detailed design of a 3D surround sensor bottom of the instrument to be able to rotate the Based on the sensor layout design discussed in the instrument. With these additional hinges at the bottom previous section, we used SolidWorks 2008 to create of the instrument, the intelligent excavation robot will a detailed design of the 3D surround sensor. As shown be able to travel for long distances in a folded position in Figure 11(a), the 3D surround sensor comprises free of the height limit as the instrument’s maximum a rotary unit, movable unit, and fixed unit, with the height will then be 0.4 m and the overall height from overall height being 2.71 m when fully stretched and the ground will be 2.4 m. 1.82 m when reduced the most. Another important consideration in the layout design The sensor’s rotary unit, as shown in Figure 11(b), stage is the average horizontal rotation speed of the essentially uses a DC motor, reducer, encoder, and sensor. In 3D modeling of the local area, there is a trade- controller, and input and output pulleys and a belt off between rotation speed and modeling quality (point are used to drive the rotating shaft. Similar to our density). While a slow rotation speed of the 3D surround previous discussion, as the Y-axis points downward, sensor does not directly affect the time the intelligent the smaller the size of the rotary driving part, the excavation robot executes its operation, it can have greater is the scanning coverage that can be implications for a 3D modeling data-based object recog- achieved. In terms of size, the rotary driving part nition process as the process is triggered only when the can benefit from a smaller motor with low starting sensor’s rotational axis passes the 0° and 180° points, current and high speed. In this research, we used an meaning the sensor’s rotation speed equals the speed EC-max22 283,840 DC motor (12W), and as of the object recognition results being updated. In addi- a reducer for the motor, a φ22 mm planetary gear tion, if the automated excavation process includes (GP22C 144,003, 690:1) was used. An incremental a loading operation, the intelligent excavation robot encoder that issues 128 pulses per revolution was will wait at a designated location until it senses a truck used. Given that the motor’s gear reduction ratio is and detects the vertexes of the truck bed, and the 690:1; a reduction ratio driven by belt and pulleys is slower the detection process, the more delay it will 34:18; and the number of encoder pulses per revo- incur in the excavation work process. 
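The gear-train figures quoted here (a 690:1 planetary reducer, a 34:18 belt-and-pulley stage, and a 128-pulse-per-revolution incremental encoder) can be checked with a few lines of arithmetic; they reproduce the 166,827 pulses per instrument revolution and the roughly 0.002° angular resolution stated in the text. The snippet below is only a verification sketch.

```python
ENCODER_PPR = 128        # incremental encoder pulses per motor revolution
PLANETARY_RATIO = 690    # GP22C planetary gear, 690:1
BELT_RATIO = 34 / 18     # pulley reduction between motor output and rotating shaft

pulses_per_sensor_rev = ENCODER_PPR * PLANETARY_RATIO * BELT_RATIO
resolution_deg_per_pulse = 360.0 / pulses_per_sensor_rev

print(round(pulses_per_sensor_rev))          # -> 166827, as stated in the text
print(round(resolution_deg_per_pulse, 4))    # -> 0.0022, i.e. roughly 0.002 deg
```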
On the other lution is 128, the number of pulses that the sensing hand, if the horizontal rotation speed of the 3D surround instrument outputs per revolution is 166,827, with sensor is too fast, the number of three-dimensional the resulting resolution per pulse at 0.002°. As terrain and object points retrieved will be reduced, shown in Figure 11(c),the final power realized which then leads to lower quality terrain models, ulti- through the motor and reducer is transmitted to mately degrading the performance of the object recog- the rotational axis with a reduction ratio of 34:18, nition process. Therefore, the optimal rotation speed of thereby rotating the 2D laser scanning sensor the 3D surround modeling sensor should be set at around thepipeshaft. a level that does not undermine the performance of Figure 11(d) illustrates the manner in which blind the object recognition process, and from the results of spots can be created by the rotary driving part of the several tests, we determined that approximately 60° 2D laser sensor, and in detailed design, we secured per second is the most effective speed. JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 449 (b) Sensor's rotary unit (c) Torque transmission path configuration (a) 3D surround sensor configuration (d) Right and left scanning angles (e) Coordinate axes and of movable unit scanning area Figure 11. Detailed design of 3D surround sensor. a field of view of 16.8° by increasing the length of the torque model (EC-max30 272,763) as the motor must pipe shaft. As the surround modeling sensor is located withstand the weight of the sensor’s rotary unit, at 4.71 m above the ground, the angle above the x-axis upper shaft, base, ball screw, linear bushes, etc. As of the 2D laser sensor is barely scanned, and so we a reducer for the motor, a φ42mm planetary gear placed a sensor cover on top of the x-axis. The result- (Maxon GP42C 203,115, 12:1) was adopted. In this ing scanning angle for the 2D laser sensor is 76.1° on research, the ball screw is used as the primary power the left and right sides. To control the rotation position transmission device and as shown in Figure 12(b), (angle) of the 3D surround sensor, an EPOS 24/1 con- the power from the motor is transmitted to the ball troller (Maxon) was used, and the rotation speed was screw rod through the input and output pulleys and designed to be controlled in four different modes belt, and based on the rotating direction of the ball (8.3 rpm, 7.5 rpm, 5.0 rpm, 3.75 rpm) As the 3D sur- screw, the upper base fitted with the ball screw nut round sensor employs an incremental encoder, an moves up and down. The upper base exhibits an up- index signal must be entered to calculate the current and-down movement until the index sensor signals horizontal angle of the sensor instrument. As shown in are received, and in case if there is any problem with Figure 11(e), an index signal was designed to be issued the index sensor signals, the design includes when the sensor passes the – X-axis plane. The 3D urethane buffers at the top and bottom of the surround sensor has been designed to infinitely rotate lower base. To protect against vibrations, the upper counterclockwise and scan up to 20 meters ahead until and lower bases each have two shafts and linear it receives a stop signal. bushes, and an amplifier was installed onto As shown in Figure 12(a), the movable unit of the a circular plate at the bottom of the lower base. 
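The rotation-speed trade-off discussed above can be made concrete with a small calculation. Using the scanner's 50 Hz profile rate and 541 points per 270° profile quoted earlier, and the fact that both sides of the rotation axis are scanned at once so that a 180° sweep already covers the full surroundings, the helper below relates rotation speed to coverage time and to the azimuthal spacing between profiles. The candidate speeds are values mentioned at different points in the paper; the function itself is our own illustration.

```python
SCAN_RATE_HZ = 50.0        # LMS-151 profiles per second (from the text)
POINTS_PER_PROFILE = 541   # 270 deg at 0.5 deg resolution: 270 / 0.5 + 1 = 541

def coverage_stats(rotation_deg_per_s):
    """Relate rotation speed to coverage time, profile spacing and point count.

    Because the scanner looks down both sides of the rotation axis at once,
    a 180-degree sweep covers the full 360-degree surroundings.
    """
    sweep_time_s = 180.0 / rotation_deg_per_s
    azimuth_step_deg = rotation_deg_per_s / SCAN_RATE_HZ
    total_points = int(SCAN_RATE_HZ * sweep_time_s * POINTS_PER_PROFILE)
    return sweep_time_s, azimuth_step_deg, total_points

for speed in (30, 40, 60):   # deg/s, values mentioned at different points in the paper
    t, step, n = coverage_stats(speed)
    print(f"{speed:>2} deg/s: {t:.1f} s per full coverage, "
          f"{step:.2f} deg between profiles, ~{n} points")
```

A slower rotation tightens the spacing between profiles (denser models) at the cost of a longer coverage time, which is exactly the trade-off the text describes.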
3D surround sensor is designed such that it is As the 3D surround sensor stands as high as 4.7 m 2-tiered and its top sensor part can move up and above the ground with a 2D laser sensor on its top, it can down, driven by a DC motor and a ball screw. As the be greatly affected by vibrations from the excavator. In motor used for the movable unit only produces an this study, we installed a base block between the lower up-and-down movement, it does not control the base and the instrument case to securely fix the lower position and consists of a motor, reducer, and base, and by installing a tension roller onto the upper brake. As for the DC motor, we chose a 60w, large base, we ensure that tension can be imposed even 450 D.-J. YEOM ET AL. (a) Movable Unit Structure (c) Sensor Encloser (b) Movable Unit Power Transmission Structure Figure 12. 3D detailed design of 3D surround sensor movable unit. when the instrument is fully stretched. In addition, 4. 3D surround modeling system assembly because excavation operations are susceptible to dust, and field test humidity, and rain, we put in place an enclosure, as 4.1. 3D surround sensor hardware assembly and shown in Figure 12(c), to completely block the instru- installation ment from rain or dust in a fully stretched state. As illustrated in Figure 13, we reviewed the scan- In this study, we developed the hardware for a 3D ning range and the angles of blind spots when the 3D surround sensor in the following order: materials pro- surround sensor is mounted on the intelligent excava- cessing, trial assembly, surface treatment, and final tion robot. When viewed from the rear of the excava- assembly. Each component of the 3D surround sensor tor, the sensing instrument is 4.7 m tall, and based on that was designed using the SolidWorks software in the Z-axis the minimum blind spot is measured at 17.2° the detailed design phase was built of duralumin 7075 while the maximum at 34.6°, and with the blind spot of and precisely cut, and as shown in Figure 14(a–c), a trial the rotary unit on the Z-axis designed to be 16.8°, there assembly was performed to verify the design of the will be no blind spot caused by the instrument. The instrument and to ensure that all parts fit together maximum non-modeling region because of the shape properly. Moreover at this stage, control cables for of the excavator is 1.13 m on the left and 1.01 m on the the 2D laser sensor and the sensing instrument were right. When viewed from the side, the minimum blind put in place to verify that the instrument moves prop- spot is 30.3° based on the Z-axis while the maximum is erly and that sensor data is well received. Following the 45.0°, and the maximum non-modeling region is pre-assembly test, parts that must be exposed exter- 0.92 m in the rear and 1.64 m in the front. nally were painted in the same color as the excavator (a) Scanning range in fully stretched mode (b) scanning range in fully stretched mode (back view) (side view) Figure 13. Scanning range of 3D surround sensor. JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 451 (a) Trial assembly of rotary unit (b) Trial assembly of movable unit (e) Movable/rotary (d) Sensor data test (c) Trial assembly of fixed unit unit test Figure 14. 3D surround sensor trial assembly test. and the surfaces of internal parts were anodized. 
is based on OpenGL, an open library, and allows the Figure 15 illustrates the sensing instrument installed user to simultaneously view the data processing results on the intelligent excavation robot after all surface- and the sensor’s controls of the instrument. processed components were finally put together on The 3D surround modeling software runs on the site. The 3D surround sensing system hardware was control system (Figure 16(b)) installed inside the cab installed on the rear part of the cab, and for the follow- of the intelligent excavation robot, and it has been ing six months, several tests were performed to verify designed such that the same screen can be displayed terrain data acquisition, object recognition, and truck on the monitor at the remote station via wireless TCP/ localization on an actual earthmoving site. IP data communication. All modules of the 3D sur- The control system of the 3D surround sensor is round modeling software have been designed to oper- intended to control the motion of the sensor instru- ate in a fully automatic manner except for some initial ment, process 2D laser sensor data in real-time, and default values. transmit sensing results to the remote station. As illustrated in Figure 18, the field test on the 3D Therefore, the control system is essentially responsible surround sensor was focused on verifying that the for controls of the sensor instrument, sensor data pro- sensor can accurately create three-dimensional models cessing, results transmission, etc. In this research, we of the loading truck, including the truck bed area in the built a control board for controlling the instrument as same environment as an actual earthwork site. The test shown in Figure 16(a), and the enclosure for the con- was performed assuming various scenarios that can trol system, which is made of duralumin, was anodized. occur on site without deliberately controlling any sur- The control system was placed on a shelf affixed with rounding objects such as piles of earth, vehicles, peo- brackets in the back of the cab, along with an encoder ple, etc. at the test site. A dump truck that is used for an pulse counter to its right side for transmitting the actual loading operation was used for the test. motor’s rotation angles to the data processing board. An analysis of the 3D surround modeling data mea- sured at the actual earthwork site demonstrates that 3D surround scanning quality is outstanding without 4.2. Field test of the 3D surround sensor any particular noise detected (Figure 19). In addition, The data processing software for the 3D surround the shapes of people or trucks were captured clearly, sensor proposed in this research was developed which indicates no significant impact from the vibra- tions that might have been caused by the excavator. using Microsoft Visual C++ 2010 as a Window 32- based application software (Figure 17). The software We also analyzed the accuracy of the 3D modeling 452 D.-J. YEOM ET AL. (a) Assembly completed (b) Installed on excavator (c) Installed on excavator (standing before installation (folded position) position) (d) Sensor's rotary unit (e) Field test (side) (f) Field test (back) moving up Figure 15. Final assembly of 3D surround sensor and field test. (a) Control board (b) Control system (top view) (c) Control system (front view) (d) Control system (side view) Figure 16. Local sensor control system installed. JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 453 Figure 17. User interface for a 3D surround modeling module. 
(a) Excavator on a slope (b) Excavator on level ground Figure 18. Test environment for loading area recognition. results using a total station, and the results exhibit an this paper is placed on verifying the accuracy of the average error of 3 mm within a 20-meter range (Yoo, sensing system proposed. Kwon, and Kim 2013). If the rotation speed of the To test the accuracy of the sensing system devel- sensor is increased, the scanning time for the front oped, we first set up 9 circular targets in the terrain workspace can be reduced, while simultaneously exhi- surrounding the excavator and obtained 3D data by biting lower point density. A slower rotation speed, on rotating the sensor mounted on the back of the cab at the other hand, can translate into higher-density geo- constant speed, and then using the software developed metric data, but the scanning time per rotation in this study constructed a 3D terrain model. Afterwards, becomes longer. Overall, the test results demonstrated we measured the distances between the sensor’sorigin that the ideal rotation speed should be set at three and the 9 targets with a total station, a laser surveying seconds per rotation. instrument used in civil engineering and earthworks, The primary purpose of this paper is to design hard- and calculated errors by comparing the physical dis- ware for a sensor system and enable the creation of tances with the computed distances in the 3D terrain a high-quality 3D terrain map in a prompt and accurate model (Figure 20). The same test was conducted twice, manner with the minimum blind space. Identifying the so a total of 18 distances were compared (Table 1). The location of a truck or site staff surrounding the exca- test found that the average error was 0.003 m vator based on 3D terrain modeling will be addressed (SD = 0.0082 m) and the maximum was 0.023m, indicat- in subsequent work. Therefore, the primary focus of ing its accuracy is comparatively superior to that of 454 D.-J. YEOM ET AL. (a) Excavator and site personnel modeling (b) Excavator and site personnel modeling (d) Truck and soil mounds modeling (c) Truck and soil mounds modeling Figure 19. Results of a 3D surround sensor’s local area modeling. Figure 20. 2D Laser scanner based 3D terrain modeling. a stereo vision camera. It was also found that, as was the excavation robot. We conducted tests to create case for a stereo vision camera, the longer the distance such three-dimensional models of the terrains and is from the 2D laser scanner to the target the greater the obstacles on an actual earthwork site. The conclu- error becomes with the magnitude of the error being sions that we arrived at are as follows: proportional to the distance. (1) From our analysis of the optimal location of the 3D surround sensor, we determined that the 5. Conclusions sensor should be located on the right of the rotational axis of the excavator to minimize In this paper, we developed hardware for the 3D blind spots on the right and left sides of the surround sensing system to construct 3D models of sensor, with the most appropriate place being the 360° workspace surrounding the intelligent on top of the air compressor inlet on the back of JOURNAL OF ASIAN ARCHITECTURE AND BUILDING ENGINEERING 455 Table 1. A: calculated distance at 3D model, B: actual distance measured by total station. Img. Target no. 
1 2 3 4 5 6 7 8 9 1 A 4.462 5.189 6.842 8.759 10.485 12.568 14.896 18.963 22.369 B 4.459 5.186 6.848 8.750 10.489 12.565 14.906 18.971 22.346 Difference 0.003 0.003 −0.006 0.009 −0.004 0.003 −0.010 −0.008 0.023 2 A 3.956 4.661 6.624 7.845 9.966 11.987 13.855 17.915 20.112 B 3.950 4.656 6.625 7.840 9.956 11.978 13.853 17.922 20.100 Difference 0.006 0.005 −0.001 0.005 0.010 0.009 0.002 −0.007 0.012 Description Statistic Std. error Mean 0.0030 0.0011 95% confidence interval for mean Lower −0.0011 Upper 0.0071 Variance 0.0000 Standard deviation 0.0082 Minimum −0.0100 Maximum 0.0230 Significance probability 0.041 the cab. In addition, given the blind spots a system that detects and recognizes objects approach- caused by the height of the loading truck and ing the intelligent excavation robot from any direction the shape of the excavator, the sensor must be within 360 degrees, and when compared with the pre- located at least up to 4.7 m above the ground. vious technologies by Stentz et al. (1999), Yamamoto (2) Upon analyzing the direction in which the 2D (2008), Yamamoto et al. (2006), Yamamoto et al. (2009) laser sensor must be installed and rotated, we of unmanned excavators it is considered to be the most determined that to allow for easy control of con- advanced technology in terms of its expanded sensing stant speed and data processing, the 3D surround range, reduced blind spots, and finer modeling resolution sensor should infinitely rotate at 360° in one due to improved precision. The technology can be direction, with a rotation speed of 30–40 degrees further developed to be used for planning travel paths per second. In addition, the Y-axis of the sensor for the intelligent excavation robot; avoiding people and should be perpendicular to the ground and coin- preventing potential collisions; determining whether to cide with the horizontal rotational axis, which is stop work or not; and recognizing the truck bed area of the most effective way of simultaneously scan- a truck. If the sensor’s performance is further enhanced ning the right and left sides of the sensor. with continuous research and development efforts based (3) We designed a hardware layout for the 3D sur- on the results of this study, it should contribute to devel- round sensor, which comprises a rotary unit, oping a new sensor not only for the intelligent excavation movable unit, and fixed unit. Our analysis robot, but also for other types of unmanned earthmoving demonstrated that the rotary unit should be at machines. We expect that this technology can be used for least 0.4 m in height to gain a scanning angle of many other types of automated construction and earth- 20.1 degrees around the rotational axis during moving equipment and will have wide applications to its one-direction, infinite rotation movement. automated construction equipment going forward. The rotary unit used a DC motor as its power source, along with the belt and pulleys for Disclosure statement power transmission to drive the rotating shaft. As for the movable unit, a ball screw was used to No potential conflict of interest was reported by the authors. enable the up-and-down movement while the belt and pulleys were utilized to transmit power Funding for the ball screw. (4) We developed a Windows 32-based 3D sur- This work was supported by the National Research Foundation round modeling software to process data from of Korea (NRF) grant funded by the [Korea government (MSIT) the 3D surround sensor. Field tests were con- (No. 2016R1A2B2013985)]. 
The 3D surround sensor and 3D terrain modeling technology presented in this paper can be used in a system that detects and recognizes objects approaching the intelligent excavation robot from any direction within 360 degrees. Compared with the previous technologies for unmanned excavators by Stentz et al. (1999), Yamamoto (2008), Yamamoto et al. (2006), and Yamamoto et al. (2009), it is considered the most advanced technology in terms of its expanded sensing range, reduced blind spots, and finer modeling resolution due to improved precision. The technology can be further developed for planning travel paths for the intelligent excavation robot; avoiding people and preventing potential collisions; determining whether to stop work; and recognizing the truck bed area of a truck. If the sensor's performance is further enhanced through continuous research and development based on the results of this study, it should contribute to the development of a new sensor not only for the intelligent excavation robot but also for other types of unmanned earthmoving machines, and we expect it to have wide applications to automated construction and earthmoving equipment going forward.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2016R1A2B2013985).

Notes on contributors

Dong-Jun Yeom earned his Ph.D. in Construction Management from the Department of Architectural Engineering at Inha University in 2018. He has given a series of lectures on computer-aided design, computer programming for engineering applications, construction IT, etc. at Inha University since 2015. He currently serves as a postdoctoral research engineer in the Industrial Science and Technology Research Institute at Inha University.

Hyun-Seok Yoo earned his Ph.D. in Construction Management from the Department of Architectural Engineering at Inha University in 2012. He currently serves as an associate professor in the Department of Technology Education at Korea National University of Education. His research interests include construction information technologies and automation in construction. He has conducted various research projects on automation in construction, including an automated pavement crack sealing machine, an intelligent excavating system, and various applications of information technologies and automation in construction.

Young-Suk Kim earned his Ph.D. in Construction Engineering and Project Management from the University of Texas at Austin in 1997. He has given a series of lectures on the execution of building work, construction management, time management, cost management, contract management, construction information technology, and automation in construction at Inha University since 1999. He currently serves as a professor in the Department of Architectural Engineering at Inha University and as chairman of the University Development Commission at the Korea Institute of Construction Engineering and Management. His research interests are in the areas of sustainable construction, cost and time management, engineering education, and automation in construction. He has conducted various research projects on automation in construction, including an automated pavement crack sealing machine, a tele-operated concrete pipe laying manipulator for trenches, an automated controller for checking verticality, and an intelligent excavating system.
References

Cannon, H. 1999. "Extended Earthmoving with an Autonomous Excavator." Master's thesis, Carnegie Mellon University, Pittsburgh, U.S.A.

Corke, P., J. Roberts, and G. Winstanley. 1999. "3D Perception for Mining Robotics." Paper presented at the Field and Service Robotics conference, Pittsburgh, U.S.A.

Sarata, S., N. Koyachi, H. Kuniyoshi, T. Tsubouchi, and K. Sugawara. 2007. "Detection of Dump Truck for Loading Operation by Loader." Paper presented at the annual meeting of the ISARC, Kochi, India.

Sarata, S., N. Koyachi, and K. Sugawara. 2008. "Measuring and Update of Shape of Pile for Loading Operation by Wheel Loader." Paper presented at the annual meeting of the ISARC, Vilnius, Lithuania.

Seo, J., C. Park, and D. Jang. 2007. "Development of Intelligent Excavating System – Introduction of Research Center." Paper presented at the annual meeting of the KICEM, Busan, Korea.

Singh, S., and R. Simmons. 1992. "Task Planning for Robotic Excavation." Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems, Raleigh, U.S.A., August.

Stentz, A., J. Bares, S. Singh, and P. Rowe. 1999. "A Robotic Excavator for Autonomous Truck Loading." Autonomous Robots 7 (2): 175–186. doi:10.1023/A:1008914201877.

Whitehorn, M., C. Debrunner, and J. Steele. 2003. "Stereo Vision in LHD Automation." IEEE Transactions on Industry Applications 39 (1): 21–29. doi:10.1109/TIA.2002.807245.

Yamamoto, H. 2006. "Introduction to the General Technology Development Project: Research and Development of Advanced Execution Technology by Remote Control Robot and Information Technology." Paper presented at the annual meeting of the ISARC, Tokyo, Japan.

Yamamoto, H. 2008. "Research on Automatic Control Technology of Excavation Work by Hydraulic Shovel." Public Works Research Institute. https://www.pwri.go.jp/jpn/results/report/report-project/2007/pdf/2007-sen-3.pdf

Yamamoto, H., Y. Ishimatsu, S. Ageishi, N. Ikeda, K. Endo, M. Masuda, M. Uchida, and H. Yamaguchi. 2006. "Example of Experimental Use of 3D Measurement System for Construction Robot Based on Component Design Concept." Paper presented at the annual meeting of the ISARC, Tokyo, Japan.

Yamamoto, H., M. Moteki, H. Shao, T. Ootuki, H. Kanazawa, and Y. Tanaka. 2009. "Basic Technology toward Autonomous Hydraulic Excavator." Paper presented at the annual meeting of the ISARC, Austin, U.S.A.

Yoo, H., S. Kwon, and Y. Kim. 2013. "A Study on the Selection and Applicability Analysis of 3D Terrain Modeling Sensor for Intelligent Excavation Robot." Journal of the Korean Society of Civil Engineers 33 (6): 2551–2562. doi:10.12652/Ksce.2013.33.6.2551.
