IMAGE PROCESSING BASED AUTONOMOUS LANDING ZONE DETECTION FOR A MULTI-ROTOR DRONE IN EMERGENCY SITUATIONS

Turkish Journal of Engineering – 2021; 5(4); 193-200
https://dergipark.org.tr/en/pub/tuje
e-ISSN 2587-1366

Veysel Turan *1, Ercan Avşar 1, Davood Asadihendoustani 2, Emine Avşar Aydın 2

1 Çukurova University, Faculty of Engineering, Department of Electrical and Electronics Engineering, Adana, Turkey
2 Adana Alparslan Türkeş Science and Technology University, Faculty of Aeronautics and Astronautics, Department of Aerospace Engineering, Adana, Turkey

Keywords: Autonomous landing, Image processing, Object detection, UAV

ABSTRACT

Improving flight safety and reliability is an important research issue in aerial applications. Multi-rotor drones are vulnerable to motor failures that can lead to unsafe operation or collisions. Therefore, researchers are working on autonomous landing systems that safely recover and land a faulty drone on a desired landing area. In such a case, a suitable landing zone must be detected rapidly for an emergency landing. The majority of works on autonomous landing use a marker and GPS signals to detect the landing site. In this work, we propose a landing system framework that relies only on processing images taken from the onboard camera of the vehicle. First, the objects in the image are determined by filtering and an edge detection algorithm; then the most suitable landing zone is searched for. The area that is free from obstacles and closest to the center of the image is defined as the most immediate and suitable landing zone. The method has been tested on 25 images taken from different heights, and its performance has been evaluated in terms of runtime on a single-board computer and detection precision and recall values.
The average measured runtime is 2.4923 seconds, and 100% precision and recall values are achieved for the images taken from 1 m and 2 m. The smallest precision and recall values are 79.1% and 81.2%, respectively.

* Corresponding author: Veysel Turan (veyselturnn@gmail.com), ORCID 0000-0002-0197-5227; Ercan Avşar (ercanavsar@cu.edu.tr), ORCID 0000-0002-1356-2753; Davood Asadihendoustani (dasadihendoustani@atu.edu.tr), ORCID 0000-0002-2066-6016; Emine Avşar Aydın (eaydin@atu.edu.tr), ORCID 0000-0002-5068-2957. Research Article / DOI: 10.31127/tuje.744954. Received: 29/05/2020; Accepted: 20/07/2020. Cite this article: Turan V, Avşar E, Asadihendoustani D & Aydın E A (2021). Image processing based autonomous landing zone detection for a multi-rotor drone in emergency situations. Turkish Journal of Engineering, 5(4), 193-200.

1. INTRODUCTION

The use of Unmanned Aerial Vehicles (UAV) has increased at an unprecedented rate in recent years. Although these devices have long been used particularly in military applications, their use in non-military applications such as fire extinguishing (Aydin et al. 2019), meteorological research (Martin et al. 2010; PropotoUAV 2019), exploration (Jakob et al. 2016; Heincke et al. 2019) and agricultural activities (Veroustraete 2015) has become very widespread. Quadrotors are the most common UAV type owing to their uncomplicated mechanical structure. Quadrotors can fulfill 3D motion tracking requirements with various technical systems such as the Global Positioning System (GPS), ultrasonic detection, angular velocity sensors and linear accelerometers (Zhao and Wang 2012). Despite the use of these integrated systems and sensors, the control of quadrotors is still one of the most difficult issues, and research and development activities continue in many research centers (Hoffmann et al. 2007; Zhao and Wang 2012). The vast majority of research activities address control issues during flight. The ultimate goal of UAV systems is to reach fully autonomous operation (Kim and Sukkarieh 2002).

One of the most important issues for UAVs is autonomous landing. Demands for automatic landing of drones on a defined position, safely, accurately, and without human intervention, increase every day. It is always possible for a UAV to face emergencies during flight, such as engine failure, interruption of the data link from the ground and other unexpected events (strong wind, rain, etc.). Thus, forced landing measures should urgently be adopted in such situations. Forced landing methods such as parachutes and other flight termination systems can cause damage to the body of the multi-rotor (Fitzgerald et al. 2005). In addition, GPS signals are highly susceptible to interruption, especially at lower altitudes (Lee et al. 2012; Ho 2017) and in indoor environments. Furthermore, GPS signals are controlled by other nations, which causes vulnerability issues. For instance, on 29 October 2018, GPS jamming caused 46 drones to plummet during a display over Victoria Harbour, resulting in at least HK$1 million (US$127,500) of damage, according to a senior official from the Hong Kong Tourism Board (Liteye 2018).
As a result, some alternative methods were adopted to minimize such damage by enabling UAVs to autonomously find a safe area suitable for landing. Some studies on forced landing of UAVs in emergency situations have been presented for indoor environments without using GPS (Nemati et al. 2015).

In past years, effective machine learning algorithms such as Support Vector Machines (SVM) and Artificial Neural Networks (ANN) were combined with digital image processing techniques in the design of emergency landing systems for selection of an appropriate landing site (Guo et al. 2014; Lunghi et al. 2015). These machine learning algorithms have different performance constraints: SVM is complex and requires considerable computational power, whereas ANN requires a large training data set, which corresponds to greater training time. Due to these constraints, both algorithms cannot meet the rapidly changing requirements for landing area selection in emergency flight conditions. For instance, in a previous study on detection of forced landing sites, 901 images were used to train and test an ANN model (Fitzgerald and Walker 2005). Because the images fed to the ANN model were not representative of the training set, the final classification accuracy was very low compared to the training accuracy (Lu et al. 2013), which prevents such a model from being completely reliable.

Image processing is an appropriate and reliable way to find safe landing sites in case of an emergency. This is generally accomplished by detecting objects in the images taken from UAVs. Most of the related studies concern detection of some "marker" in an image (Barták et al. 2014; Cabrera-Ponce and Martinez-Carranza 2017; Sani and Karimian 2017) which represents the desired landing spot. Utilization of a marker may not be feasible in emergency landing conditions, since a marker can be far from the UAV and finding it takes time. For instance, the recent demonstration of package delivery using a UAV by Amazon shows the feasibility of a UAV delivering a package to its consumer (AMAZON 2017). A marker placed by the consumer on the ground is used to allow the UAV to land safely. In circumstances where the marker is unavailable or inappropriately placed, these vehicles need to be able to sense and avoid the surrounding objects in the environment and perform a smooth descent automatically. Without this capability, the safety of the surrounding animals, humans, and property cannot be ensured. Additionally, in some emergency situations the UAV needs to land as quickly as possible. Therefore, it is essential to select a safe landing spot automatically and urgently, without depending on external systems.

Motivated by the above reasons, in this paper an image processing method for object detection is proposed. The method is developed to work on images taken from a drone's onboard camera. The aim is to rapidly detect a suitable landing zone in an unstructured and unknown environment. The method initially proposes a candidate landing zone at the center of the image. If the initial proposal is not suitable, new candidates in the neighboring area are evaluated until an available spot is detected (Fig. 1). The suitability of a spot is determined by the existence of an object inside it. The object detection is accomplished by means of several image processing methods including edge detection, color processing, morphologic operations and thresholding. The major advantages of the method are (i) no requirement for a marker and (ii) no need for a huge amount of data for training a model.

2. METHOD

2.1. The Landing System Framework

The use of multi-rotor drones has undeniably increased in the past decade, in both military and civilian applications, raising a number of vital unsolved issues including safety and reliability. Engine malfunctions or failures are among the common faults in multi-rotor drones, and they endanger both the drone and the safety of people on the ground. In order to increase flight safety and reliability of drones, researchers are working on automation enhancement to safely recover an impaired drone (Lopez-Franco et al. 2017; Mazeh et al. 2018; Nguyen et al. 2019).

There are several challenges related to safe recovery or landing of impaired aerial vehicles. The majority of these challenges concern obstacle detection, suitable landing site detection/selection, fault detection and identification, characterizing the aircraft's new kinematic constraints, trajectory planning, and control of the faulty aircraft on the landing trajectory. To cover these challenges, an emergency landing system has been proposed according to Fig. 1. In fault or failure scenarios where continuation of flight is not possible or endangers flight safety, the emergency flight system is triggered to recover the drone's stability and safely land it on a suitable landing site.

The emergency landing system is translated into an architecture consisting of various subsystems that are capable of landing a faulty drone on a desired landing site along a designed trajectory without colliding with any human or animal. The architecture autonomously detects objects as well as possible landing sites, determines the most suitable landing site, develops the landing trajectory based on the new kinematic and dynamic constraints of the impaired drone, and controls it to the landing site, using onboard camera data and other common sensor information such as an IMU (Inertial Measurement Unit).

2.2. Landing Zone Detection

Emergency landing mode of the UAV is activated whenever a fault in any of the motors is detected. Then the onboard camera is immediately triggered to take a top-view photo of the ground in the perpendicular direction. This image is the field of view (FOV) of the UAV and constitutes the search space for finding a suitable landing zone.
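The sequence just described (fault detected, FOV image captured, objects detected, landing zone selected, coordinates sent to the flight controller) can be summarized as a schematic sketch. All class and function names here are hypothetical placeholders, not the authors' implementation; the two detection steps are detailed in the following subsections:

```python
# Schematic sketch of the emergency landing sequence. The classes and
# the hard-coded return values are illustrative placeholders only.

class MockCamera:
    def capture_fov_image(self):
        return "fov_image"            # stands in for a top-view ground photo

class MockFlightController:
    def __init__(self):
        self.target = None
    def land_at(self, coordinates):
        self.target = coordinates     # begin autonomous descent at this spot

def detect_objects(fov_image):
    return "objects_image"            # Sec. 2.2.1: edge detection + morphology

def find_landing_zone(objects_image):
    return (300, 200)                 # Sec. 2.2.2: e.g. the center of a 600x400 image

def on_motor_fault(camera, controller):
    """Triggered when a motor fault is detected: capture the FOV image,
    detect objects, pick the closest vacant zone, and hand its
    coordinates to the flight controller."""
    fov = camera.capture_fov_image()
    objects = detect_objects(fov)
    zone = find_landing_zone(objects)
    controller.land_at(zone)
    return zone
```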
In such an emergency situation, locating the appropriate zone in a short time is very important. Therefore, the algorithm starts by checking the suitability of the center of the image, which denotes the closest area to the UAV. If the image center is occupied by an object, it is labelled "negative" and a neighborhood of the image center is checked for suitability. The distance of the neighborhood to the image center is gradually increased until a vacant spot is found, and that spot is labelled "positive". Next, the coordinates of this "positive" spot are sent to the flight controller to initialize the autonomous landing process. The steps for emergency landing are presented in Fig. 2.

Figure 1. Emergency landing system architecture and subsystems

Figure 2. Suitable landing site detection flowchart (emergency landing mode activated; capture FOV image; detect objects in FOV image; initiate r = 0 distance from the center; propose a landing zone at distance r; if it overlaps with any object, label it negative and, once all possible landing zones at distance r are proposed, increase r; otherwise label the landing zone positive and land)

2.2.1. Object detection in FOV images

The object detection starts with converting the FOV image into a grayscale image. This prepares the image for the Canny operator, an edge detection method used to find objects in the image. The Canny edge detector uses a multi-stage algorithm to detect a wide range of edges in images (Canny 1986). Its advantage over other well-known edge detection algorithms is that it gives better results even in noisy conditions (Kumar et al. 2015).

The edge detection operator has four steps:
- Smooth the image with a Gaussian filter.
- Calculate the gradient magnitude and gradient direction.
- Apply non-maximum suppression to reduce the desired edges to a single-pixel width.
- Determine two threshold values, then select possible edge points and trace the edges.

The first step of the Canny edge detector is to remove noise from the frames by applying a Gaussian filter. In the Canny algorithm, the Gaussian function is applied to smooth the image prior to edge detection. The filtering or smoothing operation serves two purposes. The first is reducing the effect of noise prior to the detection of pixel intensity changes. The second is setting the resolution or scale at which intensity changes are to be detected (Chen et al. 2014). Both purposes improve the efficiency of the edge detection method; in other words, Gaussian filtering helps reduce the detection of false edges.

The next step is to calculate the magnitude and direction of the gradient in the smoothed image. This is accomplished by filtering the smoothed image with a Sobel kernel in the vertical and horizontal directions.

After the gradient magnitude and direction are obtained, a complete scan of the image is done to remove any unwanted pixels that may not constitute an edge. In this step, only local maxima are considered as edges, through applying non-maximum suppression. Non-maximum suppression converts the smoothed edges in the gradient magnitude map into sharp edges. This step keeps every local maximum in the gradient image and removes any other detected edges, which are possibly false detections.

The final step of the Canny edge detector is hysteresis thresholding. In this step, two threshold values are selected. Edges with an intensity gradient greater than the maximum threshold are labeled "sure-edge". Similarly, edges with an intensity gradient smaller than the minimum threshold are labeled "non-edge". Edges between these threshold values are labeled "sure-edge" if they are connected to another "sure-edge", otherwise "non-edge". This step removes small edges in the images that are possibly false detections. These threshold values are the only parameters of the method and are selected as 100 and 200 in this work.

Typically, the edge lines at the output of the Canny method are thin. Therefore, dilation is applied to the binary edge image as a morphological operation to thicken the object outlines, because thin lines may cause errors when finding a landing zone. The binary image obtained after this operation is called the objects image. The outputs of these steps are given in Fig. 3.

Figure 3. Steps of object detection. (a) FOV image, (b) Edge detection of FOV image, (c) Morphological operations applied to the binary image (objects image)
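The pipeline above (Gaussian smoothing, Sobel gradients, thresholding, dilation) can be sketched in NumPy as follows. This is a simplified illustration, not the authors' implementation: for brevity it keeps only the "sure-edge" threshold of 200 and omits the non-maximum suppression and hysteresis tracking stages of the full Canny detector.

```python
import numpy as np

def gaussian_kernel1d(sigma=1.0, radius=2):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def convolve2d_same(img, kernel):
    """Naive 'same'-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def make_objects_image(gray, t_high=200):
    """Simplified edge-based objects image: Gaussian smoothing, Sobel
    gradient magnitude, 'sure-edge' thresholding, then 3x3 dilation.
    (Non-maximum suppression and hysteresis are omitted for brevity.)"""
    g = gaussian_kernel1d()
    smoothed = convolve2d_same(convolve2d_same(gray.astype(float), g[None, :]), g[:, None])
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve2d_same(smoothed, sobel_x)       # horizontal gradient
    gy = convolve2d_same(smoothed, sobel_x.T)     # vertical gradient
    mag = np.hypot(gx, gy)                        # gradient magnitude
    edges = (mag >= t_high).astype(np.uint8)      # keep only "sure" edges
    # Dilation: a pixel is set if any pixel in its 3x3 neighborhood is set.
    padded = np.pad(edges, 1)
    windows = [padded[i:i + edges.shape[0], j:j + edges.shape[1]]
               for i in range(3) for j in range(3)]
    return np.maximum.reduce(windows)             # the "objects image"
```

In practice an off-the-shelf implementation such as OpenCV's `cv2.Canny(gray, 100, 200)` followed by `cv2.dilate` would cover all four steps of the method as described.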
2.2.2. Searching for a landing zone

Once the objects in the FOV image are detected, the next step is to find the closest available zone for landing. The first parameter to consider is the minimum dimensions of a suitable landing zone. Obviously, this parameter depends on the altitude at which the FOV image is taken. As illustrated in Fig. 4, in order to find a relation between the dimensions and the altitude, a calibration procedure was carried out on images taken at five different altitudes. When the altitude is 1 meter, the ideal dimensions are 60x60 pixels; when the altitude is 5 meters, the ideal dimensions reduce to 20x20 pixels. The ideal dimensions for a suitable landing zone change linearly with respect to altitude.

In a typical emergency landing scenario, a maximum altitude of 5 meters is appropriate for initialization of image processing tasks (Lee et al. 2014). Therefore, we used the same altitude value in our analysis as well. If the emergency landing is triggered at a higher altitude, the UAV is first brought to a 5-meter altitude and then the image processing is activated.

After determining the optimal dimensions for landing, an available space with these dimensions is searched for in the FOV image. The availability of a spot in the image is defined as an area in which no object is present. In order to check availability, a binary mask is created: an area with the optimal dimensions is set to "1", and all the remaining area is left as "0". The area with "1" values in the binary mask is the proposed region whose availability is checked. By observing the output of the logical AND operation between the binary mask and the objects image, it is possible to determine whether a spot is available. If the logical AND operation returns any "1" value, there is an overlap between an object and the proposed region, so the proposed region is labeled "negative". If only "0" values are returned, no overlap is present and the proposed region is labeled "positive".

When a proposed region is labeled "negative", another region is proposed immediately, until a "positive" label is achieved. Since this work concerns only emergency situations, the location of the first proposed region is the middle of the FOV image, which is the spot closest to the UAV. If this spot is not available, a circular vicinity of the middle point is searched, and the radius of the circle is gradually increased until an available spot is found. The steps of searching for a landing zone are given in Fig. 5.

Figure 4. Ideal position to initialize the search for a landing pad at different heights

Figure 5. Steps of searching for a landing zone. (a) Objects image, (b) A binary mask for proposing a region, (c) Output of AND operation between (a) and (b): "negative" labeling, (d) Another binary mask for proposing a region, (e) Output of AND operation between (a) and (d): "positive" labeling, (f) Detected landing zone shown on the FOV image
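The mask-based availability test and the expanding circular search can be sketched as follows in NumPy. The ring step size and the number of angles sampled per ring are our own illustrative choices; the paper specifies the strategy, not these parameters.

```python
import numpy as np

def spot_available(objects_image, cy, cx, half):
    """AND the objects image with a square binary mask centered at
    (cy, cx): any overlap means the proposed region is 'negative'."""
    h, w = objects_image.shape
    y0, y1, x0, x1 = cy - half, cy + half, cx - half, cx + half
    if y0 < 0 or x0 < 0 or y1 > h or x1 > w:
        return False                     # region falls outside the FOV image
    mask = np.zeros_like(objects_image)
    mask[y0:y1, x0:x1] = 1
    return not np.any(np.logical_and(mask, objects_image))  # all zeros -> "positive"

def search_landing_zone(objects_image, zone_size=20, step=5, n_angles=16):
    """Center-outward search: propose the image center first, then points
    on circles of gradually increasing radius (Fig. 2 / Fig. 5)."""
    h, w = objects_image.shape
    cy, cx = h // 2, w // 2
    half = zone_size // 2
    r = 0
    while r <= max(h, w):
        if r == 0:
            candidates = [(cy, cx)]
        else:
            angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
            candidates = [(int(cy + r * np.sin(a)), int(cx + r * np.cos(a)))
                          for a in angles]
        for y, x in candidates:
            if spot_available(objects_image, y, x, half):
                return (y, x)            # coordinates sent to the flight controller
        r += step                        # no vacant spot on this ring: widen search
    return None                          # no suitable landing zone in the FOV image
```

For example, with a 20x20-pixel object block covering the center of an otherwise empty objects image, the search rejects the center and the first few rings, then returns the first vacant spot on a wider ring.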
3. RESULTS AND DISCUSSION

The method has been evaluated in terms of speed (i.e. runtime) and detection performance on a set of images taken from different altitudes. Here, we define the speed as the runtime of the algorithm to find a suitable landing zone in the image. The detection performance is defined by the precision and recall values related to object detection in the images.

As explained earlier, the landing zone detection is part of an autonomous landing system framework for emergency situations. Hence, the entire detection task has been run on a single-board computer, which may easily be integrated into a UAV system. The single-board computer used in the experiments is a Raspberry Pi 3 Model B, with a Broadcom BCM2837B0 chipset, a Cortex-A53 (ARMv8) 64-bit processor working at 1.4 GHz, and 1 GB of memory. All the processed images have dimensions of 600 x 400 pixels.

During the experiments, it has been observed that the runtime of the algorithm for a particular image differs between consecutive runs. This may be due to operating system tasks on the single-board computer. Therefore, the algorithm has been run on all of the images 10 separate times (i.e., 25 images x 10 times = 250 observations) and statistical values are calculated on the observed runtimes. The range of the observations is [1.3940, 2.8478] seconds, with a mean and standard deviation of 2.4923 seconds and 0.3899 seconds, respectively.

For calculating the precision and recall values, the numbers of true positive (TP), false positive (FP) and false negative (FN) detections are determined. To determine these values, the intersection over union (IoU) of all the detections is utilized. Calculation of IoU involves computing the overlap between the ground truth object and the detections. Since the purpose of this work is to find a suitable landing location, the threshold for IoU is set at 95%. This means that if the IoU for a detection is smaller than this threshold, it is considered an FP; otherwise it is a TP. Any misdetection of an object is counted as an FN.

In the terminology of object detection, precision is the probability of the detected objects matching the actual objects, while recall measures the probability of actual objects being correctly detected. Once the related quantities are determined, precision and recall are calculated as follows:

precision = TP / (TP + FP)    (1)

recall = TP / (TP + FN)    (2)

There are five test images taken at each of five different altitudes, hence a total of 25 images are used for evaluating the object detection performance. Besides, the algorithm is expected to work efficiently at different times of the day. Therefore, the same experiments were repeated by changing the brightness of the test images. The added brightness amounts vary from -20% to +60%. The related results are given in Table 1.
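Under these definitions, the IoU-based matching and the computation of Eqs. (1) and (2) can be sketched as follows. This is an illustrative reading of the matching rule; the paper does not publish its exact evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def precision_recall(gt_boxes, det_boxes, iou_threshold=0.95):
    """Count TP/FP/FN with the paper's rule: a detection is a TP if its
    IoU with a ground-truth object reaches the threshold, otherwise an
    FP; an unmatched ground-truth object is an FN."""
    matched = set()
    tp = fp = 0
    for det in det_boxes:
        best_i, best_iou = None, 0.0
        for i, gt in enumerate(gt_boxes):
            score = iou(det, gt)
            if i not in matched and score > best_iou:
                best_i, best_iou = i, score
        if best_i is not None and best_iou >= iou_threshold:
            matched.add(best_i)
            tp += 1
        else:
            fp += 1
    fn = len(gt_boxes) - len(matched)
    precision = tp / float(tp + fp) if tp + fp else 0.0   # Eq. (1)
    recall = tp / float(tp + fn) if tp + fn else 0.0      # Eq. (2)
    return precision, recall
```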
Table 1. Precision and recall for different brightness levels and altitudes

Added brightness (%)   1 m (P / R)      2 m (P / R)      3 m (P / R)      4 m (P / R)      5 m (P / R)
-20                    0.916 / 0.916    0.850 / 0.950    0.875 / 0.916    0.812 / 0.916    0.833 / 0.875
-10                    0.916 / 1.000    0.900 / 1.000    0.875 / 0.916    0.875 / 0.937    0.875 / 0.916
0                      1.000 / 1.000    1.000 / 1.000    0.958 / 1.000    0.937 / 1.000    0.916 / 1.000
+10                    1.000 / 1.000    1.000 / 1.000    0.958 / 1.000    0.937 / 1.000    0.916 / 1.000
+20                    0.916 / 0.916    0.900 / 0.950    0.916 / 0.916    0.875 / 0.937    0.916 / 0.916
+40                    0.875 / 0.916    0.850 / 0.900    0.875 / 0.875    0.812 / 0.875    0.833 / 0.875
+60                    0.833 / 0.833    0.800 / 0.850    0.833 / 0.850    0.812 / 0.812    0.791 / 0.833

As can be seen from Table 1, the highest precision and recall values are obtained at low altitudes (1 m and 2 m) when no brightness or +10% brightness is added to the images. It is also notable that the results for brightness additions of 0 and +10% are identical for all altitudes, which means that the object detection is robust to the addition of a small amount of illumination. On the other hand, when the brightness or darkness of the images is increased, the rates of FP and FN detections increase as well, yielding decreases in the precision and recall values. Preprocessing the images using various filters or histogram equalization may be a useful step to improve the detection performance under different illumination levels.

When the table is analyzed according to altitude, slight decreases in performance are observed as the altitude increases. Thus, it may be good practice to re-check the object locations during the emergency landing so that the detected locations can be updated when necessary. Furthermore, the recall value is generally higher than the precision value for all test instances, meaning that the rate of FN detections is smaller than that of FP detections. Considering the application area of the framework, the burden of an FN detection is higher than that of an FP detection, because misdetection of an object (i.e. FN) may cause a crash, whereas detecting an object in an available area (i.e. FP) will only increase the runtime of the program. The overall accuracy of landing site detection is directly related to the speed and performance of the object detection step. As a result, the high speed, precision, and recall values indicate the suitability of this method for autonomous landing.

4. CONCLUSION

An emergency landing system framework is proposed together with the details of the related image processing algorithm. The system is intended to work without any kind of marker showing the landing zone. Additionally, it does not utilize GPS signals, which may be unavailable under certain circumstances. It is based on direct detection of a suitable landing zone by processing images taken from the onboard camera of the UAV. Since it does not involve training and testing of a predictive model, the computational load is low and the corresponding response time is reasonable. On average, it takes around 2.5 seconds to make the detection on a single-board computer, and a 100% correct detection rate is achieved for the images taken from 1 m and 2 m. The smallest precision and recall values are 79.1% and 81.2%, respectively. These results show that the method is suitable for real-world scenarios. In addition, the higher detection performance at lower altitudes means that the algorithm should be fast enough to make a final decision at 2 meters. In the future, the object detection may be run at different altitudes during the emergency landing process and the landing trajectory updated accordingly. Also, the latencies in the data transfer pipeline may be considered for a more accurate response time. This method is intended to work over terrestrial zones; it may not detect a water area, which is not a suitable zone for landing. Therefore, the algorithm may be improved to work on images involving water areas.

ACKNOWLEDGEMENT

This research was supported by the Scientific Research Project Unit of Adana Alparslan Türkeş Science and Technology University with the project number of

REFERENCES

AMAZON (2017). Prime Air. https://www.amazon.com/Amazon-Prime-Air/b?ie=%20UTF8&node=8037720011

Aydin B, Selvi E, Tao J & Starek M (2019). Use of Fire-Extinguishing Balls for a Conceptual System of Drone-Assisted Wildfire Fighting. Drones, 3(1). https://doi.org/10.3390/drones3010017

Barták R, Hraško A & Obdržálek D (2014). On autonomous landing of AR.Drone: Hands-on experience. Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014, 400-405.

Cabrera-Ponce A A & Martinez-Carranza J (2017). A vision-based approach for autonomous landing. 2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS), Linköping, Sweden. DOI: 10.1109/RED-UAS.2017.8101655

Canny J (1986). A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6), 679-698. DOI: 10.1109/TPAMI.1986.4767851

Chen W, Yue H, Wang J & Wu X (2014). An improved edge detection algorithm for depth map inpainting. Optics and Lasers in Engineering, 55, 69-77. https://doi.org/10.1016/j.optlaseng.2013.10.025

Fitzgerald D & Walker R (2005). Classification of Candidate Landing Sites for UAV Forced Landings. AIAA Guidance, Navigation, and Control Conference and Exhibit, San Francisco, California.

Fitzgerald D, Walker R & Campbell D (2005). A Vision Based Emergency Forced Landing System for an Autonomous UAV. Australian International Aerospace Congress, Melbourne, Australia.

Guo X, Denman S, Fookes C, Mejias L & Sridharan S (2014). Automatic UAV Forced Landing Site Detection Using Machine Learning. 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Wollongong, NSW, Australia. DOI: 10.1109/DICTA.2014.7008097

Heincke B, Jackisch R, Saartenoja A, Salmirinne H, Rapp S, Zimmermann R, Pirttijärvi M et al. (2019). Developing multi-sensor drones for geological mapping and mineral exploration: setup and first results from the MULSEDRO project. Geological Survey of Denmark and Greenland Bulletin 43. https://doi.org/10.34194/GEUSB-201943-03-02

Ho H W (2017). Autonomous landing of Micro Air Vehicles through bio-inspired monocular vision. PhD Thesis. ISBN: 978-94-6186-818-3

Hoffmann G M, Huang H, Waslander S L & Tomlin C (2007). Quadrotor Helicopter Flight Dynamics and Control: Theory and Experiment. AIAA Guidance, Navigation and Control Conference and Exhibit, South Carolina.

Jakob S, Zimmermann R & Gloaguen R (2016). Processing of drone-borne hyperspectral data for geological applications. 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA. DOI: 10.1109/WHISPERS.2016.8071689

Kim J & Sukkarieh S (2002). Flight Test Results of GPS/INS Navigation Loop for an Autonomous Unmanned Aerial Vehicle (UAV). Proceedings of the 15th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GPS 2002), Portland, OR, 510-517.

Kumar K, Li J & Khan S (2015). Comparative Study on Various Edge Detection Techniques for 2-D Image. International Journal of Computer Applications, 119(22), 6-10.

Lee D, Lim H, Kim H J & Kim Y (2012). Adaptive Image-Based Visual Servoing for an Underactuated Quadrotor System. Journal of Guidance, Control, and Dynamics, 35(4), 1335-1353.

Lee M R, Su S, Yeah J E, Huang H & Chen J (2014). Autonomous landing system for aerial mobile robot cooperation. 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems (SCIS) and 15th International Symposium on Advanced Intelligent Systems (ISIS), Kitakyushu, Japan. DOI: 10.1109/SCIS-ISIS.2014.7044826

Liteye (2018). HK$1 million in damage caused by GPS jamming that caused 46 drones to plummet during Hong Kong show. https://liteye.com/hk1-million-in-damage-caused-by-gps-jamming-that-caused-46-drones-to-plummet-during-hong-kong-show/

Lopez-Franco C, Gomez-Avila J, Alanis A, Arana-Daniel N & Villaseñor C (2017). Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller. Sensors, 17(8), 1865. https://doi.org/10.3390/s17081865

Lu A, Ding W & Li H (2013). Multi-information Based Safe Area Step Selection Algorithm for UAV's Emergency Forced Landing. Journal of Software, 8(4), 995-1002.

Lunghi P, Ciarambino M & Lavagna M (2015). Vision-Based Hazard Detection With Artificial Neural Networks for Autonomous Planetary Landing. 13th ESA/ESTEC Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA).

Martin S, Bange J & Beyrich F (2010). Meteorological profiling of the lower troposphere using the research UAV "M2AV Carolo". Atmospheric Measurement Techniques Discussions, 4, 705-716. DOI: 10.5194/amt-4-705-2011

Mazeh H, Saied M, Shraim H & Clovis F (2018). Fault-Tolerant Control of an Hexarotor Unmanned Aerial Vehicle Applying Outdoor Tests and Experiments. IFAC-PapersOnLine, 51(22), 312-317. https://doi.org/10.1016/j.ifacol.2018.11.560

Nemati A, Sarim M, Hashemi M, Schnipke E et al. (2015). Autonomous Navigation of UAV through GPS-Denied Indoor Environment with Obstacles. AIAA SciTech, Kissimmee, Florida.

Nguyen N P, Mung N X & Hong S K (2019). Actuator Fault Detection and Fault-Tolerant Control for Hexacopter. Sensors, 19(21), 4721. https://doi.org/10.3390/s19214721

PropotoUAV (2019). Drones are being used for weather forecasting by meteorologists. https://www.prophotouav.com/meteorologists-storm-weather-drones/

Sani M F & Karimian G (2017). Automatic navigation and landing of an indoor AR.drone quadrotor using ArUco marker and inertial sensors. 2017 International Conference on Computer and Drone Applications (IConDA), Kuching, Malaysia. DOI: 10.1109/ICONDA.2017.8270408

Veroustraete F (2015). The Rise of the Drones in Agriculture. EC Agriculture, 2(2), 325-327.

Zhao H & Wang Z (2012). Motion Measurement Using Inertial Sensors, Ultrasonic Sensors, and Magnetometers With Extended Kalman Filter for Data Fusion. IEEE Sensors Journal, 12(5), 943-953. DOI: 10.1109/JSEN.2011.2166066

© Author(s) 2021. This work is distributed under https://creativecommons.org/licenses/by-sa/4.0/

IMAGE PROCESSING BASED AUTONOMOUS LANDING ZONE DETECTION FOR A MULTI-ROTOR DRONE IN EMERGENCY SITUATIONS

Turkish Journal of Engineering, Oct 1, 2021


Publisher
Unpaywall
ISSN
2587-1366
DOI
10.31127/tuje.744954

Abstract

Turkish Journal of Engineering – 2021; 5(4); 193-200
Turkish Journal of Engineering
https://dergipark.org.tr/en/pub/tuje
e-ISSN 2587-1366

Image processing based autonomous landing zone detection for a multi-rotor drone in emergency situations

Veysel Turan 1, Ercan Avşar *1, Davood Asadihendoustani 2, Emine Avşar Aydın 2
1 Çukurova University, Faculty of Engineering, Department of Electrical and Electronics Engineering, Adana, Turkey
2 Adana Alparslan Türkeş Science and Technology University, Faculty of Aeronautics and Astronautics, Department of Aerospace Engineering, Adana, Turkey

Keywords: Autonomous landing, Image processing, Object detection, UAV

ABSTRACT

Flight safety and reliability improvement is an important research issue in aerial applications. Multi-rotor drones are vulnerable to motor failures leading to potentially unsafe operations or collisions. Therefore, researchers are working on autonomous landing systems to safely recover and land a faulty drone on a desired landing area. In such a case, a suitable landing zone should be detected rapidly for emergency landing. The majority of works related with autonomous landing utilize a marker and GPS signals to detect the landing site. In this work, we propose a landing system framework that involves only the processing of images taken from the onboard camera of the vehicle. First, the objects in the image are determined by filtering and an edge detection algorithm; then the most suitable landing zone is searched. The area that is free from obstacles and closest to the center of the image is defined as the most immediate and suitable landing zone. The method has been tested on 25 images taken from different heights and its performance has been evaluated in terms of runtime on a single board computer and detection precision and recall values. The average measured runtime is 2.4923 seconds, and 100% precision and recall values are achieved for the images taken from 1 m and 2 m.
The smallest precision and recall values are 79.1% and 81.2%, respectively.

* Corresponding Author
(veyselturnn@gmail.com) ORCID ID 0000-0002-0197-5227
(ercanavsar@cu.edu.tr) ORCID ID 0000-0002-1356-2753
(dasadihendoustani@atu.edu.tr) ORCID ID 0000-0002-2066-6016
(eaydin@atu.edu.tr) ORCID ID 0000-0002-5068-2957
Cite this article: Turan V, Avşar E, Asadihendoustani D & Aydın E A (2021). Image processing based autonomous landing zone detection for a multi-rotor drone in emergency situations. Turkish Journal of Engineering, 5(4), 193-200.
Research Article / DOI: 10.31127/tuje.744954
Received: 29/05/2020; Accepted: 20/07/2020

1. INTRODUCTION

The use of Unmanned Aerial Vehicles (UAVs) has increased at an unpredictable rate in recent years. Although these devices have been used particularly in military applications for a long time, their use in non-military applications such as fire extinguishing (Aydin et al. 2019), meteorological research (Martin et al. 2010; PropotoUAV 2019), exploration (Jakob et al. 2016; Heincke et al. 2019) and agricultural activities (Veroustraete 2015) has become very widespread nowadays. Quadrotors are the most common devices among UAV types due to their uncomplicated mechanical structures. Quadrotors can fulfill 3D motion tracking requirements with various technical systems such as the Global Positioning System (GPS), ultrasonic detection, angular velocity sensors and linear accelerometers (Zhao and Wang 2012). Despite the use of these integrated systems and sensors, control of the quadrotor is still one of the most difficult issues, but research and development activities continue in many research centers (Hoffmann et al. 2007; Zhao and Wang 2012). The vast majority of research activities address control issues during flying. The ultimate goal of UAV systems is to reach fully autonomous operation (Kim and Sukkarieh 2002).

One of the most important issues for UAVs is autonomous landing in motion. Meanwhile, demands for automatic landing of drones on a defined position, safely, accurately, and without human intervention, increase every day. It is always possible for a UAV to unavoidably face emergencies during flight, such as engine failure, interruption of the data link from the ground and other unexpected accidents (strong wind, rain, etc.). Thus, forced landing measures should urgently be adopted in such situations. Methods of forced landing such as parachutes and other flight termination systems can cause damage to the body of the multi-rotor (Fitzgerald et al. 2005). In addition, GPS signals are highly susceptible to interruption, especially at lower altitudes (Lee et al. 2012; Ho 2017) and in indoor environments. Furthermore, GPS signals are controlled by other nations, which causes vulnerability issues. For instance, on 29 October 2018, GPS jamming that caused 46 drones to plummet during a display over Victoria Harbour caused at least HK$1 million (US$127,500) of damage, according to a senior official from the Hong Kong Tourism Board (Liteye 2018). As a result, alternative methods were adopted to minimize the damage to such UAVs by enabling them to autonomously find a safe area suitable for landing. Some studies about forced landing in emergency situations for UAVs were presented in indoor environments without using GPS (Nemati et al. 2015).

In past years, in the design of emergency landing systems, effective machine learning algorithms such as Support Vector Machines (SVM) and Artificial Neural Networks (ANN) were utilized in combination with digital image processing techniques for selection of an appropriate landing site (Guo et al. 2014; Lunghi et al. 2015). It is observed that these machine learning algorithms have different performance constraints; e.g., SVM is complex and requires huge computational power, whereas ANN requires a large training data set, which corresponds to greater training time. Due to these constraints, both algorithms cannot meet the rapidly changing requirements for landing area selection in emergency flight conditions. For instance, in a previous study about detection of forced landing sites, 901 images were used to train and test an ANN model (Fitzgerald and Walker 2005). Because the images fed to the ANN model were not representative of the training set, the final classification accuracy is very low when compared to the training accuracy (Lu et al. 2013). Thus, that situation prevents it from being completely reliable.

Image processing is an appropriate and reliable way to find safe landing sites in case of an emergency. This is generally accomplished by detection of objects in the images taken from UAVs. Most of the related studies are about detection of some "marker" in an image (Barták et al. 2014; Cabrera-Ponce and Martinez-Carranza 2017; Sani and Karimian 2017) which represents the desired landing spot. Utilization of a marker may not be a feasible way for emergency landing conditions: since markers can be far from the UAV, it will take time to find them. For instance, the recent demonstration of package delivery using a UAV by Amazon shows the feasibility of a UAV sending a package to its consumer (AMAZON 2017). A marker placed by the consumer on the ground is used to allow the UAV to land safely. In circumstances where the marker is unavailable or inappropriately placed, these vehicles need to be able to sense and avoid the surrounding objects in the environment and perform a smooth descent automatically. Without this capability, the safety of the surrounding animals, humans, and property cannot be ensured. Additionally, in some emergency situations, the UAV needs to land as quickly as possible. Therefore, it is essential to select a safe landing spot automatically and urgently, without depending on external systems.

Motivated by the above reasons, in this paper, an image processing method for object detection is proposed. The method is developed to work on images taken from a drone's onboard camera. The aim is to rapidly detect a suitable landing zone in an unstructured and unknown environment. The method initially proposes a candidate landing zone at the center of the image. If the initial proposal is not suitable, then new candidates in the neighboring area are evaluated until an available spot is detected (Fig. 1). The suitability of a spot is determined by the existence of an object inside it. The object detection is accomplished by means of several image processing methods including edge detection, color processing, morphologic operations and thresholding. The major advantages of the method are (i) no requirement for a marker and (ii) no need for a huge amount of data for training a model.

2. METHOD

2.1. The Landing System Framework

The use of multi-rotor drones has undeniably increased in the past decade, both in military and civilian applications, thus raising a number of vital unsolved issues including safety and reliability. Engine malfunctions or failures are among the common faults in multi-rotor drones, which apparently endanger the drone and people's safety on the ground. In order to increase flight safety and reliability of drones, researchers are working on automation enhancement to safely recover the impaired drone (Lopez-Franco et al. 2017; Mazeh et al. 2018; Nguyen et al. 2019).

There are several challenges related with safe recovery or landing of impaired aerial vehicles. The majority of these challenges concern obstacle detection, suitable landing site detection/selection, fault detection and identification, characterizing the aircraft's new kinematic constraints, trajectory planning, and control of the faulty aircraft on the landing trajectory. To cover these challenges, an emergency landing system has been proposed according to Fig. 1. In fault or failure scenarios where continuation of flight is not possible or endangers flight safety, the emergency flight system is triggered to recover the drone's stability and safely land the drone on a suitable landing site.

The emergency landing system is translated into an architecture consisting of various subsystems that are capable of landing a faulty drone on a desired landing site along a designed trajectory without colliding with any human or animal. The architecture autonomously detects objects as well as possible landing sites, determines the most suitable landing site, develops the landing trajectory based on the new kinematic and dynamic constraints of the impaired drone, and controls it to the landing site, using onboard camera data and other common sensor information like an IMU (Inertial Measurement Unit).

2.2. Landing Zone Detection

Emergency landing mode of the UAV is activated whenever a fault in any of the motors is detected. Then the onboard camera is immediately triggered to take a top-view photo of the ground in the perpendicular direction. This image is the field of view (FOV) of the UAV and it constitutes the search space for finding a suitable landing zone.
In such an emergency situation, locating the appropriate zone in a short time is very important. Therefore, the algorithm starts by checking the suitability of the center of the image, which denotes the closest area to the UAV. If the image center is occupied by an object, it is labelled as "negative" and a neighborhood of the image center is checked for suitability. The distance of the neighborhood to the image center is gradually increased until a vacant spot is found, and that spot is labelled "positive". Next, the coordinates of this "positive" spot are sent to the flight controller to initialize the autonomous landing process. The steps for emergency landing are presented in Fig. 2.

Figure 1. Emergency landing system architecture and subsystems

Figure 2. Suitable landing site detection flowchart

2.2.1. Object detection in FOV images

The object detection starts with converting the FOV image into a grayscale image. This is necessary for preparing the image for the Canny operator, an edge detection method used to find objects in the image. The Canny edge detector uses a multi-stage algorithm to detect a wide range of edges in images (Canny 1986). The advantage of using the Canny edge detection technique over other well-known edge detection algorithms is that it gives better results even in noisy conditions (Kumar et al. 2015).
The edge detection operator has four steps:
- Smooth the image with a Gaussian filter.
- Calculate gradient magnitude and gradient direction.
- Apply non-maximum suppression to ensure the desired edge has a single-pixel width.
- Determine two threshold values, then select possible edge points and trace edges.

The first step of the Canny edge detector algorithm is to remove noise from the frames by applying a Gaussian filter. In the Canny algorithm, the Gaussian function is applied to smooth the image prior to edge detection. The filtering or smoothing operation actually serves two purposes. The first is noise reduction prior to the detection of pixel intensity changes. The second is setting the resolution or scale at which intensity changes are to be detected (Chen et al. 2014). Both purposes are necessary to improve the efficiency of the edge detection method; in other words, Gaussian filtering helps reduce the detection of false edges.

The next step is to calculate the magnitude and direction of the gradient in the smoothed image. This is accomplished by filtering the smoothed image with a Sobel kernel in the vertical and horizontal directions.

After obtaining the gradient magnitude and direction, a complete scan of the image is done to remove any unwanted pixels which may not constitute an edge. In this step, only local maxima are considered as edges, through applying non-maximum suppression. Non-maximum suppression converts the smoothed edges in the gradient magnitude map into sharp edges. This step is necessary to keep every local maximum in the gradient image and remove any other detected edges, which are possibly false detections.

The final step of the Canny edge detector algorithm is hysteresis thresholding. In this step, two threshold values are selected. Edges with intensity gradient greater than the maximum threshold are labeled "sure-edge". Similarly, edges with intensity gradient smaller than the minimum threshold are labeled "non-edge". Edges between these threshold values are labeled "sure-edge" if they are connected to another "sure-edge", otherwise "non-edge". Obviously, this step removes small edges in the images that are possibly false detections. These threshold values are the only parameters of the method and are selected as 100 and 200 in this work.

Typically, the edge lines at the output of the Canny method are thin. Therefore, dilation is applied to the binary edge image as a morphological operation to thicken the lines of objects in the image, because thin lines may cause errors when finding a landing zone.
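The hysteresis-thresholding and dilation steps just described can be sketched in NumPy. This is a simplified illustration rather than the authors' implementation: `hysteresis_threshold` keeps a weak edge only if it touches a sure edge within one pixel (a full Canny implementation, e.g. OpenCV's `cv2.Canny`, traces connectivity transitively), and `dilate` grows the binary edge map with a 3x3 structuring element; the default thresholds 100 and 200 follow the text.

```python
import numpy as np

def hysteresis_threshold(grad_mag, low=100, high=200):
    """Classify pixels using the two Canny thresholds: above `high` is
    "sure-edge"; between the thresholds is kept only if it touches a
    sure edge (single 8-neighbour pass here, for brevity)."""
    sure = grad_mag >= high
    weak = (grad_mag >= low) & ~sure
    # Grow the sure-edge map by one pixel to find weak pixels
    # connected to a sure edge.
    padded = np.pad(sure, 1)
    neighbour = np.zeros_like(sure)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            neighbour |= padded[1 + dy : padded.shape[0] - 1 + dy,
                                1 + dx : padded.shape[1] - 1 + dx]
    return sure | (weak & neighbour)

def dilate(binary, iterations=1):
    """3x3 binary dilation used to thicken thin edge lines."""
    out = binary.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1)
        grown = np.zeros_like(out)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= padded[1 + dy : padded.shape[0] - 1 + dy,
                                1 + dx : padded.shape[1] - 1 + dx]
        out = grown
    return out
```

In practice the whole 2.2.1 chain would be a Gaussian blur, Sobel gradients, non-maximum suppression, the hysteresis step above, and then dilation of the resulting binary edge map.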
The binary image obtained after this operation is named the objects image. The outputs of these steps are given in Fig. 3.

Figure 3. Steps of object detection. (a) FOV image, (b) Edge detection of FOV image, (c) Morphological operations applied to the binary image (objects image)

2.2.2. Searching for a landing zone

Once the objects in the FOV image are detected, the next step is to find the closest available zone for landing. The first parameter to consider in this step is the minimum dimensions of the suitable landing zone. Obviously, this parameter depends on the altitude at which the FOV image is taken. As illustrated in Fig. 4, in order to find a relation between the dimensions and the altitude, a calibration procedure is carried out on images taken at five different altitudes. When the altitude is 1 meter, the ideal dimensions are 60x60 pixels; on the other hand, when the altitude is 5 meters, the ideal dimensions reduce to 20x20 pixels. The ideal dimensions for a suitable landing zone change linearly with respect to altitude.

In a typical emergency landing scenario, a maximum altitude of 5 meters is appropriate for initialization of image processing tasks (Lee et al. 2014). Therefore, we used the same altitude value in our analysis as well. In case the emergency landing is triggered at a higher altitude, the UAV is initially brought to a 5-meter altitude, then the image processing is activated.

After determining the optimal dimensions for landing, an available space with these dimensions is searched in the FOV image. The availability of a spot in the image is defined as an area in which no object is present. In order to check availability, a binary mask is created: an area with the optimal dimensions is made "1", and all the remaining area is left "0". The area with "1" values in the mask is the proposed region whose availability is checked. By observing the output of the logical AND operation between the binary mask and the objects image, it is possible to determine whether a spot is available.
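The size calibration and the mask-based availability check can be sketched as follows. This is a minimal sketch under our own naming and rounding choices; the 60 px at 1 m / 20 px at 5 m linear calibration and the AND-overlap rule are from the text.

```python
import numpy as np

def zone_size_px(altitude_m):
    """Linear size/altitude relation from the calibration in the text:
    a straight line through (1 m, 60 px) and (5 m, 20 px)."""
    return int(round(60 - 10 * (altitude_m - 1)))

def spot_is_free(objects_img, cx, cy, size):
    """Check a candidate landing zone by ANDing a binary mask of the
    proposed region with the objects image; any overlap means the
    region would be labeled "negative"."""
    mask = np.zeros(objects_img.shape, dtype=bool)
    half = size // 2
    mask[max(cy - half, 0) : cy + half, max(cx - half, 0) : cx + half] = True
    # Logical AND between the mask and the objects image: a nonzero
    # result means an object overlaps the proposed region.
    return not np.any(mask & objects_img.astype(bool))
```

For example, `zone_size_px(3)` gives the 40x40-pixel zone implied by the linear relation at 3 m.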
If the logical AND operation returns a "1" value, it means that there is an overlap between one of the objects and the proposed region in the binary mask; therefore, the proposed region is labeled "negative". On the other hand, if a value of "0" is returned from the AND operation, no overlap is present, hence the proposed region is labeled "positive".

When a proposed region is labeled "negative", another region should be proposed immediately until a "positive" label is achieved. Since this work concerns only emergency situations, the location of the first proposed region is the middle of the FOV image, which is the spot closest to the UAV. If this spot is not available, a circular vicinity of the middle point is searched for availability, and the radius of the circle is gradually increased until an available spot is found. The steps of searching for a landing zone are given in Fig. 5.

Figure 4. Ideal position to initialize the search of the landing pad at different heights

Figure 5. Steps of searching for a landing zone. (a) Objects image, (b) A binary mask for proposing a region, (c) Output of the AND operation between (a) and (b): "negative" labeling, (d) Another binary mask for proposing a region, (e) Output of the AND operation between (a) and (d): "positive" labeling, (f) Detected landing zone shown on the FOV image
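The outward search just described can be sketched as a loop that proposes the image centre first and then candidates on circles of growing radius. Sampling the circle at a fixed arc step and the 5-pixel radius increment are our assumptions for illustration; the centre-first, radius-growing strategy is from the text.

```python
import numpy as np

def find_landing_zone(objects_img, size, step=5):
    """Propose the image centre first, then candidate centres on
    circles of growing radius, until a zone free of objects is found.
    Returns (x, y) of the "positive" zone, or None if none exists."""
    h, w = objects_img.shape
    cx0, cy0 = w // 2, h // 2
    half = size // 2
    obj = objects_img.astype(bool)

    def free(cx, cy):
        # Out-of-frame proposals are rejected outright.
        if cx - half < 0 or cy - half < 0 or cx + half > w or cy + half > h:
            return False
        return not obj[cy - half : cy + half, cx - half : cx + half].any()

    r = 0
    while r <= max(w, h):
        # Sample candidate centres on the circle of radius r.
        n = max(1, int(2 * np.pi * r / step))
        for k in range(n):
            a = 2 * np.pi * k / n
            cx = int(round(cx0 + r * np.cos(a)))
            cy = int(round(cy0 + r * np.sin(a)))
            if free(cx, cy):
                return cx, cy          # "positive" label
        r += step                      # all zones at this radius were "negative"
    return None
```

On an empty objects image this returns the centre immediately, which matches the emergency requirement of preferring the spot directly below the UAV.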
3. RESULTS AND DISCUSSION

The method has been evaluated in terms of speed (i.e., runtime) and detection performance on a set of images taken from different altitudes. Here, we define speed as the runtime of the algorithm to find a suitable landing zone in the image. The detection performance is defined by the precision and recall values related with object detection in the images.

As explained earlier, the landing zone detection is part of an autonomous landing system framework for emergency situations. Hence, the entire detection task has been run on a single board computer, which may easily be integrated into a UAV system. The single board computer used in the experiments is a Raspberry Pi 3 Model B with a Broadcom BCM2837B0 chipset, a Cortex-A53 (ARMv8) 64-bit processor working at 1.4 GHz and 1 GB of memory. All the processed images have dimensions of 600x400 pixels.

During the experiments, it has been observed that the runtime of the algorithm for a particular image differs between consecutive runs. This may be because of tasks related to the operating system of the single board computer. Therefore, the algorithm has been run on all of the images 10 separate times (i.e., 25 images x 10 runs = 250 observations) and some statistical values are calculated on the observed runtimes. The range of the observations is [1.3940, 2.8478] seconds, where the mean and standard deviation are calculated as 2.4923 seconds and 0.3899 seconds, respectively.

For calculating the precision and recall values, the numbers of true positive (TP), false positive (FP) and false negative (FN) detections are determined. In order to determine these values, intersection over union (IoU) is utilized for all the detections. Calculation of IoU involves computing the overlap between the ground truth object and the detections. Since the purpose of this work is to find a suitable landing location, the threshold for IoU is set at 95%. This means that if the IoU for a detection is smaller than this threshold, it is considered FP, otherwise TP. On the other hand, any misdetection of an object is counted as FN.
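The IoU test applied to each detection can be computed as below. Representing boxes as (x1, y1, x2, y2) corner pairs is our convention for illustration; the 95% acceptance threshold is from the text.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2). A detection whose IoU with the ground truth
    falls below the 95% threshold used in the text counts as FP,
    otherwise TP."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is then accepted when `iou(detection, ground_truth) >= 0.95`.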
In the terminology of object detection, precision is defined as the probability of the detected objects matching the actual objects. On the other hand, recall measures the probability of actual objects being correctly detected. Once the related quantities are determined, the precision and recall are calculated as follows:

precision = TP / (TP + FP)    (1)

recall = TP / (TP + FN)    (2)

Five test images were taken from each of five different altitudes; hence a total of 25 images are used for evaluating the object detection performance. Besides, the algorithm is expected to work efficiently at different times of the day. Therefore, the same experiments were repeated by changing the brightness of the test images. The added brightness amounts vary from -20% to +60%. The related results are given in Table 1.

Table 1. Precision and recall for different brightness levels and altitudes

Brightness    1 m               2 m               3 m               4 m               5 m
added (%)     Prec.   Recall    Prec.   Recall    Prec.   Recall    Prec.   Recall    Prec.   Recall
-20           0.916   0.916     0.850   0.950     0.875   0.916     0.812   0.916     0.833   0.875
-10           0.916   1.000     0.900   1.000     0.875   0.916     0.875   0.937     0.875   0.916
0             1.000   1.000     1.000   1.000     0.958   1.000     0.937   1.000     0.916   1.000
+10           1.000   1.000     1.000   1.000     0.958   1.000     0.937   1.000     0.916   1.000
+20           0.916   0.916     0.900   0.950     0.916   0.916     0.875   0.937     0.916   0.916
+40           0.875   0.916     0.850   0.900     0.875   0.875     0.812   0.875     0.833   0.875
+60           0.833   0.833     0.800   0.850     0.833   0.850     0.812   0.812     0.791   0.833
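Equations (1) and (2) can be written directly in code; the guards against empty denominators are our addition for robustness and do not arise in the paper's experiments.

```python
def precision_recall(tp, fp, fn):
    """Eq. (1) and (2): precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Given the TP/FP/FN counts produced by the IoU test, this yields the per-altitude, per-brightness values of Table 1.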
As can be seen from Table 1, the highest precision and recall values are obtained at low altitudes (1 m and 2 m) when no or +10% brightness is added to the images. It is also notable that the results for brightness additions of 0 and +10% are identical for all of the altitudes, meaning that the object detection is robust to the addition of a small amount of illumination. On the other hand, when the brightness or darkness of the images is increased further, the rates of FP and FN detections increase as well, yielding decrements in the precision and recall values. Preprocessing of the images using various filters or histogram equalization may be a useful step to improve the detection performance under different illumination levels.

When the table is analyzed according to different altitude levels, slight decrements in performance are observed as the altitude is increased. Thus, it may be good practice to check the object locations during the emergency landing so that the detected locations can be updated when necessary. Furthermore, the recall value is generally higher than the precision value for all of the test instances. This means that the rate of FN detections is smaller than that of FP detections. When the application area of the framework is considered, the burden of an FN detection is higher than that of an FP detection, because misdetection of an object (i.e., FN) may cause a crash, whereas detecting an object at an available area (i.e., FP) will just increase the runtime of the program.

The overall accuracy of landing site detection is directly related with the speed and performance of the object detection step. As a result, the high speed, precision, and recall values indicate the suitability of this method for autonomous landing.

4. CONCLUSION

An emergency landing system framework is proposed together with the details of the related image processing algorithm. The system is intended to work without any kind of marker showing the landing zone. Additionally, it does not utilize GPS signals, which may be unavailable under certain circumstances. It is based on direct detection of a suitable landing zone by processing images taken from the onboard camera of the UAV. Since it does not involve any training or testing of a predictive model, the computational load is low and hence the corresponding response time is reasonable. On average, it takes around 2.5 seconds to make the detection on a single board computer, and a 100% correct detection rate is achieved for the images taken from 1 m and 2 m. The smallest precision and recall values are 79.1% and 81.2%, respectively. These results show that the method is suitable for real-world scenarios. In addition, higher detection performance at lower altitudes means that the algorithm should be fast enough to make a final decision at 2 meters. In the future, the object detection may be run at different altitudes of the emergency landing process to update the landing trajectory accordingly. Also, the latencies in the data transfer pipeline may be considered for a more accurate response time.
This method is obviously intended to work over terrestrial zones. In other words, it may not detect a water area, which is not a suitable zone for landing. Therefore, the algorithm may be improved to work on images involving water areas.

ACKNOWLEDGEMENT

This research was supported by the Scientific Research Project Unit of Adana Alparslan Türkeş Science and Technology University.

REFERENCES

AMAZON (2017). Prime Air. https://www.amazon.com/Amazon-Prime-Air/b?ie=%20UTF8&node=8037720011
Aydin B, Selvi E, Tao J & Starek M (2019). Use of Fire-Extinguishing Balls for a Conceptual System of Drone-Assisted Wildfire Fighting. Drones, 3(1). https://doi.org/10.3390/drones3010017
Barták R, Hraško A & Obdržálek D (2014). On autonomous landing of AR.Drone: Hands-on experience. Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014: 400-405.
Cabrera-Ponce A A & Martinez-Carranza J (2017). A vision-based approach for autonomous landing. 2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS). Linköping, Sweden. DOI: 10.1109/RED-UAS.2017.8101655
Canny J (1986). A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8(6): 679-698. DOI: 10.1109/TPAMI.1986.4767851
Chen W, Yue H, Wang J & Wu X (2014). An improved edge detection algorithm for depth map inpainting. Optics and Lasers in Engineering, 55: 69-77. https://doi.org/10.1016/j.optlaseng.2013.10.025
Fitzgerald D & Walker R (2005). Classification of Candidate Landing Sites for UAV Forced Landings. AIAA Guidance, Navigation, and Control Conference and Exhibit, San Francisco, California.
Fitzgerald D, Walker R & Campbell D (2005). A Vision Based Emergency Forced Landing System for an Autonomous UAV. Australian International Aerospace Congress, Melbourne, Australia.
Guo X, Denman S, Fookes C, Mejias L & Sridharan S (2014). Automatic UAV Forced Landing Site Detection Using Machine Learning. 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA). Wollongong, NSW, Australia. DOI: 10.1109/DICTA.2014.7008097
Heincke B, Jackisch R, Saartenoja A, Salmirinne H, Rapp S, Zimmermann R, Pirttijärvi M et al. (2019). Developing multi-sensor drones for geological mapping and mineral exploration: setup and first results from the MULSEDRO project. Geological Survey of Denmark and Greenland Bulletin 43. https://doi.org/10.34194/GEUSB-201943-03-02
Ho H W (2017). Autonomous landing of Micro Air Vehicles through bio-inspired monocular vision. PhD Thesis, ISBN: 978-94-6186-818-3
Hoffmann G M, Huang H, Waslander S L & Tomlin C (2007). Quadrotor Helicopter Flight Dynamics and Control: Theory and Experiment. AIAA Guidance, Navigation and Control Conference and Exhibit, South Carolina
Jakob S, Zimmermann R & Gloaguen R (2016). Processing of drone-borne hyperspectral data for geological applications. 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS). Los Angeles, CA, USA. DOI: 10.1109/WHISPERS.2016.8071689
Kim J & Sukkarieh S (2002). Flight Test Results of GPS/INS Navigation Loop for an Autonomous Unmanned Aerial Vehicle (UAV). Proceedings of the 15th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GPS 2002). Portland, OR, 510-517
Kumar K, Li J & Khan S (2015). Comparative Study on Various Edge Detection Techniques for 2-D Image. International Journal of Computer Applications 119(22), 6-10.
Lee D, Lim H, Kim H J & Kim Y (2012). Adaptive Image-Based Visual Servoing for an Underactuated Quadrotor System. Journal of Guidance, Control, and Dynamics, 35(4), 1335-1353.
Lee M R, Su S, Yeah J E, Huang H & Chen J (2014). Autonomous landing system for aerial mobile robot cooperation. 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems (SCIS) and 15th International Symposium on Advanced Intelligent Systems (ISIS). Kitakyushu, Japan. DOI: 10.1109/SCIS-ISIS.2014.7044826
Liteye (2018). HK$1 million in damage caused by GPS jamming that caused 46 drones to plummet during Hong Kong show. https://liteye.com/hk1-million-in-damage-caused-by-gps-jamming-that-caused-46-drones-to-plummet-during-hong-kong-show/
Lopez-Franco C, Gomez-Avila J, Alanis A, Arana-Daniel N & Villaseñor C (2017). Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller. Sensors, 17(8), 1865. https://doi.org/10.3390/s17081865
Lu A, Ding W & Li H (2013). Multi-information Based Safe Area Step Selection Algorithm for UAV's Emergency Forced Landing. Journal of Software, 8(4), 995-1002.
Lunghi P, Ciarambino M & Lavagna M (2015). Vision-Based Hazard Detection With Artificial Neural Networks for Autonomous Planetary Landing. 13th ESA/ESTEC Symposium on Advanced Space Technologies in Robotics and Automation, ASTRA
Martin S, Bange J & Beyrich F (2010). Meteorological profiling of the lower troposphere using the research UAV "M²AV Carolo". Atmospheric Measurement Techniques Discussions 4, 705-716. DOI: 10.5194/amt-4-705-2011
Mazeh H, Saied M, Shraim H & Clovis F (2018). Fault-Tolerant Control of an Hexarotor Unmanned Aerial Vehicle Applying Outdoor Tests and Experiments. IFAC-PapersOnLine 51(22), 312-317. https://doi.org/10.1016/j.ifacol.2018.11.560
Nemati A, Sarim M, Hashemi M, Schnipke E et al. (2015). Autonomous Navigation of UAV through GPS-Denied Indoor Environment with Obstacles. AIAA SciTech, Kissimmee, Florida
Nguyen N P, Mung N X & Hong S K (2019). Actuator Fault Detection and Fault-Tolerant Control for Hexacopter. Sensors 19(21), 4721. https://doi.org/10.3390/s19214721
PropotoUAV (2019). Drones are being used for weather forecasting by meteorologists. https://www.prophotouav.com/meteorologists-storm-weather-drones/
Sani M F & Karimian G (2017). Automatic navigation and landing of an indoor AR.drone quadrotor using ArUco marker and inertial sensors. 2017 International Conference on Computer and Drone Applications (IConDA), Kuching, Malaysia. DOI: 10.1109/ICONDA.2017.8270408
Veroustraete F (2015). The Rise of the Drones in Agriculture. EC Agriculture 2(2), 325-327.
Zhao H & Wang Z (2012). Motion Measurement Using Inertial Sensors, Ultrasonic Sensors, and Magnetometers With Extended Kalman Filter for Data Fusion. IEEE Sensors Journal 12(5), 943-953. DOI: 10.1109/JSEN.2011.2166066

© Author(s) 2021. This work is distributed under https://creativecommons.org/licenses/by-sa/4.0/

Journal

Turkish Journal of Engineering

Published: Oct 1, 2021
