Recent Advances in Collaborative Scheduling of Computing Tasks in an Edge Computing Paradigm

Review

Shichao Chen, Qijie Li, Mengchu Zhou * and Abdullah Abusorrah

1 Faculty of Information Technology, Macau University of Science and Technology, Macau 999078, China; shichao.chen@ia.ac.cn
2 The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
3 School of Mechanical and Electrical Engineering and Automation, Harbin Institute of Technology, Shenzhen 518000, China; liqijie1998@163.com
4 Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA
5 Department of Electrical and Computer Engineering, Faculty of Engineering, and Center of Research Excellence in Renewable Energy and Power Systems, King Abdulaziz University, Jeddah 21481, Saudi Arabia; aabusorrah@kau.edu.sa
* Correspondence: mengchu.zhou@njit.edu

Abstract: In edge computing, edge devices can offload their overloaded computing tasks to an edge server. This can give full play to an edge server's advantages in computing and storage, and efficiently execute computing tasks. However, if they together offload all the overloaded computing tasks to an edge server, it can be overloaded, thereby resulting in the high processing delay of many computing tasks and unexpectedly high energy consumption. On the other hand, the resources in idle edge devices may be wasted and resource-rich cloud centers may be underutilized. Therefore, it is essential to explore a computing task collaborative scheduling mechanism with an edge server, a cloud center and edge devices according to task characteristics, optimization objectives and system status.
It can help one realize efficient collaborative scheduling and precise execution of all computing tasks. This work analyzes and summarizes the edge computing scenarios in an edge computing paradigm. It then classifies the computing tasks in edge computing scenarios. Next, it formulates the optimization problem of computation offloading for an edge computing system. According to the problem formulation, the collaborative scheduling methods of computing tasks are then reviewed. Finally, future research issues for advanced collaborative scheduling in the context of edge computing are indicated.

Keywords: collaborative scheduling; edge computing; internet of things; limited resources; optimization; task offloading

Citation: Chen, S.; Li, Q.; Zhou, M.; Abusorrah, A. Recent Advances in Collaborative Scheduling of Computing Tasks in an Edge Computing Paradigm. Sensors 2021, 21, 779. https://doi.org/10.3390/s21030779
Received: 28 December 2020; Accepted: 12 January 2021; Published: 24 January 2021

1. Introduction

With the increasing deployment and application of the Internet of Things (IoT), more and more intelligent devices, e.g., smart sensors and smart phones, can access a network, resulting in a considerable amount of network data. Although their computing power is rapidly increasing, they are unable to achieve real-time and efficient execution due to their limited computing resources and ever-demanding applications. When they face highly complex computing tasks and services, cloud computing [1,2] can process these tasks to achieve device–cloud collaboration.
In a cloud computing paradigm, users can rely on the extremely rich storage and computing resources of a cloud computing center to expand the computing and storage power of devices, and achieve the rapid processing of computing-intensive tasks. Yet there are some disadvantages in the device–cloud collaboration mode, such as incurring high transmission delay and pushing network bandwidth requirements to the limit.

In order to solve the problems of cloud computing for data processing, edge computing [3,4] is put forward to provide desired computing services [5] for users by using computing, network, storage and other resources on the edge, that is, near a physical entity or data source. Compared with cloud computing, some applications of users in edge computing can be processed on an edge server near intelligent devices, thus significantly reducing the data transmission delay and network bandwidth load required in edge–cloud collaboration. Eliminating the long-distance data transmissions encountered in device–cloud computing brings another advantage to edge computing, i.e., the latter can more effectively guarantee user data security. As a result, it has become an important development trend to use edge computing to accomplish various computing tasks for intelligent devices [6,7]. These devices are called edge devices in this paper.

The traditional scheduling strategies of edge computing tasks are to offload all computing-intensive tasks of edge devices to an edge server for processing [8–10]. However, this may result in the waste of computing and storage resources in edge devices and cloud computing centers.
In addition, many devices may access an edge server in the same time period. As a result, the server may face too many computing tasks, thus resulting in a long queue of tasks. This increases the completion time of all queued tasks, even causing the processing delay of tasks in the edge server to exceed that at the edge devices. On the other hand, many edge devices may be idle, resulting in a waste of their computing resources; and resource-rich cloud centers may be underutilized.

To solve the above problems, we can combine a cloud center, edge servers and edge devices together to efficiently handle the computing tasks of edge devices via task offloading. According to the computing tasks' characteristics, optimization objectives and system status, we should utilize the computing and storage resources of a cloud center, edge servers and edge devices, and schedule computing tasks to them for processing on demand. This can effectively reduce the load of edge servers, improve the utilization of resources, and reduce the average completion time of computing tasks in a system.

This paper focuses on the important problem of collaborative scheduling of computing tasks in an edge computing paradigm under IoT. It is noted that edge computing systems can be viewed as a special class of distributed computing systems. Traditional task scheduling in distributed computing focuses on distributing and scheduling a large task onto multiple similarly powerful computing nodes, and does not face the task offloading issues found in edge computing [9–12]. Edge computing arises to handle an IoT scenario where edge devices are resource-constrained and relatively independent.

In Section 2, we analyze the edge computing scenarios, and clarify their composition, characteristics and application fields. In Section 3, we analyze the computing tasks, and classify them together with factors influencing their completion.
We formulate the optimization problem of computation offloading with multiple objective functions for an edge computing system in Section 4. Based on the computing scenarios, computation tasks and formulated optimization model, we survey and summarize the collaborative scheduling methods of computing tasks in Section 5. This work is concluded in Section 6 by indicating the open issues for us to build a desired collaborative scheduling system for edge computing.

2. Computing Scenarios

In IoT, computing resources on the edge are mainly composed of edge devices and edge servers. In order to take advantage of cloud centers, we also consider them as part of the whole system in task scheduling. In general, a cloud center contains a large number of computing servers with high computing power. It is very important to reasonably use the computing, storage, bandwidth and other system resources to process computing tasks efficiently. In this section, different computing scenarios are analyzed and summarized according to the composition of computing resources.

On the edge, we have an edge server, edge devices and an edge scheduler. The server can provide computing, storage, bandwidth and other resources to support computing services for the edge computing tasks. Edge devices can execute computing tasks and may offload such tasks to the server and other available/idle edge devices. They have computing, storage, network and other resources, and can provide limited computing services for edge computing tasks. They have much fewer resources than edge servers do. An edge scheduler receives the computing tasks offloaded by edge devices, and provides scheduling services for the edge computing tasks according to the resources and status of all edge servers and edge devices under its supervision. It is a controller to realize the collaborative scheduling between edge servers and edge devices. However, it does not have to be present in every edge computing system.

According to the differences among the computing resources involved in the offloading and scheduling of computing tasks in edge computing, computing scenarios can be divided into four categories, i.e., basic, scheduler-based, edge-cloud computing, and scheduler-based edge-cloud one. Their characteristics and applications are described next. An edge computing architecture, in which end users reach edge servers and a cloud server through core networks, is shown in Figure 1.

Figure 1. Edge computing architecture.
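The four-scenario taxonomy above can be recorded programmatically. The sketch below is purely organizational; the names are ours and only restate which components participate in each scenario described in the text:

```python
# The four computing scenarios of Section 2, as the sets of components
# that take part in task offloading and scheduling. Illustrative only.

SCENARIOS = {
    "basic":                      {"edge_devices", "edge_servers"},
    "scheduler_based":            {"edge_devices", "edge_servers",
                                   "edge_scheduler"},
    "edge_cloud":                 {"edge_devices", "edge_servers",
                                   "cloud_center"},
    "scheduler_based_edge_cloud": {"edge_devices", "edge_servers",
                                   "edge_scheduler", "cloud_center"},
}

def has_scheduler(scenario):
    """Scenarios with an edge scheduler can route tasks strategically."""
    return "edge_scheduler" in SCENARIOS[scenario]
```

As the table suggests, the fourth scenario is a superset of the other three, which is why it subsumes their scheduling options.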
2.1. Basic Edge Computing

The first scenario is composed of edge devices and edge servers. There is no edge scheduler on the edge. In this scenario, an edge device can execute a computing task locally, or offload it to its edge server. The edge server executes it, and then feeds back the computing result to the corresponding edge device. This scenario is similar to the scene in which devices offload tasks to be performed in a cloud computing center. It is the simplest scenario in edge computing. For the computing tasks that can be offloaded to the edge servers, their offloading locations are fixed. Moreover, the processed types of computing tasks are fixed, and the specific types are determined by edge server resources. In addition, this scenario does not contain a cloud computing center. Hence, it is more suitable for processing tasks with a small amount of computation and strict delay requirements in a relatively closed environment. Its architecture is shown in Figure 2, which has been used in [11].

According to the above analysis, the Quality of Service (QoS) levels are expected to be achieved with the proposed scenario. Task completion time is used to measure QoS. We assume that all edge servers are the same and all edge devices are uniform. Note that most of the presented content can be easily extended to heterogeneous devices and servers.

Figure 2. Basic edge computing (MAN: metropolitan area network and WLAN: wireless local area network).

Let t denote the completion time of a task offloaded to an edge server, which includes the data transmission latency between an edge device and an edge server, the task processing time in an edge server, and the waiting time before it is processed. Let h denote the time to run a single instruction, W be the waiting time before a task is processed in an edge server, and G be the number of instructions for this task. Then the completion time of a task can be computed as:

t = n(1 + P_t)·t̄ + W + h·G    (1)

where n is the number of packets transmitted, which includes the process of bidirectional data transmission between an edge device and an edge server, P_t is the packet loss rate between an edge device and an edge server, occurring during the n packet transmissions, and t̄ is the average latency per packet between an edge device and an edge server, which includes the sum of delays caused by processing, queuing, and transmission of the n packets.
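Equation (1) can be sketched in a few lines of code. The function below is a minimal illustration of the formula; the parameter values in the example call are assumptions for demonstration, not measurements from the paper:

```python
# A minimal sketch of Equation (1): completion time of a task offloaded
# to an edge server in the basic scenario.

def completion_time_edge_server(n, packet_loss, avg_latency,
                                wait_time, instr_time, num_instructions):
    """t = n(1 + P_t)*t_bar + W + h*G.

    n                -- packets exchanged (both directions)
    packet_loss      -- P_t, packet loss rate on the device-server link
    avg_latency      -- t_bar, average per-packet latency (s)
    wait_time        -- W, queueing delay at the server (s)
    instr_time       -- h, time to run a single instruction (s)
    num_instructions -- G, instructions the task needs
    """
    transmission = n * (1 + packet_loss) * avg_latency
    processing = wait_time + instr_time * num_instructions
    return transmission + processing

# Illustrative values: 100 packets, 1% loss, 2 ms per packet, 50 ms
# queueing, and 1e9 instructions at 1 ns per instruction.
t = completion_time_edge_server(100, 0.01, 0.002, 0.05, 1e-9, 1e9)
```

Note that the transmission term scales the average latency by the loss rate, modeling the retransmission overhead of lost packets.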
2.2. Scheduler-Based Edge Computing

The second one is composed of edge devices, edge servers and an edge scheduler. Compared with the first one, it includes an edge scheduler, which can schedule tasks strategically. In this scenario, an edge device can process a computing task locally or offload its task to an edge scheduler. The edge scheduler reasonably schedules the tasks to edge servers and edge devices according to scheduling policies. The policies are formed based on the current computing, storage, task execution status, network status, and other information related to all edge servers and devices. Finally, the scheduler feeds the computing results back to the source devices. The main feature of this scenario is that the computing tasks can be reasonably scheduled to different servers and edge devices by the edge scheduler, so that the collaborative processing of computing tasks in different edge servers and edge devices can be well-realized. The computing resources of edge devices can be fully utilized. The types of computing resources on the edge are diverse; so are the types of computing tasks that can be processed. The computing and storage resources on the edge are limited in comparison with a cloud computing center. Clearly, this architecture is suitable for processing tasks with a small amount of computation and strict delay requirements, as shown in Figure 3. The study [12] has adopted it.

Figure 3. Scheduler-based edge computing.

According to this scenario, the task can be scheduled to an edge server for handling by an edge scheduler. Similar to the first scenario, the completion time of the task scheduled to a target edge server can be computed as:

r = n(1 + P_r)·r̄_A + n(1 + P'_r)·r̄_B + W + h·G    (2)

where P_r is the packet loss rate between an edge device and an edge scheduler, and P'_r is the packet loss rate between an edge scheduler and a target edge server, occurring during the n packets' transmission. r̄_A is the average latency per packet between an edge device and an edge scheduler, and r̄_B is the average latency per packet between an edge scheduler and a target edge server, which includes the sum of delays caused by processing, queuing and transmission of the n packets. If the task of an edge device is directly scheduled to its edge server, P_r = P'_r = P_t, r̄_A = 0 and r̄_B = t̄, so that Equation (2) reduces to Equation (1).

Let h' denote the time to run a single instruction and G' be the number of instructions for this task processing in an edge device.
The completion time of the task scheduled to a target edge device can be computed as:

r' = n(1 + P_r)·r̄_A + n(1 + P''_r)·r̄_C + W' + h'·G'    (3)

where P''_r is the packet loss rate between an edge scheduler and a target edge device, occurring during the n packet transmissions. r̄_C is the average latency per packet between an edge scheduler and a target edge device, which includes the sum of delays caused by processing, queuing and transmission of the n packets. W' is the waiting time before a task is processed at an edge device.

Compared to the first scenario, the significant difference is that an edge scheduler can schedule the tasks to an idle edge server according to the real-time status of the network and edge servers. If the task processing time accounts for a large proportion of the total time, then this scenario offers more advantages over the first one.

2.3. Edge-Cloud Computing

This scenario is composed of edge devices, edge servers and a cloud computing center, and has no edge scheduler on the edge. An edge device can execute a computing task locally, or offload it to its edge server or cloud center. Its difference from the first scenario is that its edge devices can offload their tasks to their cloud computing center. The specific offloading to an edge server or cloud computing center is determined by edge devices according to the attributes of their computing tasks and the QoS requirements from users. Its main feature is the same as the first scenario, i.e., the offloading location and types of an edge computing task are fixed. All the tasks that require a large amount of computation and are insensitive to delay on the edge can be offloaded to a cloud computing center. Therefore, the processing of computing tasks in this architecture is not affected by the computation amount, but only by the types of edge environment resources, as shown in Figure 4. Such architectures are adopted in [13].
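Equations (2) and (3) share a two-hop structure that differs from Equation (1) only in the extra scheduler leg. A hedged sketch, with illustrative parameter values that are our own assumptions:

```python
# A sketch of Equations (2) and (3): completion time when a task passes
# through an edge scheduler before reaching a target edge server (Eq. 2)
# or a target edge device (Eq. 3).

def completion_via_scheduler(n, loss_dev_sched, lat_dev_sched,
                             loss_sched_target, lat_sched_target,
                             wait_time, instr_time, num_instructions):
    """r = n(1 + P_r)*r_A + n(1 + P'_r)*r_B + W + h*G.

    The same form covers Eq. (3) when the target is an edge device:
    pass the scheduler-to-device loss/latency and the device's own
    W', h' and G' instead of the server's.
    """
    hop1 = n * (1 + loss_dev_sched) * lat_dev_sched        # device -> scheduler
    hop2 = n * (1 + loss_sched_target) * lat_sched_target  # scheduler -> target
    return hop1 + hop2 + wait_time + instr_time * num_instructions

# Direct-scheduling special case: with the scheduler leg's latency set
# to zero and the server's link parameters on the second hop, Eq. (2)
# collapses to the basic-scenario Eq. (1).
r = completion_via_scheduler(100, 0.01, 0.0, 0.01, 0.002, 0.05, 1e-9, 1e9)
```

The two loss rates are kept separate because the device-scheduler and scheduler-target links generally traverse different networks (WLAN vs. MAN).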
Figure 4. Edge-cloud computing.

In this scenario, a task can be handled in an edge server or a cloud server. We formulize this scenario with the completion time, according to the location where the task is handled. The completion time of the task offloaded to an edge server is the same as Equation (1).

Let z denote the completion time of a task offloaded to a cloud center, which includes the data transmission latency between an edge device and a cloud center and the task processing time in a cloud center. We assume that a cloud center is resource-intensive and the task computation does not need to wait. The completion time of a task can then be computed as follows:

z = n(1 + P_z)·z̄ + g·G    (4)

where n is the number of packets transmitted, which includes the process of bidirectional data transmission between an edge device and a cloud center, P_z is the packet loss rate between an edge device and a cloud center, occurring during the n packets' transmission, z̄ is the average latency per packet between an edge device and a cloud center, which includes the sum of delays caused by processing, queuing and transmission of the n packets, and g represents the time to run a single instruction in a cloud center.

2.4. Scheduler-Based Edge-Cloud Computing

The fourth scenario is composed of edge devices, edge servers, a cloud computing center and an edge scheduler. In this scenario, an edge device can execute its computing tasks locally or offload the tasks to the edge scheduler. Compared with the third architecture, the difference is that the edge scheduler receives all the computing tasks offloaded by edge devices, and schedules the computing tasks to proper computing entities (edge servers, idle edge devices, and/or a cloud computing center) for performing the services according to the computing resources, storage resources, network bandwidth and characteristics of tasks. It can give full play to the synergetic advantages among edge devices, edge servers and a cloud computing center. Its main feature is the same as the second one, i.e., the offloading position of edge computing tasks is uncertain, and the types are diverse. The processing of computing tasks is not affected by the computation amount, but only by the types of edge environment resources. Its architecture is shown in Figure 5. It is used in [14].

Figure 5. Scheduler-based edge-cloud computing.
In this scenario, the tasks can be handled in an idle edge server, a cloud server or an edge device. They can be scheduled to appropriate locations by an edge scheduler. Similar to the second scenario, the completion time of a task scheduled to a target edge server is the same as Equation (2). The completion time of a task scheduled to a target edge device is the same as Equation (3).

The completion time of a task scheduled to a cloud center can be computed as:

e = n(1 + P_e)·ē_A + n(1 + P'_e)·ē_B + g·G    (5)

where P_e is the packet loss rate between an edge device and an edge scheduler, and P'_e is the packet loss rate between an edge scheduler and a cloud center, occurring during the n packets' transmission. ē_A is the average latency per packet between an edge device and an edge scheduler and ē_B is the average latency per packet between an edge scheduler and a cloud center, which includes the sum of delays caused by processing, queuing and transmission of the n packets.
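Equations (2), (3) and (5) give the scheduler one completion-time estimate per candidate target, so the scheduling decision in this scenario can be sketched as a simple minimization. All link and compute parameters below are illustrative assumptions, not values from the paper:

```python
# A sketch of the scheduling decision in the fourth scenario: estimate
# the completion time of a task at each candidate target and pick the
# smallest estimate.

def completion(n, loss1, lat1, loss2, lat2, wait, per_instr, instrs):
    """Generic two-hop form shared by Eqs. (2), (3) and (5); for the
    cloud center (Eq. 5) the waiting time is assumed to be zero."""
    return (n * (1 + loss1) * lat1 + n * (1 + loss2) * lat2
            + wait + per_instr * instrs)

def pick_target(task_instrs, n=100):
    # target: (loss1, lat1, loss2, lat2, wait, time per instruction)
    candidates = {
        "edge_server": (0.01, 0.001, 0.01, 0.002, 0.05, 1e-9),   # Eq. (2)
        "edge_device": (0.01, 0.001, 0.02, 0.003, 0.20, 5e-9),   # Eq. (3)
        "cloud":       (0.01, 0.001, 0.05, 0.020, 0.00, 2e-10),  # Eq. (5)
    }
    times = {name: completion(n, *p, task_instrs)
             for name, p in candidates.items()}
    return min(times, key=times.get), times
```

With these assumed parameters, small tasks end up at a nearby edge server while computation-heavy tasks are routed to the cloud, which mirrors the trade-off between transmission delay and processing speed discussed in the text.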
Compared to other scenarios, the offloaded tasks can be scheduled to a suitable location by an edge scheduler according to the attributes of tasks and the real-time status of the network, edge servers and cloud servers, which can give full play to the computing power of the whole network system to achieve the best QoS.

3. Computing Task Analysis

Next, the computing tasks are analyzed to ensure that they can be accurately scheduled to an appropriate node, and achieve the expected objectives, e.g., the minimal task completion time and least energy consumption. According to task attributes, we judge whether they can be split or not and whether there is interdependence among subtasks [15]. A specific judgment criterion is that if computing tasks are simple or highly integrated, they cannot be split, and they can only be executed locally as a whole at the edge devices or completely offloaded to edge servers. If they can be segmented based on their code and/or data [16,17], they can be divided into several parts, which can be offloaded. In summary, we have three modes for given computing tasks, i.e., local execution, partial offloading and full offloading. The specific offloading location of computing tasks should be well considered according to the computing power of devices, the current network status, and the resource status of edge devices, edge servers and a cloud computing center.

3.1. Local Execution

Whether edge computing tasks are executed locally or not should be determined according to the resources of an edge device, and the network and resource status of edge servers. If the available network bandwidth is not enough to support the successful uploading of a task, i.e., the remaining bandwidth of the current network is less than the bandwidth required for the uploading of the task, the computing task can only be performed locally. In addition, if the computing resources of edge servers are not available, such that the computing tasks cannot be processed in time, the tasks have to be executed locally.
If the computing power of an edge device itself can meet the service requirements, it performs its tasks locally, thus effectively reducing the workload of an edge server and the need for network bandwidth.

3.2. Full Offloading

Deciding whether edge computing tasks are completely offloaded to an edge server or scheduler requires one to consider the resources of edge devices, the current network, the availability of edge servers' resources and the system optimization effect. If (1) the currently available network bandwidth supports the successful offloading of edge computing tasks, and (2) the edge servers or other edge devices are idle and the computing tasks that are successfully offloaded can be processed immediately, then, according to a scheduling goal, the results of local execution and full offloading to the edge servers are compared, and local execution or offloading of the computing tasks is decided. For example, if the goal is to minimize the completion time required for processing a task, it is necessary to compare the completion time required for local execution with the one required for offloading to an edge server/cloud computing center. If the local execution takes less time, the tasks should be processed locally. Otherwise, they should be offloaded to the edge servers or cloud computing center for processing.

3.3. Partial Offloading

An indivisible computing task at an edge device can only be executed locally or completely offloaded to the edge scheduler, which then assigns it to an appropriate edge server or idle edge device. Divisible computing tasks can enjoy partial offloading. Their split sub-tasks should be taken as the scheduling unit, and the resources of edge devices, the network and the resources of edge servers should be considered comprehensively when they are scheduled. Considering the final processing effect of the overall tasks, each sub-task should be assigned to an appropriate computing node for processing.
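The site-selection logic of Sections 3.1 and 3.2 can be condensed into a simple decision rule. This is an illustrative sketch only: all inputs are assumed to be precomputed estimates, and the rule ignores the richer system state a real scheduler would use.

```python
def choose_execution_site(t_local, t_edge, t_cloud, bandwidth_ok, edge_idle):
    """Decide where to run an indivisible task, following Sections 3.1-3.2:
    if the network cannot carry the upload, or no edge/cloud resources are
    free, run locally; otherwise pick the site with the smallest estimated
    completion time."""
    if not bandwidth_ok or not edge_idle:
        return "local"
    best = min(("local", t_local), ("edge", t_edge), ("cloud", t_cloud),
               key=lambda site: site[1])
    return best[0]
```

With estimated completion times of 5 s locally, 3 s at an edge server and 4 s at the cloud, the rule offloads to the edge server; if the upload cannot succeed, it falls back to local execution regardless of the estimates.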
For a split computing task, if there is no interdependence among the sub-tasks, they can be assigned to different nodes and processed at the same time, so as to minimize energy consumption and reduce task completion time. If there is some interdependence among the sub-tasks, the interdependent subtasks should be assigned to the same computing node for execution. There are many methods for splitting tasks. Yang et al. [18] study the application repartition problem of periodically updating the partition during application execution, and propose a framework for the repartition of an application in a dynamic mobile cloud environment. Based on their framework, they design an online solution for the dynamic network connection of the cloud, which can significantly shorten the completion time of applications. Yang et al. [19] consider the computation partition of multiple users and the scheduling of offloaded computing tasks on cloud resources. According to the number of resources allocated on the cloud, an offline heuristic algorithm called SearchAdjust is designed to solve the problem, thus minimizing the average completion time of a user's applications. Liu et al. [20] make an in-depth study of the energy consumption, execution delay and cost of an offloading process in a mobile edge server system by using queuing theory, and put forward an effective solution to their formulated multi-objective optimization problems. The analysis result of computing tasks is summarized in Figure 6.
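The placement rule above (co-locate interdependent sub-tasks, spread independent ones across nodes so they can run in parallel) can be sketched with a union-find grouping. This is illustrative only; the surveyed papers use richer partition and repartition models.

```python
def assign_subtasks(num_subtasks, dependencies, nodes):
    """Group sub-tasks linked by a dependency onto the same node, and
    spread independent groups across nodes round-robin."""
    parent = list(range(num_subtasks))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in dependencies:
        parent[find(a)] = find(b)          # merge dependent sub-tasks

    groups = {}
    for t in range(num_subtasks):
        groups.setdefault(find(t), []).append(t)

    placement = {}
    for i, members in enumerate(groups.values()):
        for t in members:                  # whole group shares one node
            placement[t] = nodes[i % len(nodes)]
    return placement
```

For four sub-tasks where only sub-tasks 0 and 1 depend on each other and two nodes are available, the dependent pair lands on one node while the independent sub-tasks are spread over the remaining capacity.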
Figure 6. Computing task analysis and execution. (The figure shows a decision tree mapping task divisibility and resource/network conditions to local execution, full offloading or partial offloading, with independent sub-tasks assigned to different nodes in parallel and dependent sub-tasks assigned to the same node.)

4. System Model and Problem Formulation

We can optimize multiple objective functions for computation offloading in an edge computing model. They include average or total delay time and energy consumption.

4.1. System Model

A typical model is shown in Figure 7. We assume that the system consists of N edge devices, an edge cloud and a cloud center. It is assumed that an edge cloud [21] consists of n edge servers powered on. We consider the queue model at an edge device as an M/M/1 queue, an edge cloud as an M/M/n queue, and the cloud center as an M/M/∞ queue. Each edge device can offload a part or the whole of its tasks to an edge cloud through a wireless channel. If the number of offloaded tasks exceeds the maximum one that an edge cloud can process, the edge cloud may further offload such overloaded tasks to the cloud center for processing.

In this computing system, let D = {d_1, d_2, ..., d_N}, where D is the set of edge devices and d_i is the i-th edge device. Let S = {S_1, S_2, ..., S_n}, where S_k is the k-th edge server. We assume that the tasks generated by edge device d_i obey a Poisson process with an average arrival rate λ_i and contain data of size β_i. Let u_i denote the average service rate of edge device d_i. We denote ϑ_i as the probability of edge device d_i choosing to offload a task to an edge server.
Then the tasks offloaded to the cloud follow a Poisson process with an average arrival rate ϑ_i λ_i, and the tasks that are locally processed also follow a Poisson process, with an arrival rate (1 − ϑ_i)λ_i. Let Q̂ = {Q̂_1, Q̂_2, ..., Q̂_N}, where Q̂_i denotes the maximum task queue buffer of edge device d_i. We assume that an edge cloud has a single queue buffer and let Q_C denote its maximum task queue buffer.

Figure 7. Edge computing model. (The figure shows N edge devices d_1, ..., d_N, each with its own task queue, connected to an edge cloud, which is in turn connected to a cloud center.)

4.2. Communications Model

Let h_i denote the channel power gain between edge device d_i and an edge cloud. Denote p_i as the transmission power of edge device d_i, 0 < p_i < p̂_i, where p̂_i is its maximum transmission power. The uplink data rate for computation offloading of edge device d_i can be obtained as follows [22]:

R_i = κ_i B log2(1 + p_i h_i / (σ B))    (6)

where B is the channel bandwidth, σ denotes the noise power spectral density at the receiver, and κ_i is the portion of the bandwidth of an edge cloud's channels allocated to edge device d_i, where 0 ≤ κ_i ≤ 1.

4.3. Task Offloading Model

(1) Local execution model

Let w_i denote the normalized workload on edge device d_i. It represents the percentage of the central processing unit (CPU) that has been used. According to an M/M/1 queue model, the response time is T = 1/(u(1 − ρ)) = 1/(u − λ), where ρ = λ/u is the utilization and λ is the task arrival rate of the edge device [23]. We can compute the average response time of locally processing the tasks at edge device d_i as follows:

T_{i,o} = 1/(u_i(1 − w_i) − (1 − ϑ_i)λ_i) + 1/u_i    (7)

where 0 ≤ w_i < 1 and 0 ≤ (1 − ϑ_i)λ_i β_i ≤ Q̂_i.
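A minimal sketch of Equations (6) and (7), assuming the forms reconstructed above (all argument names are ours); the assertion encodes the stability requirement that later appears as constraint (28):

```python
from math import log2

def uplink_rate(kappa, bandwidth, power, gain, noise_psd):
    """Uplink data rate of Eq. (6): R_i = kappa_i * B * log2(1 + p_i h_i / (sigma B))."""
    return kappa * bandwidth * log2(1 + power * gain / (noise_psd * bandwidth))

def local_response_time(u, w, arrival, offload_prob):
    """Average local response time of Eq. (7) under an M/M/1 queue:
    T_{i,o} = 1 / (u_i (1 - w_i) - (1 - theta_i) lambda_i) + 1 / u_i."""
    residual = u * (1 - w) - (1 - offload_prob) * arrival
    assert residual > 0, "stability constraint violated: local queue would grow unboundedly"
    return 1.0 / residual + 1.0 / u
```

For example, with κ_i = 1, B = 1 Hz and p_i h_i / (σB) = 3, Equation (6) gives log2(4) = 2 bits/s; with u_i = 2, w_i = 0 and all tasks offloaded, Equation (7) reduces to 1/u_i + 1/u_i = 1.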
The energy consumption of locally processing the tasks at edge device d_i can be given as follows:

E_{i,o} = P_i T_{i,o}    (8)

where P_i denotes the computing power of edge device d_i when the tasks are locally processed.

(2) Edge cloud execution model

According to the above analysis, we can obtain the transmission time of offloading the tasks from edge device d_i to an edge cloud as follows:

T_i^e = ϑ_i λ_i β_i / R_i    (9)

In (9), 0 ≤ ϑ_i λ_i β_i ≤ Q_C. Let E_i^e denote the energy consumption of transmitting the tasks from edge device d_i to an edge cloud, and it is as follows:

E_i^e = p_i T_i^e    (10)

According to the queue model M/M/n, let c = {c_1, c_2, ..., c_n}, where c_k is the service rate of the k-th edge server. We use f_k to denote the CPU cycle frequency and ŵ_k to denote the maximum workload capacity of the k-th edge server. Let λ̃ denote the maximum task acceptance rate of the edge cloud. The total task arrival rate from N edge devices to an edge cloud can be computed as:

λ_all = Σ_{i=1}^{N} ϑ_i λ_i    (11)

Then the fraction of tasks φ that the edge cloud can compute is given by:

φ = 1, if λ_all ≤ λ̃;  φ = λ̃ / λ_all, if λ_all > λ̃    (12)

Hence, the actual execution rate at the edge cloud can be denoted as:

λ_e = φ λ_all    (13)

We can get the average waiting time of each task at an edge cloud:

T_{i,W} = S / (Σ_{k=1}^{n} c_k − λ_e)    (14)

where S denotes the utilization of an edge cloud.
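The admission behavior of Equations (11)-(13) can be sketched as follows. The cap λ̃ on the accepted rate is an assumed symbol, since the original glyph was lost in extraction:

```python
def edge_cloud_rates(offload_probs, arrivals, max_accept_rate):
    """Eqs. (11)-(13): total arrival rate at the edge cloud, the fraction
    phi of tasks it can admit, and the actual execution rate
    lambda_e = phi * lambda_all."""
    lam_all = sum(p * lam for p, lam in zip(offload_probs, arrivals))  # Eq. (11)
    phi = 1.0 if lam_all <= max_accept_rate else max_accept_rate / lam_all  # Eq. (12)
    return lam_all, phi, phi * lam_all                                  # Eq. (13)
```

For two devices each generating tasks at rate 2 and offloading half of them, λ_all = 2; if the edge cloud accepts at most rate 1, it admits φ = 1/2 of the offloaded tasks and the remainder spills over to the cloud center.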
The average computing time of each task is:

T_{i,C} = n / Σ_{k=1}^{n} c_k    (15)

According to [24], the workload of each edge server cannot exceed its maximum workload, and we can obtain the constraint:

0 < w_k ≤ ŵ_k    (16)

Let u_t denote the transmission service rate of an edge cloud; we can get the expected waiting time for the computation results:

T_i^d = 1 / (u_t − λ_e)    (17)

Similar to many studies [25,26], because the amount of data in the computation results is generally small, we ignore the transmission delay and energy consumption for an edge cloud to send the results back to an edge device. We can get the energy consumption of edge device d_i while its offloaded tasks are processed:

E_{i,W} = p_i′ (T_i^e + T_{i,W} + T_{i,C})    (18)

where p_i′ is the computation power of edge device d_i after offloading tasks.

Let T_i^o denote the total time of an offloading process, from the task transmission of edge device d_i to the return of the computation results from an edge cloud. We have:

T_i^o = T_i^e + T_{i,W} + T_{i,C} + T_i^d    (19)

The energy consumption of the offloading process at edge device d_i can be obtained as:

E_i^o = E_i^e + E_{i,W}    (20)

(3) Cloud execution model

When λ_all > λ̃, the overloaded tasks are transmitted to the cloud center over a wired network. Let T^r denote a fixed communication time between an edge server and a data center. Due to the sufficient computing power of the cloud center, we assume no task waiting time at the cloud center. According to the queue model M/M/∞, let u_CC denote the service rate of a cloud center. The execution time of an overloaded task is:

T_{i,CC} = T^r + 1/u_CC    (21)

The expected time for the results of overloaded tasks to travel from a cloud center back to the corresponding edge device is:

T′_{i,CC} = 1/(u_t − λ_e) + T^r    (22)

Let T^o_{i,CC} denote the total time of an offloading process from the task transmission of edge device d_i to the return of the computation results from a cloud center.
We have:

T^o_{i,CC} = T_{i,CC} + T′_{i,CC}    (23)

The corresponding energy consumption is:

E_{i,CC} = p_i′ (T_{i,CC} + T′_{i,CC})    (24)

4.4. Problem Formulation

From (7), (8), (14), (15), (17), (21) and (22), we can obtain the execution delay time for edge device d_i, i.e.,

T_i = T_{i,o} + T_i^e + φ(T_{i,W} + T_{i,C} + T_i^d) + (1 − φ)(T_{i,CC} + T′_{i,CC})    (25)

So, the average delay time of all edge devices in the computing system is:

T̄ = (1/N) Σ_{i=1}^{N} T_i    (26)

To minimize the total execution time of these tasks in an edge computing system, we formulate the optimization problem as:

Min T̄    (27)

subject to:

(1 − ϑ_i)λ_i < u_i(1 − w_i)    (28)
0 ≤ (1 − ϑ_i)λ_i β_i ≤ Q̂_i    (29)
0 ≤ ϑ_i λ_i β_i ≤ Q_C    (30)
λ_e < Σ_{k=1}^{n} c_k    (31)
0 < p_i < p̂_i    (32)
0 ≤ ϑ_i ≤ 1    (33)
0 ≤ w_i < 1    (34)
0 ≤ κ_i ≤ 1    (35)

and (16). From (8), (10), (18) and (24), we can get the energy consumption for edge device d_i, which is given as follows:

E_i = E_{i,o} + E_i^e + φ E_{i,W} + (1 − φ)E_{i,CC}    (36)

So, the average energy consumption of all edge devices in the computing system is:

Ē = (1/N) Σ_{i=1}^{N} E_i    (37)

To minimize the energy consumption of these tasks in an edge computing system, we formulate the optimization problem as:

Min Ē    (38)

subject to (16) and (28)–(35).

Min{T̄} and Min{Ē} together with the related constraints lead to a biobjective optimization problem. It can be solved with different methods [27–29]. Problem (27) represents a constrained mixed-integer nonlinear program for making the optimal offloading decisions. It is NP-hard, so it cannot in general be solved exactly in polynomial time. An optimal solution can be obtained with polynomial-time methods in some specific scenarios, but not in general cases, as shown in [22,24,30].

5. Computing Task Scheduling Scheme

After thoroughly analyzing computing tasks in edge computing, the tasks offloaded to an edge scheduler need to be synergistically scheduled.
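As a toy illustration of the optimization such a scheduler faces, Problem (27) restricted to a single device can be explored by a coarse grid search over the offloading probability ϑ_i. The lumped remote-time term `t_remote` is a simplifying assumption standing in for Equations (9), (14), (15) and (17); the full problem couples all devices and is NP-hard.

```python
def best_offload_prob(u, w, lam, t_remote, steps=100):
    """Grid search over theta in [0, 1] for one device: total delay is
    approximated as the local M/M/1 response time of Eq. (7) plus a
    theta-weighted remote term. Infeasible points violating the
    stability constraint (28) are skipped."""
    best_theta, best_t = 0.0, float("inf")
    for k in range(steps + 1):
        theta = k / steps
        residual = u * (1 - w) - (1 - theta) * lam   # constraint (28)
        if residual <= 0:
            continue                                  # local queue unstable
        t = 1.0 / residual + 1.0 / u + theta * t_remote
        if t < best_t:
            best_theta, best_t = theta, t
    return best_theta, best_t
```

When the remote path is very slow the search keeps everything local (ϑ = 0); when it is essentially free, full offloading (ϑ = 1) minimizes the delay, mirroring the trade-off in Equation (25).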
Due to the limited computing and storage resources on the edge devices, and the resource competition among multiple tasks, it is essential to schedule the tasks optimally in terms of task completion time and energy consumption [31].

Many scheduling algorithms have been proposed. Traditional task scheduling algorithms mainly include the Min-Min, Max-Min and Sufferage algorithms [32], first come first served, and minimum completion time [33]. Most of them take delay as the optimization goal, but this easily results in load imbalance among computing nodes. Intelligent heuristic task scheduling algorithms mainly include the Genetic Algorithm (GA), Ant Colony Optimization, Particle Swarm Optimization (PSO), Simulated Annealing (SA), the Bat algorithm, the artificial immune algorithm, and Tabu Search (TS) [34,35]. These algorithms use heuristic rules to quickly obtain a solution to a problem, but they cannot guarantee the optimality of their solutions [36]. In the scheduling processes of edge computing tasks, we face many goals. We summarize the methods of task scheduling that aim to achieve the lowest delay and/or the lowest energy consumption.

5.1. Minimal Delay Time

The completion time of computing tasks offloaded to edge servers is mainly composed of three parts: the transmission time required to transmit the tasks to the edge servers or edge devices, the processing time required to execute the tasks in the edge servers or edge devices, and the time required to return the results after the completion of task processing. Therefore, reducing these three parts of task completion time can effectively improve QoS.

Yuchong et al. [24] propose a greedy algorithm to assign tasks to the servers with the shortest response time to minimize the total response time of all tasks. Its experimental results show that the average response time of tasks is reduced in comparison with a random assignment algorithm.
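The greedy minimum-completion-time idea behind Min-Min and the scheme of [24] can be sketched as follows. This is an illustrative implementation, not the authors' code: it repeatedly commits the task whose best achievable completion time, given the nodes' current loads, is smallest.

```python
def min_min_schedule(exec_time):
    """Min-Min style list scheduling. exec_time[t][n] is the estimated
    execution time of task t on node n. Returns the task-to-node
    assignment and the resulting makespan."""
    num_tasks, num_nodes = len(exec_time), len(exec_time[0])
    ready = [0.0] * num_nodes            # when each node becomes free
    unscheduled = set(range(num_tasks))
    assignment = {}
    while unscheduled:
        # Pick the (task, node) pair with the smallest completion time.
        t, n, finish = min(
            ((t, n, ready[n] + exec_time[t][n])
             for t in unscheduled for n in range(num_nodes)),
            key=lambda x: x[2])
        assignment[t] = n
        ready[n] = finish
        unscheduled.remove(t)
    return assignment, max(ready)
```

With two tasks and two nodes, exec_time = [[3, 1], [2, 4]], the scheduler first commits task 0 to node 1 (finishing at 1), then task 1 to node 0 (finishing at 2), for a makespan of 2; a random assignment could be as bad as 4.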
In intelligent manufacturing, a four-layer computing system supporting the operation of artificial intelligence tasks from the perspective of a network is proposed in [37]. On this basis, a two-stage algorithm based on a greedy strategy and a threshold strategy is proposed to schedule computing tasks on the edge, so as to meet the real-time requirements of intelligent manufacturing. The experimental results show that, compared with the traditional algorithm, it has good real-time performance and acceptable energy consumption. A Markov decision process (MDP) method is used to schedule computing tasks according to the queuing state of task buffers, the execution state of a local processing unit and the state of the transmission unit. Its goal is to minimize delay while enforcing a power constraint. Its simulation results show that, compared with other benchmark strategies, the proposed optimal random task scheduling strategy has a shorter average execution delay [22]. Zhang et al. [38] study a task scheduling problem based on delay minimization, and establish an accurate delay model. A delay Lyapunov function is defined, and a new task scheduling algorithm is proposed. Compared with the traditional algorithm depending on Little's law, it can reduce the maximum delay by 55%. Zhang et al. [39] model the delay of the communication and computing queues as a virtual delay queue. A new delay-based Lyapunov function is defined, and joint subcarrier allocation, base station selection, power control and virtual machine scheduling algorithms are proposed to minimize the delay. Zhang et al. [40] propose an optimization model of maximum allowable delay considering both average delay and delay jitter. An effective conservative heterogeneous earliest completion time algorithm is designed to solve it. Yuan et al.
[41] jointly consider CPU, memory and bandwidth resources, the load balance of all heterogeneous nodes in the edge layer, the maximum amount of energy, the maximum number of servers, and task queue stability in the cloud data center layer. They design a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize the profit of the system while ensuring that the response time limits of tasks are strictly met. Its simulation results show better performance than other algorithms, i.e., the firefly algorithm and genetic learning particle swarm optimization.

In the present research, treating the minimum delay as the scheduling goal, most researchers have improved traditional task scheduling algorithms by using intelligent optimization algorithms. The reason why these algorithms are not chosen in practical use is that they need multiple iterations to derive a relatively high-quality solution. Hence, when facing many tasks or online application scenarios with randomly arriving tasks, their execution may itself introduce delay.

5.2. Minimal Energy Consumption

The energy consumption of computing tasks is mainly composed of two parts: the energy for their processing, and the energy for their transmission from edge devices to an edge server and for returning results to the source node [36]. Therefore, on the premise of meeting the delay requirements of tasks, energy consumption should be minimized.

Xu et al. [30] propose a particle swarm optimization algorithm for the scheduling of tasks that can be offloaded to edge servers. It considers different kinds of computing resources in a Mobile Edge Computing (MEC) environment and aims to reduce mobile devices' energy consumption under response time constraints. Their experimental results show that it has stable convergence and optimal adaptability, and can effectively achieve the optimization goal. A heuristic algorithm based on MEC for efficient energy scheduling is proposed in [42].
The task scheduling among MEC servers and the downlink energy consumption of roadside units are comprehensively considered. The energy consumption of MEC servers is minimized while enforcing task delay constraints. The algorithm can effectively reduce the energy consumption and task processing delay, and solve the problem of task blocking. Li et al. [43] study the energy efficiency of an IoT system under an edge computing paradigm, and describe its dynamic process with a generalized queuing network model. They apply ordinal optimization technology to a Markov decision process to develop resource management and task scheduling schemes, meeting the challenge of the explosion of the Markov decision process search space. Their simulation results show that this method can effectively reduce the energy consumption in an IoT system. A dynamic voltage frequency scaling (DVFS) technology is proposed in [44], which can adjust the offloading rate of a device and the working frequency of its CPU to minimize the energy consumption under a time-delay constraint. Zhang et al. [45] propose a double deep Q-learning model. A learning algorithm based on experience replay is used to train the model parameters. It can improve training efficiency and reduce system energy consumption. An improved probability scheme is adopted to control the congestion of different priority packets transmitted to MEC in [46]. Based on this, an improved krill herd meta-heuristic optimization algorithm is proposed to minimize the energy consumption and queuing congestion of MEC. Bi et al. [47] propose a partial computation offloading method to minimize the total energy consumed by smart mobile devices (SMDs) and edge servers by jointly optimizing the offloading ratio of tasks, the CPU speeds of SMDs, the allocated bandwidth of available channels and the transmission power of each SMD in each time slot.
They formulate a nonlinear constrained optimization problem and present a novel hybrid meta-heuristic algorithm named genetic simulated-annealing-based particle swarm optimization (GSP) to find a close-to-optimal solution to the problem. Its experimental results prove that it achieves lower energy consumption in less convergence time than other optimization algorithms, including SA-based PSO, GA and SA.

Based on the current research, treating the lowest energy consumption as the scheduling goal, researchers have proposed many improved heuristic task scheduling algorithms. For cases where the delay constraint is not strong, most of the heuristic scheduling algorithms are able to generate a complete feasible schedule by gradually expanding a partial schedule. The more iterations, the greater the chance to find the best solution, and the lower the energy consumption.

5.3. Minimal Delay Time and Energy Consumption

In such application scenarios as virtual reality, augmented reality and driverless vehicles, the requirements on delay time and energy consumption are very strict. How to devise a task schedule that minimizes both the time delay and the energy consumption is therefore very important. The two objectives are, unfortunately, in conflict with each other.

The energy consumption and processing/transmission time of computing tasks are regarded as costs. With the support of a cloud computing center, a distributed algorithm for cost minimization is proposed by optimizing the offloading decision and resource allocation of a mobile edge computing system [48]. Its experimental results show that, compared with other existing algorithms, i.e., the greedy algorithm and those in [49,50], the cost can be reduced by about 30%. Zhang et al. [51] study the trade-off between system energy consumption and delay time.
Based on the Lyapunov optimization method, the optimal scheduling of the CPU cycle frequency and data transmission power of mobile devices is performed, and an online dynamic task allocation scheduling method is proposed to modify the data backlog of a queue. A large number of simulation experiments show that the scheme can realize a good trade-off between energy consumption and delay. A task scheduling problem of a computing system considering both time delay and energy consumption is proposed in [52]. A task allocation method based on reinforcement learning is proposed to solve the problem, which can ensure the timely execution of tasks and a good deal of efficient energy saving. The simulation results show that, compared with other existing methods, i.e., SpanEdge [53] and the suspension- and energy-aware offloading algorithm [54], it can reduce task processing time by 13–22% and task processing energy consumption by 1–10%. Note that Sen et al. [52] fail to consider the transmission energy in such a system.

In the existing research, a distributed algorithm, the Lyapunov optimization method, reinforcement learning and other task scheduling algorithms can be used to improve the overall performance of the system with the target of lowering both delay time and energy consumption. To balance both well, engineers can select traditional or heuristic task scheduling algorithms. The latter are more popular since they can handle dual objective functions well. All the discussed scheduling schemes and other methods [55–61] are summarized in Tables 1 and 2.

Table 1. Summary of task scheduling schemes.
Minimize delay time (studies [22,24,37–41]):
- Markov decision process method
- Greedy algorithm
- Two-stage algorithm based on a greedy strategy and a threshold strategy
- Establishing the delay model of a cellular edge computing system and defining a delay Lyapunov function
- Defining a delay Lyapunov function and proposing a joint subcarrier allocation, base station selection, power control and virtual machine scheduling algorithm
- Conservative heterogeneous earliest completion time algorithm
- A profit-maximized collaborative computation offloading and resource allocation algorithm that guarantees the response time limits of tasks

Minimize energy consumption (studies [30,42–47]):
- Particle swarm optimization-based task scheduling algorithm for multi-resource computation offloading
- Heuristic algorithm for task scheduling among Mobile Edge Computing (MEC) servers that considers the downlink energy consumption of roadside units
- Ordinal optimization technology applied to a Markov decision process
- Dynamic voltage frequency scaling (DVFS) technology
- Double deep Q-learning model
- Improved krill herd meta-heuristic optimization algorithm
- A novel genetic simulated-annealing-based particle swarm optimization (GSP) algorithm that produces a close-to-optimal solution

Minimize both delay time and energy consumption (studies [48,51,52]):
- Dynamic task allocation and scheduling algorithm based on a Lyapunov optimization method
- Distributed algorithm
- Reinforcement learning

Table 2. Summary of algorithms for different optimization objectives (ES = edge server, CC = cloud center, Unc = uncertain and N-O = near-optimal).
Each entry lists the scheme, then (Optimal; Complexity; Where), then its pros and cons.

Objective: Delay time
- One-dimensional search algorithm [22] (Yes; Low; ES): Achieves the minimum average delay in various specific scenarios, but not general ones.
- Greedy algorithm [24] (Yes; Medium; ES): Saves time by 20–30% in comparison to the random algorithm, but only for a simple M/M/1 queuing system in a specific scenario.
- Customized TS algorithm [24] (Yes; Medium; ES): Efficient and suitable for scenarios with a large number of tasks, but only for a simple M/M/1 queuing system in a specific scenario.
- Lyapunov function-based task scheduling algorithm [38] (NP; Medium; ES): More accurate than the other delay models, and smaller delay than that of a traditional scheduling algorithm.
- Efficient conservative heterogeneous earliest-finish-time algorithm [40] (Unc; Medium; ES): Reduces the delays of task offloading and considers the task execution order.
- SA-based migrating birds optimization procedure [41] (N-O; Medium; ES/CC): Provides a high-accuracy and fine-grained energy model by jointly considering central processing unit (CPU), memory and bandwidth resource limits and the load balance requirements of all nodes, but only for a simple M/M/1 system.
- Sub-gradient algorithm [55] (Yes; Medium; ES): Provides a closed-form solution suitable for a specific partial compression offloading scenario but not for general ones; reduces the end-to-end latency.

Objective: Energy consumption
- Energy-efficient multi-resource computation offloading task scheduling algorithm [30] (Yes; Medium; ES/CC): Comprehensively considers the workload conditions among mobile devices, edge servers and cloud centers; has a stable convergence speed and effectively reduces the power consumption in a specific scenario.
- Ordinal optimization-based Markov decision process [43] (Unc; High; ES/CC): Effective and efficient; makes a good tradeoff between delay time and energy consumption.
- Ben's genetic algorithm [56] (N-O; Medium; ES): Effectively solves the problem of choosing which edge server to offload to, and minimizes the total energy consumption, but works only for a simple M/M/1 queue model.
- Algorithms for partial and binary offloading with energy consumption optimization [57] (Yes; High; ES): Joint computation and communication cooperation considering both partial and binary offloading cases; reduces the power consumption effectively; obtains the optimal solution in the partial offloading case.
- Artificial fish swarm algorithm [58] (Yes; Medium; ES): Guarantees global optimization, strong robustness and fast convergence for a specific problem, and reduces the power consumption.
- Multidimensional numerical method [17] (Yes; High; ES): Establishes the conditions under which total or no offloading is optimal; reduces the execution delay of applications and minimizes the total consumed energy, but fails to consider latency constraints.

Objective: Delay/energy consumption
- Software-defined task offloading/task placement algorithm [59] (Yes; Medium; ES): Solves the computing resource allocation and task placement problems; reduces task duration and energy cost compared to random and uniform computation offloading schemes by considering the computation amount and data size of a task in a software-defined ultra-dense network.
- Energy-aware mobility management algorithm [60] (N-O; High; ES): Makes a good tradeoff between delay time and energy consumption; deals with various practical deployment scenarios including BSs dynamically switching on and off, but fails to consider the capability of a cloud server.
- Lyapunov optimization on time and energy cost [61] (Yes; High; ES/CC): Takes full advantage of green energy without significantly increasing the response time and has better optimization ability.
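Several of the delay-oriented schemes in Table 2 assume a simple M/M/1 queuing model at the edge server, where the mean sojourn time of a task is 1/(mu - lambda). A minimal sketch of a threshold-style local-versus-offload rule built on that formula is shown below; all rates and delays are invented for illustration and are not taken from any cited scheme.

```python
def mm1_sojourn_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time a task spends in an M/M/1 queue (waiting + service).

    Valid only when the queue is stable, i.e., arrival_rate < service_rate.
    """
    if arrival_rate >= service_rate:
        return float("inf")  # unstable queue: delay grows without bound
    return 1.0 / (service_rate - arrival_rate)

def should_offload(local_rate: float, server_rate: float,
                   server_load: float, transmit_delay: float) -> bool:
    """Offload iff the server-side delay (transmission + queueing)
    beats local execution under the server's current arrival rate."""
    local_delay = mm1_sojourn_time(0.0, local_rate)  # no local queue assumed
    edge_delay = transmit_delay + mm1_sojourn_time(server_load, server_rate)
    return edge_delay < local_delay

# Illustrative numbers: a slow device (1 task/s) and a fast but loaded server.
print(should_offload(local_rate=1.0, server_rate=10.0,
                     server_load=8.0, transmit_delay=0.2))  # True: 0.7 s < 1.0 s
```

As the server's load approaches its service rate, the queueing term dominates and the same rule keeps tasks local, which is exactly the overload effect the survey's introduction warns about.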
Issues and Future Directions

An edge server has more computing power and storage capacity than edge devices, and edge computing has lower task transmission delay than cloud computing. However, due to the limitation of edge resources, the task scheduling problem of edge computing is NP-hard: its exact global optimal solution cannot be obtained in general for sizable problems, so high-performance solution methods are highly valuable. Although there have been many studies on collaborative scheduling of computing tasks in edge computing [47,62,63], the following issues should be addressed:

(1) Consider the occurrence of emergencies. In an edge computing paradigm, a system involves the coordination of devices, edge servers and network links, each of which plays an important role. Therefore, if any device or edge server shuts down, or the network fails, during task processing, scheduled task execution can be significantly impacted. How to account for emergencies in the task scheduling process, so that tasks can still be executed successfully, is a widely open problem. In other words, researchers have to take failure probabilities into consideration in task scheduling so that the risk of failing some important tasks is minimized.

(2) Consider multiple optimization objectives. At present, most research develops task schedules based on the optimization goals of delay time and/or energy consumption; other QoS indicators of user tasks are rarely considered. Therefore, they should be added to schedule optimization, and a task scheduling scheme with comprehensive optimization goals should be formulated to achieve a high-quality user service experience as well.

(3) Consider data security issues. Data security [64–69] is one of the most important concerns. Security protocols and encryption algorithms are mostly used to achieve the security and privacy of data, but their induced delay and energy consumption are rarely considered. Therefore, it is worthwhile to develop light- to heavy-weight security protocols and encryption algorithms such that the best trade-offs between performance and data security levels can be made.

(4) Find any-time task scheduling algorithms. Research on task scheduling algorithms mostly uses improved traditional and heuristic algorithms. These need long iteration times to achieve a near-optimal or optimal schedule. In practice, we must offer a feasible schedule in a short time; we can then improve it if enough computing time remains before the schedule must be deployed. Hence, fast algorithms that produce a first feasible schedule, followed by intelligent optimization that improves it, are highly desired.

(5) Add some important factors to optimization goals. Generally, the network bandwidth and CPU of task offloading locations are taken into consideration in the process of offloading tasks at the edge, but many other factors, such as the offloading ratio of tasks, are not yet considered when seeking the best offloading strategy.

(6) Balance partial computation offloading of Deep Learning (DL) models. To improve the intelligence of applications, DL is increasingly adopted in various areas, e.g., face recognition, natural language processing, interactive gaming, and augmented reality. Due to the limited resources of edge hardware, lightweight DL models are suitable for edge devices. However, in order to accelerate the inference speed of models and minimize the energy consumed by devices, models need to be developed and partially offloaded. It remains challenging to determine partial offloading for DL model training and to balance the resource consumption between edge devices and edge servers.
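The any-time pattern called for in direction (4), produce a feasible schedule immediately, then improve it while time remains, can be sketched in a few lines. The greedy first pass, the random-move improvement step, and the makespan objective below are all illustrative choices, not a method from any surveyed paper.

```python
import random
import time

def greedy_schedule(tasks, servers):
    """Fast first pass: assign each task to the currently least-loaded server."""
    load = {s: 0.0 for s in servers}
    assign = {}
    for t, cost in tasks.items():
        s = min(load, key=load.get)
        assign[t] = s
        load[s] += cost
    return assign

def makespan(assign, tasks, servers):
    """Completion time of the busiest server under a given assignment."""
    load = {s: 0.0 for s in servers}
    for t, s in assign.items():
        load[s] += tasks[t]
    return max(load.values())

def anytime_schedule(tasks, servers, budget_s=0.05, rng=random.Random(0)):
    """Return a feasible schedule at once, then keep improving it by random
    single-task moves until the time budget expires (keeping only improvements)."""
    best = greedy_schedule(tasks, servers)
    best_cost = makespan(best, tasks, servers)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        cand = dict(best)
        cand[rng.choice(list(tasks))] = rng.choice(servers)  # move one task
        cost = makespan(cand, tasks, servers)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

tasks = {f"t{i}": c for i, c in enumerate([5, 3, 8, 2, 7, 4])}
schedule, cost = anytime_schedule(tasks, servers=["es1", "es2", "es3"])
```

The key property is that interrupting the loop at any moment still yields a usable schedule; a longer budget only tightens the makespan.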
Conclusions

This paper analyzes and summarizes the computing scenarios, computing tasks, optimization objective formulation and computing task scheduling methods for the scheduling process of an edge computing system. According to the resources in edge computing, the computing scenarios of scheduling tasks are divided into four categories, and their composition and characteristics are analyzed in detail. According to where their execution takes place, computing tasks can be accomplished via local execution, partial offloading and full offloading. We then formulate the optimization problem of minimizing delay time and energy consumption for computation offloading in an edge computing system with different queuing models, and indicate its solution complexity. With regard to computing task scheduling methods in edge computing, most existing studies set their optimization goal to minimize delay, energy consumption or both. Improved traditional task scheduling algorithms and some intelligent optimization algorithms can be used to solve such optimization problems. For the reviewed optimization problems, most researchers tend to use improved heuristic or intelligent optimization algorithms instead of mathematical programming ones due to the latter's computational complexity. This paper also discusses the issues and future directions in the area of collaborative scheduling of computing tasks in an edge computing paradigm. It should stimulate further research on collaborative scheduling and its applications in the context of edge computing, e.g., [70–72].

Author Contributions: S.C. conceived the idea and structure of this manuscript, wrote this paper by making a survey and summary of other papers, and partially acquired funding support. Q.L. contributed to the early versions of the manuscript. M.Z.
guided the writing of this manuscript, including its structure and content, offered some key ideas, and supervised the project. A.A. suggested some ideas and partially acquired funding support. All authors contributed to writing, reviewing, and editing the paper. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported in part by the National Key Research and Development Program of China (No. 2018YFB1700202) and in part by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. RG-21-135-39.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Patidar, S.; Rane, D.; Jain, P. A Survey Paper on Cloud Computing. In Proceedings of the 2012 Second International Conference on Advanced Computing & Communication Technologies, Rohtak, Haryana, India, 7–8 January 2012; pp. 394–398.
2. Moghaddam, F.F.; Ahmadi, M.; Sarvari, S.; Eslami, M.; Golkar, A. Cloud computing challenges and opportunities: A survey. In Proceedings of the 2015 1st International Conference on Telematics and Future Generation Networks (TAFGEN), Kuala Lumpur, Malaysia, 26–27 May 2015; pp. 34–38.
3. Varghese, B.; Wang, N.; Barbhuiya, S.; Kilpatrick, P.; Nikolopoulos, D.S. Challenges and Opportunities in Edge Computing. In Proceedings of the 2016 IEEE International Conference on Smart Cloud (SmartCloud), New York, NY, USA, 18–20 November 2016; pp. 20–26.
4. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [CrossRef]
5. Rincon, J.A.; Guerra-Ojeda, S.; Carrascosa, C.; Julian, V. An IoT and Fog Computing-Based Monitoring System for Cardiovascular Patients with Automatic ECG Classification Using Deep Neural Networks.
Sensors 2020, 20, 7353. [CrossRef]
6. Liu, B. Research on collaborative scheduling technology based on edge computing. Master's Thesis, South China University of Technology, Guangzhou, China, 2019.
7. Jiao, J. Cooperative Task Scheduling in Mobile Edge Computing System. Master's Thesis, University of Electronic Science and Technology, Chengdu, China, 2018.
8. Zhao, J.; Li, Q.; Gong, Y.; Zhang, K. Computation Offloading and Resource Allocation for Cloud Assisted Mobile Edge Computing in Vehicular Networks. IEEE Trans. Veh. Technol. 2019, 68, 7944–7956. [CrossRef]
9. Lyu, X.; Ni, W.; Tian, H.; Liu, R.P.; Wang, X.; Giannakis, G.B.; Paulraj, A. Optimal Schedule of Mobile Edge Computing for Internet of Things Using Partial Information. IEEE J. Sel. Areas Commun. 2017, 35, 2606–2615. [CrossRef]
10. Mao, Y.; Zhang, J.; Letaief, K.B. Joint Task Offloading Scheduling and Transmit Power Allocation for Mobile-Edge Computing Systems. In Proceedings of the 2017 IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, 19–22 March 2017; pp. 1–6.
11. Tao, X.; Ota, K.; Dong, M.; Qi, H.; Li, K. Performance Guaranteed Computation Offloading for Mobile-Edge Cloud Computing. IEEE Wirel. Commun. Lett. 2017, 6, 774–777. [CrossRef]
12. Kim, Y.; Song, C.; Han, H.; Jung, H.; Kang, S. Collaborative Task Scheduling for IoT-Assisted Edge Computing. IEEE Access 2020, 8, 216593–216606. [CrossRef]
13. Wang, S.; Zafer, M.; Leung, K.K. Online placement of multi-component applications in edge computing environments. IEEE Access 2017, 5, 2514–2533. [CrossRef]
14. Zhao, T.; Zhou, S.; Guo, X.; Zhao, Y.; Niu, Z. A Cooperative Scheduling Scheme of Local Cloud and Internet Cloud for Delay-Aware Mobile Cloud Computing. In Proceedings of the 2015 IEEE Globecom Workshops (GC Wkshps), San Diego, CA, USA, 6–10 December 2015; pp. 1–6.
15. Kao, Y.-H.; Krishnamachari, B.; Ra, M.-R.; Bai, F. Hermes: Latency optimal task assignment for resource-constrained mobile computing. In Proceedings of the 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, Hong Kong, 26 April–1 May 2015; pp. 1894–1902.
16. Cuervo, E.; Balasubramanian, A.; Cho, D.; Wolman, A.; Saroiu, S.; Chandra, R.; Bahl, P. MAUI: Making smartphones last longer with code offload. In Proceedings of the MobiSys, ACM, San Francisco, CA, USA, 15–18 June 2010; pp. 49–62.
17. Munoz, O.; Pascual-Iserte, A.; Vidal, J. Optimization of Radio and Computational Resources for Energy Efficiency in Latency-Constrained Application Offloading. IEEE Trans. Veh. Technol. 2015, 64, 4738–4755. [CrossRef]
18. Yang, L.; Cao, J.; Tang, S.; Han, D.; Suri, N. Run Time Application Repartitioning in Dynamic Mobile Cloud Environments. IEEE Trans. Cloud Comput. 2014, 4, 336–348. [CrossRef]
19. Yang, L.; Cao, J.; Cheng, H.; Ji, Y. Multi-User Computation Partitioning for Latency Sensitive Mobile Cloud Applications. IEEE Trans. Comput. 2015, 64, 2253–2266. [CrossRef]
20. Liu, L.; Chang, Z.; Guo, X.; Ristaniemi, T. Multi-objective optimization for computation offloading in mobile-edge computing. In Proceedings of the 2017 IEEE Symposium on Computers and Communications (ISCC), Heraklion, Greece, 3–6 July 2017; pp. 832–837.
21. Carson, K.; Thomason, J.; Wolski, R.; Krintz, C.; Mock, M. Mandrake: Implementing Durability for Edge Clouds. In Proceedings of the 2019 IEEE International Conference on Edge Computing (EDGE), Milan, Italy, 8–13 July 2019; pp. 95–101.
22. Liu, J.; Mao, Y.; Zhang, J.; Letaief, K.B. Delay-optimal computation task scheduling for mobile-edge computing systems.
In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 1451–1455.
23. Lazar, A. The throughput time delay function of an M/M/1 queue (Corresp.). IEEE Trans. Inf. Theory 1983, 29, 914–918. [CrossRef]
24. Zhang, G.; Zhang, W.; Cao, Y.; Li, D.; Wang, L. Energy-Delay Tradeoff for Dynamic Offloading in Mobile-Edge Computing System with Energy Harvesting Devices. IEEE Trans. Industr. Inform. 2018, 14, 4642–4655. [CrossRef]
25. Chen, X. Decentralized Computation Offloading Game for Mobile Cloud Computing. IEEE Trans. Parallel Distrib. Syst. 2015, 26, 974–983. [CrossRef]
26. Chen, X.; Jiao, L.; Li, W.; Fu, X. Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing. IEEE/ACM Trans. Netw. 2016, 24, 2795–2808. [CrossRef]
27. Yuan, H.; Bi, J.; Zhou, M.; Liu, Q.; Ammari, A.C. Biobjective Task Scheduling for Distributed Green Data Centers. IEEE Trans. Autom. Sci. Eng. 2020. Available online: https://ieeexplore.ieee.org/document/8951255 (accessed on 29 December 2020). [CrossRef]
28. Guo, X.; Liu, S.; Zhou, M.; Tian, G. Dual-Objective Program and Scatter Search for the Optimization of Disassembly Sequences Subject to Multiresource Constraints. IEEE Trans. Autom. Sci. Eng. 2018, 15, 1091–1103. [CrossRef]
29. Fu, Y.; Zhou, M.; Guo, X.; Qi, L. Scheduling Dual-Objective Stochastic Hybrid Flow Shop with Deteriorating Jobs via Bi-Population Evolutionary Algorithm. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 5037–5048. [CrossRef]
30. Sheng, Z.; Pfersich, S.; Eldridge, A.; Zhou, J.; Tian, D.; Leung, V.C.M. Wireless acoustic sensor networks and edge computing for rapid acoustic monitoring. IEEE/CAA J. Autom. Sin. 2019, 6, 64–74. [CrossRef]
31. Yang, G.; Zhao, X.; Huang, J. Overview of task scheduling algorithms in cloud computing. Appl. Electron. Tech. J. 2019, 45, 13–17.
32. Zhang, P.; Zhou, M.; Wang, X.
An Intelligent Optimization Method for Optimal Virtual Machine Allocation in Cloud Data Centers. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1725–1735. [CrossRef]
33. Yuan, H.; Bi, J.; Zhou, M. Spatial Task Scheduling for Cost Minimization in Distributed Green Cloud Data Centers. IEEE Trans. Autom. Sci. Eng. 2018, 16, 729–740. [CrossRef]
34. Yuan, H.; Zhou, M.; Liu, Q.; Abusorrah, A. Fine-Grained Resource Provisioning and Task Scheduling for Heterogeneous Applications in Distributed Green Clouds. IEEE/CAA J. Autom. Sin. 2020, 7, 1380–1393.
35. Alfakih, T.; Hassan, M.M.; Gumaei, A.; Savaglio, C.; Fortino, G. Task Offloading and Resource Allocation for Mobile Edge Computing by Deep Reinforcement Learning Based on SARSA. IEEE Access 2020, 8, 54074–54084. [CrossRef]
36. Yuchong, L.; Jigang, W.; Yalan, W.; Long, C. Task Scheduling in Mobile Edge Computing with Stochastic Requests and M/M/1 Servers. In Proceedings of the 2019 IEEE 21st International Conference on High Performance Computing and Communications, IEEE 17th International Conference on Smart City, IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), Zhangjiajie, China, 10–12 August 2019; pp. 2379–2382.
37. Li, X.; Wan, J.; Dai, H.-N.; Imran, M.; Xia, M.; Celesti, A. A Hybrid Computing Solution and Resource Scheduling Strategy for Edge Computing in Smart Manufacturing. IEEE Trans. Ind. Inform. 2019, 15, 4225–4234. [CrossRef]
38. Zhang, Y.; Xie, M. A More Accurate Delay Model based Task Scheduling in Cellular Edge Computing Systems. In Proceedings of the 2019 IEEE 5th International Conference on Computer and Communications (ICCC), Chengdu, China, 6–9 December 2019; pp. 72–76.
39. Zhang, Y.; Du, P. Delay-Driven Computation Task Scheduling in Multi-Cell Cellular Edge Computing Systems. IEEE Access 2019, 7, 149156–149167. [CrossRef]
40.
Zhang, W.; Zhang, Z.; Zeadally, S.; Chao, H.-C. Efficient Task Scheduling with Stochastic Delay Cost in Mobile Edge Computing. IEEE Commun. Lett. 2019, 23, 4–7. [CrossRef]
41. Yuan, H.; Zhou, M. Profit-Maximized Collaborative Computation Offloading and Resource Allocation in Distributed Cloud and Edge Computing Systems. IEEE Trans. Autom. Sci. Eng. 2020. Available online: https://ieeexplore.ieee.org/document/9140317 (accessed on 29 December 2020). [CrossRef]
42. Xu, J.; Li, X.; Ding, R.; Liu, X. Energy efficient multi-resource computation offloading strategy in mobile edge computing. CIMS 2019, 25, 954–961.
43. Ning, Z.; Huang, J.; Wang, X.; Rodrigues, J.J.P.C.; Guo, L. Mobile Edge Computing-Enabled Internet of Vehicles: Toward Energy-Efficient Scheduling. IEEE Netw. 2019, 33, 198–205. [CrossRef]
44. Li, S.; Huang, J. Energy Efficient Resource Management and Task Scheduling for IoT Services in Edge Computing Paradigm. In Proceedings of the 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC), Guangzhou, China, 12–15 December 2017; pp. 846–851.
45. Yoo, W.; Yang, W.; Chung, J. Energy Consumption Minimization of Smart Devices for Delay-Constrained Task Processing with Edge Computing. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 4–6 January 2020; pp. 1–3.
46. Zhang, Q.; Lin, M.; Yang, L.T.; Chen, Z.; Khan, S.U.; Li, P. A Double Deep Q-Learning Model for Energy-Efficient Edge Scheduling. IEEE Trans. Serv. Comput. 2018, 12, 739–749. [CrossRef]
47. Yang, Y.; Ma, Y.; Xiang, W.; Gu, X.; Zhao, H. Joint Optimization of Energy Consumption and Packet Scheduling for Mobile Edge Computing in Cyber-Physical Networks. IEEE Access 2018, 6, 15576–15586. [CrossRef]
48.
Bi, J.; Yuan, H.; Duanmu, S.; Zhou, M.C.; Abusorrah, A. Energy-optimized Partial Computation Offloading in Mobile Edge Computing with Genetic Simulated-annealing-based Particle Swarm Optimization. IEEE Internet Things J. 2020. Available online: https://ieeexplore.ieee.org/document/9197634 (accessed on 29 December 2020). [CrossRef]
49. Yu, H.; Wang, Q.; Guo, S. Energy-Efficient Task Offloading and Resource Scheduling for Mobile Edge Computing. In Proceedings of the 2018 IEEE International Conference on Networking, Architecture and Storage (NAS), Chongqing, China, 11–14 October 2018; pp. 1–4.
50. Mao, Y.; Zhang, J.; Song, S.H.; Letaief, K.B. Stochastic joint radio and computational resource management for multi-user mobile-edge computing systems. IEEE Trans. Wirel. Commun. 2017, 16, 5994–6009. [CrossRef]
51. Dinh, T.Q.; Tang, J.; La, Q.D.; Quek, T.Q.S. Offloading in Mobile Edge Computing: Task Allocation and Computational Frequency Scaling. IEEE Trans. Commun. 2017, 65, 1.
52. Sen, T.; Shen, H. Machine Learning based Timeliness-Guaranteed and Energy-Efficient Task Assignment in Edge Computing Systems. In Proceedings of the 2019 IEEE 3rd International Conference on Fog and Edge Computing (ICFEC), Larnaca, Cyprus, 14–17 May 2019; pp. 1–10.
53. Sajjad, H.P.; Danniswara, K.; Al-Shishtawy, A.; Vlassov, V. SpanEdge: Towards Unifying Stream Processing over Central and Near-the-Edge Data Centers. In Proceedings of the 2016 IEEE/ACM Symposium on Edge Computing (SEC), Washington, DC, USA, 27–28 October 2016; pp. 168–178.
54. Dong, Z.; Liu, Y.; Zhou, H.; Xiao, X.; Gu, Y.; Zhang, L.; Liu, C. An energy-efficient offloading framework with predictable temporal correctness. In Proceedings of the SEC '17: IEEE/ACM Symposium on Edge Computing, San Jose, CA, USA, 12–14 October 2017; pp. 1–12.
55.
Ren, J.; Yu, G.; Cai, Y.; He, Y. Latency Optimization for Resource Allocation in Mobile-Edge Computation Offloading. IEEE Trans. Wirel. Commun. 2018, 17, 5506–5519. [CrossRef]
56. Wang, J.; Yue, Y.; Wang, R.; Yu, M.; Yu, J.; Liu, H.; Ying, X.; Yu, R. Energy-Efficient Admission of Delay-Sensitive Tasks for Multi-Mobile Edge Computing Servers. In Proceedings of the 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), Tianjin, China, 4–6 December 2019; pp. 747–753.
57. Cao, X.; Wang, F.; Xu, J.; Zhang, R.; Cui, S. Joint Computation and Communication Cooperation for Energy-Efficient Mobile Edge Computing. IEEE Internet Things J. 2019, 6, 4188–4200. [CrossRef]
58. Zhang, H.; Guo, J.; Yang, L.; Li, X.; Ji, H. Computation offloading considering fronthaul and backhaul in small-cell networks integrated with MEC. In Proceedings of the 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Atlanta, GA, USA, 1–4 May 2017; pp. 115–120.
59. Chen, M.; Hao, Y. Task Offloading for Mobile Edge Computing in Software Defined Ultra-Dense Network. IEEE J. Sel. Areas Commun. 2018, 36, 587–597. [CrossRef]
60. Sun, Y.; Zhou, S.; Xu, J. EMM: Energy-Aware Mobility Management for Mobile Edge Computing in Ultra Dense Networks. IEEE J. Sel. Areas Commun. 2017, 35, 2637–2646. [CrossRef]
61. Nan, Y.; Li, W.; Bao, W.; Delicato, F.C.; Pires, P.F.; Dou, Y.; Zomaya, A.Y. Adaptive Energy-Aware Computation Offloading for Cloud of Things Systems. IEEE Access 2017, 5, 23947–23957. [CrossRef]
62. Sahni, Y.; Cao, J.; Yang, L.; Ji, Y. Multi-Hop Offloading of Multiple DAG Tasks in Collaborative Edge Computing. IEEE Internet Things J. 2020. Available online: https://ieeexplore.ieee.org/document/9223724 (accessed on 29 December 2020). [CrossRef]
63. Sahni, Y.; Cao, J.; Yang, L.; Ji, Y.
Multi-Hop Multi-Task Partial Computation Offloading in Collaborative Edge Computing. IEEE Trans. Parallel Distrib. Syst. 2020, 32, 1.
64. Zhang, P.; Zhou, M.; Fortino, G. Security and trust issues in Fog computing: A survey. Futur. Gener. Comput. Syst. 2018, 88, 16–27. [CrossRef]
65. Wang, X.; Ning, Z.; Zhou, M.; Hu, X.; Wang, L.; Zhang, Y.; Yu, F.R.; Hu, B. Privacy-Preserving Content Dissemination for Vehicular Social Networks: Challenges and Solutions. IEEE Commun. Surv. Tutor. 2018, 21, 1314–1345. [CrossRef]
66. Huang, X.; Ye, D.; Yu, R.; Shu, L. Securing parked vehicle assisted fog computing with blockchain and optimal smart contract design. IEEE/CAA J. Autom. Sin. 2020, 7, 426–441. [CrossRef]
67. Zhang, Y.; Du, L.; Lewis, F.L. Stochastic DoS attack allocation against collaborative estimation in sensor networks. IEEE/CAA J. Autom. Sin. 2020, 7, 1–10. [CrossRef]
68. Zhang, P.; Zhou, M. Security and Trust in Blockchains: Architecture, Key Technologies, and Open Issues. IEEE Trans. Comput. Soc. Syst. 2020, 7, 790–801. [CrossRef]
69. Oevermann, J.; Weber, P.; Tretbar, S.H. Encapsulation of Capacitive Micromachined Ultrasonic Transducers (CMUTs) for the Acoustic Communication between Medical Implants. Sensors 2021, 21, 421. [CrossRef]
70. Deng, S.; Zhao, H.; Fang, W.; Yin, J.; Dustdar, S.; Zomaya, A.Y. Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence. IEEE Internet Things J. 2020, 7, 7457–7469. [CrossRef]
71. Fortino, G.; Messina, F.; Rosaci, D.; Sarne, G.M.L. ResIoT: An IoT social framework resilient to malicious activities. IEEE/CAA J. Autom. Sin. 2020, 7, 1263–1278. [CrossRef]
72. Wang, F.-Y. Parallel Intelligence: Belief and Prescription for Edge Emergence and Cloud Convergence in CPSS. IEEE Trans. Comput. Soc. Syst. 2020, 7, 1105–1110. [CrossRef]


Sensors (Basel, Switzerland) , Volume 21 (3) – Jan 24, 2021


Publisher: Pubmed Central
Copyright: © 2021 by the authors.
eISSN: 1424-8220
DOI: 10.3390/s21030779


sensors Review Recent Advances in Collaborative Scheduling of Computing Tasks in an Edge Computing Paradigm 1 , 2 3 1 , 4 , 5 , 5 Shichao Chen , Qijie Li , Mengchu Zhou * and Abdullah Abusorrah Faculty of Information Tecnology, Macau University of Science and Technology, Macau 999078, China; shichao.chen@ia.ac.cn The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China School of Mechanical and Electrical Engineering and Automation, Harbin Institute of Technology, Shenzhen 518000, China; liqijie1998@163.com Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA Department of Electrical and Computer Engineering, Faculty of Engineering, and Center of Research Excellence in Renewable Energy and Power Systems, King Abdulaziz University, Jeddah 21481, Saudi Arabia; aabusorrah@kau.edu.sa * Correspondence: mengchu.zhou@njit.edu Abstract: In edge computing, edge devices can offload their overloaded computing tasks to an edge server. This can give full play to an edge server ’s advantages in computing and storage, and efficiently execute computing tasks. However, if they together offload all the overloaded computing tasks to an edge server, it can be overloaded, thereby resulting in the high processing delay of many computing tasks and unexpectedly high energy consumption. On the other hand, the resources in idle edge devices may be wasted and resource-rich cloud centers may be underutilized. Therefore, it is essential to explore a computing task collaborative scheduling mechanism with an edge server, a cloud center and edge devices according to task characteristics, optimization objectives and system status. It can help one realize efficient collaborative scheduling and precise execution of all computing Citation: Chen, S.; Li, Q.; Zhou, M.; tasks. 
This work analyzes and summarizes the edge computing scenarios in an edge computing Abusorrah, A. Recent Advances in paradigm. It then classifies the computing tasks in edge computing scenarios. Next, it formulates the Collaborative Scheduling of Computing Tasks in an Edge optimization problem of computation offloading for an edge computing system. According to the Computing Paradigm. Sensors 2021, problem formulation, the collaborative scheduling methods of computing tasks are then reviewed. 21, 779. https://doi.org/10.3390/ Finally, future research issues for advanced collaborative scheduling in the context of edge computing s21030779 are indicated. Received: 28 December 2020 Keywords: collaborative scheduling; edge computing; internet of things; limited resources; optimiza- Accepted: 12 January 2021 tion; task offloading Published: 24 January 2021 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in 1. Introduction published maps and institutional affil- With the increasing deployment and application of Internet of Things (IoT), more iations. and more intelligent devices, e.g., smart sensors and smart phones, can access a network, resulting in a considerable amount of network data. Despite that their computing power is very rapidly increasing, they are unable to achieve real-time and efficient execution due to their limited computing resources and ever-demanding applications. When it faces highly Copyright: © 2021 by the authors. complex computing tasks and services, cloud computing [1,2] can process these tasks to Licensee MDPI, Basel, Switzerland. achieve device–cloud collaboration. In a cloud computing paradigm, users can rely on This article is an open access article extremely rich storage and computing resources of a cloud computing center to expand the distributed under the terms and computing and storage power of devices, and achieve the rapid processing of computing- conditions of the Creative Commons intensive tasks. 
Yet there are some disadvantages in the device–cloud collaboration mode, Attribution (CC BY) license (https:// such as incurring high transmission delay and pushing network bandwidth requirement creativecommons.org/licenses/by/ to the limit. 4.0/). Sensors 2021, 21, 779. https://doi.org/10.3390/s21030779 https://www.mdpi.com/journal/sensors Sensors 2021, 21, 779 2 of 22 In order to solve the problems of cloud computing for data processing, edge com- puting [3,4] is put forward to provide desired computing services [5] for users by using computing, network, storage and other resources on edge, that is near a physical entity or data source. Compared with cloud computing, some applications of users in edge computing can be processed on an edge server near intelligent devices, thus significantly reducing data transmission delay and network bandwidth load required in edge-cloud collaboration. Eliminating long-distance data transmissions encountered in device–cloud computing brings another advantage to edge computing, i.e., the latter can more effectively guarantee user data security. As a result, it has become an important development trend to use edge computing to accomplish various computing tasks for intelligent devices [6,7]. These devices are called edge devices in this paper. The traditional scheduling strategies of edge computing tasks are to offload all computing-intensive tasks of edge devices to an edge server for processing [8–10]. How- ever, it may result in the waste of computing and storage sources in edge devices and cloud computing centers. In addition, many devices may access an edge server at the same time period. As a result, the server may face too many computing tasks, thus resulting in a long queue of tasks. This increases the completion time of all queued tasks, even causing the processing delay of tasks in the edge server to exceed that at the edge devices. 
On the other hand, many edge devices may be idle, resulting in a waste of their computing resources; and resource-rich cloud centers may be underutilized. To solve the above problems, we can combine a cloud center, edge servers and edge devices together to efficiently handle the computing tasks of edge devices via task offloading. According to the computing tasks’ characteristics, optimization objectives and system status, we should utilize the computing and storage resources of a cloud center, edge servers and edge devices, and schedule computing tasks to them for processing on demand. It can effectively reduce the load of edge servers and improve the utilization of resources, and reduce the average completion time of computing tasks in a system. This paper focuses on the important problem of collaborative scheduling of com- puting tasks in an edge computing paradigm under IoT. It is noted that edge computing systems can be viewed as a special class of distributed computing systems. Traditional task scheduling in distributed computing focuses on distributing and scheduling a large task into multiple similarly powerful computing nodes and do not have task off-loading issues in edging computing [9–12]. Edging computing arises to handle an IoT scenario where edge devices are resource-constrained and relatively independent. In Section 2, we analyze the edge computing scenarios, and clarify their composition, characteristics and application fields. In Section 3, we analyze the computing tasks, and classify them together with factors influencing their completion. We formulate the optimization problem of computation offloading with multiple objective functions for an edge computing system in Section 4. Based on the computing scenarios, computation tasks and formulated optimization model, we survey and summarize the collaborative scheduling methods of computing tasks in Section 5. 
This work is concluded in Section 6 by indicating the open issues for us to build a desired collaborative scheduling system for edge computing.

2. Computing Scenarios

In IoT, computing resources on the edge are mainly composed of edge devices and edge servers. In order to take advantage of cloud centers, we also consider them as part of the whole system in task scheduling. In general, a cloud center contains a large number of computing servers with high computing power. It is very important to reasonably use the computing, storage, bandwidth and other system resources to process computing tasks efficiently. In this section, different computing scenarios are analyzed and summarized according to the composition of computing resources.

On the edge, we have an edge server, edge devices and an edge scheduler. The server can provide computing, storage, bandwidth and other resources to support computing services for the edge computing tasks. Edge devices can execute computing tasks and may offload such tasks to the server and other available/idle edge devices. They have computing, storage, network and other resources, and can provide limited computing services for edge computing tasks. They have much fewer resources than edge servers do. An edge scheduler receives the computing tasks offloaded by edge devices, and provides scheduling services for the edge computing tasks according to the resources and status of all edge servers and edge devices under its supervision. It is a controller that realizes the collaborative scheduling between edge servers and edge devices. However, it does not have to be present in an edge computing system.

According to the differences among the computing resources involved in the offloading and scheduling of computing tasks in edge computing, computing scenarios can be divided into four categories, i.e., basic, scheduler-based, edge-cloud, and scheduler-based edge-cloud computing. Their characteristics and applications are described next. An edge computing architecture is shown in Figure 1.

Figure 1. Edge computing architecture (cloud server, core networks, edge servers and end users).

2.1. Basic Edge Computing

The first scenario is composed of edge devices and edge servers. There is no edge scheduler on the edge. In this scenario, an edge device can execute a computing task locally, or offload it to its edge server.
The edge server executes it, and then feeds back the computing result to the corresponding edge device. This scenario is similar to the scene in which devices offload tasks to be performed in a cloud computing center. It is the simplest scenario in edge computing. For the computing tasks that can be offloaded to the edge servers, their offloading locations are fixed. Moreover, the types of computing tasks that can be processed are fixed, and the specific types are determined by edge server resources. In addition, this scenario does not contain a cloud computing center. Hence, it is more suitable for processing tasks with a small amount of computation and strict delay requirements in a relatively closed environment. Its architecture is shown in Figure 2, which has been used in [11].

According to the above analysis, the Quality of Service (QoS) levels are expected to be achieved with the proposed scenario. Task completion time is used to measure QoS. We assume that all edge servers are the same and all edge devices are uniform. Note that most of the presented content can be easily extended to heterogeneous devices and servers.

Figure 2. Basic edge computing (MAN: metropolitan area network and WLAN: wireless local area network).

Let τ denote the completion time of a task offloaded to an edge server, which includes the data transmission latency between an edge device and an edge server, the task processing time in an edge server, and the waiting time before it is processed. Let η denote the time to run a single instruction and W be the waiting time before a task is processed in an edge server. G is the number of instructions for this task's processing. Then the completion time of a task can be computed as:

τ = n(1 + P)τ̄ + W + ηG (1)

where n is the number of packets transmitted, which covers the bidirectional data transmission between an edge device and an edge server, and P is the packet loss rate between an edge device and an edge server, occurring during the n packet transmissions. τ̄ is the average latency per packet between an edge device and an edge server, which includes the sum of delays caused by processing, queuing, and transmission of the n packets.

2.2. Scheduler-Based Edge Computing

The second scenario is composed of edge devices, edge servers and an edge scheduler. Compared with the first one, it includes an edge scheduler, which can schedule tasks strategically. In this scenario, an edge device can process a computing task locally or offload its task to an edge scheduler. The edge scheduler reasonably schedules the tasks to edge servers and edge devices according to scheduling policies. The policies are formed based on the current computing, storage, task execution status, network status, and other information related to all edge servers and devices. Finally, the scheduler feeds the computing results back to the source devices. The main feature of this scenario is that the computing tasks can be reasonably scheduled to different servers and edge devices by the edge scheduler, so that the collaborative processing of computing tasks in different edge servers and edge devices can be well realized. The computing resources of edge devices can be fully utilized.
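To make Equation (1) concrete, here is a minimal numeric sketch in Python. All parameter values below are hypothetical illustrations, not figures from the surveyed literature:

```python
def offload_completion_time(n, P, tau_bar, W, eta, G):
    """Equation (1): completion time of a task offloaded to an edge server.

    n: packets exchanged in both directions, P: packet loss rate,
    tau_bar: average per-packet latency (s), W: queueing wait at the server (s),
    eta: time per instruction on the server (s), G: instruction count.
    """
    return n * (1 + P) * tau_bar + W + eta * G

# Hypothetical task: 200 packets, 2% loss, 5 ms/packet latency,
# 20 ms queueing wait, 1e9 instructions at 1 ns each.
t = offload_completion_time(n=200, P=0.02, tau_bar=0.005, W=0.020, eta=1e-9, G=1e9)
print(f"{t:.3f} s")  # prints 2.040 s
```

The transmission term dominates when tasks are data-heavy, while the ηG term dominates for computation-heavy tasks; this trade-off drives the offloading decisions discussed in the later scenarios.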
The types of computing resources on the edge are diverse; so are the types of computing tasks that can be processed. The computing and storage resources on the edge are limited in comparison with a cloud computing center. Clearly, this architecture is suitable for processing tasks with a small amount of computation and strict delay requirements, as shown in Figure 3. The study [12] has adopted it.

Figure 3. Scheduler-based edge computing.
According to this scenario, a task can be scheduled to an edge server for handling by an edge scheduler. Similar to the first scenario, the completion time of a task scheduled to a target edge server can be computed as:

ρ = n(1 + P_ρA)ρ̄_A + n(1 + P_ρB)ρ̄_B + W + ηG (2)

where P_ρA is the packet loss rate between an edge device and an edge scheduler, and P_ρB is the packet loss rate between an edge scheduler and a target edge server, occurring during the n packets' transmission. ρ̄_A is the average latency per packet between an edge device and an edge scheduler, and ρ̄_B is the average latency per packet between an edge scheduler and a target edge server; these include the sum of delays caused by processing, queuing and transmission of the n packets. If the task of an edge device is directly scheduled to its edge server, P_ρA = P, ρ̄_A = τ̄, P_ρB = 0 and ρ̄_B = 0, so Equation (2) reduces to Equation (1).

Let η′ denote the time to run a single instruction and G′ be the number of instructions for this task's processing in an edge device. The completion time of a task scheduled to a target edge device can be computed as:

ρ′ = n(1 + P_ρA)ρ̄_A + n(1 + P_ρC)ρ̄_C + W′ + η′G′ (3)

where P_ρC is the packet loss rate between an edge scheduler and a target edge device, occurring during the n packet transmissions. ρ̄_C is the average latency per packet between an edge scheduler and a target edge device, which includes the sum of delays caused by processing, queuing and transmission of the n packets. W′ is the waiting time before a task is processed at an edge device.
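The scheduler's choice between a target edge server (Equation (2)) and a target edge device (Equation (3)) amounts to picking the smaller predicted completion time. A sketch of that comparison in Python, with purely hypothetical parameter values:

```python
def time_via_server(n, P_A, r_A, P_B, r_B, W, eta, G):
    # Equation (2): device -> scheduler -> target edge server
    return n * (1 + P_A) * r_A + n * (1 + P_B) * r_B + W + eta * G

def time_via_device(n, P_A, r_A, P_C, r_C, W2, eta2, G2):
    # Equation (3): device -> scheduler -> target (idle) edge device
    return n * (1 + P_A) * r_A + n * (1 + P_C) * r_C + W2 + eta2 * G2

# Hypothetical case: the server runs instructions 10x faster but is busy
# (W = 80 ms); an idle peer device has no wait but slower instructions.
server_t = time_via_server(100, 0.01, 0.004, 0.01, 0.002, 0.080, 1e-9, 5e8)
device_t = time_via_device(100, 0.01, 0.004, 0.02, 0.003, 0.0, 1e-8, 5e8)
target = "edge server" if server_t <= device_t else "edge device"
```

With these numbers the server wins despite its queueing wait, because the ηG processing term dominates; for lighter tasks an idle device could come out ahead.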
Compared to the first scenario, the significant difference is that an edge scheduler can schedule the tasks to an idle edge server according to the real-time status of the network and of the edge servers. If the task processing time accounts for a large proportion of the total time, then this scenario offers more advantages over the first one.

2.3. Edge-Cloud Computing

This scenario is composed of edge devices, edge servers and a cloud computing center, and has no edge scheduler on the edge. An edge device can execute a computing task locally, or offload it to its edge server or cloud center. Its difference from the first scenario is that its edge devices can offload their tasks to their cloud computing center. The specific offloading to an edge server or cloud computing center is determined by edge devices according to the attributes of their computing tasks and the QoS requirements from users. Its main feature is the same as the first scenario, i.e., the offloading location and the types of an edge computing task are fixed. All the tasks on the edge that require a large amount of computation and are insensitive to delay can be offloaded to a cloud computing center. Therefore, the processing of computing tasks in this architecture is not affected by the amount of computation, but only by the types of edge environment resources, as shown in Figure 4. Such architectures are adopted in [13].

Figure 4. Edge-cloud computing.

In this scenario, a task can be handled in an edge server or a cloud server. We formulate this scenario with the completion time, according to the location where the task is handled.
The completion time of a task offloaded to an edge server is the same as Equation (1). Let ζ denote the completion time of a task offloaded to a cloud center, which includes the data transmission latency between an edge device and a cloud center and the task processing time in a cloud center. We assume that a cloud center is resource-intensive and the task computation does not need to wait. The completion time of a task can then be computed as follows:

ζ = n(1 + P_ζ)ζ̄ + γG (4)

where n is the number of packets transmitted, which covers the bidirectional data transmission between an edge device and a cloud center, P_ζ is the packet loss rate between an edge device and a cloud center, occurring during the n packets' transmission, ζ̄ is the average latency per packet between an edge device and a cloud center, which includes the sum of delays caused by processing, queuing and transmission of the n packets, and γ represents the time to run a single instruction in a cloud center.
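An edge device's choice between edge offloading (Equation (1)) and cloud offloading (Equation (4)) reduces to comparing the two predicted completion times. A sketch with hypothetical numbers, in which a compute-heavy, delay-tolerant task favors the cloud:

```python
def edge_time(n, P, tau_bar, W, eta, G):
    # Equation (1): offload to an edge server (includes queueing wait W)
    return n * (1 + P) * tau_bar + W + eta * G

def cloud_time(n, P_z, z_bar, gamma, G):
    # Equation (4): offload to a cloud center (assumed never to queue)
    return n * (1 + P_z) * z_bar + gamma * G

G = 2e10  # hypothetical instruction count of a compute-heavy task
t_edge = edge_time(n=150, P=0.02, tau_bar=0.005, W=0.5, eta=1e-9, G=G)
t_cloud = cloud_time(n=150, P_z=0.03, z_bar=0.050, gamma=2e-10, G=G)
best = "cloud" if t_cloud < t_edge else "edge"
```

The cloud pays roughly 10x the per-packet latency here, yet still wins because its faster instruction rate shrinks the dominant γG term; shrinking G by an order of magnitude flips the decision back to the edge, matching the section's rule of thumb.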
2.4. Scheduler-Based Edge-Cloud Computing

The fourth scenario is composed of edge devices, edge servers, a cloud computing center and an edge scheduler. In this scenario, an edge device can execute its computing tasks locally or offload the tasks to the edge scheduler. Compared with the third architecture, the difference is that the edge scheduler receives all the computing tasks offloaded by edge devices, and schedules the computing tasks to proper computing entities (edge servers, idle edge devices, and/or a cloud computing center) for performing the services according to the computing resources, storage resources, network bandwidth and characteristics of tasks. It can give full play to the synergetic advantages among edge devices, edge servers and a cloud computing center. Its main feature is the same as the second one, i.e., the offloading position of edge computing tasks is uncertain, and the types are diverse. The processing of computing tasks is not affected by the amount of computation, but only by the types of edge environment resources. Its architecture is shown in Figure 5. It is used in [14].
Figure 5. Scheduler-based edge-cloud computing.

In this scenario, the tasks can be handled in an idle edge server, a cloud server or an edge device. They can be scheduled to appropriate locations by an edge scheduler. Similar to the second scenario, the completion time of a task scheduled to a target edge server is the same as Equation (2). The completion time of a task scheduled to a target edge device is the same as Equation (3).

The completion time of a task scheduled to a cloud center can be computed as:

ε = n(1 + P_εA)ε̄_A + n(1 + P_εB)ε̄_B + γG (5)

where P_εA is the packet loss rate between an edge device and an edge scheduler, and P_εB is the packet loss rate between an edge scheduler and a cloud center, occurring during the n packets' transmission. ε̄_A is the average latency per packet between an edge device and an edge scheduler and ε̄_B is the average latency per packet between an edge scheduler and a cloud center, which include the sum of delays caused by processing, queuing and transmission of the n packets.

Compared to the other scenarios, the offloaded tasks can be scheduled to a suitable location by an edge scheduler according to the attributes of tasks and the real-time status of the network, edge servers and cloud servers, which can give full play to the computing power of the whole network system to achieve the best QoS.

3. Computing Task Analysis

Next, the computing tasks are analyzed to ensure that they can be accurately scheduled to an appropriate node, and achieve the expected objectives, e.g., the minimal task completion time and least energy consumption. According to task attributes, we judge whether they can be split or not and whether there is interdependence among subtasks [15].
A specific judgment criterion is that if computing tasks are simple or highly integrated, they cannot be split, and they can only be executed locally as a whole at the edge devices or completely offloaded to edge servers. If they can be segmented based on their code and/or data [16,17], they can be divided into several parts, which can be offloaded. In summary, we have three modes for given computing tasks, i.e., local execution, partial offloading and full offloading. The specific offloading location of computing tasks should be well considered according to the computing power of devices, the current network status, and the resource status of edge devices, edge servers and a cloud computing center.

3.1. Local Execution

Whether edge computing tasks are executed locally or not should be determined according to the resources of an edge device, and the network and resource status of edge servers.
If the available network bandwidth is not enough to support the successful uploading of a task, i.e., the remaining bandwidth of the current network is less than the bandwidth required for uploading the task, the computing task can only be performed locally. In addition, if the computing resources of edge servers are not available, so that the computing tasks cannot be processed in time, the tasks have to be executed locally. If the computing power of an edge device itself can meet the service requirements, it performs its tasks locally, thus effectively reducing the workload of an edge server and the need for network bandwidth.

3.2. Full Offloading

Answering whether edge computing tasks are completely offloaded to an edge server or scheduler requires one to consider the resources of edge devices, the current network, the availability of edge servers' resources and the system optimization effect. If (1) the currently available network bandwidth supports the successful offloading of edge computing tasks, and (2) the edge servers or other edge devices are idle so that the successfully offloaded computing tasks can be processed immediately, then, according to a scheduling goal, the results of local execution and full offloading to the edge servers are compared, and local execution or offloading of the computing tasks is decided. For example, if the goal is to minimize the completion time required for processing a task, it is necessary to compare the completion time required for local execution with that required for offloading to an edge server or cloud computing center. If the local execution takes less time, the tasks should be processed locally. Otherwise, they should be offloaded to the edge servers or cloud computing center for processing.
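The full-offloading rule above can be sketched as a small decision function. This is a simplification (a single bandwidth check and a single server-idle flag stand in for the richer system status the section describes):

```python
def decide_full_offload(bw_free, bw_needed, server_idle, t_local, t_offload):
    """Section 3.2 sketch: offload a whole task only if the network can carry
    its upload, a server/device is free, and offloading beats local execution.

    bw_free/bw_needed: remaining vs required bandwidth (same units),
    t_local/t_offload: predicted completion times (e.g., from Eq. (1) or (4)).
    """
    if bw_free < bw_needed or not server_idle:
        return "local"  # precondition (1) or (2) fails
    return "offload" if t_offload < t_local else "local"
```

Other scheduling goals (e.g., minimal energy) would swap the completion-time comparison for the corresponding objective while keeping the same precondition checks.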
3.3. Partial Offloading

An indivisible computing task at an edge device can only be executed locally or completely offloaded to the edge scheduler, which then assigns it to an appropriate edge server or idle edge device. Divisible computing tasks can enjoy partial offloading. Their split sub-tasks should be taken as the scheduling unit, and the resources of edge devices, the network and the resources of edge servers should be considered comprehensively when they are scheduled. Considering the final processing effect of the overall tasks, each sub-task should be assigned to an appropriate computing node for processing. For a split computing task, if there is no interdependence among the sub-tasks, they can be assigned to different nodes to be processed at the same time so as to minimize energy consumption and reduce task completion time. If there is some interdependence among the sub-tasks, the interdependent subtasks should be assigned to the same computing node for execution.

There are many methods for splitting tasks. Yang et al. [18] study the application repartition problem of periodically updating the partition during application execution, and propose a framework for the repartition of an application in a dynamic mobile cloud environment. Based on their framework, they design an online solution for the dynamic network connection of the cloud, which can significantly shorten the completion time of applications. Yang et al. [19] consider the computing partition of multiple users and the scheduling of offloaded computing tasks on cloud resources. According to the number of resources allocated on the cloud, an offline heuristic algorithm called SearchAdjust is designed to solve the problem, thus minimizing the average completion time of a user's applications.
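The independent-subtask case above can be sketched as a greedy placement that keeps the parallel finish time (makespan) low. This longest-processing-time-first heuristic is a generic illustration, not a method from the surveyed papers; dependent subtasks would instead be pinned together on one node:

```python
import heapq

def assign_subtasks(subtask_times, node_count):
    """Place independent subtasks on parallel nodes, always giving the next
    longest subtask to the currently least-loaded node (LPT heuristic)."""
    loads = [(0.0, i) for i in range(node_count)]  # (current load, node id)
    heapq.heapify(loads)
    placement = {}
    for task, t in sorted(subtask_times.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(loads)   # least-loaded node so far
        placement[task] = node
        heapq.heappush(loads, (load + t, node))
    makespan = max(load for load, _ in loads)
    return placement, makespan

# Four independent subtasks (hypothetical run times, in seconds) on two nodes.
placement, makespan = assign_subtasks({"a": 4.0, "b": 3.0, "c": 2.0, "d": 2.0}, 2)
```

With these numbers the two longest subtasks land on different nodes and the subtasks finish in 6 s instead of the 11 s a single node would need.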
Liu et al. [20] make an in-depth study of the energy consumption, execution delay and cost of an offloading process in a mobile edge server system by using queuing theory, and put forward an effective solution to their formulated multi-objective optimization problems. The analysis result of computing tasks is summarized in Figure 6.

Figure 6. Computing task analysis and execution (a decision tree from task divisibility and sub-task interdependence to local execution, full offloading or partial offloading).

4. System Model and Problem Formulation

We can optimize multiple objective functions for computation offloading in an edge computing model. They include average or total delay time and energy consumption.

4.1. System Model

A typical model is shown in Figure 7.
We assume that the system consists of N edge devices, an edge cloud and a cloud center. It is assumed that an edge cloud [21] consists of n edge servers powered on. We model the queue at an edge device as an M/M/1 queue, the edge cloud as an M/M/n queue, and the cloud center as an M/M/∞ queue. Each edge device can offload a part or all of its tasks to the edge cloud through a wireless channel. If the number of offloaded tasks exceeds the maximum that the edge cloud can process, the edge cloud may further offload the overloaded tasks to the cloud center for processing.

In this computing system, let D = {d_1, d_2, ..., d_N} denote the set of edge devices, where d_i is the ith edge device, and let S = {S_1, S_2, ..., S_n}, where S_k is the kth edge server. We assume that the tasks generated by edge device d_i obey a Poisson process with an average arrival rate λ_i and contain data of size β_i. Let u_i denote the average service rate of edge device d_i. We denote ϑ_i as the probability that edge device d_i chooses to offload a task to an edge server. Then the tasks offloaded to the cloud follow a Poisson process with an average arrival rate ϑ_i λ_i, and the locally processed tasks also follow a Poisson process, with an arrival rate (1 − ϑ_i)λ_i. Let Q̂ = {Q̂_1, Q̂_2, ..., Q̂_N}, where Q̂_i denotes the maximum task queue buffer of edge device d_i. We assume that the edge cloud has a single queue buffer and let Q_C denote its maximum task queue buffer.

Figure 7. Edge computing model.

4.2. Communications Model

Let h_i denote the channel power gain between edge device d_i and the edge cloud. Denote p_i as the transmission power of edge device d_i, 0 < p_i ≤ p̂_i, where p̂_i is its maximum transmission power.
The uplink data rate for computation offloading of edge device d_i can be obtained as follows [22]:

R_i = κ_i B log2(1 + p_i h_i / (σ B))   (6)

where B is the channel bandwidth, σ denotes the noise power spectral density at the receiver, and κ_i is the portion of the bandwidth of the edge cloud's channels allocated to edge device d_i, where 0 ≤ κ_i ≤ 1.

4.3. Task Offloading Model

(1) Local execution model

Let w_i denote the normalized workload on edge device d_i. It represents the percentage of the central processing unit (CPU) that has been used. According to an M/M/1 queue model, the response time is T = 1/(u − λ), where ρ = λ/u is the utilization and λ is the task arrival rate of edge device d_i [23]. We can compute the average response time of locally processing the tasks at edge device d_i as follows:

T_{i,o} = 1/(u_i(1 − w_i) − (1 − ϑ_i)λ_i) + 1/u_i   (7)

where 0 ≤ w_i < 1 and 0 ≤ (1 − ϑ_i)λ_i β_i ≤ Q̂_i.
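As a quick numerical check of (6) and (7), the sketch below evaluates the uplink rate and the local response time directly; all parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def uplink_rate(kappa, B, p, h, sigma):
    """Eq. (6): achievable uplink rate R_i = kappa_i*B*log2(1 + p_i*h_i/(sigma*B))."""
    return kappa * B * math.log2(1.0 + p * h / (sigma * B))

def local_response_time(u, w, theta, lam):
    """Eq. (7): average response time of locally processed tasks at device d_i.
    Requires the stability condition u*(1-w) > (1-theta)*lam, cf. Eq. (28)."""
    residual = u * (1.0 - w) - (1.0 - theta) * lam
    assert residual > 0, "local queue is unstable"
    return 1.0 / residual + 1.0 / u

# Illustrative numbers (assumed): 10 MHz channel, full bandwidth share,
# device serving 20 tasks/s at 30% background CPU load, offloading 60% of tasks.
R = uplink_rate(kappa=1.0, B=10e6, p=0.5, h=1e-6, sigma=1e-13)
T = local_response_time(u=20.0, w=0.3, theta=0.6, lam=10.0)
```

Note how the stability condition (28) appears naturally: as (1 − ϑ_i)λ_i approaches the residual capacity u_i(1 − w_i), the local response time diverges.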
The energy consumption of locally processing the tasks at edge device d_i can be given as follows:

E_{i,o} = P_i T_{i,o}   (8)

where P_i denotes the computing power of edge device d_i when the tasks are locally processed.

(2) Edge cloud execution model

According to the above analysis, we can obtain the transmission time of offloading the tasks from edge device d_i to the edge cloud as follows:

T_i^t = ϑ_i λ_i β_i / R_i   (9)

In (9), 0 ≤ ϑ_i λ_i β_i ≤ Q_C. Let E_i^t denote the energy consumption of transmitting the tasks from edge device d_i to the edge cloud; it is given as:

E_i^t = p_i T_i^t   (10)

According to the M/M/n queue model, let c = {c_1, c_2, ..., c_n}, where c_k is the service rate of the kth edge server. We use f_k to denote the CPU cycle frequency and ŵ_k to denote the maximum workload capacity of the kth edge server. Let λ̂ denote the maximum task acceptance rate of the edge cloud. The total task arrival rate from the N edge devices to the edge cloud can be computed as:

λ_all = Σ_{i=1}^N ϑ_i λ_i   (11)

Then the fraction φ of tasks that the edge cloud can compute is given by:

φ = 1, if λ_all ≤ λ̂;  φ = λ̂/λ_all, if λ̂ < λ_all   (12)

Hence, the actual execution rate at the edge cloud can be denoted as:

λ = φ λ_all   (13)

We can get the average waiting time of each task at the edge cloud:

T_{i,W} = ρ_S / (Σ_{k=1}^n c_k − λ)   (14)

where ρ_S denotes the utilization of the edge cloud.
The average computing time of each task is:

T_{i,C} = n / Σ_{k=1}^n c_k   (15)

According to [24], the workload of each edge server cannot exceed its maximum workload capacity, which gives the constraints:

0 < w_k ≤ ŵ_k   (16)

Let u_s denote the transmission service rate of the edge cloud; the expected waiting time for the computation results is:

T_i^r = 1/(u_s − λ)   (17)

Similar to many studies [25,26], because the amount of data in the computation results is generally small, we ignore the transmission delay and energy consumption for the edge cloud to send the results back to an edge device. The energy consumption of edge device d_i while its offloaded tasks are processed is:

E_{i,W} = p_i'(T_i^t + T_{i,W} + T_{i,C})   (18)

where p_i' is the computation power of edge device d_i after offloading tasks. Let T_i^e denote the total time of an offloading process, from the task transmission of edge device d_i to the return of the computation results from the edge cloud. We have:

T_i^e = T_i^t + T_{i,W} + T_{i,C} + T_i^r   (19)

The energy consumption of the offloading process at edge device d_i can be obtained as:

E_i^e = E_i^t + E_{i,W}   (20)

(3) Cloud execution model

When λ_all > λ̂, the overloaded tasks are transmitted to the cloud center over a wired network. Let T^c denote the fixed communication time between an edge server and a data center. Owing to the ample computing power of the cloud center, we assume no task waiting time there. According to the M/M/∞ queue model, let u_CC denote the service rate of the cloud center. The execution time of an overloaded task is:

T_{i,CC}^e = T^c + 1/u_CC   (21)

The expected time for the results of overloaded tasks to travel from the cloud center back to the corresponding edge device is:

T_{i,CC}^r = 1/(u_s − λ) + T^c   (22)

Let T_{i,CC} denote the total time of an offloading process, from the task transmission of edge device d_i to the return of the computation results from the cloud center.
We have:

T_{i,CC} = T_{i,CC}^e + T_{i,CC}^r   (23)

The corresponding energy consumption is:

E_{i,CC} = p_i'(T_{i,CC}^e + T_{i,CC}^r)   (24)

4.4. Problem Formulation

From (7), (8), (14), (15), (17), (21) and (22), we can obtain the execution delay time for edge device d_i, i.e.,

T_i = T_{i,o} + T_i^t + φ(T_{i,W} + T_{i,C} + T_i^r) + (1 − φ)(T_{i,CC}^e + T_{i,CC}^r)   (25)

So, the average delay time of all edge devices in the computing system is:

T̄ = (1/N) Σ_{i=1}^N T_i   (26)

To minimize the total execution time of these tasks in the edge computing system, we formulate the optimization problem as:

Min T̄   (27)

subject to:

(1 − ϑ_i)λ_i < u_i(1 − w_i)   (28)
0 ≤ (1 − ϑ_i)λ_i β_i ≤ Q̂_i   (29)
0 ≤ ϑ_i λ_i β_i ≤ Q_C   (30)
λ < Σ_{k=1}^n c_k   (31)
0 < p_i ≤ p̂_i   (32)
0 ≤ ϑ_i ≤ 1   (33)
0 ≤ w_i < 1   (34)
0 ≤ κ_i ≤ 1   (35)

and (16). From (8), (10), (18) and (24), we can get the energy consumption for edge device d_i, which is given as follows:

E_i = E_{i,o} + E_i^t + φ E_{i,W} + (1 − φ)E_{i,CC}   (36)

So, the average energy consumption of all edge devices in the computing system is:

Ē = (1/N) Σ_{i=1}^N E_i   (37)

To minimize the energy consumption of these tasks in an edge computing system, we formulate the optimization problem as:

Min Ē   (38)

subject to: (16) and (28)–(35).

Min{T̄} and Min{Ē}, together with the related constraints, form a bi-objective optimization problem. It can be solved with different methods [27–29]. Problem (27) is a constrained mixed-integer non-linear program for making optimal offloading decisions. Finding its exact optimal solution is NP-hard and cannot in general be done with polynomial-time methods. An optimal solution can be obtained with polynomial-time methods in some specific scenarios, but not in general cases, as shown in [22,24,30].

5. Computing Task Scheduling Scheme

After thoroughly analyzing computing tasks in edge computing, the tasks offloaded to an edge scheduler need to be synergistically scheduled.
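Before turning to concrete scheduling schemes, the delay model formulated above can be exercised numerically. The sketch below evaluates the per-device delay T_i of (25) for a single device and sweeps the offloading probability ϑ_i over a grid; every numeric parameter (service rates, task size, the edge-cloud utilization ρ_S) is an invented illustration, not a value from the paper.

```python
import math

# All parameter values below are illustrative assumptions, not from the paper.
N_SERVERS = 4              # n edge servers in the edge cloud
C_K = 30.0                 # service rate c_k of each edge server (tasks/s)
LAM_HAT = 100.0            # maximum task acceptance rate of the edge cloud
U_I, W_I = 20.0, 0.2       # device service rate u_i and background workload w_i
LAM_I, BETA_I = 12.0, 1e6  # device task arrival rate and task data size (bits)
R_I = 1e8                  # uplink rate R_i from Eq. (6) (bits/s)
U_S = 50.0                 # transmission service rate u_s of the edge cloud
T_C, U_CC = 0.05, 40.0     # fixed edge-to-cloud link time and cloud service rate
RHO_S = 0.5                # assumed utilization of the edge cloud, Eq. (14)

def device_delay(theta):
    """Per-device delay T_i of Eq. (25) under the assumed parameters."""
    mu_eff = U_I * (1.0 - W_I) - (1.0 - theta) * LAM_I
    if mu_eff <= 0:
        return math.inf                       # violates stability, Eq. (28)
    t_local = 1.0 / mu_eff + 1.0 / U_I        # Eq. (7)
    t_tx = theta * LAM_I * BETA_I / R_I       # Eq. (9)
    lam_all = theta * LAM_I                   # Eq. (11), single device here
    phi = 1.0 if lam_all <= LAM_HAT else LAM_HAT / lam_all  # Eq. (12)
    lam = phi * lam_all                       # Eq. (13)
    t_wait = RHO_S / (N_SERVERS * C_K - lam)  # Eq. (14)
    t_comp = N_SERVERS / (N_SERVERS * C_K)    # Eq. (15)
    t_ret = 1.0 / (U_S - lam)                 # Eq. (17)
    t_cloud = T_C + 1.0 / U_CC                # Eq. (21)
    t_cloud_ret = 1.0 / (U_S - lam) + T_C     # Eq. (22)
    return (t_local + t_tx
            + phi * (t_wait + t_comp + t_ret)
            + (1.0 - phi) * (t_cloud + t_cloud_ret))        # Eq. (25)

# Sweep the offloading probability to locate the delay-minimizing choice.
best_theta = min((k / 100.0 for k in range(101)), key=device_delay)
```

A similar sweep over E_i in (36), or over a weighted sum of the two objectives, yields a first feasible point for the bi-objective problem before resorting to the meta-heuristics surveyed below.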
Due to the limited computing and storage resources of edge devices, and the resource competition among multiple tasks, it is essential to schedule the tasks optimally in terms of task completion time and energy consumption [31]. Many scheduling algorithms have been proposed. Traditional task scheduling algorithms mainly include Min-Min, Max-Min, the Sufferage algorithm [32], first come first served, and minimum completion time [33]. Most of them take delay as an optimization goal, but this easily results in load imbalance among computing nodes. Intelligent heuristic task scheduling algorithms mainly include Genetic Algorithm (GA), Ant Colony Optimization, Particle Swarm Optimization (PSO), Simulated Annealing (SA), the Bat algorithm, artificial immune algorithms, and Tabu Search (TS) [34,35]. These algorithms use heuristic rules to obtain a solution quickly, but they cannot guarantee the optimality of their solutions [36]. The scheduling of edge computing tasks involves many goals. We summarize the task scheduling methods that aim to achieve the lowest delay and/or the lowest energy consumption.

5.1. Minimal Delay Time

The completion time of computing tasks offloaded to edge servers is mainly composed of three parts: the transmission time required to transmit the tasks to the edge servers or edge devices, the processing time required to execute the tasks on the edge servers or edge devices, and the time required to return the results after task processing completes. Therefore, reducing these three parts of task completion time can effectively improve QoS. Yuchong et al. [24] propose a greedy algorithm that assigns tasks to the servers with the shortest response time so as to minimize the total response time of all tasks. Their experimental results show that the average response time of tasks is reduced in comparison with a random assignment algorithm.
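The greedy idea of always placing the next task on the server that currently yields the shortest response time can be sketched as follows. This is a minimal illustration, not the algorithm of [24] itself: the server model is simplified to deterministic processing times (the paper's setting uses M/M/1 queues), and all task times are invented.

```python
def greedy_assign(task_times, server_finish):
    """Assign each task to the server whose current finish time plus the
    task's processing time is smallest (greedy shortest response time)."""
    assignment = []
    finish = list(server_finish)
    for t in task_times:
        # Pick the server with the earliest completion for this task.
        k = min(range(len(finish)), key=lambda s: finish[s] + t)
        finish[k] += t
        assignment.append(k)
    return assignment, finish

# Three initially idle servers, six tasks with given processing times.
assign, finish = greedy_assign([4, 2, 7, 1, 3, 5], [0.0, 0.0, 0.0])
# assign -> [0, 1, 2, 1, 1, 0]; makespan max(finish) -> 9.0
```

The greedy choice is locally optimal per task but, like all such list-scheduling rules, gives no global optimality guarantee, which is why the surveyed works layer meta-heuristics on top of it.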
In intelligent manufacturing, a four-layer computing system supporting the operation of artificial intelligence tasks from a network perspective is proposed in [37]. On this basis, a two-stage algorithm based on a greedy strategy and a threshold strategy is proposed to schedule computing tasks on the edge, so as to meet the real-time requirements of intelligent manufacturing. Experimental results show that, compared with a traditional algorithm, it has good real-time performance and acceptable energy consumption. A Markov decision process (MDP) method is used to schedule computing tasks according to the queuing state of task buffers, the execution state of a local processing unit and the state of the transmission unit. Its goal is to minimize delay while enforcing a power constraint. Simulation results show that, compared with other benchmark strategies, the proposed optimal random task scheduling strategy has a shorter average execution delay [22]. Zhang et al. [38] study a task scheduling problem based on delay minimization and establish an accurate delay model. A delay Lyapunov function is defined, and a new task scheduling algorithm is proposed. Compared with a traditional algorithm relying on Little's law, it can reduce the maximum delay by 55%. Zhang et al. [39] model the delay of the communication and computing queues as a virtual delay queue. A new delay-based Lyapunov function is defined, and joint subcarrier allocation, base station selection, power control and virtual machine scheduling algorithms are proposed to minimize the delay. Zhang et al. [40] propose an optimization model of maximum allowable delay considering both average delay and delay jitter. An effective conservative heterogeneous earliest completion time algorithm is designed to solve it. Yuan et al.
[41] jointly consider CPU, memory and bandwidth resources, the load balance of all heterogeneous nodes in the edge layer, the maximum amount of energy, the maximum number of servers, and task queue stability in the cloud data center layer. They design a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize the profit of the system while ensuring that the response time limits of tasks are strictly met. Simulation results show better performance than other algorithms, i.e., the firefly algorithm and genetic learning particle swarm optimization. In the present research, with minimum delay as the scheduling goal, most researchers have improved traditional task scheduling algorithms by using intelligent optimization algorithms. These algorithms are often not chosen in practice because they need multiple iterations to derive a relatively high-quality solution; when facing many tasks, or online scenarios with randomly arriving tasks, their execution may itself introduce delay.

5.2. Minimal Energy Consumption

The energy consumption of computing tasks is mainly composed of two parts: the energy for processing them, and the energy for transmitting them from edge devices to an edge server and returning the results to the source node [36]. Therefore, on the premise of meeting the delay requirements of tasks, energy consumption should be minimized. Xu et al. [30] propose a particle swarm optimization algorithm for scheduling tasks that can be offloaded to edge servers. It considers different kinds of computing resources in a Mobile Edge Computing (MEC) environment and aims to reduce mobile devices' energy consumption under response time constraints. Their experimental results show that it has stable convergence and good adaptability, and can effectively achieve the optimization goal. A heuristic algorithm based on MEC for efficient energy scheduling is proposed in [42].
The task scheduling among MEC servers and the downlink energy consumption of roadside units are comprehensively considered. The energy consumption of MEC servers is minimized while enforcing task delay constraints. The algorithm can effectively reduce energy consumption and task processing delay, and solve the problem of task blocking. Li et al. [43] study the energy efficiency of an IoT system under an edge computing paradigm, and describe its dynamics with a generalized queuing network model. They apply ordinal optimization to a Markov decision process to develop resource management and task scheduling schemes, meeting the challenge of the explosion of the Markov decision process search space. Simulation results show that this method can effectively reduce the energy consumption of an IoT system. A dynamic voltage frequency scaling (DVFS) technology is proposed in [44], which can adjust the offloading rate of a device and the working frequency of its CPU to minimize energy consumption under a delay constraint. Zhang et al. [45] propose a double deep Q-learning model. A learning algorithm based on experience replay is used to train the model parameters. It improves training efficiency and reduces system energy consumption. An improved probability scheme is adopted to control the congestion of different priority packets transmitted to MEC in [46]. Based on this, an improved krill herd meta-heuristic optimization algorithm is proposed to minimize the energy consumption and queuing congestion of MEC. Bi et al. [47] propose a partial computation offloading method to minimize the total energy consumed by smart mobile devices (SMDs) and edge servers by jointly optimizing the offloading ratio of tasks, the CPU speeds of SMDs, the allocated bandwidth of available channels and the transmission power of each SMD in each time slot.
They formulate a nonlinear constrained optimization problem and present a novel hybrid meta-heuristic algorithm named genetic simulated-annealing-based particle swarm optimization (GSP) to find a close-to-optimal solution. Experimental results show that it achieves lower energy consumption in less convergence time than other optimization algorithms, including SA-based PSO, GA and SA. In the current research, with the lowest energy consumption as the scheduling goal, researchers have proposed many improved heuristic task scheduling algorithms. For cases where the delay constraint is not tight, most heuristic scheduling algorithms can generate a complete feasible schedule by gradually expanding a partial schedule. The more iterations, the greater the chance of obtaining the best solution, and the lower the energy consumption.

5.3. Minimal Delay Time and Energy Consumption

In application scenarios such as virtual reality, augmented reality and driverless vehicles, the requirements on delay time and energy consumption are very strict. How to construct an optimal task schedule that minimizes both time delay and energy consumption is very important; the two objectives are, unfortunately, in conflict with each other. The energy consumption and processing/transmission time of computing tasks are regarded as costs. With the support of a cloud computing center, a distributed algorithm for cost minimization is proposed by optimizing the offloading decision and resource allocation of a mobile edge computing system [48]. Experimental results show that, compared with other existing algorithms, i.e., a greedy algorithm and those in [49,50], the cost can be reduced by about 30%. Zhang et al. [51] study the trade-off between system energy consumption and delay time.
Based on the Lyapunov optimization method, the CPU cycle frequency and data transmission power of mobile devices are optimally scheduled, and an online dynamic task allocation scheduling method is proposed to manage the data backlog of a queue. A large number of simulation experiments show that the scheme can realize a good trade-off between energy consumption and delay. A task scheduling problem of a computing system considering both time delay and energy consumption is studied in [52]. A task allocation method based on reinforcement learning is proposed to solve it, which can ensure the timely execution of tasks and substantial energy savings. Simulation results show that, compared with other existing methods, i.e., SpanEdge [53] and a suspension- and energy-aware offloading algorithm [54], it can reduce task processing time by 13–22% and task processing energy consumption by 1–10%. Note that the authors of [52] fail to consider the transmission energy in such a system. In the existing research, distributed algorithms, the Lyapunov optimization method, reinforcement learning and other task scheduling algorithms can be used to improve the overall performance of the system with the target of lowering both delay time and energy consumption. To balance both well, engineers can select traditional or heuristic task scheduling algorithms. The latter are more popular since they can handle dual objective functions well. All the discussed scheduling schemes and other methods [55–61] are summarized in Tables 1 and 2.

Table 1. Summary of task scheduling schemes.
Objectives | Studies | Features
Minimize delay time | [22,24,37–41] | Markov decision process method; greedy algorithm; two-stage algorithm based on a greedy strategy and a threshold strategy; establishing the delay model of a cellular edge computing system and defining a delay Lyapunov function; defining a delay-based Lyapunov function and proposing a joint subcarrier allocation, base station selection, power control and virtual machine scheduling algorithm; conservative heterogeneous earliest completion time algorithm; a profit-maximized collaborative computation offloading and resource allocation algorithm guaranteeing the response time limits of tasks.
Minimize energy consumption | [30,42–47] | Particle swarm optimization-based task scheduling algorithm for multi-resource computation offloading; heuristic algorithm for task scheduling among Mobile Edge Computing (MEC) servers considering the downlink energy consumption of roadside units; applying ordinal optimization to a Markov decision process; Dynamic Voltage Frequency Scaling (DVFS) technology; double deep Q-learning model; improved krill herd meta-heuristic optimization algorithm; a novel genetic simulated-annealing-based particle swarm optimization (GSP) algorithm producing a close-to-optimal solution.
Minimize both delay time and energy consumption | [48,51,52] | Dynamic task allocation and scheduling algorithm based on a Lyapunov optimization method; distributed algorithm; reinforcement learning.

Table 2. Summary of algorithms for different optimization objectives (ES = edge server, CC = cloud center, Unc = uncertain and N-O = near-optimal).
Objective | Scheme | Optimal | Complexity | Where | Pros and Cons
Delay time | One-dimensional search algorithm [22] | Yes | Low | ES | Achieves the minimum average delay in various specific scenarios, but not general ones.
Delay time | Greedy algorithm [24] | Yes | Medium | ES | Saves time by 20–30% compared with the proposed random algorithm, but only for a simple M/M/1 queuing system in a specific scenario.
Delay time | Customized TS algorithm [24] | Yes | Medium | ES | Efficient and suitable for scenarios with a large number of tasks, but only for a simple M/M/1 queuing system in a specific scenario.
Delay time | Lyapunov function-based task scheduling algorithm [38] | NP | Medium | ES | More accurate than the other delay models; smaller delay than that of a traditional scheduling algorithm.
Delay time | Efficient conservative heterogeneous earliest-finish-time algorithm [40] | Unc | Medium | ES | Reduces the delays of task offloading and considers the task execution order.
Delay time | SA-based migrating birds optimization procedure [41] | N-O | Medium | ES/CC | Provides a high-accuracy and fine-grained energy model by jointly considering central processing unit (CPU), memory and bandwidth resource limits and the load balance requirements of all nodes, but only for a simple M/M/1 system.
Delay time | Sub-gradient algorithm [55] | Yes | Medium | ES | Provides a closed-form solution suitable for a specific partial compression offloading scenario but not for general scenarios; reduces the end-to-end latency.
Energy consumption | Energy-efficient multi-resource computation offloading strategy [30] | Yes | Medium | ES/CC | Comprehensively considers the workload conditions among mobile devices, edge servers and cloud centers; has a stable convergence speed and effectively reduces power consumption in a specific scenario.
Energy consumption | Ordinal optimization-based Markov decision process [43] | Unc | High | ES/CC | Effective and efficient; makes a good trade-off between delay time and energy consumption.
Energy consumption | Ben's genetic algorithm [56] | N-O | Medium | ES | Effectively solves the problem of choosing which edge server to offload to and minimizes the total energy consumption, but works only for a simple M/M/1 queue model.
Energy consumption | Algorithms for partial and binary offloading with energy consumption optimization [57] | Yes | High | ES | Joint computation and communication cooperation considering both partial and binary offloading cases; reduces power consumption effectively; obtains the optimal solution in the partial offloading case.
Energy consumption | Artificial fish swarm algorithm [58] | Yes | Medium | ES | Guarantees global optimization, strong robustness and fast convergence for a specific problem, and reduces power consumption.
Energy consumption | Multidimensional numerical method [17] | Yes | High | ES | Establishes the conditions under which total or no offloading is optimal; reduces the execution delay of applications and minimizes the total consumed energy, but fails to consider latency constraints.
Delay/energy consumption | Software-defined task offloading/task placement algorithm [59] | Yes | Medium | ES | Solves the computing resource allocation and task placement problems; reduces task duration and energy cost compared with random and uniform computation offloading schemes by considering the computation amount and data size of a task in a software-defined ultra-dense network; makes a good trade-off between delay time and energy consumption.
Delay/energy consumption | Energy-aware mobility management algorithm [60] | N-O | High | ES | Deals with various practical deployment scenarios, including BSs dynamically switching on and off, but fails to consider the capability of a cloud server.
Delay/energy consumption | Lyapunov optimization on time and energy cost [61] | Yes | High | ES/CC | Takes full advantage of green energy without significantly increasing the response time, and has better optimization ability.
6. Issues and Future Directions

An edge server has more computing power and storage capacity than devices, and edge computing has lower task transmission delay than cloud computing. In addition, due to the limitation of edge resources, the task scheduling problem of edge computing is NP-hard. High-performance solution methods are therefore highly valuable, while an exact global optimal solution cannot in general be obtained for sizable problems. Although there have been many studies on the collaborative scheduling of computing tasks in edge computing [47,62,63], the following issues should be addressed:

(1) Consider the occurrence of emergencies. In an edge computing paradigm, a system involves the coordination of devices, edge servers and network links, each of which plays an important role. If any device or edge server shuts down, or the network fails during task processing, scheduled task execution can be significantly impacted. Therefore, how to account for emergencies in the task scheduling process, so that tasks can still be successfully executed, is a widely open problem. In other words, researchers have to take failure probabilities into account in task scheduling so that the risk of failing important tasks is minimized.

(2) Consider multiple optimization objectives. At present, most research develops task schedules based on the optimization goals of delay time and/or energy consumption. Other QoS indicators of user tasks are rarely considered. Therefore, they should be added when optimizing a schedule, and a task scheduling scheme with comprehensive optimization goals should be formulated to achieve a high-quality user service experience as well.

(3) Consider data security issues. Data security [64–69] is one of the most important concerns.
Security protocols and encryption algorithms are mostly used to achieve data security and privacy, but their induced delay and energy consumption are rarely considered. Therefore, it is worthwhile to develop light- to heavy-weight security protocols and encryption algorithms such that the best trade-offs between performance and data security levels can be made.

(4) Find any-time task scheduling algorithms. Research on task scheduling algorithms mostly uses improved traditional and heuristic algorithms. They need long iteration times to achieve a near-optimal or optimal schedule. In practice, we must offer a feasible schedule in a short time, and can then improve it if given enough computing time before the schedule needs to be deployed. Hence, fast algorithms that produce a first feasible schedule, together with intelligent optimizations that then improve it, are highly desired.

(5) Add some important factors to optimization goals. Generally, the network bandwidth and CPU of task offloading locations are taken into consideration when offloading tasks at the edge, but many other factors, such as the offloading ratio of tasks, are not yet considered in deriving the best offloading strategy.

(6) Balance partial computation offloading of Deep Learning (DL) models. To improve the intelligence of applications, DL is increasingly adopted in various areas, e.g., face recognition, natural language processing, interactive gaming, and augmented reality. Due to the limited resources of edge hardware, lightweight DL models are suitable for edge devices. However, to accelerate model inference and minimize the energy consumed by devices, models need to be partitioned and partially offloaded. It is challenging to determine the partial offloading of DL model training and to balance the resource consumption between edge devices and edge servers.
7. Conclusions

This paper analyzes and summarizes the computing scenarios, computing tasks, optimization objective formulation and computing task scheduling methods for the scheduling process of an edge computing system. According to the resources in edge computing, the computing scenarios of scheduling tasks are divided into four categories, and their composition and characteristics are analyzed in detail. According to where their execution takes place, computing tasks can be accomplished via local execution, partial offloading or full offloading. We then formulate the optimization problems of minimizing delay time and energy consumption for computation offloading in an edge computing system with different queuing models, and indicate their solution complexity. With regard to computing task scheduling methods in edge computing, most existing studies set their optimization goal to minimize delay, energy consumption or both. Improved traditional task scheduling algorithms and intelligent optimization algorithms can be used to solve such optimization problems. For the reviewed optimization problems, most researchers tend to use improved heuristic/intelligent optimization algorithms instead of mathematical programming ones due to the latter's computational complexity. This paper also discusses the issues and future directions in the area of collaborative scheduling of computing tasks in an edge computing paradigm. It should stimulate further research on collaborative scheduling and its applications in the context of edge computing, e.g., [70–72].

Author Contributions: S.C. conceived the idea and structure of this manuscript, wrote this paper by surveying and summarizing other papers, and partially acquired funding support. Q.L. contributed to the early versions of the manuscript. M.Z.
guided the writing of this manuscript, including its structure and content, offered some key ideas, and supervised the project. A.A. suggested some ideas and partially acquired funding support. All authors contributed to writing, reviewing, and editing the paper. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported in part by the National Key Research and Development Program of China (No. 2018YFB1700202) and in part by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. RG-21-135-39.

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Patidar, S.; Rane, D.; Jain, P. A Survey Paper on Cloud Computing. In Proceedings of the 2012 Second International Conference on Advanced Computing & Communication Technologies, Institute of Electrical and Electronics Engineers (IEEE), Rohtak, Haryana, India, 7–8 January 2012; pp. 394–398.
2. Moghaddam, F.F.; Ahmadi, M.; Sarvari, S.; Eslami, M.; Golkar, A. Cloud computing challenges and opportunities: A survey. In Proceedings of the 2015 1st International Conference on Telematics and Future Generation Networks (TAFGEN), Institute of Electrical and Electronics Engineers (IEEE), Kuala Lumpur, Malaysia, 26–27 May 2015; pp. 34–38.
3. Varghese, B.; Wang, N.; Barbhuiya, S.; Kilpatrick, P.; Nikolopoulos, D.S. Challenges and Opportunities in Edge Computing. In Proceedings of the 2016 IEEE International Conference on Smart Cloud (SmartCloud), Institute of Electrical and Electronics Engineers (IEEE), New York, NY, USA, 18–20 November 2016; pp. 20–26.
4. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [CrossRef]
5. Rincon, J.A.; Guerra-Ojeda, S.; Carrascosa, C.; Julian, V. An IoT and Fog Computing-Based Monitoring System for Cardiovascular Patients with Automatic ECG Classification Using Deep Neural Networks. Sensors 2020, 20, 7353. [CrossRef]
6. Liu, B. Research on Collaborative Scheduling Technology Based on Edge Computing. Master's Thesis, South China University of Technology, Guangzhou, China, 2019.
7. Jiao, J. Cooperative Task Scheduling in Mobile Edge Computing System. Master's Thesis, University of Electronic Science and Technology, Chengdu, China, 2018.
8. Zhao, J.; Li, Q.; Gong, Y.; Zhang, K. Computation Offloading and Resource Allocation for Cloud Assisted Mobile Edge Computing in Vehicular Networks. IEEE Trans. Veh. Technol. 2019, 68, 7944–7956. [CrossRef]
9. Lyu, X.; Ni, W.; Tian, H.; Liu, R.P.; Wang, X.; Giannakis, G.B.; Paulraj, A. Optimal Schedule of Mobile Edge Computing for Internet of Things Using Partial Information. IEEE J. Sel. Areas Commun. 2017, 35, 2606–2615. [CrossRef]
10. Mao, Y.; Zhang, J.; Letaief, K.B. Joint Task Offloading Scheduling and Transmit Power Allocation for Mobile-Edge Computing Systems. In Proceedings of the 2017 IEEE Wireless Communications and Networking Conference (WCNC), Institute of Electrical and Electronics Engineers (IEEE), San Francisco, CA, USA, 19–22 March 2017; pp. 1–6.
11. Tao, X.; Ota, K.; Dong, M.; Qi, H.; Li, K. Performance Guaranteed Computation Offloading for Mobile-Edge Cloud Computing. IEEE Wirel. Commun. Lett. 2017, 6, 774–777. [CrossRef]
12. Kim, Y.; Song, C.; Han, H.; Jung, H.; Kang, S. Collaborative Task Scheduling for IoT-Assisted Edge Computing. IEEE Access 2020, 8, 216593–216606. [CrossRef]
13. Wang, S.; Zafer, M.; Leung, K.K. Online placement of multi-component applications in edge computing environments. IEEE Access 2017, 5, 2514–2533. [CrossRef]
14. Zhao, T.; Zhou, S.; Guo, X.; Zhao, Y.; Niu, Z. A Cooperative Scheduling Scheme of Local Cloud and Internet Cloud for Delay-Aware Mobile Cloud Computing. In Proceedings of the 2015 IEEE Globecom Workshops (GC Wkshps), Institute of Electrical and Electronics Engineers (IEEE), San Diego, CA, USA, 6–10 December 2015; pp. 1–6.
15. Kao, Y.-H.; Krishnamachari, B.; Ra, M.-R.; Bai, F. Hermes: Latency optimal task assignment for resource-constrained mobile computing. In Proceedings of the 2015 IEEE Conference on Computer Communications (INFOCOM), Institute of Electrical and Electronics Engineers (IEEE), Kowloon, Hong Kong, 26 April–1 May 2015; pp. 1894–1902.
16. Cuervo, E.; Balasubramanian, A.; Cho, D.; Wolman, A.; Saroiu, S.; Chandra, R.; Bahl, P. MAUI: Making smartphones last longer with code offload. In Proceedings of the MobiSys, ACM, San Francisco, CA, USA, 15–18 June 2010; pp. 49–62.
17. Munoz, O.; Pascual-Iserte, A.; Vidal, J. Optimization of Radio and Computational Resources for Energy Efficiency in Latency-Constrained Application Offloading. IEEE Trans. Veh. Technol. 2015, 64, 4738–4755. [CrossRef]
18. Yang, L.; Cao, J.; Tang, S.; Han, D.; Suri, N. Run Time Application Repartitioning in Dynamic Mobile Cloud Environments. IEEE Trans. Cloud Comput. 2014, 4, 336–348. [CrossRef]
19. Yang, L.; Cao, J.; Cheng, H.; Ji, Y. Multi-User Computation Partitioning for Latency Sensitive Mobile Cloud Applications. IEEE Trans. Comput. 2015, 64, 2253–2266. [CrossRef]
20. Liu, L.; Chang, Z.; Guo, X.; Ristaniemi, T. Multi-objective optimization for computation offloading in mobile-edge computing. In Proceedings of the 2017 IEEE Symposium on Computers and Communications (ISCC), Heraklion, Greece, 3–6 July 2017; pp. 832–837.
21. Carson, K.; Thomason, J.; Wolski, R.; Krintz, C.; Mock, M. Mandrake: Implementing Durability for Edge Clouds. In Proceedings of the 2019 IEEE International Conference on Edge Computing (EDGE), Institute of Electrical and Electronics Engineers (IEEE), Milan, Italy, 8–13 July 2019; pp. 95–101.
22. Liu, J.; Mao, Y.; Zhang, J.; Letaief, K.B. Delay-optimal computation task scheduling for mobile-edge computing systems. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Institute of Electrical and Electronics Engineers (IEEE), Barcelona, Spain, 10–15 July 2016; pp. 1451–1455.
23. Lazar, A. The throughput time delay function of an M/M/1 queue (Corresp.). IEEE Trans. Inf. Theory 1983, 29, 914–918. [CrossRef]
24. Zhang, G.; Zhang, W.; Cao, Y.; Li, D.; Wang, L. Energy-Delay Tradeoff for Dynamic Offloading in Mobile-Edge Computing System with Energy Harvesting Devices. IEEE Trans. Ind. Inform. 2018, 14, 4642–4655. [CrossRef]
25. Chen, X. Decentralized Computation Offloading Game for Mobile Cloud Computing. IEEE Trans. Parallel Distrib. Syst. 2015, 26, 974–983. [CrossRef]
26. Chen, X.; Jiao, L.; Li, W.; Fu, X. Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing. IEEE/ACM Trans. Netw. 2016, 24, 2795–2808. [CrossRef]
27. Yuan, H.; Bi, J.; Zhou, M.; Liu, Q.; Ammari, A.C. Biobjective Task Scheduling for Distributed Green Data Centers. IEEE Trans. Autom. Sci. Eng. 2020. Available online: https://ieeexplore.ieee.org/document/8951255 (accessed on 29 December 2020). [CrossRef]
28. Guo, X.; Liu, S.; Zhou, M.; Tian, G. Dual-Objective Program and Scatter Search for the Optimization of Disassembly Sequences Subject to Multiresource Constraints. IEEE Trans. Autom. Sci. Eng. 2018, 15, 1091–1103. [CrossRef]
29. Fu, Y.; Zhou, M.; Guo, X.; Qi, L. Scheduling Dual-Objective Stochastic Hybrid Flow Shop with Deteriorating Jobs via Bi-Population Evolutionary Algorithm. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 5037–5048. [CrossRef]
30. Sheng, Z.; Pfersich, S.; Eldridge, A.; Zhou, J.; Tian, D.; Leung, V.C.M. Wireless acoustic sensor networks and edge computing for rapid acoustic monitoring. IEEE/CAA J. Autom. Sin. 2019, 6, 64–74. [CrossRef]
31. Yang, G.; Zhao, X.; Huang, J. Overview of task scheduling algorithms in cloud computing. Appl. Electron. Tech. J. 2019, 45, 13–17.
32. Zhang, P.; Zhou, M.; Wang, X. An Intelligent Optimization Method for Optimal Virtual Machine Allocation in Cloud Data Centers. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1725–1735. [CrossRef]
33. Yuan, H.; Bi, J.; Zhou, M. Spatial Task Scheduling for Cost Minimization in Distributed Green Cloud Data Centers. IEEE Trans. Autom. Sci. Eng. 2018, 16, 729–740. [CrossRef]
34. Yuan, H.; Zhou, M.; Liu, Q.; Abusorrah, A. Fine-Grained Resource Provisioning and Task Scheduling for Heterogeneous Applications in Distributed Green Clouds. IEEE/CAA J. Autom. Sin. 2020, 7, 1380–1393.
35. Alfakih, T.; Hassan, M.M.; Gumaei, A.; Savaglio, C.; Fortino, G. Task Offloading and Resource Allocation for Mobile Edge Computing by Deep Reinforcement Learning Based on SARSA. IEEE Access 2020, 8, 54074–54084. [CrossRef]
36. Yuchong, L.; Jigang, W.; Yalan, W.; Long, C. Task Scheduling in Mobile Edge Computing with Stochastic Requests and M/M/1 Servers. In Proceedings of the 2019 IEEE 21st International Conference on High Performance Computing and Communications, IEEE 17th International Conference on Smart City, IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), Institute of Electrical and Electronics Engineers (IEEE), Zhangjiajie, China, 10–12 August 2019; pp. 2379–2382.
37. Li, X.; Wan, J.; Dai, H.-N.; Imran, M.; Xia, M.; Celesti, A. A Hybrid Computing Solution and Resource Scheduling Strategy for Edge Computing in Smart Manufacturing. IEEE Trans. Ind. Inform. 2019, 15, 4225–4234. [CrossRef]
38. Zhang, Y.; Xie, M. A More Accurate Delay Model based Task Scheduling in Cellular Edge Computing Systems. In Proceedings of the 2019 IEEE 5th International Conference on Computer and Communications (ICCC), Institute of Electrical and Electronics Engineers (IEEE), Chengdu, China, 6–9 December 2019; pp. 72–76.
39. Zhang, Y.; Du, P. Delay-Driven Computation Task Scheduling in Multi-Cell Cellular Edge Computing Systems. IEEE Access 2019, 7, 149156–149167. [CrossRef]
40. Zhang, W.; Zhang, Z.; Zeadally, S.; Chao, H.-C. Efficient Task Scheduling with Stochastic Delay Cost in Mobile Edge Computing. IEEE Commun. Lett. 2019, 23, 4–7. [CrossRef]
41. Yuan, H.; Zhou, M. Profit-Maximized Collaborative Computation Offloading and Resource Allocation in Distributed Cloud and Edge Computing Systems. IEEE Trans. Autom. Sci. Eng. 2020. Available online: https://ieeexplore.ieee.org/document/9140317 (accessed on 29 December 2020). [CrossRef]
42. Xu, J.; Li, X.; Ding, R.; Liu, X. Energy efficient multi-resource computation offloading strategy in mobile edge computing. CIMS 2019, 25, 954–961.
43. Ning, Z.; Huang, J.; Wang, X.; Rodrigues, J.J.P.C.; Guo, L. Mobile Edge Computing-Enabled Internet of Vehicles: Toward Energy-Efficient Scheduling. IEEE Netw. 2019, 33, 198–205. [CrossRef]
44. Li, S.; Huang, J. Energy Efficient Resource Management and Task Scheduling for IoT Services in Edge Computing Paradigm. In Proceedings of the 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC), Institute of Electrical and Electronics Engineers (IEEE), Guangzhou, China, 12–15 December 2017; pp. 846–851.
45. Yoo, W.; Yang, W.; Chung, J. Energy Consumption Minimization of Smart Devices for Delay-Constrained Task Processing with Edge Computing. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 4–6 January 2020; pp. 1–3.
46. Zhang, Q.; Lin, M.; Yang, L.T.; Chen, Z.; Khan, S.U.; Li, P. A Double Deep Q-Learning Model for Energy-Efficient Edge Scheduling. IEEE Trans. Serv. Comput. 2018, 12, 739–749. [CrossRef]
47. Yang, Y.; Ma, Y.; Xiang, W.; Gu, X.; Zhao, H. Joint Optimization of Energy Consumption and Packet Scheduling for Mobile Edge Computing in Cyber-Physical Networks. IEEE Access 2018, 6, 15576–15586. [CrossRef]
48. Bi, J.; Yuan, H.; Duanmu, S.; Zhou, M.C.; Abusorrah, A. Energy-optimized Partial Computation Offloading in Mobile Edge Computing with Genetic Simulated-annealing-based Particle Swarm Optimization. IEEE Internet Things J. 2020. Available online: https://ieeexplore.ieee.org/document/9197634 (accessed on 29 December 2020). [CrossRef]
49. Yu, H.; Wang, Q.; Guo, S. Energy-Efficient Task Offloading and Resource Scheduling for Mobile Edge Computing. In Proceedings of the 2018 IEEE International Conference on Networking, Architecture and Storage (NAS), Institute of Electrical and Electronics Engineers (IEEE), Chongqing, China, 11–14 October 2018; pp. 1–4.
50. Mao, Y.; Zhang, J.; Song, S.H.; Letaief, K.B. Stochastic joint radio and computational resource management for multi-user mobile-edge computing systems. IEEE Trans. Wirel. Commun. 2017, 16, 5994–6009. [CrossRef]
51. Dinh, T.Q.; Tang, J.; La, Q.D.; Quek, T.Q.S. Offloading in Mobile Edge Computing: Task Allocation and Computational Frequency Scaling. IEEE Trans. Commun. 2017, 65, 1.
52. Sen, T.; Shen, H. Machine Learning based Timeliness-Guaranteed and Energy-Efficient Task Assignment in Edge Computing Systems. In Proceedings of the 2019 IEEE 3rd International Conference on Fog and Edge Computing (ICFEC), Institute of Electrical and Electronics Engineers (IEEE), Larnaca, Cyprus, 14–17 May 2019; pp. 1–10.
53. Sajjad, H.P.; Danniswara, K.; Al-Shishtawy, A.; Vlassov, V. SpanEdge: Towards Unifying Stream Processing over Central and Near-the-Edge Data Centers. In Proceedings of the 2016 IEEE/ACM Symposium on Edge Computing (SEC), Institute of Electrical and Electronics Engineers (IEEE), Washington, DC, USA, 27–28 October 2016; pp. 168–178.
54. Dong, Z.; Liu, Y.; Zhou, H.; Xiao, X.; Gu, Y.; Zhang, L.; Liu, C. An energy-efficient offloading framework with predictable temporal correctness. In Proceedings of the SEC '17: IEEE/ACM Symposium on Edge Computing, San Jose, CA, USA, 12–14 October 2017; pp. 1–12.
55. Ren, J.; Yu, G.; Cai, Y.; He, Y. Latency Optimization for Resource Allocation in Mobile-Edge Computation Offloading. IEEE Trans. Wirel. Commun. 2018, 17, 5506–5519. [CrossRef]
56. Wang, J.; Yue, Y.; Wang, R.; Yu, M.; Yu, J.; Liu, H.; Ying, X.; Yu, R. Energy-Efficient Admission of Delay-Sensitive Tasks for Multi-Mobile Edge Computing Servers. In Proceedings of the 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), Institute of Electrical and Electronics Engineers (IEEE), Tianjin, China, 4–6 December 2019; pp. 747–753.
57. Cao, X.; Wang, F.; Xu, J.; Zhang, R.; Cui, S. Joint Computation and Communication Cooperation for Energy-Efficient Mobile Edge Computing. IEEE Internet Things J. 2019, 6, 4188–4200. [CrossRef]
58. Zhang, H.; Guo, J.; Yang, L.; Li, X.; Ji, H. Computation offloading considering fronthaul and backhaul in small-cell networks integrated with MEC. In Proceedings of the 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Institute of Electrical and Electronics Engineers (IEEE), Atlanta, GA, USA, 1–4 May 2017; pp. 115–120.
59. Chen, M.; Hao, Y. Task Offloading for Mobile Edge Computing in Software Defined Ultra-Dense Network. IEEE J. Sel. Areas Commun. 2018, 36, 587–597. [CrossRef]
60. Sun, Y.; Zhou, S.; Xu, J. EMM: Energy-Aware Mobility Management for Mobile Edge Computing in Ultra Dense Networks. IEEE J. Sel. Areas Commun. 2017, 35, 2637–2646. [CrossRef]
61. Nan, Y.; Li, W.; Bao, W.; Delicato, F.C.; Pires, P.F.; Dou, Y.; Zomaya, A.Y. Adaptive Energy-Aware Computation Offloading for Cloud of Things Systems. IEEE Access 2017, 5, 23947–23957. [CrossRef]
62. Sahni, Y.; Cao, J.; Yang, L.; Ji, Y. Multi-Hop Offloading of Multiple DAG Tasks in Collaborative Edge Computing. IEEE Internet Things J. 2020. Available online: https://ieeexplore.ieee.org/document/9223724 (accessed on 29 December 2020). [CrossRef]
63. Sahni, Y.; Cao, J.; Yang, L.; Ji, Y. Multi-Hop Multi-Task Partial Computation Offloading in Collaborative Edge Computing. IEEE Trans. Parallel Distrib. Syst. 2020, 32, 1.
64. Zhang, P.; Zhou, M.; Fortino, G. Security and trust issues in Fog computing: A survey. Futur. Gener. Comput. Syst. 2018, 88, 16–27. [CrossRef]
65. Wang, X.; Ning, Z.; Zhou, M.; Hu, X.; Wang, L.; Zhang, Y.; Yu, F.R.; Hu, B. Privacy-Preserving Content Dissemination for Vehicular Social Networks: Challenges and Solutions. IEEE Commun. Surv. Tutorials 2018, 21, 1314–1345. [CrossRef]
66. Huang, X.; Ye, D.; Yu, R.; Shu, L. Securing parked vehicle assisted fog computing with blockchain and optimal smart contract design. IEEE/CAA J. Autom. Sin. 2020, 7, 426–441. [CrossRef]
67. Zhang, Y.; Du, L.; Lewis, F.L. Stochastic DoS attack allocation against collaborative estimation in sensor networks. IEEE/CAA J. Autom. Sin. 2020, 7, 1–10. [CrossRef]
68. Zhang, P.; Zhou, M. Security and Trust in Blockchains: Architecture, Key Technologies, and Open Issues. IEEE Trans. Comput. Soc. Syst. 2020, 7, 790–801. [CrossRef]
69. Oevermann, J.; Weber, P.; Tretbar, S.H. Encapsulation of Capacitive Micromachined Ultrasonic Transducers (CMUTs) for the Acoustic Communication between Medical Implants. Sensors 2021, 21, 421. [CrossRef]
70. Deng, S.; Zhao, H.; Fang, W.; Yin, J.; Dustdar, S.; Zomaya, A.Y. Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence. IEEE Internet Things J. 2020, 7, 7457–7469. [CrossRef]
71. Fortino, G.; Messina, F.; Rosaci, D.; Sarne, G.M.L. ResIoT: An IoT social framework resilient to malicious activities. IEEE/CAA J. Autom. Sin. 2020, 7, 1263–1278. [CrossRef]
72. Wang, F.-Y. Parallel Intelligence: Belief and Prescription for Edge Emergence and Cloud Convergence in CPSS. IEEE Trans. Comput. Soc. Syst. 2020, 7, 1105–1110. [CrossRef]

Journal: Sensors (Basel, Switzerland)
Published: Jan 24, 2021
