Dike: Deep Reinforcement Learning For Function Scheduling in SLO-targeted Serverless Edge Computing
Accepted version
Peer-reviewed
Abstract
Serverless computing is regarded as a good match for distributed edge infrastructures. However, bringing the function-as-a-service model to a highly dynamic, distributed, and heterogeneous pool of resources poses a fair number of challenges. Allocating each function to the proper resource is an essential operation that avoids over-provisioning and the associated waste of computational resources. In this work, we propose Dike, a bi-level function scheduling and resource allocation framework designed to meet end-to-end latency SLOs (service-level objectives). Dike leverages deep reinforcement learning to balance resource provisioning and monetary cost by incorporating both a composite cost and SLO violations into the reward. Extensive simulations with real-world production workloads demonstrate the superiority of Dike: the proposed algorithms approximate the results of a state-of-the-art ILP solver within a factor of 1.08 while dramatically reducing the scheduling time.
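The reward structure described above, combining a monetary cost term with an SLO-violation penalty, can be sketched as follows. This is a minimal illustrative example, not the paper's exact formulation: the weighted-sum form, the function name `composite_reward`, and the weight values are assumptions.

```python
# Hypothetical composite reward for a DRL scheduler, assuming a weighted sum
# of monetary cost and an SLO-violation penalty; weights and penalty form are
# illustrative, not the paper's actual reward definition.
def composite_reward(monetary_cost, latency, slo_latency,
                     cost_weight=0.5, violation_penalty=10.0):
    """Return a negative reward: lower cost and meeting the SLO score higher."""
    # Penalize only the latency in excess of the end-to-end SLO target.
    slo_violation = max(0.0, latency - slo_latency)
    return -(cost_weight * monetary_cost
             + violation_penalty * slo_violation)

# A placement that meets its SLO earns a higher (less negative) reward
# than an equally priced placement that violates it.
r_ok = composite_reward(monetary_cost=2.0, latency=80.0, slo_latency=100.0)
r_bad = composite_reward(monetary_cost=2.0, latency=130.0, slo_latency=100.0)
print(r_ok > r_bad)  # True
```

Under such a shaping, the agent is steered toward cheap placements only insofar as they do not breach the latency SLO, since violations dominate the penalty term.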