
Dike: Deep Reinforcement Learning For Function Scheduling in SLO-targeted Serverless Edge Computing

Accepted version
Peer-reviewed

Abstract

Serverless computing is regarded as a good match for distributed edge infrastructures. However, bringing the function-as-a-service model to a highly dynamic, distributed, and heterogeneous pool of resources comes with its fair share of challenges. Allocating functions to the proper resources is an essential operation that avoids over-provisioning and the consequent waste of computational resources. In this work, we propose Dike, a bi-level function scheduling and resource allocation framework designed to meet end-to-end latency SLOs (service-level objectives). Dike leverages deep reinforcement learning to balance resource provisioning and monetary cost by incorporating both a composite cost and SLO violations into the reward. Extensive simulations with real-world production workloads demonstrate the superiority of Dike. Experimental results show that the proposed algorithms approximate the results of a state-of-the-art ILP solver within a factor of 1.08 while dramatically reducing the scheduling time.
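The abstract describes a reward that combines a composite cost with an SLO-violation penalty. A minimal sketch of what such a reward could look like is given below; the function name, the linear cost combination, and the weights `alpha` and `beta` are illustrative assumptions, not Dike's actual formulation.

```python
# Hypothetical sketch of a composite reward as described in the abstract:
# the agent is penalized both for cost (resource + monetary) and for
# exceeding the end-to-end latency SLO. All names and weights here are
# assumptions for illustration, not taken from the paper.

def reward(resource_cost: float, monetary_cost: float,
           latency_ms: float, slo_ms: float,
           alpha: float = 0.5, beta: float = 10.0) -> float:
    """Negative composite cost, minus an extra penalty on SLO violation."""
    composite = alpha * resource_cost + (1 - alpha) * monetary_cost
    # Penalize only the amount by which latency exceeds the SLO target.
    violation = max(0.0, latency_ms - slo_ms)
    return -(composite + beta * violation)

# Meeting the SLO leaves only the cost term; violating it adds a penalty.
print(reward(1.0, 2.0, 80.0, 100.0))
print(reward(1.0, 2.0, 120.0, 100.0))
```

With this shape, `beta` controls how strongly SLO violations dominate the cost terms, which is one common way to trade off provisioning cost against latency targets in a reinforcement-learning reward.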

Journal Title

ISCC

Conference Name

30th IEEE Symposium on Computers and Communications

Journal ISSN

2642-7389

Publisher

IEEE

Rights and licensing

Except where otherwise noted, this item's license is described as Attribution 4.0 International
Sponsorship
Horizon Europe UKRI Underwrite Innovate (10066543)