Tag Archives: 2019

Linearize, predict and place: minimizing the makespan for edge-based stream processing of directed acyclic graphs

Khare, S., Sun, H., Gascon-Samson, J., Zhang, K., Gokhale, A., Barve, Y., Bhattacharjee, A. and Koutsoukos, X. (2019). Linearize, predict and place: minimizing the makespan for edge-based stream processing of directed acyclic graphs. Symposium on Edge Computing (SEC) 2019
[Preprint]

Abstract: Many IoT applications found in cyber-physical systems, such as smart grids, must take control actions in response to critical events, such as supply-demand mismatch, which requires low-latency processing of streaming data for rapid event detection and anomaly remediation. These streaming applications generally take the form of directed acyclic graphs (DAGs), where vertices represent operators and edges represent the flow of data between these operators. Edge computing has recently attracted significant attention as a means to readily meet the requirements of latency-critical IoT applications due to its ability to provide low-latency processing near the source of data. To accrue the benefits of edge computing, the constituent operators of these applications must be placed in a manner that intelligently trades off inter-operator communication costs against the cost of interference incurred due to co-location of operators on the same resource-constrained edge devices. To address these challenges and to substantially simplify the placement problem for DAGs of arbitrary sizes and topologies, we present an algorithm that first transforms any arbitrary stream processing DAG into an approximate set of linear chains. Subsequently, a data-driven latency prediction model for co-located linear chains is used to inform the placement of operators such that the makespan, defined as the maximum latency over all paths in the DAG, is minimized. We empirically evaluate our algorithm using a variety of DAG placement scenarios on a BeagleBone cluster, which is representative of an edge computing environment.
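
The two core notions are easy to make concrete. Below is a minimal Python sketch, not the paper's algorithm: it computes the makespan as the maximum end-to-end latency over all source-to-sink paths, and approximates a DAG by greedily peeling off chains. The constant per-operator latencies and the greedy `linearize` heuristic are stand-ins for the paper's data-driven prediction model and its actual linearization step.

```python
# Illustrative sketch only: constant latencies stand in for the paper's
# data-driven prediction model, and greedy chain peeling is a stand-in
# for the paper's actual linearization step.

def all_paths(dag, sources, sinks):
    """Enumerate every source-to-sink path in a DAG given as {vertex: [successors]}."""
    paths = []
    def walk(v, path):
        if v in sinks:
            paths.append(path)
        for succ in dag.get(v, []):
            walk(succ, path + [succ])
    for s in sources:
        walk(s, [s])
    return paths

def makespan(dag, sources, sinks, latency):
    """Makespan = maximum end-to-end latency over all paths in the DAG."""
    return max(sum(latency[v] for v in p) for p in all_paths(dag, sources, sinks))

def linearize(dag, sources, sinks):
    """Approximate the DAG as linear chains by repeatedly peeling the longest path."""
    covered, chains = set(), []
    for p in sorted(all_paths(dag, sources, sinks), key=len, reverse=True):
        fresh = [v for v in p if v not in covered]
        if fresh:
            chains.append(fresh)
            covered.update(fresh)
    return chains

# Toy diamond DAG: a source fans out to two operators that join at a sink.
dag = {"src": ["f1", "f2"], "f1": ["sink"], "f2": ["sink"], "sink": []}
latency = {"src": 2.0, "f1": 5.0, "f2": 3.0, "sink": 1.0}
print(makespan(dag, {"src"}, {"sink"}, latency))  # 8.0, via src -> f1 -> sink
print(linearize(dag, {"src"}, {"sink"}))          # [['src', 'f1', 'sink'], ['f2']]
```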

OneOS: POSIX + Actors = General-Purpose IoT Platform

Jung, K., Gascon-Samson, J., Pattabiraman, K. (2019). OneOS: POSIX + Actors = General-Purpose IoT Platform (Poster). EuroSys 2019, Dresden, Germany
[Preprint]

Abstract: The Internet of Things (IoT) is now a reality. With an increasing number of "smart" devices, a recent interest in Edge/Fog Computing has challenged IoT platforms to support general-purpose workloads on arbitrary devices with the same performance and reliability guarantees as the Cloud. We present a design of an IoT platform called OneOS, resembling a Distributed Operating System, to provide a single-system image of the entire network of computers. OneOS operates over an abstract machine comprising a grid of high-level language runtimes modeled as Actors. We demonstrate an evaluation context replacement technique for mapping the POSIX interface over the networked system to run regular JavaScript and Python programs on OneOS without any modification.
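
The abstract's key abstraction, a grid of runtimes modeled as Actors, can be illustrated with a toy single-process Python sketch. The `Actor` class and message format below are invented for illustration; OneOS distributes such actors across networked language runtimes.

```python
# Toy illustration of the actor abstraction OneOS builds on: each runtime is
# modeled as an actor that owns a mailbox and handles one message at a time.
# This is a single-process sketch, not OneOS code.

import queue
import threading
import time

class Actor:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler              # invoked once per message, in order
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)               # the only way to interact with an actor

    def _run(self):
        while True:
            self.handler(self, self.mailbox.get())

# Two "runtimes": one emits sensor readings, one logs them.
logger = Actor("logger", lambda self, m: print(f"[{self.name}] {m}"))
sensor = Actor("sensor", lambda self, m: logger.send(f"reading={m}"))

sensor.send(21.5)
time.sleep(0.1)                             # let the daemon threads drain their mailboxes
```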

OneOS: IoT Platform based on POSIX and Actors

Jung, K., Gascon-Samson, J., Pattabiraman, K. (2019). OneOS: IoT Platform based on POSIX and Actors. HotEdge 2019, Renton, United States
[Preprint] [Presentation Slides] [Code]

Abstract: Recent interest in Edge/Fog Computing has pushed IoT platforms to support a broader range of general-purpose workloads. We propose a design of an IoT platform called OneOS, inspired by Distributed OS and micro-kernel principles, providing a single system image of the IoT network. OneOS aims to preserve the portability of applications by reusing a subset of the POSIX interface at a higher layer over a flat group of Actors. As a distributed middleware, OneOS achieves its goal through evaluation context replacement, which enables a process to run in a virtual context rather than its local context.
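
Here is a toy sketch of what evaluation context replacement means in practice. It is purely illustrative: the `VirtualStdout` class and the list-based transport are invented, and OneOS replaces the POSIX interface and routes I/O over the network rather than into a list.

```python
# Toy illustration of evaluation context replacement: an unmodified program
# keeps calling print(), but its stdout has been swapped for a virtual stream.

import sys

class VirtualStdout:
    """Stand-in for a network-backed output stream (hypothetical transport)."""
    def __init__(self, transport):
        self.transport = transport

    def write(self, data):
        if data.strip():
            self.transport.append(data)   # in OneOS: forward over the network
        return len(data)

    def flush(self):
        pass

def unmodified_program():
    print("hello from the edge")          # unaware its I/O context was replaced

bus = []
sys.stdout = VirtualStdout(bus)           # replace the evaluation context
unmodified_program()
sys.stdout = sys.__stdout__               # restore the local context
print("captured:", bus)                   # captured: ['hello from the edge']
```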

Failure Prediction in the Internet of Things due to Memory Exhaustion

Rafiuzzaman M., Gascon-Samson J., Pattabiraman K., Gopalakrishnan S. (2019) Failure Prediction in the Internet of Things due to Memory Exhaustion. 34th ACM Symposium on Applied Computing (SAC 2019), Limassol, Cyprus
Acceptance ratio: 27.5%
[Preprint] [Presentation Slides]

Abstract: We present a technique to predict failures resulting from memory exhaustion in devices built for the modern Internet of Things (IoT). These devices can run general-purpose applications on the network edge for local data processing to reduce latency, bandwidth and infrastructure costs, and to address data safety and privacy concerns. Applications are, however, not optimized for all devices and can cause sudden and unexpected memory exhaustion failures because of the limited memory available on those IoT devices. Proactive prediction of such failures, with sufficient lead time, allows for adaptation of the application or its safe termination. Our memory failure prediction technique for applications running on IoT devices uses k-Nearest-Neighbor (kNN)-based machine learning models. We have evaluated our technique using two third-party applications and a real-world IoT simulation application on two different IoT platforms and on an Amazon EC2 t2.micro instance for both single-tenancy and multitenancy use cases. Our results indicate that our technique significantly outperforms simpler threshold-based techniques: in our test applications, with 180 seconds of lead time, failures were accurately predicted with 88% recall at 74% precision for a single application failure and 76% recall at 71% precision for multitenancy failures.
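
The general recipe is straightforward to sketch with scikit-learn. The synthetic traces, window and lead-time sizes, and memory limit below are all invented assumptions for illustration, not the paper's data, feature set, or parameters.

```python
# Hedged sketch of the general recipe: train a kNN classifier to flag, from a
# short window of recent memory readings, whether exhaustion will occur within
# the lead time.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

WINDOW, LEAD = 10, 18        # feature window and lead time, in samples (assumed)
LIMIT = 512.0                # assumed memory budget in MB

def windows(trace):
    """Slice a memory trace into (recent-window, exhausts-within-lead-time) pairs."""
    X, y = [], []
    for t in range(len(trace) - WINDOW - LEAD):
        X.append(trace[t:t + WINDOW])
        y.append(int(max(trace[t + WINDOW:t + WINDOW + LEAD]) >= LIMIT))
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
leaky = np.cumsum(rng.uniform(2, 6, 200)) + 50   # slow leak that exhausts memory
stable = 100 + rng.normal(0, 5, 200)             # healthy trace, stays well below

X, y = (np.concatenate(parts) for parts in zip(windows(leaky), windows(stable)))
model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

recent = leaky[95:105]                           # readings just before exhaustion
print("failure imminent:", bool(model.predict([recent])[0]))
```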