PAISE 2021

3rd Workshop on Parallel AI and Systems for the Edge

PAISE 2021 will be held virtually in conjunction with IPDPS 2021 on May 21st, 2021.

Past Editions:

2020 2019



Applications involving voluminous data but needing low-latency computation and local feedback require that the computing be performed as close to the data source as possible. Communication constraints and the need for privacy-preserving approaches also dictate the need for computing at the edge. Given the growth in such application scenarios and the recent advances in algorithms and techniques, machine learning and inference at the edge are unfolding and growing at a rapid pace. In support of these applications, a wide range of hardware (CPUs, GPUs, ASICs) is venturing farther away from the center, enabling computation at, or close to, the interface to the physical world.

The resulting diversity in edge-computing hardware in terms of capabilities, architectures, and programming models, as well as the differing runtime requirements and resource constraints of edge applications, poses several new challenges. Some edge applications may need to run continuously, whereas others may run only when particular events occur. Furthermore, situations may warrant running applications in sandboxes for privacy, security, and resource-allocation purposes. Because the capacity at the edge is usually limited in terms of computation, energy, and network bandwidth, these applications need to be scheduled simultaneously and run concurrently. Consequently, a future with heterogeneous edge hardware and multiple applications sharing the underlying resources is imminent.

Deploying and managing such applications with diverse properties at the edge in a concurrent manner requires considerations for multitenancy and presents a challenge that requires cooperation and coordination between the various components of the software stack. Mechanisms need to be devised that communicate both data and control with the applications in order to fine-tune their behavior and change their operational parameters. Realizing the computing continuum and coupling these edge applications with centrally located cloud and HPC resources and applications also opens up many research areas.

As we push more toward edge-enabled networks of devices, we inherit a setting where resources are deployed away from the safety of secure indoor spaces, often in the midst of a bustling urban canyon, and exposed to physical and cybersecurity threats. Deployed and interconnected predominantly over public networks, these systems have to be designed with cybersecurity as a first-class design citizen, rather than introduced as an afterthought.

Another prominent and relevant advancement to this evolving landscape is the evolution of last-mile wireless connectivity. The emergence of 5G and Wi-Fi 6, and their likely convergence, will initially provide a combination of bandwidth improvements and latency reduction. Together with advancements in processor technology, this will enable us to deploy more advanced sensors, actuators, and services than is possible today. Next-generation wireless connectivity will also deploy radio base stations more densely and with substantially higher computing capability, both for core operations and as services for end users. The allocation and orchestration of these computing resources for various competing user needs and network functions will share many of the challenges currently encountered in edge computing. Perhaps the largest DevOps and management challenges in edge computing at the infrastructure level will be witnessed in this space.

The goal of this workshop is hence to gather the community working in three broad areas:

  • processing — artificial intelligence, computer vision, machine learning, data reduction;
  • management — parallel and distributed programming models for resource-constrained and domain-specific hardware, containers, remote resource management, DevOps, runtime-system design, and cybersecurity; and
  • hardware — systems and devices conducive to use in resource-constrained (energy, space, etc.) and harsh edge environments.
The workshop will provide a critically needed opportunity to discuss the current trends and issues, to share visions, and to present solutions.


For this workshop we welcome original work covering three broad topics: Edge AI/ML and Data, Edge Architecture, and Edge Practice. In particular, we welcome work and discussions on:

    • Adaptive Training Sampling
    • AI Enabled IoT Applications at the Edge
    • Collaborative Training at the Edge
    • Cyber-Security and Privacy Aspects of Edge Computing
    • DevOps for Deploying and Managing Applications at the Edge
    • Edge Data and Storage Management
    • Edge Inference
    • Edge driven HPC, and HPC steered Edge Computing
    • Efficient Edge-Cloud Data Orchestration
    • Enabling SDN/NFV at the Edge
    • Energy Efficient Processors for Training and Inference
    • Hardware for Edge-computing and Machine Learning
    • Opportunities for Edge Computing driven by WiFi-Cellular (5G/WiFi6) Convergence
    • Programming Models for Edge Computing
    • Software and Hardware Multitenancy at the Edge

Paper Submission, Paper Style, and Proceedings

All papers must be original and not simultaneously submitted to another journal or conference. The papers submitted to the workshop will be peer reviewed by a minimum of 3 reviewers.

The following paper categories are welcome:

  • Full Papers: Research papers should describe original work and be 6 - 8 pages in length. The papers will be presented as 20 min talks.
  • Short Papers: Short research papers, 4 or 5 pages in length, should contain enough information for the program committee to understand the scope of the project and evaluate the novelty of the problem or approach. The papers will be presented as 15 min talks.
  • Concept Papers and Practitioner Reports: Short papers, reports, and extended abstracts 2 - 3 pages in length. These papers may describe new concepts; emerging hardware and software platforms; or DevOps and management of IoT/edge systems, including initial proof-of-concept designs and implementations. Reports may also focus on a particular aspect of technology usage in practice or describe broad project experiences, such as a particular design idea or experience with a particular piece of technology. The papers will be presented as 8-10 min lightning talks.

Templates for MS Word and LaTeX provided by IEEE eXpress Conference Publishing are available for download. See the latest versions here.

Here is a link to the EasyChair CFP. Upload your submission to EasyChair submission server in PDF format. Accepted manuscripts will be included in the IPDPS workshop proceedings.

Important Dates:

  • February 24th (AOE), 2021: Submission deadline (extended from February 7th).
  • March 22nd (AOE), 2021: Notification of acceptance (extended from March 11th).
  • March 31st, 2021: Camera-ready papers due (extended from March 15th).
  • May 21st, 2021: Workshop!!!

Program Committee

  • Paarijaat Aditya, Nokia Bell Labs
  • Istemi Ekin Akkus, Nokia Bell Labs
  • Marco Brocanelli, Wayne State University
  • Kevin Chan, Army Research Laboratory
  • Ruichuan Chen, Nokia Bell Labs
  • Lucy Cherkasova, ARM Research
  • Nirmit V Desai, IBM Research
  • Nicolas Erdody, Open Parallel
  • Nicola Ferrier, Argonne National Laboratory
  • Felipe M. G. Fran├ža, Universidade Federal do Rio de Janeiro (UFRJ)
  • Dennis Gannon, Indiana University Bloomington
  • Dawei Li, Amazon Inc.
  • Diego Lugones, Nokia Bell Labs
  • Eric Van Hensbergen, ARM Research
  • Eric Matson, Purdue University
  • Leandro Marzulo, Google LLC
  • Michael Papka, Northern Illinois University
  • Koichi Shinoda, Tokyo Institute of Technology
  • Jerry Trahan, Louisiana State University
  • Sean Shahkarami, University of Chicago
  • Ramachandran Vaidyanathan, Louisiana State University
  • Wei Wang, The Hong Kong University of Science and Technology
  • Feng Yan, University of Nevada, Reno
  • Kazutomo Yoshii, Argonne National Laboratory

General Chairs

Workshop Organizers