Caching, Computing and Delivery in Wireless Networks Workshop (CCDWN)

Invited Speakers

Abstract: Crucial performance metrics of a caching algorithm include its ability to quickly and accurately learn the popularity distribution of requests. However, the majority of analytical performance work focuses on the hit probability after an asymptotically large time has elapsed. We instead take an online learning viewpoint and characterize the “regret”: the finite-time difference between the hits achieved by a candidate caching algorithm and those of a genie-aided scheme that places the most popular items in the cache. We first consider the Full Observation regime, wherein all requests are seen by the cache. We show that the Least Frequently Used (LFU) algorithm achieves order-optimal regret, which is matched by an efficient counting algorithm design that we call LFU-Lite. We then consider the Partial Observation regime, wherein only requests for items currently cached are seen by the cache, making the problem akin to a multi-armed bandit. We show how approaching this “caching bandit” with traditional techniques yields either high complexity or high regret, whereas a simple algorithm design that exploits the structure of the distribution ensures order-optimal regret. We conclude by illustrating our insights using numerical simulations. This work is in collaboration with Desik Rengarajan, Archana Bura, Dileep Kalathil, and Jean-Francois Chamberland, all at Texas A&M University.
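The regret notion above lends itself to a quick simulation. The sketch below is purely illustrative (the Zipf popularity, catalog size, cache size, and horizon are assumptions, not parameters from the talk): it counts the hits of a full-observation LFU cache against a genie that always caches the truly most popular items, and reports the difference, i.e., the regret.

import numpy as np

def simulate_regret(num_items=100, cache_size=10, horizon=10000, zipf_s=1.2, seed=0):
    """Illustrative sketch: hits of LFU vs. a genie caching the most popular items."""
    rng = np.random.default_rng(seed)
    # Zipf-like popularity over a finite catalog (assumed request model)
    popularity = 1.0 / np.arange(1, num_items + 1) ** zipf_s
    popularity /= popularity.sum()
    genie_set = set(range(cache_size))        # genie caches the truly most popular items
    counts = np.zeros(num_items)              # LFU request counters (full observation)
    lfu_hits = genie_hits = 0
    for _ in range(horizon):
        item = rng.choice(num_items, p=popularity)
        # LFU cache = items with the largest observed request counts so far
        lfu_set = set(np.argsort(-counts)[:cache_size])
        lfu_hits += item in lfu_set
        genie_hits += item in genie_set
        counts[item] += 1
    return genie_hits - lfu_hits              # regret accumulated over the horizon

print(simulate_regret())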

Abstract: We consider the problem of cache-aided Multiuser Private Information Retrieval (MuPIR), which extends the single-user cache-aided PIR problem to multiple users. In cache-aided MuPIR, each cache-equipped user wishes to privately retrieve a message out of K messages from N databases, each having access to the entire message library. Demand privacy requires that no individual database learns anything about the demands of the users. The users are connected to each database via an error-free shared link. When the cached information at the users is known to the servers, the fundamental trade-off between user cache memory and communication load remains wide open. In this talk, we will present some preliminary progress on this problem. In particular, we first propose a product design achieving the order-optimal trade-off. This result shows that the benefits of coded caching and PIR can be “multiplied”. We then introduce a new cache-aided interference alignment (CIA) approach that achieves exact optimality under some parameter settings. In addition, we introduce a novel decoding tree approach to characterize the converse of this trade-off.
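For context on why the gains can be “multiplied”, it helps to recall the two single-ingredient baselines. The expressions below are standard results (Maddah-Ali and Niesen for coded caching; Sun and Jafar for single-user PIR), not results from the talk; K_u denotes the number of users, a symbol assumed here since the abstract does not name it:

\[
R_{\mathrm{MAN}}(M) \;=\; \frac{K_u\bigl(1-\tfrac{M}{K}\bigr)}{1+\tfrac{K_u M}{K}},
\qquad
C_{\mathrm{PIR}} \;=\; \Bigl(1+\frac{1}{N}+\cdots+\frac{1}{N^{K-1}}\Bigr)^{-1},
\]

where M is each user's cache size (in messages), K the number of messages, and N the number of databases. The abstract's claim that the benefits can be “multiplied” can be read against these baselines: the product design combines the multicast gain of coded caching with the download efficiency of PIR rather than paying for caching and privacy separately.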

Optimizing Contributions to Distributed, Networked Learning

Carlee Joe-Wong (Carnegie Mellon University)

Abstract: As the Internet of Things continues to proliferate, enormous amounts of data are being generated by increasing numbers of devices, which are generally equipped with varying amounts of compute and communication resources. Much recent work has proposed methods to analyze these data in order to produce useful insights, e.g., through variations on federated learning where devices run local computations on data they collect, with occasional communication between them to aggregate their local models. In this talk, we focus on how multiple devices should contribute to such distributed learning systems. These contributions may take multiple forms, including reserving energy for local computations, sacrificing privacy by sharing models learned on local data, and utilizing limited communication bandwidth when aggregating local models. Optimizing such contributions requires individual devices to make decisions on what resources they are willing to provide to the learning task (e.g., in exchange for payments offered by the learning operator), and conversely may require the operator to design aggregation methods that maximally leverage each device’s ability to contribute local computations between model aggregations. We propose novel methods for operators to recruit devices to participate in their learning tasks, and novel aggregation and task assignment algorithms for them to optimize the data, compute, and communication resources offered by these recruited devices.
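As a concrete, purely illustrative view of the aggregation step whose design is discussed above, the sketch below performs a FedAvg-style weighted aggregation in which each recruited device's local model is weighted by how much it chose to contribute (e.g., local sample count or compute budget). The weighting rule and all names are assumptions for illustration, not the algorithms proposed in the talk.

import numpy as np

def aggregate(global_model, local_models, contributions):
    """FedAvg-style aggregation weighted by each device's contributed data/compute.

    global_model, local_models[i]: 1-D numpy arrays of model parameters.
    contributions[i]: nonnegative scalar (e.g., local sample count or CPU budget).
    Devices contributing nothing are simply ignored.
    """
    weights = np.asarray(contributions, dtype=float)
    if weights.sum() == 0:
        return global_model                   # nobody participated this round
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

# Toy usage: three devices with unequal contributions
g = np.zeros(4)
locals_ = [np.ones(4), 2 * np.ones(4), 4 * np.ones(4)]
print(aggregate(g, locals_, contributions=[100, 50, 0]))  # weighted toward device 0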

Abstract: The talk will elaborate on the recent breakthroughs and open challenges in applying coded caching in wireless settings. We will see how the wireless medium forces a new set of fundamental challenges and bottlenecks which can -- unless dealt with -- all but dissolve some of the theoretical gains of coded caching as we know it. At the same time though, we will analyze some new opportunities that the wireless medium offers towards further leveraging the ideas behind coded multicasting. We will also elaborate on open challenges in information theoretic converses, open challenges in applying combinatorics to reduce the subpacketization problem, as well as a variety of challenges for the communication theorists and network theorists in the audience. At the end, we will seek to shed light on a much more open-ended question: Can caching truly make a difference as a core technology in wireless communications?
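As a reference point for the subpacketization bottleneck mentioned above, recall the standard Maddah-Ali–Niesen scheme for K users with a library of N files and normalized cache size \(\gamma = M/N\) (textbook expressions, not results from the talk): its worst-case delivery load and required number of subpackets per file are

\[
R(\gamma) \;=\; \frac{K(1-\gamma)}{1+K\gamma},
\qquad
F_{\mathrm{sub}} \;=\; \binom{K}{K\gamma},
\]

so the load reduction comes at a subpacketization that grows exponentially in K for fixed \(\gamma\), which is one of the combinatorial challenges referred to in the talk.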

Bio: Petros Elia received the B.Sc. degree from the Illinois Institute of Technology, and the M.Sc. and Ph.D. degrees in electrical engineering from the University of Southern California (USC), Los Angeles, in 2001 and 2006 respectively. He is now a professor with the Department of Communication Systems at EURECOM in Sophia Antipolis, France. His latest research deals with the intersection of coded caching and feedback-aided communications in multiuser settings. He has also worked in the area of complexity-constrained communications, MIMO, queueing theory and cross-layer design, coding theory, information theoretic limits in cooperative communications, and surveillance networks. He is a Fulbright scholar, the co-recipient of the NEWCOM++ distinguished achievement award 2008-2011 for a sequence of publications on the topic of the computation-vs-communications tradeoff, and the recipient of the ERC Consolidator Grant 2017-2023 on cache-aided wireless communications and computations.

Cache Networks with Optimality Guarantees

Stratis Ioannidis (Northeastern University)

Abstract: We study networks in which nodes act as caches that store objects. Requests for objects are routed towards nodes that store them; as a result, object traffic in the network is determined not only by demand but, crucially, by where objects are cached. We determine how to place objects in caches to attain a given design objective, such as minimizing network congestion or retrieval delays. We show that this optimization problem can be cast as an NP-hard submodular maximization problem subject to matroid constraints. Moreover, this structure arises in a broad class of modeling settings, including queueing with a variety of service disciplines as well as settings where caching and routing are jointly optimized.
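The matroid structure admits a simple greedy baseline. The sketch below is a minimal illustration with a made-up cache-hit-style objective, not the algorithm from the work being presented: it greedily fills per-node cache capacities, which form a partition matroid. For monotone submodular objectives, plain greedy is known to guarantee a 1/2 approximation under a matroid constraint, and continuous-greedy/pipage-rounding techniques improve this to 1-1/e.

import itertools

def greedy_placement(nodes, objects, capacity, gain):
    """Greedy cache placement under per-node capacity (a partition matroid).

    gain(placement, node, obj) -> marginal value of adding `obj` at `node`.
    Returns a dict: node -> set of cached objects.
    """
    placement = {v: set() for v in nodes}
    while True:
        candidates = [(v, o) for v, o in itertools.product(nodes, objects)
                      if o not in placement[v] and len(placement[v]) < capacity[v]]
        if not candidates:
            break
        best = max(candidates, key=lambda vo: gain(placement, *vo))
        if gain(placement, *best) <= 0:
            break
        placement[best[0]].add(best[1])
    return placement

# Toy objective (assumed): extra copies of an object are worth less network-wide
popularity = {"a": 5.0, "b": 3.0, "c": 1.0}
def toy_gain(placement, node, obj):
    copies = sum(obj in cached for cached in placement.values())
    return popularity[obj] / (1 + copies)

print(greedy_placement(nodes=["u", "v"], objects=list(popularity),
                       capacity={"u": 1, "v": 2}, gain=toy_gain))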

Bio: Stratis Ioannidis is an assistant professor in the Electrical and Computer Engineering Department of Northeastern University, in Boston, MA, where he also holds a courtesy appointment with the College of Computer and Information Science. He received his B.Sc. (2002) in Electrical and Computer Engineering from the National Technical University of Athens, Greece, and his M.Sc. (2004) and Ph.D. (2009) in Computer Science from the University of Toronto, Canada. Prior to joining Northeastern, he was a research scientist at the Technicolor research centers in Paris, France, and Palo Alto, CA, as well as at Yahoo Labs in Sunnyvale, CA. He is the recipient of an NSF CAREER award, a Google Faculty Research award, a Facebook Research award, and several best paper awards. His research interests span machine learning, distributed systems, networking, optimization, and privacy.

Keynotes

Abstract: Blockchain systems are already gaining popularity in a variety of applications due to their decentralized design, which is favorable in many settings. To overcome excessive storage and latency burdens, light nodes and side blockchains, respectively, have been proposed to enhance the basic blockchain architecture. However, both light nodes and side chains are vulnerable to data availability (DA) attacks by malicious nodes. Recently, Yu et al. proposed an erasure-code-based technique called the Coded Merkle Tree (CMT), which enables light nodes to detect a DA attack with high probability. The CMT method relies on randomly constructed LDPC codes. We build on this previous work and demonstrate that graph codes specifically designed for the target applications in blockchain systems outperform randomly constructed codes; intriguingly, the new finite-length code optimization framework unveils code properties beyond the established metrics.
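A standard back-of-the-envelope calculation (independent of the specific CMT construction in the talk) shows why random sampling by light nodes detects hiding: if an adversary must hide at least a fraction \(\alpha\) of the coded chunks to prevent decoding, then a light node that samples s chunks uniformly at random misses every hidden chunk with probability at most \((1-\alpha)^s\), so

\[
P_{\mathrm{detect}} \;\ge\; 1-(1-\alpha)^{s}.
\]

Loosely speaking, the code design (random LDPC codes in CMT, optimized graph codes in this work) aims to keep \(\alpha\) large, e.g., by enforcing large stopping sets, so that a small number of samples per light node already suffices.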

Towards 6G Edge Intelligence: Shannon Meets Turing

Kaibin Huang (The University of Hong Kong)

Abstract: TBD

Program

Please visit https://wiopt.online/workshops/track/ccdwn for the Technical Program of CCDWN.

Sanaullah Manzoor and Adnan Noor Mian, “Robust Federated Learning based Content Caching over Uncertain Wireless Transmission Channels in FRANs”
Shadab Mahboob, Koushik Kar and Jacob Chakareski, “Decentralized Collaborative Video Caching in 5G Small-Cell Base Station Cellular Networks”
Anu Krishna, Ramya Burra and Chandramani Singh, “Caching dynamic contents with varying popularity”

Contact

Derya Malak, deryamalak@gmail.com

Workshop Organizers

Derya Malak, deryamalak@gmail.com
Tianyi Chen, chent18@rpi.edu

Submission Instructions

All submissions will be handled through EasyChair. The submission link is the same as for WiOpt.

No-show Policy

At least one author must hold a full registration, either for the whole WiOpt event or for the workshop only, and present the work.

Important Dates

Paper submission: July 16, 2021, 23:59 EST
Notification of acceptance: August 8, 2021, 23:59 EST
Camera ready/registration due: August 13, 2021, 23:59 EST
Workshop: October 18, 2021 (Monday)

TPC Members

  • Alhussein A. Abouzeid (RPI)
  • Arjun Anand (Intel)
  • Ejder Bastug (Nokia Bell Labs)
  • Soumya Basu (Google)
  • Harpreet Dhillon (Virginia Tech)
  • Rafael D’Oliveira (MIT)
  • Yanjie Dong (The University of British Columbia)
  • Petros Elia (Eurecom)
  • Homa Esfahanizadeh (MIT)
  • Xiaowen Gong (Auburn University)
  • Stratis Ioannidis (Northeastern University)
  • George Iosifidis (Delft University of Technology)
  • Mingyue Ji (University of Utah)
  • Koushik Kar (RPI)
  • Marios Kountouris (Eurecom)
  • Jia Liu (OSU)
  • Mohammad Ali Maddah-Ali (Sharif University)
  • Navid NaderiAlizadeh (HRL Laboratories)
  • Konstantinos Poularakis (Yale University)
  • Alireza Sadeghi (UMN)
  • Srinivas Shakkottai (Texas A&M)
  • Cong Shen (University of Virginia)
  • Xingyu Zhou (Wayne State University)

Call for Papers

With the ever-growing demand for connectivity in 6G networks, limited air-interface, storage, and network resources and protocol constraints, together with throughput-hungry and delay-sensitive applications, challenge the effective processing of large volumes of data. The caching paradigm enables providers to offer faster downloads and cheaper services. Existing work in the field has demonstrated key savings through edge helpers, erasure coding, and distributed optimization techniques, and has shed light on the fundamental limits of bandwidth and storage requirements. To satisfy the QoS requirements of emerging applications, the focus of the 6th Caching, Computing and Delivery in Wireless Networks Workshop (CCDWN), previously known as Content Caching and Delivery in Wireless Networks, is to bring together techniques from optimization, information theory, and networking to enable personalized, intelligent, scalable, and secure caching at the edge and to meet the stringent requirements of 6G networks. We believe that high-quality contributions from researchers in the field will advance the understanding of performance limits, closing the gap between theory and practice.

Topics of interest include, but are not limited to:

  • Cache protocols, distributed and adaptive algorithms, effective relaxation and rounding techniques for placement to provide optimality guarantees
  • Joint optimization of caching, routing, rate, power, congestion, delay for meeting throughput-latency requirements, providing guarantees and resilience to failures
  • Fairness and privacy in caching
  • Distributed reinforcement learning for caching
  • Caching for networking, caching in HetNets, spatial and temporal caching, content updates in cache-aided networks, load balancing with asymmetric upload and download bandwidth, accounting for backhaul cost and bottlenecks
  • Information-centric networking, named data networking
  • Caching for computation, coding for computation
  • Information theoretic limits and throughput scaling laws and the role of feedback
  • Delivery delay, queueing delay, transmission delay
  • Modeling the tail distribution, demand correlations
  • Cache replacement techniques
  • Learning at the edge, intelligent caching
  • Coded caching, network coding for distributed storage/caching
  • Management of storage systems, updates, and storage versus repair-bandwidth tradeoff
  • Caching games, multi-agent techniques
  • Caching for multimedia, live video, AR-VR, holographic communication
  • Video caching, multiple description, multiple rates and/or multi-view videos
  • Prefetching, proactive caching, recommendation-aware content caching