Distributed deep learning architectures support the front-end deployment of deep learning systems on resource-constrained IoT devices and are attracting increasing interest. However, most ready-to-use deep models are designed for centralized deployment and do not account for the transmission loss suffered by intermediate representations inside a distributed architecture. This oversight significantly degrades the inference performance of deep models deployed in a distributed manner. To alleviate this problem, a state-of-the-art work retrains the original model to form an intermediate representation with ordered importance, yielding better inference accuracy under constrained transmission bandwidth. This paper first reveals that this solution is essentially a pruning-like approach, in which unimportant information is adaptively pruned to fit within the limited bandwidth. Building on this understanding, a novel scheme named Naturally Aggregated Intermediate Representation (NAIR) is proposed, which naturally amplifies the importance differences embedded in the intermediate representation of a mature deep model and reassembles the representation into a high-to-low hierarchy of importance to accommodate transmission loss. As a result, the method further improves performance in various scenarios, avoids compromising the overall inference performance of the system, and saves substantial retraining and storage costs. The effectiveness of NAIR has been validated through extensive experiments, achieving a 112% improvement in performance compared to the state-of-the-art work.
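To make the underlying idea concrete, the following is a minimal, illustrative sketch of importance-ordered transmission of an intermediate representation under a bandwidth budget. It is not the authors' NAIR algorithm: the function names, the mean-absolute-activation importance proxy, and the zero-fill recovery step are all assumptions made purely for illustration.

```python
# Conceptual sketch: reorder an intermediate feature map's channels by an
# (assumed) importance score, truncate to a bandwidth budget, and restore on
# the receiver side. Names and the importance proxy are illustrative only.
import torch


def reorder_by_importance(feat: torch.Tensor):
    """Sort channels of a (C, H, W) feature map from most to least important.

    Importance is approximated here by mean absolute activation per channel;
    the actual NAIR scheme derives its ordering differently.
    """
    importance = feat.abs().mean(dim=(1, 2))            # (C,) per-channel score
    order = torch.argsort(importance, descending=True)  # high-to-low ordering
    return feat[order], order


def transmit_truncated(feat: torch.Tensor, budget: int):
    """Keep only the `budget` most important channels to fit the link."""
    ordered, order = reorder_by_importance(feat)
    return ordered[:budget], order


def receive_and_restore(received: torch.Tensor, order: torch.Tensor, num_channels: int):
    """Zero-fill the channels lost to the bandwidth limit and undo the reordering."""
    kept, h, w = received.shape
    restored = torch.zeros(num_channels, h, w, dtype=received.dtype)
    restored[order[:kept]] = received
    return restored


# Example: a 64-channel feature map squeezed into a 16-channel budget.
feat = torch.randn(64, 28, 28)
sent, order = transmit_truncated(feat, budget=16)
recovered = receive_and_restore(sent, order, num_channels=64)
```

Because the channels are transmitted in high-to-low order of importance, whatever the link drops under a tighter budget is, by construction, the least informative part of the representation.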
Funding
National Key Research and Development Program of China (Grant Numbers 2021YFC3090204 and 2022YFA100390)
National Science Foundation (Grant Number CNS 2128346)
Engineering and Physical Sciences Research Council (Self-Learning Digital Twins for Sustainable Land Management)