Edge computing methods allow devices to efficiently train a high-performing, robust, and personalized model for predictive tasks. However, these methods suffer from privacy and scalability concerns such as adversarial data recovery and expensive model communication. Furthermore, edge computing methods unrealistically assume that all devices train an identical model. In practice, edge devices have varying computational and memory constraints, which may leave some devices without the space or speed to train a particular model. To overcome these issues, we propose MIDDLE: a model-independent distributed learning algorithm that allows heterogeneous edge devices to assist each other’s training while communicating only non-sensitive information.
MIDDLE enables edge devices, regardless of computational or memory constraints, to assist one another even with completely different model architectures. Furthermore, MIDDLE requires no model or gradient communication, which greatly reduces communication size and time. We prove that MIDDLE attains the optimal convergence rate O(1/sqrt(TM)) of stochastic gradient descent for convex and non-convex smooth optimization (for total iterations T and batch size M). Finally, our experimental results demonstrate that MIDDLE attains robust and high-performing models, even in non-IID data settings, without model or gradient communication.
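To make the communication pattern concrete, the following is a minimal, hypothetical sketch of collaborative training without model or gradient exchange. It is not the MIDDLE algorithm itself: it assumes the devices share a public unlabeled batch and exchange only softmax predictions on it (a knowledge-distillation-style pattern), whereas the paper only specifies that the exchanged information is non-sensitive and excludes models and gradients. All model sizes, data, and hyperparameters below are illustrative.

```python
# Hypothetical sketch (not MIDDLE itself): two devices with different
# architectures help each other by exchanging only predictions on a shared
# public batch, never weights or gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Each device picks an architecture that fits its own compute/memory budget.
small_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
large_model = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10)
)
models = (small_model, large_model)
optimizers = [torch.optim.SGD(m.parameters(), lr=0.05) for m in models]


def local_step(model, opt, x, y, x_public, peer_probs, alpha=0.5):
    """One local update: supervised loss on private data plus an agreement
    term toward the peer's (non-sensitive) predictions on public data."""
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if peer_probs is not None:
        log_p = F.log_softmax(model(x_public), dim=1)
        loss = loss + alpha * F.kl_div(log_p, peer_probs, reduction="batchmean")
    loss.backward()
    opt.step()


# Toy data: each device holds its own private batch; both see a public batch.
x_priv = [torch.randn(32, 1, 28, 28) for _ in range(2)]
y_priv = [torch.randint(0, 10, (32,)) for _ in range(2)]
x_public = torch.randn(64, 1, 28, 28)

peer_probs = [None, None]
for _ in range(5):  # a few communication rounds
    for i, (model, opt) in enumerate(zip(models, optimizers)):
        local_step(model, opt, x_priv[i], y_priv[i], x_public, peer_probs[1 - i])
    # Only softmax outputs on the public batch are exchanged between devices;
    # raw data, weights, and gradients stay local.
    with torch.no_grad():
        peer_probs = [F.softmax(m(x_public), dim=1) for m in models]
```

Note how the devices train entirely different architectures yet still influence one another: the exchanged tensor is only num_public_samples x num_classes, far smaller than a model or gradient payload.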
Published: April 10, 2025
Citation
Bornstein M.I., M. Nazir, J. Drgona, S. Kundu, and V.A. Adetola. 2024. Finding MIDDLE Ground: Scalable and Secure Distributed Learning. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (CIKM 2024), October 21-25, 2024, Boise, ID, 141-151. New York, New York: Association for Computing Machinery. PNNL-SA-181607. doi:10.1145/3627673.3679587