Abstract
Variational quantum algorithms (VQAs), which comprise a classical optimizer and a parameterized quantum circuit, have emerged as one of the most promising approaches for harnessing quantum power in the noisy intermediate-scale quantum (NISQ) era. However, deploying VQAs on today's NISQ devices often entails considerable system noise and prohibitively slow training. At the same time, the expensive supporting resources and infrastructure make high utilization of quantum computers essential. In this paper, we propose a new way of thinking about a quantum backend: rather than relying on a single physical device, which tends to introduce platform-specific noise and bias, a quantum ensemble that distributes quantum tasks across parallel devices can serve as a virtualized quantum computer, offering reduced noise levels through an adaptive mixture and significantly improved training speed through parallelization. With this idea, we build a distributed VQA optimization framework called DVQA, the first effort to adopt parallel quantum devices for cooperative VQA training. To further constrain noise and speed up convergence, we design a model of individual NISQ devices that accounts for their properties and running conditions, and propose a weighting mechanism for regularizing the gradients they return. Extensive evaluations on 10 IBM-Q quantum devices using the VQE example show that the distributed VQA training framework boosts training speed by 10.5x on average (up to 86x and at least 5.2x) while improving training accuracy.
Published: June 14, 2022
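The core idea of weighting gradients returned by parallel devices can be illustrated with a minimal sketch. Note that this is an assumed, simplified illustration, not the paper's actual DVQA mechanism: the per-device reliability scores and the simple normalized weighted average below are hypothetical stand-ins for the device model and regularization the abstract describes.

```python
import numpy as np

def aggregate_gradients(grads, reliabilities):
    """Weighted average of per-device gradient estimates.

    grads: list of 1-D arrays, one gradient vector per quantum device.
    reliabilities: assumed per-device quality scores (higher = less noisy);
                   a hypothetical proxy for the device model in the paper.
    """
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    return np.average(np.stack(grads), axis=0, weights=w)

# Example: three devices return slightly different gradient estimates;
# the noisy third device is down-weighted so it perturbs the update less.
g = [np.array([0.10, -0.20]),
     np.array([0.12, -0.18]),
     np.array([0.30, -0.50])]
r = [0.9, 0.8, 0.2]
combined = aggregate_gradients(g, r)
```

In this toy setting, the outlier gradient from the least reliable device contributes only about 10% of the combined update, which mimics how an adaptive mixture can suppress platform-specific noise.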