Discovery via Quantum Neural Networks

PIs: Carlos Ortiz-Marrero, Nathan Wiebe
This project aims to explain the limitations and expressive power of quantum machine learning models, as well as to find feasible training algorithms that support scientific and mission-critical applications. By incorporating quantum effects such as entanglement into machine learning models, researchers can improve model performance and capture more complex datasets. This is particularly important in the design of quantum neural networks (QNNs), a promising framework for creating quantum algorithms that aim to outperform classical models by combining the speedups of quantum computation with the widespread successes of deep learning.
It has been argued that QNNs can outperform existing deep learning models because of the entanglement between the visible and hidden layers of a model. In joint work with Nathan Wiebe (University of Toronto) and Mária Kieferová (University of Technology Sydney), we showed that relying on this mechanism alone is problematic, because excess entanglement between the hidden and visible layers can destroy the predictive power of QNN models. Our key insight is that barren plateaus, i.e., gradients that vanish as a model scales in the number of units, can arise from excess entanglement between visible and hidden units in deep quantum neural networks. This surplus entanglement to some extent defeats the purpose of deep learning by storing information non-locally in the correlations between the layers rather than in the layers themselves. As a result, when one traces out the hidden units, as is customary in deep learning, the resulting visible-layer state is close to the maximally mixed state, which is no better than random guessing.
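The effect of tracing out heavily entangled hidden units can be seen in a small numerical experiment. The sketch below is a toy illustration rather than the construction from our work; the function names are hypothetical. It draws a Haar-random joint state over a fixed visible register and an increasingly large hidden register, traces out the hidden part, and measures how far the visible-layer state is from the maximally mixed state.

import numpy as np

rng = np.random.default_rng(0)

def haar_random_state(dim):
    # Sample a Haar-random pure state of the given dimension.
    z = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return z / np.linalg.norm(z)

def reduced_visible_state(psi, d_visible, d_hidden):
    # Trace out the hidden subsystem of a pure state on visible (x) hidden.
    m = psi.reshape(d_visible, d_hidden)
    return m @ m.conj().T  # rho_visible = Tr_hidden |psi><psi|

def trace_distance_to_maximally_mixed(rho):
    # 0.5 * ||rho - I/d||_1, i.e. distinguishability from random guessing.
    d = rho.shape[0]
    eigvals = np.linalg.eigvalsh(rho - np.eye(d) / d)
    return 0.5 * np.sum(np.abs(eigvals))

d_visible = 4  # two visible qubits
for n_hidden_qubits in range(1, 9):
    d_hidden = 2 ** n_hidden_qubits
    psi = haar_random_state(d_visible * d_hidden)
    rho_v = reduced_visible_state(psi, d_visible, d_hidden)
    print(n_hidden_qubits, trace_distance_to_maximally_mixed(rho_v))

As the number of hidden qubits grows, the printed trace distance shrinks toward zero: the visible layer retains essentially no information on its own, which is the sense in which excess entanglement leaves the model no better than random guessing.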
In 2023, we began applying our algorithmic methods to topics in Quantum Sensing.
Visit our GitHub repository.
Learn more about our Quantum Computing Bootcamp.