November 18, 2024
Report
Unifying Combinatorial and Graphical Methods in Artificial Intelligence
Abstract
Recently, a new graph Laplacian, called the inner product Laplacian, was introduced which generalizes many existing Laplacians, including the normalized and combinatorial Laplacians and their weighted variants. The key observation behind the inner product Laplacian is that, by defining appropriate inner product spaces on the vertices and edges, the standard Laplacians can be recovered as Hodge Laplacians over the simplicial complex formed by the edges and vertices. These inner product spaces provide a natural way to incorporate non-combinatorial information into the definition of a domain-specific Laplacian. In particular, in contrast to current domain-specific weighting schemes, which rely solely on edge weights, information regarding the similarity of non-adjacent vertices and of arbitrary pairs of edges can be incorporated directly into the Laplacian. To illustrate this approach, we consider the problem of calculating the potential energy of an atomistic configuration using graph neural networks. In comparison with state-of-the-art approaches such as SchNet, our approach replaces a learned (via auto-encoder) representation of the atom types with an inner product space on atoms based on scientific knowledge (e.g., electronegativity). We illustrate how this approach captures key chemical properties of molecules and compare the resulting energy calculations with state-of-the-art neural network approaches. However, computing the resulting Laplacian involves a mixture of sparse and dense matrix computations and yields a dense matrix as the basis for the graph convolution. This dense convolutional kernel necessitates moving away from the standard message-passing framework for graph neural networks and increases the computational cost of applying the kernel. To mitigate these costs, we investigate ways of leveraging the mixed sparse and dense computations to reduce the overall computational cost, and how these approaches can be automatically transferred to energy-efficient hardware (e.g., field-programmable gate arrays (FPGAs)).
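
As a point of reference, the construction can be sketched as follows; this uses one standard convention for inner-product Hodge Laplacians, and the report's own definitions may differ in detail. Let B in R^{n x m} be the signed incidence matrix of the graph, and let M_V and M_E be the Gram matrices of the chosen inner products on the vertex and edge spaces. Taking the coboundary \delta = B^{\top} and its adjoint \delta^{*} = M_V^{-1} B M_E with respect to these inner products, the vertex Hodge Laplacian is

    \Delta_0 = \delta^{*} \delta = M_V^{-1} B M_E B^{\top}.

Choosing M_V = M_E = I recovers the combinatorial Laplacian B B^{\top} = D - A; choosing M_V = D (the degree matrix) and M_E = I gives D^{-1}(D - A), the random-walk form of the normalized Laplacian; and a diagonal M_E reproduces the usual edge-weighted variants. Non-diagonal choices of M_V and M_E are what allow similarity between non-adjacent vertices, and between arbitrary pairs of edges, to enter the operator.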
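The computational concern raised above can also be made concrete. The following sketch, our own illustration in NumPy/SciPy rather than the report's implementation, computes \Delta_0 = M_V^{-1} B M_E B^{\top} and shows where the mix of sparse and dense operations, and the dense result, arise:

    import numpy as np
    import scipy.sparse as sp

    def inner_product_laplacian(B, M_V, M_E):
        # B   : sparse signed incidence matrix, shape (n_vertices, n_edges)
        # M_V : dense Gram matrix of the vertex inner product, shape (n, n)
        # M_E : dense Gram matrix of the edge inner product, shape (m, m)
        # The edge-space product M_E @ B.T is dense, so the kernel
        # B @ (M_E @ B.T) is dense even though B itself is sparse.
        K = B @ (M_E @ B.T.toarray())   # dense (n, n) convolution kernel
        return np.linalg.solve(M_V, K)  # left-multiply by M_V^{-1}

    # Sanity check on a triangle graph: with identity inner products
    # the result is the combinatorial Laplacian D - A.
    B = sp.csr_matrix([[ 1,  0,  1],
                       [-1,  1,  0],
                       [ 0, -1, -1]], dtype=float)
    n, m = B.shape
    L = inner_product_laplacian(B, np.eye(n), np.eye(m))
    print(L)  # [[ 2. -1. -1.], [-1.  2. -1.], [-1. -1.  2.]]

Once M_V or M_E is non-diagonal, the kernel couples pairs of vertices that share no edge and is genuinely dense; this is the cost that the mixed sparse/dense strategies and the FPGA mapping discussed above aim to mitigate.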