Abstract

The generation of custom hardware accelerators for applications implemented within high-level productive programming frameworks requires considerable manual effort. To automate this process, we introduce \sodaopt, a compiler tool that extends the MLIR infrastructure. \sodaopt automatically searches, outlines, tiles, and pre-optimizes relevant code regions to generate high-quality accelerators through high-level synthesis. \sodaopt can support any high-level programming framework and domain-specific language that interfaces with the MLIR infrastructure. By leveraging MLIR, \sodaopt solves compiler optimization problems with specialized abstractions. Backend synthesis tools connect to \sodaopt through progressive lowerings of the intermediate representation. \sodaopt interfaces with a design space exploration engine to identify the combination of compiler optimization passes and options that yields high-performance designs for different backends and targets. We demonstrate the practical applicability of the compilation flow by exploring the automatic generation of accelerators for deep neural network operators outlined at arbitrary granularity, and by combining outlining with tiling on large convolution layers. Experimental results with kernels from the PolyBench benchmark show that \sodaopt's high-level optimizations improve the execution latency of synthesized accelerators by up to 60x. We also show that, for the selected kernels, our solution outperforms the current state of the art in more than 70% of the benchmarks and provides a better average speedup in 55% of them.
Published: January 20, 2023