Software
DMC projects have produced a number of open source software packages that are available on GitHub.
Converged Applications
Causal Inference and ML Methods for MIP Analysis of Security Constrained Unit Commitment (SCUC)
Heterogeneous Computing
Transfer Learning for Building Energy Modeling (TransBeam)
- TransBeam: Transfer learning for building energy modeling
Hybrid Advanced Workflows (HAW)
- MCL: Minos Computing Library (MCL) is a modern task-based, asynchronous programming model for extremely heterogeneous systems (see the task sketch after this list)
- SHAD: Scalable High-performance Algorithms and Data-structures (SHAD) C++ library
- SHADES: The SHAD Exploration System
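As a rough illustration of the task-based, asynchronous programming style that MCL targets, the sketch below uses standard C++ facilities (std::async) as a stand-in for a heterogeneous task runtime; none of the names here are MCL API calls.

```cpp
#include <functional>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Illustration only: std::async stands in for a heterogeneous task runtime such
// as MCL, which would additionally schedule work across CPUs, GPUs, and other devices.
double dot(const std::vector<double>& a, const std::vector<double>& b) {
    return std::inner_product(a.begin(), a.end(), b.begin(), 0.0);
}

int main() {
    std::vector<double> x(1 << 20, 1.0), y(1 << 20, 2.0);
    // Submit two independent tasks; they may run concurrently on different resources.
    auto t1 = std::async(std::launch::async, dot, std::cref(x), std::cref(y));
    auto t2 = std::async(std::launch::async, dot, std::cref(y), std::cref(y));
    // Synchronize only when the results are actually needed.
    std::cout << t1.get() + t2.get() << "\n";
}
```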
Software-Defined Architecture for Data Analysis (SO(DA)2)
- ARENA: An asynchronous reconfigurable accelerator ring architecture, offered as a candidate design for what future HPC clusters may look like
- Bambu: A free framework for the high-level synthesis of complex applications
- Energy Characterization of Graph Workloads Data: An energy characterization study of six kernels from a graph algorithm benchmark suite, run with a variety of input graphs (9 synthetic, 3 real-world) on a dual-socket x86-based system
- OpenCGRA: Open-source framework for Coarse-Grained Reconfigurable Arrays (CGRAs)
- SODA-OPT: Enabling System Level Design in MLIR
- SODA Toolchain Production Docker Image: The SODA toolchain leverages MLIR to extract, optimize, and translate high-level code snippets into LLVM IR so that they can be synthesized by our high-level synthesis tool of choice (see the kernel sketch after this list)
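To make that flow concrete, the sketch below shows the kind of small, compute-intensive C++ kernel such a flow is meant to take in; the function name and sizes are made up for illustration and are not part of the SODA toolchain. Conceptually, SODA-OPT would extract and optimize a region like this in MLIR, lower it to LLVM IR, and hand it to an HLS engine such as Bambu to generate an accelerator.

```cpp
#include <cstddef>

// Hypothetical example of a high-level kernel a designer might hand to the SODA
// flow: a dense matrix-vector product with static sizes. The toolchain (not shown)
// would outline and optimize this region in MLIR, translate it to LLVM IR, and
// synthesize it to hardware with an HLS tool.
constexpr std::size_t N = 64;

void gemv(const float A[N][N], const float x[N], float y[N]) {
    for (std::size_t i = 0; i < N; ++i) {
        float acc = 0.0f;
        for (std::size_t j = 0; j < N; ++j)
            acc += A[i][j] * x[j];
        y[i] = acc;
    }
}
```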
Computational-Flow Architecture (CFA): Designing Non-Von-Neumann Architecture for Future Data-Centric Computing
- ARENA: An asynchronous reconfigurable accelerator ring architecture, offered as a candidate design for what future HPC clusters may look like
- OpenCGRA: Open-source framework for Coarse-Grained Reconfigurable Arrays (CGRAs)
- TCBNN: Tensor-Core Accelerated Binarized Neural Network (TCBNN) is an efficient binarized neural network (BNN) design accelerated by NVIDIA Turing bit Tensor Cores (see the sketch below)
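The core trick that makes BNNs fast, and that TCBNN maps onto bit Tensor Cores, is that a dot product over values constrained to {-1, +1} and packed into machine words reduces to an XNOR followed by a population count. The plain C++ sketch below illustrates that reduction only; it is not TCBNN code.

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

// Binarized dot product: bit = 1 encodes +1, bit = 0 encodes -1.
// dot(a, b) = (#matching bits) - (#mismatching bits) = 2 * popcount(~(a ^ b)) - n.
// TCBNN performs the same reduction with NVIDIA bit Tensor Core instructions.
int binarized_dot(std::uint64_t a, std::uint64_t b, int n_bits) {
    std::uint64_t agreement = ~(a ^ b);                   // XNOR: 1 where signs agree
    if (n_bits < 64) agreement &= (1ULL << n_bits) - 1;   // keep only the valid lanes
    int matches = static_cast<int>(std::bitset<64>(agreement).count());
    return 2 * matches - n_bits;
}

int main() {
    std::uint64_t a = 0b10110010, b = 0b10010110;         // two 8-element vectors
    std::cout << binarized_dot(a, b, 8) << "\n";          // 6 matches, 2 mismatches -> 4
}
```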
Application-Algorithm-Architecture Co-Design for Large-Scale, Sparse Tensor/Matrix Methods (HiParTI)
- HiParTI: Hierarchical Parallel Tensor Infrastructure (HiParTI) supports fast essential sparse tensor operations and tensor decompositions on multicore CPU and GPU architectures (see the sketch below)
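As a point of reference for what an essential sparse tensor operation looks like, the sketch below stores a third-order tensor in coordinate (COO) format and contracts it with a vector along one mode. The type and function names are illustrative stand-ins, not HiParTI's API; HiParTI provides tuned multicore and GPU implementations of kernels in this family.

```cpp
#include <array>
#include <cstddef>
#include <iostream>
#include <vector>

// Illustrative COO storage for a third-order sparse tensor (not HiParTI's types).
struct CooTensor3 {
    std::array<std::size_t, 3> dims;              // tensor extents (I, J, K)
    std::vector<std::array<std::size_t, 3>> idx;  // coordinates of the nonzeros
    std::vector<double> val;                      // values of the nonzeros
};

// Mode-2 tensor-times-vector: Y(i, k) = sum_j X(i, j, k) * v(j).
std::vector<double> ttv_mode2(const CooTensor3& X, const std::vector<double>& v) {
    std::vector<double> Y(X.dims[0] * X.dims[2], 0.0);
    for (std::size_t n = 0; n < X.val.size(); ++n) {
        const auto& c = X.idx[n];                 // c = (i, j, k)
        Y[c[0] * X.dims[2] + c[2]] += X.val[n] * v[c[1]];
    }
    return Y;
}

int main() {
    CooTensor3 X{{2, 3, 2}, {{0, 1, 0}, {1, 2, 1}}, {4.0, 5.0}};
    std::vector<double> v{1.0, 2.0, 3.0};
    for (double e : ttv_mode2(X, v)) std::cout << e << " ";  // 8 0 0 15
    std::cout << "\n";
}
```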
Fixing Amdahl's Law Within the Limits of Accelerated Systems (FALLACY)
- MemGaze: A memory analysis toolset that combines high-resolution trace analysis with low-overhead measurement, in both time and space
A Compiler Infrastructure for Data-Model Convergence (DUOMO)
- COMET: Domain-specific COMpiler for Extreme Targets (see the sketch below)
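Among the workloads COMET targets are high-level tensor-algebra expressions, such as contractions written in Einstein-like notation. The loop nest below spells out what one such contraction, C(i, j) = A(i, k) * B(k, j), computes; it is plain C++ for illustration only and is neither COMET's input language nor its generated code.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// The contraction C(i, j) = A(i, k) * B(k, j) written as an explicit loop nest.
// A tensor-algebra compiler takes the declarative form and generates optimized
// code equivalent to this; the snippet itself is an illustration only.
void contract(const std::vector<double>& A, const std::vector<double>& B,
              std::vector<double>& C, std::size_t I, std::size_t J, std::size_t K) {
    for (std::size_t i = 0; i < I; ++i)
        for (std::size_t j = 0; j < J; ++j) {
            double acc = 0.0;
            for (std::size_t k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * J + j];
            C[i * J + j] = acc;
        }
}

int main() {
    std::size_t I = 2, J = 2, K = 3;
    std::vector<double> A{1, 2, 3, 4, 5, 6}, B{1, 0, 0, 1, 1, 1}, C(I * J);
    contract(A, B, C, I, J, K);
    for (double e : C) std::cout << e << " ";  // 4 5 10 11
    std::cout << "\n";
}
```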