
Release notes

This release introduces several significant new features to the HPVM infrastructure, as well as bug fixes and improvements. The main additional components in this release are:

  • The Hetero-C++ front end: A new front-end language for HPVM that makes it much simpler and cleaner to parallelize C/C++ applications for HPVM compilation. Hetero-C++ describes hierarchical task-level and data-level parallelism that lowers directly to HPVM dataflow graphs (see the sketch after this list).
  • An FPGA back end for HPVM: A new back end that lets HPVM target Intel FPGAs. It uses the Intel FPGA SDK for OpenCL, so any Intel FPGA supported by that SDK can be targeted.
  • An optimization and design space exploration (DSE) framework: A framework that combines compiler optimizations with design space exploration to automatically tune programs for a given hardware target. It includes a performance model for Intel FPGAs and can also use a custom evaluation script, which allows it to target any other device supported by HPVM. The optimizations operate at both the HPVM-DFG level and the standard LLVM level.
  • LLVM upgrade: HPVM has been upgraded to LLVM 13.0.
  • Code reorganization: The code base and library/tool structure have been reorganized to be more logical and easier to navigate.
  • Testing infrastructure enhancements: More lit tests have been added for all the different components of HPVM.
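
As a rough illustration of the Hetero-C++ programming model, the sketch below marks a single parallel loop as an HPVM task in ordinary C++. This is a minimal sketch, not code from the release: it assumes the heterocc.h header and the __hetero_section_begin/__hetero_task_begin marker calls from the Hetero-C++ specification, and the argument convention shown (an input count followed by the inputs, then an output count followed by the outputs, then a task name) is our assumption; consult the Hetero-C++ documentation for the exact signatures.

  #include <cstddef>
  #include "heterocc.h" // assumed Hetero-C++ marker header

  // Vector addition expressed as one Hetero-C++ task. The section
  // delimits an HPVM dataflow graph; the task becomes a graph node
  // that the HPVM back ends (CPU, GPU, and now FPGA) can target.
  void vecadd(float *A, size_t BytesA, float *B, size_t BytesB,
              float *C, size_t BytesC, size_t N) {
    void *Section = __hetero_section_begin();
    void *Task = __hetero_task_begin(
        7, A, BytesA, B, BytesB, C, BytesC, N, // inputs (assumed convention)
        2, C, BytesC,                          // outputs (assumed convention)
        "vecadd_task");
    for (size_t i = 0; i < N; ++i)
      C[i] = A[i] + B[i];
    __hetero_task_end(Task);
    __hetero_section_end(Section);
  }

When compiled through the Hetero-C++ front end, the marked region lowers directly to an HPVM dataflow graph, so the same source can be retargeted to any HPVM back end.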

Release notes

Release Candidate for HPVM 2.0. New components since the last release:

  • Upgrade to LLVM 13.0
  • General reorganization of library components
  • Hetero-C++ front end
  • HPVM2FPGA back end
  • HPVM optimization and design space exploration framework
  • General bug fixes

Release notes

This release is a major addition to our first release (version 0.5), adding support for linear-algebra tensor operations, PyTorch and Keras front ends, approximations for convolution operators, and an efficient, flexible framework for approximation tuning. Our novel approximation tuner [2] automatically selects approximation knobs for individual tensor operations and chooses configurations that maximize a (configurable) performance objective.
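
To make the idea of per-operation knob selection concrete, here is a self-contained toy in C++. It is not the tuner from [2], which explores whole configurations rather than committing greedily per operation; the greedy strategy, knob names, and numbers below are all hypothetical, chosen only to illustrate trading estimated accuracy loss against speedup under a budget.

  #include <cstdio>
  #include <vector>

  // Toy greedy knob selection (NOT the HPVM approximation tuner):
  // for each tensor operation, pick the knob with the best speedup
  // whose estimated accuracy loss still fits the remaining budget.
  struct Knob {
    const char *name;
    double speedup;      // estimated speedup over the exact operator
    double accuracyLoss; // estimated end-to-end accuracy drop (%)
  };

  int main() {
    // Hypothetical knob tables for two convolution operators.
    std::vector<std::vector<Knob>> ops = {
        {{"exact", 1.0, 0.0}, {"perforated", 1.8, 0.6}, {"sampled", 2.3, 1.4}},
        {{"exact", 1.0, 0.0}, {"perforated", 1.5, 0.4}, {"fp16", 1.9, 0.2}},
    };
    double budget = 1.0; // total accuracy loss we are willing to pay (%)

    for (std::size_t i = 0; i < ops.size(); ++i) {
      const Knob *best = &ops[i][0]; // "exact" is always admissible
      for (const Knob &k : ops[i])
        if (k.accuracyLoss <= budget && k.speedup > best->speedup)
          best = &k;
      budget -= best->accuracyLoss;
      std::printf("op %zu -> %s (%.1fx)\n", i, best->name, best->speedup);
    }
    return 0;
  }

Under the sample numbers above, the first operator takes the perforated knob (1.8x, spending 0.6% of the budget) and the second takes fp16 (1.9x, spending 0.2%); a real tuner would additionally weigh interactions between operations when scoring a full configuration.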