Google AI has released JaxPruner, an open-source pruning and sparse training library for machine learning researchers. The library provides a comprehensive toolkit that makes it easier to quickly develop and evaluate sparsity ideas against various benchmarks.
Over the last few years, the scientific community has adopted JAX for its clean separation between functions and state, which distinguishes it from well-known deep learning frameworks like PyTorch and TensorFlow. JaxPruner's main focus is parameter sparsity: sparse networks have been shown to outperform dense models with the same number of parameters.
The library supports two methods for obtaining parameter sparsity: pruning, which derives sparse networks from dense ones for efficient inference, and sparse training, which trains sparse networks from scratch while reducing training costs. JAX's easy function transformations and centralized state make it simpler to build procedures that are shared across multiple pruning and sparse training methods.
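To make the pruning side of this concrete, here is a minimal NumPy sketch of magnitude pruning with a binary mask, the general technique the article describes. This is an illustrative example, not JaxPruner's actual API; the function name `magnitude_prune` is hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a binary mask keeping the largest-magnitude weights.

    Hypothetical helper: keeps the top (1 - sparsity) fraction of
    weights by absolute value; the rest are zeroed via the mask.
    """
    k = int(np.ceil(sparsity * weights.size))  # number of weights to drop
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

weights = np.array([[0.1, -0.8], [0.05, 0.9]])
mask = magnitude_prune(weights, sparsity=0.5)
pruned = weights * mask  # storage is still dense, but half the entries are zero
```

Applying the mask elementwise is what lets a library introduce sparsity without changing the model's parameter shapes, which is why the binary-mask approach integrates easily with existing training code.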
Google Research designed JaxPruner around three principles: fast integration, research first, and minimal overhead. The library builds on the popular Optax optimization library, making it easier to integrate JaxPruner into existing codebases. JaxPruner also commits to a generic API shared by its algorithms, making it simple to switch between them.
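Optax expresses optimizers as gradient transformations, each an init/update pair, which is what makes this style of integration natural. The sketch below imitates that pattern in plain Python to show how a sparsity method can hook into an optimizer by masking updates; it is an assumption-laden illustration (the names `masked_sgd` and `MaskedState` are invented here), not JaxPruner's real interface.

```python
from typing import NamedTuple
import numpy as np

class MaskedState(NamedTuple):
    """Optimizer state carrying the fixed binary mask."""
    mask: np.ndarray

def masked_sgd(mask, learning_rate=0.1):
    """A gradient transformation in the Optax init/update style.

    Hypothetical sketch: zeroes out updates for pruned weights so
    they stay at zero throughout training.
    """
    def init(params):
        return MaskedState(mask=mask)

    def update(grads, state, params=None):
        # SGD step, with pruned coordinates masked to zero
        updates = -learning_rate * grads * state.mask
        return updates, state

    return init, update

mask = np.array([1.0, 0.0, 1.0])
init, update = masked_sgd(mask)
state = init(np.zeros(3))
updates, state = update(np.array([1.0, 1.0, 1.0]), state)
```

Because the transformation has the same shape as any other optimizer component, swapping one sparsity algorithm for another only means swapping which transformation wraps the base optimizer.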
Integration with existing frameworks is often lacking, making it difficult to take advantage of improvements such as CPU acceleration and activation sparsity; JaxPruner addresses this by using binary masks to introduce sparsity.
Because machine learning research moves quickly across many evolving codebases, fast integration and minimal overhead are crucial to the library's adaptability. The ease of switching between popular sparsity structures and algorithms also makes JaxPruner a valuable asset for the research community and an interesting development in the study of sparsity in neural networks.
If you’re interested in seeing for yourself, you can find more documentation via its GitHub repository.