PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioners, low-rank approximation preconditioner, and more)
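For orientation, here is a minimal sketch of the generic update such preconditioned-SGD packages implement; the function name and the dense `P` are assumptions for illustration only:

```python
import torch

def preconditioned_sgd_step(param, P, lr=1e-2):
    """One generic preconditioned-SGD step: theta <- theta - lr * (P @ grad).

    `P` is a hypothetical dense preconditioner used only for illustration;
    libraries like the one above maintain structured factors (Kronecker,
    affine, low-rank) instead of ever forming P explicitly.
    """
    with torch.no_grad():
        g = param.grad.reshape(-1)           # flatten the gradient
        step = (P @ g).reshape(param.shape)  # precondition, restore shape
        param -= lr * step
```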
LoRA (Low-Rank Adaptation) inspector for Stable Diffusion
Fine-tuning of diffusion models
VIP is a Python package for angular, reference-star, and spectral differential imaging for exoplanet/disk detection through high-contrast imaging.
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression (CVPR 2020).
A framework based on the tensor train decomposition for working with multivariate functions and multidimensional arrays
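As a rough sketch of the construction such frameworks build on, here is a bare-bones TT-SVD in NumPy (the function name and the single `max_rank` cap are assumptions, not this package's API):

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into tensor-train cores via sequential SVDs.

    A bare-bones TT-SVD sketch with a single fixed rank cap; real TT
    libraries expose per-mode ranks, accuracy targets, and rounding.
    """
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores  # core k has shape (r_{k-1}, n_k, r_k)
```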
TensorFlow implementation of preconditioned stochastic gradient descent
PyTorch implementation of Factorizer.
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
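Loosely, GaLore projects each layer's gradient onto its top singular subspace and updates in that small space; the hypothetical sketch below shows only the projection idea, with plain SGD standing in for the paper's Adam-in-low-rank-space:

```python
import torch

def galore_style_step(weight, lr=1e-3, rank=4):
    """Very loose sketch of one gradient low-rank projection step.

    GaLore itself refreshes the projector only periodically and keeps the
    optimizer state in the r-dimensional space; here we recompute the
    projector on every call and use plain SGD, purely to show the idea.
    """
    with torch.no_grad():
        G = weight.grad                                # (m, n) full gradient
        U, _, _ = torch.linalg.svd(G, full_matrices=False)
        P = U[:, :rank]                                # (m, r) projector
        G_low = P.T @ G                                # (r, n) compact gradient
        weight -= lr * (P @ G_low)                     # project back, update
```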
[ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036)
PyTorch implementation of "Learning Filter Basis for Convolutional Neural Network Compression" (ICCV 2019)
Solver in the low-rank tensor-train format with a cross-approximation approach for the multidimensional Fokker-Planck equation
This repository is the official code for the paper "Enhanced MRI Brain Tumor Detection and Classification via Topological Data Analysis and Low-Rank Tensor Decomposition" by Serena Grazia De Benedictis, Grazia Gargano and Gaetano Settembre.
MUSCO: Multi-Stage COmpression of neural networks
My experiment with multilayer NMF: a deep neural network whose first several layers use Semi-NMF as a pseudo-activation function, finding the latent structure embedded in the original data without supervision.
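For reference, a minimal single-layer Semi-NMF sketch (multiplicative updates in the style of Ding et al.; the multilayer model stacks such factorizations, which is omitted here):

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, eps=1e-9):
    """Single-layer Semi-NMF sketch: X ~ F @ G.T with G >= 0, F unconstrained.

    Alternating updates follow Ding et al.'s multiplicative rules; careful
    initialization, layer stacking, and convergence checks are omitted.
    """
    m, n = X.shape
    F = np.random.randn(m, k)
    G = np.abs(np.random.randn(n, k))
    pos = lambda A: (np.abs(A) + A) / 2   # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2   # elementwise negative part
    for _ in range(n_iter):
        F = X @ G @ np.linalg.pinv(G.T @ G)      # least-squares step for F
        XtF, FtF = X.T @ F, F.T @ F
        num = pos(XtF) + G @ neg(FtF)
        den = neg(XtF) + G @ pos(FtF) + eps
        G *= np.sqrt(num / den)                  # keeps G nonnegative
    return F, G
```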
Lowrankdensity
[CVPR 2025 Highlight] CASP: Compression of Large Multimodal Models Based on Attention Sparsity
This repository contains code to reproduce the experiments from our paper "Error Feedback Can Accurately Compress Preconditioners".
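The error-feedback mechanism itself fits in a few lines; the sketch below shows the generic pattern with placeholder names, not the paper's exact implementation:

```python
import torch

def ef_compress(state, tensor, compress, key="err"):
    """Generic error-feedback pattern: compress (input + carried error) and
    carry the compression residual forward to the next call.

    `compress` is any lossy operator (e.g. low-rank truncation); the paper
    applies this idea to preconditioner matrices specifically.
    """
    err = state.get(key, torch.zeros_like(tensor))
    corrected = tensor + err             # re-inject previously lost signal
    compressed = compress(corrected)
    state[key] = corrected - compressed  # residual fed back next step
    return compressed
```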
Deep learning models have become the state of the art for natural language processing (NLP) tasks; however, deploying these models in production systems poses significant memory constraints. Existing compression methods are either lossy or introduce significant latency. We propose a compression method that leverages low-rank matrix factorization durin…
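As a toy illustration of low-rank weight factorization (the textbook truncated-SVD construction, not necessarily the method proposed in the paper):

```python
import numpy as np

def factorize_weight(W, rank):
    """Replace a dense weight W (m x n) with factors A (m x r) and B (r x n)
    via truncated SVD, shrinking storage from m*n to r*(m + n).

    Shown only to illustrate the idea; the truncated description above may
    rely on a different factorization scheme.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A
    B = Vt[:rank]
    return A, B                  # layer forward pass becomes x @ B.T @ A.T
```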
Low-Rank Augmented Lagrangian algorithm designed to solve large-scale SDP problems.