Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code.
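The manual alternative to such a library is the technique this topic page is about: gradient accumulation, i.e. running several small forward/backward passes and stepping the optimizer once per group, so the effective batch size grows without extra GPU memory. A minimal PyTorch sketch (hypothetical model, data, and `accum_steps`; not this repository's one-line API):

```python
import torch
from torch import nn

# Hypothetical setup: a tiny model and synthetic data standing in for a real pipeline.
model = nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
loader = [(torch.randn(8, 128), torch.randint(0, 10, (8,))) for _ in range(16)]

accum_steps = 4  # optimizer steps once per 4 micro-batches -> effective batch size 32

optimizer.zero_grad()
for i, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y)
    # Scale the loss so the accumulated gradient averages over the effective batch.
    (loss / accum_steps).backward()
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```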
SSD-based object and text detection with Keras, SSD, DSOD, TextBoxes, SegLink, TextBoxes++, CRNN
🎯 Accumulated Gradients for TensorFlow 2
Distributed training (multi-node) of a Transformer model
TorchHandle makes your PyTorch development more efficient and your use of PyTorch more comfortable.
Gradient accumulation on tf.estimator
A simple implementation of Multi-passage BERT
This project aims to help people implement TensorFlow model pipelines quickly for different NLP tasks.
TensorFlow 2 Keras gradient accumulation
Gradient Accumulation with TensorFlow 2.x
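The TensorFlow 2 entries above share the same core pattern, implemented with `tf.GradientTape` and per-variable accumulators. A minimal sketch (hypothetical model and synthetic data, not taken from any of the listed projects):

```python
import tensorflow as tf

# Hypothetical setup: a tiny Keras model and synthetic data.
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

x = tf.random.normal((32, 128))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(8)

accum_steps = 4
model(tf.zeros((1, 128)))  # build the variables before creating accumulators
accumulated = [tf.zeros_like(v) for v in model.trainable_variables]

for step, (xb, yb) in enumerate(dataset):
    with tf.GradientTape() as tape:
        # Scale each micro-batch loss so the accumulated gradient is an average.
        loss = loss_fn(yb, model(xb, training=True)) / accum_steps
    grads = tape.gradient(loss, model.trainable_variables)
    accumulated = [a + g for a, g in zip(accumulated, grads)]
    if (step + 1) % accum_steps == 0:
        optimizer.apply_gradients(zip(accumulated, model.trainable_variables))
        accumulated = [tf.zeros_like(v) for v in model.trainable_variables]
```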
AlzMRI-Net: Classify Alzheimer's stages from MRI scans.
🎯 Production-ready implementation of video prediction models using PyTorch. Features Enhanced ConvLSTM with temporal attention, PredRNN with spatiotemporal memory, and Transformer-based architecture.
Implementation of gradient accumulation for low-memory language-modelling transformer fine-tuning.
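For transformer fine-tuning in particular, gradient accumulation is often enabled through a single training argument rather than a hand-written loop. A hedged sketch with Hugging Face `transformers` (hypothetical values, not this repository's code):

```python
from transformers import TrainingArguments

# Hypothetical values: per-device batches of 4 with 8 accumulation steps give an
# effective batch size of 32 per device while keeping peak GPU memory low.
args = TrainingArguments(
    output_dir="finetune-out",        # hypothetical output directory
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=5e-5,
    num_train_epochs=3,
)
```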
Classifying images of flowers into 17 categories using EfficientNet-B0 and PyTorch.