Add doc for serialization/deserialization of torchao optimized models
Summary:
Addressing the following questions:
1. What happens if I save a quantized model?
2. What happens if I load a quantized model?
and describing details like assign=True.

Specifically:
1. Do you need ao as a dependency when you're loading a quantized model?
2. Is the saved quantized model smaller on disk than the unquantized one?

Test Plan: .

Reviewers:

Subscribers:

Tasks:

Tags:
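For reference, a minimal sketch of the save/load flow the doc covers, assuming torchao's quantize_ API with int8 weight-only quantization; the toy model, the quantization config choice, and the checkpoint path are illustrative, not part of this commit:

```python
import torch
import torch.nn as nn
from torchao.quantization import quantize_, int8_weight_only

# Build and quantize a toy model, then save its state dict.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 16))
quantize_(model, int8_weight_only())
torch.save(model.state_dict(), "quantized_model.pt")  # illustrative path

# Loading: torchao must be importable so the quantized tensor subclasses in
# the checkpoint can be deserialized. Build the model skeleton on the meta
# device and pass assign=True so load_state_dict keeps the quantized tensors
# from the checkpoint instead of copying them into float parameters.
with torch.device("meta"):
    loaded = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 16))
state_dict = torch.load("quantized_model.pt", weights_only=False)
loaded.load_state_dict(state_dict, assign=True)
```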