Remove jit from README and clean up dependencies. (#1789)
Summary:
We no longer support jit in `test_bench.py`. Anyone who still needs it is encouraged to implement their own userbenchmark for jit benchmarking.
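For anyone who still needs jit coverage, a userbenchmark is the suggested home for it. A minimal sketch of the idea is below; the model, input, and timing loop are illustrative placeholders, not TorchBench APIs:

```
import time

import torch


def bench_scripted(model: torch.nn.Module, example_input: torch.Tensor, iters: int = 10) -> float:
    """Script a model with TorchScript and return its average eval latency (illustrative sketch)."""
    scripted = torch.jit.script(model.eval())
    with torch.no_grad():
        scripted(example_input)  # warm-up call so compilation cost is not measured
        start = time.perf_counter()
        for _ in range(iters):
            scripted(example_input)
    return (time.perf_counter() - start) / iters


if __name__ == "__main__":
    # Placeholder model and input; a real userbenchmark would load a TorchBench model instead.
    model = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.ReLU())
    print(f"avg latency: {bench_scripted(model, torch.randn(32, 128)):.6f}s")
```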

We also want to simplify user installation so that users only need to run the `install.py` script, without installing extra libraries by hand. Unnecessary package dependencies are therefore removed from `install.py`.
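With this change, the intended flow is simply to clone the repo and run the install script. A rough sketch of the commands (the `cd benchmark` step assumes the default clone directory name):

```
git clone https://github.com/pytorch/benchmark
cd benchmark
python install.py
```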

Pull Request resolved: #1789

Reviewed By: msaroufim

Differential Revision: D47834999

Pulled By: xuzhao9

fbshipit-source-id: 7a069e4546679844265d34066b103308900c3a44
xuzhao9 authored and facebook-github-bot committed Jul 27, 2023
1 parent 83f17f9 commit b10fcd2
Showing 2 changed files with 4 additions and 12 deletions.
README.md: 13 changes (4 additions, 9 deletions)
@@ -2,7 +2,7 @@
 This is a collection of open source benchmarks used to evaluate PyTorch performance.
 
 `torchbenchmark/models` contains copies of popular or exemplary workloads which have been modified to:
-*(a)* expose a standardized API for benchmark drivers, *(b)* optionally, enable JIT,
+*(a)* expose a standardized API for benchmark drivers, *(b)* optionally, enable backends such as torchinductor/torchscript,
 *(c)* contain a miniature version of train/test data and a dependency install script.
 
 ## Installation
@@ -36,11 +36,6 @@ Or use pip:
 pip install --pre torch torchvision torchtext torchaudio -i https://download.pytorch.org/whl/nightly/cu118
 ```
 
-Install other necessary libraries:
-```
-pip install boto3 pyyaml
-```
-
 Install the benchmark suite, which will recursively install dependencies for all the models. Currently, the repo is intended to be installed from the source tree.
 ```
 git clone https://github.com/pytorch/benchmark
@@ -115,8 +110,8 @@ Some useful options include:
 - `--cpu_only` if running on a local CPU machine and ignoring machine configuration checks
 
 #### Examples of Benchmark Filters
-- `-k "test_train[NAME-cuda-jit]"` for a particular flavor of a particular model
-- `-k "(BERT and (not cuda) and (not jit))"` for a more flexible approach to filtering
+- `-k "test_train[NAME-cuda-eager]"` for a particular flavor of a particular model
+- `-k "(BERT and (not cuda))"` for a more flexible approach to filtering
 
 Note that `test_bench.py` will eventually be deprecated as the `userbenchmark` work evolve. Users are encouraged to explore and consider using [userbenchmark](#using-userbenchmark).

@@ -128,7 +123,7 @@ The `userbenchmark` allows you to develop your customized benchmarks with TorchBench models.
 Sometimes you may want to just run train or eval on a particular model, e.g. for debugging or profiling. Rather than relying on __main__ implementations inside each model, `run.py` provides a lightweight CLI for this purpose, building on top of the standard BenchmarkModel API.
 
 ```
-python run.py <model> [-d {cpu,cuda}] [-m {eager,jit}] [-t {eval,train}] [--profile]
+python run.py <model> [-d {cpu,cuda}] [-t {eval,train}] [--profile]
 ```
 Note: `<model>` can be a full, exact name, or a partial string match.

install.py: 3 changes (0 additions, 3 deletions)
@@ -2,13 +2,10 @@
 import subprocess
 import os
 import sys
-import yaml
 import tarfile
 from utils import TORCH_DEPS, proxy_suggestion, get_pkg_versions, _test_https
 from pathlib import Path
 REPO_ROOT = Path(__file__).parent
 
 
 def pip_install_requirements(requirements_txt="requirements.txt"):
     if not _test_https():
         print(proxy_suggestion)