Merge branch 'main' into asonawane/mpt
apsonawane committed Aug 2, 2023
2 parents 7d3f086 + 00c01df commit f116952
Showing 218 changed files with 555 additions and 866 deletions.
1 change: 1 addition & 0 deletions .github/workflows/userbenchmark-regression-detector.yml
@@ -16,6 +16,7 @@ env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  SETUP_SCRIPT: "/workspace/setup_instance.sh"
+ HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
jobs:
run-userbenchmark:
runs-on: [self-hosted, a100-runner]
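The new secret surfaces inside the job as an environment variable. A benchmark that needs gated Hugging Face models could pick it up like this (a minimal sketch: only the variable name comes from the workflow above; the helper itself is illustrative):

```python
import os

def get_hf_token():
    # HUGGING_FACE_HUB_TOKEN is injected by the CI workflow; None when run locally
    return os.environ.get("HUGGING_FACE_HUB_TOKEN")

if get_hf_token() is None:
    print("no Hugging Face token available; gated models may fail to download")
```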
13 changes: 4 additions & 9 deletions README.md
@@ -2,7 +2,7 @@
This is a collection of open source benchmarks used to evaluate PyTorch performance.

`torchbenchmark/models` contains copies of popular or exemplary workloads which have been modified to:
-*(a)* expose a standardized API for benchmark drivers, *(b)* optionally, enable JIT,
+*(a)* expose a standardized API for benchmark drivers, *(b)* optionally, enable backends such as torchinductor/torchscript,
*(c)* contain a miniature version of train/test data and a dependency install script.

## Installation
@@ -36,11 +36,6 @@ Or use pip:
pip install --pre torch torchvision torchtext torchaudio -i https://download.pytorch.org/whl/nightly/cu118
```

-Install other necessary libraries:
-```
-pip install boto3 pyyaml
-```

Install the benchmark suite, which will recursively install dependencies for all the models. Currently, the repo is intended to be installed from the source tree.
```
git clone https://github.com/pytorch/benchmark
@@ -115,8 +110,8 @@ Some useful options include:
- `--cpu_only` if running on a local CPU machine and ignoring machine configuration checks

#### Examples of Benchmark Filters
-- `-k "test_train[NAME-cuda-jit]"` for a particular flavor of a particular model
-- `-k "(BERT and (not cuda) and (not jit))"` for a more flexible approach to filtering
+- `-k "test_train[NAME-cuda-eager]"` for a particular flavor of a particular model
+- `-k "(BERT and (not cuda))"` for a more flexible approach to filtering

Note that `test_bench.py` will eventually be deprecated as the `userbenchmark` work evolves. Users are encouraged to explore and consider using [userbenchmark](#using-userbenchmark).

@@ -128,7 +123,7 @@ The `userbenchmark` allows you to develop your customized benchmarks with TorchB
Sometimes you may want to just run train or eval on a particular model, e.g. for debugging or profiling. Rather than relying on `__main__` implementations inside each model, `run.py` provides a lightweight CLI for this purpose, building on top of the standard BenchmarkModel API.

```
-python run.py <model> [-d {cpu,cuda}] [-m {eager,jit}] [-t {eval,train}] [--profile]
+python run.py <model> [-d {cpu,cuda}] [-t {eval,train}] [--profile]
```
Note: `<model>` can be a full, exact name, or a partial string match.
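With the `-m {eager,jit}` flag gone, a wrapper that drives `run.py` programmatically only needs a device and a test mode. A hedged sketch of building that invocation (the helper name is ours, not part of the repo):

```python
def build_run_cmd(model, device="cuda", test="eval", profile=False):
    # mirrors the CLI above: python run.py <model> [-d ...] [-t ...] [--profile]
    cmd = ["python", "run.py", model, "-d", device, "-t", test]
    if profile:
        cmd.append("--profile")
    return cmd

# e.g. hand the result to subprocess.run(cmd, check=True) from the repo root
```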

45 changes: 0 additions & 45 deletions compare.py

This file was deleted.

5 changes: 0 additions & 5 deletions compare.sh

This file was deleted.

3 changes: 0 additions & 3 deletions conftest.py
@@ -11,9 +11,6 @@ def pytest_addoption(parser):
help="Disable checks/assertions for machine configuration for stable benchmarks")
parser.addoption("--disable_nograd", action='store_true',
help="Disable no_grad for eval() runs")
-    parser.addoption("--check_opt_vs_noopt_jit",
-                     action='store_true',
-                     help="The best attempt to check results for inference runs. Not all models support this!")
parser.addoption("--cpu_only", action='store_true',
help="Run benchmarks on cpu only and ignore machine configuration checks")
parser.addoption("--cuda_only", action='store_true',
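After the removal, the hook registers only the remaining flags. A self-contained sketch of the `pytest_addoption` hook using a stub parser in place of pytest's real one (only options visible in this hunk are shown; the `--cuda_only` help text is truncated in the diff, so it is omitted here):

```python
class StubParser:
    """Minimal stand-in that records options the way pytest's parser would."""
    def __init__(self):
        self.options = {}

    def addoption(self, name, action=None, help=None):
        self.options[name] = {"action": action, "help": help}

def pytest_addoption(parser):
    parser.addoption("--disable_nograd", action='store_true',
                     help="Disable no_grad for eval() runs")
    parser.addoption("--cpu_only", action='store_true',
                     help="Run benchmarks on cpu only and ignore machine configuration checks")
    parser.addoption("--cuda_only", action='store_true')  # help text truncated in this hunk

parser = StubParser()
pytest_addoption(parser)
```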
237 changes: 0 additions & 237 deletions fx_profile.py

This file was deleted.

4 changes: 2 additions & 2 deletions gen_summary_metadata.py
@@ -52,7 +52,7 @@ def _extract_detail(path: str) -> Dict[str, Any]:
# Separate train and eval to isolated processes.
task_t = ModelTask(path, timeout=TIMEOUT)
try:
-        task_t.make_model_instance(device=device, jit=False)
+        task_t.make_model_instance(device=device)
task_t.set_train()
task_t.train()
task_t.extract_details_train()
@@ -64,7 +64,7 @@ def _extract_detail(path: str) -> Dict[str, Any]:

task_e = ModelTask(path, timeout=TIMEOUT)
try:
-        task_e.make_model_instance(device=device, jit=False)
+        task_e.make_model_instance(device=device)
task_e.set_eval()
task_e.eval()
task_e.extract_details_eval()
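Both call sites drop the `jit=False` keyword; the pattern of isolating train and eval into separate tasks is otherwise unchanged. A stubbed sketch of the updated flow (`ModelTask` here is a hypothetical stand-in, not the real subprocess-isolated class):

```python
class ModelTask:
    """Illustrative stand-in for torchbenchmark's ModelTask."""
    def __init__(self, path, timeout=None):
        self.path, self.timeout = path, timeout
        self.device = None
        self.mode = None

    def make_model_instance(self, device):
        # note: no `jit` keyword anymore
        self.device = device

    def set_train(self):
        self.mode = "train"

    def train(self):
        assert self.mode == "train" and self.device is not None

task_t = ModelTask("torchbenchmark/models/example", timeout=300)
task_t.make_model_instance(device="cpu")
task_t.set_train()
task_t.train()
```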
3 changes: 0 additions & 3 deletions install.py
@@ -2,13 +2,10 @@
import subprocess
import os
import sys
-import yaml
import tarfile
from utils import TORCH_DEPS, proxy_suggestion, get_pkg_versions, _test_https
from pathlib import Path
REPO_ROOT = Path(__file__).parent


def pip_install_requirements(requirements_txt="requirements.txt"):
if not _test_https():
print(proxy_suggestion)
4 changes: 2 additions & 2 deletions regression_detector.py
@@ -50,8 +50,8 @@ def get_default_output_path(bm_name: str) -> str:
return os.path.join(output_path, fname)

def generate_regression_result(control: Dict[str, Any], treatment: Dict[str, Any]) -> TorchBenchABTestResult:
-    def _call_userbenchmark_detector(detector, start_file: str, end_file: str) -> TorchBenchABTestResult:
-        return detector(start_file, end_file)
+    def _call_userbenchmark_detector(detector, control: Dict[str, Any], treatment: Dict[str, Any]) -> TorchBenchABTestResult:
+        return detector(control, treatment)
assert control["name"] == treatment["name"], f'Expected the same userbenchmark name from metrics files, \
but getting {control["name"]} and {treatment["name"]}.'
bm_name = control["name"]
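The detector callable now receives the parsed metric dictionaries directly instead of file paths. A runnable sketch of what a detector might look like under that contract (the result class and delta logic are illustrative, not the real `TorchBenchABTestResult`):

```python
from typing import Any, Dict

class ABTestResult:
    """Hypothetical stand-in for TorchBenchABTestResult."""
    def __init__(self, name: str, deltas: Dict[str, float]):
        self.name = name
        self.deltas = deltas

def detector(control: Dict[str, Any], treatment: Dict[str, Any]) -> ABTestResult:
    # both dicts follow the shape of an already-parsed userbenchmark metrics file
    assert control["name"] == treatment["name"]
    deltas = {metric: treatment["metrics"][metric] - value
              for metric, value in control["metrics"].items()}
    return ABTestResult(control["name"], deltas)

control = {"name": "torch-nightly", "metrics": {"latency_ms": 10.0}}
treatment = {"name": "torch-nightly", "metrics": {"latency_ms": 12.5}}
result = detector(control, treatment)
```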