
[fx2trt] add temporary setup.py for fx2trt #1132

Closed
wants to merge 3 commits

Conversation

frank-wei
Contributor

@frank-wei frank-wei commented Jun 19, 2022

Description

Currently, we do not have a nightly build set up yet (CircleCI is enabled, and we are free to add tests and a nightly job), and users who want to use fx2trt need to work through the installation, which takes a long time to figure out (my first time took half a day, since I was not able to use Docker). The torch_tensorrt build itself also takes around 40 minutes.
So I propose a temporary solution: for fx2trt users who would just like to try it out, we add this setup file for their convenience. With it, they can quickly install the fx2trt module and try it in their Python environment.

It is needed for our Hugging Face PR, since the reviewers need to reproduce the results.

Here are the possible universal solutions I can think of; NV folks and Meta folks, please feel free to comment:

  1. Build a nightly wheel file which users can install and use promptly. But it needs to cover different combinations of TRT version X cuDNN version X CUDA version?
  2. Provide a Docker build file to help users set up the build environment. It needs nightly torch + CUDA 11.3 or 11.4 + TRT 8.2.4.2? A few combinations as well?
  3. For users who are not able to use a Docker image, our README needs to provide more details on changing the WORKSPACE and BUILD files.
  4. For users who want to try out fx2trt, can we keep this PR's setup file for their continued convenience?

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@frank-wei frank-wei requested a review from narendasan June 19, 2022 05:10
@github-actions github-actions bot requested review from 842974287 and wushirong June 19, 2022 05:10
@frank-wei frank-wei changed the title add temporary setup.py [fx2trt ]add temporary setup.py for fx2trt Jun 19, 2022
@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@narendasan
Collaborator

narendasan commented Jun 20, 2022

  1. Build a nightly wheel file which users can install and use promptly. But it needs to cover different combinations of TRT version X cuDNN version X CUDA version?
     IMO this should be a pretty infrequently changing stack: latest CUDA supported by PyTorch + last TensorRT GA and its associated cuDNN. PyTorch nightly should be fetched at build time.
  2. Provide a Docker build file to help users set up the build environment. It needs nightly torch + CUDA 11.3 or 11.4 + TRT 8.2.4.2? A few combinations as well?
     This container is available for this purpose: https://github.com/pytorch/TensorRT/blob/master/docker/Dockerfile
  3. For users who are not able to use a Docker image, our README needs to provide more details on changing the WORKSPACE and BUILD files.
     Fair enough, I actually think of this use case as the primary setup method. We can add more documentation on how to stage the tarballs and point to them in the WORKSPACE file to get a hermetic build.
  4. For users who want to try out fx2trt, can we keep this PR's setup file for their continued convenience?
     @ncomly-nvidia thoughts here? I think we can frame it as a build option for torch-tensorrt that is pure Python or something. I think though, in that case, we should still call it torch-tensorrt but just have the TorchScript path throw Unimplemented or something, rather than having an alternative package.

@narendasan
Collaborator

I am interested to hear that the build takes 40 mins. That is pretty concerning; is this building a Docker container plus building the library, or just the library?

@yinghai

yinghai commented Jun 21, 2022

@ncomly-nvidia thoughts here? I think we can frame it as a build option for torch-tensorrt that is pure Python or something. I think though, in that case, we should still call it torch-tensorrt but just have the TorchScript path throw Unimplemented or something, rather than having an alternative package.

Yeah, I think we should just have one setup.py and maybe add an --fx-only option to it, so that we can just copy the Python files and not do any C++/CUDA compilation. I'd prefer we maintain the torch-tensorrt name, as that's what we've strived for over the past several months.

@frank-wei
Contributor Author

I am interested to hear that the build takes 40 mins. That is pretty concerning; is this building a Docker container plus building the library, or just the library?

I think my previous description was not accurate, because my internal machine is shared with other users.
We can use the CircleCI build time as a reference: it takes around 20 min. https://app.circleci.com/pipelines/github/pytorch/TensorRT/53/workflows/f39f08b9-350b-4c92-814a-0c543a74be03/jobs/63

@frank-wei
Contributor Author

frank-wei commented Jun 21, 2022

The current import path does not change. I am still using import torch_tensorrt.fx. If a user uses it to import ts, the error below is thrown:

>>> import torch_tensorrt.ts
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch_tensorrt.ts'
>>> import torch_tensorrt.fx

Is there a way we can add a flag so that setup can proceed in a different way, as @yinghai mentioned?

@narendasan
Collaborator

We already have a few flags in the top-level setup.py; I guess we can add a new one to gate which files are added and which tasks are performed.
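Such a gating flag could be sketched roughly as follows. This is a minimal, hypothetical illustration, not the PR's actual setup.py: the flag name and the package lists are assumptions, and the setuptools call is left as a comment.

```python
# Hypothetical sketch: strip a custom flag from argv before setuptools
# parses it, and use it to gate which packages get installed.
import sys

FX_ONLY_FLAG = "--fx2trt-only"  # assumed flag name


def parse_fx_only(argv):
    """Return (fx_only, argv_without_flag); setuptools must not see the flag."""
    fx_only = FX_ONLY_FLAG in argv
    return fx_only, [a for a in argv if a != FX_ONLY_FLAG]


def select_packages(fx_only):
    """Pick the pure-Python FX path only, or the full package set
    (which would also trigger the C++/CUDA build tasks)."""
    if fx_only:
        return ["torch_tensorrt.fx"]
    return ["torch_tensorrt", "torch_tensorrt.ts", "torch_tensorrt.fx"]


if __name__ == "__main__":
    fx_only, sys.argv = parse_fx_only(sys.argv)
    # setuptools.setup(name="torch_tensorrt",
    #                  packages=select_packages(fx_only), ...)
```

Stripping the flag before setup() runs matters because setuptools rejects command-line options it does not recognize.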

@ncomly-nvidia
Contributor

The current import path does not change. I am still using import torch_tensorrt.fx. If a user uses it to import ts, the error below is thrown:

>>> import torch_tensorrt.ts
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch_tensorrt.ts'
>>> import torch_tensorrt.fx

Is there a way we can add a flag so that setup can proceed in a different way, as @yinghai mentioned?

@frank-wei what happens with import torch_tensorrt? This is the path I'd expect most users to try first, even if only using the FX path.

@frank-wei
Contributor Author

The current import path does not change. I am still using import torch_tensorrt.fx. If a user uses it to import ts, the error below is thrown:

>>> import torch_tensorrt.ts
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch_tensorrt.ts'
>>> import torch_tensorrt.fx

Is there a way we can add a flag so that setup can proceed in a different way, as @yinghai mentioned?

@frank-wei what happens with import torch_tensorrt? This is the path I'd expect most users to try first, even if only using the FX path.

It won't throw an error if the user runs python -c "import torch_tensorrt", since under the installation folder python3.8/site-packages/torch_tensorrt there is only the fx folder.
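This behavior can be reproduced with a throwaway package layout. The sketch below uses a stand-in package name, torch_tensorrt_demo (an assumption, chosen to avoid clashing with any real torch_tensorrt install): the top-level import succeeds as long as the package's __init__.py exists, while the absent ts subpackage raises ModuleNotFoundError.

```python
# Recreate a site-packages layout that contains only the fx subpackage,
# then check which imports succeed. The name torch_tensorrt_demo is a
# stand-in so this never touches a real torch_tensorrt installation.
import importlib
import pathlib
import sys
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "torch_tensorrt_demo"
(pkg / "fx").mkdir(parents=True)
(pkg / "__init__.py").write_text("")          # top-level package exists
(pkg / "fx" / "__init__.py").write_text("")   # only the fx subpackage exists

sys.path.insert(0, str(root))

importlib.import_module("torch_tensorrt_demo")     # succeeds
importlib.import_module("torch_tensorrt_demo.fx")  # succeeds
try:
    importlib.import_module("torch_tensorrt_demo.ts")
    missing = None
except ModuleNotFoundError as exc:
    missing = exc.name  # the missing submodule's dotted name
```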

@frank-wei
Contributor Author

frank-wei commented Jun 22, 2022

I have updated the PR. Users can rely on a flag to install fx2trt only:
python3 setup.py install --fx2trt-only

@frank-wei
Contributor Author

Closing it as it is merged in #1137.

@frank-wei frank-wei closed this Jun 22, 2022
@frank-wei frank-wei deleted the fx2trt_wei_1 branch June 22, 2022 23:42