Preferred global encoding format getting changed in TE #133

Closed
ksivaman opened this issue Apr 5, 2023 · 2 comments
Labels
bug Something isn't working

Comments

ksivaman (Member) commented Apr 5, 2023

When using the TransformerLayer or LayerNormMLP APIs, the preferred global encoding format gets changed as a side effect. Here is a repro script:

import locale
import torch
from transformer_engine.pytorch import TransformerLayer

# Model dimensions for the repro
H = 768
seqlen = 2048
batch_size = 2
nheads = 12

inp = torch.randn(seqlen, batch_size, H, device="cuda")
model = TransformerLayer(H, 4 * H, nheads)

# The preferred encoding should stay "UTF-8" throughout, but the
# assertions after the forward passes fail because running the model
# changes it.
assert locale.getpreferredencoding() == "UTF-8", f"Preferred encoding: {locale.getpreferredencoding()}"
out = model(inp)
assert locale.getpreferredencoding() == "UTF-8", f"Preferred encoding: {locale.getpreferredencoding()}"
out = model(inp)
assert locale.getpreferredencoding() == "UTF-8", f"Preferred encoding: {locale.getpreferredencoding()}"
ksivaman (Member, Author) commented Apr 5, 2023

This bug has been narrowed down to an issue in TorchScript. To read more about it, please see this issue in NVFuser. As a workaround, one of the following can be done:

  • Run with the environment variable NVTE_BIAS_GELU_NVFUSION=0 to disable the bias and GeLU fusion via NVFuser in TE, or run with PYTORCH_JIT=0 to disable JIT fusion altogether. NOTE: This option might impact performance.
  • If doing file I/O, explicitly set the encoding format, e.g. with open(input_file, "r", encoding="utf-8") as f:. A minimal sketch of both workarounds is shown below.
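
A minimal sketch of both workarounds, assuming the environment variable needs to be set before transformer_engine is imported; the file name input.txt is an illustrative placeholder, not from the original report:

import os

# Workaround 1: disable the NVFuser bias+GeLU fusion in TE. Set this
# before importing torch/transformer_engine so it takes effect.
os.environ["NVTE_BIAS_GELU_NVFUSION"] = "0"
# Alternative: disable JIT fusion entirely (may impact performance).
# os.environ["PYTORCH_JIT"] = "0"

from transformer_engine.pytorch import TransformerLayer  # noqa: E402

# Workaround 2: do not rely on the process-wide preferred encoding for
# file I/O; pass the encoding explicitly instead.
with open("input.txt", "r", encoding="utf-8") as f:  # placeholder path
    data = f.read()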

ptrendx added the bug label May 16, 2024
ptrendx (Member) commented May 16, 2024

The underlying issue is going to be fixed in CUDA 12.7.

ptrendx closed this as completed May 16, 2024