
Checking that the LM actually trained #3728

Closed
nikkon3 opened this issue Apr 9, 2020 · 8 comments

nikkon3 commented Apr 9, 2020

I have trained a GPT-2 model from scratch the way described in this post: https://huggingface.co/blog/how-to-train.
In step 4, where the author checks that the trained model actually works, he uses the "fill-mask" pipeline, but that only works for models trained with a masked language modeling objective.
Is there something similar to "fill-mask" that I could use for my case?

julien-c (Member) commented Apr 9, 2020

Yes: simply use model.generate() (no need for a Pipeline in that case).

cc @patrickvonplaten

patrickvonplaten (Contributor) commented:

I'd check whether GPT-2 works by sampling from a simple prompt, e.g.:

# `model` and `tokenizer` are the trained GPT-2 model and its tokenizer
output = model.generate(tokenizer.encode('The president', return_tensors='pt'), do_sample=True)
tokenizer.decode(output[0])
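
For a fully self-contained check, here is a minimal sketch, assuming the trained model and tokenizer were saved to a local directory (the path below is a placeholder):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Placeholder path: wherever your trained model and tokenizer were saved
model = GPT2LMHeadModel.from_pretrained('./my-gpt2-model')
tokenizer = GPT2Tokenizer.from_pretrained('./my-gpt2-model')

# Sample a continuation of the prompt; a model that trained at all should
# produce locally coherent text rather than random tokens
input_ids = tokenizer.encode('The president', return_tensors='pt')
output = model.generate(input_ids, do_sample=True, max_length=40)
print(tokenizer.decode(output[0]))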

enzoampil (Contributor) commented:

Thanks for clarifying! I was considering sending a PR for a GenerationPipeline under transformers.pipeline.

thomwolf reopened this Apr 10, 2020
enzoampil (Contributor) commented Apr 11, 2020

I have a branch that implements a GenerationPipeline, which already works for GPT models.

The initial version of GenerationPipeline can be found in the branch's pipelines module, where I've registered it to the pipeline function using gpt2 as the default.

The implementation is based on the approach taken in run_generation.py, which means the forward pass uses the model.generate() method explained by @julien-c and @patrickvonplaten above.

So far, the code above works smoothly for openai-gpt and gpt2.

Sample code:

# Pip install
# If you're using Google Colab, make sure to reset runtime after installing
!pip install -e git+git://github.com/enzoampil/transformers.git@generation_pipeline#egg=transformers

# Pipeline uses `gpt2` by default
from transformers import pipeline
gpt = pipeline('generation', num_return_sequences=1, length=40)
gpt("You look great")
# ['You look great, me!" he says. "There\'s nothing wrong with that, it\'s just I wanted a bit of attention so I had to go to work. I had to back down."\n']

However, the module still doesn't work with other language models like xlm, xlnet, and transfo-xl.

I will do a root cause analysis on this and will send a PR as soon as I get this to work on the rest of the language models that should work with GenerationPipeline (i.e. those runnable from run_generation.py).

For more details, you can check out this Colab notebook, which shows the GPT models working and the remaining models failing in its later sections.
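
Conceptually, the forward pass described above reduces to roughly the following (a simplified, illustrative sketch of the generate()-based approach, not the branch's actual code):

from transformers import AutoModelWithLMHead, AutoTokenizer

# Simplified view of what the generation pipeline does internally (illustrative only)
tokenizer = AutoTokenizer.from_pretrained('gpt2')
model = AutoModelWithLMHead.from_pretrained('gpt2')
input_ids = tokenizer.encode('You look great', return_tensors='pt')
output = model.generate(input_ids, do_sample=True, max_length=40, num_return_sequences=1)
texts = [tokenizer.decode(o, skip_special_tokens=True) for o in output]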

enzoampil (Contributor) commented Apr 12, 2020

[UPDATE] The issues above have been resolved and I'm in the process of sending a PR.

A Google Colab tutorial is available here for running GenerationPipeline with the following LM models (a usage sketch follows the list):

  1. OpenAI GPT
  2. OpenAI GPT-2
  3. Transformer-XL
  4. XLM
  5. XLNet
  6. T5
  7. CTRL
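
As an illustrative sketch of switching checkpoints (assuming the branch above is installed; the 'generation' task name comes from that branch, and 'xlnet-base-cased' is just one possible choice from the list):

from transformers import pipeline

# Any of the checkpoints listed above can be passed via `model`
xlnet = pipeline('generation', model='xlnet-base-cased', num_return_sequences=1, length=40)
xlnet("The weather today")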

patrickvonplaten (Contributor) commented:

Your PR looks very nice so far :-) I will take a look early next week!

enzoampil (Contributor) commented:

Thanks!

stale bot commented Jun 12, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
