
Is TreeGen better than regular transformers? #25

Open
brando90 opened this issue Mar 31, 2022 · 1 comment
@brando90

I'm curious: is there good evidence in the paper that TreeGen is better than regular transformers?

I've noticed in other papers and in my own experiments that as I increase the dataset size, it's not clear that the extra effort of adding the TreeGen/code inductive biases is worth it.

Do you have a different experience? Did you do ablation experiments on how each part helped TreeGen, and whether it did?

@zysszy (Owner) commented Apr 1, 2022

In our paper, we showed that TreeGen is better than regular transformers on the HearthStone dataset.

Do you have a different experience?

Do you mean using a large dataset to train the code generation models? A larger dataset can improve the performance of all models, and I think a grammar rule-guided model like TreeGen can further improve code generation performance.
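
For context, "grammar rule-guided" means the decoder predicts which production rule of the language's grammar to apply next, expanding an AST top-down, rather than predicting free-form tokens; every partial output is therefore syntactically valid by construction. A minimal sketch of the idea, with a hypothetical toy grammar and helper names (not TreeGen's actual code):

```python
# Toy grammar: each non-terminal maps to its candidate production rules,
# where a rule is (name, child non-terminals to expand next).
GRAMMAR = {
    "stmt": [("assign", ["name", "expr"]), ("return", ["expr"])],
    "expr": [("num", []), ("add", ["expr", "expr"]), ("var", ["name"])],
    "name": [("id", [])],
}

def expand(symbol, choose_rule):
    """Expand a non-terminal into a subtree by applying one production rule.
    choose_rule stands in for the learned classifier, which would score
    GRAMMAR[symbol] from the decoding context rather than pick greedily."""
    rule, children = choose_rule(symbol, GRAMMAR[symbol])
    return (rule, [expand(child, choose_rule) for child in children])

# Greedy stand-in for the model: always take the first candidate rule.
tree = expand("stmt", lambda symbol, rules: rules[0])
print(tree)  # -> ('assign', [('id', []), ('num', [])])
```

The point is that the prediction space at each step is restricted to the rules valid for the current non-terminal, which is the inductive bias TreeGen builds on.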

Did you do ablation experiments on how each part helped TreeGen, and whether it did?

We conducted an ablation study on the HearthStone dataset; the details are in our paper.
