I'm curious: is there good evidence in the paper that TreeGen is better than regular transformers?
I've noticed in other papers and in my own experiments that, as the dataset size grows, it's not clear whether the extra effort of adding the TreeGen/code inductive biases is worth it.
Do you have a different experience? Did you run ablation experiments showing how much each part helped TreeGen, if it did?
In our paper, we showed that TreeGen is better than regular transformers on the HearthStone dataset.
Do you have a different experience?
Do you mean using a larger dataset to train the code generation models? A larger dataset improves the performance of all models, and I think a grammar rule-guided model like TreeGen can further improve code generation performance on top of that.
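For readers unfamiliar with the distinction, here is a minimal sketch of the general idea behind grammar rule-guided decoding. It is not TreeGen's actual implementation: the toy grammar, the `GRAMMAR`/`TERMINALS` tables, and the `expand` helper are illustrative assumptions. The point is that the decoder chooses among grammar rules applicable to the current non-terminal rather than among arbitrary tokens, so the output is syntactically valid by construction.

```python
# Sketch of grammar rule-guided decoding (illustrative only, not TreeGen's code).
# The decoder expands non-terminals of a partial AST and may only pick rules
# whose left-hand side matches, so every output parses.
import random

# Toy grammar: non-terminal -> list of possible right-hand sides.
GRAMMAR = {
    "stmt": [["expr"], ["return", "expr"]],
    "expr": [["NAME"], ["expr", "+", "expr"], ["NUM"]],
}
TERMINALS = {"return", "+", "NAME", "NUM"}


def expand(symbol, depth=0, max_depth=3):
    """Expand a symbol into a flat token list by repeatedly applying grammar rules."""
    if symbol in TERMINALS:
        return [symbol]
    # A real model would score the applicable rules with a neural decoder;
    # here we just sample one at random (preferring terminal-only rules when deep).
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rules = [r for r in rules if all(s in TERMINALS for s in r)] or rules
    rule = random.choice(rules)
    tokens = []
    for child in rule:
        tokens.extend(expand(child, depth + 1, max_depth))
    return tokens


if __name__ == "__main__":
    print(" ".join(expand("stmt")))  # e.g. "return NAME + NUM"
```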
Did you run ablation experiments showing how much each part helped TreeGen, if it did?
We conducted an ablation study on the HearthStone dataset; the details are in our paper.