
Simplify Machine Translation demo by using Trainer API #10895

Conversation

nickyfantasy (Contributor):

No description provided.

context = encoder(is_sparse)
translation_ids, translation_scores = decoder_decode(context, is_sparse)

exe = Executor(place)
Contributor:

We should move away from using the executor, right?

Contributor:

Yeah, I think we should not expose the executor. Perhaps we can write a decode_main() similar to the train() method above?
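As a sketch of the wrapper pattern being suggested here: decode_main() would create and run the executor internally, so callers only pass data in and get translations out, mirroring train(). The _StubExecutor class and the fetch names below are hypothetical stand-ins for illustration, not the actual Fluid API.

```python
# Sketch only: a decode_main() that keeps the executor as an internal
# detail, the way train() hides the training loop.

class _StubExecutor:
    """Hypothetical stand-in for fluid.Executor, for illustration only."""
    def run(self, fetch_list, feed=None):
        # A real executor would run the compiled inference program here;
        # we return fixed values shaped like (ids, scores).
        return [[101, 102, 0], [-0.5]]

def decode_main(data, place="cpu"):
    """Run decoding without exposing the executor to the caller."""
    exe = _StubExecutor()  # created, used, and discarded inside this function
    translation_ids, translation_scores = exe.run(
        fetch_list=["translation_ids", "translation_scores"], feed=data)
    return translation_ids, translation_scores

ids, scores = decode_main({"src_word": [1, 2, 3]})
```

The point of the design is that the executor's lifetime is confined to decode_main(), so a later switch to a high-level API would not change the caller's code.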

Contributor Author:

The decode and train steps in this example are not compatible with each other; we cannot take the model saved by train and use it in infer.

Contributor Author:

We cannot use trainer.train either, because decode does not use an optimizer or a backward pass; it performs a beam search.
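To illustrate why decoding does not fit the trainer: beam search is a search over token sequences, with no optimizer and no backward pass. Below is a minimal, generic beam-search sketch using only the standard library; the step function and toy probability table are hypothetical, not the model from this PR.

```python
import math

def beam_search(step_logprobs, beam_size=2, max_len=3, eos=0):
    """step_logprobs(seq) -> {token: log_prob} for the next token."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == eos:   # finished hypothesis: keep as-is
                candidates.append((seq, score))
                continue
            for tok, lp in step_logprobs(seq).items():
                candidates.append((seq + [tok], score + lp))
        # keep only the beam_size best partial translations
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

# Toy next-token distribution: 1 is likely first, then 2, then EOS (0).
def toy_step(seq):
    table = {0: {1: math.log(0.9), 2: math.log(0.1)},
             1: {2: math.log(0.8), 0: math.log(0.2)},
             2: {0: math.log(0.9), 1: math.log(0.1)}}
    last = seq[-1] if seq else 0
    return table[last]

best_seq, best_score = beam_search(toy_step)[0]  # best_seq is [1, 2, 0]
```

Nothing here resembles a training step, which is why a separate decode entry point (rather than trainer.train) is needed.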

Contributor:

Yeah, and I think the GPU implementation of beam search is still missing?


src_word_data = to_lodtensor([x[0] for x in data], place)

result_ids, result_scores = exe.run(
Contributor:

I don't see an Inferencer. We should use the high-level API.

Contributor:

Or will there be a subsequent PR to add the infer part?

Contributor:

Discussed with Nicky and Jeff. We could add a simple test that translates a sample sentence later.

Contributor Author:

I will talk to Longfei to see if there is a way to avoid exposing the executor; I will try to update it in the next PR.

@nickyfantasy nickyfantasy merged commit d4c2164 into PaddlePaddle:develop May 24, 2018