[Want to Help!!] Welcome to JEN-1-COMPOSER-pytorch Discussions! #1
0417keito
announced in
Announcements
👋 What is this discussion for?
While implementing JEN-1-Composer, I ran into a few points that are unclear, and I would like to discuss them here.
To that end, here is a summary of what has been done and what still needs discussion.
What has been done
What I want to discuss
First, the number of tracks is chosen according to the curriculum-training stage. Then, for the selected tracks, a non-zero timestep t_i is randomly sampled for each sample in the batch, and for each of the remaining tracks, a timestep is sampled from {0, t_i, T} per sample in the batch. I implemented this in trainer.py; is this correct?
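Concretely, this is roughly how I read that sampling scheme (a simplified sketch, assuming a discrete DDPM-style schedule with T steps; the name `sample_track_timesteps` and the choice to sample from {0, t_i, T} independently per remaining track are my own interpretation, not from the paper):

```python
import torch

def sample_track_timesteps(batch_size: int, num_total_tracks: int,
                           num_selected: int, T: int) -> torch.Tensor:
    """Return per-track timesteps of shape (batch_size, num_total_tracks)."""
    # Randomly choose which tracks are "selected" for each sample in the batch.
    perm = torch.stack([torch.randperm(num_total_tracks) for _ in range(batch_size)])
    selected = torch.zeros(batch_size, num_total_tracks, dtype=torch.bool)
    selected.scatter_(1, perm[:, :num_selected], True)

    # Selected tracks share one non-zero timestep t_i per sample, t_i in [1, T].
    t_i = torch.randint(1, T + 1, (batch_size, 1))

    # Remaining tracks: independently pick one of {0, t_i, T} per sample and track.
    choice = torch.randint(0, 3, (batch_size, num_total_tracks))
    options = torch.stack([torch.zeros_like(choice),
                           t_i.expand(-1, num_total_tracks),
                           torch.full_like(choice, T)], dim=-1)
    t_rest = options.gather(-1, choice.unsqueeze(-1)).squeeze(-1)

    return torch.where(selected, t_i.expand(-1, num_total_tracks), t_rest)
```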
Furthermore, is k the maximum number of tracks for both input and output at every curriculum-training stage? Unless the channel count is fixed to the maximum number of tracks k, I don't think training with a single model is possible, because the model's input and output channels would otherwise differ between stages.
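One way I can think of to keep the channels fixed (a sketch under my own assumptions: latents are flattened into channels, and inactive track slots are zero-padded; `pad_to_k_tracks` is a hypothetical helper, not from this repo):

```python
import torch

def pad_to_k_tracks(tracks: torch.Tensor, k: int) -> torch.Tensor:
    """tracks: (batch, n_active, latent_dim, time), with n_active <= k."""
    b, n_active, c, t = tracks.shape
    if n_active < k:
        pad = torch.zeros(b, k - n_active, c, t,
                          device=tracks.device, dtype=tracks.dtype)
        tracks = torch.cat([tracks, pad], dim=1)
    # Flatten track slots into channels so the model always sees k * latent_dim.
    return tracks.reshape(b, k * c, t)
```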
For surroundings generation, condition generation, and co-generation, I simply computed x_t for the selected tracks and for the remaining tracks, respectively, based on the selected timesteps, and then concatenated the two results. Is this correct?
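Here is a simplified sketch of what I mean, assuming a precomputed cumulative schedule alpha_bar; the per-track timestep tensor `t` and the convention that t == 0 marks a clean conditioning track are my interpretation:

```python
import torch

def noise_tracks(x0: torch.Tensor, t: torch.Tensor,
                 alpha_bar: torch.Tensor) -> torch.Tensor:
    """x0: (batch, n_tracks, latent_dim, time); t: (batch, n_tracks)."""
    ab = alpha_bar[t].view(*t.shape, 1, 1)              # per-track alpha_bar_t
    noise = torch.randn_like(x0)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise
    # Tracks with t == 0 are conditioning tracks and stay clean.
    return torch.where((t == 0).view(*t.shape, 1, 1), x0, x_t)
```

The per-track x_t tensors, each at its own timestep, are then simply concatenated along the track dimension before being fed to the denoiser.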