Releases · Spijkervet/CLMR
CLMR weights (MagnaTagATune, MLP)
Weights of a SampleCNN encoder pre-trained with CLMR, and of a two-layer multilayer perceptron (MLP) trained on the music classification task using the representations from the frozen encoder.
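A minimal sketch of inspecting these checkpoints with PyTorch before loading them into the SampleCNN encoder and MLP classifier defined in this repository. The file names below are placeholders for the assets attached to this release, and the exact checkpoint layout (a raw state_dict vs. a wrapped dictionary) is an assumption:

import torch

# Placeholder file names; use the checkpoint files attached to this release.
encoder_ckpt = torch.load("clmr_encoder_checkpoint.pt", map_location="cpu")
mlp_ckpt = torch.load("mlp_checkpoint.pt", map_location="cpu")

# A checkpoint may be a raw state_dict or wrap one under a "state_dict" key.
encoder_state = encoder_ckpt.get("state_dict", encoder_ckpt)

# Inspect parameter names and shapes before loading them into the model classes.
for name, tensor in list(encoder_state.items())[:5]:
    print(name, tuple(tensor.shape))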
CLMR weights (MagnaTagATune)
CLMR weights (MagnaTagATune, SampleCNN, 48 batch size, 1550 epochs).
Includes both the SampleCNN encoder and the fine-tuned linear layer.
ROC-AUC_tag = 88.49
PR-AUC_tag = 35.37
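For reference, the tag-wise scores above are ROC-AUC and PR-AUC averaged over the MagnaTagATune tags. A minimal sketch of how such scores can be computed with scikit-learn; the random arrays are placeholders for real model outputs on the test split:

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# y_true: (n_clips, n_tags) binary labels; y_score: (n_clips, n_tags) predicted scores.
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=(100, 50))
y_score = rng.random((100, 50))

roc_auc = roc_auc_score(y_true, y_score, average="macro")
pr_auc = average_precision_score(y_true, y_score, average="macro")
print(f"ROC-AUC_tag = {100 * roc_auc:.2f}, PR-AUC_tag = {100 * pr_auc:.2f}")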
Fine-tuned Linear Classifier weights
Weights of a linear classifier fine-tuned on the music classification task of the MagnaTagATune dataset, using the pre-trained weights from https://github.com/Spijkervet/CLMR/releases/tag/1.0
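A minimal linear-evaluation sketch, assuming a pre-trained encoder that maps a batch of raw waveforms to fixed-size representations. The feature dimension, tag count, and training-loop details below are illustrative, not the repository's exact implementation:

import torch
import torch.nn as nn

feature_dim, n_tags = 512, 50          # assumed sizes for illustration
classifier = nn.Linear(feature_dim, n_tags)
criterion = nn.BCEWithLogitsLoss()     # multi-label tag prediction
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def train_step(encoder, waveforms, labels):
    # waveforms: (batch, 1, audio_length) float tensor
    # labels: (batch, n_tags) float tensor of 0/1 tag indicators
    encoder.eval()
    with torch.no_grad():              # keep the pre-trained encoder frozen
        features = encoder(waveforms)
    logits = classifier(features)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()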
CLMR weights (MagnaTagATune, SampleCNN, 48 batch size, 1550 epochs)
Configuration used for pre-training on the MagnaTagATune dataset using CLMR, with a SampleCNN encoder:
## distributed training
nodes: 1
gpus: 1 # I recommend always assigning 1 GPU to 1 node
nr: 0 # rank of this machine among the nodes (0 to nodes - 1)
workers: 16
## dataset options
dataset: "magnatagatune"
data_input_dir: "./datasets"
pretrain_dataset: "magnatagatune"
download: 0
## task / dataset options
domain: "audio"
task: "tags"
model_name: "clmr"
## train options
seed: 42
batch_size: 48
start_epoch: 0
epochs: 2000
checkpoint_epochs: 10
## audio
audio_length: 59049
sample_rate: 22050
## audio transformations
transforms_polarity: 0.8
transforms_noise: 0.0
transforms_gain: 0.0
transforms_filters: 0.4
transforms_delay: 0.3
## loss options
optimizer: "Adam" # [Adam, LARS]
learning_rate: 3.0e-4 # for Adam optimizer, LARS uses batch-specific LR
weight_decay: 1.0e-4
temperature: 0.5
## supervised params
supervised: False # set to True to train the encoder in a fully supervised fashion
## model options
normalize: True
projection_dim: 128
projector_layers: 2
dropout: 0.5
## reload options
model_path: "save" # set to the directory containing `checkpoint_##.tar`
epoch_num: 0 # set to checkpoint number
finetune_model_path: ""
finetune_epoch_num: ""
reload: False
## linear evaluation options
mlp: False # set to True to use one extra hidden layer during fine-tuning
logistic_batch_size: 48
logistic_epochs: 10
logistic_lr: 0.001
reload_logreg: False
## train / fine-tune with percentage of total train data
perc_train_data: 1.0
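A minimal sketch of loading this configuration in Python; the file path is an assumption and should point to wherever the YAML above is saved:

import yaml
from types import SimpleNamespace

# Load the YAML configuration into an attribute-style object.
with open("config/config.yaml") as f:
    args = SimpleNamespace(**yaml.safe_load(f))

# 59049 samples at 22050 Hz is roughly 2.7 seconds of audio per example.
print(args.audio_length / args.sample_rate, "seconds per training example")
print("batch size:", args.batch_size, "| temperature:", args.temperature)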