# DNN-based source separation

DNN-based source separation (PyTorch implementation)

## What's new

- v0.6.3
  - Updated results.
  - Added an example for the MDX Challenge 2021.

## Models

| Model | Reference | Implementation |
| --- | --- | --- |
| WaveNet | WaveNet: A Generative Model for Raw Audio | |
| Wave-U-Net | Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation | |
| Deep clustering | Single-Channel Multi-Speaker Separation using Deep Clustering | |
| Chimera++ | Alternative Objective Functions for Deep Clustering | |
| DANet | Deep Attractor Network for Single-microphone Speaker Separation | |
| ADANet | Speaker-independent Speech Separation with Deep Attractor Network | |
| TasNet | TasNet: Time-domain Audio Separation Network for Real-time, Single-channel Speech Separation | |
| Conv-TasNet | Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation | |
| DPRNN-TasNet | Dual-path RNN: Efficient Long Sequence Modeling for Time-domain Single-channel Speech Separation | |
| Gated DPRNN-TasNet | Voice Separation with an Unknown Number of Multiple Speakers | |
| FurcaNet | FurcaNet: An End-to-End Deep Gated Convolutional, Long Short-term Memory, Deep Neural Networks for Single Channel Speech Separation | |
| FurcaNeXt | FurcaNeXt: End-to-End Monaural Speech Separation with Dynamic Gated Dilated Temporal Convolutional Networks | |
| DeepCASA | Divide and Conquer: A Deep CASA Approach to Talker-independent Monaural Speaker Separation | |
| Conditioned-U-Net | Conditioned-U-Net: Introducing a Control Mechanism in the U-Net for Multiple Source Separations | |
| MMDenseNet | Multi-scale Multi-band DenseNets for Audio Source Separation | |
| MMDenseLSTM | MMDenseLSTM: An Efficient Combination of Convolutional and Recurrent Neural Networks for Audio Source Separation | |
| UMX (Open-Unmix) | Open-Unmix - A Reference Implementation for Music Source Separation | |
| Wavesplit | Wavesplit: End-to-End Speech Separation by Speaker Clustering | |
| DPTNet | Dual-Path Transformer Network: Direct Context-Aware Modeling for End-to-End Monaural Speech Separation | |
| D3Net | D3Net: Densely connected multidilated DenseNet for music source separation | |
| LaSAFT | LaSAFT: Latent Source Attentive Frequency Transformation for Conditioned Source Separation | |
| SepFormer | Attention is All You Need in Speech Separation | |
| GALR | Effective Low-Cost Time-Domain Audio Separation Using Globally Attentive Locally Recurrent Networks | |
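Many of the time-domain models above (TasNet, Conv-TasNet, DPRNN-TasNet, and relatives) share an encoder-mask-decoder design: a learned 1-D convolutional encoder, a network that estimates one mask per source over the encoded representation, and a transposed-convolution decoder. The sketch below is a minimal, hypothetical illustration of that pattern in PyTorch; the class name, channel sizes, and sigmoid mask are assumptions for clarity, not this repository's implementation.

```python
import torch
import torch.nn as nn

class MaskingSeparator(nn.Module):
    """Minimal encoder-mask-decoder separator in the style of TasNet/Conv-TasNet.
    Illustrative sketch only; hyperparameters are arbitrary."""

    def __init__(self, n_basis=64, kernel_size=16, n_sources=2):
        super().__init__()
        stride = kernel_size // 2
        self.n_sources = n_sources
        # Learned analysis transform replacing the STFT
        self.encoder = nn.Conv1d(1, n_basis, kernel_size, stride=stride, bias=False)
        # Estimates one mask per source over the encoded representation
        self.mask_net = nn.Sequential(
            nn.Conv1d(n_basis, n_basis * n_sources, 1),
            nn.Sigmoid(),  # masks in [0, 1]
        )
        # Learned synthesis transform back to the waveform domain
        self.decoder = nn.ConvTranspose1d(n_basis, 1, kernel_size, stride=stride, bias=False)

    def forward(self, mixture):
        # mixture: (batch, 1, time)
        w = self.encoder(mixture)                            # (batch, n_basis, frames)
        masks = self.mask_net(w)                             # (batch, n_basis * n_sources, frames)
        masks = masks.view(-1, self.n_sources, w.size(1), w.size(2))
        masked = masks * w.unsqueeze(1)                      # per-source masked features
        batch = masked.size(0)
        estimates = self.decoder(masked.view(batch * self.n_sources, w.size(1), w.size(2)))
        return estimates.view(batch, self.n_sources, -1)     # (batch, n_sources, time)

model = MaskingSeparator()
mixture = torch.randn(4, 1, 16000)   # batch of 4 one-second mixtures at 16 kHz
estimates = model(mixture)           # (4, 2, 16000): one waveform per source
```

The models differ mainly in the mask-estimation network (stacked dilated convolutions in Conv-TasNet, dual-path RNNs in DPRNN-TasNet, transformers in DPTNet/SepFormer), while the encoder/decoder scheme stays essentially the same.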

## Modules

| Module | Reference | Implementation |
| --- | --- | --- |
| Depthwise-separable convolution | | |
| Gated Linear Units | | |
| FiLM (Feature-wise Linear Modulation) | FiLM: Visual Reasoning with a General Conditioning Layer | |
| PoCM (Point-wise Convolutional Modulation) | LaSAFT: Latent Source Attentive Frequency Transformation for Conditioned Source Separation | |
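As an example of the modules above, a depthwise-separable convolution factors a standard convolution into a per-channel (depthwise) pass followed by a 1×1 (pointwise) pass, cutting parameters and compute; Conv-TasNet uses this heavily in its separator. The following is a minimal sketch in PyTorch, not this repository's module.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise-separable 1-D convolution: a per-channel convolution
    (groups == in_channels) followed by a 1x1 pointwise convolution.
    Illustrative sketch only."""

    def __init__(self, in_channels, out_channels, kernel_size, dilation=1):
        super().__init__()
        # "same" padding so the time axis is preserved for odd kernel sizes
        padding = (kernel_size - 1) * dilation // 2
        self.depthwise = nn.Conv1d(
            in_channels, in_channels, kernel_size,
            padding=padding, dilation=dilation, groups=in_channels,
        )
        self.pointwise = nn.Conv1d(in_channels, out_channels, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(4, 128, 100)
conv = DepthwiseSeparableConv1d(128, 256, kernel_size=3, dilation=2)
y = conv(x)  # (4, 256, 100): channels change, time axis preserved
```

Gated Linear Units are available directly as `torch.nn.GLU`, which halves the channel dimension by gating one half of the input with the sigmoid of the other.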

## Training techniques

| Method | Reference | Implementation |
| --- | --- | --- |
| Permutation invariant training (PIT) | Multi-talker Speech Separation with Utterance-level Permutation Invariant Training of Deep Recurrent Neural Networks | |
| One-and-rest PIT | Recursive Speech Separation for Unknown Number of Speakers | |
| Probabilistic PIT | Probabilistic Permutation Invariant Training for Speech Separation | |
| Sinkhorn PIT | Towards Listening to 10 People Simultaneously: An Efficient Permutation Invariant Training of Audio Source Separation Using Sinkhorn's Algorithm | |
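The idea shared by the PIT variants above is that the model's outputs have no fixed speaker order, so the loss is computed under every output-to-target permutation and only the best assignment is trained on. A minimal utterance-level sketch follows, using MSE for brevity (the papers typically use SI-SNR or similar); this is an illustration, not this repository's loss function.

```python
import itertools
import torch

def pit_mse_loss(estimates, targets):
    """Utterance-level permutation invariant training (PIT) loss.

    Evaluates the MSE for every permutation of the target sources and
    takes the minimum per utterance, so training is insensitive to the
    ordering of the model's outputs.

    estimates, targets: (batch, n_sources, time)
    """
    n_sources = estimates.size(1)
    losses = []
    for perm in itertools.permutations(range(n_sources)):
        permuted = targets[:, perm, :]
        losses.append(((estimates - permuted) ** 2).mean(dim=(1, 2)))
    losses = torch.stack(losses, dim=1)   # (batch, n_permutations)
    min_loss, _ = losses.min(dim=1)       # best permutation per utterance
    return min_loss.mean()

estimates = torch.randn(4, 2, 16000)
# Targets identical to the estimates but with sources swapped:
# the best permutation recovers them exactly, so the loss is zero.
loss = pit_mse_loss(estimates, estimates[:, [1, 0], :])
```

Exhaustive search over permutations is factorial in the number of sources, which is exactly what the Sinkhorn PIT variant above addresses for large source counts.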

## Examples


An example of source separation on the LibriSpeech dataset using Conv-TasNet.

Other tutorials are available under `<REPOSITORY_ROOT>/egs/tutorials/`.

### 0. Prepare the dataset

```sh
cd <REPOSITORY_ROOT>/egs/tutorials/common/
. ./prepare_librispeech.sh --dataset_root <DATASET_DIR> --n_sources <#SPEAKERS>
```

### 1. Training

```sh
cd <REPOSITORY_ROOT>/egs/tutorials/conv-tasnet/
. ./train.sh --exp_dir <OUTPUT_DIR>
```

To resume training from a checkpoint:

```sh
. ./train.sh --exp_dir <OUTPUT_DIR> --continue_from <MODEL_PATH>
```

### 2. Evaluation

```sh
cd <REPOSITORY_ROOT>/egs/tutorials/conv-tasnet/
. ./test.sh --exp_dir <OUTPUT_DIR>
```

### 3. Demonstration

```sh
cd <REPOSITORY_ROOT>/egs/tutorials/conv-tasnet/
. ./demo.sh
```