Audiomentations

A Python library for audio data augmentation. Inspired by albumentations. Useful for deep learning. Runs on CPU. Supports mono and multichannel audio. Can be integrated into training pipelines in e.g. TensorFlow/Keras or PyTorch. Has helped people get world-class results in Kaggle competitions. Is used by companies making next-generation audio products.

Need a PyTorch-specific alternative with GPU support? Check out torch-audiomentations!

Setup

pip install audiomentations

Usage example

from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift, Shift
import numpy as np

augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
    TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
    PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
    Shift(min_fraction=-0.5, max_fraction=0.5, p=0.5),
])

# Generate 2 seconds of dummy audio for the sake of example
samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)

# Augment/transform/perturb the audio data
augmented_samples = augment(samples=samples, sample_rate=16000)
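
Multichannel (e.g. stereo) audio can be passed through the same kind of pipeline. The snippet below is a minimal sketch assuming the (channels, samples) array layout used for multichannel input; not every transform supports multichannel audio, so check the documentation for each transform you use.

from audiomentations import Compose, AddGaussianNoise
import numpy as np

augment_stereo = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=1.0),
])

# Dummy stereo audio: 2 channels, 2 seconds at 16 kHz, shape (channels, samples)
stereo_samples = np.random.uniform(low=-0.2, high=0.2, size=(2, 32000)).astype(np.float32)

augmented_stereo = augment_stereo(samples=stereo_samples, sample_rate=16000)

One way to integrate the augmentation into a PyTorch training pipeline is to apply it to the raw NumPy waveform inside a dataset class, before converting to a tensor. The dataset below is a hypothetical sketch (the in-memory list of waveforms stands in for real file loading); only the augment(samples=..., sample_rate=...) call reflects the audiomentations API shown above.

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
from audiomentations import Compose, AddGaussianNoise

class AugmentedAudioDataset(Dataset):
    """Hypothetical dataset that augments waveforms on the fly (on CPU)."""

    def __init__(self, waveforms, sample_rate, augment=None):
        self.waveforms = waveforms  # list of 1D float32 NumPy arrays
        self.sample_rate = sample_rate
        self.augment = augment

    def __len__(self):
        return len(self.waveforms)

    def __getitem__(self, index):
        samples = self.waveforms[index]
        if self.augment is not None:
            # A fresh random variation is drawn each time the item is fetched
            samples = self.augment(samples=samples, sample_rate=self.sample_rate)
        return torch.from_numpy(samples)

augment = Compose([AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5)])
dummy_waveforms = [
    np.random.uniform(low=-0.2, high=0.2, size=16000).astype(np.float32) for _ in range(8)
]
dataset = AugmentedAudioDataset(dummy_waveforms, sample_rate=16000, augment=augment)
loader = DataLoader(dataset, batch_size=4, shuffle=True)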

Documentation

See https://iver56.github.io/audiomentations/

Transforms

See https://iver56.github.io/audiomentations/ for documentation of the available transforms and their parameters.

Changelog

See https://iver56.github.io/audiomentations/changelog/

Acknowledgements

Thanks to Nomono for backing audiomentations.

Thanks to all contributors who help improve audiomentations.
