diff --git a/README.md b/README.md index b2e3c31..bd79977 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,7 @@ Looking for a person who would like to help me maintain this repository! Contact List of useful data augmentation resources. You will find here some links to more or less popular github repos :sparkles:, libraries, papers :books: and other information. Do you like it? Feel free to :star: ! -Feel free to pull request! +Feel free to make a pull request! * [Introduction](README.md#Introduction) * [Repositories](README.md#Repositories) @@ -33,8 +33,8 @@ Feel free to pull request! # Introduction -Data augmentation can be simply described as any method that makes our dataset larger. To create more images for example, we could zoom the in and save a result, we could change the brightness of the image or rotate it. To get bigger sound dataset we could try raise or lower the pitch of the audio sample or slow down/speed up. -Example data augmentation techniques are presented on the diagram below. +Data augmentation can be simply described as any method that makes our dataset larger by making modified copies of the existing dataset. To create more images for example, we could zoom in and save the result, we could change the brightness of the image or rotate it. To get a bigger sound dataset we could try to raise or lower the pitch of the audio sample or slow down/speed up. +Example data augmentation techniques are presented in the diagram below. ![data augmentation diagram](images/da_diagram_v2.png) @@ -82,7 +82,7 @@ Example data augmentation techniques are presented on the diagram below. * Warping * Jittering * Perturbing - * Advanced approches + * Advanced approaches * Embedding space * GAN/Adversarial * RL/Meta-Learning @@ -100,7 +100,7 @@ Example data augmentation techniques are presented on the diagram below. * Spectrograms/Melspectrograms - usually done with time series data augmentation (jittering, perturbing, warping) or image augmentation (random erasing) -If you wish to cite us, you can cite following paper of your choice: [Style transfer-based image synthesis as an efficient regularization technique in deep learning](https://ieeexplore.ieee.org/document/8864616) or [Data augmentation for improving deep learning in image classification problem](https://ieeexplore.ieee.org/document/8388338). +If you wish to cite us, you can cite the following paper of your choice: [Style transfer-based image synthesis as an efficient regularization technique in deep learning](https://ieeexplore.ieee.org/document/8864616) or [Data augmentation for improving deep learning in image classification problem](https://ieeexplore.ieee.org/document/8388338). [![Star History Chart](https://api.star-history.com/svg?repos=AgaMiko/data-augmentation-review&type=Date)](https://star-history.com/#AgaMiko/data-augmentation-review&Date) @@ -109,9 +109,9 @@ If you wish to cite us, you can cite following paper of your choice: [Style tra ## Computer vision -#### - [albumentations](https://github.com/albu/albumentations) ![](https://img.shields.io/github/stars/albu/albumentations.svg?style=social) is a python library with a set of useful, large and diverse data augmentation methods. It offers over 30 different types of augmentations, easy and ready to use. Moreover, as the authors prove, the library is faster than other libraries on most of the transformations. 
+#### - [albumentations](https://github.com/albu/albumentations) ![](https://img.shields.io/github/stars/albu/albumentations.svg?style=social) is a Python library with a set of useful, large, and diverse data augmentation methods. It offers over 30 different types of augmentations, easy and ready to use. Moreover, as the authors prove, the library is faster than other libraries on most of the transformations. -Example jupyter notebooks: +Example Jupyter notebooks: * [All in one showcase notebook](https://github.com/albu/albumentations_examples/blob/master/notebooks/showcase.ipynb) * [Classification](https://github.com/albu/albumentations_examples/blob/master/notebooks/example.ipynb), * [Object detection](https://github.com/albu/albumentations_examples/blob/master/notebooks/example_bboxes.ipynb), [image segmentation](https://github.com/albu/albumentations_examples/blob/master/notebooks/example_kaggle_salt.ipynb) and [keypoints](https://github.com/albu/albumentations_examples/blob/master/notebooks/example_keypoints.ipynb) @@ -119,13 +119,13 @@ Example jupyter notebooks: [Serialization](https://github.com/albu/albumentations_examples/blob/master/notebooks/serialization.ipynb), [Replay/Deterministic mode](https://github.com/albu/albumentations_examples/blob/master/notebooks/replay.ipynb), [Non-8-bit images](https://github.com/albu/albumentations_examples/blob/master/notebooks/example_16_bit_tiff.ipynb) -Example tranformations: +Example transformations: ![albumentations examples](https://camo.githubusercontent.com/3bb6e4bb500d96ad7bb4e4047af22a63ddf3242a894adf55ebffd3e184e4d113/68747470733a2f2f686162726173746f726167652e6f72672f776562742f62642f6e652f72762f62646e6572763563746b75646d73617a6e687734637273646669772e6a706567) -#### - [imgaug](https://github.com/aleju/imgaug) ![](https://img.shields.io/github/stars/aleju/imgaug.svg?style=social) - is another very useful and widely used python library. As authors describe: *it helps you with augmenting images for your machine learning projects. It converts a set of input images into a new, much larger set of slightly altered images.* It offers many augmentation techniques such as affine transformations, perspective transformations, contrast changes, gaussian noise, dropout of regions, hue/saturation changes, cropping/padding, blurring. +#### - [imgaug](https://github.com/aleju/imgaug) ![](https://img.shields.io/github/stars/aleju/imgaug.svg?style=social) - is another very useful and widely used Python library. As the author describes: *it helps you with augmenting images for your machine learning projects. It converts a set of input images into a new, much larger set of slightly altered images.* It offers many augmentation techniques such as affine transformations, perspective transformations, contrast changes, gaussian noise, dropout of regions, hue/saturation changes, cropping/padding, and blurring. 
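A minimal sketch of composing such a pipeline with imgaug (the augmenter classes below are standard `imgaug.augmenters` entries; the specific parameter values are illustrative assumptions):

```python
import numpy as np
import imgaug.augmenters as iaa

# Illustrative pipeline: horizontal flip, affine rotation, Gaussian noise, blur
seq = iaa.Sequential([
    iaa.Fliplr(0.5),                                   # flip horizontally with 50% probability
    iaa.Affine(rotate=(-20, 20)),                      # rotate by a random angle in [-20, 20] degrees
    iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),  # add mild Gaussian noise
    iaa.GaussianBlur(sigma=(0.0, 1.0)),                # blur with a random sigma
])

images = np.random.randint(0, 255, size=(4, 128, 128, 3), dtype=np.uint8)  # dummy batch
images_aug = seq(images=images)                        # new batch of slightly altered images
```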
-Example jupyter notebooks: +Example Jupyter notebooks: * [Load and Augment an Image](https://nbviewer.jupyter.org/github/aleju/imgaug-doc/blob/master/notebooks/A01%20-%20Load%20and%20Augment%20an%20Image.ipynb) * [Multicore Augmentation](https://nbviewer.jupyter.org/github/aleju/imgaug-doc/blob/master/notebooks/A03%20-%20Multicore%20Augmentation.ipynb) * Augment and work with: [Keypoints/Landmarks](https://nbviewer.jupyter.org/github/aleju/imgaug-doc/blob/master/notebooks/B01%20-%20Augment%20Keypoints.ipynb), @@ -135,7 +135,7 @@ Example jupyter notebooks: [Heatmaps](https://nbviewer.jupyter.org/github/aleju/imgaug-doc/blob/master/notebooks/B04%20-%20Augment%20Heatmaps.ipynb), [Segmentation Maps](https://nbviewer.jupyter.org/github/aleju/imgaug-doc/blob/master/notebooks/B05%20-%20Augment%20Segmentation%20Maps.ipynb) -Example tranformations: +Example transformations: ![imgaug examples](https://raw.githubusercontent.com/aleju/imgaug-doc/master/readme_images/examples_grid.jpg) #### - [Kornia](https://github.com/kornia/kornia) ![](https://img.shields.io/github/stars/kornia/kornia.svg?style=social) - is a differentiable computer vision library for PyTorch. It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses PyTorch as its main backend both for efficiency and to take advantage of the reverse-mode auto-differentiation to define and compute the gradient of complex functions. @@ -147,7 +147,7 @@ At a granular level, Kornia is a library that consists of the following componen | [kornia](https://kornia.readthedocs.io/en/latest/index.html) | a Differentiable Computer Vision library, with strong GPU support | | [kornia.augmentation](https://kornia.readthedocs.io/en/latest/augmentation.html) | a module to perform data augmentation in the GPU | | [kornia.color](https://kornia.readthedocs.io/en/latest/color.html) | a set of routines to perform color space conversions | -| [kornia.contrib](https://kornia.readthedocs.io/en/latest/contrib.html) | a compilation of user contrib and experimental operators | +| [kornia.contrib](https://kornia.readthedocs.io/en/latest/contrib.html) | a compilation of user contributed and experimental operators | | [kornia.enhance](https://kornia.readthedocs.io/en/latest/enhance.html) | a module to perform normalization and intensity transformation | | [kornia.feature](https://kornia.readthedocs.io/en/latest/feature.html) | a module to perform feature detection | | [kornia.filters](https://kornia.readthedocs.io/en/latest/filters.html) | a module to perform image filtering and edge detection | @@ -160,7 +160,7 @@ At a granular level, Kornia is a library that consists of the following componen ![kornia examples](https://github.com/kornia/kornia/raw/master/docs/source/_static/img/hakuna_matata.gif) #### - [UDA](https://github.com/google-research/uda) ![](https://img.shields.io/github/stars/google-research/uda.svg?style=social)- a simple data augmentation tool for image files, intended for use with machine learning data sets. The tool scans a directory containing image files, and generates new images by performing a specified set of augmentation operations on each file that it finds. This process multiplies the number of training examples that can be used when developing a neural network, and should significantly improve the resulting network's performance, particularly when the number of training examples is relatively small. 
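At its core, the UDA paper linked below combines a standard supervised loss with a consistency loss that encourages the model to make the same prediction for an unlabeled example and for its augmented version. A minimal PyTorch-style sketch of that objective (the `model`, batches, and weighting factor are hypothetical, not code from the repository):

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_labeled, y_labeled, x_unlabeled, x_unlabeled_aug, lam=1.0):
    """Sketch of the UDA objective: supervised cross-entropy + unsupervised consistency (KL)."""
    # Supervised term on the (small) labeled batch
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Consistency term: predictions on the clean unlabeled batch serve as fixed targets
    with torch.no_grad():
        p_clean = F.softmax(model(x_unlabeled), dim=-1)
    log_p_aug = F.log_softmax(model(x_unlabeled_aug), dim=-1)
    consistency = F.kl_div(log_p_aug, p_clean, reduction="batchmean")

    return sup_loss + lam * consistency
```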
-The details are avaible here: [UNSUPERVISED DATA AUGMENTATION FOR CONSISTENCY TRAINING](https://arxiv.org/pdf/1904.12848.pdf) +The details are available here: [UNSUPERVISED DATA AUGMENTATION FOR CONSISTENCY TRAINING](https://arxiv.org/pdf/1904.12848.pdf) #### - [Data augmentation for object detection](https://github.com/Paperspace/DataAugmentationForObjectDetection) ![](https://img.shields.io/github/stars/Paperspace/DataAugmentationForObjectDetection.svg?style=social) - Repository contains a code for the paper [space tutorial series on adapting data augmentation methods for object detection tasks](https://blog.paperspace.com/data-augmentation-for-bounding-boxes/). They support a lot of data augmentations, like Horizontal Flipping, Scaling, Translation, Rotation, Shearing, Resizing. ![Data augmentation for object detection - exmpale](https://blog.paperspace.com/content/images/2018/09/vanila_aug.jpg) @@ -173,11 +173,11 @@ The details are avaible here: [UNSUPERVISED DATA AUGMENTATION FOR CONSISTENCY TR ![qualitative1.png](https://github.com/super-AND/super-AND/raw/master/fig/qualitative1.png) -#### - [vidaug](https://github.com/okankop/vidaug) ![](https://img.shields.io/github/stars/okankop/vidaug.svg?style=social) - This python library helps you with augmenting videos for your deep learning architectures. It converts input videos into a new, much larger set of slightly altered videos. +#### - [vidaug](https://github.com/okankop/vidaug) ![](https://img.shields.io/github/stars/okankop/vidaug.svg?style=social) - This Python library helps you with augmenting videos for your deep learning architectures. It converts input videos into a new, much larger set of slightly altered videos. ![](https://github.com/okankop/vidaug/blob/master/videos/combined.gif) -#### - [Image augmentor](https://github.com/codebox/image_augmentor) ![](https://img.shields.io/github/stars/codebox/image_augmentor.svg?style=social) - This is a simple python data augmentation tool for image files, intended for use with machine learning data sets. The tool scans a directory containing image files, and generates new images by performing a specified set of augmentation operations on each file that it finds. This process multiplies the number of training examples that can be used when developing a neural network, and should significantly improve the resulting network's performance, particularly when the number of training examples is relatively small. +#### - [Image augmentor](https://github.com/codebox/image_augmentor) ![](https://img.shields.io/github/stars/codebox/image_augmentor.svg?style=social) - This is a simple Python data augmentation tool for image files, intended for use with machine learning data sets. The tool scans a directory containing image files, and generates new images by performing a specified set of augmentation operations on each file that it finds. This process multiplies the number of training examples that can be used when developing a neural network, and should significantly improve the resulting network's performance, particularly when the number of training examples is relatively small. #### - [torchsample](https://github.com/ncullen93/torchsample) ![](https://img.shields.io/github/stars/ncullen93/torchsample.svg?style=social) - this python package provides High-Level Training, Data Augmentation, and Utilities for Pytorch. This toolbox provides data augmentation methods, regularizers and other utility functions. 
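A minimal sketch of chaining a few of the tensor transforms listed just below (the class names come from the repository; the constructor arguments here are assumptions, so check the repo for the exact signatures):

```python
import torch
from torchsample.transforms import Compose, RangeNormalize, RandomCrop, RandomFlip

# Illustrative pipeline applied directly to a torch tensor of shape (C, H, W)
transform = Compose([
    RangeNormalize(0, 1),         # assumed: rescale values into [0, 1]
    RandomCrop((90, 90)),         # assumed: random 90x90 spatial crop
    RandomFlip(h=True, v=False),  # assumed: random horizontal flip only
])

image = torch.rand(3, 100, 100)   # dummy image tensor
augmented = transform(image)
```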
These transforms work directly on torch tensors: * Compose() @@ -192,7 +192,7 @@ The details are avaible here: [UNSUPERVISED DATA AUGMENTATION FOR CONSISTENCY TR * RandomFlip() -#### - [Random erasing](https://github.com/zhunzhong07/Random-Erasing) ![](https://img.shields.io/github/stars/zhunzhong07/Random-Erasing.svg?style=social) - The code is based on the paper: https://arxiv.org/abs/1708.04896. The Absract: +#### - [Random erasing](https://github.com/zhunzhong07/Random-Erasing) ![](https://img.shields.io/github/stars/zhunzhong07/Random-Erasing.svg?style=social) - The code is based on the paper: https://arxiv.org/abs/1708.04896. The Abstract: In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: this https URL. @@ -200,7 +200,7 @@ In this paper, we introduce Random Erasing, a new data augmentation method for t #### - [data augmentation in C++](https://github.com/takmin/DataAugmentation) - ![](https://img.shields.io/github/stars/takmin/DataAugmentation.svg?style=social) Simple image augmnetation program transform input images with rotation, slide, blur, and noise to create training data of image recognition. -#### - [Data augmentation with GANs](https://github.com/AntreasAntoniou/DAGAN) ![](https://img.shields.io/github/stars/AntreasAntoniou/DAGAN.svg?style=social) - This repository contain files with Generative Adversarial Network, which can be used to successfully augment the dataset. This is an implementation of DAGAN as described in https://arxiv.org/abs/1711.04340. The implementation provides data loaders, model builders, model trainers, and synthetic data generators for the Omniglot and VGG-Face datasets. +#### - [Data augmentation with GANs](https://github.com/AntreasAntoniou/DAGAN) ![](https://img.shields.io/github/stars/AntreasAntoniou/DAGAN.svg?style=social) - This repository contains files with Generative Adversarial Network, which can be used to successfully augment the dataset. This is an implementation of DAGAN as described in https://arxiv.org/abs/1711.04340. The implementation provides data loaders, model builders, model trainers, and synthetic data generators for the Omniglot and VGG-Face datasets. #### - [Joint Discriminative and Generative Learning](https://github.com/NVlabs/DG-Net) ![](https://img.shields.io/github/stars/NVlabs/DG-Net.svg?style=social) - This repo is for Joint Discriminative and Generative Learning for Person Re-identification (CVPR2019 Oral). The author proposes an end-to-end training network that simultaneously generates more training samples and conducts representation learning. Given N real samples, the network could generate O(NxN) high-fidelity samples. 
![Example of DGNet](https://github.com/NVlabs/DG-Net/blob/master/NxN.jpg) @@ -211,7 +211,7 @@ In this paper, we introduce Random Erasing, a new data augmentation method for t ![](https://user-images.githubusercontent.com/37669469/76104483-6eb5c100-5fa1-11ea-832b-b7a9a8e23895.jpg) -#### - [DocCreator (OCR)](https://github.com/DocCreator/DocCreator) ![](https://img.shields.io/github/stars/DocCreator/DocCreator.svg?style=social) - is an open source, cross-platform software allowing to generate synthetic document images and the accompanying groundtruth. Various degradation models can be applied on original document images to create virtually unlimited amounts of different images. +#### - [DocCreator (OCR)](https://github.com/DocCreator/DocCreator) ![](https://img.shields.io/github/stars/DocCreator/DocCreator.svg?style=social) - is an open source, cross-platform software allowing to generate synthetic document images and the accompanying ground truth. Various degradation models can be applied on original document images to create virtually unlimited amounts of different images. A multi-platform and open-source software able to create synthetic image documents with ground truth. ![](http://doc-creator.labri.fr/images/back.gif) @@ -229,13 +229,13 @@ A multi-platform and open-source software able to create synthetic image documen - State-of-the-art performance (in combination with AutoAugment). ![](https://github.com/zhiqiangdon/online-augment/raw/master/vis/STN.gif) -#### - [Augraphy (OCR)](https://github.com/sparkfish/augraphy) ![](https://img.shields.io/github/stars/sparkfish/augraphy.svg?style=social) - is a Python library that creates multiple copies of original documents though an augmentation pipeline that randomly distorts each copy -- degrading the clean version into dirty and realistic copies rendered through synthetic paper printing, faxing, scanning and copy machine processes. +#### - [Augraphy (OCR)](https://github.com/sparkfish/augraphy) ![](https://img.shields.io/github/stars/sparkfish/augraphy.svg?style=social) - is a Python library that creates multiple copies of original documents through an augmentation pipeline that randomly distorts each copy -- degrading the clean version into dirty and realistic copies rendered through synthetic paper printing, faxing, scanning and copy machine processes. ![](https://user-images.githubusercontent.com/74747193/135170284-8249fbab-2748-4230-821c-e56815e797cf.png) #### - [Data Augmentation optimized for GAN (DAG)](https://github.com/sutd-visual-computing-group/dag-gans) ![](https://img.shields.io/github/stars/sutd-visual-computing-group/dag-gans.svg?style=social) - implementation in PyTorch and Tensorflow -DAG-GAN provide simple implementations of the DAG modules in both PyTorch and TensorFlow, which can be easily integrated into any GAN models to improve the performance, especially in the case of limited data. We only illustrate some augmentation techniques (rotation, cropping, flipping, ...) as discussed in our paper, but our DAG is not limited to these augmentations. The more augmentation to be used, the better improvements DAG enhances the GAN models. It is also easy to design your augmentations within the modules. However, there may be a trade-off between the numbers of many augmentations to be used in DAG and the computational cost. +DAG-GAN provides simple implementations of the DAG modules in both PyTorch and TensorFlow, which can be easily integrated into any GAN models to improve the performance, especially in the case of limited data. 
We only illustrate some augmentation techniques (rotation, cropping, flipping, ...) as discussed in our paper, but our DAG is not limited to these augmentations. The more augmentation to be used, the better improvements DAG enhances the GAN models. It is also easy to design your augmentations within the modules. However, there may be a trade-off between the numbers of many augmentations to be used in DAG and the computational cost. #### - [Unsupervised Data Augmentation (google-research/uda)](https://github.com/google-research/uda) ![](https://img.shields.io/github/stars/google-research/uda.svg?style=social) - implementation in Tensorflow. Unsupervised Data Augmentation or UDA is a semi-supervised learning method which achieves state-of-the-art results on a wide variety of language and vision tasks. With only 20 labeled examples, UDA outperforms the previous state-of-the-art on IMDb trained on 25,000 labeled examples. @@ -261,7 +261,7 @@ It can be used to significantly improve the data efficiency for GAN training. We ## Natural Language Processing -#### - [nlpaug](https://github.com/makcedward/nlpaug) ![](https://img.shields.io/github/stars/makcedward/nlpaug.svg?style=social) - This python library helps you with augmenting nlp for your machine learning projects. Visit this introduction to understand about [Data Augmentation in NLP](https://towardsdatascience.com/data-augmentation-in-nlp-2801a34dfc28). `Augmenter` is the basic element of augmentation while `Flow` is a pipeline to orchestra multi augmenter together. +#### - [nlpaug](https://github.com/makcedward/nlpaug) ![](https://img.shields.io/github/stars/makcedward/nlpaug.svg?style=social) - This Python library helps you with augmenting nlp for your machine learning projects. Visit this introduction to understand about [Data Augmentation in NLP](https://towardsdatascience.com/data-augmentation-in-nlp-2801a34dfc28). `Augmenter` is the basic element of augmentation while `Flow` is a pipeline to orchestra multi augmenter together. Features: * Generate synthetic data for improving model performance without manual effort @@ -296,21 +296,21 @@ for data augmentation [source:QData/TextAttack](https://github.com/QData/TextAtt - **Random Swap (RS):** Randomly choose two words in the sentence and swap their positions. Do this *n* times. - **Random Deletion (RD):** For each word in the sentence, randomly remove it with probability *p*. -#### - [NL-Augmenter 🦎 → 🐍](https://github.com/GEM-benchmark/NL-Augmenter) ![](https://img.shields.io/github/stars/GEM-benchmark/NL-Augmenter.svg?style=social) - The NL-Augmenter is a collaborative effort intended to add transformations of datasets dealing with natural language. Transformations augment text datasets in diverse ways, including: randomizing names and numbers, changing style/syntax, paraphrasing, KB-based paraphrasing ... and whatever creative augmentation you contribute. We invite submissions of transformations to this framework by way of GitHub pull request, through August 31, 2021. All submitters of accepted transformations (and filters) will be included as co-authors on a paper announcing this framework. +#### - [NL-Augmenter 🦎 → 🐍](https://github.com/GEM-benchmark/NL-Augmenter) ![](https://img.shields.io/github/stars/GEM-benchmark/NL-Augmenter.svg?style=social) - The NL-Augmenter is a collaborative effort intended to add transformations of datasets dealing with natural language. 
Transformations augment text datasets in diverse ways, including: randomizing names and numbers, changing style/syntax, paraphrasing, KB-based paraphrasing ... and whatever creative augmentation you contribute. We invite submissions of transformations to this framework by way of GitHub pull requests, through August 31, 2021. All submitters of accepted transformations (and filters) will be included as co-authors on a paper announcing this framework. -#### - [Contextual data augmentation](https://github.com/pfnet-research/contextual_augmentation) ![](https://img.shields.io/github/stars/pfnet-research/contextual_augmentation.svg?style=social) - Contextual augmentation is a domain-independent data augmentation for text classification tasks. Texts in supervised dataset are augmented by replacing words with other words which are predicted by a label-conditioned bi-directional language model. +#### - [Contextual data augmentation](https://github.com/pfnet-research/contextual_augmentation) ![](https://img.shields.io/github/stars/pfnet-research/contextual_augmentation.svg?style=social) - Contextual augmentation is a domain-independent data augmentation for text classification tasks. Texts in the supervised dataset are augmented by replacing words with other words which are predicted by a label-conditioned bi-directional language model. This repository contains a collection of scripts for an experiment of [Contextual Augmentation](https://arxiv.org/pdf/1805.06201.pdf). ![example contextual data augmentation](https://i.imgur.com/JOyKkVt.png) -#### - [Wiki Edits](https://github.com/snukky/wikiedits) ![](https://img.shields.io/github/stars/snukky/wikiedits.svg?style=social) - A collection of scripts for automatic extraction of edited sentences from text edition histories, such as Wikipedia revisions. It was used to create the WikEd Error Corpus --- a corpus of corrective Wikipedia edits. The corpus has been prepared for two languages: Polish and English. Can be used as a dictionary-based augmentatioon to insert user-induced errors. +#### - [Wiki Edits](https://github.com/snukky/wikiedits) ![](https://img.shields.io/github/stars/snukky/wikiedits.svg?style=social) - A collection of scripts for automatic extraction of edited sentences from text edition histories, such as Wikipedia revisions. It was used to create the WikEd Error Corpus --- a corpus of corrective Wikipedia edits. The corpus has been prepared for two languages: Polish and English. Can be used as a dictionary-based augmentation to insert user-induced errors. #### - [Text AutoAugment (TAA)](https://github.com/lancopku/text-autoaugment) ![](https://img.shields.io/github/stars/lancopku/text-autoaugment.svg?style=social) - Text AutoAugment is a learnable and compositional framework for data augmentation in NLP. The proposed algorithm automatically searches for the optimal compositional policy, which improves the diversity and quality of augmented samples. ![text autoaugment](https://github.com/lancopku/text-autoaugment/blob/main/figures/taa.png?raw=true) #### - [Unsupervised Data Augmentation (google-research/uda)](https://github.com/google-research/uda) ![](https://img.shields.io/github/stars/google-research/uda.svg?style=social) - implementation in Tensorflow. -Unsupervised Data Augmentation or UDA is a semi-supervised learning method which achieves state-of-the-art results on a wide variety of language and vision tasks. With only 20 labeled examples, UDA outperforms the previous state-of-the-art on IMDb trained on 25,000 labeled examples. 
+Unsupervised Data Augmentation or UDA is a semi-supervised learning method that achieves state-of-the-art results on a wide variety of language and vision tasks. With only 20 labeled examples, UDA outperforms the previous state-of-the-art on IMDb trained on 25,000 labeled examples. They are releasing the following: * Code for text classifications based on BERT. @@ -361,19 +361,19 @@ AugLy is a great library to utilize for augmenting your data in model training, * random time warping 5 times in parallel, * random crop subsequences with length 300, * random quantize to 10-, 20-, or 30- level sets, - * with 80% probability , random drift the signal up to 10% - 50%, + * with 80% probability, random drift the signal up to 10% - 50%, * with 50% probability, reverse the sequence. ![](https://tsaug.readthedocs.io/en/stable/_images/notebook_Examples_of_augmenters_3_0.png) -#### - [Data Augmentation For Wearable Sensor Data](https://github.com/terryum/Data-Augmentation-For-Wearable-Sensor-Data) ![](https://img.shields.io/github/stars/terryum/Data-Augmentation-For-Wearable-Sensor-Data.svg?style=social) - a sample code of data augmentation methods for wearable sensor data (time-series data) based opn the paper below: +#### - [Data Augmentation For Wearable Sensor Data](https://github.com/terryum/Data-Augmentation-For-Wearable-Sensor-Data) ![](https://img.shields.io/github/stars/terryum/Data-Augmentation-For-Wearable-Sensor-Data.svg?style=social) - a sample code of data augmentation methods for wearable sensor data (time-series data) based on the paper below: T. T. Um et al., “Data augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural networks,” in Proceedings of the 19th ACM International Conference on Multimodal Interaction, ser. ICMI 2017. New York, NY, USA: ACM, 2017, pp. 216–220. ![](https://github.com/terryum/Data-Augmentation-For-Wearable-Sensor-Data/raw/master/DA_examples.png) ## AutoAugment -Automatic Data Augmentation is a family of algorithms that searches for the policy of augmenting the dataset for solivng the selcted task. +Automatic Data Augmentation is a family of algorithms that searches for the policy of augmenting the dataset for solving the selected task. Github repositories: * [Text AutoAugment (TAA)](https://github.com/lancopku/text-autoaugment) ![](https://img.shields.io/github/stars/lancopku/text-autoaugment.svg?style=social)