

ASRT is a deep-learning-based Chinese speech recognition system. If you like this project, please star it.

ReadMe Language | 中文版 | English |

ASRT Project Home Page | Released Download | View this project's wiki document (Chinese) | Experience Demo | Donate

If you have any questions while working with this project, feel free to open an issue in this repo and I will respond as soon as possible.

Before asking a question, please check the FAQ Page (Chinese) first to avoid duplicates.

If anything abnormal happens while the program is running, please attach a complete screenshot when asking, and indicate the CPU architecture, GPU model, operating system, and the Python, TensorFlow, and CUDA versions used, as well as whether any code has been modified or any datasets have been added or removed.

Introduction

This project is implemented with tensorflow.keras, combining a deep convolutional neural network, a long short-term memory (LSTM) network, an attention mechanism, and CTC.
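As a rough illustration only (not the project's actual network definition), the sketch below shows how a small convolutional acoustic model producing per-frame softmax outputs for CTC can be assembled with tensorflow.keras; the input shape, layer sizes, and the vocabulary size of 1428 are assumptions made for the example.

```python
# Minimal conceptual sketch of a CNN + CTC acoustic model in tensorflow.keras.
# Shapes and the class count are illustrative, not the project's real values.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 1428  # assumed pinyin syllable vocabulary size (incl. CTC blank)

def build_toy_acoustic_model():
    # spectrogram input: 1600 time frames x 200 frequency bins x 1 channel
    inputs = layers.Input(shape=(1600, 200, 1), name="spectrogram")
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(pool_size=2)(x)   # halve time and frequency axes
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    # collapse the frequency axis so each time step becomes one feature vector
    x = layers.Reshape((400, 50 * 64))(x)
    x = layers.Dense(256, activation="relu")(x)
    # per-frame class probabilities; CTC loss/decoding is applied on top of these
    outputs = layers.Dense(NUM_CLASSES, activation="softmax", name="framewise")(x)
    return tf.keras.Model(inputs, outputs)

model = build_toy_acoustic_model()
model.summary()
```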

Minimum requirements for training

Hardware

  • CPU: 4 cores (x86_64, amd64) +
  • RAM: 16 GB +
  • GPU: NVIDIA, graphics memory 11 GB+ (GTX 1080 Ti or better)
  • Disk: 500 GB HDD (or SSD)

Software

  • Linux: Ubuntu 18.04 + / CentOS 7 + or Windows 10/11
  • Python: 3.7 - 3.10 or later
  • TensorFlow: 2.5 - 2.11 or later

Quick Start

Take the operation under the Linux system as an example:

First, clone the project to your computer through Git, and then download the datasets needed for training. For the download links, please refer to the end of this document.

$ git clone https://github.com/nl8590687/ASRT_SpeechRecognition.git

Or you can use the "Fork" button to make a copy of the project and then clone it locally with your own SSH key.

After cloning the repository via git, go to the project root directory; create a directory /data/speech_data (you can use a soft link instead) for the datasets, and then extract the downloaded datasets directly into it.

$ cd ASRT_SpeechRecognition

$ mkdir /data/speech_data

$ tar zxf <dataset zip files name> -C /data/speech_data/ 

Note that in the current version, six datasets (Thchs30, ST-CMDS, Primewords, aishell-1, aidatatang200, MagicData) are added by default in the configuration file; please delete the ones you don't need. If you want to use other datasets, you need to add the data configuration yourself and organize the data in advance using the standard format supported by ASRT.

To download the pinyin syllable list files for the default datasets:

$ python download_default_datalist.py

The currently available models are 24, 25, 251, and 251bn.

Before running this project, please install the necessary Python 3 dependencies (see the Python Dependency Library section below).

To start training this project, please execute:

$ python3 train_speech_model.py

To start testing this project, please execute:

$ python3 evaluate_speech_model.py

Before testing, make sure the model file path referenced in the code files exists.
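For example, a quick sanity check like the following can catch a missing weights file before evaluation starts; the path below is only a placeholder, so substitute the one actually configured in evaluate_speech_model.py.

```python
# Illustrative sanity check; the weights path is a placeholder, not the real one.
import os

weights_path = "save_models/your_trained_model.h5"  # placeholder path
if not os.path.exists(weights_path):
    raise FileNotFoundError(f"Model weights not found: {weights_path}")
```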

To run speech recognition on a single WAV audio file:

$ python3 predict_speech_file.py

To start the ASRT API server with the HTTP protocol, please execute:

$ python3 asrserver_http.py

Please note that after starting the API server, you need to use the client software corresponding to this ASRT project for speech recognition. For details, see the Wiki documentation on downloading the ASRT Client SDK & Demo.

To test whether the HTTP API service can be called successfully:

$ python3 client_http.py

To start the ASRT API server with the gRPC protocol, please execute:

$ python3 asrserver_grpc.py

To test whether the gRPC API service can be called successfully:

$ python3 client_grpc.py

If you want to train and use another model (not Model 251bn), change the from speech_model.xxx import xxx statements at the corresponding places in the code files, as sketched below.
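The change is typically a one-line import swap like this; the module and class names below are placeholders, so use the real names found in the speech_model package of your checkout.

```python
# Hypothetical illustration of the import swap; module and class names are
# placeholders -- use the real ones from the speech_model package.
# from speech_model.xxx import SpeechModel251BN   # default: Model 251bn
from speech_model.xxx import SpeechModel251       # switch to Model 251 instead
```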

If any problem comes up while running or using the program, please raise it promptly in an issue and I will reply as soon as possible.

Deploy ASRT with Docker:

$ docker pull ailemondocker/asrt_service:1.3.0
$ docker run --rm -it -p 20001:20001 -p 20002:20002 --name asrt-server -d ailemondocker/asrt_service:1.3.0

This starts an API server for recognition rather than training.

Model

Speech Model

DCNN + CTC

The maximum length of the input audio is 16 seconds, and the output is the corresponding Chinese pinyin sequence.
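As a conceptual sketch (assuming the acoustic model emits per-frame softmax scores), greedy CTC decoding collapses those frame-level scores into a label sequence roughly as follows; the shapes, vocabulary size, and random scores here are made up for the example.

```python
# Illustrative greedy CTC decoding of per-frame softmax outputs into label ids.
import numpy as np
import tensorflow as tf

y_pred = np.random.rand(1, 200, 1428).astype("float32")  # fake network output
input_len = np.array([200])                               # frames per utterance
decoded, _ = tf.keras.backend.ctc_decode(y_pred, input_length=input_len, greedy=True)
label_ids = decoded[0].numpy()[0]
# each id would then be mapped back to a pinyin syllable such as "ni3" or "hao3"
print(label_ids)
```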

  • Questions about downloading trained models

The released finished software, including trained model weights, can be downloaded from the ASRT download page.

The GitHub Releases page contains archives of the released versions of the software and their descriptions. Under each version, there is a zip file that includes the trained model weights files.

Language Model

A maximum-entropy hidden Markov model based on a probabilistic graph.

The input is a Chinese pinyin sequence, and the output is the corresponding Chinese character text.
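As a conceptual sketch only (not ASRT's actual implementation), the following toy Viterbi search shows how an HMM-style language model can pick the most likely character sequence for a pinyin sequence; the emission and transition probabilities are invented for the example.

```python
# Toy Viterbi decoding of pinyin -> Chinese characters with made-up probabilities.
pinyin = ["zhong1", "guo2"]

# candidate characters per pinyin syllable (emission-like scores)
emissions = {
    "zhong1": {"中": 0.9, "钟": 0.1},
    "guo2":   {"国": 0.8, "果": 0.2},
}
# bigram transition scores P(next char | previous char)
transitions = {
    ("中", "国"): 0.9, ("中", "果"): 0.1,
    ("钟", "国"): 0.2, ("钟", "果"): 0.8,
}

def viterbi(pinyin, emissions, transitions):
    # best path ending in each candidate character of the first syllable
    paths = {ch: (p, [ch]) for ch, p in emissions[pinyin[0]].items()}
    for syllable in pinyin[1:]:
        new_paths = {}
        for ch, p_emit in emissions[syllable].items():
            # pick the previous character whose path scores best when extended by ch
            prev_ch, (prev_p, prev_seq) = max(
                paths.items(),
                key=lambda kv: kv[1][0] * transitions.get((kv[0], ch), 1e-6),
            )
            score = prev_p * transitions.get((prev_ch, ch), 1e-6) * p_emit
            new_paths[ch] = (score, prev_seq + [ch])
        paths = new_paths
    return max(paths.values(), key=lambda v: v[0])[1]

print("".join(viterbi(pinyin, emissions, transitions)))  # -> 中国
```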

About Accuracy

At present, the best model achieves roughly 85% pinyin accuracy (correct rate) on the test set.
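For reference, pinyin correct rate is commonly computed from the edit distance between the recognized and reference syllable sequences; the snippet below is a generic illustration of that metric, not necessarily the exact script ASRT uses.

```python
# Generic edit-distance-based pinyin correct rate: 1 - distance / reference length.
def edit_distance(ref, hyp):
    # classic dynamic-programming Levenshtein distance over syllable lists
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)]

reference  = ["ni3", "hao3", "shi4", "jie4"]
hypothesis = ["ni3", "hao3", "shi4", "jie1"]
correct_rate = 1 - edit_distance(reference, hypothesis) / len(reference)
print(f"pinyin correct rate: {correct_rate:.2%}")  # 75.00%
```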

Python Dependency Library

  • tensorFlow (2.5-2.11+)
  • numpy
  • wave
  • matplotlib
  • math
  • scipy
  • requests
  • flask
  • waitress
  • grpcio / grpcio-tools / protobuf

If you have trouble installing these packages, run the following command, provided you have a GPU and that Python 3.9, CUDA 11.2, and cuDNN 8.1 are installed:

$ pip install -r requirements.txt

Dependency Environment Details and Hardware Requirements

ASRT Client SDK for Calling Speech Recognition API

ASRT provides client SDKs for several platforms and programming languages so that clients can develop speech recognition features; the SDKs communicate with the server via RPC. Please refer to the ASRT project documents for details.

Client Platform | Project Repo Link
Windows Client SDK & Demo | ASRT_SDK_WinClient
Python3 Client SDK & Demo (Any Platform) | ASRT_SDK_Python3
Golang Client SDK & Demo | asrt-sdk-go
Java Client SDK & Demo | ASRT_SDK_Java

Data Sets

For full content please refer: Some free Chinese speech datasets (Chinese)

Dataset | Time | Size | Download (CN Mirrors) | Download (Source)
THCHS30 | 40h | 6.01G | data_thchs30.tgz | data_thchs30.tgz
ST-CMDS | 100h | 7.67G | ST-CMDS-20170001_1-OS.tar.gz | ST-CMDS-20170001_1-OS.tar.gz
AIShell-1 | 178h | 14.51G | data_aishell.tgz | data_aishell.tgz
Primewords | 100h | 8.44G | primewords_md_2018_set1.tar.gz | primewords_md_2018_set1.tar.gz
aidatatang_200zh | 200h | 17.47G | aidatatang_200zh.tgz | aidatatang_200zh.tgz
MagicData | 755h | 52G/1.0G/2.2G | train_set.tar.gz / dev_set.tar.gz / test_set.tar.gz | train_set.tar.gz / dev_set.tar.gz / test_set.tar.gz

Note: how to unzip the AISHELL-1 dataset

$ tar xzf data_aishell.tgz
$ cd data_aishell/wav
$ for tar in *.tar.gz;  do tar xvf $tar; done

Special thanks to the predecessors who made these public speech datasets available.

If the provided dataset links cannot be opened or downloaded, use this link: OpenSLR

ASRT Documents

A post about ASRT's introduction

About how to use ASRT to train and deploy:

For frequently asked questions about the principles of the statistical language model, see:

For questions about CTC, see:

For more information, please refer to the author's blog website: AILemon Blog (Chinese)

License

GPL v3.0 © nl8590687 Author: ailemon

Cite this project

DOI: 10.5281/zenodo.5808434

Contributors

Contributors Page

@nl8590687 (repo owner)