
Introduction

AITestPlatform is an easy-to-use training/testing/fine-tuning platform for neural network models built on the TensorFlow framework. The idea is to use a single test run TOML file in which users define all tunable parameters of a neural network model, and to obtain the results automatically after training/testing/fine-tuning. The results are made available in a JSON report.

Architecture

The test platform consists of five components, and each component has its own folder in the main directory:

i) datasets

  • Contains the datasets to be used (currently MNIST and CIFAR-10 are supported)
  • Each dataset has a custom_load.py file that defines how input data is retrieved during training/testing (see the sketch below)
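
The exact interface a loader must expose is determined by the platform and the testcase that consumes it, and is not spelled out on this page. As a hedged illustration only (the method name load_data and the return shape are assumptions; the class name MNIST matches the loadobj value used in the demo TOML later on), a datasets/mnist/custom_load.py might look like:

```python
# datasets/mnist/custom_load.py -- illustrative sketch only; the real interface may differ
import tensorflow as tf

class MNIST:
    """Hypothetical loader: returns train/test splits for the testcase to consume."""

    def load_data(self):
        # Fetch MNIST via Keras and normalize pixel values to [0, 1]
        (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
        x_train = x_train.astype("float32") / 255.0
        x_test = x_test.astype("float32") / 255.0
        return (x_train, y_train), (x_test, y_test)
```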

ii) models

  • Contains the neural network models, written in Python with the TensorFlow library
  • Currently supported:
  1. alexnet_model
  2. densenet_model
  3. googlenet_model
  4. lenet_model
  5. mnist_demo_model
  6. nin_model
  7. placeholder
  8. resnet_model
  9. vgg16_model
  • To be supported:
  1. maskrcnn_model

iii) testcases

  • Contains the testcase Python scripts to be executed for training/testing
  • Each testcase defines how input data is loaded, the loss function, the optimizer, the training/testing/fine-tuning process, and what performance information is collected for the output report (see the sketch below)
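
How the platform calls a testcase (its function signature and arguments) is defined by the framework code and is not shown on this page. The following is only a hedged sketch of the flow a testcase covers, reusing the parameter names from the demo TOML below (validsize, epochs, sgd_learning_rate, sgd_momentum) and assuming a Keras model and a dict-like config are passed in:

```python
# testcases/example_testcase.py -- illustrative sketch; the real signature may differ
import tensorflow as tf

def mytest(model, data, config):
    """Hypothetical testcase: train and evaluate, then return stats for the JSON report."""
    (x_train, y_train), (x_test, y_test) = data

    # Build the optimizer from the testcase parameters in the test run TOML
    model.compile(
        optimizer=tf.keras.optimizers.SGD(
            learning_rate=config.get("sgd_learning_rate", 0.01),
            momentum=config.get("sgd_momentum", 0.1),
        ),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

    # Hold out part of the training data for validation, train, then evaluate
    model.fit(
        x_train, y_train,
        validation_split=config.get("validsize", 0.15),
        epochs=config.get("epochs", 1),
    )
    loss, accuracy = model.evaluate(x_test, y_test)
    return {"loss": float(loss), "accuracy": float(accuracy)}
```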

iv) testrun

  • Contains TOML files that list the combinations of dataset, model, and testcase to be run (see the demo case below for an example)

v) results

  • Results are written under this folder: a JSON report of performance stats (e.g. accuracy, precision, etc.), optionally the wrongly classified / sample images, and most importantly, the model files (e.g. TF/H5 format)
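
The exact structure of the JSON report depends on what the testcase collects, so the snippet below is only a generic way to inspect whatever was dumped, using the demo case's results folder (created later on this page) as an example path:

```python
import json
from pathlib import Path

# results/DemoMNISTExample is created from the demo test run title (see the demo case below)
for report_file in Path("results/DemoMNISTExample").glob("*.json"):
    with open(report_file) as f:
        report = json.load(f)
    print(f"--- {report_file.name} ---")
    # Keys depend on the testcase; print every stat that was recorded
    for key, value in report.items():
        print(f"{key}: {value}")
```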

Setup

  • Install the Python libraries: pip install -r requirements

  • Add the environment variables: source setup.sh

  • Prepare the datasets: please refer to

Test run

The Python command-line interface for the test platform is the run_aitest.py script stored under the AITestPlatform directory. You may use python run_aitest.py --help or aitest --help to see the available options.

  • To train & test the testcase, please run aitest -t testrun/{TESTRUN_TOML} --train --test
  • To train the testcase only, please run aitest -t testrun/{TESTRUN_TOML} --train
  • To test the testcase only, please run aitest -t testrun/{TESTRUN_TOML} --test -m results/{PATH_TO_RESULT_MODEL_FILE}
  • To fine tune the testcase, please run aitest -t testrun/{TESTRUN_TOML} --train -m results/{PATH_TO_RESULT_MODEL_FILE}
    Note: you may also add --test to the above command to test the case after fine tune

Demo case

  • The demo case illustrates the use of a simple model to train and test on the MNIST dataset
  • The test run TOML is located at testrun/demo_mnist_testrun.toml. Let's take a look at its components

The title below determines the folder name in the results folder, i.e. results/DemoMNISTExample will be created

title = "Demo MNIST Example" # test run title

The datasets tables, with the loadfname and loadobj keys defined:

[datasets]  
[datasets.mnist] # name of dataset folder under 'datasets' folder  
loadfname = "custom_load" # filename that contains the data load functions under datasets/mnist/ folder  
loadobj = "MNIST"         # class name of dataset in custom_load.py  

The models tables, with the modelname key naming the model variable defined in the model Python script:

[models]  
[models.mnist_demo_model] # name of python script under 'models' folder  
modelname = "mymodel" # variable name of model in model python script  

The testcases tables, with their parameters defined:

[testcases]  
[testcases.demo_testcase] # name of testcase to run under 'testcases' folder  
testfunc = "mytest"         # function name of testcase  
validsize = 0.15                   # 15% of training data, custom testconfig param  
epochs = 1                  # custom testconfig param  
optimizer = 'sgd'                  # custom optimizer adam/sgd  
sgd_learning_rate = 0.01            # sgd learning rate  
sgd_momentum = 0.1                  # sgd momentum  
dump_err_img = true # whether detailed comparison is used 
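
The dump_err_img flag enables the detailed comparison; the results section above mentions that wrongly classified images may be written to the results folder, so the idea is presumably along these lines (a hedged sketch only, with a hypothetical helper name and output path):

```python
import os
import numpy as np
import tensorflow as tf

def dump_misclassified(model, x_test, y_test, out_dir="results/DemoMNISTExample/err_img"):
    """Illustrative only: save test images whose predicted label differs from the ground truth."""
    os.makedirs(out_dir, exist_ok=True)
    preds = np.argmax(model.predict(x_test), axis=1)
    for i in np.nonzero(preds != y_test)[0][:20]:      # cap the number of dumped images
        img = x_test[i].reshape(28, 28, 1)             # MNIST images are 28x28 grayscale
        tf.keras.utils.save_img(f"{out_dir}/idx{i}_pred{preds[i]}_true{y_test[i]}.png", img)
```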

The testrun array of tables consists of four keys: dataset, model, testcase, and the output format array. You may extend this array to run different combinations of dataset, model, and testcase in one test run (e.g. by adding a second [[testrun]] block with a different model).

[[testrun]]    
dataset = "mnist"          # dataset name  
model = "mnist_demo_model" # model name  
testcase = "demo_testcase" # testcase name  
out_format = ["json"]      # output format, only json supported for now

  • To run the case for training and testing, execute: aitest -t testrun/demo_mnist_testrun.toml --train --test

  • Please refer to testcases/demo_testcase.py to see how the training and testing are done

  • The training & testing results are available under results/DemoMNISTExample/
