## Pipeline

First, all of the core Fortran files are compiled:

- `activation_funcs.f90`: the activation functions.
- `derived_types.f90`: the derived types used by certain layer types.
- `layers.f90`: the math behind each supported layer (currently GEMM, LSTM, Convolutional, and MaxPool; see the sketch below).
- `readTester.f90`: loads the weights stored in the system itself.
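As a point of reference for the layer math, an ONNX Gemm node computes `Y = alpha * A' @ B' + beta * C`, where `A'` and `B'` are optionally transposed inputs. A minimal numpy sketch of that operation, purely as an illustration and not the repo's Fortran code:

```python
import numpy as np

def gemm(a, b, c, alpha=1.0, beta=1.0, trans_a=False, trans_b=False):
    # ONNX Gemm semantics: Y = alpha * A' @ B' + beta * C,
    # with A' / B' optionally transposed.
    a = a.T if trans_a else a
    b = b.T if trans_b else b
    return alpha * (a @ b) + beta * c

x = np.random.rand(1, 4)   # a single input row vector
w = np.random.rand(4, 2)   # weight matrix
bias = np.random.rand(2)   # bias, broadcast across the batch
y = gemm(x, w, bias)       # result has shape (1, 2)
```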

## Initialization and Preprocessing

Then, for each test case in `goldenFiles`, the corresponding `.py` file is run to create the model with randomly initialized weights. It writes an intermediary file, `inputs.fpp`, which stores the exact inputs given to the model and is later fed to the Fortran-built model. It also writes a "golden file" that records the model's correct output shape and values. Lastly, the model itself is saved in `.onnx` format.
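As a rough sketch of what one of these scripts does, assuming a PyTorch model and illustrative file names and formats (the actual scripts in `goldenFiles` define their own architectures and encodings):

```python
import torch

# Hypothetical one-layer model; each goldenFiles script defines its own architecture.
model = torch.nn.Linear(4, 2)
model.eval()

x = torch.randn(1, 4)                      # the exact inputs to record
with torch.no_grad():
    y = model(x)                           # golden output from the random weights

torch.onnx.export(model, x, "model.onnx")  # save the model for the Fortran side

# Record the inputs and the golden shape/output; file names and
# layouts here are illustrative, not the repo's actual encoding.
with open("inputs.txt", "w") as f:
    f.write(" ".join(f"{v:.8f}" for v in x.flatten().tolist()))
with open("golden.txt", "w") as f:
    f.write(" ".join(str(d) for d in y.shape) + "\n")
    f.write(" ".join(f"{v:.8f}" for v in y.flatten().tolist()))
```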

`modelParserONNX.py` is then run to parse the `.onnx` model; it gathers information about the model and creates `onnxModel.txt` (layer names and weight dimensions) and `onnxWeights.txt` (the corresponding weights for each layer). It also creates a `variables.fpp` file that stores key information about the model, which fypp processes during model creation.
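A minimal sketch of this kind of extraction with the `onnx` Python package; the text layouts below are assumptions for illustration, not the repo's actual file formats:

```python
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")

# Layer names and weight dimensions (the kind of data onnxModel.txt holds).
with open("onnxModel.txt", "w") as f:
    for node in model.graph.node:
        f.write(f"{node.op_type} {node.name}\n")
    for init in model.graph.initializer:
        f.write(f"{init.name} {list(init.dims)}\n")

# Flattened weight values (the kind of data onnxWeights.txt holds).
with open("onnxWeights.txt", "w") as f:
    for init in model.graph.initializer:
        w = numpy_helper.to_array(init)
        f.write(" ".join(f"{v:.8f}" for v in w.flatten()) + "\n")
```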

## Running and Testing

Lastly, we have two `.fpp` files. `modelCreator.fpp` is the module that builds the subroutine encoding the correct model architecture: it parses `variables.fpp` and reconstructs the model from the layer subroutines in `layers.f90`. `userTesting.fpp` is used to create `userTesting.f90`, a sample driver that calls `initialize` (which enables the Fortran side to read in the weights and model structure from `onnxModel.txt` and `onnxWeights.txt`), passes in the inputs from the intermediary file `inputs.fpp`, and runs the model. The generated driver then stores the output shape and values in a text file.

`testChecker.py` compares the resulting text file against the test's golden file. If the shapes match and the outputs agree within a reasonable tolerance, the test case passes. Otherwise, the error is reported either as a failure due to a shape or value mismatch, or through an external text file, `output.txt`, indicating a runtime failure somewhere in the pipeline (most likely in encoding, decoding, or running the model).
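A minimal sketch of such a comparison; the file layout, file names, and tolerances here are assumptions rather than the script's actual ones:

```python
import numpy as np

def check(output_path, golden_path, rtol=1e-5, atol=1e-6):
    """Assumed file layout: shape on line 1, flattened values on line 2."""
    def load(path):
        with open(path) as f:
            shape = tuple(int(s) for s in f.readline().split())
            vals = np.array([float(v) for v in f.readline().split()])
        return shape, vals

    out_shape, out_vals = load(output_path)
    gold_shape, gold_vals = load(golden_path)
    if out_shape != gold_shape:
        return "FAIL: shape mismatch"
    if not np.allclose(out_vals, gold_vals, rtol=rtol, atol=atol):
        return "FAIL: value mismatch"
    return "PASS"

print(check("testOutput.txt", "golden.txt"))  # hypothetical file names
```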