
[VTA][TSIM][Build] Towards TSIM CI testing #3704

Merged: 20 commits, Aug 13, 2019

Conversation

tmoreau89
Contributor

@tmoreau89 commented Aug 3, 2019

Previously, both the build and the execution were controlled by the TARGET field in vta_config.json. Now we separate the build (set in CMake with the USE_TSIM field) from the execution (set by the TARGET field in vta_config.json).

This change presents the following advantages:

  • Quickly switch between tsim and sim without having to rebuild
  • Makes Colab demos more streamlined, since we can build TSIM for all demo packages and control which simulator to use
  • This will also enable TSIM testing in the CI (needs a follow-up in a separate PR)

Update: now that #3721 is merged, I'm also enabling TSIM CI testing so we can make sure that TSIM support does not break moving forward. I've updated the title to reflect this major feature update.

@tmoreau89
Contributor Author

@vegaluisjose @liangfu please review, thanks!

@vegaluisjose
Member

Hey @tmoreau89 thanks for leading this.

Isn't going back to enabling it through CMake similar to having it in the json file? That is how we had it initially. In my opinion it is better to set every target in the same way and not have another level of indirection for a particular one. All targets (sim, tsim, pynq, ultra96, f1, etc.) should be built in the same way.

Regarding the simulation libraries (sim and tsim): I think it is better to build two shared libraries, vta_sim.so and vta_tsim.so, whenever somebody builds TVM. We don't have to worry about any Verilator dependency because we are using function pointers; the Verilator dependency only kicks in when you want to build the shared library for the Chisel version.

Also, if we decided to go this route we could even generate a library per target, say vta_pynq.so or vta_your_fpga.so. The nice thing about this is that you don't have to rely on the config.json file for selecting the target in Python; we could select the target directly from Python without rebuilding. It also makes it straightforward to know which target a particular library was built for, which is not the case right now with the plain vta name.
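As a rough sketch only (the target names, source paths, and variables below are illustrative assumptions, not the actual build rules), this could look like one set of runtime sources plus one driver file per library:

# hypothetical: shared VTA runtime sources, one driver source per target library
set(VTA_RUNTIME_SRCS vta/src/device_api.cc vta/src/runtime.cc)
add_library(vta_sim  SHARED ${VTA_RUNTIME_SRCS} vta/src/sim/sim_driver.cc)
add_library(vta_tsim SHARED ${VTA_RUNTIME_SRCS} vta/src/tsim/tsim_driver.cc)
add_library(vta_pynq SHARED ${VTA_RUNTIME_SRCS} vta/src/pynq/pynq_driver.cc)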

@tmoreau89
Contributor Author

Thanks for the feedback Luis. I think we can build TSIM by default in either simulation mode, so that we always generate both vta_sim.so and vta_tsim.so, check at runtime that the hardware library was built by the Chisel make, and error out if not.

When in pynq, ultra96, etc. mode it's not necessary to build these simulation libraries, so I'll leave them out (building the RPC runtime already takes a while).

@tmoreau89
Contributor Author

@vegaluisjose I've made the change so that in either simulation mode we always build both the tsim and sim drivers. The hardware design that is integrated with the tsim driver is then determined by what is built inside the chisel directory.

@liangfu
Member

liangfu commented Aug 5, 2019

@tmoreau89 This is a great idea with careful engineering work!

I agree with @vegaluisjose on renaming the libraries to libvta_sim.so and libvta_tsim.so. Even more aggressively, can we just provide the target options inside CMake, instead of parsing VTA_TARGET from the json file? Specifically, I think we can provide the options in CMake like the following:

option(build_vta_sim "Build VTA with fast simulation support." ON)
option(build_vta_tsim "Build VTA with cycle-accurate simulation support." ON)

and call add_library only when the corresponding option is enabled.
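A minimal sketch of that gating, assuming the option names above (the source list variables are placeholders, not existing CMake variables):

if(build_vta_sim)
  add_library(vta_sim SHARED ${VTA_SIM_SRCS})    # VTA_SIM_SRCS is a placeholder source list
endif()
if(build_vta_tsim)
  add_library(vta_tsim SHARED ${VTA_TSIM_SRCS})  # VTA_TSIM_SRCS is a placeholder source list
endif()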

As for the benefit of doing this, I quote @vegaluisjose's comment:

we could select the target directly from Python without rebuilding.

@tmoreau89
Contributor Author

@liangfu this is an interesting suggestion: essentially, control the build of the runtime DLLs from CMake, and control which hardware design gets emulated from the json file.

One suggestion would be to have a target string, like tsim, sim, or fpga, since tsim has multiple targets that it can emulate. Thoughts @vegaluisjose @liangfu?

@tmoreau89
Contributor Author

Actually, this may be a little more complicated, for the simple reason that, depending on the target (say pynq vs. sim), we build the same DLL but with different driver sources. This keeps compilation streamlined both on the host for simulation purposes and on FPGA devices to build the RPC server / runtime.

@tmoreau89
Contributor Author

tmoreau89 commented Aug 5, 2019

Therefore, if we want a CMake-controlled build for the simulator sources, we may need yet another parameter for the VTA RPC server when we build on device (e.g. pynq). We'd essentially have 3 options:

# Whether to build fast VTA simulator driver
set(USE_VTA_FSIM ON)

# Whether to build cycle-accurate VTA simulator driver
set(USE_VTA_TSIM ON)

# Whether to build VTA RPC server (device side only)
set(USE_VTA_RPC OFF)

@liangfu
Member

liangfu commented Aug 5, 2019

One suggestion would be to have a target string, like tsim, sim, or fpga

This is fine by me.

since tsim has multiple targets that it can emulate.

I switched the testing between DefaultF1Config and DefaultPynqConfig without rebuilding libvta.so, because the 'multiple targets' are handled in libvta_hw.so with exactly the same interface. Therefore, there is no need to build multiple copies of libvta_tsim.so.
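For illustration only, switching the emulated design might look like the following, assuming the Chisel Makefile accepts a CONFIG variable (the exact variable name and paths here are assumptions):

# rebuild only the Chisel hardware library; the runtime DLLs stay untouched
cd vta/hardware/chisel
make CONFIG=DefaultPynqConfig    # or CONFIG=DefaultF1Config; produces libvta_hw.so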

We'd essentially have 3 options: ...

I personally think this is a better design.

@tmoreau89
Contributor Author

OK, it sounds like we are on the same page! Please see the latest commit: it lets us explicitly set which DLLs to build (the TSIM, FSIM, or FPGA driver); the json target then determines which FPGA is targeted. Right now the json target is only used when building the FPGA driver, but we could also use it to set the TSIM top-level design, e.g. DefaultF1Config or DefaultPynqConfig.

Member

@liangfu left a comment


LGTM

@tmoreau89
Contributor Author

Note: before merging, I'll need to update the docs to reflect the build steps for simulation and for running on FPGA. So please do not merge right away!

@tmoreau89 added and removed the status: need update label Aug 6, 2019
@tmoreau89
Contributor Author

@liangfu quick update: one suggestion that was made was to have a single parameter that indicates whether we are building on the host or on the FPGA: if it's on the host, we build both simulator runtimes, and if it's on the FPGA, we build the pynq runtime (or another vendor's runtime). How does that sound?

@tmoreau89
Contributor Author

I suggest we hold off on merging this PR until #3721 is merged, so that we can enable TSIM unit testing.

@tmoreau89 added the status: need update label Aug 7, 2019
@liangfu
Member

liangfu commented Aug 7, 2019

@tmoreau89 I would personally prefer the explicit approach here, so that we can configure what to build separately.

@tmoreau89
Contributor Author

Thanks for the feedback Liangfu.

@tmoreau89
Contributor Author

We'll stick with the 3 flags in config.cmake.

@tmoreau89 removed the status: need update label Aug 8, 2019
@tmoreau89 changed the title from "[VTA][TSIM] Separate sim DLLs for ease of switching between two sim modes" to "[VTA][TSIM][Build] Enabling TSIM CI testing" Aug 8, 2019
@tmoreau89
Contributor Author

#3721 is merged, and I've enabled TSIM CI testing!

@tmoreau89 changed the title from "[VTA][TSIM][Build] Enabling TSIM CI testing" to "[VTA][TSIM][Build] Towards TSIM CI testing" Aug 9, 2019
@jroesch merged commit e518fe1 into apache:master Aug 13, 2019
@tmoreau89 deleted the tsim branch Aug 13, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Aug 16, 2019
* building TSIM specific library along with fast simulator to quickly switch between dlls

* cmake controlled TSIM libraries

* always build tsim driver in either simulation modes

* build DLLs based on CMAKE flags

* updating the jenkinsfile

* small restructuring

* reducing the cmake flags

* update instructions

* reverting to 3 flags

* update Jenkinsfile

* adding new line

* enabling TSIM unit and integration tests

* fix description

* temporarily disabling task_python_vta tests in CPU Build stage

* move CPU tests in unit test stage

* stage  reorg

* better make

* disabling TSIM tests for now

* reverting some restructuring

* fix
anijain2305 pushed a commit to anijain2305/tvm that referenced this pull request Aug 22, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Sep 6, 2019
@apivovarov
Contributor

@jroesch @tmoreau89

In file included from /root/tvm/vta/src/runtime.cc:29:0:
/root/tvm/vta/include/vta/hw_spec.h:104:33: error: 'VTA_LOG_UOP_BUFF_SIZE' was not declared in this scope
 #define VTA_LOG_UOP_BUFF_DEPTH (VTA_LOG_UOP_BUFF_SIZE - VTA_LOG_UOP_WIDTH + 3)

Can you explain where VTA_LOG_UOP_BUFF_SIZE is declared in https://github.com/dmlc/tvm/blob/master/vta/include/vta/hw_spec.h ?

@liangfu
Member

liangfu commented Sep 18, 2019

@apivovarov I think VTA_LOG_UOP_BUFF_SIZE is a variable to be customized, and it's defined in vta_config.json. Hope this helps.
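For reference, a trimmed sketch of the relevant vta_config.json entries (field names assume the current layout; the VTA_ prefix is added when the config script turns these fields into compile-time defines):

{
  "TARGET": "sim",
  "LOG_UOP_BUFF_SIZE": 15
}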

@apivovarov
Contributor

apivovarov commented Sep 18, 2019

@liangfu Can you reproduce the error? Try to build with LLVM:

mkdir build
cd build
cp ../cmake/config.cmake .
cmake -DUSE_LLVM=ON ..
make

Following Install TVM from Source: https://docs.tvm.ai/install/from_source.html

tqchen pushed a commit to tqchen/tvm that referenced this pull request Mar 29, 2020