fix nondefault project name handling #626

Merged

vloncar merged 8 commits into main from jmitrevs/fix-project-name on Aug 16, 2022

Conversation

@jmitrevs (Contributor) commented on Aug 1, 2022

Description

As discussed in #624, convert_from_keras_model with project_name set to a nondefault value (e.g. mynewprojname) fails to build, because the variable ${mynewprojname} is not defined in build.tcl. This fixes the problem by always leaving the variable named myproject and only changing the value it holds. Some changes in the Tcl code were required, since the variable name no longer always matches the value it contains. The build.tcl indentation was fixed, and the parsing of the Vivado report was fixed; note that the report parsing is currently broken in main even in the default case.
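To illustrate, a minimal reproduction sketch (the model and part number are placeholders; any Keras model exercises the same path):

```python
import hls4ml
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Any small Keras model reproduces the issue; this one is a placeholder.
model = Sequential([Dense(4, input_shape=(8,), activation='relu')])

config = hls4ml.utils.config_from_keras_model(model, granularity='model')

# With the default project_name ('myproject') the build worked; any other
# value produced generated Tcl that referenced an undefined variable.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='my-hls-test',
    project_name='mynewprojname',  # nondefault name that exposed the bug
    part='xcu250-figd2104-2L-e',
)
hls_model.compile()
hls_model.build(csim=False)  # failed before this fix
```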

Type of change

  • Bug fix (non-breaking change that fixes an issue)

Tests

Testing was done offline by modifying and running part 1 of the hls4ml-tutorial, as mentioned in #624. Nevertheless, we probably should add a test; see the sketch below. The complication is that the test requires synthesis, and generally our pytests don't do synthesis. Plus, synthesis takes a long time. Do we want to add another synthesis test for this case or modify an existing one?
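If we do add one, it might look something like this sketch (hypothetical test; the names are illustrative and it assumes synthesis is tolerable in CI):

```python
import pytest
import hls4ml
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential


@pytest.mark.parametrize('project_name', ['myproject', 'mynewprojname'])
def test_nondefault_project_name(project_name, tmp_path):
    # A minimal model keeps the synthesis run as short as possible.
    model = Sequential([Dense(4, input_shape=(8,), activation='relu')])
    config = hls4ml.utils.config_from_keras_model(model, granularity='model')
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        output_dir=str(tmp_path / project_name),
        project_name=project_name,
    )
    hls_model.compile()
    # Synthesis is the slow part, but it is what catches the bug.
    report = hls_model.build(csim=False, synth=True)
    assert report is not None
```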

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have added tests that prove my fix is effective or that my feature works.

@jmitrevs (Contributor, Author) commented on Aug 2, 2022

The qkeras pytest failure is a known problem that exists in the main branch even without this change.

@thesps (Contributor) commented on Aug 2, 2022

Thanks for the fix; I don't think we need a new test for this.

@jmitrevs (Contributor, Author) commented on Aug 3, 2022

The last push was not a complete fix. I'll send a message again when I think it's ready.

@jmitrevs (Contributor, Author) commented on Aug 3, 2022

I still need to test vsynth=True and to verify that I didn't break the FIFO optimizations.

@jmitrevs (Contributor, Author) commented on Aug 3, 2022

Actually, using the main branch as a starting point, I see

```python
'VivadoSynthReport': {
    'LUT': '0',
    'FF': '0',
    'BRAM_18K': '0',
    'URAM': '0',
    'DSP48E': '0',
}
```

when I run part 1 of the hls4ml-tutorial with hls_model.build(vsynth=True), so vsynth might be broken in the main branch. I will try to investigate.
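For context, those numbers come from the parsed Vivado synthesis report. A quick way to inspect it (the output directory name is a placeholder):

```python
import hls4ml

# hls_model as obtained from convert_from_keras_model.
report = hls_model.build(csim=False, synth=True, vsynth=True)

# The parsed report can also be re-read from the project directory.
report = hls4ml.report.read_vivado_report('my-hls-test')
print(report['VivadoSynthReport'])  # all zeros indicates broken parsing
```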

@vloncar (Contributor) commented on Aug 3, 2022

This has been broken for some time now; I observed it as well while playing with Vitis. Thanks for looking into it.

@jmitrevs (Contributor, Author) commented on Aug 3, 2022

This seems to break the VivadoAccelerator backend.

@jmitrevs (Contributor, Author) commented
I believe this is ready for merging.

vloncar merged commit e8f048a into main on Aug 16, 2022
Jonathan-Shoemaker added a commit to Jonathan-Shoemaker/hls4ml that referenced this pull request on Oct 12, 2022:

Revert "fix nondefault project name handling (#626)". The commit breaks the Vivado Accelerator workflow, and the fix is unclear to me right now.

This reverts commit e8f048a.
Jonathan-Shoemaker added a commit to Jonathan-Shoemaker/hls4ml that referenced this pull request on Oct 13, 2022 (same revert message as above).
judicaelclair added a commit to judicaelclair/hls4ml that referenced this pull request on Oct 19, 2022.
jmduarte deleted the jmitrevs/fix-project-name branch on November 2, 2022.
jmduarte pushed a commit to jmduarte/hls4ml that referenced this pull request on Mar 18, 2023:
add new files for conv1dtranspose resource

clean up so that conv code is reached. Still need to get the actual implementation matching keras

implement conv1dtranspose super inefficiently (gets correct answer though)

try to fix indices to make code work

make the c code work for conv1dtranspose

reduce weight dimensions to properly reflect transposed kernel size

clean up so that transpose filter width is passed around from config

fix code such that simple transpose layer gets synthesized

move variables out of loops, optimize slightly and add in alternative method of computation to compute by kernel (that option is not optimized as of now)

add in conv1d transpose linebuffer format code. seems to work, unsure of if it is optimized yet

trying to fix stream behavior

get transpose compilation working mostly as expected. weird jump in latency from reuse 1 to 2 still exists

initial conv2dtranspose addition. Output is permuted as of now.

output in correct order. using large array to buffer output though

fix up conv1dtranspose a bit to pad correctly. fix up stream instructions for both 1d and 2d transposes

fix allowed reuse factors for transpose layers

update to new conv methods for io_parallel. Still some issues with multiple filters as well as some padding issues

clean up error with multiple filters and larger kernels

optimize conv transpose resource to get it working reasonably well. may still have slight optimization left

fix output to conv1d transpose resource

add conv2dtranspose io_parallel implementation. Can still be optimized

small changeup to data storage in conv1d parallel

fix zero padding pass addition for transpose stream layers

move transposing of weight matrix to resource_strategy for transpose layers

change how stream loads in weights to be like parallel for conv transposes. unroll all stride steps completely

fix output of 1d transpose parallel to be faster

change 1d transpose weight input to be 2-dimensional (passed from python code)

change 2d transpose weight input to be 3-dimensional (passed from python code)

small changes to transposes

Revert "fix nondefault project name handling (fastmachinelearning#626)". The commit breaks the Vivado Accelerator workflow, and the fix is unclear to me right now.

This reverts commit e8f048a.

steps towards getting integer inputs to work
jmduarte pushed a commit to jmduarte/hls4ml that referenced this pull request on Mar 18, 2023 (same commit message as above).
calad0i pushed a commit to calad0i/hls4ml that referenced this pull request on Jul 1, 2023:
* only rename value of ${myproject}, not the variable name

* consistently indent build_prj.tcl; be consistent in using myproject only as a variable

* fix parsing of Vivado reports

* migrate more to using variables instead of changing text; change function name to better match

* rename myproject to project_name in remaining Vivado places

* try to fix vivado accelerator for project_name

* replace forgotten myproject with project_name

* fix FIFO depth optimization tcl
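The first commit above captures the core of the fix: keep the Tcl variable named myproject and substitute only its value. Conceptually, the contrast looks like this (an illustrative Python sketch, not the actual writer code; render_build_script_* and the template strings are hypothetical):

```python
def render_build_script_old(template: str, project_name: str) -> str:
    # Old approach: blanket text replacement renamed the Tcl variable
    # itself, so references like ${myproject} turned into
    # ${mynewprojname}, which was not defined everywhere it was used.
    return template.replace('myproject', project_name)


def render_build_script_new(template: str, project_name: str) -> str:
    # New approach: only the variable's value changes, so every
    # ${myproject} reference in the Tcl scripts keeps resolving.
    return template.replace('set myproject "myproject"',
                            f'set myproject "{project_name}"')
```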
jmitrevs pushed a commit that referenced this pull request on Aug 3, 2023 (same commit message as above).