Quantizer v2 #26
Draft · aquaticsarah wants to merge 16 commits into wntrblm:main from aquaticsarah:quantizer-v2
Conversation
aquaticsarah force-pushed the quantizer-v2 branch 2 times, most recently from 9245555 to 8b61bd8 on March 14, 2022 00:18
Note: For now, the scale cannot be chosen. The input is always quantized to the nearest 12-tone equal temperament step.
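For illustration, a minimal sketch of nearest-step 12-TET quantization, assuming a volts-per-octave pitch representation where one semitone is 1/12 V; the function name and the use of floats are placeholders, not the firmware's actual code:

```c
#include <math.h>
#include <stdio.h>

/* Snap a volts-per-octave pitch value to the nearest semitone (1/12 V). */
static float quantize_to_nearest_semitone(float volts)
{
    return roundf(volts * 12.0f) / 12.0f;
}

int main(void)
{
    /* 1.04 V is just past the octave; the nearest semitone is 1.0 V. */
    printf("%f\n", quantize_to_nearest_semitone(1.04f));
    return 0;
}
```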
The version of the description added in the previous patch was marked up incorrectly, so the text appeared in two separate boxes rather than one box with two paragraphs inside it. This patch fixes that.
aquaticsarah force-pushed the quantizer-v2 branch from 8b61bd8 to 315d2d8 on April 23, 2022 20:19
Note: For testing purposes, these sysex commands currently modify the live config directly; a later commit will change them to modify the config stored in flash. As the config struct is about 2 KB in size, and we end up needing roughly two copies of it on the stack in cmd_0x1A_read_quantizer_config_(), we need to increase the stack size to 8 KB to prevent overflow. This can likely be reduced in future.
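To make the stack pressure concrete, here is a rough sketch of the shape of that handler. The struct layout, the sysex_send() helper, and the handler body are assumptions for illustration; only the handler name and the approximate sizes come from the commit note above.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the real quantizer config struct (roughly 2 KB). */
struct quantizer_config {
    uint8_t data[2048];
};

static struct quantizer_config live_config; /* global, lives in .bss */

static void sysex_send(const uint8_t* buf, size_t len)
{
    (void)buf;
    printf("would send %zu bytes over sysex\n", len);
}

/* Two ~2 KB objects end up in this function's stack frame, so a 2 KB
   stack overflows and the allocation is bumped to 8 KB. */
static void cmd_0x1A_read_quantizer_config_(void)
{
    struct quantizer_config snapshot = live_config; /* first ~2 KB copy */
    uint8_t encoded[sizeof(snapshot)];              /* second ~2 KB buffer */
    memcpy(encoded, &snapshot, sizeof(snapshot));
    sysex_send(encoded, sizeof(encoded));
}

int main(void)
{
    cmd_0x1A_read_quantizer_config_();
    return 0;
}
```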
The previous commits needed more stack space than was allocated, and changed the STACK_SIZE variable in configure.py accordingly. Strangely, however, the linker still only allocated 2 KB to the stack. On further investigation, the line STACK_SIZE = DEFINED(__stack_size__) ? __stack_size__ : 0x800; doesn't work: it always sets STACK_SIZE to 0x800. Changing it to STACK_SIZE = __stack_size__ works perfectly.
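For reference, the change described above amounts to the following in the linker script; the two lines are as quoted in the commit message, and the comments are illustrative:

```
/* Before: always evaluates to 0x800, regardless of __stack_size__ */
STACK_SIZE = DEFINED(__stack_size__) ? __stack_size__ : 0x800;

/* After: picks up the value generated by configure.py */
STACK_SIZE = __stack_size__;
```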
It is important to ensure that the addresses and lengths of the existing NVM sections (settings and lut) are not changed by this patch. I have manually verified this as follows:
* Build the firmware before applying this commit
* Run: $ objdump -t build/gemini-firmware.elf | grep _nvm
* Apply the commit and rebuild
* Run objdump again
* Compare the two sets of outputs; the relevant columns are the first (symbol value) and last (symbol name)
theacodes force-pushed the main branch 2 times, most recently from d591366 to 27e38c6 on February 11, 2024 03:06
Follow-up to #25. Replaces the simple logic in that patch, which can only handle 12-tone equal temperament, with a table-based scheme that can handle any user-specified scale.
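A minimal sketch of what a table-based quantizer of this kind can look like, assuming a sorted table of (threshold, output) pairs; the struct, field names, and 16-bit code representation are illustrative, not the actual types used in this PR:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One scale entry: inputs at or above `threshold` map to `output`,
   until the next entry's threshold takes over. */
struct quantizer_entry {
    uint16_t threshold;
    uint16_t output;
};

/* Walk the sorted table and return the output of the last entry whose
   threshold the input has reached. */
static uint16_t quantize(const struct quantizer_entry* table, size_t len, uint16_t input)
{
    uint16_t out = table[0].output;
    for (size_t i = 0; i < len; i++) {
        if (input < table[i].threshold) {
            break;
        }
        out = table[i].output;
    }
    return out;
}

int main(void)
{
    /* A tiny made-up three-note scale. */
    const struct quantizer_entry scale[] = {
        { 0, 0 }, { 100, 340 }, { 250, 680 },
    };
    printf("%u\n", (unsigned)quantize(scale, 3, 180)); /* prints 340 */
    return 0;
}
```

The advantage over the hard-coded 12-TET version is that the scale lives entirely in the table, so the same code can quantize to whatever scale the user has configured.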