3.0 roadmap #348

trentgill commented Jul 15, 2020

The next version of crow is directed at pushing the ecosystem into being an embeddable nervous system for other devices. At present it has a number of quirks and idioms that are opinionated in rather strange ways.

main ideas

  • global timebase (using the norns-style clock system) -> compositional timing
  • ASL to focus on modulation/audio shapes (not arbitrary function sequences).
  • System(s) for sequencing events in musical time
  • Web-based & norns-based script uploaders / param-interface
  • Parameter system for interactivity with a USB connected device
  • Native USB MIDI support

minor additions

  • Finalized syntax for crow as an i2c follower (ie Teletype->crow, crow->crow, Ansible->crow)
  • Frequency counter for inputs (tuner / v8 calibrator)
  • Just intonation support (input & outputs 'scale', JI->volts conversion)
  • Teletype support

discussion

ASL vs Clock

ASL and Clock are very similar in terms of their under-the-hood implementation. The key difference is the syntax / style of chaining events in time: ASL tries to be more declarative, while Clock reads like imperative code.

ASL is sequenced by the length of modulations, while clocks are sequenced by a global sense of time. The former is a natural fit for writing voltage sequences, but breaks down once arbitrary code is executed in the structures.

This update should draw a distinction between the two by separating their use-cases:

  • ASL for isolated modulations (ie an envelope, lfo, voltage sequence)
  • Clock for 'compositional' sequences, chaining functions, especially across multiple outputs / destinations.

These two systems overlap in the space of a note sequence on a specific channel. ASL might be best for simple things like fixed-rate arpeggios. Clock would be best for sequences of ASL actions. The two systems should interact well, but if we allow ASL to do less of what Clock can do, we can push ASL in a different direction.
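To make the split concrete, a sketch using current ASL syntax for the first case, and the norns-style clock API (as proposed) for the second:

```lua
-- ASL: a self-contained, fixed-rate arpeggio on output 1
output[1].action = loop{ to(0, 0.2), to(4/12, 0.2), to(7/12, 0.2) }
output[1]()   -- start it; it cycles on its own

-- Clock: a 'compositional' sequence of ASL actions across outputs,
-- placed against the global timebase
clock.run(function()
  while true do
    clock.sync(1)                        -- wait for the next beat
    output[2].action = ar(0.01, 0.3)     -- re-arm an envelope
    output[2]()
    clock.sync(1)
    output[3].volts = math.random() * 5  -- arbitrary code between events
  end
end)
```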

ASL

audiorate

The main driver here is wanting to allow ASL to run faster, without artifacts, specifically at audiorates. ASL can already define waveforms but they break down when executed quickly. Instead, these waveforms should run smoothly at signal rate (hence usable as oscillators), while capturing a subset of algorithmic control.

The goal is to define a waveform that has a dynamic waveshape where that dynamism is algorithmically defined. That way a morphing waveform can be conceived of as a single unit of sound. Sounds cool!
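Purely as an illustrative sketch of the direction: on current crow, 'skew' below is evaluated once when the action is assigned; the goal is for a variable like it to live in C-land so it can morph the running shape, even at signal rate.

```lua
local skew = 0.5     -- in the real feature this would be a runtime variable,
                     -- adjustable while the shape keeps running
local cycle = 1/220  -- one cycle, ~220Hz if run at signal rate
output[1].action = loop{ to(  5, cycle * skew )
                       , to( -5, cycle * (1 - skew) ) }
output[1]()
-- changing 'skew' later would reshape the wave without restarting it
```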

dynamics

ASL should no longer be able to interleave arbitrary Lua function calls. Sequencing code like this should be done through the clock system. ASL, meanwhile, is about creating 'waves' (for LFOs, envelopes or waveforms), while sequences live in clocks, or in a far simplified form. Variables passed to ASL should still accept tables or specific functions (eg. a list of notes with a cycling rule). I'm pursuing these ideas as a sequencer elsewhere, and it makes sense to hyper-restrict the ability for variables to be modified by the larger system.

I originally wanted ASL to represent 'every modulation source you could imagine', and while that's fine, it was never intended to include 'every action located in time you could imagine'. In the end everyone just uses lfo and ar and that's kinda it. They'd be just as easy to implement as a clock routine.
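For instance, an lfo is trivial as a clock routine using the existing output slew (sketch, assuming the norns-style clock API lands as proposed):

```lua
-- a triangle-ish LFO re-expressed as a clock routine:
-- slew output 1 toward alternating targets, half a beat per segment
clock.run(function()
  while true do
    output[1].slew = 60 / clock.tempo / 2
    output[1].volts = 5
    clock.sync(0.5)
    output[1].volts = -5
    clock.sync(0.5)
  end
end)
```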

I've made some progress on the idea of 'runtime' variables for ASL that can be controlled by Lua, but don't have to call back into the virtual environment. I think it's necessary to go a step further and make all the variables live in C-land. ASL should be able to call into Lua, but it should be a specific request, rather than happening at every breakpoint. Most of the time, you just want to start a process running that doesn't need access. And if it does call into Lua, it should happen asynchronously so the signal doesn't stop or hiccup.

By doing these things, ASL will be able to run at audiorates without issue, meaning the dynamic(algorithmic?)-waveforms idea will be brought to life.

Clock

Using the norns clock library brings a few interesting ideas. Most importantly it introduces the concept of 'global' time that can be shared between different elements of a script. This global timebase can be set natively, clocked from an input, or clocked from a USB device (norns / USB-MIDI).

eg. If norns has selected 'crow' as the input/output of clock, crow itself will 'listen in' to what that tempo is, in addition to forwarding it to/from the appropriate port. That means the change is transparent to norns, but enables tempo sensitive behaviour in addition to this basic 'expansion' / 'io' usage of crow.
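In practice, assuming the norns clock API carries over largely unchanged (clock.run / clock.sync / clock.tempo), a script might look like this; the source-selection lines are hypothetical:

```lua
-- clock.source = 'input2'   -- hypothetical: follow pulses at input 2
-- clock.source = 'midi'     -- hypothetical: follow a USB-MIDI device
clock.tempo = 120            -- or derived from the selected source

clock.run(function()
  while true do
    clock.sync(1/4)          -- sixteenth notes against the shared timebase
    output[1]()              -- fire whatever action is assigned
  end
end)
```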

musical time

Two elements are at play here. Both should be prototyped on norns (because they are already possible there!)

Musical notation

  • (eg. 1:1, 4:2 as bars:beats)
  • converts to 'clock.sync' or 'ms' (rough sketch below)
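A minimal sketch of that conversion, assuming 4 beats per bar (the helper name and signature are just placeholders):

```lua
-- convert a 'bars:beats' string like '4:2' into a beat offset from 1:1
function bb_to_beats(str, beats_per_bar)
  beats_per_bar = beats_per_bar or 4
  local bar, beat = str:match('(%d+):(%d+)')
  return (tonumber(bar) - 1) * beats_per_bar + (tonumber(beat) - 1)
end

-- bb_to_beats('1:1') --> 0,  bb_to_beats('2:1') --> 4
-- the result could feed clock.sync, or be multiplied out to ms
```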

Event tables

Lists of {time, function} pairs with some helper functions for 'start', 'stop'.

The key point is that time should be referenced from a '1:1' marker (or from ms=0), with events placed in absolute time relative to 1:1. I believe the existing Clock library talks about time relative to when the clock.sync function is called, but I want to talk about time relative to when the event table (timeline? tracker?) started.

Running a function at 2:1 shouldn't care whether any other timed functions were called between 1:1 and 2:1. Events are placed directly relative to the time that 1:1 occurred.
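A rough sketch of how such an event table could be played back on top of the norns-style clock, with events placed in absolute beats from the 1:1 marker (all names here are illustrative):

```lua
-- an event table as {beats-from-1:1, function} pairs
local events = {
  { 0, function() output[1]() end },           -- at 1:1
  { 4, function() output[2].volts = 3 end },   -- at 2:1, assuming 4/4
  { 6, function() print('2:3') end },
}

local function start(tbl)
  return clock.run(function()
    clock.sync(4)              -- land on a bar boundary: this is our 1:1
    local beat = 0
    for _, e in ipairs(tbl) do
      while beat < e[1] do     -- count absolute beats from 1:1
        clock.sync(1)
        beat = beat + 1
      end
      e[2]()
    end
  end)
end

local routine = start(events)  -- 'stop' would be clock.cancel(routine)
```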

Web-based & norns-based script uploaders (relates to param system and USB MIDI)

This idea is to provide a highly simplified script uploading utility, combining elements of druid and the bowery repo.

A musician opens the website and sees a list of available scripts (dynamically sourced from the bowery repo). Simply click a script name, see the --- description, and click 'upload' to put it on crow.

The user will likely have to click an 'allow access to serial port' button.

The second stage is to auto-discover the parameters from the script (see below) and present a UI to modify these values. Values will be uploaded live, so it can be used as a lightweight performance interface for a script, but it is mostly designed for configuring a script, which you would then upload to crow for permanent storage.

A norns script shall be provided with the same functionality. The key difference is that the repo would not be updated via the web, but instead through the maiden librarian as a git submodule relying on bowery. The second difference is a simplified UI for editing the parameters in a consistent way. This could be a set of widgets provided depending on the param types discovered from the script.

Parameter system / Interactivity

Much like the norns 'params' system, crow needs some way of declaring certain variables as 'user configurable' in a more formal way. This enables:

  • auto-discovery of modifiable variables by a remote device (computer / norns)
  • variable storage on crow, to allow remote configuration, then untether.

The auto-discovery method means we can build the above web & norns uploaders in a simple way, allowing them to provide their own interface for modifying these published parameters. It means a crow script can be substantially more accessible to users.

The variable storage would simply overwrite the existing script with an exact replica, plus a footer that contains the saved variables (perhaps inside a helper function) which is called on boot if any variables are in the 'remote' / 'params' table.
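Purely as illustration (none of these names are settled), the declaration and the saved footer might look something like:

```lua
-- hypothetical declaration a script could make so its variables are discoverable
params = {
  { id = 'rate',  type = 'number', min = 0.1, max = 10, default = 1 },
  { id = 'shape', type = 'option', options = {'sine','saw','square'}, default = 'sine' },
}

-- and the kind of footer an uploader could append for permanent storage,
-- re-applied on boot when saved values are present
local function _restore_params()
  rate  = 2.5
  shape = 'saw'
end
_restore_params()
```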

minor additions

These features are less important to the overall concept of the module, but do enable new classes of usage that aren't currently possible.

crow as i2c follower

An initial implementation strategy has been defined in #258 and simply needs to be implemented. This could later be extended with syntax helpers, but should be sufficient for prototyping as is.

frequency counter

This feature is already prototyped and working well. The goal is to accurately track the frequency of single-cycle waveforms at the input. This is useful for tuning oscillators, and building volt-per-octave calibration tools.

Tuning can be useful for live performance, though it is probably not that pertinent here due to the lack of an on-board interface.

The latter use (v8 calibration) is the real purpose of the FC, and it relies upon the 'tuner' functionality working. Further, the calibration engine needs to be improved before this is truly useful in a production setting. By combining the tuner & well calibrated outputs, we can provide a 'turn up'/'turn down' signal for v8 trimpot adjustment. This is something we use in-house for oscillator calibration, and would be nice to provide as a tool for the community. Would be great in combination with the Parameter system (so the tuning can be visualized on a screen).
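The underlying math is simple. A rough sketch of the 'turn up'/'turn down' decision, assuming we have some way to read the measured frequency at the input (how that value arrives is not defined here):

```lua
-- given a measured oscillator frequency and the frequency we asked for,
-- report how far off it is (in cents) and which way to turn the v8 trimpot
local function trim_direction(measured_hz, target_hz)
  local cents = 1200 * math.log(measured_hz / target_hz) / math.log(2)
  if math.abs(cents) < 2 then return 'ok', cents end
  return (cents > 0) and 'turn down' or 'turn up', cents
end

-- eg. send 2V from a calibrated output (expecting the target two octaves up),
-- measure the result at the input, then call trim_direction()
```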

Just intonation support

Since the beginning I've hoped crow could be used for interesting alternate tuning systems. By adding formal support for Just Intonation in the input/output scale functions, it would feel like less of a second-class citizen. At present it's possible, with a little math. The goal should be to make it work with the current system, then figure out how to abstract some of the boilerplate back behind the scenes.

Having a (set of) helper functions to convert Just ratios into a 'volts' representation would decenter the 'automatic' behaviour of the i/o scalers. Again, these can be built with druid & a lua script, then captured into a library if deemed useful.
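The core conversion is just a log2, per the 1V/octave convention. A minimal sketch (the helper name is mine):

```lua
-- convert a just ratio (eg. 3/2) to a 1V/oct voltage offset
local function ji_to_volts(num, den)
  den = den or 1
  return math.log(num / den) / math.log(2)   -- 3/2 -> ~0.585V, 2/1 -> 1V
end

-- eg. a just-intoned fifth above the current pitch:
-- output[1].volts = base_volts + ji_to_volts(3, 2)
```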

Teletype support

Building on the ii follower support, Teletype should have its functionality extended to be able to communicate with crow. There have already been numerous people who've offered to implement the interface once we have the system formally defined in #258 and tested on hardware.
