
feat: modules as plugins #1002 #3062

Draft · wants to merge 67 commits into base: main
Conversation

@Julusian (Member) commented Oct 1, 2024

in progress..

This needs some further UX thought, but the functionality is making progress.

Managing modules

There is a new 'modules' tab, which uses the split layout approach.
[screenshot]

The left column lists all the installed modules, allowing you to select which one to manage. Some more details should be added to this table at some point.
It has a similar style of filtering to the one we use for the connections list.

At the top of the page, you can upload a module package to install it as a 'custom' version.

In the right panel is a list of all the modules as reported by the store. On the right of each row are some external links; the button on the left either installs the latest version of the module from the store, or acts as a shortcut to manage the module if a version of it is already installed.

The list from the store is refreshed when opening this view if it hasn't been updated in a while, and can be refreshed on demand by clicking the button.
Currently this uses a hardcoded JSON blob, simulating what will happen once the API exists.

When selecting a module, the right panel switches to a different layout.
[screenshot]

The top table lists all the versions of the module, either reported by the store or already installed from the store.
You can install/uninstall versions here with the button on the left. That button changes to different icons if the version can't be installed for some reason (e.g. the module-api version is too new for Companion, or the API doesn't provide a download URL).

The data for this table is fetched from the API separately, so it has its own refresh button.

The lower table lists all the 'custom' imported versions of the module.
This shows similar details to the table above, but simplified a bit. From here you can remove these versions.

Managing connections

When adding a module, you can choose which version of the module to use. The 'latest' options allow for auto-updating as new versions of the module are installed. This will only choose from the dev, builtin and store modules, not custom versions.
[screenshot]

On a disabled connection, you can change the version of the module that it is using
[screenshot]

Some additional behaviours:

  • After installing a new version of a module, connections will automatically restart if needed: either to satisfy a missing version, or, if the meaning of 'latest stable' changes, to switch to the new version
  • The 'discover' tab is loading and caching the list of modules. It is intended to scrape the API on demand, or automatically after some number of hours.
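The 'latest stable' resolution described above could be sketched roughly like this. This is a hypothetical TypeScript sketch; the interfaces and function names are illustrative, not Companion's actual code:

```typescript
// Hypothetical sketch of resolving what 'latest stable' currently means for a
// module. Custom-imported versions would be filtered out before calling this.

interface ModuleVersion {
	version: string // semver-ish, e.g. '2.1.0'
	isPrerelease: boolean
}

// Compare two version strings numerically (major.minor.patch only).
function compareVersions(a: string, b: string): number {
	const pa = a.split('.').map(Number)
	const pb = b.split('.').map(Number)
	for (let i = 0; i < 3; i++) {
		const diff = (pa[i] ?? 0) - (pb[i] ?? 0)
		if (diff !== 0) return diff
	}
	return 0
}

export function resolveLatest(installed: ModuleVersion[], allowPrerelease: boolean): string | null {
	const candidates = installed.filter((v) => allowPrerelease || !v.isPrerelease)
	if (candidates.length === 0) return null
	// Sort descending so the newest acceptable version comes first
	candidates.sort((a, b) => compareVersions(b.version, a.version))
	return candidates[0].version
}
```

A connection pinned to 'latest stable' would then be restarted whenever the result of this resolution changes after an install/uninstall.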

TODO:

  • All the UX
  • Differentiating between stable and prerelease doesn't work currently
  • Hook it up to a real API
  • Figure out how/where to show a hint that there are newer versions of modules that could be installed
  • Better handling of offline environments
  • Much error handling
  • Implement more granular reloading of connections after module install/uninstall
  • Should builtin modules be moved into the same store modules path, so that they persist when upgrading/downgrading?
  • Installed modules shouldn't be stored inside the release-specific config dir
  • Make sure that the versions persist correctly through import/export
  • How should the legacyId flow work now?
  • Should a module declaring itself as prerelease be part of the manifest, in the API, or a file inside the archive?
  • Bulk import of module-bundles

@Julusian Julusian added this to the v3.5 milestone Oct 7, 2024
@Julusian Julusian linked an issue Oct 10, 2024 that may be closed by this pull request
@krocheck (Member)

What's the intent in terms of bundling modules in the future? Should we have multiple download options, such as:

  • minimal - no bundled modules
  • common - top 50? modules
  • full - all current modules

In terms of the UI, I'm thinking we use the Connections tab for inspiration.

  • Have a main listing of all modules "in the system" (bundled, dev, installed) filterable and searchable. Have the "Import Module" button at the top here.
  • "+ Add connection" would be the "Discover" tab showing Bitfocus cache and display any not installed or modules with new versions. "Add" button is instead "Install" with a popup to select which version to install.
  • Clicking an existing module pulls up what would be the "edit" tab but called something different. There you can see all the versions available, purge unused ones, check the help files for each one ... potentially see a listing of the other versions available (same as under the discover tab).

I will want to talk about data structures once #3045 is merged. I think the cache should live in `cache`. I standardized on _ instead of - in key-name stores as part of that. Could parts of this be better served by their own table instead of full JSON blobs? I'm sure you've already considered some of these points.

@Julusian (Member, Author)

What's the intent in terms of bundling modules in the future? Should we have multiple download options, such as:

I think that having multiple download options will add more confusion and complexity than it is worth.

My view is that we should always include 'common' modules. And the versions bundled in the betas will be updated much less often, maybe as infrequently as just before a stable release.
As part of this 'store', I think it needs to be possible to download the modules from the Bitfocus website, to cater to offline users who can't use the store inside of Companion.

I'm not yet sure if it should be possible to include modules inside exports, or if they need to be installed separately (for online users, this could be automatic).

In terms of the UI, I'm thinking we use the Connections tab for inspiration.

I think that makes sense. That list could have icons to indicate what 'types' of build each module has.

I'll give this a try; this is just the inspiration I need to have some direction :)

I will want to talk about data structures once #3045 is merged.

The code is still in the 'getting things working' stage, and I haven't thought much about the cache other than knowing that what I am doing is temporary. Tbh, until there is an API to scrape, I don't have any idea how the fetched data will look, or whether I am assuming it will provide more at the first level than it does.

@Julusian Julusian force-pushed the feat/loadable-modules2 branch 3 times, most recently from d72d449 to 781b691 Compare October 13, 2024 19:55
@dnmeid (Member) commented Oct 16, 2024

Thank you Julian for digging into this super-long-standing request.
Here are a few thoughts I have:

  • Should we call it "module" or "plugin" in the UI? Personally I'm fine with both, maybe plugin is the more correct term.
  • Would it be possible to automatically include the store modules in search, and download the selected version on first use if the user has an internet connection, right from the Connections tab? Then a user with internet wouldn't need to care much about module management.
  • I see the point of the order of the tabs when you are forced to use the Modules tab first to get any module. But if downloading modules is also possible from the Connections tab, I see module management more like a setting and would place the tab in the vicinity of the Settings tab on the right.
  • I would not make any differentiation between custom modules and non-custom modules. I think there should just be modules/plugins. So the whole concept of the local development modules path would be superseded by a single directory where all the modules are stored. The user should be free to either store modules there from the OS, or let Companion download modules there, so this would be an accessible directory. The only tiny caveat here is if there are multiple modules with the same version number; then maybe add the filename and/or date for unambiguity.
  • There should be a method of mass management, like downloading many or all modules, downloading updates for all used modules, and deleting outdated versions
  • Isn't the "discover modules" sub-tab somehow redundant?
    Right now the workflow seems to be: 1. go to Modules; 2. discover modules on the right; 3. add a module so it appears on the left; 4. select the module on the left so it can be managed on the right; 5. use the module in the Connections tab.
    Couldn't we have just one list of modules in the management tab, showing a combination of all modules, the discovered store modules and the locally available ones? Filters could still be applied to show only the local ones, only the remotely available ones, only the betas, or only the ones with available updates

Regarding the open questions:

Figure out how/where to show a hint that there are newer versions of modules that could be installed

I'd suggest showing a hint in the module list of the management tab, maybe a little up arrow or something. On the tab header we could have a dot with the number of updates available.

Better handling of offline environments

That's where mass management is a benefit: you are online once, download all modules, and go to the job. The user-accessible module directory also helps users who are installing completely offline: they can install Companion offline and then just copy/unzip a bundle with all the needed modules into the directory.

Should builtin modules be moved into the same store modules path, so that they persist when upgrading/downgrading?

I'd say yes. Then the only difference between a builtin module and all other modules is that the builtins are always delivered with the core application.

How should the legacyId flow work now?

Why should there be any difference from today? I see that the legacyId system is not super robust, and there might be an increase in problems if the user can downgrade modules more easily. Do you have any particular problems in mind?

What's the intent in terms of bundling modules in the future?

My suggestion would be to always bundle the internal modules and that's it: no different installers for the core application. But at the same time there needs to be a convenient download option for an "all modules bundle", so you can grab them e.g. as a zip archive and then just drop them in the module directory, with only one additional step if you are doing a complete offline installation.

@dnmeid (Member) commented Oct 16, 2024

And another thing comes to my mind right now: today we are talking about modules for connections, but maybe in the future we want plugins for surfaces, APIs, themes, language packs...
Maybe we should keep an eye on these possibilities now when it comes to the interface.

@krocheck (Member)

@dnmeid Most of what you mentioned above @Julusian and I have been discussing in Slack, so I'll add some of that here.

I tend to agree that the Modules tab should be more of a 'settings' aspect of the workflow instead of a stopping point in getting a working installation going (except for maybe offline users). Custom having its own section was discussed and conflicts related to version numbers in custom vs published releases was a main sticking point to keeping them clearly separated. I also think it would be better for the "built in" modules to not necessarily show up in the "installed" list; with my preference being they install on demand as part of creating a connection, and I thought it was good option to allow JIT install of a newer "discovered" version of a module at the time of creating the connection. But none of that negates the need to have the Modules tab to purge unused versions, upload custom versions, or bulk upload a module bundle. Part of not installing the built-in modules was so that any used modules would copy over as being installed on use so that they are retained on a core update.

I'm not sure what the module upgrade notifications should look like yet, but that's definitely going to be a part of this somehow.

The offline workflow is going to be the more interesting aspect. I don't like the concept of bundling "commonly used" modules, because it's doubtful that would 100% cover offline users' needs anyway. My thought has been to have a 'minimal' and a 'full' version (minimal having no modules and full being what we do today). Those could just as easily be called 'online' and 'offline'. Or we go with no bundled modules and have an offline module-bundle download. I'm good with either. But the notion that someone can "sync up" after they download, before they take the system back offline, is not workable for my use-cases, because we need the stable installs on a shared drive for the offline systems. Those would not have an opportunity to grab modules from an online system right after downloading and installing.

Julian and I have discussed a module approach for Surfaces as a start, so that has been in the back of our minds during this. Services would be next on the list after that.

@Julusian (Member, Author)

Should we call it "module" or "plugin" in the UI? Personally I'm fine with both, maybe plugin is the more correct term.

I don't feel strongly about this, just that later on surfaces will also use a similar 'plugin' system. So if we call this plugins, would the surface ones share the UI, or be called something else? There is already some confusion among users over how to enable additional surface types.

Would it be possible to automatically include the store modules in search

Good point, I don't see why not. It might be a little confusing, as I'm not sure if we can show the module help until we have installed the module (the store API is very unknown at this point).

I would not make any differentiation between custom modules and non-custom modules.
Only tiny caveat here is if there are multiple modules with the same version number,

This is my current reasoning for it. If we have a module on disk claiming to be 'v1.1.1', is that the same as the version on the store, or is that a user-modified/custom build? That affects how we should treat it in the UI: if it's the same as the store then we shouldn't offer to download that version; if it's different then we should.
This separation is intended to force the distinction between 'we know where this came from' and 'the user gave us this, it could be anything'. There are other ways to solve this, depending on how strict/accurate we want this to be, and how much we expect users to blindly copy these folders and make edits to the files they contain.

The user should be free to just store modules there from the OS or Companion does download modules there, so this would be an accessible directory.

I disagree with this; I don't want this directory to be accessible.
The current implementation of managing these modules assumes a very fixed folder structure, in part to make its life easier and to keep things predictable. Without that predictability we will get weird behaviour when version numbers clash.

There should be a method of mass management, like downloading a lot or all modules, downloading updates of all used modules, deleting outdated versions

Yeah, not something I've thought about. I think this should be a follow-up PR; none of it sounds critical to making this PR useful.

Isn't the "discover modules" sub tab somehow redundant?

I'm not sure that it is. Even with "automatically include the store modules in search", being able to install a module without adding a connection for it might be a useful thing?
Maybe more so if there is a filter, or if it only shows non-installed modules?

Why should there be any difference from today? I see that the legacyId system is not super robust and there might be an increase of problems if the user can downgrade modules more easily. Do you have any particular problems in mind?

I could write a lot on the thoughts I have about this, but trying to keep it short:
The main thing is where the definition exists. In the API, it makes sense to have a deprecated field for each module, so that in the lists and such we can show that a module is deprecated. It would be ideal to know, from that same API payload, which module to direct users towards.
I suppose I'm really just trying to justify making this the way to do it, instead of having each module version declare what it replaced.
But this approach has some downsides: it doesn't work offline, and it is unclear whether we want to add all the current legacyIds as modules in the API for this to work.

I haven't really thought too much about when/how we push connections across from one module to another, because being able to pin connections to a specific version of a module makes that harder (especially as we should allow going backwards across the rename, just like we do within a module).

Should builtin modules be moved into the same store modules path, so that they persist when upgrading/downgrading?

The default/recommended way of adding a module will be to leave it set to 'use latest version', meaning that most of the time, when you update, it won't try to use the old versions.
I am exposing manual selection so that users can pin an old version when needed, such as because of a bug in a newer version, or for testing a new version as a secondary module.

I propose that modules should be copied into the store modules path just before they are first used.
Doing it for every module automatically both wastes disk space and adds noise to that folder and Companion (forcing users to clean out old versions of modules they never use). Copying on first use still provides the safety net of preserving the current version of a module when updating.

My suggestion would be to always bundle the internal modules and that's it
But at the same time there needs to be a convenient download possibility for a "all modules bundle"

My concern here is that the number of modules is only going to increase. Currently we are at I think ~400, taking 280MB on disk across over 7400 files. (That size might jump by ~35MB if we repackage the legacy modules so they are no longer handled in a special way.)
We aren't talking huge numbers, but those 7400 files are noticeable when installing on Windows.

If we are OK with shipping these 400, is there a point where we will say that there are too many?
That's a lot of modules and files, of which each user uses a small minority. Some of them don't work anymore, some have a target audience of one organization, and some need internet to work, so they don't need to be available in an offline system.

That said, I think we need to keep them all in for a couple of releases. That might be necessary anyway to ensure that we aren't deleting modules out from under people.

But a question I have: while there is a simplicity to being able to load in 'all modules', is that something that is actually useful? Surely you already know what kit you have beforehand, so you can narrow it down to a small list of modules?

I should add, one thing I am leaning towards is that full exports probably wouldn't package up the modules they use, which could make doing a full import on a new machine painful if we don't ship all the modules (although those could be the wrong/old versions).

@dnmeid (Member) commented Oct 19, 2024

I would not make any differentiation between custom modules and non-custom modules.
Only tiny caveat here is if there are multiple modules with the same version number,

This is my current reasoning for it. If we have a module on disk claiming to be 'v1.1.1' is that the same as the version on the store, or is that a user modified/custom build?

I still think there should be no difference.

  1. The division between custom modules and non-custom modules sounds like a developer's solution to a developer's problem.
  2. The division doesn't actually solve this problem; you can still end up with the same version numbers in each section, and then everything gets even messier.

We can't avoid users wanting to install module x.y.z that they have obtained somewhere, which may have the same version number although it may or may not have different code. We can't guarantee that version numbers in the store are always correct either.

I think the best solution is to make sure we have at least one thing that makes a module unique in an installation, and that can be easily understood by the user. I think the filename would be the best such thing. All files in a folder need to have a unique name, and the OS will enforce that no matter how a file got into a folder. That means if we have e.g. blackmagic-atem_v3.14.1, it is a different name from blackmagic-atem_v3.13.0; and if a user somehow gets a custom module where the developer didn't change the version number, so it is named blackmagic-atem_v3.14.1 too, and he wants to store it in the modules folder, there will be an error and he will be forced to cancel, overwrite the existing file, or rename one of the files. All this is known to users and can easily be understood. The only part Companion would need to do is check whether there are multiple files of the same module with the same version, and then additionally show the filename in the GUI so the user can pick the module he really wants.

I also don't think we have to care whether a module is really the same as in the store or not. If a user has a modified version and it has an unmodified version number, he will know the reasons why he has it and how to deal with it. We, on the other hand, don't know the reasons or how to deal with it, so I think we should not try to.

But we definitely should raise module developers' awareness of this topic and make sure they use e.g. build numbers or alpha/beta markers for their custom builds.
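The duplicate-detection part of this suggestion is small. A hypothetical sketch, with all names illustrative, assuming a scan of the modules directory has already produced the list:

```typescript
// Hypothetical types: what a scan of a user-accessible modules directory
// might produce. All names are illustrative, not Companion's actual code.
interface ScannedModule {
	filename: string
	moduleId: string
	version: string
}

// Flag entries whose moduleId+version occurs more than once, so the UI can
// additionally show the filename for those rows and let the user pick.
export function flagAmbiguous(scanned: ScannedModule[]): Array<ScannedModule & { showFilename: boolean }> {
	const counts = new Map<string, number>()
	for (const m of scanned) {
		const key = `${m.moduleId}@${m.version}`
		counts.set(key, (counts.get(key) ?? 0) + 1)
	}
	return scanned.map((m) => ({
		...m,
		showFilename: (counts.get(`${m.moduleId}@${m.version}`) ?? 0) > 1,
	}))
}
```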

But a question I have, while there is a simplicity to being able to load in 'all modules', is that something that is actually useful? Surely you already know what kit you have beforehand, so you can narrow it down to a small list of modules?

Yeah, for me that is a must-have. I would roughly divide Companion users into two groups here: 1. users with a slowly changing set of devices, like studios, streamers, churches, home users, small companies with limited scope; 2. users with dynamically changing devices, like freelance technicians, full-service companies, rental companies.
The first group tends to have internet connectivity; they can download just the modules they need, and they usually do more complex Companion setups, so they are more affected by breaking changes and more concerned about version management. The second group is usually challenged with very dynamic situations, where gear is added at the last minute; they often don't have internet and usually don't have much time, so the goal is not to make the fanciest setup, but to make this additional thing work ASAP. They tend to have company computers that are updated maybe once a year. For them it is important to have something to carry on a USB drive that gives you a Companion with all options, without having to think a lot or make additional selections.
Size does matter of course, but I'd rather download and carry 2GB of modules than invest 5 minutes in making a selection of the modules I may need on my next job, just to find out I missed one when I'm there.
Hence my suggestion: don't bundle any modules except the internal ones with Companion, and additionally offer an all-modules bundle. By bundle I mean a bundle of only the modules, without Companion itself. That bundle could automatically be updated the same way the beta version is updated today.

Another thing just came to my mind: do we want to show module vs. core compatibility anywhere? Version numbers are one thing, but the version of the base-module / API used is a different one.

@Julusian (Member, Author)

We can't avoid that users want to install module x.y.z that they have obtained somewhere and that may have the same version number although it may or may not have different code. We can't guarantee that version numbers in the store are always correct either.

What I am afraid of is a user reporting a bug in 1.2.3 of a module, and the maintainer then spending hours trying to reproduce the issue and failing, because the user was running a modified build of the module which caused the bug. Maybe they modified it, maybe someone else they work with did and didn't tell them.
This might be me being overly paranoid about not trusting users to be sensible, but users are frequently unpredictable and do unexpected things. So if a module dev can't trust that a bug report saying 1.2.3 is broken is actually about 1.2.3, that can make their life a lot harder.
If I'm modifying some code quickly to make it work, I often wouldn't bother updating the version number, especially if I had no intention of distributing it when I made the change.

But it sounds like this separation has to go anyway, to make the bundle import work with similar ability to installing from the store. If I really want to keep it, perhaps adding another file to each module containing checksums of the other files would be enough to cause it to be flagged as modified, unless someone really wanted to avoid that.
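The checksum idea could work roughly like this. A sketch using Node's built-in crypto module, with hypothetical names and files modelled as a simple in-memory map to keep it self-contained:

```typescript
import { createHash } from 'node:crypto'

// Hypothetical checksum manifest: relative path -> sha256 hex digest.
type ChecksumManifest = Record<string, string>

function sha256Hex(contents: string): string {
	return createHash('sha256').update(contents).digest('hex')
}

// Written into the archive at packaging time...
export function buildManifest(files: Record<string, string>): ChecksumManifest {
	const manifest: ChecksumManifest = {}
	for (const [name, contents] of Object.entries(files)) {
		manifest[name] = sha256Hex(contents)
	}
	return manifest
}

// ...and checked when the module is loaded: any changed, missing or extra
// file means the module should be flagged as 'modified'.
export function isModified(files: Record<string, string>, manifest: ChecksumManifest): boolean {
	const names = Object.keys(files)
	if (names.length !== Object.keys(manifest).length) return true
	return names.some((name) => manifest[name] !== sha256Hex(files[name]))
}
```

This only flags accidental or casual edits; anyone determined to avoid it can regenerate the manifest, which matches the "unless someone really wanted to avoid that" caveat above.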

Another thing just came to my mind: do we want to show module vs. core compatibility anywhere? Version numbers are one thing but the used version of the base-module / API is different.

That is a good point to consider here. When installing from the store, Companion will know what API versions it supports, so it won't offer to install versions which it knows are incompatible. Same when importing a single module downloaded in the browser; we just need to make sure the browser version can give users a good indicator of version support.
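A minimal sketch of that compatibility filter, assuming the store reports a module-api version per build. The field names are guesses, since the store API doesn't exist yet:

```typescript
// Guessed shape of a store-reported build; every field here is an assumption.
interface StoreBuild {
	version: string
	apiVersion: number // module-api major version this build targets
	downloadUrl: string | null
}

// Decide whether the install button should be active for a given build,
// given the range of module-api versions this Companion can host.
export function canInstall(build: StoreBuild, minApi: number, maxApi: number): boolean {
	if (!build.downloadUrl) return false // store has no artifact to serve
	return build.apiVersion >= minApi && build.apiVersion <= maxApi
}
```

The same predicate could drive the different button icons mentioned earlier (too-new module-api vs no download URL).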

For the 'module-packs', I guess they will need to be versioned for the version of Companion they target. To keep it simple, use the same major+minor version numbers. We could either generate them at the same time as doing a release of Companion, or let them follow their own cycle but under the same numbering.
You often wouldn't be getting the latest version of each module, but it would be no worse than today.
And if there is one you explicitly want a newer version of, you could always download it separately.
Making these packs is going to require some thought on workflow, but I am still leaning towards the first release bundling every module, and dropping some/all in the release after.
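The major+minor convention could be checked with something as small as this (illustrative only, not an agreed design):

```typescript
// A module pack is considered compatible when its major.minor matches the
// running Companion's major.minor (the convention proposed above).
export function packMatchesCompanion(packVersion: string, companionVersion: string): boolean {
	const majorMinor = (v: string) => v.split('.').slice(0, 2).join('.')
	return majorMinor(packVersion) === majorMinor(companionVersion)
}
```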

@dnmeid (Member) commented Oct 21, 2024

So if a module dev can't trust a bug report saying 1.2.3 is broken is actually 1.2.3, then that can make their life a lot harder.

That's why I say we should raise developers' awareness of this topic. I think the users that modify the code themselves won't file a bug report for the modified version. The problem is developers sending out modules with modified code and the same version number; they are shooting themselves in the foot.
So far I have often done it the same way myself, but with a plugin system you never know where a preview version spreads, so I'd rather use unique beta version numbering.
A checksum though is always a good thing, even if it is only for ensuring integrity.

Status: In Progress

Successfully merging this pull request may close: Modules as downloadable plugins
3 participants