Add a linstor storage driver #564

Adding this as its own feature request from discussions in #344.

There's been some interest recently in seeing Linstor added as a remote storage driver alongside Ceph and clustered LVM.

The initial things to look at are:

- Go client for Linstor: https://github.com/LINBIT/golinstor

Comments
Hi, I'm interested in working on this issue. May I be assigned to it?
Hey there, I wouldn't recommend taking on this issue, as this is the kind of work that I'm currently estimating at more than a month of full-time, dedicated work to get done right.
That makes sense, thank you for letting me know.
@stgraber Can I take a stab at this? I would not be able to work full-time on it. If there is no urgent deadline for the delivery, I can commit to working on this for the next 2-3 months and getting it done.
Sure, you can start looking into this one. The first stage doesn't actually involve Incus too much. You should basically get a few systems (I'd do 3), give them some extra disks for use with Linstor, and then deploy an Incus cluster across those systems, but with no storage configured at this stage. Then deploy Linstor on the same machines. After that, start figuring out what needs to be done through the Linstor Go client to create a pool, create volumes within that pool, and create snapshots on those volumes; what main configuration options we'll want to expose on both pools and volumes; and what needs to be done when an instance is moved from one system to another. That should all happen outside of Incus. Once you have some example Go code using Linstor's Go client which can handle those basic cases, we can start putting all that into a new Incus storage driver, figure out what needs to be done to be able to run basic tests of Linstor in the GitHub Actions environment, and then get it reviewed and merged.
Frequent updates in this issue as you make progress will be key, so that anyone else who knows Linstor can assist with the best ways to set things up and run it.
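For whoever picks this up, a minimal sketch of that exploration phase using golinstor's `client` package might look something like the following. This is untested; the controller URL, resource group name, storage pool name, and sizes are placeholders, and the exact method and field names should be verified against the golinstor docs:

```go
package main

import (
	"context"
	"log"
	"net/url"

	"github.com/LINBIT/golinstor/client"
)

func main() {
	ctx := context.TODO()

	// Placeholder controller address; 3370 is Linstor's default REST port.
	u, err := url.Parse("http://linstor-controller:3370")
	if err != nil {
		log.Fatal(err)
	}

	c, err := client.NewClient(client.BaseURL(u))
	if err != nil {
		log.Fatal(err)
	}

	// Create a resource group, which is roughly what an Incus storage
	// pool could map to. Names and placement values are placeholders.
	err = c.ResourceGroups.Create(ctx, client.ResourceGroup{
		Name: "incus-pool",
		SelectFilter: client.AutoSelectFilter{
			PlaceCount:  2,
			StoragePool: "thin-pool",
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Spawn a resource (a replicated volume) from the group.
	// Linstor sizes are expressed in KiB.
	err = c.ResourceGroups.Spawn(ctx, "incus-pool", client.ResourceGroupSpawn{
		ResourceDefinitionName: "vol1",
		VolumeSizes:            []int64{10 * 1024 * 1024}, // 10 GiB
	})
	if err != nil {
		log.Fatal(err)
	}

	// Take a snapshot of the new resource.
	err = c.Resources.CreateSnapshot(ctx, client.Snapshot{
		Name:         "snap1",
		ResourceName: "vol1",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Spawning from a resource group is how Linstor stamps out resources with the group's placement rules, which is part of why a resource group looks like a natural backing object for an Incus storage pool.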
@stgraber Sounds good. Thanks for the detailed explanation. I will start working on setting up the required instances and post updates here as I go.
@stgraber Could you please assign this to me? I will start working on it this week.
Done
Any updates? Thanks!
@sharathsivakumar can we get an update?
Hey @sharathsivakumar, how's it going with this one?
Hi @sharathsivakumar! I'm very interested in this feature, so if you need help with Linstor and it's something I can answer, let me know.
Hello folks! I'm also quite interested in this feature, so if there's anything I can help with, let me know. For a minimal Linstor deployment for testing, I think the Ansible playbooks provided by LINBIT might be a good start. A few months ago I had to do a Linstor demonstration, so I ended up writing a very simple playbook that could also serve as a starting point.
Cleared assignee as he's clearly inactive.
Hey @stgraber! I'll be able to work on this feature part-time. @winiciusallan is also interested in it, so we agreed to collaborate on the development. As a first step, we want to get a basic automation setup to quickly deploy a Linstor cluster for development. We're thinking about doing something similar to incus-deploy, in which we provision some VMs using Incus itself and then deploy Linstor using Ansible. Once that's done, we'll probably start discussing the requirements for making Incus communicate with Linstor over its API and settle on a minimal set of config options for that integration. We'll probably need a list of endpoints and optional client certificates for mTLS, much like how it's done for OVN today. If that sounds good to you, could you assign the issue to me?
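To make the mTLS part concrete, here's a hedged sketch of what the client side could look like. It assumes golinstor exposes an option for supplying a custom `*http.Client` (the `client.HTTPClient` option; verify the name against the package), and the certificate paths are purely hypothetical stand-ins for whatever config keys the integration ends up with:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"net/url"
	"os"

	"github.com/LINBIT/golinstor/client"
)

func main() {
	// Hypothetical certificate paths; in Incus these would come from
	// server config keys, much like the OVN ones.
	cert, err := tls.LoadX509KeyPair("/etc/incus/linstor-client.crt", "/etc/incus/linstor-client.key")
	if err != nil {
		log.Fatal(err)
	}

	caPEM, err := os.ReadFile("/etc/incus/linstor-ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	// HTTP client presenting the client certificate and trusting the
	// controller's CA.
	httpClient := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      caPool,
			},
		},
	}

	// 3371 is Linstor's default HTTPS REST port.
	u, err := url.Parse("https://linstor-controller:3371")
	if err != nil {
		log.Fatal(err)
	}

	c, err := client.NewClient(client.BaseURL(u), client.HTTPClient(httpClient))
	if err != nil {
		log.Fatal(err)
	}
	_ = c // use the client as in the earlier sketch
}
```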
Well, we figured out today that Linstor may be a good fit for a new cluster we're deploying at $job. If splitting (and coordinating) the workload between 3 people works for you two, you can count me in. I'll be able to work on it during my office time: the sooner we can get our cluster up and running, the better :)
Great to see all the interest! I think the first step is definitely to agree on minimal steps to get Linstor deployed and working on its own across 2-3 VMs. That way we can focus on the Incus integration with it. We should try to aim for something pretty simple and lightweight that can be reviewed and merged pretty quickly; that way it's then easier for multiple people to contribute improvements and missing features on top of that afterwards.
@bensmrs that sounds great! @winiciusallan is working on automating a minimal Linstor deployment. We'll reuse this Ansible playbook for the deployment/configuration part; the only thing missing is the Terraform side of things to provision some VMs with Incus itself (like it's done with incus-deploy). We should get that done in the next few days. I'll be working on the integration part-time at my job starting next week. Do you think we could discuss the next steps together at the start of next week?
Agreed. We could discuss the basic shape of the integration together, and then implement an MVP with the basic features. @stgraber what set of features would be considered the bare minimum so we can start parallelizing the work? I think the config options for connecting with the Linstor controller are an obvious candidate here, but I'm not sure how much of the storage driver interface we'd need to cover initially.
So at the pool level, we obviously need to be able to create and delete pools. That would leave things like snapshots, volume resize, migration, backup, ... to be implemented through follow-up PRs. Though some of those (migration, backups, ...) have generic helpers for non-optimized code paths that should just work out of the box.
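As a sketch of how that bare minimum might map onto Linstor, assuming an Incus storage pool is backed by a Linstor resource group (as the discussion above frames it), pool create/delete reduces to roughly two golinstor calls. The function names here are hypothetical, not the actual Incus storage driver interface:

```go
// Hypothetical helpers, not the real Incus storage driver interface:
// they only illustrate that minimal pool support is two API calls.
package linstorpool

import (
	"context"

	"github.com/LINBIT/golinstor/client"
)

// createPool creates the resource group backing an Incus storage pool.
func createPool(ctx context.Context, c *client.Client, name string, placeCount int32, storagePool string) error {
	return c.ResourceGroups.Create(ctx, client.ResourceGroup{
		Name: name,
		SelectFilter: client.AutoSelectFilter{
			PlaceCount:  placeCount,
			StoragePool: storagePool,
		},
	})
}

// deletePool removes the backing resource group.
func deletePool(ctx context.Context, c *client.Client, name string) error {
	return c.ResourceGroups.Delete(ctx, name)
}
```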
Sure! Feel free to ping me :)
While doing the Terraform side of things to provision the VMs for Linstor, @winiciusallan and I figured that it would make more sense to just use incus-deploy as a base. With lxc/incus-deploy#20 we now have a way to deploy a basic but functional Linstor cluster alongside Incus. That should be enough for a development environment, so we can start working on the integration itself.
Hi @bensmrs! I've sent you an invite on LinkedIn so we can discuss the teamwork and the decisions we've made so far.
Oh sorry, that’s not a network I often check… See you there!
Hello folks! Just sharing a quick status update. I've been collaborating closely with @winiciusallan to get our MVP running as soon as possible. Here's a quick demo of creating a storage pool with the linstor driver. In this particular case, Incus creates a new resource group in Linstor. We can also specify an existing resource group to use instead of creating a new one. We'll now move on to the basics of volume creation. Once we get that working reasonably well, we'll open the initial PR with the basic implementation.