Add the kvm-libvirt platform to lokoctl #810
base: master
Conversation
Force-pushed from d64a071 to ab4a38d
Thanks @pothos! I'm looking at this. In the meantime, could you please fix the linting issues? If you don't have time to follow up, we can pick this up, too.
Thanks for having a look. I can address some of the issues, but I'm not sure about all of the linting issues, because similar code is already present in other Lokomotive platforms (without disabling the linter for those lines).
Yeah, the linter only checks added/modified lines. I suggest we fix the linting issues here, and I can open a follow-up issue to do the same for the other platforms. We need a green CI before we can merge.
Thanks a lot for the PR. On the technical side this seems to work very nicely.
I have multiple issues with the docs - see my inline comments.
docs/quickstarts/kvm-libvirt.md
Outdated
sure that a couple of container images fit on the node.

```sh
$ sudo dnf install wget bzip2 qemu-img # or sudo apt install wget bzip2 qemu-utils
```
These packages are listed as requirements. If we explain how to install them then they are no longer requirements but rather a part of the guide.
Ok, should this be moved to the bullet point where the requirements are stated, like Fedora packages A B C or Debian/Ubuntu packages A B C?
IMO it could make sense to list per-distro packages for common distros in the requirements if that's the question.
I've updated the PR and stuck to the steps with the distro comments.
Thanks, I will try to go through it at the end of the week.
Cool. As said, we can also help with matching the style to our existing guides.
Feel free to push commits 👍
Force-pushed from ab4a38d to 6bfdc75
Rebased the PR, but now bootstrap reports this error and times out:
That's because no charts are copied for installation. I have got it working, plus will update the PR tomorrow after cleanup.
Force-pushed from 6bfdc75 to bfda2c3
This commit updates versions.tf and the Terraform providers to the format required by Terraform 0.13. Signed-off-by: Imran Pochi <imran@kinvolk.io>
This commit adds the control plane charts that need to be installed once the bootstrap control plane is running. Signed-off-by: Imran Pochi <imran@kinvolk.io>
Signed-off-by: Imran Pochi <imran@kinvolk.io>
Force-pushed from bfda2c3 to bd7ce15
Signed-off-by: Imran Pochi <imran@kinvolk.io>
* A Linux host OS.
* At least 4 GB of available RAM.
* An SSH key pair.
* Terraform `v0.12.x`
We now need v0.13.x
* An SSH key pair.
* Terraform `v0.12.x`
  [installed](https://learn.hashicorp.com/terraform/getting-started/install.html#install-terraform).
* [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct),
No need for this, it gets automatically installed now
#1078 😄
VirtualCPUs int `hcl:"virtual_cpus,optional"`
VirtualMemory int `hcl:"virtual_memory,optional"`
CLCSnippets []string `hcl:"clc_snippets,optional"`
Labels string `hcl:"labels,optional"`
We should use `map[string]string`, as we are doing with other platforms.
{{- if $pool.Labels }}
labels = {{ $pool.Labels }}
{{- end }}
{{- if $pool.Labels }}
labels = {
{{- range $k, $v := $pool.Labels }}
"{{ $k }}" = "{{ $v }}",
{{- end }}
}
pkg/platform/kvmlibvirt/template.go
Outdated
provider "ct" {
  version = "~> 0.5.0"
}

provider "local" {
  version = "~> 1.2"
}

provider "null" {
  version = "~> 2.1"
}

provider "template" {
  version = "~> 2.1"
}

provider "tls" {
  version = "~> 2.0"
}

provider "random" {
  version = "~> 2.2"
}
I guess this can be removed from here and moved to assets/terraform-modules/kvm-libvirt/flatcar-linux/kubernetes/versions.tf
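For reference, moving the provider version constraints into versions.tf under Terraform 0.13 would look roughly like the sketch below. This is an assumption of the target layout, not the file from this PR, and the registry `source` addresses (especially `poseidon/ct` and `dmacvicar/libvirt`) are my guesses at the providers' upstream namespaces.

```hcl
# Sketch of a Terraform 0.13-style versions.tf (assumed layout and sources).
terraform {
  required_version = ">= 0.13"

  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.5.0"
    }
    libvirt = {
      source = "dmacvicar/libvirt"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 1.2"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 2.1"
    }
    template = {
      source  = "hashicorp/template"
      version = "~> 2.1"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "~> 2.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 2.2"
    }
  }
}
```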
}

provider "libvirt" {
  uri = "qemu:///system"
Just started setting up Terraform with an existing libvirtd cluster (not Lokomotive) and noticed this PR.
For those of us with a home 'server', it would be great if we could pass the KVM host through the cluster configuration, perhaps along the lines of:
{{- if .Config.QemuURI }}
uri = {{.Config.QemuURI }}
{{else}}
uri = "qemu:///system"
{{- end}}
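With a template branch like the one above, the rendered provider block for a remote host might look like the following. `QemuURI` is the hypothetical option name from the snippet, and `qemu+ssh://` is libvirt's standard transport for hosts reachable over SSH.

```hcl
# Hypothetical rendered output if a QemuURI-style option existed in the
# cluster configuration; host and user names are placeholders.
provider "libvirt" {
  uri = "qemu+ssh://user@kvm-host.example.com/system"
}
```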
To use a remote location? Maybe it's not easy to get this working with the image volume pool, or?
Providing a:
- external file, e.g. `https://example.com/flat-car.img`, results in the Terraform runner (laptop) downloading it and then copying it across the network to the server. I think this is reasonable.
- laptop-local file, e.g. `/path/to/flat-car.img`, results in the Terraform runner copying it across the network to the server. Also reasonable.
It is unintuitive, but I don't think there's an alternative way to have the remote KVM host fetch images - see: dmacvicar/terraform-provider-libvirt#299 (comment)
I currently supply a path to images that's actually an NFS mount from the 'server', which is kind of gross (looping data), but works in the sense that I don't need to keep the image on my laptop. 😅
```hcl
# Qcow2 base OS volume that can be shared by VMs (domains)
resource "libvirt_volume" "os_base" {
  name   = "${var.guest_hostname}-os_base"
  source = "/mnt/nfs/path/to/ubuntu-focal-server-cloudimg-amd64.img"
}

# OS volume per guest VM
resource "libvirt_volume" "os" {
  count          = var.guest_count
  name           = "${var.guest_hostname}-${format("%02d", count.index)}-os.qcow2"
  base_volume_id = libvirt_volume.os_base.id
  pool           = var.guest_pool_name
  size           = var.os_volume_size
}
```
Please let me know if I misunderstood the question.
If the concept is amenable, perhaps we could add a line to the markdown file saying that image sources will be fetched by the lokoctl executor and copied to the remote KVM host?
Maybe I'll build a fork of this PR with the change and do some testing first. :)
Then I can do a small PR once this one is merged, or just post here that it works as suspected, if this is still being iterated on.
I started building this PR and trying to run it against my remote KVM server.
On top of the couple of string-quoting fixes suggested here, the changes are: https://github.com/kskewes/lokomotive/commit/6246cc76e8e7287759023565e5fc000677aa4266
It creates all of the libvirt domains and related resources, but the default NAT networking doesn't enable access from the lokoctl runner to the hosts. It really needs to be a bridge network for me. I'll have another go later and can iterate in a fork.
DisableSelfHostedKubelet bool `hcl:"disable_self_hosted_kubelet,optional"`
EnableReporting bool `hcl:"enable_reporting,optional"`
EnableAggregation bool `hcl:"enable_aggregation,optional"`
EnableTLSBootstrap bool `hcl:"enable_tsl_bootstrap,optional"`
Suggested change:
```diff
- EnableTLSBootstrap bool `hcl:"enable_tsl_bootstrap,optional"`
+ EnableTLSBootstrap bool `hcl:"enable_tls_bootstrap,optional"`
```
{{- end }}

{{- if $pool.VirtualMemory }}
virtual_memory {{ $pool.VirtualMemory }}
Suggested change:
```diff
- virtual_memory {{ $pool.VirtualMemory }}
+ virtual_memory = {{ $pool.VirtualMemory }}
```
{{- end }}

{{- if .Config.ControllerVirtualMemory}}
virtual_memory {{ .Config.ControllerVirtualMemory}}
Suggested change:
```diff
- virtual_memory {{ .Config.ControllerVirtualMemory}}
+ virtual_memory = {{ .Config.ControllerVirtualMemory}}
```
{{- end }}

{{- if .Config.PodCidr }}
pod_cidr = {{.Config.PodCidr }}
Suggested change:
```diff
- pod_cidr = {{.Config.PodCidr }}
+ pod_cidr = "{{.Config.PodCidr }}"
```
pod_cidr = {{.Config.PodCidr }}
{{- end }}
{{- if .Config.ServiceCidr }}
service_cidr = {{.Config.ServiceCidr }}
Suggested change:
```diff
- service_cidr = {{.Config.ServiceCidr }}
+ service_cidr = "{{.Config.ServiceCidr }}"
```
cluster_domain_suffix = {{$.Config.ClusterDomainSuffix }}
{{- end }}
{{- if $.Config.ServiceCidr }}
service_cidr = {{$.Config.ServiceCidr }}
Suggested change:
```diff
- service_cidr = {{$.Config.ServiceCidr }}
+ service_cidr = "{{$.Config.ServiceCidr }}"
```
{{- end }}

{{- if .Config.ClusterDomainSuffix }}
cluster_domain_suffix = {{.Config.ClusterDomainSuffix }}
Suggested change:
```diff
- cluster_domain_suffix = {{.Config.ClusterDomainSuffix }}
+ cluster_domain_suffix = "{{.Config.ClusterDomainSuffix }}"
```
machine_domain = "{{$.Config.MachineDomain}}"
cluster_name = "{{$.Config.ClusterName}}"
{{- if $.Config.ClusterDomainSuffix }}
cluster_domain_suffix = {{$.Config.ClusterDomainSuffix }}
Suggested change:
```diff
- cluster_domain_suffix = {{$.Config.ClusterDomainSuffix }}
+ cluster_domain_suffix = "{{$.Config.ClusterDomainSuffix }}"
```
{{- end }}

{{- if .Config.NetworkIPAutodetectionMethod }}
network_ip_autodetection_method = {{ .Config.NetworkIPAutodetectionMethod }}
Suggested change:
```diff
- network_ip_autodetection_method = {{ .Config.NetworkIPAutodetectionMethod }}
+ network_ip_autodetection_method = "{{ .Config.NetworkIPAutodetectionMethod }}"
```
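The quoting fixes suggested throughout this review all follow the same rule: numeric values can be interpolated bare into the generated Terraform, but string values such as CIDRs must be wrapped in quotes in the template, or the generated HCL will not parse. A minimal runnable sketch (field and template names are illustrative, not the lokoctl code):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// kvmConfig is a stand-in for the platform config struct.
type kvmConfig struct {
	VirtualMemory int    // rendered bare:   virtual_memory = 2048
	PodCidr       string // must be quoted:  pod_cidr = "10.2.0.0/16"
}

// cfgTmpl quotes the string field in the template itself, as the
// review suggestions do, while leaving the int field unquoted.
const cfgTmpl = `virtual_memory = {{ .VirtualMemory }}
pod_cidr = "{{ .PodCidr }}"
`

// render executes the template and returns the generated HCL snippet.
func render(c kvmConfig) string {
	t := template.Must(template.New("cfg").Parse(cfgTmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, c); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(render(kvmConfig{VirtualMemory: 2048, PodCidr: "10.2.0.0/16"}))
}
```

Without the quotes in the template, `pod_cidr = 10.2.0.0/16` would be emitted, which Terraform rejects.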
@kskewes if you're interested in Lokomotive using libvirt underneath, there is also #392, which might be in better shape than this PR (though libvirt usage there is only for testing).
Had a go at running up #392.
Ultimately I think either continuing with #810 or doing my own libvirt-plus-bare-metal quickstart might be the fastest way for me to come up to speed on Flatcar Linux and Lokomotive. Then I'll look at how I might be able to assist with #392.
The KVM libvirt Terraform module was not yet available from lokoctl,
despite it being an easy way to try out Lokomotive without any cloud
provider accounts (and their attached costs). It allows you to use
Lokomotive with any Flatcar Container Linux image, even local
development builds. It also gives direct access to the VGA console for
debugging. The cluster VMs can be shut down in virt-manager while they
are not needed.
(Reactivating the CI pipeline for libvirt is the next step.)
Closes: #214