[v1.9.1] Upgrader confused about install disk #10069
Comments
This was referenced Dec 29, 2024
I receive the same error with the same dmesg output upgrading from 1.9.0 to 1.9.1. I have never used ZFS in my cluster.
smira added a commit to smira/go-blockdevice that referenced this issue (Jan 13, 2025):

> Adjust the order of probing once again:
>
> * push up probes which have a strict magic match (xfs, extfs, squashfs, etc.), and for filesystems which are commonly used in Talos
> * keep GPT after that, as it doesn't have strict magic, but still should come early enough before other probes
> * ZFS has a very wide way of looking for a superblock, which might match ZFS at the end of the disk in a partition, while the disk is actually GPT, so keep it low (and it's only an optional extension)
>
> See siderolabs/talos#10069
>
> Signed-off-by: Andrey Smirnov <andrey.smirnov@siderolabs.com>
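The first-match probe ordering described in the commit message above can be sketched as follows. This is a simplified illustration, not the actual go-blockdevice code: the probe list, magic offsets, and `detect` helper are assumptions for demonstration (XFS really does use the `XFSB` magic at offset 0, and GPT the `EFI PART` signature at LBA 1, i.e. byte offset 512 on 512-byte-sector disks).

```go
package main

import "fmt"

// probe checks a device image for a filesystem or partition-table signature.
type probe struct {
	name  string
	match func(dev []byte) bool
}

// probes are tried in order and the first match wins, so strict-magic
// filesystems come first, GPT next, and wide-scanning probes (like ZFS,
// which searches a large region for a superblock) come last.
var probes = []probe{
	{"xfs", func(d []byte) bool {
		return len(d) >= 4 && string(d[:4]) == "XFSB" // strict magic at offset 0
	}},
	{"gpt", func(d []byte) bool {
		return len(d) >= 520 && string(d[512:520]) == "EFI PART" // signature at LBA 1
	}},
	{"zfs", func(d []byte) bool {
		// ZFS scans widely for uberblocks/labels; elided in this sketch.
		return false
	}},
}

// detect returns the name of the first probe that matches the device.
func detect(dev []byte) string {
	for _, p := range probes {
		if p.match(dev) {
			return p.name
		}
	}
	return "unknown"
}

func main() {
	// A disk with a zeroed first sector and a GPT header at LBA 1:
	// because GPT is probed before ZFS, a stale ZFS label elsewhere on
	// the disk can no longer shadow the partition table.
	disk := make([]byte, 1024)
	copy(disk[512:], "EFI PART")
	fmt.Println(detect(disk)) // gpt
}
```

The fix is purely about ordering: ZFS's permissive superblock scan could match leftover data inside a partition near the end of a GPT disk, so moving strict-magic probes and GPT ahead of it prevents the misidentification.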
smira added a commit to smira/talos that referenced this issue (Jan 16, 2025):

> Fixes siderolabs#10069
>
> Pulls in siderolabs/go-blockdevice#122
>
> Signed-off-by: Andrey Smirnov <andrey.smirnov@siderolabs.com>
> (cherry picked from commit 5bc3e34)
Confirmed fixed on my end, thanks!
Bug Report
Description
Upgrading our bare metal nodes from v1.8.4 to v1.9.1 fails. The installer claims the selected install disk (which is the same as the v1.8.4 install disk from which Talos is running) is formatted with ZFS.
We do have several disks formatted with ZFS, so it seems like the installer is getting confused about which disk it's probing, despite claiming that it's looking at the correct install disk (/dev/nvme0n1, in this case).

Up until now, our install-related machine config looked like this, where we specified the install disk very carefully:

When the v1.9.1 upgrade failed, we reconfigured the machine config to use the new diskSelector option, and now the install-related machine config looks like this:

But the problem persists.
Logs
dmesg output during the failed upgrade:

List of disks:
Two of the Micron 3.8TB NVMe drives are formatted with ZFS, and the other is whatever format Mayastor is using.
Relevant part of talosctl mounts output showing that Talos is in fact running on nvme0n1:
:Environment
Talos version: [talosctl version --nodes <problematic nodes>]
Kubernetes version: [kubectl version --short]
Platform: Bare metal, x86_64-linux.