starting minikube with podman fails - Error validating CNI config file /etc/cni/net.d/minikube.conflist: plugin does not support config version #17754

Closed
boemitsu opened this issue Dec 8, 2023 · 11 comments
Labels
co/podman-driver: podman driver issues
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@boemitsu

boemitsu commented Dec 8, 2023

What Happened?

When trying to run

minikube start --driver=podman

I get the error message below. How can I fix this? Thanks for any advice.

πŸ˜„ minikube v1.32.0 on Ubuntu 22.04
✨ Using the podman driver based on user configuration
πŸ“Œ Using Podman driver with root privileges
πŸ‘ Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
E1208 14:10:55.746357 70256 cache.go:189] Error downloading kic artifacts: not yet implemented, see issue #8426
πŸ”₯ Creating podman container (CPUs=2, Memory=3900MB) ...
βœ‹ Stopping node "minikube" ...
πŸ”₯ Deleting "minikube" in podman ...
🀦 StartHost failed, but will try again: creating host: create: creating: create kic node: create container: sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec --memory=3900mb -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42: exit status 127
stdout:

stderr:
time="2023-12-08T14:10:59+01:00" level=warning msg="Error validating CNI config file /etc/cni/net.d/minikube.conflist: [plugin bridge does not support config version "1.0.0" plugin portmap does not support config version "1.0.0" plugin firewall does not support config version "1.0.0" plugin tuning does not support config version "1.0.0"]"
time="2023-12-08T14:10:59+01:00" level=warning msg="Error validating CNI config file /etc/cni/net.d/minikube.conflist: [plugin bridge does not support config version "1.0.0" plugin portmap does not support config version "1.0.0" plugin firewall does not support config version "1.0.0" plugin tuning does not support config version "1.0.0"]"
time="2023-12-08T14:10:59+01:00" level=error msg="error loading cached network config: network "minikube" not found in CNI cache"
time="2023-12-08T14:10:59+01:00" level=warning msg="falling back to loading from existing plugins on disk"
time="2023-12-08T14:10:59+01:00" level=warning msg="Error validating CNI config file /etc/cni/net.d/minikube.conflist: [plugin bridge does not support config version "1.0.0" plugin portmap does not support config version "1.0.0" plugin firewall does not support config version "1.0.0" plugin tuning does not support config version "1.0.0"]"
time="2023-12-08T14:10:59+01:00" level=error msg="Error tearing down partially created network namespace for container bbee02eb911d3a00ec5e2dbf14a881275964f52b266689b8c430508e29c93811: CNI network "minikube" not found"
Error: error configuring network namespace for container bbee02eb911d3a00ec5e2dbf14a881275964f52b266689b8c430508e29c93811: CNI network "minikube" not found

πŸ”₯ Creating podman container (CPUs=2, Memory=3900MB) ...
😿 Failed to start podman container. Running "minikube delete" may fix it: creating host: create: creating: setting up container node: creating volume for minikube container: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
time="2023-12-08T14:11:15+01:00" level=warning msg="Error validating CNI config file /etc/cni/net.d/minikube.conflist: [plugin bridge does not support config version "1.0.0" plugin portmap does not support config version "1.0.0" plugin firewall does not support config version "1.0.0" plugin tuning does not support config version "1.0.0"]"
Error: volume with name minikube already exists: volume already exists

❌ Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: setting up container node: creating volume for minikube container: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
time="2023-12-08T14:11:15+01:00" level=warning msg="Error validating CNI config file /etc/cni/net.d/minikube.conflist: [plugin bridge does not support config version "1.0.0" plugin portmap does not support config version "1.0.0" plugin firewall does not support config version "1.0.0" plugin tuning does not support config version "1.0.0"]"
Error: volume with name minikube already exists: volume already exists

--
minikube.conflist

{
"args": {
"podman_labels": {
"created_by.minikube.sigs.k8s.io": "true",
"name.minikube.sigs.k8s.io": "minikube"
}
},
"cniVersion": "1.0.0",
"name": "minikube",
"plugins": [
{
"type": "bridge",
"bridge": "cni-podman1",
"isGateway": true,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"routes": [
{
"dst": "0.0.0.0/0"
}
],
"ranges": [
[
{
"subnet": "192.168.58.0/24",
"gateway": "192.168.58.1"
}
]
]
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
},
{
"type": "firewall",
"backend": ""
},
{
"type": "tuning"
},
{
"type": "dnsname",
"domainName": "dns.podman",
"capabilities": {
"aliases": true
}
}
]
}

Podman
Version: 3.4.4
API Version: 3.4.4
Go Version: go1.18.1
Built: Thu Jan 1 01:00:00 1970
OS/Arch: linux/amd64

minikube version: v1.32.0

containernetworking-plugins is already the newest version (0.9.1+ds1-1).
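
(Support for CNI spec version "1.0.0" only arrived in containernetworking-plugins v1.0.0, so the 0.9.1 plugins shipped here cannot parse a conflist that declares "cniVersion": "1.0.0". A quick way to confirm the mismatch, assuming the stock Ubuntu paths:)

$ grep cniVersion /etc/cni/net.d/minikube.conflist
  "cniVersion": "1.0.0",
$ dpkg -s containernetworking-plugins | grep '^Version'
Version: 0.9.1+ds1-1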

Attach the log file

n/a

Operating System

Ubuntu

Driver

Podman

@afbjorklund
Collaborator

afbjorklund commented Dec 8, 2023

Hmm, that version of podman (3.4.4) should have created the network with cniVersion: 0.4.0

If you create a new network with sudo podman network create, does the config look OK?
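
For example, a rough check (assuming the default rootful config directory /etc/cni/net.d; "testnet" is just a throwaway name):

$ sudo podman network create testnet
$ grep cniVersion /etc/cni/net.d/testnet.conflist
$ sudo podman network rm testnet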

@afbjorklund added the co/podman-driver, kind/bug, and priority/awaiting-more-evidence labels on Dec 8, 2023
@boemitsu
Author

boemitsu commented Dec 9, 2023

config doesn't look good...

$ sudo podman network create
WARN[0000] Error validating CNI config file /etc/cni/net.d/minikube.conflist: [plugin bridge does not support config version "1.0.0" plugin portmap does not support config version "1.0.0" plugin firewall does not support config version "1.0.0" plugin tuning does not support config version "1.0.0"] 
/etc/cni/net.d/cni-podman2.conflist

@boemitsu
Author

boemitsu commented Dec 9, 2023

It's a fresh Ubuntu 22.04.3 installation; I was following the guide at https://minikube.sigs.k8s.io/docs/start/ to install minikube.

$ sudo apt search podman
Sorting... Done
Full Text Search... Done
catatonit/jammy,now 0.1.7-1 amd64 [installed,automatic]
  init process for containers

cockpit-podman/jammy,jammy 45-1 all
  Cockpit component for Podman containers

conmon/jammy,now 2.0.25+ds1-1.1 amd64 [installed,automatic]
  OCI container runtime monitor

golang-github-containernetworking-plugin-dnsname/jammy,now 1.3.1+ds1-2 amd64 [installed,automatic]
  name resolution for containers

podman/jammy-updates,jammy-security,now 3.4.4+ds1-1ubuntu1.22.04.2 amd64 [installed]
  engine to run OCI-based containers in Pods

podman-docker/jammy-updates,jammy-security 3.4.4+ds1-1ubuntu1.22.04.2 amd64
  engine to run OCI-based containers in Pods - wrapper for docker

podman-toolbox/jammy 0.0.99.2-2ubuntu1 amd64
  unprivileged development environment using containers

resource-agents-extra/jammy-updates 1:4.7.0-1ubuntu7.2 amd64
  Cluster Resource Agents

ruby-docker-api/jammy,jammy 2.2.0-1 all
  Ruby gem to interact with docker.io remote API

@boemitsu
Author

boemitsu commented Dec 9, 2023

$ apt list --all-versions podman
Listing... Done
podman/jammy-updates,jammy-security,now 3.4.4+ds1-1ubuntu1.22.04.2 amd64 [installed]
podman/jammy 3.4.4+ds1-1ubuntu1 amd64

I changed to the other version of podman:

sudo apt install podman=3.4.4+ds1-1ubuntu1

then restarted minikube from scratch... and it worked :)

minikube delete --all
...
minikube start --driver=podman
πŸ˜„  minikube v1.32.0 on Ubuntu 22.04
✨  Using the podman driver based on user configuration
πŸ“Œ  Using Podman driver with root privileges
πŸ‘  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E1209 09:14:08.522951   42018 cache.go:189] Error downloading kic artifacts:  not yet implemented, see issue #8426
πŸ”₯  Creating podman container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
    β–ͺ Generating certificates and keys ...
    β–ͺ Booting up control plane ...
    β–ͺ Configuring RBAC rules ...
πŸ”—  Configuring bridge CNI (Container Networking Interface) ...
πŸ”Ž  Verifying Kubernetes components...
    β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
πŸ’‘  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@boemitsu
Author

Just for the sake of completeness, the shared.conflist file now also has cniVersion 0.4.0, so it seems to be an issue with the installed podman version...

   "cniVersion": "0.4.0",
   "name": "shared",
   "plugins": [
      {
         "type": "bridge",
         "bridge": "cni-podman0",
         "isGateway": true,
         "ipMasq": true,
         "hairpinMode": true,
         "ipam": {
            "type": "host-local",
            "routes": [
               {
                  "dst": "0.0.0.0/0"
               }
            ],
            "ranges": [
               [
                  {
                     "subnet": "10.88.2.0/24",
                     "gateway": "10.88.2.1"
                  }
               ]
            ]
         }
      },
      {
         "type": "portmap",
         "capabilities": {
            "portMappings": true
         }
      },
      {
         "type": "firewall",
         "backend": ""
      },
      {
         "type": "tuning"
      },
      {
         "type": "dnsname",
         "domainName": "dns.podman",
         "capabilities": {
            "aliases": true
         }
      }
   ]
}

@fredjeck

fredjeck commented Jan 27, 2024

Hmm, that version of podman (3.4.4) should have created the network with cniVersion: 0.4.0

If you create a new network with sudo podman network create, does the config look OK ?

Chiming in as I am currently struggling with the same issue.

> podman version
Version:      3.4.4
API Version:  3.4.4

> podman network create
/home/fred/.config/cni/net.d/cni-podman0.conflist

>  podman network ls
WARN[0000] Error validating CNI config file /home/fred/.config/cni/net.d/cni-podman0.conflist: [plugin bridge does not support config version "1.0.0" plugin portmap does not support config version "1.0.0" plugin firewall does not support config version "1.0.0" plugin tuning does not support config version "1.0.0"] 
NETWORK ID    NAME         VERSION     PLUGINS
39e9c7a64c68  cni-podman0  1.0.0       bridge,portmap,firewall,tuning,dnsname

So it looks like the issue is on the podman side.

Edit: podman Launchpad bug reference:
https://bugs.launchpad.net/ubuntu/+source/libpod/+bug/2024394

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jun 25, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@nkolatsis

nkolatsis commented Aug 7, 2024

For anyone here now: the solution was found in the Launchpad topic linked above, and it fixed the issue for me.
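
As I understand the Launchpad thread, the workaround is to rewrite the generated config to a spec version the 0.9.x plugins accept; roughly (back up the file first; the path and exact version strings here are assumptions based on the configs shown above):

$ sudo sed -i 's/"cniVersion": "1.0.0"/"cniVersion": "0.4.0"/' /etc/cni/net.d/minikube.conflist
$ minikube start --driver=podman

Note that recreating the network (e.g. via minikube delete) will regenerate the config with "1.0.0" on the affected podman build, so downgrading or upgrading podman is the more durable fix.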
