
Alternative to sudo, for remote podman #6809

Closed
afbjorklund opened this issue Jun 28, 2020 · 21 comments
Labels: locked - please file new issue/PR, stale-issue
@afbjorklund (Contributor)

When transitioning from the sudo varlink bridge to the new REST API, is there an alternative to logging in as root with ssh?

Like adding a "podman" root-equivalent group, or starting the "podman.sock" socket as some other privileged user*, perhaps.

* I think CoreOS is doing this (for the core user)?

Running rootless isn't the question here; it's about running as root.


Basically wondering what to use for the CONTAINER_HOST

Here is how they are using DOCKER_HOST (i.e. for docker):

If you have ssh login for root on the remote machine set export DOCKER_HOST=ssh://root@example.com (may be considered bad security practices)

To use a non-root user, set export DOCKER_HOST=ssh://user@example.com after adding the user to the docker group using sudo usermod -aG docker user. (tested on Ubuntu 18.04)

@afbjorklund (Contributor, Author)

Also enquiring about the usual approach to running local podman; currently using passwordless sudo to do it (sudo -n podman).

@baude (Member) commented Jun 29, 2020

@afbjorklund we talked about this just last week. Probably still looking for a solution here. What are your thoughts, and what do you favor?

@afbjorklund (Contributor, Author) commented Jun 29, 2020

I think root logins might be disabled by default, so I would need to enable them:

PermitRootLogin prohibit-password

And then copy the authorized keys during boot, from the user to the /root dir.

mkdir /root/.ssh
chmod 700 /root/.ssh
cp /home/docker/.ssh/authorized_keys /root/.ssh/
chmod 600 /root/.ssh/authorized_keys

I'm not sure what the implications of chown'ing the podman.sock are, or how you would do it.

For dockerd I think it just uses the default settings, which uses the docker group.

  -G, --group string                            Group for the unix socket (default "docker")

Would you edit the user (or group) in the systemd unit somewhere, perhaps?

[Socket]
ListenStream=%t/podman/podman.sock
SocketMode=0660

https://www.freedesktop.org/software/systemd/man/systemd.socket.html

SocketUser=, SocketGroup=

Takes a UNIX user/group name. When specified, all AF_UNIX sockets and FIFO nodes in the file system are owned by the specified user and group. If unset (the default), the nodes are owned by the root user/group (if run in system context) or the invoking user/group (if run in user context). If only a user is specified but no group, then the group is derived from the user's default group.

SocketUser=docker
SocketGroup=docker
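Put together, those two settings would live in a drop-in override for the system socket unit; a minimal sketch, assuming a "podman" group has been created first (e.g. with groupadd -r podman):

```ini
# /etc/systemd/system/podman.socket.d/override.conf
# Hand the root socket to a hypothetical "podman" group, docker-style
[Socket]
SocketMode=0660
SocketUser=root
SocketGroup=podman
```

After systemctl daemon-reload and restarting podman.socket, members of that group should be able to reach the root socket.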

@afbjorklund (Contributor, Author)

For minikube/machine, there is a user called "docker" who is part of the group "docker".

It's also a member of the group "wheel", which enables it to use sudo at will...

/etc/sudoers

root ALL=(ALL) ALL
%wheel ALL=(ALL) NOPASSWD: ALL

The group might also be called "sudo" (in Ubuntu), but it works the same way.


Currently we are running sudo varlink bridge to activate the io.podman socket,
and then we prefix all commands that are run over ssh, like: sudo -n podman load.

It would probably have been easier to add a "podman" root group, but that seemed undesired?
All the podman documentation refers to running "sudo podman" (unlike plain "docker").
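The prefixing can be captured in one shell variable; a minimal sketch (the host, port, and key path are placeholders, and sudo -n fails instead of prompting if passwordless sudo is not actually configured):

```shell
# Build the ssh + passwordless-sudo prefix used to run root podman remotely
# (host, port, and key path below are illustrative placeholders)
SSH_KEY="/home/user/.minikube/machines/minikube/id_rsa"
SSH_DEST="docker@192.168.99.100"
PODMAN="ssh -i $SSH_KEY -p 22 $SSH_DEST -- sudo -n podman"

# every remote command is then just a prefix away, e.g.:
echo "$PODMAN load -i image.tar"
```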

@afbjorklund (Contributor, Author) commented Jun 29, 2020

Currently, if I run the new podman-remote (2.0.1) it will silently fall back to building the images locally instead. :-(

The previous versions (2.0.0) would give an error, after trying to contact a local socket - rather than use the varlink...

Error: Get http://d/v1.0.0/libpod../../../_ping: dial unix ///run/user/1000/podman/podman.sock: connect: no such file or directory

So one either has to use the old podman-remote (1.9.3, since 1.8.x hangs), or we need to add support for this new socket.

Unfortunately it is not included by default:

/usr/lib/systemd
/usr/lib/systemd/system
/usr/lib/systemd/system/io.podman.service
/usr/lib/systemd/system/io.podman.socket
/usr/lib/systemd/user
/usr/lib/systemd/user/io.podman.service
/usr/lib/systemd/user/io.podman.socket

Or at least it is missing from Ubuntu 20.04

podman/unknown,now 2.0.1~1 amd64 [installed]

@rhatdan (Member) commented Jun 29, 2020

I am fine with creating a podman group and adding write access to the socket, but not by default. This would have to be configured in containers.conf, with strong words about how dangerous this is.

Setting up podman group access to the root-running podman is equivalent to giving passwordless sudo access to root, and potentially worse.

Is there something we could do as an alternative with systemd? I.e., I turn on the systemd podman.sock socket for a particular user, and it sets up permissions for just this user.
Bottom line, I would want this to be specific to the remote service over ssh, rather than something local users of a Linux system start doing.

@afbjorklund (Contributor, Author) commented Jun 29, 2020

It seems to work when starting the socket manually and changing the owner (also on the directory, not only the socket):

drwx------ 2 docker root 60 Jun 29 16:35 /run/podman
srw------- 1 docker root  0 Jun 29 16:35 /run/podman/podman.sock

But one has to add --remote (even to podman-remote), and one has to add the socket path and the secure parameter.

CONTAINER_HOST=ssh://docker@127.0.0.1:39375/run/podman/podman.sock?secure=true

And it seems like $CONTAINER_HOST stopped working, so one has to use --url "$CONTAINER_HOST" for it to work.

podman-remote --remote --url $CONTAINER_HOST version


 podman-remote version
Version:      2.0.1
API Version:  1
Go Version:   go1.13.8
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64
 podman-remote --remote version
Error: Get http://d/v1.0.0/libpod../../../_ping: dial unix ///run/user/1000/podman/podman.sock: connect: no such file or directory
 podman-remote --remote --url $CONTAINER_HOST version
ERRO[0000] Failed to parse known_hosts: ...
ERRO[0000] Failed to parse known_hosts: ...
Client:
Version:      2.0.1
API Version:  1
Go Version:   go1.13.8
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Server:
Version:      2.0.1
API Version:  0
Go Version:   go1.13.8
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

@afbjorklund (Contributor, Author) commented Jul 5, 2020

The non-working podman-remote and the missing podman.socket were due to the Ubuntu 20.04 packaging* (not using make)

* #6598 (comment)

When building from source, both are OK. That is, the binary does have the build flags, and the APIv2 services do get installed...

Ubuntu:


[  179s] install  -m 644 contrib/varlink/io.podman.socket /usr/src/packages/BUILD/debian/tmp/usr/lib/systemd/system/io.podman.socket
[  179s] install  -m 644 contrib/varlink/io.podman.socket /usr/src/packages/BUILD/debian/tmp/usr/lib/systemd/user/io.podman.socket
[  179s] install  -m 644 contrib/varlink/io.podman.service /usr/src/packages/BUILD/debian/tmp/usr/lib/systemd/system/io.podman.service
[  179s] # User units are ordered differently, we can't make the *system* multi-user.target depend on a user unit.
[  179s] # For user units the default.target that's the default is fine.
[  179s] sed -e 's,^WantedBy=.*,WantedBy=default.target,' < contrib/varlink/io.podman.service > /usr/src/packages/BUILD/debian/tmp/usr/lib/systemd/user/io.podman.service

Source:

# Install APIV2 services
install  -m 644 contrib/systemd/user/podman.socket /usr/local/lib/systemd/user/podman.socket
install  -m 644 contrib/systemd/user/podman.service /usr/local/lib/systemd/user/podman.service
install  -m 644 contrib/systemd/system/podman.socket /usr/local/lib/systemd/system/podman.socket
install  -m 644 contrib/systemd/system/podman.service /usr/local/lib/systemd/system/podman.service

@lsm5 : that bug in the user io.podman.service seems to also be present in user/podman.service; was it reported somewhere?

https://github.com/containers/libpod/blob/v2.0.1/contrib/systemd/user/podman.service#L16

@afbjorklund (Contributor, Author)

@jwhonce

Now that podman-machine doesn't work anymore, here is how to do the setup with Vagrant:

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/32-cloud-base"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "1024"
  end

  config.vm.provision "shell", inline: <<-SHELL
    yum install -y podman

    groupadd -f -r podman

    #systemctl edit podman.socket
    mkdir -p /etc/systemd/system/podman.socket.d
    cat >/etc/systemd/system/podman.socket.d/override.conf <<EOF
[Socket]
SocketMode=0660
SocketUser=root
SocketGroup=podman
EOF
    systemctl daemon-reload
    echo "d /run/podman 0770 root podman" > /etc/tmpfiles.d/podman.conf
    sudo systemd-tmpfiles --create

    systemctl enable podman.socket
    systemctl start podman.socket

    usermod -aG podman $SUDO_USER
  SHELL
end

This installs podman and adds a "podman" system group with socket access (like docker).

Then one can use vagrant up to boot it, and vagrant ssh-config to get the configuration.

Important variables from ssh_config:

Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile $PWD/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
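Extracting the relevant fields from that ssh_config can be scripted; a minimal sketch with the config inlined so it runs standalone (the key path is a placeholder, and a real script would feed it from vagrant ssh-config instead):

```shell
# Parse captured `vagrant ssh-config` output into the CONTAINER_* variables
ssh_config='Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  IdentityFile /home/user/.vagrant/machines/default/virtualbox/private_key'

host=$(printf '%s\n' "$ssh_config" | awk '$1 == "HostName"     {print $2}')
user=$(printf '%s\n' "$ssh_config" | awk '$1 == "User"         {print $2}')
port=$(printf '%s\n' "$ssh_config" | awk '$1 == "Port"         {print $2}')
key=$(printf '%s\n'  "$ssh_config" | awk '$1 == "IdentityFile" {print $2}')

export CONTAINER_HOST="ssh://$user@$host:$port/run/podman/podman.sock"
export CONTAINER_SSHKEY="$key"
echo "$CONTAINER_HOST"
```

With the values above this prints ssh://vagrant@127.0.0.1:2222/run/podman/podman.sock, matching the manual exports.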

Then we export CONTAINER_HOST=ssh://vagrant@127.0.0.1:2222/run/podman/podman.sock
and export CONTAINER_SSHKEY=$PWD/.vagrant/machines/default/virtualbox/private_key,
which enables us to access it remotely. Unfortunately --remote is still broken and --url is required,
so we make sure to use the separate podman-remote binary and pass explicit parameters as a workaround:

$ podman --remote version
Version:      2.0.2
API Version:  1
Go Version:   go1.14.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64
$ podman-remote version
Error: Get "http://d/v1.0.0/libpod../../../_ping": dial unix ///run/user/1000/podman/podman.sock: connect: no such file or directory
$ podman-remote --url "$CONTAINER_HOST" --identity "$CONTAINER_SSHKEY" version
Client:
Version:      2.0.2
API Version:  1
Go Version:   go1.14.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Server:
Version:      2.0.2
API Version:  0
Go Version:   go1.14.3
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

The vagrant .box is about the same size as the fedora .iso (250% boot2podman)

290M    Fedora-Cloud-Base-Vagrant-32-1.6.x86_64.vagrant-virtualbox.box

For linux users it is also possible to use the libvirt/kvm box instead of virtualbox.

See https://vagrantcloud.com/search and https://alt.fedoraproject.org/cloud/

@afbjorklund (Contributor, Author)

Full example here: https://boot2podman.github.io/2020/07/22/machine-replacement.html

It looks like the "tmpfiles.d" entry was the missing piece when it came to changing the group...

github-actions bot commented Sep 4, 2020

A friendly reminder that this issue had no activity for 30 days.

@rhatdan (Member) commented Sep 8, 2020

@ashley-cui PTAL

@mheon mheon assigned ashley-cui and unassigned jwhonce Sep 8, 2020
@afbjorklund (Contributor, Author) commented Sep 8, 2020

For what it is worth, the systemd units are still missing in podman 2.0.6~1 as well.

    podman |    2.0.6~1 | https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04  Packages

It only has the varlink units:

/usr/lib/systemd
/usr/lib/systemd/system
/usr/lib/systemd/system/io.podman.service
/usr/lib/systemd/system/io.podman.socket
/usr/lib/systemd/user
/usr/lib/systemd/user/io.podman.service
/usr/lib/systemd/user/io.podman.socket

This gives the error: Failed to enable unit, unit podman.socket does not exist.

It does include varlink, though.

@rhatdan (Member) commented Sep 8, 2020

You mean they are not shipped within an RPM?

@rhatdan (Member) commented Sep 8, 2020

@lsm5 PTAL

@lsm5 lsm5 self-assigned this Sep 8, 2020
@lsm5 (Member) commented Sep 8, 2020

@afbjorklund 2.0.6~2 fixes the unit file issue. https://build.opensuse.org/package/show/devel:kubic:libcontainers:stable/podman . PTAL.

@afbjorklund (Contributor, Author)

Thank you, it works now.

default: Created symlink /etc/systemd/system/sockets.target.wants/podman.socket → /lib/systemd/system/podman.socket.

Client:
Version:      2.0.6
API Version:  1
Go Version:   go1.14.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Server:
Version:      2.0.6
API Version:  0
Go Version:   go1.14.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

This was Ubuntu 20.04.

@afbjorklund (Contributor, Author) commented Sep 11, 2020

We now have two working solutions: either connect as root@ and use the default, or change the group and use user@.

@ashley-cui :
I'm not sure if you want to document it anywhere on podman.io, but we now have a solution to replace the sudo varlink bridge:

$ minikube podman-env
export PODMAN_VARLINK_BRIDGE="/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3
-o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet
-o PasswordAuthentication=no -o ServerAliveInterval=60 -o
StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@127.0.0.1 -o
IdentitiesOnly=yes -i /home/anders/.minikube/machines/minikube/id_rsa -p 34749
-- sudo varlink -A \'podman varlink \\\$VARLINK_ADDRESS\' bridge"
export CONTAINER_HOST=ssh://docker@127.0.0.1:34749/run/podman/podman.sock
export CONTAINER_SSHKEY=/home/anders/.minikube/machines/minikube/id_rsa
export MINIKUBE_ACTIVE_PODMAN="minikube"

# To point your shell to minikube's podman service, run:
# eval $(minikube -p minikube podman-env)

So you can close the ticket...

But I think we will wait for 2.1.

@afbjorklund (Contributor, Author)

Note that in the case of minikube we are using podman to load images for crio; that is why root is required here...

There are also some missing pieces in the minikube OS for running rootless containers (mainly because it wasn't needed).

docker@minikube:~$ podman pull busybox
Trying to pull docker.io/library/busybox...
Getting image source signatures
Copying blob df8698476c65 done  
Copying config 6858809bf6 done  
Writing manifest to image destination
Storing signatures
ERRO[0004] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument 
  ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Trying to pull quay.io/busybox...
  error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>404 Not Found</title>\n<h1>Not Found</h1>\n<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>\n"
Error: unable to pull busybox: 2 errors occurred:
	* Error committing the finished image: error adding layer with blob "sha256:df8698476c65c2ee7ca0e9dbc2b1c8b1c91bce555819a9aaab724ac64241ba67": ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
	* Error initializing source docker://quay.io/busybox:latest: Error reading manifest latest in quay.io/busybox: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>404 Not Found</title>\n<h1>Not Found</h1>\n<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>\n"

But we do allow the user to run containers with podman, mostly to avoid them having to run two VMs (one for podman, one for k8s).
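As an aside, the "not enough IDs available in the namespace" failure shown above is the usual symptom of missing subordinate ID ranges for the user; on a typical distro these are granted in /etc/subuid and /etc/subgid. A sketch, with conventional range values that are an assumption here rather than something from this thread:

```
# /etc/subuid and /etc/subgid: give the docker user a 65536-wide
# subordinate ID range for rootless user namespaces
docker:100000:65536
```

After editing, running podman system migrate as that user should make podman pick up the new mappings.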

@ashley-cui (Member)

@afbjorklund looks like you've found working solutions, so I'm going to close this issue. Re-open if there's more to be done here.

@afbjorklund (Contributor, Author)

@ashley-cui : yes, the only thing remaining is to actually do it (the code). Should have podman support back for the next major release.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023