
Handle initContainer for Pods #6480

Closed
durandx opened this issue Jun 3, 2020 · 36 comments
Assignees
Labels
In Progress This issue is actively being worked by the assignee, please do not work on this at this time. kind/feature Categorizes issue or PR as related to a new feature. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. pods

Comments

@durandx

durandx commented Jun 3, 2020

/kind feature

A nice feature for Podman would be to handle initContainers the same way Kubernetes does.

The main problem is that container images and architecture currently have to differ between K8s and Podman pods: K8s can use a dedicated initContainer to properly initialize a pod, whereas in Podman we have to combine initialization and the core runtime in a single container. This limits reuse and compatibility between K8s and Podman, even for very simple pods.
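For reference, this is the Kubernetes behavior being requested - a minimal, illustrative pod spec (names and images are made up for the example) where an init container prepares content before the main container starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  initContainers:
    - name: setup            # runs to completion before "web" starts
      image: docker.io/library/busybox
      command: ["sh", "-c", "echo hello > /work/index.html"]
      volumeMounts:
        - name: work
          mountPath: /work
  containers:
    - name: web
      image: docker.io/library/httpd
      volumeMounts:
        - name: work
          mountPath: /usr/local/apache2/htdocs
  volumes:
    - name: work
      emptyDir: {}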

@openshift-ci-robot openshift-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Jun 3, 2020
@mheon
Member

mheon commented Jun 3, 2020

Creating the init containers seems fairly easy, and I think we can order pod startup to ensure that they are the first things to run in the pod without a problem (we already have a built-in startup dependency model), but the behaviour where Kube waits until an init container has cleanly exited to start the next one is problematic - all our dependency ordering is based on the dependency moving to the "running" state, and from a glance at the code this won't be easy to fix.

We'd probably need to add the concept of init containers to Libpod pods, and modify the podman pod start command to automatically handle this, to make this happen. This does sound useful outside of play kube so I'm not opposed to doing it this way.

@haircommander Thoughts?

@haircommander
Collaborator

Theoretically, we could spoof init containers on the Podman side and not expose them to Libpod.
The workflow could be:

  • loop through and create all init containers
  • wait until each has exited
  • clear the pod of all but the infra container
  • create and start all of the normal containers as happens now
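The steps above could be sketched with today's CLI roughly like this (pod and container names and images are illustrative; this assumes a working Podman installation, so it is a sketch rather than something runnable everywhere):

```shell
# Sketch of the spoofed-init-container workflow.
podman pod create --name mypod

# 1. Loop through and create/start the init containers.
podman run --pod mypod --name init1 docker.io/library/busybox \
    sh -c 'echo "doing setup work"'

# 2. Wait until each init container has exited.
podman wait init1

# 3. Clear the pod of all but the infra container.
podman rm init1

# 4. Create and start the normal containers as usual.
podman run -d --pod mypod docker.io/library/nginx
```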

@haircommander
Collaborator

Though, if you think it'd be useful outside of play kube, exposing it to libpod would keep the play kube code cleaner.

@mheon
Member

mheon commented Jun 3, 2020

If there's an expectation that init containers run more than once - i.e., on each start of the pod - I think libpod is a good place. Kube doesn't really hit this, given they never restart a stopped pod, just recreate it, but we will.

@mheon
Member

mheon commented Jun 3, 2020

We'll also have to modify pod status - a pod where everything except the init containers is running is still a running pod, whereas today I think we'd consider it partially stopped.

@haircommander
Collaborator

> If there's an expectation that init containers run more than once - i.e., on each start of the pod - I think libpod is a good place. Kube doesn't really hit this, given they never restart a stopped pod, just recreate it, but we will.

yeah we'd need to restart the init container process if we restarted the pod

> We'll also have to modify pod status - a pod where everything except the init containers is running is still a running pod, whereas today I think we'd consider it partially stopped.

I think a pod state of initializing would be clear for when the init containers are running. I think the init containers could be ignored (or cleaned up) when determining whether the pod is running, with little problem.

@github-actions

github-actions bot commented Jul 4, 2020

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Jul 6, 2020

@haircommander Are you working on this?

@haircommander
Collaborator

I am not currently working on this!

@rhatdan
Member

rhatdan commented Jul 6, 2020

@ryanchpowell Want to take a look?

@ryanchpowell
Collaborator

@rhatdan sounds good, looking into it!

@rhatdan
Member

rhatdan commented Dec 24, 2020

This issue is still out there.

@mheon @haircommander @durandx is this still something we should pursue?
@zhangguanzhang @Luap99 @saschagrunert WDYT?

@zhangguanzhang
Collaborator

How would we record the startup sequence of the initContainers?

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@durandx
Author

durandx commented Jan 25, 2021

@rhatdan For my part, I think it is still a good idea to have initContainers. One use case would be a generic container (e.g. tomcat) paired with a specific init container whose role is to copy, configure, and deploy one or more apps (e.g. my-app.war).

@rhatdan
Member

rhatdan commented Jan 25, 2021

Interested in working on it?

@cjeanner

Hello there,
I'll follow this one - there's a potential use case within tripleo/osp for a specific service.
The initContainers would prepare the needed configuration and DB before the app actually starts. This feature would allow us to remove a lot of Python code and rely on podman itself to manage an ephemeral service.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Mar 23, 2021

Coming up on a year later and no one has worked on this.

@zhangguanzhang
Collaborator

> Coming up on a year later and no one has worked on this.

How would we record the startup sequence of the initContainers?

@sshnaidm
Member

sshnaidm commented Apr 19, 2021

Just wondering if there's been any progress here? As @cjeanner mentioned above, it could be a killer feature for TripleO/OSP in OpenStack and would help a lot.
Thanks.

@rhatdan
Member

rhatdan commented Apr 19, 2021

@mheon would the work you have done on "requires" help with this? I.e., could we put requires into a pod, so that one of the containers starts before the others in the pod?

@mheon
Member

mheon commented Apr 19, 2021

Sure, you can likely wire this up manually in the CLI now.

@mheon
Member

mheon commented Apr 19, 2021

You'd basically create the initContainer first, then have every other container in the pod do a --requires <initcontainer>.
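A sketch of that wiring (pod and container names here are illustrative, and this assumes a Podman build with --requires support, so it is a sketch rather than a verified recipe):

```shell
# Create the pod and the init container first.
podman pod create --name mypod
podman create --pod mypod --name myinit docker.io/library/busybox \
    sh -c 'echo "initializing..."; sleep 2'

# Every other container in the pod declares a dependency on it.
podman create --pod mypod --name myapp --requires myinit \
    docker.io/library/nginx

# Starting the pod starts myinit before myapp.
podman pod start mypod
```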

@mheon
Member

mheon commented Apr 19, 2021

Unfortunately, requires only guarantees the container was started, not that whatever app inside has successfully run to completion, so there may still be some difficulty there.

@sshnaidm
Member

I think initContainers should run and finish before any containers start in the pod, at least according to their definition. To @zhangguanzhang's question: we need to take the list of initContainers from the YAML, run them, and wait for completion. Only then start all the other container(s) in the pod.

@zhangguanzhang
Collaborator

> I think initContainers should run and finish before any containers start in the pod, at least according to their definition. To @zhangguanzhang's question: we need to take the list of initContainers from the YAML, run them, and wait for completion. Only then start all the other container(s) in the pod.

For podman generate kube, the initContainer records would need to come from the containers' info.

@sshnaidm
Member

> For podman generate kube, the initContainer records would need to come from the containers' info.

Can we maybe start with podman play kube support?
We could probably use a flag like --initcontainer true on the podman run/create commands to mark a container as an initContainer for possible kube YAML generation.

@rhatdan
Member

rhatdan commented Apr 28, 2021

I could see podman play kube looking for init containers, running them within a pod, and then waiting for them to complete before starting the rest of the containers.

Perhaps we could extend the concept to pods in general, so that marking a container as an init container means that none of the other containers in the pod start until the init container is done.

@candlerb

candlerb commented May 14, 2021

ISTM that podman play kube is where this adds the most value. Otherwise, you can simulate it by running the containers sequentially in a script:

# Create empty pod
podman pod create -p 8080:80/tcp --name web

# Run the init container and wait for it to complete
podman run --pod web -v "$PWD":/usr/local/apache2/htdocs/ ubuntu:18.04 \
    sh -c "date >/usr/local/apache2/htdocs/index.html; sleep 2"

# Now start the main container
podman run -d --pod web -v "$PWD":/usr/local/apache2/htdocs/ docker.io/library/httpd

# Test
curl localhost:8080
>>> Fri May 14 08:34:12 UTC 2021

However there are issues:

  • it shows the pod status as "Degraded" (unless you actually remove the init container, using podman rm)
  • podman pod restart web restarts both containers simultaneously

So it would be better if the init container(s) in the pod could be marked for special treatment. Obviously this would also become important if podman were to provide automatic container restarts.

@candlerb

There was a point raised earlier about whether restarting a pod should re-run its init containers or not.

I can see use cases for both scenarios:

  • Right now, I'm using an initContainer to create a network bridge (virbr0), which is used by the main containers. For this application I'd definitely need the initContainer to run again if the pod were stopped and started; networking is ephemeral.
  • However I can also see cases where someone uses an initContainer to set up data in an empty data volume, and they don't want to wipe this on container restart

On balance, I think the most useful behaviour is always to run the initContainers on pod (re)start. If someone wants an initContainer which doesn't reinitialize a data volume, they can easily check first whether the data is there or not.
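If init containers always re-run on pod (re)start, that "check first whether the data is there" guard can live in the init command itself. A minimal sketch in plain shell (the marker-file name and the setup step are illustrative, standing in for real volume initialization):

```shell
DATA=$(mktemp -d)   # stands in for the pod's data volume

init() {
  # Re-runnable init: only populate the volume on the first run.
  if [ ! -f "$DATA/.initialized" ]; then
    echo "seed" > "$DATA/data.txt"   # stand-in for the real setup work
    touch "$DATA/.initialized"
  fi
}

init   # first pod start: performs the setup
init   # simulated pod restart: setup is skipped, data is preserved
cat "$DATA/data.txt"
```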

This doesn't matter too much given that k8s doesn't support restarting pods anyway, but it might be useful for those running pods under systemd.

@rhatdan rhatdan added pods and removed stale-issue labels May 20, 2021
@github-actions

github-actions bot commented Jul 5, 2021

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Jul 7, 2021

Still no one has worked on this?
Any volunteers looking for a good project?

@baude
Member

baude commented Jul 9, 2021

I'll take it ....

@baude baude self-assigned this Jul 9, 2021
@baude baude added the In Progress This issue is actively being worked by the assignee, please do not work on this at this time. label Jul 14, 2021
@zhangguanzhang
Collaborator

Fixes by: #11011
/close

@openshift-ci openshift-ci bot closed this as completed Aug 6, 2021
@openshift-ci
Contributor

openshift-ci bot commented Aug 6, 2021

@zhangguanzhang: Closing this issue.

In response to this:

Fixes by: #11011
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023