
Append hosts to dependency container's /etc/hosts file #2518

Merged
merged 1 commit into containers:master on Mar 5, 2019

Conversation

haircommander
Collaborator

Before, any container with a netNS dependency simply used its dependency container's hosts file, and didn't abide its configuration (mainly --add-host). Fix this by always appending to the dependency container's hosts file, creating one if necessary.

fixes: #2504

Note: there will be a different PR for a similar action with resolv.conf, but considering resolv.conf is more complex, and this fixes the issue, I decided to file this first.
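As background for reviewers, here is a minimal, hypothetical Go sketch of the approach (the appendHosts helper and the paths are invented for illustration and are not libpod's actual API): entries supplied via --add-host are appended to the dependency container's hosts file, which is created from a default template if it does not exist yet.

package main

import (
	"fmt"
	"os"
)

// appendHosts appends the given host entries (name -> IP) to hostsPath,
// creating the file with defaultContent first if it does not exist yet.
func appendHosts(hostsPath, defaultContent string, addHosts map[string]string) error {
	if _, err := os.Stat(hostsPath); os.IsNotExist(err) {
		if err := os.WriteFile(hostsPath, []byte(defaultContent), 0644); err != nil {
			return err
		}
	}
	f, err := os.OpenFile(hostsPath, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	for name, ip := range addHosts {
		if _, err := fmt.Fprintf(f, "%s %s\n", ip, name); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// A container joining the dependency container's netNS with
	// --add-host foo:10.0.0.5 would append that entry to the shared file.
	if err := appendHosts("/tmp/example-hosts", "127.0.0.1 localhost\n", map[string]string{"foo": "10.0.0.5"}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}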

c.state.BindMounts["/etc/resolv.conf"] = resolvPath
}

// check if dependency container has an /etc/hosts file
hostsPath, exists := bindMounts["/etc/hosts"]
Member

You've a defaultHostsFile defined now, could you use it here?

Collaborator Author

@TomSweeneyRedHat I wasn't sure if there should be a distinction between the map access in bind mounts (which happens quite a bit) and the access to the physical /etc/hosts file on the host system. If you think it should be added for the former, I'd also like to do the same with resolv.conf
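As a hedged illustration of the distinction being discussed (the constant names below are invented, not libpod's actual identifiers), one constant could name the in-container mount target used as the bind-mount map key, and a separate one the physical /etc/hosts on the host used as a default source:

package main

import "fmt"

const (
	// hostsMountTarget is the path inside the container; it is used as the
	// key in the container's bind-mount map.
	hostsMountTarget = "/etc/hosts"
	// defaultHostsFile is the physical file on the host, used as the default
	// bind-mount source when the container has no hosts file of its own.
	defaultHostsFile = "/etc/hosts"
)

func main() {
	bindMounts := map[string]string{} // mount target -> source on the host
	if _, exists := bindMounts[hostsMountTarget]; !exists {
		bindMounts[hostsMountTarget] = defaultHostsFile
	}
	fmt.Println(bindMounts)
}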

// If this file didn't exist before, save it in the depCtr's bind mount
// so it isn't overwritten in the future
if !exists {
depCtr.state.BindMounts["/etc/hosts"] = hostsPath
Member

ditto defaultHostFile

hostsPath = defaultHostFile
}
// only go through all of this if we want to add a hosts file
if len(hostsPath) > 0 {
Member

I'll let others weigh in. I'm mulling over whether we should remove the checks, except the "if !exists {" on line 712, or not. I think if a container doesn't have a hosts file we'd want to create it regardless? @mheon thoughts?

// so it isn't overwritten in the future
if !exists {
depCtr.state.BindMounts["/etc/hosts"] = hostsPath
depCtr.save()
Member

Should catch this error and log it, if nothing else

// generate a hosts file for the dependency container,
// based on either its old hosts file, or the default,
// and add the relevant information from the new container (hosts and IP)
hostsPath, err = depCtr.generateHosts(hostsPath, c)
Member

I dunno if I like this... We won't pick this up until a restart, so it's not really shared.

Do we need to force sharing of /etc/hosts in this case, or is it sufficient to create it just for this container? We override resolv.conf for custom DNS settings when sharing network namespaces.

Collaborator Author

How would other containers that are added to the namespace see it, if it was created only for a container they don't depend on (because they are laterally equal in the dependency tree)?

My other question is: where besides /etc/hosts creation is the BindMounts map needed? If a dependency container was started with an empty /etc/hosts but needed it for whatever process was running (one that shouldn't have to be restarted to see these changes), wouldn't that process see the changes when the new container appended to the depCtr's /etc/hosts file? I guess the question is: the state would be out of sync, but only on the libpod end and not from the perspective of namespace management, which I think would still work, right?

Member

We use the map to determine what bind mounts to add when creating the container spec. If we add to it after container creation time, it will never be picked up before a restart. Actually, now that I think about it, we force a regeneration of the bind mounts on each restart, so we'll never pick up this change - we'll overwrite it on restart.
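To illustrate the mechanism described above (a rough sketch under assumptions; the specMountsFromBindMounts helper below is invented and not libpod's code): the bind-mount map is only read when the OCI spec's mounts are generated, so entries added to the map after that point are not seen until the mounts are regenerated, e.g. on restart.

package main

import (
	"fmt"

	spec "github.com/opencontainers/runtime-spec/specs-go"
)

// specMountsFromBindMounts turns the bind-mount map (target -> source) into
// OCI spec mounts. Anything added to the map after this runs is not mounted.
func specMountsFromBindMounts(bindMounts map[string]string) []spec.Mount {
	mounts := make([]spec.Mount, 0, len(bindMounts))
	for target, source := range bindMounts {
		mounts = append(mounts, spec.Mount{
			Destination: target,
			Type:        "bind",
			Source:      source,
			Options:     []string{"bind", "private"},
		})
	}
	return mounts
}

func main() {
	// Purely illustrative source paths.
	bindMounts := map[string]string{
		"/etc/hosts":       "/tmp/ctr-userdata/hosts",
		"/etc/resolv.conf": "/tmp/ctr-userdata/resolv.conf",
	}
	for _, m := range specMountsFromBindMounts(bindMounts) {
		fmt.Printf("%s -> %s %v\n", m.Source, m.Destination, m.Options)
	}
}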

@rhatdan
Member

rhatdan commented Mar 4, 2019

Containers in the same Network Namespace are supposed to share the same /etc/resolv.conf and /etc/hosts file. These files need to be created either when the pod is created or when the first container is added to the pod. Then each container that gets added to the pod needs to share the pod's (initial container's) network, and thus its resolv.conf and hosts file.

Currently, podman run -t --net=container:UUID does the right thing for sharing hosts and resolv.conf. This code has to be extended for pods.

Bottom line: if there are 10 containers in a pod, there is ONE resolv.conf and ONE hosts file bind mounted into each container.

@mheon
Member

mheon commented Mar 4, 2019

The issue is that we don't always create /etc/hosts as of now.

I think the solution is to force an unconditional creation/bind mount of /etc/hosts from /etc/hosts in the image. Potential problem: we won't commit changes to hosts when we commit the container anymore.
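A minimal sketch of that proposal, with invented helper and path names (not the actual implementation): unconditionally seed a per-container hosts file from the image's /etc/hosts, falling back to an empty file if the image has none, and register it in the bind-mount map.

package main

import (
	"os"
	"path/filepath"
)

// seedHostsFile copies the image's /etc/hosts (or nothing, if absent) into
// the container's run directory and registers it as the /etc/hosts mount.
func seedHostsFile(imageRoot, ctrRunDir string, bindMounts map[string]string) (string, error) {
	contents, err := os.ReadFile(filepath.Join(imageRoot, "etc", "hosts"))
	if err != nil {
		if !os.IsNotExist(err) {
			return "", err
		}
		contents = nil // image has no /etc/hosts; start from an empty file
	}
	dest := filepath.Join(ctrRunDir, "hosts")
	if err := os.WriteFile(dest, contents, 0644); err != nil {
		return "", err
	}
	bindMounts["/etc/hosts"] = dest
	return dest, nil
}

func main() {
	// Purely illustrative directories.
	imageRoot, ctrRunDir := "/tmp/image-rootfs", "/tmp/ctr-userdata"
	for _, d := range []string{imageRoot, ctrRunDir} {
		if err := os.MkdirAll(d, 0755); err != nil {
			panic(err)
		}
	}
	bindMounts := map[string]string{}
	if _, err := seedHostsFile(imageRoot, ctrRunDir, bindMounts); err != nil {
		panic(err)
	}
}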

@rhatdan
Member

rhatdan commented Mar 4, 2019

I think that is fine. Does Docker commit changes to /etc/hosts? I would figure it's usually a mistake to commit them.

@haircommander
Collaborator Author

/etc/hosts was already unconditionally created. This PR now errors out if, for some reason, it wasn't but the dependency container is the one providing the netNS.

@rhatdan containers in the netNS did share the hosts file; the problem was that --add-host didn't write anything to the dependency container's hosts file, it just ignored the flag. This PR now appends the entries of a new container using the --add-host flag, which fixes the issue.

PTAL @mheon @rhatdan

@haircommander
Collaborator Author

haircommander commented Mar 4, 2019

Also, @mheon and I talked about the restart problem (a dependency container won't retain the host entries a new container adds to it after a restart), and it was decided there wasn't anything to do about it. Not really sure how to document this...

@rhatdan
Member

rhatdan commented Mar 4, 2019

@haircommander Let's talk about this tomorrow.

@haircommander
Collaborator Author

bot, retest this please

@mheon
Member

mheon commented Mar 5, 2019

/approve

@openshift-ci-robot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: haircommander, mheon


@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 5, 2019
@mheon mheon added Release Notes 1.2.0 and removed approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Mar 5, 2019
Before, any container with a netNS dependency simply used its dependency container's hosts file, and didn't abide its configuration (mainly --add-host). Fix this by always appending to the dependency container's hosts file, creating one if necessary.

Signed-off-by: Peter Hunt <pehunt@redhat.com>
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 5, 2019
@rhatdan
Member

rhatdan commented Mar 5, 2019

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Mar 5, 2019
@openshift-merge-robot openshift-merge-robot merged commit 40f7843 into containers:master Mar 5, 2019
@giuseppe
Member

it seems to break the use case of: podman pod stop -f $POD && podman pod start $POD with:

ERRO[0000] "error starting some containers: container already exists" 
ERRO[0000] "error creating hosts file for container 288b0a539086c0789da33391971504a67ac172b9cab609ac264776145ad0bd06 which depends on container 2deda01da8ff8c1f7c896997707f2e5f958a60254c072f8bbc0b619c0dfee325: unable to open /var/run/containers/storage/overlay-containers/2deda01da8ff8c1f7c896997707f2e5f958a60254c072f8bbc0b619c0dfee325/userdata/hosts: open /var/run/containers/storage/overlay-containers/2deda01da8ff8c1f7c896997707f2e5f958a60254c072f8bbc0b619c0dfee325/userdata/hosts: no such file or directory" 
Error: error creating hosts file for container 36799180c4a76ab1bdd6e0e83d94a22c7c1fe077b3a0db98926654662826e541 which depends on container 2deda01da8ff8c1f7c896997707f2e5f958a60254c072f8bbc0b619c0dfee325: unable to open /var/run/containers/storage/overlay-containers/2deda01da8ff8c1f7c896997707f2e5f958a60254c072f8bbc0b619c0dfee325/userdata/hosts: open /var/run/containers/storage/overlay-containers/2deda01da8ff8c1f7c896997707f2e5f958a60254c072f8bbc0b619c0dfee325/userdata/hosts: no such file or directory

@mheon
Member

mheon commented Mar 10, 2019

Confirmed, looking at it

@haircommander
Collaborator Author

I actually got it @mheon

@haircommander
Collaborator Author

I know what's wrong here

@mheon
Member

mheon commented Mar 10, 2019

Alright I think I know what's going on here... Amazed it isn't happening for resolv too.

@haircommander
Collaborator Author

It's from assuming we had an /etc/hosts file, I think.

@haircommander
Collaborator Author

haircommander commented Mar 10, 2019

Never mind, it looks like #2603 fixes it. Good find @mheon

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 27, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 27, 2023
Successfully merging this pull request may close these issues:

podman: --add-host does not work when specifying --pod