
pass networks to container clone #14059

Merged: 1 commit merged into containers:main on May 5, 2022
Conversation

@cdoern (Contributor, author) commented Apr 29, 2022

since the network config is a string map, json.unmarshal does not recognize
the config and spec as the same entity, need to map this option manually

resolves #13713

Signed-off-by: cdoern cbdoer23@g.holycross.edu

@Luap99 (Member) left a comment:

This is not the problem; the json marshal/unmarshal will handle it just fine.

Podman stores the network information inside an extra db bucket, not in the container config. I guess container clone must read from there and not from the settings in the container config.
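To make that split concrete, here is a minimal, self-contained Go sketch of the pattern described above; all type and field names are invented for illustration and are not libpod code. The serialized config keeps only a snapshot (possibly nil), while the authoritative network list lives in a separate mutable store and is read through an accessor, mirroring c.networks().

package main

import "fmt"

// Toy stand-ins, not libpod types: containerConfig mimics the serialized
// container config, state mimics the extra mutable db bucket.
type containerConfig struct {
	Networks []string // snapshot taken at create time; may be stale or nil
}

type state struct {
	networksByCtr map[string][]string // authoritative, mutable network lists
}

type container struct {
	id     string
	config containerConfig
	st     *state
}

// networks reads the current list from the state store, mirroring the rule
// "networks must always be read from the db".
func (c *container) networks() []string {
	return c.st.networksByCtr[c.id]
}

func main() {
	st := &state{networksByCtr: map[string][]string{"ctr1": {"tl"}}}
	c := &container{id: "ctr1", st: st} // config.Networks left nil on purpose

	fmt.Println("from config:", c.config.Networks) // nil snapshot
	fmt.Println("from state: ", c.networks())      // [tl]
}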

@cdoern (Contributor, author) commented Apr 29, 2022

Ah, I see @Luap99. I will correct my comment, but this fixes the issue, right?

[charliedoern@charliesthinkpad podman]$ bin/podman run --network=tl --name test alpine ip a
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob df9b9388f04a done  
Copying config 0ac33e5f5a done  
Writing manifest to image destination
Storing signatures
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 9e:4f:37:68:91:da brd ff:ff:ff:ff:ff:ff
    inet 10.89.0.2/24 brd 10.89.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::9c4f:37ff:fe68:91da/64 scope link tentative 
       valid_lft forever preferred_lft forever
[charliedoern@charliesthinkpad podman]$ bin/podman container clone test
7ac29969bb80d3a42d7b4f5156f05f79aa16de818c7bbe23cdf1c576ecf067fc
[charliedoern@charliesthinkpad podman]$ bin/podman start --attach test-clone
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether c2:10:a1:d3:db:32 brd ff:ff:ff:ff:ff:ff
    inet 10.89.0.3/24 brd 10.89.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c010:a1ff:fed3:db32/64 scope link tentative 
       valid_lft forever preferred_lft forever

@Luap99 (Member) commented Apr 29, 2022

Try with network connect/disconnect. Networks must always be read from the db, see c.networks().

@cdoern (Contributor, author) commented Apr 29, 2022

> Try with network connect/disconnect. Networks must always be read from the db, see c.networks().

Yes, it works with connect/disconnect.


[charliedoern@charliesthinkpad podman]$ podman network connect tl test-clone
Error: container 7ac29969bb80d3a42d7b4f5156f05f79aa16de818c7bbe23cdf1c576ecf067fc is already connected to network "tl": network already exists
[charliedoern@charliesthinkpad podman]$ podman network disconnect tl test-clone
[charliedoern@charliesthinkpad podman]$ podman network connect tl test-clone
[charliedoern@charliesthinkpad podman]$ podman start --attach test-clone
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether aa:97:42:87:71:f9 brd ff:ff:ff:ff:ff:ff
    inet 10.89.0.5/24 brd 10.89.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a897:42ff:fe87:71f9/64 scope link tentative 
       valid_lft forever preferred_lft forever

@Luap99 (Member) commented Apr 29, 2022

Also, does it work if you connect a second network?

@Luap99 (Member) commented Apr 29, 2022

You have to connect before you clone.

@cdoern (Contributor, author) commented Apr 29, 2022

@Luap99 I will add some tests for these cases

@Luap99 (Member) commented Apr 29, 2022

I just tried on the main branch and it looks like it already works without this change. I assume something else merged in the meantime fixed this problem.

@cdoern (Contributor, author) commented Apr 29, 2022

@Luap99 I don't see how that could be possible; specg.Networks in ConfigToSpec was always empty! Let me check one more time.

@Luap99 (Member) commented Apr 29, 2022

Never mind, I tested the wrong container, but it does not work with this patch either.
config.Networks is always nil because we unset it, since it has to be read from the db directly.

@cdoern (Contributor, author) commented Apr 29, 2022

I will keep working on this and add some more tests. This breaks running a cloned ctr in a pod too, which makes me think this solution works and we were just never passing the network options. I will look at the original --network path and see what I am missing.

@Luap99 (Member) commented Apr 29, 2022

OK, I see the problem. I thought I unset the networks, but the logic does this too late, after the config is written:

// Set the original value to nil. We can safe some space by not storing it in the config
// since we store it in a different mutable bucket anyway.
ctr.config.Networks = nil

So you end up with networks in the config; however, no caller should ever use this, always use c.networks().

Your patch is correct because Networks from the config has a different json tag (newNetworks) due to backport compat.

So the only thing missing is to get the updated network list from the db. I guess it would make sense to attach this in c.Config(), since outside callers should not worry about this detail IMO.
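A minimal sketch of the json-tag mismatch described above, using simplified stand-in structs (only the newNetworks tag comes from the comment; the spec-side tag, field types, and values are assumptions for illustration). A marshal/unmarshal round-trip between differently tagged fields silently drops the data, which is why the patch copies the networks over explicitly.

package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for the container config side (tag taken from the comment above).
type config struct {
	Networks map[string]string `json:"newNetworks,omitempty"`
}

// Simplified stand-in for the spec side (tag and type assumed for illustration).
type spec struct {
	Networks map[string]string `json:"Networks,omitempty"`
}

func main() {
	c := config{Networks: map[string]string{"tl": "eth0"}}

	data, _ := json.Marshal(c) // {"newNetworks":{"tl":"eth0"}}

	var s spec
	_ = json.Unmarshal(data, &s)
	fmt.Println("after round-trip:", s.Networks) // prints map[] (nil): the tags do not match

	s.Networks = c.Networks // the manual mapping this PR adds, in spirit
	fmt.Println("after manual copy:", s.Networks)
}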

@cdoern (Contributor, author) commented Apr 29, 2022

@Luap99 can I just get the whole config again from the db in Config() and deep copy that?

@Luap99 (Member) commented Apr 29, 2022

I was thinking something like this:

diff --git a/libpod/container.go b/libpod/container.go
index 3e7ab7b0a..7afa78dc8 100644
--- a/libpod/container.go
+++ b/libpod/container.go
@@ -288,6 +288,12 @@ func (c *Container) Config() *ContainerConfig {
                return nil
        }
 
+       networks, err := c.networks()
+       if err != nil {
+               return nil
+       }
+       returnConfig.Networks = networks
+
        return returnConfig
 }
 

@cdoern force-pushed the clone branch 3 times, most recently from 01b0f83 to a36a4ac on April 29, 2022 22:22
(Resolved review comments on pkg/domain/infra/abi/containers.go)
@umohnani8 (Member) commented:

LGTM

@rhatdan (Member) commented May 2, 2022

@Luap99 PTAL

(Resolved review comments on test/e2e/container_clone_test.go, pkg/specgen/generate/container_create.go, and libpod/container.go)
@Luap99 (Member) left a comment:

LGTM, one nit but not worth a repush.

Comment on lines +292 to +293
_, ok := inspect.InspectContainerToJSON()[0].NetworkSettings.Networks["testing123"]
Expect(ok).To(BeTrue())
@Luap99 (Member) commented:

Nit: please use HaveKeyValue() for such tests.

If this check fails, you currently get a very unhelpful "expected false to be true" message.
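For context, a short sketch of the kind of assertion the nit suggests, here written with Gomega's standard HaveKey matcher (the matcher choice, test name, and map stand-in are assumptions, not code from this PR). Asserting on the map itself makes a failure print the actual keys instead of "expected false to be true".

package clone_test

import (
	"testing"

	. "github.com/onsi/gomega"
)

func TestCloneKeepsNetwork(t *testing.T) {
	g := NewWithT(t)

	// Hypothetical stand-in for the inspected container's network map.
	networks := map[string]struct{}{"testing123": {}}

	// Preferred: on failure Gomega prints the map contents and the missing key.
	g.Expect(networks).To(HaveKey("testing123"))

	// Discouraged pattern from the review nit: a failure only reports
	// "Expected false to be true".
	_, ok := networks["testing123"]
	g.Expect(ok).To(BeTrue())
}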

@Luap99 (Member) commented May 5, 2022

/approve
/lgtm

The openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on May 5, 2022.
@openshift-ci bot commented May 5, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cdoern, Luap99


The openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on May 5, 2022.
openshift-merge-robot merged commit 7af4612 into containers:main on May 5, 2022.
The github-actions bot added the "locked - please file new issue/PR" label on Sep 21, 2023.
The github-actions bot locked the conversation as resolved and limited it to collaborators on Sep 21, 2023.
Successfully merging this pull request may close this issue: podman container clone loses network info (#13713).