
Add ability to build images in play kube #11180

Merged 1 commit on Aug 18, 2021
3 changes: 3 additions & 0 deletions cmd/podman/play/kube.go
@@ -100,6 +100,9 @@ func init() {
configmapFlagName := "configmap"
flags.StringSliceVar(&kubeOptions.ConfigMaps, configmapFlagName, []string{}, "`Pathname` of a YAML file containing a kubernetes configmap")
_ = kubeCmd.RegisterFlagCompletionFunc(configmapFlagName, completion.AutocompleteDefault)

buildFlagName := "build"
flags.BoolVar(&kubeOptions.Build, buildFlagName, false, "Build all images in a YAML (given Containerfiles exist)")
}
_ = flags.MarkHidden("signature-policy")
}
34 changes: 34 additions & 0 deletions docs/source/markdown/podman-play-kube.1.md
@@ -35,6 +35,36 @@ A Kubernetes PersistentVolumeClaim represents a Podman named volume. Only the Pe
- volume.podman.io/gid
- volume.podman.io/mount-options

Play kube is capable of building images on the fly given the correct directory layout and Containerfiles. This
option is not available for remote clients yet. Consider the following excerpt from a YAML file:
```
apiVersion: v1
kind: Pod
metadata:
...
spec:
  containers:
  - command:
    - top
    - name: container
      value: podman
    image: foobar
...
```

If there is a directory named `foobar` in the current working directory with a file named `Containerfile` or `Dockerfile`,
Podman play kube will build that image and name it `foobar`. An example directory structure for this example would look
like:
```
|- mykubefiles
    |- myplayfile.yaml
    |- foobar
         |- Containerfile
```

The build will consider `foobar` to be the context directory for the build. If there is an image in local storage
called `foobar`, the image will not be built unless the `--build` flag is used.
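
For example, with the layout above the build can be forced by running play kube from inside `mykubefiles` with the new flag (an illustrative invocation, not part of this diff):
```
$ cd mykubefiles
$ podman play kube --build myplayfile.yaml
```
Without `--build`, the `foobar` image is only built when it is not already present in local storage.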

## OPTIONS

#### **--authfile**=*path*
@@ -45,6 +75,10 @@ If the authorization state is not found there, $HOME/.docker/config.json is chec
Note: You can also override the default path of the authentication file by setting the REGISTRY\_AUTH\_FILE
environment variable. `export REGISTRY_AUTH_FILE=path`

#### **--build**

Build images even if they are found in the local storage.

#### **--cert-dir**=*path*

Use certificates at *path* (\*.crt, \*.cert, \*.key) to connect to the registry.
2 changes: 2 additions & 0 deletions pkg/domain/entities/play.go
@@ -10,6 +10,8 @@ import (
type PlayKubeOptions struct {
// Authfile - path to an authentication file.
Authfile string
// Indicator to build all images with Containerfile or Dockerfile
Build bool
// CertDir - to a directory containing TLS certifications and keys.
CertDir string
// Username for authenticating against the registry.
131 changes: 104 additions & 27 deletions pkg/domain/infra/abi/play.go
@@ -7,9 +7,11 @@ import (
"io"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"

buildahDefine "github.com/containers/buildah/define"
"github.com/containers/common/libimage"
"github.com/containers/common/pkg/config"
"github.com/containers/image/v5/types"
@@ -266,39 +268,69 @@ func (ic *ContainerEngine) playKubePod(ctx context.Context, podName string, podY
}

containers := make([]*libpod.Container, 0, len(podYAML.Spec.Containers))
cwd, err := os.Getwd()
if err != nil {
return nil, err
}
for _, container := range podYAML.Spec.Containers {
// Contains all labels obtained from kube
labels := make(map[string]string)

// NOTE: set the pull policy to "newer". This will cover cases
// where the "latest" tag requires a pull and will also
// transparently handle "localhost/" prefixed files which *may*
// refer to a locally built image OR an image running a
// registry on localhost.
pullPolicy := config.PullPolicyNewer
if len(container.ImagePullPolicy) > 0 {
// Make sure to lower the strings since K8s pull policy
// may be capitalized (see bugzilla.redhat.com/show_bug.cgi?id=1985905).
rawPolicy := string(container.ImagePullPolicy)
pullPolicy, err = config.ParsePullPolicy(strings.ToLower(rawPolicy))
if err != nil {
return nil, err
}
var pulledImage *libimage.Image
buildFile, err := getBuildFile(container.Image, cwd)
if err != nil {
return nil, err
}
// This ensures the image is in the image store
pullOptions := &libimage.PullOptions{}
pullOptions.AuthFilePath = options.Authfile
pullOptions.CertDirPath = options.CertDir
pullOptions.SignaturePolicyPath = options.SignaturePolicy
pullOptions.Writer = writer
pullOptions.Username = options.Username
pullOptions.Password = options.Password
pullOptions.InsecureSkipTLSVerify = options.SkipTLSVerify

pulledImages, err := ic.Libpod.LibimageRuntime().Pull(ctx, container.Image, pullPolicy, pullOptions)
existsLocally, err := ic.Libpod.LibimageRuntime().Exists(container.Image)
if err != nil {
return nil, err
}
if (len(buildFile) > 0 && !existsLocally) || (len(buildFile) > 0 && options.Build) {
Contributor
Is there a point to existsLocally if this triggers with or without it? Are options.Build and buildFile exclusive to one another?

Member
This seems incorrect - the previous code used a pull policy of Newer, which will pull if a newer image is available on the registry - this will not happen because of the existsLocally check here.

Member
Basic thinking: if an image exists locally and you run in a directory that contains a folder with the name of the image, even if --build is not given, we will not pull the image if a newer version exists.

Member
@baude Did you see this one?

buildOpts := new(buildahDefine.BuildOptions)
commonOpts := new(buildahDefine.CommonBuildOptions)
buildOpts.ConfigureNetwork = buildahDefine.NetworkDefault
buildOpts.Isolation = buildahDefine.IsolationChroot
buildOpts.CommonBuildOpts = commonOpts
buildOpts.Output = container.Image
if _, _, err := ic.Libpod.Build(ctx, *buildOpts, []string{buildFile}...); err != nil {
return nil, err
}
i, _, err := ic.Libpod.LibimageRuntime().LookupImage(container.Image, new(libimage.LookupImageOptions))
if err != nil {
return nil, err
}
pulledImage = i
} else {
// NOTE: set the pull policy to "newer". This will cover cases
// where the "latest" tag requires a pull and will also
// transparently handle "localhost/" prefixed files which *may*
// refer to a locally built image OR an image running a
// registry on localhost.
pullPolicy := config.PullPolicyNewer
if len(container.ImagePullPolicy) > 0 {
// Make sure to lower the strings since K8s pull policy
// may be capitalized (see bugzilla.redhat.com/show_bug.cgi?id=1985905).
rawPolicy := string(container.ImagePullPolicy)
pullPolicy, err = config.ParsePullPolicy(strings.ToLower(rawPolicy))
if err != nil {
return nil, err
}
}
// This ensures the image is in the image store
pullOptions := &libimage.PullOptions{}
pullOptions.AuthFilePath = options.Authfile
pullOptions.CertDirPath = options.CertDir
pullOptions.SignaturePolicyPath = options.SignaturePolicy
pullOptions.Writer = writer
pullOptions.Username = options.Username
pullOptions.Password = options.Password
pullOptions.InsecureSkipTLSVerify = options.SkipTLSVerify

pulledImages, err := ic.Libpod.LibimageRuntime().Pull(ctx, container.Image, pullPolicy, pullOptions)
if err != nil {
return nil, err
}
pulledImage = pulledImages[0]
}

// Handle kube annotations
for k, v := range annotations {
@@ -318,7 +350,7 @@

specgenOpts := kube.CtrSpecGenOptions{
Container: container,
Image: pulledImages[0],
Image: pulledImage,
Volumes: volumes,
PodID: pod.ID(),
PodName: podName,
@@ -509,3 +541,48 @@ func sortKubeKinds(documentList [][]byte) ([][]byte, error) {

return sortedDocumentList, nil
}
func imageNamePrefix(imageName string) string {
	prefix := imageName
	s := strings.Split(prefix, ":")
	if len(s) > 0 {
		prefix = s[0]
	}
	s = strings.Split(prefix, "/")
	if len(s) > 0 {
		prefix = s[len(s)-1]
	}
	s = strings.Split(prefix, "@")
	if len(s) > 0 {
		prefix = s[0]
	}
	return prefix
}
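
As a rough sketch of the tag/registry stripping above, a table test like the following could exercise `imageNamePrefix` (illustrative only, not part of this PR; it assumes a standard `testing` import alongside this file, and the image references are made-up examples):
```go
func TestImageNamePrefix(t *testing.T) {
	cases := map[string]string{
		"foobar":                     "foobar",
		"foobar:latest":              "foobar",
		"quay.io/libpod/foobar:v1":   "foobar",
		"foobar@sha256:0123456789ab": "foobar",
	}
	for in, want := range cases {
		if got := imageNamePrefix(in); got != want {
			t.Errorf("imageNamePrefix(%q) = %q, want %q", in, got, want)
		}
	}
}
```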

func getBuildFile(imageName string, cwd string) (string, error) {
Member
I think we do a similar thing in Buildah. Might be nice at some point to have a routine in some common area that both projects could use.

	buildDirName := imageNamePrefix(imageName)
	containerfilePath := filepath.Join(cwd, buildDirName, "Containerfile")
	dockerfilePath := filepath.Join(cwd, buildDirName, "Dockerfile")

	_, err := os.Stat(filepath.Join(containerfilePath))
	if err == nil {
		logrus.Debugf("building %s with %s", imageName, containerfilePath)
		return containerfilePath, nil
	}
	// If the error is not because the file does not exist, take
	// a mulligan and try Dockerfile. If that also fails, return that
	// error
	if err != nil && !os.IsNotExist(err) {
		logrus.Errorf("%v: unable to check for %s", err, containerfilePath)
	}

	_, err = os.Stat(filepath.Join(dockerfilePath))
	if err == nil {
		logrus.Debugf("building %s with %s", imageName, dockerfilePath)
		return dockerfilePath, nil
	}
	// Strike two
	if os.IsNotExist(err) {
		return "", nil
	}
	return "", err
}
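
A minimal sketch of how a caller resolves a build file for the man page's `foobar` example (the wrapper function below is hypothetical and only meant to illustrate the return values, it is not part of this diff):
```go
// resolveExampleBuildFile looks for ./foobar/Containerfile (or Dockerfile)
// under the current working directory, mirroring what playKubePod does.
func resolveExampleBuildFile() (string, error) {
	cwd, err := os.Getwd()
	if err != nil {
		return "", err
	}
	// "" with a nil error means no build file was found and the image
	// should be pulled instead of built.
	return getBuildFile("foobar", cwd)
}
```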
15 changes: 15 additions & 0 deletions test/e2e/common_test.go
@@ -841,3 +841,18 @@ func (p *PodmanTestIntegration) buildImage(dockerfile, imageName string, layers
output := session.OutputToStringArray()
return output[len(output)-1]
}

func writeYaml(content string, fileName string) error {
	f, err := os.Create(fileName)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = f.WriteString(content)
	if err != nil {
		return err
	}

	return nil
}
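
A sketch of how an e2e test might combine this helper with the new flag (illustrative only; `podYamlWithFoobarImage` and a prepared `foobar` build directory are assumptions, not part of this diff):
```go
It("podman play kube --build builds a missing image", func() {
	// Write the pod YAML to a temp file, then play it with --build.
	yamlPath := filepath.Join(podmanTest.TempDir, "build.yaml")
	err := writeYaml(podYamlWithFoobarImage, yamlPath)
	Expect(err).To(BeNil())

	session := podmanTest.Podman([]string{"play", "kube", "--build", yamlPath})
	session.WaitWithDefaultTimeout()
	Expect(session.ExitCode()).To(Equal(0))
})
```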