
Support for s3 for docker magic #641

Closed
akranga opened this issue Jan 5, 2019 · 8 comments
Comments

@akranga

akranga commented Jan 5, 2019

At present `_component_builder.py` throws `ERROR:Error: 's3://....' should be a GCS path.`

    220     self._gcs_base = gcs_base
    221     if not self._check_gcs_path(self._gcs_base):
--> 222       raise Exception('ImageBuild __init__ failure.')
    223     self._gcs_path = os.path.join(self._gcs_base, self._tarball_name)
    224     self._target_image = target_image

Since kaniko supports s3, it would probably make sense to support it here as well. In that case the user could benefit from a "local" minio.

@xiaozhouX
Contributor

I agree. I think we also need to support other storage types, for example by defining a default PVC for kaniko, and likewise for the image URL and registry credential.
Does the pipelines project have a plan to integrate these features well with other cloud providers? I would like to help with that, maybe with some PRs? @IronPan

@akranga
Author

akranga commented Jan 7, 2019

I shall be happy to contribute as well. A few questions I have in mind:

  1. To support backends like minio, we need some way to configure boto and let kfp or the docker magic know about it.

  2. Kaniko spec generation. Currently it is not very flexible (e.g. delivering a registry secret), which is OK, but I need some direction in this area.
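
As a sketch of point 1: kaniko's S3 build-context support reads the standard AWS environment variables, and per its README a custom endpoint such as minio can be selected with `S3_ENDPOINT` and `S3_FORCE_PATH_STYLE`. The values below are placeholders for a hypothetical in-cluster minio:

```shell
# Placeholder minio endpoint and credentials; adjust for your cluster.
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123
export AWS_REGION=us-east-1
export S3_ENDPOINT=http://minio-service.kubeflow:9000
export S3_FORCE_PATH_STYLE=true
```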

@gaoning777
Contributor

The GCS directory is merely used for temporary build files, and we are planning to replace the cloud dependency with a k8s-native ConfigMap. Feel free to send a PR that uses a ConfigMap to store temporary files such as the kaniko spec, dockerfile, etc.
Also, pushing to other registries is not supported for now, although Kaniko supports it. This feature requires a little more work: creating k8s secrets, mounting them into the kaniko build pod, etc. Feel free to send a PR. Thanks
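
For reference, mounting a registry credential into the kaniko build pod might look roughly like the following. All names here (secret, bucket, registry) are placeholders, not part of KFP; kaniko reads `/kaniko/.docker/config.json` when pushing to a private registry:

```yaml
# Sketch only: regcred is a kubernetes.io/dockerconfigjson secret.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=Dockerfile
    - --context=s3://my-bucket/build/context.tar.gz
    - --destination=registry.example.com/my-image:latest
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker
  volumes:
  - name: docker-config
    secret:
      secretName: regcred
      items:
      - key: .dockerconfigjson
        path: config.json
```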

@Ark-kun
Contributor

Ark-kun commented Jan 9, 2019

@gaoning777 AFAIK, GCS supports the s3:// protocol, so we could enable such URIs.

@IronPan
Member

IronPan commented Jan 9, 2019

Making kaniko cloud-provider agnostic would be ideal. Kaniko supports a local directory context (ref), and my guess is that creating a k8s emptyDir volume and mounting it into the kaniko container is good enough as the temporary storage.

@akranga @xiaozhouX does this sound like a good plan? Contributions are very welcome.
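
A rough sketch of that idea, with placeholder names: an emptyDir holds the build context and kaniko consumes it through its local-directory (`dir://`) context type:

```yaml
# Pod spec fragment (sketch only; volume and path names are made up).
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=/workspace/Dockerfile
    - --context=dir:///workspace
    volumeMounts:
    - name: build-context
      mountPath: /workspace
  volumes:
  - name: build-context
    emptyDir: {}
```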

@IronPan
Member

IronPan commented Jan 9, 2019

Also, the kaniko spec is currently located here. Ideally, it should be parameterized so that the caller can decide where the resulting image is stored.
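
One hypothetical way to parameterize the spec (the template and parameter names below are made up for illustration; the real template would live in the KFP SDK):

```python
import string

# Hypothetical parameterized kaniko pod spec so the caller
# chooses the destination registry.
KANIKO_TEMPLATE = string.Template("""\
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=$dockerfile",
           "--context=$context",
           "--destination=$destination"]
""")

spec = KANIKO_TEMPLATE.substitute(
    dockerfile='Dockerfile',
    context='dir:///workspace',
    destination='registry.example.com/my-image:latest',  # caller's choice
)
print(spec)
```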

@gaoning777
Contributor

> @gaoning777 AFAIK, GCS supports the s3:// protocol, so we could enable such URIs.

Yes, however the users need to set up the credentials for pushing to S3.
Instructions are here.

@gaoning777
Contributor

Duplicate of #345.
