This repository has been archived by the owner on Apr 21, 2019. It is now read-only.

kubefed not installing right .... #187

Closed
rangapv opened this issue Jan 10, 2018 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@rangapv

rangapv commented Jan 10, 2018

Is this a BUG REPORT or FEATURE REQUEST?:


/kind bug

Labels:
/sig federation

What happened:
kubefed install not working right...

I did the following.....
Step 1:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
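
A quick sanity check after extracting (a sketch, assuming the usual kubernetes/client/bin/ layout of these client tarballs): list what the archive actually shipped, since kubefed may not be included in the 1.9 client tarball.

# See which client binaries the tarball actually contains
ls kubernetes/client/bin/
# If kubefed is present, copy it onto the PATH instead of using the snap
sudo cp kubernetes/client/bin/kubefed /usr/local/bin/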

Step 2:
sudo snap install kubefed --classic
kubefed version

Error:
/snap/kubefed/260/kubefed: 1: /snap/kubefed/260/kubefed: Syntax error: redirection unexpected
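
That "Syntax error: redirection unexpected" usually means /bin/sh is trying to interpret a file that is not actually a shell script, so the snap does not appear to contain the expected kubefed executable. A diagnostic sketch (the /snap/kubefed/260/ path is taken from the error above):

# Inspect what the snap actually installed
file /snap/kubefed/260/kubefed
head -c 64 /snap/kubefed/260/kubefed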

What you expected to happen:
kubefed version should display the correct version number, e.g. something like 1.7.0.

How to reproduce it (as minimally and precisely as possible):
Clean install of K8s 1.9, then the above Step 1 and Step 2.

Anything else we need to know?:

Environment:

* Kubernetes version (use kubectl version): 1.9
* Cloud provider or hardware configuration: GCP
* OS (e.g. from /etc/os-release): Ubuntu 16.04
* Kernel (e.g. uname -a): 4.13.0-1002-gcp #5-Ubuntu SMP Tue Dec 5 13:20:17 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
* Install tools: snap
* Others:

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jan 10, 2018
@Blasterdick

Blasterdick commented Jan 10, 2018

+1, all the same here except the kernel is 4.10.

Temporarily using binaries built from source, since the kubefed binary is missing from the current tarball (instruction).

@rangapv
Author

rangapv commented Jan 11, 2018

Hi,
How do I find the source binaries? Please point me to them until the community provides an alternative.
Regards

@Blasterdick

Blasterdick commented Jan 11, 2018

@rangapv

  • Clone this repo
  • Install golang with sudo snap install --classic go
  • cd federation && make

If everything compiled as it should, you will find the kubefed binary under a subfolder such as _output, or you can locate it with find ./ -type f -name kubefed. Then simply copy it to whatever folder you need it in.
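
Putting those steps together (a sketch, assuming this repo is kubernetes/federation and that make drops the binary somewhere under the source tree):

sudo snap install --classic go
git clone https://github.com/kubernetes/federation.git
cd federation
make
# Locate the freshly built binary and copy it onto the PATH
sudo cp "$(find ./ -type f -name kubefed | head -n 1)" /usr/local/bin/
kubefed version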

@Blasterdick

@rangapv Seems like if you build the binaries from source, kubefed creates deployments that expect an fcp utility in the hyperkube image, and it isn't there.

So it's better to download a tarball with pre-compiled binaries. I found that the 1.8.5 tarball contains kubefed (while 1.9.1 doesn't), so here's the link to download.
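
For reference, a sketch of that download, assuming the 1.8.5 client tarball follows the same URL pattern and layout as the one in Step 1 above:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.5/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
sudo cp kubernetes/client/bin/kubefed /usr/local/bin/
kubefed version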

@rangapv
Author

rangapv commented Jan 12, 2018

@Blasterdick Thanks a bunch! I installed it and the API server is running....
I have a few queries though. If need be I can raise them on any other platform of your choosing; for now they are:
1. Even though I set {--apiserver-enable-basic-auth=false}, it does not log in; and with kubefed init and {--apiserver-enable-basic-auth=true} it asks for an auth login/password, which I managed to get from kubectl config view, and that gives me an API GET view. Should I expect to see a dashboard, just like the Kubernetes dashboard deployment?
2. How do I decode the password from the secrets that kubefed is apparently using? (See the sketch after this list.)
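
For question 2, decoding a secret value usually looks like this (a sketch; the federation-system namespace and the password key are assumptions to check against what kubefed init actually created):

# List the secrets kubefed init created, then decode a field; secret data is base64-encoded
kubectl get secrets -n federation-system
kubectl get secret <secret-name> -n federation-system -o jsonpath='{.data.password}' | base64 --decode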

The kubefed init command that I used is:

sudo kubefed init fellowship --host-cluster-context=kubernetes-admin@kubernetes --dns-provider="google-clouddns" --dns-zone-name="example1.com." --etcd-persistent-storage=false --api-server-service-type="NodePort" --api-server-advertise-address="10.128.0.22" --apiserver-enable-basic-auth=true --apiserver-enable-token-auth=true --apiserver-arg-overrides="--anonymous-auth=false,--v=4" --controllermanager-arg-overrides="--controllers=services=false"

Regards
Ranga

@irfanurrehman irfanurrehman added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Feb 1, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 2, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 1, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
