
[WIP] feat(cli): experimental go frontend CLI #329

Merged
merged 1 commit into master on Oct 25, 2018

Conversation

@edvald (Collaborator) commented Oct 12, 2018

This is just a step towards the Go CLI. We should be able to merge this without any impact on the current way of using garden, but since it introduces some changes to the build flows, we may want to review it carefully before merging.

To get this working, you need to do the following:

  1. Move your garden repo directory to $GOPATH/src/github.com/garden-io/garden
  2. Install dep (brew install dep)
  3. Run dep ensure in your repo directory
  4. Run gulp build to build the CLI

You may then want to alias garden to the built binary (garden-cli/build/garden) when developing.

This is all highly experimental; there are several steps remaining before we reach our final destination of a single-binary installation.

@edvald changed the title from feat(cli): experimental go frontend CLI to [WIP] feat(cli): experimental go frontend CLI on Oct 12, 2018
@edvald force-pushed the go-cli-pt1 branch 10 times, most recently from a1fc5d1 to 5b78a77 on October 12, 2018 18:25
@eysi09 force-pushed the go-cli-pt1 branch 5 times, most recently from 37d11f7 to 0991b52 on October 23, 2018 09:03

RUN mkdir host-mount

CMD cp -r host-mount/. project/ \
Collaborator Author

If the volume already exists, this may fail. We could also do an initial rsync between the directories here (or if unison has a "one-way" sync command, that would also work).

Contributor

Is it possible for the mount to not exist before the container starts up? Never seen that.

Contributor

Also, can we use absolute paths (/host-mount)? It makes it more explicit (although paths are relative to the working directory, which would be / by default).

args := []string{"volume", "inspect", volumeName}
err := Exec(args, true)
if err != nil {
return false
Collaborator Author

Probably want a TODO here to distinguish between different potential errors.
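
For illustration, a rough sketch of how the error cases could be distinguished when shelling out to docker directly; the function shape and the exact error string are assumptions, not code from this PR (uses os/exec, strings and fmt):

```go
// volumeExists reports whether the named docker volume exists; any failure
// other than "no such volume" is surfaced as an error.
func volumeExists(volumeName string) (bool, error) {
	out, err := exec.Command("docker", "volume", "inspect", volumeName).CombinedOutput()
	if err != nil {
		// Docker prints "No such volume" when the volume simply hasn't been created yet.
		if strings.Contains(string(out), "No such volume") {
			return false, nil
		}
		// Anything else (daemon not running, permissions, ...) is a real failure.
		return false, fmt.Errorf("docker volume inspect failed: %v: %s", err, out)
	}
	return true, nil
}
```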

err = runSyncContainer(projectName, gitRoot, relPath)
if err != nil {
os.Exit(1)
}
Collaborator Author

Probably better to just do check(err) here, or use log.Fatal(<some clearer message>). Same below.
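
Roughly what that suggestion looks like with the standard library log package (the message text is illustrative):

```go
if err := runSyncContainer(projectName, gitRoot, relPath); err != nil {
	// Fatalf logs the message and exits with status 1.
	log.Fatalf("Error running sync container for project %s: %v", projectName, err)
}
```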

Collaborator Author

Also, we'd need to make sure the initial copy is complete before running the service below. We may need to run the sync container once with the copy/rsync command and wait for that to complete, before starting the unison process.
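
A sketch of that "copy first, then watch" ordering, still shelling out to docker; the image name and mounts here are illustrative, not taken from this PR (uses fmt, os and os/exec):

```go
// initialCopy runs a one-off container that copies the project into the shared
// volume and exits. docker run blocks until the copy has finished, so the
// long-running unison container can safely be started afterwards.
func initialCopy(volumeName, gitRoot string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--volume", fmt.Sprintf("%s:/host-mount:ro", gitRoot),
		"--volume", fmt.Sprintf("%s:/project", volumeName),
		"garden-sync", "cp", "-r", "/host-mount/.", "/project/")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}
```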

// Mount the project directory.
// TODO: sync into a garden-sync container instead
"--volume", fmt.Sprintf("%s:/project:delegated", gitRoot),
"--volume", fmt.Sprintf("%s/.git:/root/garden/.git", gardenHomeDir),
Collaborator Author

Hmm, this doesn't seem right. ~/.garden is not supposed to contain a git root.

Contributor

Should this be /root/.git? It's the user's git config (but why do we need this?)

Collaborator Author

I think it's just a mistake, we shouldn't need this at all.

Collaborator

Yeah, that one just slipped in accidentally. My bad.

# system dependencies
RUN apk add --no-cache unison

RUN mkdir host-mount
Collaborator Author

I suggest using absolute paths; not sure if / is guaranteed as the default working dir.

if err != nil {
os.Exit(1)
}
}

// FIXME: We need a proper way to check if the command requires file system watch. This is not it.
func fsWatchEnabled(args []string) bool {
Collaborator Author

I'm a little skeptical about having this check. If we have a long-running sync container, it's also probably an unnecessary optimization. If we always start by doing an initial copy when creating the volume, then we should be able to assume that if the sync container is already running, the synced volume is up to date. So we then don't have to have separate flows.
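
A sketch of that unified flow; all helper names are hypothetical:

```go
func ensureSynced(projectName, gitRoot, relPath string) error {
	// If the sync container is already running, the volume is assumed to be up
	// to date, because we always complete an initial copy before starting it.
	if syncContainerRunning(projectName) { // hypothetical helper
		return nil
	}
	if err := initialCopy(projectName, gitRoot); err != nil { // blocks until the copy is done
		return err
	}
	return startSyncContainer(projectName, gitRoot, relPath) // hypothetical helper
}
```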

Collaborator

How would we like to manage the sync containers in general? Should they just run along indefinitely, even though a user might have several projects and thus several unison processes running? Or should there be some clean up mechanism?

Collaborator Author

We might want to implement some sort of timeout mechanism, so that they shut down after a certain time of inactivity. A little tricky to implement though, so I kinda filed that as an optimization we could make later. That said, the unison process shouldn't consume a lot of resources if the user isn't working on the project.
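
One way such an inactivity timeout could look, assuming each CLI invocation sends a keep-alive to the sync process (entirely hypothetical; uses the time package):

```go
// watchdog stops the sync container after 30 minutes without activity; every
// value received on keepAlives (e.g. one per CLI invocation) resets the timer.
func watchdog(projectName string, keepAlives <-chan struct{}) {
	timer := time.AfterFunc(30*time.Minute, func() { stopSyncContainer(projectName) }) // hypothetical helper
	for range keepAlives {
		timer.Reset(30 * time.Minute)
	}
}
```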

Collaborator

With the approach above, I figured we'd stop the sync container after each run (and perhaps clean up any dangling sync containers) and then run it again the next time a command requires file system watch. That would increase the start time of long-running commands but not the other ones, and there would always be only one sync process running.

Collaborator

...on second thought, that's probably going to be too slow.

Collaborator Author

Different tradeoffs I guess, but I think the differing flows depending on the command, and the potentially long startup times, are the worse choice. Especially when we'll be going over the network (although the implementation will be somewhat different then anyway).

garden-cli/sync.go (resolved)
)

// TODO: we may want to use one of the docker go client libs instead of shelling out here, but this works for now so...
-func runGardenService(projectName string, gitRoot string, relPath string, args []string) error {
+func runGardenService(projectName string, gitRoot string, relPath string, watch bool, args []string) error {
homeDir := getHomeDir()
gardenHomeDir := getGardenHomeDir()
containerName := "garden-run-" + randSeq(6)
Contributor

We should use a predictable container name, and if it's already running, kill it and restart.

This could lead to issues if there is a disconnect event between our app and docker; then there are dangling containers left running.

Collaborator Author

This container isn't persistent for now. We create a new one for each command. When we have an actual running service we will change this.

Contributor

Yes, I am suggesting we switch this out sooner rather than later. I ran into issues where things broke and I had dangling containers.

Collaborator Author

But I guess the concern does stand for watching commands... One thing we could do would be to use the parent process PID instead of the random letters; that would make it easy to see which containers have an attached CLI process.

Contributor

Mmm, we should just use a predictable name based on the project (however we define that: name, path/*).

PID is pretty much random too. I am suggesting we need idempotency across runs (i.e. a command stopping and restarting).
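
For example, a predictable per-project name could be derived from the project's git root path; the hashing scheme is just an illustration (uses crypto/sha1 and encoding/hex):

```go
// containerName returns a stable name for a given project root, so repeated
// runs (and any cleanup) always address the same container.
func containerName(gitRoot string) string {
	sum := sha1.Sum([]byte(gitRoot))
	return "garden-run-" + hex.EncodeToString(sum[:])[:8]
}
```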

Contributor

Ah sorry, I totally misunderstood this. I assumed that when we ran all the commands they would check for an existing container and exec into it.

Why do we do this instead of just starting up a container (if not already running) and exec'ing into it? It seems simpler, considering we are trying to watch events and state?

This would mean you would be able to behave more like a client -> server model (albeit still with exec instead of an API).
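
A sketch of that client -> server style flow, still going through the docker CLI; the helper names are hypothetical:

```go
func runInService(containerName string, args []string) error {
	// Start the long-running service container only if it isn't up already...
	if !containerRunning(containerName) { // hypothetical helper
		if err := startServiceContainer(containerName); err != nil { // hypothetical helper
			return err
		}
	}
	// ...then exec the actual command inside it.
	execArgs := append([]string{"exec", "-it", containerName, "garden"}, args...)
	cmd := exec.Command("docker", execArgs...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	return cmd.Run()
}
```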

Collaborator Author

We might do that as an optimization actually - should for sure be faster to start. Just wanted to get this working and then make that one of the next steps.

Contributor

I believe it will result in simpler, easier-to-maintain code that is easier to develop on. It's also how I believed we did it originally.

Contributor

It also means we can test things with simple docker exec -it as a rapid verification method.

Collaborator Author

We did plan on doing it that way, just haven't gotten there yet. @eysi09 up to you if we change it in this PR or later.

garden-cli/run.go (resolved)
sudo apt-get update
sudo apt install rsync cmake libicu-dev pkg-config golang-1.11-go
sudo apt -y install software-properties-common
Collaborator Author

This installs a lot of stuff; maybe we can go a cheaper route with this. @drubin can you help us get the CI passing here? 👼

Contributor

I am busy fixing this actually on my fork: drubin#1.

I am splitting everything out so we can also leverage caching and multiple jobs running at the same time. It will simplify all this if we can actually leverage the pre-built images for what they were meant for.

Contributor

If we need the CLI to pass I can always backport the build configs into this branch, but we're still actively trying to fix a bunch of things, so I thought it could wait a few hours?

Collaborator

Ok great. Just wanted to see if the flag above would get it running.

Collaborator Author

Oh no stress, let's just wait until you're done with your thing. Btw, very happy about everything you just said :)

Contributor

I think we should leave this for now. No point trying to optimise it further when we are going to replace it all anyway. If you add another sudo apt-get update after add-apt-repository, it should pass.

sudo apt-get update
sudo apt -y install software-properties-common
# Need software-properties-common for the add-apt-repository command
sudo add-apt-repository -y ppa:gophers/archive
Contributor

You need another sudo apt-get update

@@ -0,0 +1,255 @@
package kubernetes
Contributor

Can we delete this file? There is no need to merge a commented-out file.

@@ -0,0 +1,7 @@
FROM alpine:3.8
Contributor

Do you think we can move this out to garden-sync before merging? If it's too much effort we can do it after.

I think it would clean things up. Technically we won't publish this as garden-cli; it's just a dependency that the go CLI will use, just like garden-service.

Collaborator Author

+1 on that. These are basically two separate modules with different code bases.

@eysi09 force-pushed the go-cli-pt1 branch 3 times, most recently from eb65c98 to a9644b3 on October 25, 2018 12:05
@drubin (Contributor) left a comment

LGTM, after a rebase

@eysi09 merged commit 8ab5846 into master on Oct 25, 2018
@eysi09 deleted the go-cli-pt1 branch on October 25, 2018 12:32

3 participants