Tidy up where files come from #114
Maybe on reflection we should keep ibek only for inside-container things. That eliminates the CI bits and helm. Perhaps we get ec to do some of the CI work, though. I'm also pretty sure we will find a few areas where configuration overlaps between inside and outside the containers.
But another possibility is we go all in on ibek and accept outside-of-container concerns into it: roll in ec, helm and the CI things.
Here are some proposed changes which tighten things up.
ALSO: we currently mount the ioc-adaravis repo at /workspaces/ioc-adaravis in the devcontainer, which is the default for devcontainers. This is annoying for the scripts and Dockerfile, because the location of the source, and in particular the ibek-support submodule, varies from one GIOC to the next. I propose that we always mount to:
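Pinning the mount point would be a small devcontainer.json change along these lines (a sketch only; `/epics/ioc-source` is a placeholder, since the proposed target path is not given above):

```jsonc
{
  // "/epics/ioc-source" is a hypothetical fixed target, not the proposed path
  "workspaceMount": "source=${localWorkspaceFolder},target=/epics/ioc-source,type=bind",
  "workspaceFolder": "/epics/ioc-source"
}
```

With a fixed target like this, the scripts and Dockerfile can hard-code the source and ibek-support locations instead of deriving them per repo.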
With the changes in the above comment, epics/ioc would then be a compilable thing that you can fix/run and have your changes propagated back to the repo automatically.
@coretl I propose that we change the ibek 'startup' command namespace to 'runtime'
sounds fine
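For illustration, the rename amounts to moving the commands under a `runtime` sub-namespace in the CLI. This is a hypothetical sketch using stdlib argparse, not ibek's actual code:

```python
# Hypothetical sketch (not ibek's real CLI code): renaming the command
# namespace from 'startup' to 'runtime', shown with stdlib argparse.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="ibek")
    sub = parser.add_subparsers(dest="namespace", required=True)
    # previously this namespace was called "startup"
    runtime = sub.add_parser(
        "runtime", help="commands used inside the container at IOC runtime"
    )
    runtime_sub = runtime.add_subparsers(dest="command", required=True)
    runtime_sub.add_parser(
        "generate", help="generate the runtime assets for an IOC instance"
    )
    return parser


args = build_parser().parse_args(["runtime", "generate"])
print(args.namespace, args.command)
```

The point of the rename is that these commands run per instance at container start, not just at "startup script" generation time, so `runtime` describes them better.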
PROGRESS on ibek change:
I've addressed all of the above in #115. Note that start.sh, stop.sh and check-live.sh are still scripts, but they now live as templates inside ibek at Looking at them, I'm not sure of the benefit of converting them to Python, but that is an easy thing to add later if we think it makes sense.
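As a sketch of what "scripts as templates inside ibek" means in practice: the template text is stored with the package and rendered per IOC instance. The template body and variable name below are illustrative, not ibek's real templates:

```python
# Hedged sketch: rendering a start.sh kept as a template inside a Python
# package. The template text and the 'ioc_folder' variable are
# illustrative, not ibek's actual template contents.
from string import Template

START_SH = Template(
    """#!/bin/bash
# generated by ibek - do not edit by hand
cd $ioc_folder
exec ./st.cmd
"""
)

# render the script for a particular IOC instance location
script = START_SH.substitute(ioc_folder="/epics/ioc")
print(script)
```

Keeping the templates inside the package means the rendered scripts are versioned along with ibek itself, which is the consistency argument made elsewhere in this issue.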
ibek is done. NEXT: sort out the ec CLI and CI. Leaving this open to review whether runtime launch / stop / check-live should be converted to Python, because these files are the link between the outside and inside of the container.
I'm pretty happy with ec as well now. Final tidy up is:
EC CI is done and it's a reasonable framework for testing process.run functions via YAML! TODO: add the same framework for testing the ibek process.run based commands.
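The testing approach described here can be sketched as: mock out the process-running call, invoke the function under test, and compare the captured commands against an expected list (in ec those expectations are stored as YAML; a plain Python list is used below to keep the sketch stdlib-only, and the `deploy` function is illustrative, not ec's real code):

```python
# Hedged sketch of testing process.run-style functions by capturing the
# commands they would execute. 'deploy' is a hypothetical example
# function, not ec's actual implementation.
import subprocess
from unittest import mock


def deploy(ioc: str, version: str) -> None:
    # illustrative function under test: shells out to helm
    subprocess.run(
        ["helm", "upgrade", "--install", ioc, f"oci://registry/{ioc}",
         "--version", version],
        check=True,
    )


# in ec this expected command list would come from a YAML file
expected = [
    ["helm", "upgrade", "--install", "bl01t-ea-ioc-01",
     "oci://registry/bl01t-ea-ioc-01", "--version", "1.0"],
]

with mock.patch("subprocess.run") as run:
    deploy("bl01t-ea-ioc-01", "1.0")
    actual = [call.args[0] for call in run.call_args_list]

assert actual == expected
```

This keeps the tests fast and hermetic: no cluster or registry is needed, only the command lines the tool intends to run are checked.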
Broke the last remaining issue in here out into #137.
@coretl @GDYendell
So I'm finding it confusing that the files that are crucial to getting a fully working IOC instance are coming from a variety of places.
When you are in a container and you find you have the wrong version of something or you want to reset something it may not be easy or clear where it comes from. I say this as the primary author of this framework! Goodness knows what a neophyte containers engineer will make of it all.
Some ideas for things that can be embedded in ibek itself are listed below. These would then always be a consistent set. (adding to this as I think of things)
(table flattened in extraction; only the "comes from" column survives) The sources listed were: epics-base (four entries), blxxy, ioc-xxx, and "all over the place".
Now you might say this is hiding many things inside a Python module. At present the scripts can be inspected and possibly customised; I think good customisation hooks would mitigate this.
But the benefit is that you get a consistent, working set of things, versioned along with ibek itself.
It would give us clear(ish) separation of responsibility as follows:
- epics-base: just a compilation of EPICS in a container. It will need to set a bunch of env vars to say where things are (/epics/epics-base, /epics/support etc.), but that is all, and hopefully only ibek needs to consume those env vars.
- ibek: manifests all the rest of the things that enable the epics-containers framework, EXCEPT:
- beamline repos: supply a list of IOC YAML only; use ibek to publish them to the registry in CI
- ioc-XXX repos: list a sequence of support modules to install; use ibek/ibek-support to compile them (mostly done already)
- ibek-support: provides
- helm chart library: contains the common functions in the helm charts, because it's helm (but you could even embed that in ibek if you wanted to go the whole way)