
[IMP] Add option to automatically install repo requirements #28

Merged
merged 2 commits into from
Aug 24, 2017

Conversation

lasley
Contributor

@lasley lasley commented Jun 10, 2017

This is a WIP & is temporarily based on #27 while I am testing, because I'm trying to be ultra lazy.

This PR adds an AUTO_REQUIREMENTS arg (currently defaults to True while I'm testing, but meant to be False). If the argument is true, requirements.txt files from addons defined in the addons.yml will be honored automatically.

As an added bonus, 80-requirements-txt is now Pythonified
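A minimal sketch of what the described behavior could look like: walk each aggregated repo and pip-install its requirements.txt if one exists. `SRC_DIR` and the helper names here are illustrative, not the actual build-script API.

```python
# Sketch only: honor per-repo requirements.txt files at build time.
# SRC_DIR and find_repo_requirements are illustrative names.
import os
import subprocess
import sys

SRC_DIR = "/opt/odoo/custom/src"  # assumed aggregation target


def find_repo_requirements(src_dir, repos):
    """Return the requirements.txt paths that actually exist."""
    found = []
    for repo in repos:
        req_file = os.path.join(src_dir, repo, "requirements.txt")
        if os.path.isfile(req_file):
            found.append(req_file)
    return found


def install_repo_requirements(src_dir, repos):
    """pip-install every discovered requirements file."""
    for req_file in find_repo_requirements(src_dir, repos):
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "-r", req_file]
        )
```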

@lasley lasley force-pushed the feature/requirements-auto branch 3 times, most recently from 38cbe5b to 70c0b91 Compare June 10, 2017 14:39
@pedrobaeza
Member

But installing the repo's requirements.txt when you're not going to use all the modules is a bit abusive. Can you read external_dependencies from the module manifests instead?

@pedrobaeza
Member

@yajo what do you think about this overhead?

@lasley
Contributor Author

lasley commented Jun 10, 2017

This is mainly meant for dev boxes, where I don't want to have to worry about my s/w engineers dealing with Ops (same use case as #27). I would never recommend either of these to be used in a production instance.

Honestly the real solution is an auto-installer in Odoo based on the external_dependencies key. I've been playing with that idea for a while, but haven't actually written anything yet (the challenge being translation of the python package name in the key to the pip package name for install).
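The name-translation challenge can be illustrated with a small mapping table. The entries below are real, well-known mismatches, but a production implementation would need a much larger (and still incomplete) table, which is exactly the difficulty being described.

```python
# Sketch: external_dependencies lists importable module names, but pip
# needs distribution names, and the two often differ. A hand-maintained
# mapping is one workaround; these entries are illustrative examples.
KNOWN_PIP_NAMES = {
    "PIL": "Pillow",
    "OpenSSL": "pyOpenSSL",
    "dateutil": "python-dateutil",
}


def pip_name(module_name):
    """Best-effort translation of an import name to a pip package name."""
    return KNOWN_PIP_NAMES.get(module_name, module_name)
```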

@pedrobaeza
Member

OK, then you should activate this via an environment variable set only in devel.yaml, not for all environments.

@lasley
Contributor Author

lasley commented Jun 10, 2017

Is there a reason to dictate what someone does with a production server?
Scratch the above - this was written before I knew about the scaffold. Forgot to delete it before submitting.

The ONBUILD ARGS are translated into env variables when the child image builds, right?

environment variable only set on devel.yaml, not for all.

I'll be setting that (AUTO_REQUIREMENTS) to false once I'm done testing, so that it's disabled by default.

set on devel.yaml,

Oh I didn't even think about updating the scaffold too. I'll submit a PR for that once this is merged!

@pedrobaeza
Member

OK, I'm not a Docker expert, so if doing it that way lets you decide later in the yaml files, then go ahead. Sorry for the noise.

@yajo
Contributor

yajo commented Jun 12, 2017

This is mainly meant for dev boxes

Problem is that for devel.yaml environment, no code exists at build time. Same as #27 (comment) and #26.

dev boxes, where I don't want to have to worry about my s/w engineers dealing with Ops

I don't feel that this is a good idea. Your developers should ensure the image includes required dependencies, or chances to get a red production build will be high.

Our approach with this project tries to be: provide tools that make development easy and fast, and make production reproducible and stable. Adding autodiscovery in devel and removing it in prod increases the chance of getting different environments, so I'm not very positive about this feature.

OTOH, #28 (comment) would be awesome, but there's still the problem of #26, so we need a way to handle all of that. pip-installing in entrypoint is not an option IMHO (too slow for a boot process). Any ideas?

@yajo yajo self-assigned this Jun 12, 2017
@lasley
Contributor Author

lasley commented Jun 12, 2017

Your developers should ensure the image includes required dependencies, or chances to get a red production build will be high.

My developers don't have any idea what is done on a production server, including how to make it green or red. My goal here is to make it so that my devs only have to worry about the code they are currently working in, and not how to install dependencies correctly for modules that are required by other modules they may use.

We have a completely separate process for building out production instances, which would be a combination of many different modules that these developers are making. This process would not use wildcards, and any issues would be detected during that QA. The two processes create a logical isolation between our development and ops departments, which makes it easier to onboard engineers (particularly juniors).

Sure, I get that any Python developer should know how to pip install something. They should not, however, be required to know how to track down which virtualenv the Odoo instance is running in, and how to pip install things into it and not into system Python. They should also not be required to know how to modify a Dockerfile and rebuild their instance in order to develop for Odoo. It is also a hard requirement that all dev instances operate under the same architecture as our prod ones, so simply having them install local Odoos is not a solution either.

Problem is that for devel.yaml environment, no code exists at build time. Same as #27 (comment) and #26.

I don't understand the problem here? Isn't the devel.yml just a docker-compose file? I'm not even using Docker compose at the moment, I'm simply trying to build an Odoo image using docker build. The docker-compose architecture does not work for us using Rancher, which does not have the capability of building images.

Is there a key in the devel.yml that is changing some operation of the way code is installed? Is that AGGREGATE=false?

Adding autodiscovery in devel and removing it in prod adds chances to get different environments, so I'm not very positive with this feature.

I am not proposing to change the way anything operates, I am merely stating a use case. IMO it is not up to us to determine whether someone wants to do this on a production instance or not. My use case is devel, and is recommending against using in prod, but there is nothing to stop you from doing so.

but there's still the problem of #26,

Yeah so I definitely feel like I'm missing something here. I think answers to above question about aggregate key might clue me in.

pip-installing in entrypoint is not an option IMHO (too slow for a boot process). Any ideas?

Agreed - also this goes against what I've read of Docker practices.

@lasley lasley force-pushed the feature/requirements-auto branch 2 times, most recently from 38cbaae to aef2dcb Compare June 12, 2017 23:30
@lasley
Contributor Author

lasley commented Jun 13, 2017

Followup now that I dug more for #27 - I totally get the problem now, so we can probably disregard most of my above.

Regarding the use of the manifest keys - I don't know of a reliable way to translate the Python package we are given in the manifest into the pip package that needs to be installed, do you?

And back to when we can perform this installation - I really don't like the entrypoint, because that's mutating the container after creation. We need something in the build process so that it can make its way into the images.

What if this feature were moved into the repo aggregation step instead? It's implicitly bound to that step, so it makes sense for the cutoff to be there. Looking at the scaffolds, it seems the repo aggregation is run from the entrypoint as a setup script, so the installation would be performed there as well.

@yajo
Contributor

yajo commented Jun 13, 2017

🤔 It could be at aggregation time, yes... But what if you have an additional dependency in your requirements.txt file that for some reason does not appear in any addon's external_dependencies? You still have the need to add requirements manually.

Besides, the current design lets you choose your pip-freezing method: Do you want to pin by version? By hash? No version pinning? Do you want to install addons with pip instead of repos.yaml + addons.yaml? Just provide a valid requirements.txt file and let pip do the rest. It works pretty well as it is right now. So much so that I was thinking of adding more of these:

  • npm_dependencies
  • gem_dependencies
  • apt_build_dependencies
  • apt_dependencies

So everything you need, you write there, and it all gets added at build time without the need for custom build scripts, no matter whether you build for devel or prod...

The current problem in our dev team that this autodiscovery would fix is avoiding this workflow:

  1. I try to install addon A.
  2. Odoo tells me that addon A needs dependency B.
  3. I add B to requirements.txt.
  4. I docker-compose build (which should take only some seconds usually).
  5. I install addon A with no problem.

Not much hassle really...

But still I'd want to cover your case if possible without losing current feature set, so what about this? What if we add a new script autopip that just spits out a valid requirements.txt file? All you'd have to do is run it with docker-compose run --rm odoo autopip, copy the output into your requirements.txt and rebuild.

There's another option: always copy the code, also in devel, so you can inspect and install, even if later you replace that path by a volume. But that would really slow down devel builds, and quick building there is a requirement IMHO.

@lasley lasley force-pushed the feature/requirements-auto branch from aef2dcb to 7d6aa0c Compare June 19, 2017 00:26
@lasley lasley force-pushed the feature/requirements-auto branch 3 times, most recently from 81bc015 to 4c2c99a Compare June 19, 2017 18:59
lasley referenced this pull request Jun 20, 2017
- More instructions on this feature.
- More instructions on ssh keys too.
- More DRY with FILE_APT_BUILD.
- Useful logging on dependencies installation commands.
- Some style homogenization.
- Move install module inside odoobaselib namespace.
- Fix #35 by giving everyone read access to odoobaselib.
@lasley lasley force-pushed the feature/requirements-auto branch 3 times, most recently from 6fee3d4 to 7e825d7 Compare June 20, 2017 18:46
@lasley
Contributor Author

lasley commented Aug 4, 2017

Ok so I've been thinking about this, and I'm not sure of a way that this feature can be supported in development environments that are not linking the addons during build time.

I don't think that this is a good reason to not support it in the other case though. Honestly I'm still even finding it easier in my dev environments to link the addons during build, then just mount my specific working addon as a volume.

In this way, I don't have to worry about 5 other repos being on the correct branches & instead we just use the same images that would be deployed to our customer prods.

@yajo
Contributor

yajo commented Aug 7, 2017

I feel you shot yourself in the toe in #27 😕 . Now the addons get linked at run time, not build time, so not even production builds have a notion of "enabled addons". And pip-installing in entrypoint is out of question IMO. What would be your implementation strategy now on this?

Besides, I think I was a little bit wrong before. Devel images get addons in too, but they just get replaced by volumes at run time.

I'm thinking on a simple approach: add a script that outputs a valid requirements.txt file that you can copy-paste in dependencies/pip.txt. This way your developers don't need to care much about it, but still need to make sure the dependencies are in place. They can just:

docker-compose run --rm odoo addon-requirements > odoo/custom/dependencies/pip.txt
docker-compose build

The script works at run time, after all entrypoints are processed, and the workflow produces git-versioned, image-frozen dependencies, so it seems to cover all scenarios in a semi-automatic fashion that makes your life easier but still leaves you in control of the tool and is fully backward compatible.
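The proposed addon-requirements script could be sketched roughly like this (an assumption-heavy illustration: the addons path, the `__manifest__.py` filename, and the function names are placeholders; older Odoo versions use `__openerp__.py` instead, which this sketch ignores):

```python
# Rough sketch of a script that prints a requirements.txt built from
# the external_dependencies["python"] keys of the addons it finds.
# The addons directory and manifest filename are assumptions.
import ast
import os


def collect_python_deps(addons_dir):
    """Gather the python external_dependencies of every addon found."""
    deps = set()
    for addon in sorted(os.listdir(addons_dir)):
        manifest = os.path.join(addons_dir, addon, "__manifest__.py")
        if not os.path.isfile(manifest):
            continue
        with open(manifest) as fh:
            data = ast.literal_eval(fh.read())
        deps.update(data.get("external_dependencies", {}).get("python", []))
    return sorted(deps)
```

Printing `"\n".join(collect_python_deps(...))` would then produce output suitable for redirecting into `odoo/custom/dependencies/pip.txt` as in the commands above.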

@lasley
Contributor Author

lasley commented Aug 7, 2017

@yajo - I'm not sure what you mean, addons are definitely still linked during build time. This doesn't involve the linking of the addons though, only the download of them. It's probably worth noting that this works fine on my forked image, which has had all of these features for a while.

The requirements implementation is dumb - it doesn't matter what addons are enabled, just that the addon header is in the addons file. We can't determine which of the requirements goes to which module, so selectively installing isn't going to work.

I would prefer to have the requirements installed in my built image, as opposed to building them in on entrypoint. The entrypoint solution still involves a developer, which I am eliminating from this equation.

These instances should self maintain IMO, including dependency installations as new requirements are added to repos & new addons are introduced to pre-existing instances that didn't previously have them. Requiring that another step be performed by an engineer after deploying my images is not really an option on my end.

@yajo
Contributor

yajo commented Aug 8, 2017

OK, I was just warning you about some problems I was seeing on this approach, but if you have been able to sort them out, there's no problem. Just push a valid PR with regression tests and let us review 😊 (this one has conflicts)

@lasley
Contributor Author

lasley commented Aug 9, 2017

I think the Travis failure is unrelated here too

Traceback (most recent call last):
  File "/opt/odoo/common/build", line 26, in <module>
    subprocess.check_call(command)
  File "/usr/lib/python2.7/subprocess.py", line 535, in check_call
    retcode = call(*popenargs, **kwargs)
  File "/usr/lib/python2.7/subprocess.py", line 522, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
    raise child_exception
OSError: [Errno 26] Text file busy

Contributor

@yajo yajo left a comment


I thought you were going to investigate external_dependencies on addons found in addons.yaml, not to read the requirements.txt files possibly in repos.

In any case, it seems legit as a simple 1st approach.

Please rebase to see if travis goes green.

Also, could you add a test please?

req_file = os.path.join(
    SRC_DIR, repo, 'requirements.txt',
)
if os.path.isfile(req_file):
EAFP
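For illustration, an EAFP ("easier to ask forgiveness than permission") variant of the snippet under review might look like this sketch, with `SRC_DIR` and `repo` as in the diff and the function name invented here:

```python
# EAFP: attempt the open and handle a missing file, instead of
# checking os.path.isfile first (LBYL). Filtering on errno keeps
# other I/O errors (e.g. permission problems) visible.
import errno
import os


def read_repo_requirements(src_dir, repo):
    path = os.path.join(src_dir, repo, "requirements.txt")
    try:
        with open(path) as req_file:
            return req_file.read()
    except IOError as err:
        if err.errno != errno.ENOENT:
            raise
        return None
```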

@@ -0,0 +1,16 @@
#!/usr/bin/env python

I think you can remove this file.

@lasley
Copy link
Contributor Author

lasley commented Aug 10, 2017

I thought you were going to investigate external_dependencies on addons found in addons.yaml, not to read the requirements.txt files possibly in repos.

I haven't found a way to conquer this, but I am absolutely all ears if you have one. My issue is described here, but the TL;DR is that the PIP packages do not align with the Python namespaces they expose. We need to install the PIP packages, but the Odoo manifest uses the exposed namespaces.

@yajo yajo force-pushed the feature/requirements-auto branch from e4c1d0f to 4430752 Compare August 11, 2017 08:56
- Shortcut for installation that reduces the logic size.
- Cannot install from repositories before aggregation.
- Auto dependencies should come in before manual ones.
- Drop support of deprecated /opt/custom/src/requirements.txt file.
@yajo yajo force-pushed the feature/requirements-auto branch from 4430752 to a3cdb78 Compare August 11, 2017 09:04
@yajo yajo merged commit c7ec801 into Tecnativa:master Aug 24, 2017
@pedrobaeza
Member

I'm reverting this, as it causes images to not build, which we didn't anticipate.

The problem is the following:

  • all libraries declared in the repositories are being installed.
  • some of them require development headers to be present, but those are not automatically installed.
  • the build then fails.

We should resubmit this with that problem in mind.

@@ -18,6 +18,7 @@ ONBUILD ARG DEPTH_MERGE=100
ONBUILD ARG CLEAN=true
ONBUILD ARG COMPILE=true
ONBUILD ARG CONFIG_BUILD=true
ONBUILD ARG AUTO_REQUIREMENTS=true
Contributor Author

This should have been false

Member

Yeah, as a good, sane compatibility default, but the problem is still there, and you will face it when you set this param.

@lasley lasley Aug 24, 2017

This is more for a development box, and while it can cause issues it also is incredibly helpful (at least it has been for me).

Is there any reason to not leave this feature as is, just with the good default? This is one of the last things I need to move off of my fork.

Contributor Author

The issues are during build too, which is the perfect time to resolve them. It really isn't a problem when you're setting up the box, only when a bunch of pre-setup boxes get the wrong default IMO

Member

OK, please resend the PR with that defaulting to False.
