Bug 569521 - An option in tycho configuration that let tycho always use cached content of the metadata. #140
IMO, what you suggest should be the default behavior.
I recently optimized the offline behavior of target handling; still, the assumption is not necessary at all and, IMO, false for this to work. Have you in the meantime considered using/installing a local caching proxy?
I don't get this: that file, after its initial download (i.e. after I change a URL or a version), should never be downloaded again.
No, not really; that is another thing to set up and maintain, and the local copy is already there on disk.
Yes, this kind of works, but if it goes wrong it goes wrong: if I add that --offline flag to all our Jenkins projects, I have the opposite problem. Then, when I check in a pom file with dependency changes or a new target file, it suddenly bombs out. By default, mvn (not counting Tycho) doesn't really need that --offline flag for me, because mvn doesn't go online anyway for the stuff it already downloaded. That's really something Tycho does.
Not sure what part is unclear but for
Your target uses for the part
I mean that Tycho could fall back to already-downloaded artifacts whenever it fails to access a remote file, and thus no hash code or anything like it is needed anyway.
It is often the case that there is a Nexus proxy as well (as Maven mirrors can be failing/unreliable/slow/...), which might also act as a p2 proxy. At least you have the following options to give this more priority:
I just NEVER see Tycho doing a download after the initial download when I change the target. How can dependencies change? Everything should be fixed. (In my npm package.json files I also have hard versions; only a human can be responsible for upping stuff.) Should I then remove includeMode="planner"?

Because it's really simple: I want our Jenkins, which auto-builds our product, to always produce exactly the same product build whether I create it now or in 2 months (if I didn't change the target). So I would need to install a proxy on the same server as Jenkins, and that Jenkins server should then be redirected to always use that proxy for everything (or for specific URLs).

I fully understand that if I change something in the target (or plain mvn dependencies) and there is a network problem at that moment, the build fails; that is fully logical. But after that it should just work. Resolve/plan only once per target "revision".
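For reference, includeMode is set per location inside the .target file. A minimal, hypothetical location entry looks roughly like this (the IU id, version, and URL are placeholders, not taken from the thread):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<target name="example" sequenceNumber="1">
  <locations>
    <!-- includeMode="slicer" takes only the listed IUs and their requirements;
         includeMode="planner" runs a full p2 planner resolution over the location. -->
    <location includeAllPlatforms="false" includeMode="planner"
              includeSource="true" type="InstallableUnit">
      <unit id="org.example.feature.feature.group" version="1.2.3"/>
      <repository location="https://example.org/p2/repo"/>
    </location>
  </locations>
</target>
```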
The content of remote repositories can change; more specifically, some content can be removed or replaced, and as a consumer of this content you'll want your build to react to those changes (fail if something is missing, or get a newer version if something better exists). Artifacts in p2 repositories are like Maven snapshots: they do not carry any guarantee that they won't change upstream.
I just wanted to note that this is possible (and when). Just think about the trivial case that your Jenkins has crashed, you need to start from an empty workspace, and one of the external servers was brought down/offline; then you won't be able to build. Also, someone might have decided in the meantime to deploy newer versions. Everything is possible...
Not necessarily; it could be a dedicated server as well. See Nexus, for example, which can act as a proxy for Maven as well as for p2.
The usual way is to specify mirror mappings in the Maven settings.xml. If you regularly back up that proxy server, then even if Eclipse and Maven Central decided to shut down, you would always be able to build your 2-month-old job :-)
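As a concrete sketch of such a mirror mapping (the host name and repository id below are made up), a p2-aware proxy can be wired in via settings.xml; Tycho matches p2 repositories through the `p2` layout:

```xml
<settings>
  <mirrors>
    <!-- hypothetical: route requests for the p2 repository declared with
         id "eclipse-2021-03" through a local Nexus/Artifactory proxy -->
    <mirror>
      <id>local-p2-proxy</id>
      <mirrorOf>eclipse-2021-03</mirrorOf>
      <url>http://nexus.example.com/repository/eclipse-2021-03/</url>
      <layout>p2</layout>
      <mirrorOfLayouts>p2</mirrorOfLayouts>
    </mirror>
  </mirrors>
</settings>
```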
Now, today, many Eclipse update sites were or still are down. Many things in our Jenkins build pipelines are not building anymore; many jobs are all "red". Am I really the only one who finds this the most annoying "feature" of Tycho?? I just started to pick up something completely different today, because the 2 cases I wanted to work on both depend on building stuff that does not work right now. I still don't get why you are saying something can change; for me that is just not the case, everything in my target is hard-coded to specific versions: https://github.com/Servoy/servoy-eclipse/blob/master/launch_targets/com.servoy.eclipse.target.target I really don't see how, after 1 download, anything can ever change.
As the Eclipse-Jenkins itself is down, whatever "caching" would be in place won't help me for the jobs ;-)
Why not start hacking on the code? :-)
Our jobs would be fine if Tycho didn't constantly need to fetch the same thing over and over again; we don't use the Eclipse Jenkins.
Is there a mojo in Tycho that generates a full p2 site (with everything, across all supported platforms) from the target file? Maybe I can do the same thing as we already do for our developers (to get around this problem of sites going down), where we export our main.target file to local disk and also have a "local.target" which points to that dir, so the developers never have to go over the internet to download something. Because if that weren't the case, then very likely at least one person on my team would suddenly have a problem because Eclipse can't resolve the target when it tries to resolve and download it (we had that many times in the past: suddenly one developer couldn't really work anymore because the target was screwed). But if I could have a special mojo/profile for my target file that exports everything the target contains, I could make a backup of the stuff and put it in our own S3 bucket.
As mentioned before, your best choice would be using a Nexus as a p2 mirror. Besides that, there is a mirror task, but I have never used it: https://www.eclipse.org/tycho/sitedocs/tycho-extras/tycho-p2-extras-plugin/mirror-mojo.html
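I have not run it either, but based on the mojo's documentation a configuration might look roughly like this (the version, source URL, IU id, and destination here are placeholders):

```xml
<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-p2-extras-plugin</artifactId>
  <version>2.3.0</version>
  <executions>
    <execution>
      <phase>prepare-package</phase>
      <goals>
        <goal>mirror</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <source>
      <!-- the p2 repository to mirror from -->
      <repository>
        <url>https://download.eclipse.org/releases/2021-03</url>
      </repository>
    </source>
    <!-- restrict mirroring to selected installable units -->
    <ius>
      <iu>
        <id>org.example.some.feature.feature.group</id>
      </iu>
    </ius>
    <destination>${project.build.directory}/mirror</destination>
  </configuration>
</plugin>
```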
But that seems to really mirror a whole p2 site (and there are quite a few of them), not to mention that I use 2 Orbit sites from which I only take a few jars. If I have to guess, my combined p2 repo would be in the gigabytes... The best thing would be if I could somehow do the same thing the target editor can do (export). The p2 site that is generated for our product has a size of just over 1 GB across 3 platforms (Windows, Linux, macOS), but until now that also includes the pack.gz files, so it will be way less once those are not generated anymore. The problem is that faking it is also quite hard. But I understand there is no Tycho option for what the target file editor can do; that's a pity, because it would help me. Then, whenever I update the target file, I would just run the exporter once to generate a p2 site from the target file for all platforms, push that to an S3 bucket, and then have another target (the one the build uses) that just uses that single repo. What I also could do is, whenever I change the target file, build our product once and feed the generated repo back into itself...
It seems quite configurable but, as mentioned, I haven't used it before, so you might need to play around with it a bit, I think... If I see it right, you can specify the IUs to mirror...
That's why I would suggest a dedicated solution for this. Nexus will only mirror the artifacts that you actually requested, and if you add new items they will automatically be mirrored as well; you don't need to modify your target, only supply a mirror entry in your Maven settings, and you can use the same mirror for the build server and for local development...
A better approach would be to create an update-site, you can then control what is included and what not, and you can also control if transient items should be included.
You can clone the Tycho repository and create a mojo for this that takes a target as input and mirrors it. I even once started such a feature, the idea being that one would be able to deploy a target plus its content as a Maven artifact, to have a true, never-changing target, but I dropped work on it because it did not work as well for large targets as I initially expected.
If you don't want to contribute to Tycho yourself, and it really would help you with your business, as mentioned before you can sponsor me, either via GitHub or directly (just let me know if you need a personalized offer), to work on this bug/feature or any additional feature you feel is missing. Another option would be to find more people requesting this feature, so it becomes more likely that someone else would like to contribute to this user story...
If I remember right, @mickaelistria has done some fancy stuff for JBoss Tools; maybe there was something similar available there? But I have never used that either...
I think the following issues could be related:
I am trying a bit to set up a Nexus repo; you can't group... But that would mean I have a big list of repos, some of which are also constantly changing. First update the target, then go to the settings.xml in all the places that want to use the proxy and also change/add the new proxy URL there? That's a lot of work all the time and a lot of maintenance...
Created a pull request: it's a bit in between offline mode, which doesn't go remote at all, and fully online only.
Didn't you state your target never changes? ;-) Yes, there is some kind of work needed, but one has to decide what the goals are here. As far as I remember, you need to at least add each mirror URL once to the Nexus (I think that's acceptable), and then you have two choices:
Great, I'll take a look at it asap.
No, my target changes once every 3 months (when I update our product to a new build), and then it should download all the new stuff; but after that, as long as the target file doesn't change, Tycho never has to download anything again. I give up on Nexus; that is just way too much work and way too much maintenance. Tycho should just not fail in this scenario.

I updated the pull request, because when I went to bed yesterday (at 1 o'clock...) I suddenly thought: wait, I don't need to change the Transport (to be offline-first). I can just use the normal one. So in my latest pull, RemoteRepositoryCacheManager will never throw the error below, which makes it stop completely; it should first also try the other suffix.

Caused by: org.eclipse.equinox.p2.core.ProvisionException: Unknown Host: http://developer.servoy.com/sqlexplorer/content.xml.xz

The problem for me is in AbstractRepositoryManager.loadRepository (p2 code): if the first suffix already fails, it just stops. The reason it tries all the suffixes is that the p2.index file, which normally says which suffixes should be used, is not cached, so it falls back to a default order.
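For context, p2.index is a small properties file at the repository root that tells clients which metadata files to try, in order. A typical one looks like this (the trailing `!` terminates the search):

```properties
version=1
metadata.repository.factory.order=content.xml.xz,content.xml,!
artifact.repository.factory.order=artifacts.xml.xz,artifacts.xml,!
```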
OK, that's really annoying: everything is private in AbstractRepositoryManager in the p2 code.
@jcompagner I think it might be good to first make the p2 code more extensible and allow caching the p2.index?
Yes, I think so; otherwise we get hacks like:

```java
public class CachingTransport extends Transport {
    public IStatus download(URI toDownload, OutputStream target, IProgressMonitor monitor) {
        // serve the cached copy first; only hit the network on a cache miss
    }
}
```
If you have opened bugs/enhancement requests, could you please link them here? The best would be to provide a Gerrit patch alongside the request, of course.
@jcompagner Not sure if you have been successful in setting up your Nexus meanwhile. I can only say that we run Artifactory in our company, and it took like 10 minutes to set up an Eclipse download mirror. It's not necessary to do that for each p2 site; we simply mirror download.eclipse.org (or some subdirectory, not sure right now). That's also not a problem with disk space, since files are retrieved on first access only, so you need only as much disk space as a composite update site of your (transitive) dependencies would take. In our target files we always use fully versioned URLs, i.e. something like download.eclipse.org/.../xtext/2.25, so we can ensure that target file and p2 update site changes happen in sync. That is independent of whether you use a mirror or not, but it really makes life simpler than relying on top-level composite update sites and Tycho downloading new things or not.
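As an illustration of that last point, pinning a release-specific repository path in the target file (the exact path here is hypothetical, not taken from the thread) avoids the moving top-level composite site:

```xml
<!-- pinned to one specific release instead of a moving composite site
     such as download.eclipse.org/releases/latest -->
<repository location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0/"/>
```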
https://bugs.eclipse.org/bugs/show_bug.cgi?id=569521
This is still quite important to me. As an example: today nothing builds anymore for us because one target URL is constantly failing, so all our builds and all our tests are not running at all.

I really would like an option such that, if the hash of the target file (i.e. its contents) is the same, everything the target file needs can come from a cached location (keyed by that hash) if it is cached, with the first download simply placing it under that cache.

For me, if the target file doesn't change, then the downloads the target file triggers will never change either (because everything is fixed with hard version numbers).
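The keying idea above can be sketched like this; note this is illustrative code only, not part of Tycho, and the class and method names are made up:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class TargetCacheKey {

    // Derive a stable cache-directory name from the target file's contents:
    // identical target contents always map to the same cached metadata,
    // so an unchanged target never needs to go to the network again.
    static String cacheKey(String targetFileContents) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha.digest(targetFileContents.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest); // 64 lowercase hex chars
    }

    public static void main(String[] args) throws Exception {
        System.out.println(cacheKey("<target name=\"demo\"/>"));
    }
}
```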