Analyze assembly error - Reproduction of #528 and #2650 #2655
Conversation
It looks to me like the main class can't be found if the jar reaches a specific size and the main class is located at the end of the jar. I made some tests with my alpha-grade mill-spring-boot plugin.
One thing I've found is that even …, but if I manually cut out the prependShellScript of the problematic assembly jar and create a new jar (as shown below), then the main class can be found again.
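As an illustration only (these are not the commands used in that test), here is a minimal sketch of one way to make that cut: find the first zip local file header signature (PK\3\4) in the prepended assembly and keep everything from there onward.

```scala
import java.nio.file.{Files, Paths}

// Illustrative sketch (not the original commands): strip a prepended launcher
// script from an executable assembly by locating the first zip local file
// header signature 0x504B0304 ("PK\3\4") and writing the remainder out as a
// plain jar. Input and output paths are taken from the command line.
object StripPrependedScript {
  def main(args: Array[String]): Unit = {
    val bytes = Files.readAllBytes(Paths.get(args(0)))
    val sig   = Array[Byte](0x50, 0x4B, 0x03, 0x04)
    val start = (0 to bytes.length - sig.length)
      .indexWhere(i => sig.indices.forall(j => bytes(i + j) == sig(j)))
    require(start >= 0, "no zip local file header found")
    Files.write(Paths.get(args(1)), bytes.drop(start))
  }
}
```

A caveat: the signature could in principle also occur inside the launcher script itself, so this is only good enough for a quick experiment.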
Also, could it be some kind of limitation in the Java zip reading implementation? It seems unlikely, but given that …
The JDK Java code in that stack trace is https://github.com/openjdk/jdk/blob/9454b2bbe130fdbe86485b928b80d19156c709ee/src/java.base/share/classes/java/util/zip/ZipFile.java#L1189-L1217
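For completeness, that same JDK read path can be exercised outside of java -jar. A minimal sketch follows; the jar path and entry name are placeholders, and whether it throws or merely fails to find the entry depends on the jar in question.

```scala
import java.util.zip.ZipFile

// Minimal sketch: open the prepended assembly with the JDK's own zip reader,
// i.e. the code path referenced in the stack trace above. Path and entry
// name are placeholders.
object PokeJdkZipReader {
  def main(args: Array[String]): Unit = {
    val zf = new ZipFile("out.jar") // may already fail here for a broken jar
    try println(Option(zf.getEntry("foo/bar/Main.class")))
    finally zf.close()
  }
}
```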
Another data point: taking the problematic jar and re-creating it with a different zip writer still reproduces the failure. This somewhat confirms that it isn't our implementation of create-a-zip-file that is problematic: a native Unix unzip can read the prepended jar file without problems. That means the problem has to be the interaction between the JVM zip reading code and the prepended shell script, since the problem remains even after swapping out the zip writing code, but it goes away if we remove the prepended shell script or use a non-JVM program to read the prepended jar file.
I made almost the same tests a while ago. It's worth noting that it is not the sheer file size that makes the JRE zip implementation fail, as an assembly jar built with mill-spring-boot and the exact same dependencies can be started. The difference is that the Spring Boot assembly contains far fewer zip entries, since it does not unpack all the dependencies but rather just embeds the original jars. So it's either the entry count (= catalog size) or the position (index) of the main class in that catalog. An interesting test would be to add the main class first (rather than appending it to the upstream assembly jar), though from looking at the JDK code I don't think it will fix it. Maybe we can count the zip entries ourselves and print a warning if we detect that the jar will be too large to be executable. I also wonder: we can't be the first to fall into that trap.
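To put numbers on the catalog-size hypothesis, the entry counts of the two assemblies can be compared directly. A small sketch, with both jar paths as placeholders, using java.util.zip.ZipFile, whose size() method returns the number of entries in the central directory:

```scala
import java.util.zip.ZipFile

// Small sketch: compare the number of zip entries (the central directory
// size) of two assembly jars. Both paths are placeholders.
object CompareEntryCounts {
  def entryCount(path: String): Int = {
    val zf = new ZipFile(path)
    try zf.size() finally zf.close()
  }

  def main(args: Array[String]): Unit = {
    println(s"mill assembly:        ${entryCount("out/exe/large/assembly.dest/out.jar")}")
    println(s"spring-boot assembly: ${entryCount("spring-boot-assembly.jar")}")
  }
}
```

If the prepended mill assembly cannot be opened at all, the same count can be taken on the jar content before the launcher script is prepended.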
Another data point: I tried this with sbt 1.8.2 and sbt-assembly 2.2.1, and it has the same failure mode.
Another data point is that …
Final bit of debugging: it appears that the misbehavior of the prepended executable assembly kicks in at exactly 65535 zip entries. A prepended jar with 65534 entries can be executed, while one with 65535 entries cannot. This was tested by applying the following diff:

$ git diff
diff --git a/integration/feature/assembly/repo/build.sc b/integration/feature/assembly/repo/build.sc
index 43d09cd415..62bf22aff7 100644
--- a/integration/feature/assembly/repo/build.sc
+++ b/integration/feature/assembly/repo/build.sc
@@ -30,5 +30,7 @@ object noExe extends Module {
object exe extends Module {
object small extends Setup
- object large extends Setup with ExtraDeps
+ object large extends Setup with ExtraDeps{
+ override def mainClass = Some("foo.bar.Main")
+ }
}
diff --git a/scalalib/src/mill/scalalib/Assembly.scala b/scalalib/src/mill/scalalib/Assembly.scala
index cd25d18152..92c1a81448 100644
--- a/scalalib/src/mill/scalalib/Assembly.scala
+++ b/scalalib/src/mill/scalalib/Assembly.scala
@@ -223,9 +223,12 @@ object Assembly {
)
manifest.build.write(manifestOut)
manifestOut.close()
-
+ var writeCount = 0
val (mappings, resourceCleaner) = Assembly.loadShadedClasspath(inputPaths, assemblyRules)
try {
+ // Good: 32000 48000 56000 60000 62000 62750 62762 62763 62764 62766
+ // Bad: 62767 62770 62775 63000 63016 63032 63064 63125 63250 63500 64000
+ val max = 62767
Assembly.groupAssemblyEntries(mappings, assemblyRules).foreach {
case (mapping, entry) =>
val path = zipFs.getPath(mapping).toAbsolutePath
@@ -238,8 +241,16 @@ object Assembly {
val cleaned = if (Files.exists(path)) separated else separated.drop(1)
val concatenated =
new SequenceInputStream(Collections.enumeration(cleaned.asJava))
- writeEntry(path, concatenated, append = true)
- case entry: WriteOnceEntry => writeEntry(path, entry.inputStream(), append = false)
+
+ if (writeCount < max){
+ writeCount += 1
+ writeEntry(path, concatenated, append = true)
+ }
+ case entry: WriteOnceEntry =>
+ if (writeCount < max) {
+ writeCount += 1
+ writeEntry(path, entry.inputStream(), append = false)
+ }
}
}
} finally {

And then running the resulting assembly jar to check whether it still executes successfully.
It seems that there must be some change in the handling of zip files when the number of entries reaches 65535.
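One plausible explanation (my reading of the zip format, not something stated in this thread): 65535 is 0xFFFF, the maximum of the 16-bit total-entry-count field in the zip end-of-central-directory record, and a reader that sees 0xFFFF there is expected to fall back to the ZIP64 end-of-central-directory, a lookup that could plausibly be thrown off by a prepended shell script. A small sketch for inspecting that field; the jar path comes from the command line:

```scala
import java.nio.file.{Files, Paths}
import java.nio.{ByteBuffer, ByteOrder}

// Sketch: locate the zip end-of-central-directory (EOCD) record and print
// its 16-bit "total entries" field. 0xFFFF (65535) is the field's maximum;
// seeing it usually means a ZIP64 EOCD must be consulted instead.
object EocdEntryCount {
  def main(args: Array[String]): Unit = {
    val bytes = Files.readAllBytes(Paths.get(args(0)))
    val buf   = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN)
    // Scan backwards for the EOCD signature "PK\5\6" (0x06054b50 little-endian)
    val eocd = (bytes.length - 22 to 0 by -1)
      .find(i => buf.getInt(i) == 0x06054b50)
      .getOrElse(sys.error("EOCD record not found"))
    val totalEntries = buf.getShort(eocd + 10) & 0xFFFF // entry count field at offset 10
    val note = if (totalEntries == 0xFFFF) " (0xFFFF: ZIP64 lookup required)" else ""
    println(s"EOCD total entry count field: $totalEntries$note")
  }
}
```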
Thanks for this great analysis, @lihaoyi. It would be nice if we could evaluate the entry count and warn the user if it is too large. In case of too many entries we could continue, fail, or produce a jar without the prepend script. I don't know which option is best, though; all of them are somewhat unpredictable. Or we make the behavior configurable and default to fail, which feels like the best option to me.
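A rough illustration of that kind of guard (a hypothetical sketch, not the code that ended up in PR #3140), assuming a hook that knows how many entries were written into the assembly and whether a launcher script is prepended:

```scala
// Hypothetical guard (not the actual PR #3140 implementation): given the
// number of entries written into the assembly and whether a launcher script
// is prepended, fail the build at the 65535-entry boundary observed above.
object AssemblyChecks {
  val MaxExecutableEntries = 65535

  def checkEntryCount(entryCount: Int, hasPrependScript: Boolean): Unit =
    if (hasPrependScript && entryCount >= MaxExecutableEntries) {
      sys.error(
        s"Assembly contains $entryCount entries; an assembly with a prepended " +
          "launcher script and 65535 or more entries cannot be executed directly. " +
          "Reduce the number of entries or disable the prepend script."
      )
    }
}
```

Failing by default at build time, as suggested, seems preferable to producing a jar that only breaks when launched.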
I opened PR #3140, which detects faulty assembly JARs based on Haoyi's findings and fails the build with an actionable error message.
Merged as part of #3140.