HDDS-1333. OzoneFileSystem can't work with spark/hadoop2.7 because of incompatible security classes #653
Conversation
…compatible security classes. Contributed by Elek, Marton.
There are many unrelated changes in the PR. Can you rebase and update?
This is a branch for 0.4.0, and I accidentally created the PR against trunk. I've fixed it now.
```
<resources>
  <resource>
    <directory>src/main/compose</directory>
    <filtering>true</filtering>
```
Are we filtering any files from the compose dir?
Yes, I think it's better to filter all the docker-compose and docker-config files.
```
ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
command: ["/opt/hadoop/bin/ozone","scm"]
hadoop3:
hadoop32:
```
Shall we separate the Hadoop 2 and 3 compose files as first class (i.e. create two separate compose dirs for them)?
Can do, if you prefer it, but I can't see any advantage. It would duplicate the work to maintain these docker-compose files. What I would do instead (long-term) is create a parameter (.env file) for the Hadoop version and parameterize the test. But that would be much slower (instead of starting one Ozone cluster, we would need to start it again and again for each version).
One advantage I can think of is to have separate test suites for Hadoop 2 and 3. With Hadoop 2 and 3 being first class, we could change their configs independently in the future. However, I am OK with this approach for the current patch. We can split it later as desired.
```
*** Keywords ***

Test hadoop dfs
    [arguments]    ${imagename}
```
Since imagename is different each time, shall we move this to the variable section? (i.e. key names will still not collide)
You are right. It's not an image name; in fact it's just a prefix of the name. It will be fixed in the last commit.
```
    Should not contain    ${result}    Failed
    Should contain    ${result}    Creating Volume: ${volume}

Create bucket
    Execute    ozone sh bucket create /${volume}/${bucket}
```
After this, shall we check whether the bucket creation succeeded by listing it?
Thanks. Added an additional bucket info call.
```java
/**
 * Adapter to convert OzoneKey to a safe and simple Key implementation.
 */
public static class IteratorAdapter implements Iterator<BasicKeyInfo> {
```
Is OzoneKeyIterator more relevant?
Not sure what the question is. Can you please elaborate?
I mean, shall we rename IteratorAdapter to OzoneKeyIterator?
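For context on what this adapter does: it lazily converts each element of a source iterator into a simpler type. A minimal, self-contained sketch of the pattern, assuming nothing about the real Ozone classes (the generic parameters below are stand-ins for OzoneKey and BasicKeyInfo):

```java
import java.util.Iterator;
import java.util.function.Function;

/** Generic form of the adapter: converts elements of a source iterator lazily. */
class ConvertingIterator<S, T> implements Iterator<T> {
  private final Iterator<S> source;
  private final Function<S, T> convert;

  ConvertingIterator(Iterator<S> source, Function<S, T> convert) {
    this.source = source;
    this.convert = convert;
  }

  @Override
  public boolean hasNext() {
    return source.hasNext();
  }

  @Override
  public T next() {
    // Each element is converted on demand; the source is never copied.
    return convert.apply(source.next());
  }
}
```

In the patch the source elements are OzoneKeys and the targets are BasicKeyInfos (per the Javadoc above), which keeps Ozone client types out of the FileSystem-facing layer.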
```
</configuration>
```
_Note_: You may also use `org.apache.hadoop.fs.ozone.OzoneFileSystem` without the `Basic` prefix. The `Basic` version doesn't support FS statistics and security tokens but can work together with older hadoop versions. |
bq. "The Basic
version doesn't support FS statistics and security tokens but can work together with older hadoop versions."
This is not accurate. If I understand correctly, the BasicOzoneFileSystem also support delegation token APIs but not FS statistics.
OK, thanks. It's just that I have limited knowledge about the KeyProviderTokenIssuer. Can I write:

The `Basic` version doesn't support FS statistics and encryption zones but....
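For readers following this thread: the compatibility pattern under discussion keeps every reference to Hadoop-3-only types (such as `KeyProviderTokenIssuer`) out of the `Basic` class and confines them to the subclass. A minimal, self-contained sketch of the idea — all types below are local stand-ins, not the real Hadoop or Ozone classes:

```java
/** Stand-in for a Hadoop-3-only interface such as KeyProviderTokenIssuer. */
interface Hadoop3OnlyFeature {
  Object getKeyProvider();
}

/** Stand-in for BasicOzoneFileSystem: core logic, no Hadoop-3-only references. */
class BasicFs {
  void mkdir(String path) {
    // ... core filesystem logic that links fine against Hadoop 2.7 ...
  }
}

/** Stand-in for OzoneFileSystem: adds the Hadoop-3-only capabilities on top. */
class FullFs extends BasicFs implements Hadoop3OnlyFeature {
  @Override
  public Object getKeyProvider() {
    return null; // would delegate to a real KMS key provider
  }
}
```

The JVM resolves a class's superinterfaces when the class is defined, so on a Hadoop 2.7 classpath merely loading `FullFs` throws `NoClassDefFoundError`, while `BasicFs` loads fine because it never mentions the missing interface. This is exactly the failure mode in the stack trace quoted in the description below.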
+1
The current ozonefs compatibility layer is broken by HDDS-1299.
Spark jobs (including on hadoop 2.7) can't be executed any more:
{code}
2019-03-25 09:50:08 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
at org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
at org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 43 more
{code}
See: https://issues.apache.org/jira/browse/HDDS-1333
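A hypothetical illustration of how a deployment could pick the right implementation at runtime by probing the classpath (this selector is not part of the patch — the patch fixes the problem by providing the `Basic` classes — but it shows why the split helps; `fs.o3fs.impl` follows Hadoop's `fs.<scheme>.impl` convention):

```java
public class FsImplSelector {

  /** Checks whether a class can be loaded, without initializing it. */
  static boolean isClassPresent(String name) {
    try {
      Class.forName(name, false, FsImplSelector.class.getClassLoader());
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // The token-issuer interface exists on Hadoop 3 but not on Hadoop 2.7.
    String impl =
        isClassPresent("org.apache.hadoop.crypto.key.KeyProviderTokenIssuer")
            ? "org.apache.hadoop.fs.ozone.OzoneFileSystem"        // full-featured
            : "org.apache.hadoop.fs.ozone.BasicOzoneFileSystem";  // Hadoop-2-safe
    System.out.println("fs.o3fs.impl = " + impl);
  }
}
```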