doc: Document CometPlugin to start Comet in cluster mode #836

Merged 6 commits on Aug 19, 2024
4 changes: 4 additions & 0 deletions docs/source/user-guide/installation.md
@@ -152,3 +152,7 @@ To enable columnar shuffle which supports all partitioning and basic complex types
```
--conf spark.comet.exec.shuffle.mode=jvm
```

### Cluster mode

When running Comet in cluster mode, you may need to set the additional Spark configuration `--conf spark.plugins=org.apache.spark.CometPlugin` so that the cluster resource manager respects the Comet memory parameters. See [Memory Tuning in cluster mode](./tuning.md#memory-tuning-in-cluster-mode) for details.
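As a minimal sketch of a cluster-mode launch (the master URL, deploy mode, and application jar below are illustrative placeholders, and `$COMET_JAR` is assumed to point at the Comet jar built for your Spark version):

```
spark-submit \
    --master <cluster-manager-url> \
    --deploy-mode cluster \
    --jars $COMET_JAR \
    --conf spark.plugins=org.apache.spark.CometPlugin \
    --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
    --conf spark.comet.enabled=true \
    <your-application>.jar
```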
9 changes: 9 additions & 0 deletions docs/source/user-guide/tuning.md
@@ -37,6 +37,15 @@ Comet will allocate at least `spark.comet.memory.overhead.min` memory.

If both `spark.comet.memoryOverhead` and `spark.comet.memory.overhead.factor` are set, the former will be used.

## Memory Tuning in cluster mode

When running Comet on a cluster manager such as Kubernetes or YARN, pass the additional Spark configuration parameter `--conf spark.plugins=org.apache.spark.CometPlugin` on the command line so that the resource manager correctly respects the Comet memory parameters (`spark.comet.memory*`).
Member: I'm curious why we don't specify the plugin in all cases instead of `--conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions`, since the plugin does (or can) register the session extensions? Other Spark accelerators specify a plugin.
Contributor Author: That's a great question. It's not needed for a local run, but in enterprises Spark is mostly started on clusters. Perhaps we can also update the installation guide and add a remark about the plugin for people who run Comet in cluster mode.

Member: I think it is mostly because we developed the extension at the beginning of the Comet project, so all the documentation and conventions specify the extension. We developed the Comet plugin later, when we wanted it to make memory configuration easier.


The resource manager respects the Apache Spark memory configuration before starting the containers.

The `CometPlugin` overrides `spark.executor.memoryOverhead` by adding the Comet memory overhead on top of it.
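As an illustrative sketch with assumed figures (none of these values come from this change), consider an executor configured like this:

```
--conf spark.executor.memory=8g
--conf spark.executor.memoryOverhead=2g
--conf spark.comet.memoryOverhead=1g
```

With the plugin enabled, the effective `spark.executor.memoryOverhead` requested from the resource manager becomes 2g + 1g = 3g, so the container is sized to include Comet's native memory as well.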


## Shuffle

Comet provides shuffle features that can be used to improve the performance of your queries.