
perf: Parallelize parquet metadata deserialization #17399

Merged: 1 commit merged into main from meta on Jul 3, 2024
Conversation

ritchie46 (Member)

Deserializing wide parquet metadata is expensive. This PR ensures we do it in parallel.

closes #17259

@nameexhaustion FYI.
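For illustration, here is a minimal sketch of the kind of parallelization described above, using rayon for the thread pool (an assumption for this sketch). The names `ColumnMeta`, `decode_column_meta`, and the raw byte buffers are hypothetical stand-ins, not the code in this PR:

```rust
// Illustrative sketch only; not the actual Polars implementation.
// Parquet file metadata is thrift-encoded and contains one entry per column
// chunk, so wide schemas produce many independent blocks to decode.
use rayon::prelude::*;

#[derive(Debug)]
struct ColumnMeta {
    name: String,
    num_values: u64,
}

// Hypothetical decoder for one column's serialized metadata block.
// A real implementation would parse thrift-compact bytes here.
fn decode_column_meta(raw: &[u8]) -> ColumnMeta {
    ColumnMeta {
        name: format!("col_{}", raw.first().copied().unwrap_or(0)),
        num_values: raw.len() as u64,
    }
}

// Decode all per-column blocks in parallel. Sequentially this would use
// `iter()`; rayon's `par_iter()` fans the independent decodes out over its
// worker threads instead, which pays off when there are thousands of columns.
fn decode_all(raw_columns: &[Vec<u8>]) -> Vec<ColumnMeta> {
    raw_columns
        .par_iter()
        .map(|raw| decode_column_meta(raw))
        .collect()
}

fn main() {
    // Simulate a wide schema: 10_000 small metadata blocks.
    let raw: Vec<Vec<u8>> = (0..10_000u32)
        .map(|i| vec![(i % 256) as u8; 64])
        .collect();
    let metas = decode_all(&raw);
    println!("decoded {} column metadata entries", metas.len());
}
```

The gain shows up for wide schemas: the per-column decoding work is independent, so it parallelizes cleanly, while narrow files see little difference.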

@github-actions bot added the performance (Performance issues or improvements), python (Related to Python Polars), and rust (Related to Rust Polars) labels on Jul 3, 2024
@ritchie46 merged commit 3a3600c into main on Jul 3, 2024 (20 checks passed)
@ritchie46 deleted the meta branch on July 3, 2024 at 13:02
@nameexhaustion (Collaborator)

Ah, in hindsight the code really did seem to not want to parse the metadata 😅

Development

Successfully merging this pull request may close these issues.

Loading wide parquet data with scan_parquet is orders of magnitude slower than long data (#17259)