
Introduce JIT code generation #1849

Merged
merged 6 commits into from
Feb 24, 2022

Conversation

@yjshen (Member) commented Feb 17, 2022

Which issue does this PR close?

Closes #1850.

Rationale for this change

With JIT codegen, we could generate specialized code for each query, reducing the branching overhead of the generalized interpreted-mode execution. Furthermore, we could reduce the memory footprint during execution by chaining multiple Arrow compute kernels together and reusing the intermediate vectors.
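To make the kernel-chaining point concrete, here is a plain-Rust sketch (illustrative only, not code from this PR): the interpreted path materializes an intermediate vector between two compute kernels, while generated query-specific code can fuse them into one pass.

```rust
// Interpreted-style execution: two kernel passes with a materialized
// intermediate vector between them.
fn add_then_mul_kernels(a: &[f64], b: &[f64], c: &[f64]) -> Vec<f64> {
    let tmp: Vec<f64> = a.iter().zip(b).map(|(x, y)| x + y).collect(); // intermediate
    tmp.iter().zip(c).map(|(t, z)| t * z).collect()
}

// What fused, query-specific code can do instead: a single pass with
// no intermediate allocation.
fn add_then_mul_fused(a: &[f64], b: &[f64], c: &[f64]) -> Vec<f64> {
    a.iter().zip(b).zip(c).map(|((x, y), z)| (x + y) * z).collect()
}
```

Both return the same result; the fused form trades a reusable generic kernel for one allocation and one memory pass fewer.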

For the row format recently introduced in #1782, we could eliminate much of the branching once the row <-> record batch conversion code is generated from schema information.

What changes are included in this PR?

  • A new datafusion-jit module behind an off-by-default feature gate: jit
    • A set of code-construction APIs with a best-effort type-safety guarantee at compile time. Great thanks to @houqp for his help with the API design.
    • A code-generation runtime based on the Cranelift code generator.
  • A codegen version of the row -> record batch conversion, as an example.

Are there any user-facing changes?

No.

@github-actions bot added the datafusion (Changes in the datafusion crate) label Feb 17, 2022
)?;
Ok(())
},
)?;
@yjshen (Member, Author):
This is how we create an iterative version of Fibonacci calculation with the introduced FunctionBuilder API.
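For readers unfamiliar with the example, the logic the generated function implements is ordinary iterative Fibonacci. In plain Rust (a reference sketch of the computed logic, not the FunctionBuilder code itself, assuming the fib(0) = 0 convention) it looks like:

```rust
// Reference version of the iterative Fibonacci the generated code
// computes: loop n times, carrying the (prev, curr) pair.
fn iterative_fib(n: i64) -> i64 {
    let (mut prev, mut curr) = (0i64, 1i64);
    let mut i = 0i64;
    while i < n {
        let next = prev + curr;
        prev = curr;
        curr = next;
        i += 1;
    }
    prev
}
```

The FunctionBuilder API expresses the same loop as explicit basic blocks and typed values that Cranelift then compiles to machine code.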

}

#[cfg(feature = "jit")]
fn gen_read_row(schema: &Arc<Schema>, assembler: &Assembler) -> Result<*const u8> {
@yjshen (Member, Author):
This is the other example of how we generate code based on the schema to create a row to record batch deserializer.
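Conceptually (a hypothetical sketch with made-up names, not the PR's actual API), the generator walks the schema once and resolves the per-column type dispatch ahead of time, so no per-row match on the data type remains:

```rust
use std::convert::TryInto;

// Hypothetical illustration: choose each column's reader once from the
// schema, instead of matching on the data type for every row.
#[derive(Clone, Copy)]
enum ColType {
    Int64,
    Float64,
}

type ColReader = Box<dyn Fn(&[u8], usize) -> f64>;

fn make_readers(schema: &[ColType]) -> Vec<ColReader> {
    schema
        .iter()
        .map(|ct| -> ColReader {
            match ct {
                // The dispatch happens here, once per column...
                ColType::Int64 => Box::new(|row, off| {
                    i64::from_le_bytes(row[off..off + 8].try_into().unwrap()) as f64
                }),
                // ...so each per-row call is a direct, branch-free read.
                ColType::Float64 => Box::new(|row, off| {
                    f64::from_le_bytes(row[off..off + 8].try_into().unwrap())
                }),
            }
        })
        .collect()
}
```

A JIT goes one step further than closures: the per-column reads are compiled into a single straight-line function, removing even the indirect calls.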

@@ -44,6 +48,31 @@ pub fn read_as_batch(
output.output().map_err(DataFusionError::ArrowError)
}

/// Read `data` of raw-bytes rows starting at `offsets` out to a record batch
#[cfg(feature = "jit")]
pub fn read_as_batch_jit(
@yjshen (Member, Author):

And this is the example usage pattern: generate code and compile once, and run repeatedly.

@alamb (Contributor) commented Feb 17, 2022

This looks awesome @yjshen -- I will try and review it soon, though given its size and that I will be out of the office for the next few days it may take me some time.

@houqp (Member) left a comment:

Epic work @yjshen 👍 A solid foundation for WSCG (whole-stage code generation) :D

Review comment on datafusion/src/row/reader.rs (outdated, resolved)
@houqp added the enhancement (New feature or request) label Feb 18, 2022
@yordan-pavlov (Contributor):
That's a great idea @yjshen; have you measured the effects on performance?

@yjshen (Member, Author) commented Feb 19, 2022

Yes, I am working on a benchmark; I will post the results later.

Review comment on datafusion-jit/src/api.rs (outdated, resolved)
@yjshen (Member, Author) commented Feb 20, 2022

For the record batch to row conversion case, I saw a ~15% performance boost when using JIT.

Benchmarking row serializer: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 283.9s, or reduce sample count to 10.
row serializer          time:   [2.6387 s 2.6547 s 2.6711 s]
                        change: [-1.1837% -0.2705% +0.6336%] (p = 0.56 > 0.05)
                        No change in performance detected.

Benchmarking row serializer jit: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 240.6s, or reduce sample count to 10.
row serializer jit      time:   [2.2340 s 2.2833 s 2.3500 s]
                        change: [-1.1701% +0.8754% +4.0019%] (p = 0.57 > 0.05)
                        No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
  7 (7.00%) high severe

Review comment on datafusion-jit/src/api.rs (outdated, resolved)
Comment on lines +329 to +339
0 => write!(f, "nil"),
0x70 => write!(f, "bool"),
0x76 => write!(f, "i8"),
0x77 => write!(f, "i16"),
0x78 => write!(f, "i32"),
0x79 => write!(f, "i64"),
0x7b => write!(f, "f32"),
0x7c => write!(f, "f64"),
0x7e => write!(f, "small_ptr"),
0x7f => write!(f, "ptr"),
_ => write!(f, "unknown"),
A member commented:

Hmm, can we match with the defined const? e.g. BOOL.code, instead of the actual code value here?

@yjshen (Member, Author) replied:

It seems I cannot put BOOL.code here. Introducing another const may be too complex?
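For context on why BOOL.code cannot go directly in the match: Rust match patterns must be literals or constants, and a field access is a place expression, not a pattern. Hoisting the code into a plain const works without much ceremony (a sketch with hypothetical names, not this PR's definitions):

```rust
// A field access like BOOL.code is not a valid match pattern, but a
// const initialized from it is (names here are hypothetical).
struct JitType {
    code: u8,
}

const BOOL: JitType = JitType { code: 0x70 };
const BOOL_CODE: u8 = BOOL.code; // a legal const expression, usable in patterns

fn type_name(code: u8) -> &'static str {
    match code {
        0 => "nil",
        BOOL_CODE => "bool",
        _ => "unknown",
    }
}
```

The cost is one extra const per type code, which is the "another const" trade-off discussed above.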

Review comment on datafusion-jit/src/api.rs (resolved)
Review comment on datafusion-jit/src/jit.rs (outdated, resolved)
}

#[cfg(feature = "jit")]
pub fn bench_write_batch_jit_dummy(schema: Arc<Schema>) -> Result<()> {
A contributor commented:

where is this bench_write_batch_jit_dummy function used? if it isn't used should it be removed?

@yjshen (Member, Author) replied:

It's used in the JIT bench now; I checked in the benchmark in a later commit.

@houqp (Member) commented Feb 23, 2022

@viirya @alamb @yordan-pavlov do you want to take a final look before the merge?

@yordan-pavlov (Contributor) left a comment:

Looks good to me, thanks @yjshen

@houqp merged commit 9e75ff5 into apache:master on Feb 24, 2022
@yjshen deleted the jit branch on February 26, 2022 03:55
@alamb (Contributor) left a comment:

I just had a chance to review this module. Very cool stuff @yjshen 🏅 .. I would be (very) interested in helping push the JIT feature along.

I think using this JIT and the row format to speed up sorting / merging would be a very interesting project and quite relevant to IOx (and thus I could justify spending non trivial time on it). Perhaps I can take a swag at creating some benchmarks or something to kick off the process?

I don't want to start working on anything if you are already doing so.

Comment on lines +72 to +74
let code_fn = unsafe {
std::mem::transmute::<_, fn(&RowReader, &mut MutableRecordBatch)>(code_ptr)
};
@alamb (Contributor):

over time it would be good to try and encapsulate the unsafe code into a smaller number of places (e.g. perhaps have an interface that creates a row comparator)
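One possible shape for that encapsulation (a hypothetical sketch, not code from this PR): keep the single transmute inside an unsafe constructor and expose only a safe call method, so call sites never touch unsafe directly.

```rust
// Hypothetical wrapper that confines the unsafe transmute to one place.
struct CompiledUnaryFn {
    f: fn(i64) -> i64,
}

impl CompiledUnaryFn {
    /// Safety: `ptr` must point to compiled code with exactly this signature,
    /// and the code must stay alive for as long as this wrapper is used.
    unsafe fn from_ptr(ptr: *const u8) -> Self {
        Self {
            f: std::mem::transmute::<*const u8, fn(i64) -> i64>(ptr),
        }
    }

    // Safe for callers once construction upheld the contract above.
    fn call(&self, x: i64) -> i64 {
        (self.f)(x)
    }
}
```

With this shape, only the code that obtains the pointer from the JIT module needs an unsafe block; downstream operators just call a safe method.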

@yjshen (Member, Author) commented Mar 1, 2022

> I just had a chance to review this module. Very cool stuff @yjshen 🏅 .. I would be (very) interested in helping push the JIT feature along.
>
> I think using this JIT and the row format to speed up sorting / merging would be a very interesting project and quite relevant to IOx (and thus I could justify spending non trivial time on it). Perhaps I can take a swag at creating some benchmarks or something to kick off the process?

Yes, please go ahead. I have just done some code research on DuckDB's radix sort, which works on the sort keys in raw-bytes format. I think it would be great to implement and benchmark that here. 1, 2, and 3 are worth checking if you want to try out the DuckDB approach we discussed in #1708 (comment).
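As a reference for that direction, a byte-wise (LSD) radix sort over fixed-width, big-endian sort keys has this general shape (a hedged sketch of the technique, not DuckDB's or DataFusion's code):

```rust
// LSD radix sort on fixed-width raw-byte keys: one stable counting
// pass per byte, from least to most significant.
fn radix_sort_keys(keys: &mut Vec<[u8; 8]>) {
    let n = keys.len();
    let mut buf = vec![[0u8; 8]; n];
    // Keys are big-endian, so byte 7 is the least significant.
    for byte in (0..8).rev() {
        // Histogram of this byte's values.
        let mut counts = [0usize; 256];
        for k in keys.iter() {
            counts[k[byte] as usize] += 1;
        }
        // Exclusive prefix sums give each bucket's start offset.
        let mut offsets = [0usize; 256];
        let mut sum = 0;
        for b in 0..256 {
            offsets[b] = sum;
            sum += counts[b];
        }
        // Stable scatter into the buffer, then swap for the next pass.
        for k in keys.iter() {
            let slot = &mut offsets[k[byte] as usize];
            buf[*slot] = *k;
            *slot += 1;
        }
        std::mem::swap(keys, &mut buf);
    }
}
```

Big-endian encoding makes bytewise order agree with numeric order for unsigned keys; signed integers and floats need the usual sign-flip transformations when the key bytes are produced.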

@alamb (Contributor) commented Mar 1, 2022

> Yes, please go ahead. I have just done some code research on DuckDB's radix sort, which works on the sort keys in raw-bytes format.

Sounds good -- I think I need to spend some more time studying the new JIT code and figuring out how to structure these changes. At least for merging, I need to think about how best to use the row format. I will keep you updated on my thinking.

@alamb (Contributor) commented Mar 6, 2022

BTW I have not forgotten (or reduced my interest in) working on JIT related code. However, I have not had any time to devote to it yet -- most of my time has been spent reviewing code and moving arrow release along. I'll try and get some time in this next week.

@yjshen (Member, Author) commented Mar 6, 2022

No need to hurry. I've turned my attention to aggregate for now and will do some benchmarks to evaluate the impact of row format on performance under different key cardinalities.

Labels: datafusion (Changes in the datafusion crate), enhancement (New feature or request)

Successfully merging this pull request may close these issues:

  • Introduce JIT code generation for performance improvement

6 participants