feat: support 'col IN (a, b, c)' type expressions #652
base: main
Conversation
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #652      +/-   ##
==========================================
+ Coverage   84.08%   84.50%   +0.42%
==========================================
  Files          77       78       +1
  Lines       17777    18340     +563
  Branches    17777    18340     +563
==========================================
+ Hits        14948    15499     +551
+ Misses       2115     2097      -18
- Partials      714      744      +30
☔ View full report in Codecov by Sentry.
@@ -208,7 +208,7 @@ impl TryFrom<&ArrowDataType> for DataType {
            ArrowDataType::Date64 => Ok(DataType::DATE),
            ArrowDataType::Timestamp(TimeUnit::Microsecond, None) => Ok(DataType::TIMESTAMP_NTZ),
            ArrowDataType::Timestamp(TimeUnit::Microsecond, Some(tz))
-                if tz.eq_ignore_ascii_case("utc") =>
+                if tz.eq_ignore_ascii_case("utc") || tz.eq_ignore_ascii_case("+00:00") =>
The data in arrow arrays should always represent a timestamp in UTC, so is this check even necessary?
Did we ever get an answer for this?
OIC -- we probably shouldn't have been testing for even the "utc" case, because both Delta and arrow store physically UTC timestamps. The TZ of an arrow timestamp is only a hint about which TZ should be used to display the timestamp.
    ad: &ArrayData,
) -> BooleanArray {
    #[allow(deprecated)]
    let res = col.map(|val| val.map(|v| ad.array_elements().contains(&v.into())));
I don't think this handles NULL values correctly? See e.g. https://spark.apache.org/docs/3.5.1/sql-ref-null-semantics.html#innot-in-subquery-:
- TRUE is returned when the non-NULL value in question is found in the list
- FALSE is returned when the non-NULL value is not found in the list and the list does not contain NULL values
- UNKNOWN is returned when the value is NULL, or the non-NULL value is not found in the list and the list contains at least one NULL value
I think, instead of calling contains, you could borrow the code from PredicateEvaluatorDefaults::finish_eval_variadic, with true as the "dominator" value.
Actually, I think you could just invoke that method directly, with a properly crafted iterator?
// `v IN (k1, ..., kN)` is logically equivalent to `v = k1 OR ... OR v = kN`, so evaluate
// it as such, ensuring correct handling of NULL inputs (including `Scalar::Null`).
col.map(|v| {
PredicateEvaluatorDefaults::finish_eval_variadic(
VariadicOperator::Or,
inlist.iter().map(|k| Some(Scalar::partial_cmp(v?, k?)? == Ordering::Equal)),
false,
)
})
Was I correct in thinking that None -- no dominant value, but found Null -- should just be false in this case?
    ad: &ArrayData,
) -> BooleanArray {
    #[allow(deprecated)]
    let res = col.map(|val| val.map(|v| ad.array_elements().contains(&v.into())));
Aside: We actually have a lurking bug -- Scalar derives PartialEq, which will allow two Scalar::Null to compare equal. But SQL semantics dictate that NULL doesn't compare equal to anything -- not even itself.
Our manual impl of PartialOrd for Scalar does this correctly, but it breaks the rules for PartialEq:
If PartialOrd or Ord are also implemented for Self and Rhs, their methods must also be consistent with PartialEq (see the documentation of those traits for the exact requirements). It's easy to accidentally make them disagree by deriving some of the traits and manually implementing others.
Looks like we'll need to define a manual impl PartialEq for Scalar that follows the same approach.
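A minimal sketch of what such a manual impl could look like, delegating to the existing manual PartialOrd so that comparisons involving Scalar::Null never come out equal (this mirrors the change the PR ends up making below):
use std::cmp::Ordering;

impl PartialEq for Scalar {
    fn eq(&self, other: &Self) -> bool {
        // partial_cmp returns None when either side is Scalar::Null, so eq yields false.
        self.partial_cmp(other) == Some(Ordering::Equal)
    }
}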
This is indeed not covered. Added an implementation for PartialEq that mirrors PartialOrd.
Some(
    PredicateEvaluatorDefaults::finish_eval_variadic(
        VariadicOperator::Or,
        inlist.iter().map(|k| v.as_ref().map(|vv| vv == k)),
This isn't correct -- we need comparisons against Scalar::Null to return None. That's why I had previously recommended using Scalar::partial_cmp instead of ==.
Also, can we not use ? to unwrap the various options here?
-        inlist.iter().map(|k| v.as_ref().map(|vv| vv == k)),
+        inlist.iter().map(|k| Some(Scalar::partial_cmp(v?, k?)? == Ordering::Equal)),
Unpacking that -- if the value we search for is NULL, or if the inlist entry is NULL, or if the two values are incomparable, then return None for that pair. Otherwise, return Some boolean indicating whether the values compared equal or not. That automatically covers the various required cases, and also makes us robust to any type mismatches that might creep in.
Note: If we wanted to be a tad more efficient, we could also unpack v outside the inner loop:
values.map(|v| {
    let v = v?;
    PredicateEvaluatorDefaults::finish_eval_variadic(...)
})
Hmm -- empty in-lists pose a corner case with respect to unpacking v:
NULL IN ()
Operator OR with zero inputs normally produces FALSE (which is correct if you stop to think about it) -- but unpacking a NULL v first makes the operator return NULL instead (which is also correct if you squint, because NULL input always produces NULL output).
Unfortunately, the only clear docs I could find -- https://spark.apache.org/docs/3.5.1/sql-ref-null-semantics.html#innot-in-subquery- -- are also ambiguous:
Conceptually a IN expression is semantically equivalent to a set of equality condition separated by a disjunctive operator (OR).
... suggests FALSE, while
UNKNOWN is returned when the value is NULL
... suggests NULL.
The difference matters for NOT IN, because NULL NOT IN () would either return TRUE (keep rows) or NULL (drop row).
NOTE: SQL engines normally forbid statically empty in-lists but do not forbid subqueries from producing an empty result.
I tried the following expression on three engines (sqlite, mysql, postgres):
SELECT 1 WHERE NULL NOT IN (SELECT 1 WHERE FALSE)
And all three returned 1. So OR semantics prevail, and we must NOT unpack v outside the loop.
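To make the corner case concrete, here is a toy, self-contained illustration (not kernel code): kleene_or stands in for finish_eval_variadic with VariadicOperator::Or and a false default, and it shows that keeping the ? on v inside the per-element closure yields FALSE for NULL IN (), while unpacking v up front yields NULL.
// Simplified three-valued OR, standing in for finish_eval_variadic(VariadicOperator::Or, .., false).
fn kleene_or(inputs: impl IntoIterator<Item = Option<bool>>) -> Option<bool> {
    let mut saw_null = false;
    for i in inputs {
        match i {
            Some(true) => return Some(true), // dominant value short-circuits
            None => saw_null = true,
            Some(false) => {}
        }
    }
    if saw_null { None } else { Some(false) }
}

fn main() {
    let v: Option<i32> = None;             // the NULL value being searched for
    let inlist: Vec<Option<i32>> = vec![]; // the empty in-list

    // `?` stays inside the per-element closure: empty list => empty iterator => FALSE.
    assert_eq!(kleene_or(inlist.iter().map(|k| Some(v? == (*k)?))), Some(false));

    // Unpacking the NULL `v` before the loop short-circuits the whole expression to NULL.
    let unpacked = (|| {
        let v = v?;
        kleene_or(inlist.iter().map(|k| Some(v == (*k)?)))
    })();
    assert_eq!(unpacked, None);
}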
Hopefully I have now addressed all your comments, which essentially means going with your original version.
(ArrowDataType::Utf8, PrimitiveType::String) => op_in(inlist, str_op(column.as_string::<i32>().iter())),
(ArrowDataType::LargeUtf8, PrimitiveType::String) => op_in(inlist, str_op(column.as_string::<i64>().iter())),
(ArrowDataType::Utf8View, PrimitiveType::String) => op_in(inlist, str_op(column.as_string_view().iter())),
(ArrowDataType::Int8, PrimitiveType::Byte) => op_in(inlist,op::<Int8Type>( column.as_ref(), Scalar::from)),
(ArrowDataType::Int16, PrimitiveType::Short) => op_in(inlist,op::<Int16Type>(column.as_ref(), Scalar::from)),
(ArrowDataType::Int32, PrimitiveType::Integer) => op_in(inlist,op::<Int32Type>(column.as_ref(), Scalar::from)),
(ArrowDataType::Int64, PrimitiveType::Long) => op_in(inlist,op::<Int64Type>(column.as_ref(), Scalar::from)),
(ArrowDataType::Float32, PrimitiveType::Float) => op_in(inlist,op::<Float32Type>(column.as_ref(), Scalar::from)),
(ArrowDataType::Float64, PrimitiveType::Double) => op_in(inlist,op::<Float64Type>(column.as_ref(), Scalar::from)),
(ArrowDataType::Date32, PrimitiveType::Date) => op_in(inlist,op::<Date32Type>(column.as_ref(), Scalar::Date)),
These are all a lot longer than 100 chars... why doesn't the fmt check blow up??
@@ -280,6 +281,84 @@ fn evaluate_expression(
                (ArrowDataType::Decimal256(_, _), Decimal256Type)
            }
        }
        (Column(name), Literal(Scalar::Array(ad))) => {
            fn op<T: ArrowPrimitiveType>(
                values: &dyn Array,
-                values: &dyn Array,
+                values: ArrayRef,
(avoids the need for .as_ref() at the call site)
I think the main thing was that we need this to be a reference, otherwise the compiler starts complaining about lifetimes. I did shorten the code at the call-site a bit, hope that works as well.
Why would an ArrayRef (= Arc<dyn Array>) give lifetime problems, sorry?
We can always call as_ref() on it to get a reference that lives at least as long as the Arc?
Not the parameter itself, but the as_primitive cast inside the function returns a ref, which we then iterate over. This then causes issues with the iterator referencing data owned by the function.
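For reference, a self-contained sketch of that pattern (an illustration using the arrow_array crate, not the PR's actual helper): with a &dyn Array parameter the returned iterator borrows the caller's array, whereas taking an owned ArrayRef would make as_primitive borrow a function-local value.
use std::sync::Arc;

use arrow_array::cast::AsArray;
use arrow_array::types::Int32Type;
use arrow_array::{Array, ArrayRef, Int32Array};

// Borrowing version: the iterator's lifetime is tied to the caller's array, so it can be returned.
fn int_values(values: &dyn Array) -> impl Iterator<Item = Option<i32>> + '_ {
    values.as_primitive::<Int32Type>().iter()
}

fn main() {
    let arr: ArrayRef = Arc::new(Int32Array::from(vec![Some(1), None, Some(3)]));
    let collected: Vec<Option<i32>> = int_values(arr.as_ref()).collect();
    assert_eq!(collected, vec![Some(1), None, Some(3)]);
    // A signature like `fn int_values(values: ArrayRef)` would not compile here:
    // `as_primitive` would borrow the function-local Arc, so the returned iterator
    // would reference data owned by the function -- the error described above.
}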
I think we're in good shape now, we just need the missing tests.
kernel/src/expressions/scalars.rs (outdated)
}
impl PartialEq for Scalar {
    fn eq(&self, other: &Scalar) -> bool {
        self.partial_cmp(other) == Some(Ordering::Equal)
aside: PartialOrd requires Self: PartialEq, but we can still invoke the former from the latter? (feels somehow like a circular dependency, but I guess if it compiles it works)
@@ -692,6 +771,85 @@ mod tests {
        assert_eq!(in_result.as_ref(), &in_expected);
    }

    #[test]
    fn test_column_in_array() {
Relating to #652 (comment) and #652 (comment) -- we don't seem to have any tests that cover NULL value semantics?
1 IN (1, NULL) -- TRUE
1 IN (2, NULL) -- NULL
NULL IN (1, 2) -- NULL
1 NOT IN (1, NULL) -- FALSE (**)
1 NOT IN (2, NULL) -- NULL
NULL NOT IN (1, 2) -- NULL
(**) NOTE from https://spark.apache.org/docs/3.5.1/sql-ref-null-semantics.html#innot-in-subquery-:
NOT IN always returns UNKNOWN when the list contains NULL, regardless of the input value. This is because IN returns UNKNOWN if the value is not in the list containing NULL, and because NOT UNKNOWN is again UNKNOWN.
IMO, that explanation is confusing and factually incorrect. If we explain it in terms of NOT(OR):
1 NOT IN (1, NULL)
= NOT(1 IN (1, NULL))
= NOT(1 = 1 OR 1 = NULL)
= NOT(1 = 1) AND NOT(1 = NULL)
= 1 != 1 AND 1 != NULL
= FALSE AND NULL
= FALSE
As additional support for my claim: sqlite, postgres, and mysql all return FALSE (not NULL) for that expression.
Added some tests according to the cases mentioned above, hopefully covering all of them. This uncovered some cases where we were not handling NULLs correctly in the other in-list branches, mainly b/c the arrow kernels don't seem to be adhering to the SQL NULL semantics.
In addition to the engines above, I also tried duckdb and datafusion, which also support @scovich's claim.
also, is this something worth upstreaming to arrow-rs, similar to the *_kleene variants for other kernels?
is this something worth upstreaming to arrow-rs, similar to the *_kleene variants for other kernels?
Potentially, yes? A benefit of upstreaming is that it should be able to embed the null tests directly (like we did with the finish_eval_variadic call), instead of requiring a second pass for the null_count like you had to do here?
            .map(|k| Some(lit.partial_cmp(k)? == Ordering::Equal)),
        false,
    );
    Ok(Arc::new(BooleanArray::from(vec![exists; batch.num_rows()])))
aside from the null handling, I think we should return the result for each row in the input batch?
That sounds right? It's a literal evaluation, so the result should be the same for all rows.
Also checked with other engines, they do the same.
Can we split out the PartialEq fix and the Decimal support as two changes of their own? AFAIK, the three changes are orthogonal, and colocated here only because delta-rs needs them all?
Meanwhile, the PartialEq should have easily been able to merge before now, had it been separate, and it's looking like our decimal woes could unnecessarily delay in-list support that is otherwise pretty close to merge-ready.
-        (Literal(_), Column(_)) => {
+        (Literal(lit), Column(_)) => {
+            if lit.is_null() {
+                return Ok(Arc::new(BooleanArray::from(vec![None; batch.num_rows()])));
-                return Ok(Arc::new(BooleanArray::from(vec![None; batch.num_rows()])));
+                return Ok(Arc::new(BooleanArray::new_null(batch.num_rows())));
let in_list_result =
    in_list_utf8(string_arr, right_arr).map_err(Error::generic_err)?;
return Ok(wrap_comparison_result(
    in_list_result
        .iter()
        .zip(right_arr.iter())
        .map(|(res, arr)| match (res, arr) {
            (Some(false), Some(arr)) if arr.null_count() > 0 => None,
            _ => res,
        })
        .collect(),
));
Wouldn't it be cleaner to chain all that up at once, and then wrap up the result?
-                    let in_list_result =
-                        in_list_utf8(string_arr, right_arr).map_err(Error::generic_err)?;
-                    return Ok(wrap_comparison_result(
-                        in_list_result
-                            .iter()
-                            .zip(right_arr.iter())
-                            .map(|(res, arr)| match (res, arr) {
-                                (Some(false), Some(arr)) if arr.null_count() > 0 => None,
-                                _ => res,
-                            })
-                            .collect(),
-                    ));
+                    let in_list_result = in_list_utf8(string_arr, right_arr)
+                        .map_err(Error::generic_err)?
+                        .iter()
+                        .zip(right_arr)
+                        .map(|(res, arr)| match (res, arr) {
+                            (Some(false), Some(arr)) if arr.null_count() > 0 => None,
+                            _ => res,
+                        })
+                        .collect();
+                    return Ok(wrap_comparison_result(in_list_result));
return Ok(wrap_comparison_result(
    in_list_result
        .iter()
        .zip(right_arr.iter())
note: zip takes IntoIterator, so we don't need to call .iter() here
let in_op = Expression::binary(BinaryOperator::In, "one", column_expr!("item"));
let in_result =
    evaluate_expression(&in_op, &batch, Some(&DeltaDataTypes::BOOLEAN)).unwrap();
let in_expected = BooleanArray::from(vec![Some(true), None, Some(true)]);
This test cannot ever produce Some(false) because every inlist contains a NULL?
Added cases to cover that.
kernel/src/engine/arrow_utils.rs (outdated)
let in_list_result = arrow_ord::comparison::in_list(prim_array, list_array).map_err(Error::generic_err)?;
Ok(wrap_comparison_result(
    in_list_result
        .iter()
        .zip(list_array.iter())
        .map(|(res, arr)| match (res, arr) {
            (Some(false), Some(arr)) if arr.null_count() > 0 => None,
            _ => res,
        })
        .collect(),
))
See other comment about formatting. But also, should we factor this logic out as a helper function instead of duplicating it?
PredicateEvaluatorDefaults::finish_eval_variadic(
    VariadicOperator::Or,
    inlist
        .iter()
        .map(|k| Some(v.as_ref()?.partial_cmp(k)? == Ordering::Equal)),
    false,
With a second use site now (below), we should probably factor this out as a helper function?
We just need to figure out how to handle v.as_ref()? here vs. plain lit below, without causing an unintentional (and incorrect) early-out here. Maybe the helper just needs to take Option<&Scalar>; then this call site here would pass v.as_ref() and the other would pass Some(lit)? Compiler should optimize away the useless ? in the latter case.
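One possible shape for that helper, as a sketch only: it reuses the finish_eval_variadic call from the diff above and assumes the in-list is available as a slice of Scalar (e.g. via ad.array_elements()), with the OR semantics described earlier in this thread.
// Sketch: NULL-aware membership test shared by the column and the literal call sites.
fn scalar_in_list(v: Option<&Scalar>, inlist: &[Scalar]) -> Option<bool> {
    PredicateEvaluatorDefaults::finish_eval_variadic(
        VariadicOperator::Or,
        inlist
            .iter()
            // None (UNKNOWN) for a NULL probe value, a NULL list entry, or incomparable types.
            .map(|k| Some(v?.partial_cmp(k)? == Ordering::Equal)),
        false,
    )
}
The column case would then call scalar_in_list(v.as_ref(), inlist) and the literal case scalar_in_list(Some(lit), inlist).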
macro_rules! prim_array_cmp {
    ( $left_arr: ident, $right_arr: ident, $(($data_ty: pat, $prim_ty: ty)),+ ) => {

        return match $left_arr.data_type() {
            $(
                $data_ty => {
                    let prim_array = $left_arr.as_primitive_opt::<$prim_ty>()
                        .ok_or(Error::invalid_expression(
                            format!("Cannot cast to primitive array: {}", $left_arr.data_type()))
                        )?;
                    let list_array = $right_arr.as_list_opt::<i32>()
                        .ok_or(Error::invalid_expression(
                            format!("Cannot cast to list array: {}", $right_arr.data_type()))
                        )?;
                    arrow_ord::comparison::in_list(prim_array, list_array).map(wrap_comparison_result)
                }
            )+
            _ => Err(ArrowError::CastError(
                format!("Bad Comparison between: {:?} and {:?}",
                    $left_arr.data_type(),
                    $right_arr.data_type())
                )
            )
        }.map_err(Error::generic_err);
    };
}

pub(crate) use prim_array_cmp;
This macro relies on functions defined elsewhere and is only useful for the inlist implementation, so I moved it over there.
if let Some(right_arr) = right_arr.as_list_opt::<i32>() {
    return Ok(fix_in_list_result(
        in_list_utf8(string_arr, right_arr).map_err(Error::generic_err)?,
        right_arr.iter(),
Somehow I could not get it to work w/o calling iter.
That function takes an IntoIterator<Item = Option<ArrayRef>>, which would consume right_arr while still borrowed by the previous function arg. I suspect you could satisfy the borrow checker by passing &right_arr (whose IntoIter should be the same as calling .iter()), or by factoring out the in_list_utf8 call so the borrow dies before you start building args for the second call:
let result = in_list_utf8(string_arr, right_arr).map_err(Error::generic_err)?;
return Ok(fix_in_list_result(result, right_arr));
(the latter approach has the nice side effect of simplifying the code as well, even if cargo fmt will likely put the map_err call on its own line)
    ad, str_op(column.as_string_view())
),
(ArrowDataType::Int8, PrimitiveType::Byte) => is_in_list(
    ad,op::<Int8Type>( &column, Scalar::from)
aside: I'm surprised cargo fmt didn't adjust this?
-    ad,op::<Int8Type>( &column, Scalar::from)
+    ad, op::<Int8Type>(&column, Scalar::from)
(but I guess it doesn't run reliably on this code in the first place)
}
(Literal(lit), Literal(Scalar::Array(ad))) => {
    let res = is_in_list(ad, Some(Some(lit.clone())));
    let exists = res.is_valid(0).then(|| res.value(0));
aside: I'm a bit surprised arrow doesn't provide an optional getter, but nothing obvious turned up when I looked.
    Ok(wrap_comparison_result(arr))
}
(Literal(lit), Literal(Scalar::Array(ad))) => {
    let res = is_in_list(ad, Some(Some(lit.clone())));
clever!
// helper function to make arrow in_list* kernel results comliant with SQL NULL semantics.
// Specifically, if an item is not found in the in-list, but the in-list contains NULLs, the
// result should be NULL (UNKNOWN) as well.
fn fix_in_list_result(
nit: Perhaps more self-documenting as fix_arrow_in_list_result?
        .collect()
}

// helper function to make arrow in_list* kernel results comliant with SQL NULL semantics.
-// helper function to make arrow in_list* kernel results comliant with SQL NULL semantics.
+// helper function to make arrow in_list* kernel results compliant with SQL NULL semantics.
or
-// helper function to make arrow in_list* kernel results comliant with SQL NULL semantics.
+// helper function to make arrow in_list* kernel results comply with SQL NULL semantics.
// Specifically, if an item is not found in the in-list, but the in-list contains NULLs, the
// result should be NULL (UNKNOWN) as well.
-// Specifically, if an item is not found in the in-list, but the in-list contains NULLs, the
-// result should be NULL (UNKNOWN) as well.
+// Specifically, if an item is not found in an in-list that contains NULLs, the
+// result should be NULL (UNKNOWN) instead of FALSE.
@@ -352,6 +305,219 @@ fn evaluate_expression(
    }
}

fn eval_in_list(
This method makes a very narrow waist now; should we consider moving it (along with the helpers it invokes) to an inlist sub-module of their own?
The idea came up because I noticed our unit tests are overly high-level -- they test the overall expression eval rather than these specific methods. Ideally, we would have three flavors of unit tests:
- One for fix_in_list_result that verifies its documented behavior wrt true/false/null
- Another for is_in_list that can work with slices of scalars instead of needing to build record batches (and which only needs to work with 1-2 types because it focuses primarily on null handling semantics)
- A final test that covers eval_in_list. It does need to work with expressions and record batches, but just needs a quick case for each type to verify that the wiring is correct, since the actual in-list behavior is covered by the other two (sets of) unit tests.
It's really hard to review a PR that moves unrelated code between files while also making changes. Can we either keep arrow_expression.rs as the top-level module (legal), or do the file rename as a pre-factor PR that merges first?
let arr = match (column.data_type(), data_type) {
    (ArrowDataType::Utf8, PrimitiveType::String) => is_in_list(
nit: weirdly deep indent?
(we really need that cargo fmt fix to merge...)
Aside from the mod in macro thing, it seems we were also hitting rust-lang/rustfmt#3863. Shortened the violating string a bit and it seems cargo fmt is working on in_list.rs now.
Wow, nasty bug there... a bit surprised it's gone unfixed for so many years now.
Sorry about that! Reverted to the previous layout. Right now we have a nice mix of modules using the "mod" pattern and modules that use an upper-level file. Maybe we should reconcile that? While writing more tests, I realized we were missing some type coverage, as well as handling of …
Yes, I agree it would be good to harmonize our file naming. Should we go with mod.rs or modname.rs tho? Part of me doesn't love a zillion "mod.rs" files (which mod was it again?), but in practice my editor adds the directory it's in to disambiguate so it's not really a problem?
This is looking really good.
The improvements to the code structure made it easier to see some significant opportunities for additional improvements (deduplication of code, more functionality, etc). Some of those belong in this PR, while others should probably be handled as a follow-on PR.
pub(super) fn eval_in_list(
nit: worth a doc comment explaining the approach?
macro_rules! prim_array_cmp {
Worth a doc comment explaining what this macro is for and how it works?
Honestly, looking at this macro vs. the new code, I'm not sure we come out ahead with the macro? Maybe it's better (as a follow-up) to take the same approach with both lit-col and col-array cases?
    batch: &RecordBatch,
    left: &Expression,
    right: &Expression,
) -> Result<Arc<dyn Array>, Error> {
nit:
-) -> Result<Arc<dyn Array>, Error> {
+) -> Result<ArrayRef, Error> {
match (left, right) {
    (Literal(lit), Column(_)) => {
        if lit.is_null() {
            return Ok(Arc::new(BooleanArray::new_null(batch.num_rows())));
        }
A few things --
tiny: IMO it's harder to read a mix of match arms and if/else vs. just adding a second match arm, if the latter is otherwise sensible.
small: Given how two of the match arms are really large, and the others are really small, would it make sense to move the (Literal(lit), Literal(Scalar::Array(ad))) case here as well, for easier finding+reading?
medium: We can widen the NULL-search optimization match arm to cover both columns and literal arrays.
medium: once the null-check is factored out as its own match arm (see above), the existing implementation of the literal-column case trivially covers the (currently missing) column-column case as well (because we use evaluate_expression to create the input array we operate on). Just need to adjust the pattern we match on.
bigger: Every match arm duplicates Ok(Arc::new(...)), and the wrap_comparison_result helper doesn't actually help because it's more typing and adds a layer of indirection. Recommend to strip all that away and just have the match return a BooleanArray that we can wrap once and return.
-    match (left, right) {
-        (Literal(lit), Column(_)) => {
-            if lit.is_null() {
-                return Ok(Arc::new(BooleanArray::new_null(batch.num_rows())));
-            }
+    let result = match (left, right) {
+        (Literal(Scalar::Null(_)), Column(_) | Literal(Scalar::Array(_))) => {
+            // Searching any in-list for NULL always returns NULL -- no need to actually search
+            BooleanArray::new_null(batch.num_rows())
+        }
+        (Literal(lit), Literal(Scalar::Array(ad))) => {
+            // Search the literal in-list once and then replicate the returned single-row result
+            let exists = is_in_list(ad, Some(Some(lit.clone()))).iter().next();
+            BooleanArray::from(vec![exists; batch.num_rows()])
+        }
+        (Literal(_) | Column(_), Column(_)) => {
and then the match ends with:
(Column(name), Literal(Scalar::Array(ad))) => {
...
// safety: as_* methods on arrow arrays panic if the wrong type is requested,
// so we always verify the data type first.
match (column.data_type(), data_type) {
...
}
}
(l, r) => return Err(Error::invalid_expression(format!(
"Invalid right value for (NOT) IN comparison, left is: {l} right is: {r}"
))),
};
Ok(Arc::new(result))
(ArrowDataType::Interval(IntervalUnit::YearMonth), IntervalYearMonthType),
(ArrowDataType::Interval(IntervalUnit::MonthDayNano), IntervalMonthDayNanoType),
(ArrowDataType::Decimal128(_, _), Decimal128Type),
(ArrowDataType::Decimal256(_, _), Decimal256Type)
Delta doesn't support 256-bit decimals. The spec says
The precision and scale can be up to 38
... which corresponds to a 128-bit decimal.
let arr = match (column.data_type(), data_type) {
    (ArrowDataType::Utf8, PrimitiveType::String) => {
        is_in_list(ad, str_op(column.as_string::<i32>()))
    }
    (ArrowDataType::LargeUtf8, PrimitiveType::String) => {
        is_in_list(ad, str_op(column.as_string::<i64>()))
    }
    (ArrowDataType::Utf8View, PrimitiveType::String) => {
        is_in_list(ad, str_op(column.as_string_view()))
    }
    (ArrowDataType::Int8, PrimitiveType::Byte) => {
        is_in_list(ad, op::<Int8Type>(&column, Scalar::from))
    }
aside: This approach does not fully eliminate the risk of a panic, if a flaw in the code produced a mismatch between the data type we manually check for and the cast we later attempt. I'd love it if we could use as_xxx_opt methods instead, if it didn't completely bloat up the code.
Maybe a small non-hygienic column_as! macro could help?
macro magic
See this playground test
macro_rules! column_as {
    ($what: ident $(:: < $t: ty >)? ) => {
        paste! { column.[<as_ $what _opt>] $(::<$t>)? () }
            .ok_or(Error::invalid_expression(format!(
                "Cannot cast {} to {}", column.data_type(), data_type
            )))
    };
}
(we'd need to upgrade the paste crate from dev-dependency to full dependency)
That would allow:
-            let arr = match (column.data_type(), data_type) {
-                (ArrowDataType::Utf8, PrimitiveType::String) => {
-                    is_in_list(ad, str_op(column.as_string::<i32>()))
-                }
-                (ArrowDataType::LargeUtf8, PrimitiveType::String) => {
-                    is_in_list(ad, str_op(column.as_string::<i64>()))
-                }
-                (ArrowDataType::Utf8View, PrimitiveType::String) => {
-                    is_in_list(ad, str_op(column.as_string_view()))
-                }
-                (ArrowDataType::Int8, PrimitiveType::Byte) => {
-                    is_in_list(ad, op::<Int8Type>(&column, Scalar::from))
-                }
+            let arr = match (column.data_type(), data_type) {
+                (ArrowDataType::Utf8, PrimitiveType::String) => {
+                    is_in_list(ad, scalars_from(column_as!(string::<i32>)?))
+                }
+                (ArrowDataType::LargeUtf8, PrimitiveType::String) => {
+                    is_in_list(ad, scalars_from(column_as!(string::<i64>)?))
+                }
+                (ArrowDataType::Utf8View, PrimitiveType::String) => {
+                    is_in_list(ad, scalars_from(column_as!(string_view)?))
+                }
+                (_, PrimitiveType::Byte) => {
+                    is_in_list(ad, scalars_from(column_as!(primitive::<Int8Type>)?))
+                }
Match arms can ignore the arrow data type when there's a 1:1 relationship (ie for the numeric types), while still leveraging both types in ambiguous cases like string or binary or timestamp. Either way, the result of the column_as! invocation is the ultimate decider of whether the match succeeded.
The str_op and binary_op helpers would be replaced by a single generic scalars_from that takes an iterator of impl Into<Scalar>. Which would also capture e.g. the boolean case that currently is a bit awkward.
The special cases like Date and Timestamp[Ntz], which can't use Scalar::from, would still need today's op (tho maybe we should rename it as to_scalars for clarity):
(ArrowDataType::Date32, PrimitiveType::Date) => {
    is_in_list(ad, to_scalars(column_as!(primitive::<Date32Type>)?, Scalar::Date))
}
This is a big enough change (and probably relates strongly to the recommendation to harmonize or eliminate the prim_array_cmp! macro) that we should probably tackle both at the same time in a follow-up PR.
@scovich - made changes according to feedback. The first new commit should cover the changes that should land in this PR. In the second one I gave refactoring the macros a try; after that it's just small stuff. In case we feel the macro refactoring needs (considerably) more work, we can move that into a separate PR.
// we should at least cast string / large string arrays to the same type, and may as well see if
// we can cast other types.
let left_arr = if left_arr.data_type() == list_field.data_type() {
    left_arr
} else {
    cast(left_arr.as_ref(), list_field.data_type()).map_err(Error::generic_err)?
};
While technically not really the right place to cast values, it may actually help us when we start to support type widening. Then again, the values should really be cast when reading the data.
Nonetheless it makes the code simpler if we always try to cast, and I thought it's OK if we do this here?
This relates to the way arrow has multiple physical types for the same logical type? And things like type widening would just come "for free" as a side effect?
Does arrow cast allow narrowing casts tho? If it does, then the comparison could produce wrong results due to information loss. IIRC our type widening code had to actively verify it was a widening cast before attempting the actual cast? Either way, to really get this right we'd need to identify the "wider" of the two sides and cast the other side to match it.
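For illustration only, a sketch of the kind of guard being asked for, using arrow's cast kernel and can_cast_types; is_widening here is a hypothetical helper (shown for a few integer widths only), not an existing kernel or arrow function:
use arrow_array::{Array, ArrayRef};
use arrow_cast::{can_cast_types, cast};
use arrow_schema::DataType;

// Hypothetical check that a cast cannot lose information (only integer widths shown).
fn is_widening(from: &DataType, to: &DataType) -> bool {
    use DataType::*;
    matches!(
        (from, to),
        (Int8, Int16 | Int32 | Int64) | (Int16, Int32 | Int64) | (Int32, Int64)
    )
}

// Only cast when the target type is at least as wide; otherwise bail out rather than
// risk a silent narrowing cast that could make the IN comparison produce wrong results.
fn cast_if_widening(arr: ArrayRef, to: &DataType) -> Option<ArrayRef> {
    if arr.data_type() == to {
        Some(arr)
    } else if is_widening(arr.data_type(), to) && can_cast_types(arr.data_type(), to) {
        cast(arr.as_ref(), to).ok()
    } else {
        None
    }
}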
Looking really good. The only blocker at this point is the question about narrowing casts.
fn scalars_from<'a>(
    values: impl IntoIterator<Item = Option<impl Into<Scalar>>> + 'a,
) -> impl IntoIterator<Item = Option<Scalar>> + 'a {
    values.into_iter().map(|v| v.map(|v| v.into()))
nit: one fewer v to get confused with?
-    values.into_iter().map(|v| v.map(|v| v.into()))
+    values.into_iter().map(|v| v.map(Into::into))
(Literal(_) | Column(_), Column(_)) => {
    let right_arr = evaluate_expression(right, batch, None)?;
nit: Rule of 30 says we should split out these two really fat match arms (~60 and ~100 LoC, respectively) as their own helper functions. Besides improving the readability of the code structure, removing two levels of indentation might convince cargo fmt to be less aggressive with line breaks.
It would also allow cleaner control flow within the logic, because e.g. ? is scoped to just that match arm's body instead of the entire match (see e.g. all those eval_arrow(...)? calls in the match that starts at L71 below, and the return Err for the default case of that same match).
) => is_in_list(
    ad,
    to_scalars(
        column_as!(primitive::<TimestampMicrosecondType>)?,
        Scalar::Timestamp,
    ),
),
nit: not sure it's worth the trouble, but could do
-            ) => is_in_list(
-                ad,
-                to_scalars(
-                    column_as!(primitive::<TimestampMicrosecondType>)?,
-                    Scalar::Timestamp,
-                ),
-            ),
+            ) => {
+                let column = column_as!(primitive::<TimestampMicrosecondType>)?;
+                is_in_list(ad, to_scalars(column, Scalar::Timestamp))
+            }
(again below)
($t: ty ) => {
    left_arr
        .as_primitive_opt::<$t>()
        .ok_or(Error::invalid_expression(format!(
-        .ok_or(Error::invalid_expression(format!(
+        .ok_or_else(|| Error::invalid_expression(format!(
(otherwise we're creating the error eagerly and just throwing it away most of the time)
(again below)
What changes are proposed in this pull request?
Currently, evaluation of expressions of the type col IN (a, b, c) is missing an implementation. While this might be the exact case @scovich cautioned us about, where the rhs might get significant in size and should really be handled as EngineData, I hope that we at least do not make things worse here. Unfortunately delta-rs already has support for these types of expressions, so the main intent right now is to retain feature parity over there while migrating.
How was this change tested?
Additional tests for the specific expression flavors.