Support insert into statement in sqllogictest #4496

Merged (2 commits) on Dec 4, 2022
1 change: 1 addition & 0 deletions datafusion/core/Cargo.toml
@@ -110,6 +110,7 @@ doc-comment = "0.3"
env_logger = "0.10"
parquet-test-utils = { path = "../../parquet-test-utils" }
rstest = "0.16.0"
sqlparser = "0.27"
test-utils = { path = "../../test-utils" }

[[bench]]
13 changes: 10 additions & 3 deletions datafusion/core/src/datasource/memory.rs
@@ -26,6 +26,7 @@ use std::sync::Arc;
use arrow::datatypes::SchemaRef;
use arrow::record_batch::RecordBatch;
use async_trait::async_trait;
use parking_lot::RwLock;

use crate::datasource::{TableProvider, TableType};
use crate::error::{DataFusionError, Result};
@@ -40,7 +41,7 @@ use crate::physical_plan::{repartition::RepartitionExec, Partitioning};
#[derive(Debug)]
pub struct MemTable {
schema: SchemaRef,
batches: Vec<Vec<RecordBatch>>,
batches: Arc<RwLock<Vec<Vec<RecordBatch>>>>,
}

impl MemTable {
@@ -53,7 +54,7 @@ impl MemTable {
{
Ok(Self {
schema,
batches: partitions,
batches: Arc::new(RwLock::new(partitions)),
})
} else {
Err(DataFusionError::Plan(
@@ -117,6 +118,11 @@
}
MemTable::try_new(schema.clone(), data)
}

/// Get record batches in MemTable
pub fn get_batches(&self) -> Arc<RwLock<Vec<Vec<RecordBatch>>>> {
self.batches.clone()
}
}

#[async_trait]
@@ -140,8 +146,9 @@ impl TableProvider for MemTable {
_filters: &[Expr],
_limit: Option<usize>,
) -> Result<Arc<dyn ExecutionPlan>> {
let batches = self.batches.read();
Ok(Arc::new(MemoryExec::try_new(
&self.batches.clone(),
&(*batches).clone(),
self.schema(),
projection.cloned(),
)?))
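
Not part of the diff: a minimal sketch, assuming the arrow/DataFusion APIs used elsewhere in this PR, of how the interior-mutable batches and the new `get_batches` accessor are meant to be used (schema, column name, and values are illustrative; note the review thread further below settles on removing this interior mutability in a follow-up):

    use std::sync::Arc;

    use arrow::array::{ArrayRef, Int32Array};
    use arrow::datatypes::{DataType, Field, Schema};
    use arrow::record_batch::RecordBatch;
    use datafusion::datasource::MemTable;
    use datafusion::error::Result;

    fn append_to_mem_table() -> Result<()> {
        // Build a one-column, one-partition table (illustrative schema and values).
        let schema = Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)]));
        let column: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
        let batch = RecordBatch::try_new(schema.clone(), vec![column])?;
        let table = MemTable::try_new(schema, vec![vec![batch.clone()]])?;

        // `get_batches` hands back the shared storage; pushing into partition 0
        // under the write lock makes the new rows visible to later scans.
        let shared = table.get_batches();
        shared.write()[0].push(batch);
        Ok(())
    }
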
16 changes: 16 additions & 0 deletions datafusion/core/src/execution/context.rs
@@ -958,6 +958,22 @@ impl SessionContext {
}
}

/// Return a [`TableProvider`] for the specified table.
pub fn table_provider<'a>(
Contributor: 👍 it might be nice to refactor fn table() to call this function now to avoid some duplication.

&self,
table_ref: impl Into<TableReference<'a>>,
) -> Result<Arc<dyn TableProvider>> {
let table_ref = table_ref.into();
let schema = self.state.read().schema_for_ref(table_ref)?;
match schema.table(table_ref.table()) {
Some(ref provider) => Ok(Arc::clone(provider)),
_ => Err(DataFusionError::Plan(format!(
"No table named '{}'",
table_ref.table()
))),
}
}

/// Returns the set of available tables in the default catalog and
/// schema.
///
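
Not part of the diff: a hedged usage sketch of the new lookup, mirroring how the insert path added below consumes it (the table name is illustrative):

    use datafusion::datasource::MemTable;
    use datafusion::error::{DataFusionError, Result};
    use datafusion::prelude::SessionContext;

    fn lookup_mem_table_batches(ctx: &SessionContext) -> Result<()> {
        // Resolve the provider registered under this (illustrative) table name.
        let provider = ctx.table_provider("example_table")?;
        // Downcast to the concrete MemTable to reach its shared batches.
        let mem_table = provider
            .as_any()
            .downcast_ref::<MemTable>()
            .ok_or_else(|| DataFusionError::Plan("expected a MemTable".to_string()))?;
        let _shared_batches = mem_table.get_batches();
        Ok(())
    }
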
83 changes: 83 additions & 0 deletions datafusion/core/tests/sqllogictests/src/error.rs
@@ -0,0 +1,83 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

use datafusion_common::DataFusionError;
use sqllogictest::TestError;
use sqlparser::parser::ParserError;
use std::error;
use std::fmt::{Display, Formatter};

pub type Result<T> = std::result::Result<T, DFSqlLogicTestError>;

/// DataFusion sqllogictest error
#[derive(Debug)]
pub enum DFSqlLogicTestError {
Contributor: 👍 this is a good idea

/// Error from sqllogictest-rs
SqlLogicTest(TestError),
/// Error from datafusion
DataFusion(DataFusionError),
/// Error returned when SQL is syntactically incorrect.
Sql(ParserError),
/// Error returned on a branch that we know is possible
/// but for which we do not yet have an implementation.
/// Often, these errors are tracked in our issue tracker.
NotImplemented(String),
/// Internal error from DFSqlLogicTest itself
Internal(String),
Comment on lines +38 to +40
Contributor: I suggest we simply panic in the sqllogic runner in these cases so the location of the error is easier to see.
Member Author: LGTM

}

impl From<TestError> for DFSqlLogicTestError {
fn from(value: TestError) -> Self {
DFSqlLogicTestError::SqlLogicTest(value)
}
}

impl From<DataFusionError> for DFSqlLogicTestError {
fn from(value: DataFusionError) -> Self {
DFSqlLogicTestError::DataFusion(value)
}
}

impl From<ParserError> for DFSqlLogicTestError {
fn from(value: ParserError) -> Self {
DFSqlLogicTestError::Sql(value)
}
}

impl Display for DFSqlLogicTestError {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
DFSqlLogicTestError::SqlLogicTest(error) => write!(
f,
"SqlLogicTest error(from sqllogictest-rs crate): {}",
error
),
DFSqlLogicTestError::DataFusion(error) => {
write!(f, "DataFusion error: {}", error)
}
DFSqlLogicTestError::Sql(error) => write!(f, "SQL Parser error: {}", error),
DFSqlLogicTestError::NotImplemented(error) => {
write!(f, "This feature is not implemented yet: {}", error)
}
DFSqlLogicTestError::Internal(error) => {
write!(f, "Internal error: {}", error)
}
}
}
}

impl error::Error for DFSqlLogicTestError {}
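
Not part of the diff: a minimal sketch of what the From impls above enable inside the test crate — fallible sqlparser/DataFusion calls can use `?` and be converted into DFSqlLogicTestError automatically (the helper below is illustrative):

    use crate::error::Result;
    use datafusion_sql::parser::DFParser;

    // Any ParserError is converted into DFSqlLogicTestError::Sql by the From impl above.
    fn count_statements(sql: &str) -> Result<usize> {
        let statements = DFParser::parse_sql(sql)?;
        Ok(statements.len())
    }
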
96 changes: 96 additions & 0 deletions datafusion/core/tests/sqllogictests/src/insert/mod.rs
@@ -0,0 +1,96 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

mod util;

use crate::error::{DFSqlLogicTestError, Result};
use crate::insert::util::LogicTestContextProvider;
use datafusion::datasource::MemTable;
use datafusion::prelude::SessionContext;
use datafusion_common::{DFSchema, DataFusionError};
use datafusion_expr::Expr as DFExpr;
use datafusion_sql::parser::{DFParser, Statement};
use datafusion_sql::planner::SqlToRel;
use sqlparser::ast::{Expr, SetExpr, Statement as SQLStatement};
use std::collections::HashMap;

pub async fn insert(ctx: &SessionContext, sql: String) -> Result<String> {
// First, use sqlparser to get table name and insert values
let mut table_name = "".to_string();
let mut insert_values: Vec<Vec<Expr>> = vec![];
if let Statement::Statement(statement) = &DFParser::parse_sql(&sql)?[0] {
if let SQLStatement::Insert {
table_name: name,
source,
..
} = &**statement
{
// Todo: check columns match table schema
table_name = name.to_string();
match &*source.body {
SetExpr::Values(values) => {
Contributor: This is very clever

insert_values = values.0.clone();
}
_ => {
return Err(DFSqlLogicTestError::NotImplemented(
"Only support insert values".to_string(),
));
}
}
}
} else {
return Err(DFSqlLogicTestError::Internal(format!(
"{:?} not an insert statement",
sql
)));
}

// Second, get table by table name
// Here we assume the table must be a MemTable.
let table_provider = ctx.table_provider(table_name.as_str())?;
let table_batches = table_provider
.as_any()
.downcast_ref::<MemTable>()
.ok_or_else(|| {
DFSqlLogicTestError::NotImplemented(
"only support use memory table in logictest".to_string(),
)
})?
.get_batches();

// Third, convert the insert values to `RecordBatch`es
// Note: schema info can be ignored (insert values don't contain schema info)
let sql_to_rel = SqlToRel::new(&LogicTestContextProvider {});
let mut insert_batches = Vec::with_capacity(insert_values.len());
for row in insert_values.into_iter() {
let logical_exprs = row
.into_iter()
.map(|expr| {
sql_to_rel.sql_to_rex(expr, &DFSchema::empty(), &mut HashMap::new())
})
.collect::<std::result::Result<Vec<DFExpr>, DataFusionError>>()?;
// Directly use `select` to get `RecordBatch`
let dataframe = ctx.read_empty()?;
insert_batches.push(dataframe.select(logical_exprs)?.collect().await?)
}

// Finally, append the `RecordBatch`es to the memtable's batches
let mut table_batches = table_batches.write();
Contributor: Rather than changing the batches in the existing memtable, what would you think about creating a new memtable with the same name with the new values (rather than modifying the original one)

I think you might be able to avoid changes to SessionContext and MemTable entirely.

Something like this (untested)

// fetch existing batches
let mut existing_batches = ctx.table(table_name.as_str())?.collect().await?;
// append the new batches
existing_batches.extend(insert_batches.into_iter().flatten());

// replace the table provider
let schema = existing_batches[0].schema();
let new_provider = MemTable::try_new(schema, vec![existing_batches])?;
ctx.register_table(table_name.as_str(), Arc::new(new_provider))?;

Member Author: Yes, I have thought about the way you mentioned (also need to delete the original memtable).

But for performance reasons, I chose the current way: modifying the original one.

If you think the changes to add interior mutability to MemTable don't make sense, I can change it in a follow-up ticket! (I don't have a strong preference either way)

Contributor: I think keeping MemTable as simple as possible is likely the best approach -- so for that reason I prefer to remove the interior mutability.

I can give it a shot if you agree -- I think the performance of copying record batches (for reasonably small data such as what is in the test) will be ok

Member Author: Ok, agreed! Thanks @alamb. I'll refactor it later.

Contributor: Thank you very much!

table_batches.extend(insert_batches);

Ok("".to_string())
}
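
Not part of the diff: the read_empty/select trick used above can be exercised in isolation — a hedged sketch, assuming the same DataFrame API this PR already calls (the literal values are illustrative):

    use arrow::record_batch::RecordBatch;
    use datafusion::error::Result;
    use datafusion::prelude::{lit, SessionContext};

    async fn values_to_batches() -> Result<Vec<RecordBatch>> {
        let ctx = SessionContext::new();
        // Selecting literal expressions against an empty relation yields a
        // single-row RecordBatch -- exactly what gets appended to the MemTable.
        let dataframe = ctx.read_empty()?;
        dataframe.select(vec![lit(1i64), lit("foo")])?.collect().await
    }
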
50 changes: 50 additions & 0 deletions datafusion/core/tests/sqllogictests/src/insert/util.rs
@@ -0,0 +1,50 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

use arrow::datatypes::DataType;
use datafusion_common::{ScalarValue, TableReference};
use datafusion_expr::{AggregateUDF, ScalarUDF, TableSource};
use datafusion_sql::planner::ContextProvider;
use std::sync::Arc;

pub struct LogicTestContextProvider {}

// Only a mock; the methods don't need real implementations
impl ContextProvider for LogicTestContextProvider {
fn get_table_provider(
&self,
_name: TableReference,
) -> datafusion_common::Result<Arc<dyn TableSource>> {
todo!()
}

fn get_function_meta(&self, _name: &str) -> Option<Arc<ScalarUDF>> {
todo!()
}

fn get_aggregate_meta(&self, _name: &str) -> Option<Arc<AggregateUDF>> {
todo!()
}

fn get_variable_type(&self, _variable_names: &[String]) -> Option<DataType> {
todo!()
}

fn get_config_option(&self, _variable: &str) -> Option<ScalarValue> {
todo!()
}
}
17 changes: 13 additions & 4 deletions datafusion/core/tests/sqllogictests/src/main.rs
@@ -22,9 +22,11 @@ use datafusion::prelude::{SessionConfig, SessionContext};
use std::path::Path;
use std::time::Duration;

use sqllogictest::TestError;
pub type Result<T> = std::result::Result<T, TestError>;
use crate::error::{DFSqlLogicTestError, Result};
use crate::insert::insert;

mod error;
mod insert;
mod setup;
mod utils;

Expand All @@ -37,7 +39,7 @@ pub struct DataFusion {

#[async_trait]
impl sqllogictest::AsyncDB for DataFusion {
type Error = TestError;
type Error = DFSqlLogicTestError;

async fn run(&mut self, sql: &str) -> Result<String> {
println!("[{}] Running query: \"{}\"", self.file_name, sql);
@@ -138,7 +140,14 @@ fn format_batches(batches: &[RecordBatch]) -> Result<String> {
}

async fn run_query(ctx: &SessionContext, sql: impl Into<String>) -> Result<String> {
let df = ctx.sql(&sql.into()).await.unwrap();
let sql = sql.into();
// Check if the sql is `insert`
if sql.trim_start().to_lowercase().starts_with("insert") {
// Process the insert statement
insert(ctx, sql).await?;
return Ok("".to_string());
}
Comment on lines +144 to +149
Contributor: 👍 I like this basic approach (special case the sql and route it to the test runner implementation).

One thing that might be worth doing is to actually try to parse the input SQL to detect INSERT statements, though I think string manipulation is fine too, or we could do this later:

    // Handle any test only special case statements
    let sql = sql.into();
    if let Ok(statements) = DFParser::parse_sql(&sql) {
        if let Some(Statement::Statement(s)) = statements.get(0) {
            if let SQLStatement::Insert { .. } = **s {
                // debug!("Parsed statement: {:#?}", s);
                // route the INSERT to the test-local handler here
            }
        }
    }
    // ignore anything else, including errors -- they will be handled by the sql context below

Member Author: Yes, a more generic way; will fix in the next PR.

let df = ctx.sql(sql.as_str()).await.unwrap();
let results: Vec<RecordBatch> = df.collect().await.unwrap();
let formatted_batches = format_batches(&results)?;
Ok(formatted_batches)
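
Not part of the diff: the string-based routing in run_query above boils down to a single predicate — a trivial standalone sketch (the helper and test names are illustrative):

    // Anything whose first keyword is INSERT (case-insensitive, ignoring leading
    // whitespace) is routed to the test-local insert handler instead of ctx.sql().
    fn is_insert(sql: &str) -> bool {
        sql.trim_start().to_lowercase().starts_with("insert")
    }

    #[test]
    fn detects_insert_statements() {
        assert!(is_insert("  INSERT INTO t VALUES (1, 2)"));
        assert!(!is_insert("SELECT * FROM t"));
    }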