CREATE EXTERNAL TABLE with LOCATION should load existing configuration

What went wrong?
If we have an existing table at a location L, the only way to populate the Table in the Catalog without rewriting the data is to CREATE EXTERNAL TABLE with LOCATION L. The current implementation forces us to add the columnsToIndex parameter to the OPTIONS SQL clause, because we use the same interface for creating a new table and for linking a Table to an external directory.
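To make the difference concrete, here is a rough sketch of the two forms (the table schema and `$location` follow the reproduction snippet below; the exact OPTIONS spelling is an assumption, not confirmed output of the current implementation):

// What we are effectively forced to write today: repeating the indexing configuration
spark.sql(
  s"CREATE EXTERNAL TABLE student (id INT, name STRING, age INT) " +
    s"USING qbeast " +
    s"OPTIONS ('columnsToIndex'='id,name') " +
    s"LOCATION '$location'")

// What should be enough, since columnsToIndex is already stored in the qbeast metadata at the location
spark.sql(
  s"CREATE EXTERNAL TABLE student (id INT, name STRING, age INT) " +
    s"USING qbeast " +
    s"LOCATION '$location'")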
How to reproduce?
1. Code that triggered the bug, or steps to reproduce:
val location = tmpDir + "/external_student/"
val data = createTestData(spark)
data.write.format("qbeast").option("columnsToIndex", "id,name").save(location)

spark.sql(
  s"CREATE EXTERNAL TABLE student (id INT, name STRING, age INT) " +
    s"USING qbeast " +
    s"LOCATION '$location'")
2. Branch and commit id:
main
3. Spark version:
3.4.1
4. Hadoop version:
3.3.1
5. How are you running Spark?
Are you running Spark inside a container? Are you launching the app on a remote K8s cluster? Or are you just running the tests on a local computer?
Local computer
6. Stack trace: