
The number of handle columns exceeds the length of columns_info #2750

Closed
oneday521 opened this issue Aug 9, 2023 · 3 comments · Fixed by #2785

Comments

@oneday521

General Question

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 51.0 failed 4 times, most recent failure: Lost task 0.3 in stage 51.0 (TID 2116) : com.pingcap.tikv.exception.TiClientInternalException: Error reading region:
at com.pingcap.tikv.operation.iterator.DAGIterator.doReadNextRegionChunks(DAGIterator.java:190)
at com.pingcap.tikv.operation.iterator.DAGIterator.readNextRegionChunks(DAGIterator.java:167)
at com.pingcap.tikv.operation.iterator.DAGIterator.hasNext(DAGIterator.java:113)
at org.apache.spark.sql.tispark.TiRowRDD$$anon$1.hasNext(TiRowRDD.scala:70)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.columnartorow_nextBatch_0$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:177)
at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: com.pingcap.tikv.exception.RegionTaskException: Handle region task failed:
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at com.pingcap.tikv.operation.iterator.DAGIterator.doReadNextRegionChunks(DAGIterator.java:185)
... 19 more
Caused by: com.pingcap.tikv.exception.RegionTaskException: Handle region task failed:
at com.pingcap.tikv.operation.iterator.DAGIterator.process(DAGIterator.java:233)
at com.pingcap.tikv.operation.iterator.DAGIterator.lambda$submitTasks$1(DAGIterator.java:91)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
Caused by: com.pingcap.tikv.exception.GrpcException: [components/tidb_query_executors/src/index_scan_executor.rs:109]: The number of handle columns exceeds the length of columns_info
at com.pingcap.tikv.region.RegionStoreClient.handleCopResponse(RegionStoreClient.java:734)
at com.pingcap.tikv.region.RegionStoreClient.coprocess(RegionStoreClient.java:681)
at com.pingcap.tikv.operation.iterator.DAGIterator.process(DAGIterator.java:220)
... 7 more

@github-actions

github-actions bot commented Sep 9, 2023

This issue is stale because it has been open for 30 days with no activity.

github-actions bot added the stale label Sep 9, 2023
@github-actions

This issue was closed because it has been inactive for 14 days since being marked as stale.

@shiyuhang0
Member

shiyuhang0 commented Jul 17, 2024

Also met this issue. Reproduction steps:

  1. DDL
CREATE TABLE `test`.`t` (
  `CI_NO` varchar(64) NOT NULL,
  `AC_DT` bigint(20) NOT NULL,
  `SRC_KEY` varchar(100) NOT NULL,
  PRIMARY KEY (`SRC_KEY`,`CI_NO`,`AC_DT`) /*T![clustered_index] CLUSTERED */,
  KEY `IDX_FLOW_01` (`CI_NO`,`AC_DT`)
)
  2. spark.sql("select ci_no,ac_dt from tidb_catalog.test.t").show()
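Reading the error literally against this schema: the query selects only the two secondary-index columns, so TiSpark issues an index scan whose columns_info describes two columns, while the table's clustered primary key means the row "handle" consists of three columns (SRC_KEY, CI_NO, AC_DT). The check in components/tidb_query_executors/src/index_scan_executor.rs then fires. A minimal Python model of that invariant (hypothetical function and parameter names, not TiKV code):

```python
# Illustrative model of the TiKV index-scan sanity check (not real TiKV code):
# the executor receives the columns the coprocessor request describes
# (columns_info) and the list of primary-key ("handle") columns it must
# materialize. If the request describes fewer columns than the handle needs,
# decoding cannot proceed and the scan is rejected.

def check_index_scan(columns_info: list, handle_columns: list) -> None:
    if len(handle_columns) > len(columns_info):
        raise ValueError(
            "The number of handle columns exceeds the length of columns_info"
        )

# Non-clustered table: the handle is the hidden integer row id, one column,
# so a request describing the two index columns passes the check.
check_index_scan(columns_info=["CI_NO", "AC_DT"],
                 handle_columns=["_tidb_rowid"])

# Clustered table from the DDL above: the handle is the three-column primary
# key, but the request only describes the two index columns, so the check
# fires with the same message as the GrpcException in the stack trace.
try:
    check_index_scan(columns_info=["CI_NO", "AC_DT"],
                     handle_columns=["SRC_KEY", "CI_NO", "AC_DT"])
except ValueError as e:
    print(e)
```

Under this reading, the fix would be for the client to include all clustered-index handle columns in the request's columns_info rather than only the selected index columns.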
