[SPARK-26811][SQL][FOLLOWUP] fix some documentation
## What changes were proposed in this pull request?

It's a followup of apache#24012, fixing two documentation issues:
1. `SupportsRead` and `SupportsWrite` are no longer internal; they are public interfaces now.
2. `Scan` should link to `BATCH_READ` instead of hardcoding it.
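
For illustration, here is a minimal sketch of how a connector might use the now-public `SupportsRead` mix-in together with `BATCH_READ`. The `SimpleTable` name, the trivial method bodies, and the exact package locations are assumptions for this example, not part of the commit:

```java
import java.util.Collections;
import java.util.Set;

import org.apache.spark.sql.sources.v2.SupportsRead;
import org.apache.spark.sql.sources.v2.TableCapability;
import org.apache.spark.sql.sources.v2.reader.Scan;
import org.apache.spark.sql.sources.v2.reader.ScanBuilder;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

// Hypothetical table that opts into batch reads through the public SupportsRead mix-in.
public class SimpleTable implements SupportsRead {

  @Override
  public String name() {
    return "simple_table";
  }

  @Override
  public StructType schema() {
    return new StructType().add("value", "int");
  }

  // Advertise BATCH_READ so that Spark expects Scan.toBatch() to be implemented.
  @Override
  public Set<TableCapability> capabilities() {
    return Collections.singleton(TableCapability.BATCH_READ);
  }

  @Override
  public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {
    return new ScanBuilder() {
      @Override
      public Scan build() {
        // A real connector would return a Scan whose toBatch() is implemented,
        // matching the BATCH_READ capability reported above. Stubbed out here.
        throw new UnsupportedOperationException("sketch only");
      }
    };
  }
}
```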

## How was this patch tested?
N/A

Closes apache#24285 from cloud-fan/doc.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
cloud-fan authored and mccheah committed May 24, 2019
1 parent 48e2590 commit ffc48da
Showing 3 changed files with 7 additions and 5 deletions.
SupportsRead.java
@@ -22,7 +22,7 @@
 import org.apache.spark.sql.util.CaseInsensitiveStringMap;

 /**
- * An internal base interface of mix-in interfaces for readable {@link Table}. This adds
+ * A mix-in interface of {@link Table}, to indicate that it's readable. This adds
  * {@link #newScanBuilder(CaseInsensitiveStringMap)} that is used to create a scan for batch,
  * micro-batch, or continuous processing.
  */
SupportsWrite.java
@@ -22,7 +22,7 @@
 import org.apache.spark.sql.util.CaseInsensitiveStringMap;

 /**
- * An internal base interface of mix-in interfaces for writable {@link Table}. This adds
+ * A mix-in interface of {@link Table}, to indicate that it's writable. This adds
  * {@link #newWriteBuilder(CaseInsensitiveStringMap)} that is used to create a write
  * for batch or streaming.
  */
Scan.java
@@ -24,6 +24,7 @@
 import org.apache.spark.sql.sources.v2.SupportsContinuousRead;
 import org.apache.spark.sql.sources.v2.SupportsMicroBatchRead;
 import org.apache.spark.sql.sources.v2.Table;
+import org.apache.spark.sql.sources.v2.TableCapability;

 /**
  * A logical representation of a data source scan. This interface is used to provide logical
@@ -32,8 +33,8 @@
  * This logical representation is shared between batch scan, micro-batch streaming scan and
  * continuous streaming scan. Data sources must implement the corresponding methods in this
  * interface, to match what the table promises to support. For example, {@link #toBatch()} must be
- * implemented, if the {@link Table} that creates this {@link Scan} returns BATCH_READ support in
- * its {@link Table#capabilities()}.
+ * implemented, if the {@link Table} that creates this {@link Scan} returns
+ * {@link TableCapability#BATCH_READ} support in its {@link Table#capabilities()}.
  * </p>
  */
 @Evolving
@@ -61,7 +62,8 @@ default String description() {
   /**
    * Returns the physical representation of this scan for batch query. By default this method throws
    * exception, data sources must overwrite this method to provide an implementation, if the
-   * {@link Table} that creates this returns batch read support in its {@link Table#capabilities()}.
+   * {@link Table} that creates this scan returns {@link TableCapability#BATCH_READ} in its
+   * {@link Table#capabilities()}.
    *
    * @throws UnsupportedOperationException
    */
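
To make the contract in the updated `Scan` javadoc concrete, here is a hedged sketch of a `Scan` whose table reported `TableCapability#BATCH_READ`, so `toBatch()` is overridden rather than left to the default that throws. `SimpleScan`, the stub bodies, and the exact package paths are assumptions for illustration, not part of the commit:

```java
import org.apache.spark.sql.sources.v2.reader.Batch;
import org.apache.spark.sql.sources.v2.reader.InputPartition;
import org.apache.spark.sql.sources.v2.reader.PartitionReaderFactory;
import org.apache.spark.sql.sources.v2.reader.Scan;
import org.apache.spark.sql.types.StructType;

// Hypothetical Scan created by a Table that advertises BATCH_READ in capabilities():
// because the capability was reported, toBatch() is implemented here instead of
// falling back to the default that throws UnsupportedOperationException.
public class SimpleScan implements Scan {

  @Override
  public StructType readSchema() {
    return new StructType().add("value", "int");
  }

  @Override
  public Batch toBatch() {
    return new Batch() {
      @Override
      public InputPartition[] planInputPartitions() {
        // A real connector would split the data here; an empty scan suffices for a sketch.
        return new InputPartition[0];
      }

      @Override
      public PartitionReaderFactory createReaderFactory() {
        // Left unimplemented in this sketch; a real connector returns a factory that
        // creates a PartitionReader for each partition planned above.
        throw new UnsupportedOperationException("sketch only");
      }
    };
  }
}
```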
