[DOCS] Fix spelling #813

Merged
merged 1 commit into from Apr 1, 2023
4 changes: 2 additions & 2 deletions R/R/data_interface.R
@@ -423,7 +423,7 @@ spark_read_shapefile <- function(sc,

   lapply(names(options), function(name) {
     if (!name %in% c("")) {
-      warning(paste0("Ignoring unkown option '", name,"'"))
+      warning(paste0("Ignoring unknown option '", name,"'"))
     }
   })

@@ -452,7 +452,7 @@ spark_read_geojson <- function(sc,
   if ("skip_syntactically_invalid_geometries" %in% names(options)) final_skip <- options[["skip_syntactically_invalid_geometries"]] else final_skip <- TRUE
   lapply(names(options), function(name) {
     if (!name %in% c("allow_invalid_geometries", "skip_syntactically_invalid_geometries")) {
-      warning(paste0("Ignoring unkown option '", name,"'"))
+      warning(paste0("Ignoring unknown option '", name,"'"))
     }
   })
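Both R hunks above apply the same validation pattern: iterate over the user-supplied options and warn about any name that is not in an allow-list, so typos in option names fail loudly instead of being silently ignored. A minimal Python sketch of that pattern, assuming nothing beyond the standard library (`validate_options` and its option names are illustrative, not part of Sedona's API):

```python
import warnings

def validate_options(options, allowed):
    # Keep recognized options; warn about (and drop) unknown names,
    # mirroring the allow-list check in the R snippets above.
    recognized = {}
    for name, value in options.items():
        if name in allowed:
            recognized[name] = value
        else:
            warnings.warn(f"Ignoring unknown option '{name}'")
    return recognized

opts = validate_options(
    {"allow_invalid_geometries": True, "typo_option": 1},
    allowed={"allow_invalid_geometries", "skip_syntactically_invalid_geometries"},
)
```

Warning (rather than raising) matches the behavior in the diff: the read proceeds with the options it understands.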

2 changes: 1 addition & 1 deletion R/README.md
@@ -57,4 +57,4 @@ mean_area_sdf <- polygon_sdf %>%
print(mean_area_sdf)
```

-Notice that all of the above can open up many interesting possiblities. For example, one can extract ML features from geospatial data in Spark dataframes, build a ML pipeline using `ml_*` family of functions in `{sparklyr}` to work with such features, and if the output of a ML model happens to be a geospatial object as well, one can even apply visualization routines in `{apache.sedona}` to visualize the difference between any predicted geometry and the corresponding ground truth.
+Notice that all of the above can open up many interesting possibilities. For example, one can extract ML features from geospatial data in Spark dataframes, build a ML pipeline using `ml_*` family of functions in `{sparklyr}` to work with such features, and if the output of a ML model happens to be a geospatial object as well, one can even apply visualization routines in `{apache.sedona}` to visualize the difference between any predicted geometry and the corresponding ground truth.
2 changes: 1 addition & 1 deletion R/vignettes/articles/apache-sedona.Rmd
@@ -180,7 +180,7 @@ data_tbl %>%

## Manipulating

-The dbplyr interface transparently translates dbplyr worklfows into SQL, and gives access to all Apache Sedona SQL functions:
+The dbplyr interface transparently translates dbplyr workflows into SQL, and gives access to all Apache Sedona SQL functions:

* [Vector functions](../../../api/sql/Function/)
* [Vector predicates](../../../api/sql/Predicate/)
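The corrected sentence describes dbplyr's core mechanism: verbs applied to a remote table are not executed in R but rewritten into SQL that calls the corresponding database functions, here Sedona's `ST_*` family. A toy Python sketch of that rewriting idea, under the assumption of a single filter verb (the `translate_filter` helper, its predicate map, and the table/column names are all made up for illustration; this is not dbplyr):

```python
def translate_filter(table, predicate, geom_col, wkt):
    # Map a high-level predicate name to the Sedona SQL function it
    # should become, then emit the SQL a translator would generate.
    sql_fn = {"st_contains": "ST_Contains", "st_intersects": "ST_Intersects"}[predicate]
    return (
        f"SELECT * FROM {table} "
        f"WHERE {sql_fn}(ST_GeomFromWKT('{wkt}'), {geom_col})"
    )

sql = translate_filter("polygons", "st_intersects", "geometry", "POINT (1 2)")
# -> SELECT * FROM polygons WHERE ST_Intersects(ST_GeomFromWKT('POINT (1 2)'), geometry)
```

The point is only that the user writes the predicate once, in their host language, and the SQL string is derived mechanically, which is why all Sedona SQL functions become reachable through the interface.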
2 changes: 1 addition & 1 deletion docs/api/sql/Raster-loader.md
@@ -5,7 +5,7 @@ Sedona provides two types of raster DataFrame loaders. They both use Sedona buil

## Load any raster to RasterUDT format

-The raster loader of Sedona leverages Spark built-in binary data source and works with several RS RasterUDT constrcutors to produce RasterUDT type. Each raster is a row in the resulting DataFrame and stored in a `RasterUDT` format.
+The raster loader of Sedona leverages Spark built-in binary data source and works with several RS RasterUDT constructors to produce RasterUDT type. Each raster is a row in the resulting DataFrame and stored in a `RasterUDT` format.

### Load raster to a binary DataFrame

6 changes: 3 additions & 3 deletions docs/tutorial/sql.md
@@ -305,7 +305,7 @@ For Postgis there is no need to add a query to convert geometry types since it's
=== "Scala"

```scala
-    // For any JDBC data source, inluding Postgis.
+    // For any JDBC data source, including Postgis.
val df = sparkSession.read.format("jdbc")
// Other options.
.option("query", "SELECT id, ST_AsBinary(geom) as geom FROM my_table")
@@ -323,7 +323,7 @@ For Postgis there is no need to add a query to convert geometry types since it's
=== "Java"

```java
-    // For any JDBC data source, inluding Postgis.
+    // For any JDBC data source, including Postgis.
Dataset<Row> df = sparkSession.read().format("jdbc")
// Other options.
.option("query", "SELECT id, ST_AsBinary(geom) as geom FROM my_table")
@@ -341,7 +341,7 @@ For Postgis there is no need to add a query to convert geometry types since it's
=== "Python"

```python
-    # For any JDBC data source, inluding Postgis.
+    # For any JDBC data source, including Postgis.
df = (sparkSession.read.format("jdbc")
# Other options.
.option("query", "SELECT id, ST_AsBinary(geom) as geom FROM my_table")
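All three JDBC snippets select the geometry column through `ST_AsBinary`, because plain JDBC cannot transfer a database-native geometry type: the geometry travels as WKB (well-known binary) bytes and is decoded back into a geometry on the Spark side. As a rough illustration of what that binary payload contains, here is a standard-library Python sketch that decodes a 2D little-endian WKB point (the `parse_wkb_point` helper is hypothetical and handles only this one case; real readers such as Sedona's WKB constructors handle every geometry type and byte order):

```python
import struct

def parse_wkb_point(wkb: bytes):
    # Byte 0 is the byte-order flag: 1 = little-endian, 0 = big-endian.
    endian = "<" if wkb[0] == 1 else ">"
    # Bytes 1-4 hold the geometry type code; 1 means Point.
    (geom_type,) = struct.unpack_from(endian + "I", wkb, 1)
    assert geom_type == 1, "this sketch only handles WKB type 1 (Point)"
    # Bytes 5-20 hold the x and y coordinates as IEEE 754 doubles.
    x, y = struct.unpack_from(endian + "dd", wkb, 5)
    return x, y

# WKB bytes for POINT (1 2)
wkb = bytes.fromhex("0101000000000000000000f03f0000000000000040")
print(parse_wkb_point(wkb))  # (1.0, 2.0)
```

Shipping the geometry in this well-specified binary form is what lets the same `query` option work unchanged against Postgis and any other JDBC source.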