[DOCS] Fix spelling in Markdown and Python files #758

Merged: 1 commit, merged Feb 10, 2023
2 changes: 1 addition & 1 deletion docs/api/sql/Function.md
@@ -839,7 +839,7 @@ Result:
!!!note
In Sedona up to and including version 1.2 the behaviour of ST_MakeValid was different.
Be sure to check you code when upgrading. The previous implementation only worked for (multi)polygons and had a different interpretation of the second, boolean, argument.
-It would also sometimes return multiple geometries for a single geomtry input.
+It would also sometimes return multiple geometries for a single geometry input.

## ST_MinimumBoundingCircle

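A hedged PySpark sketch of the `ST_MakeValid` call discussed in the hunk above (table and column names are assumptions, not from this page):

```python
from sedona.register import SedonaRegistrator

# assumes an existing SparkSession named `spark`
SedonaRegistrator.registerAll(spark)

# Repair invalid geometries; per the note above, since Sedona 1.2 this
# accepts any geometry type and returns one geometry per input row.
valid_df = spark.sql("SELECT ST_MakeValid(geom) AS geom FROM polygon_table")
valid_df.show()
```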
2 changes: 1 addition & 1 deletion docs/api/sql/Raster-loader.md
@@ -33,7 +33,7 @@ There are three more optional parameters for reading GeoTiff:

```html
|-- readfromCRS: Coordinate reference system of the geometry coordinates representing the location of the Geotiff. An example value of readfromCRS is EPSG:4326.
-|-- readToCRS: If you want to tranform the Geotiff location geometry coordinates to a different coordinate reference system, you can define the target coordinate reference system with this option.
+|-- readToCRS: If you want to transform the Geotiff location geometry coordinates to a different coordinate reference system, you can define the target coordinate reference system with this option.
|-- disableErrorInCRS: (Default value false) => Indicates whether to ignore errors in CRS transformation.
```

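A hedged sketch of how these options combine when loading a GeoTiff in PySpark (the path and CRS values are illustrative; option spellings follow the doc text above):

```python
# assumes a Sedona-enabled SparkSession named `spark`
df = (
    spark.read.format("geotiff")
    .option("readfromCRS", "EPSG:4326")    # CRS of the stored geometry
    .option("readToCRS", "EPSG:3857")      # reproject the location geometry
    .option("disableErrorInCRS", "true")   # ignore CRS-transform errors
    .load("/data/rasters/example.tif")
)
```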
2 changes: 1 addition & 1 deletion docs/api/sql/Raster-operators.md
@@ -92,7 +92,7 @@ val multiplyDF = spark.sql("select RS_Divide(band1, band2) as divideBands from d

Introduction: Fetch a subset of region from given Geotiff image based on minimumX, minimumY, maximumX and maximumY index as well original height and width of image

-Format: `RS_FetchRegion (Band: Array[Double], coordinates: Array[Int], dimenstions: Array[Int])`
+Format: `RS_FetchRegion (Band: Array[Double], coordinates: Array[Int], dimensions: Array[Int])`

Since: `v1.1.0`

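A hedged usage sketch of `RS_FetchRegion` (band and table names assumed; argument order follows the introduction above: min/max indexes, then original height and width):

```python
# assumes a Sedona-enabled SparkSession named `spark` and a table with a
# `band1` column of Array[Double]
region_df = spark.sql(
    "SELECT RS_FetchRegion(band1, Array(0, 0, 1, 2), Array(3, 3)) AS region "
    "FROM raster_table"
)
```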
2 changes: 1 addition & 1 deletion docs/api/viz/sql.md
@@ -41,7 +41,7 @@ FROM pixels

#### Produce uniform colors - scatter plot

-If a mandatory color name is put as the third input argument, this function will directly ouput this color, without considering the weights. In this case, every pixel will possess the same color.
+If a mandatory color name is put as the third input argument, this function will directly output this color, without considering the weights. In this case, every pixel will possess the same color.

Spark SQL example:
```SQL
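-- Hedged completion of the truncated example: column names are
-- assumptions; the third argument pins every pixel to one color,
-- ignoring the weights.
SELECT pixel, ST_Colorize(weight, 0, 'red') AS color
FROM pixels
```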
2 changes: 1 addition & 1 deletion docs/community/contributor.md
@@ -211,7 +211,7 @@ Once Sedona graduates, the PMC chair will make the request.

Once the new PMC subscribes to the Sedona mailing lists using his/her ASF account, one of the PMC needs to add the new PMC to the Whimsy system (https://whimsy.apache.org/roster/pmc/sedona).

-### PMC annoucement
+### PMC announcement

This is the email to announce the new committer to sedona-dev once the account has been created.

4 changes: 2 additions & 2 deletions docs/community/publish.md
@@ -229,7 +229,7 @@ No -1 votes

The vote thread (Permalink from https://lists.apache.org/list.html):

-I will make an annoucement soon.
+I will make an announcement soon.

```

@@ -406,7 +406,7 @@ rm *.asc

## 9. Release Sedona Python and Zeppelin

-You must have the maintainer priviledge of `https://pypi.org/project/apache-sedona/` and `https://www.npmjs.com/package/apache-sedona`
+You must have the maintainer privilege of `https://pypi.org/project/apache-sedona/` and `https://www.npmjs.com/package/apache-sedona`

```bash
#!/bin/bash
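# Hedged sketch of the remaining steps (the committed release script is
# authoritative; the commands below are standard PyPI/npm publishing):
cd python
python3 setup.py sdist bdist_wheel
twine upload dist/*    # needs maintainer privilege on pypi.org/project/apache-sedona
cd ../zeppelin
npm publish            # needs maintainer privilege on npmjs.com/package/apache-sedona
```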
2 changes: 1 addition & 1 deletion docs/community/rule.md
@@ -5,7 +5,7 @@ The project welcomes contributions. You can contribute to Sedona code or documen

The following sections brief the workflow of how to complete a contribution.

-## Pick / Annouce a task using JIRA
+## Pick / Announce a task using JIRA

It is important to confirm that your contribution is acceptable. You should create a JIRA ticket or pick an existing ticket. A new JIRA ticket will be automatically sent to `dev@sedona.apache.org`

2 changes: 1 addition & 1 deletion docs/community/vote.md
@@ -2,7 +2,7 @@

This page is for Sedona community to vote a Sedona release. The script below is tested on MacOS.

-In order to vote a Sedona release, you must provide your checklist inlcuding the following minimum requirement:
+In order to vote a Sedona release, you must provide your checklist including the following minimum requirement:

* Download links are valid
* Checksums and PGP signatures are valid
2 changes: 1 addition & 1 deletion docs/setup/install-r.md
@@ -48,7 +48,7 @@ At the moment `apache.sedona` consists of the following components:

To ensure Sedona serialization routines, UDTs, and UDFs are properly
registered when creating a Spark session, one simply needs to attach
-`apache.sedona` before instantiating a Spark conneciton. apache.sedona
+`apache.sedona` before instantiating a Spark connection. apache.sedona
will take care of the rest. For example,

``` r
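# Minimal sketch, assuming a local Spark install: attach apache.sedona
# *before* connecting so serializers, UDTs, and UDFs register on creation.
library(sparklyr)
library(apache.sedona)

sc <- spark_connect(master = "local")
```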
6 changes: 3 additions & 3 deletions docs/setup/maven-coordinates.md
@@ -8,9 +8,9 @@ Sedona Flink has four modules :`sedona-core, sedona-sql, sedona-python-adapter,
## Use Sedona fat jars

!!!warning
-For Scala/Java/Python/R users, this is the most common way to use Sedona in your environment. Do not use separate Sedona jars othwerwise you will get dependency conflicts. `sedona-python-adapter` already contains all you need.
+For Scala/Java/Python/R users, this is the most common way to use Sedona in your environment. Do not use separate Sedona jars otherwise you will get dependency conflicts. `sedona-python-adapter` already contains all you need.

-The optional GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distriution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This libary is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
+The optional GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distribution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This library is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.

!!! abstract "Sedona with Apache Spark"

@@ -234,7 +234,7 @@ Under MIT License. Please make sure you exclude jts and jackson from this librar

### GeoTools 24.0+

-GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distriution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This libary is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.
+GeoTools library is required only if you want to use CRS transformation and ShapefileReader. This wrapper library is a re-distriution of GeoTools official jars. The only purpose of this library is to bring GeoTools jars from OSGEO repository to Maven Central. This library is under GNU Lesser General Public License (LGPL) license so we cannot package it in Sedona official release.

```xml
<!-- https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
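<!-- Hedged sketch: the version string below is an assumption; pick the
     geotools-wrapper release matching your Sedona/GeoTools versions on
     Maven Central. -->
<dependency>
    <groupId>org.datasyslab</groupId>
    <artifactId>geotools-wrapper</artifactId>
    <version>1.3.0-27.2</version>
</dependency>
```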
4 changes: 2 additions & 2 deletions docs/setup/release-notes.md
@@ -320,11 +320,11 @@ This version is a maintenance release on Sedona 1.0.0 line. It includes bug fixe

### Known issue

-In Sedona v1.0.1 and eariler versions, the Spark dependency in setup.py was configured to be ==< v3.1.0== [by mistake](https://github.com/apache/sedona/blob/8235924ac80939cbf2ce562b0209b71833ed9429/python/setup.py#L39). When you install Sedona Python (apache-sedona v1.0.1) from Pypi, pip might unstall PySpark 3.1.1 and install PySpark 3.0.2 on your machine.
+In Sedona v1.0.1 and earlier versions, the Spark dependency in setup.py was configured to be ==< v3.1.0== [by mistake](https://github.com/apache/sedona/blob/8235924ac80939cbf2ce562b0209b71833ed9429/python/setup.py#L39). When you install Sedona Python (apache-sedona v1.0.1) from Pypi, pip might uninstall PySpark 3.1.1 and install PySpark 3.0.2 on your machine.

Three ways to fix this:

-1. After install apache-sedona v1.0.1, unstall PySpark 3.0.2 and reinstall PySpark 3.1.1
+1. After install apache-sedona v1.0.1, uninstall PySpark 3.0.2 and reinstall PySpark 3.1.1

2. Ask pip not to install Sedona dependencies: `pip install --no-deps apache-sedona`

2 changes: 1 addition & 1 deletion docs/tutorial/core-python.md
@@ -242,7 +242,7 @@ query_result = RangeQuery.SpatialRangeQuery(

The output format of the spatial range query is another RDD which consists of GeoData objects.

-SpatialRangeQuery result can be used as RDD with map or other spark RDD funtions. Also it can be used as
+SpatialRangeQuery result can be used as RDD with map or other spark RDD functions. Also it can be used as
Python objects when using collect method.
Example:

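The original example is cut off by the hunk boundary; a hedged sketch of both usages, continuing the `query_result` from above (GeoData attribute names are assumptions):

```python
# Use the result as an ordinary Spark RDD ...
wkts = query_result.map(lambda gd: gd.geom.wkt)
print(wkts.take(5))

# ... or collect GeoData objects to the driver as plain Python objects
for gd in query_result.collect()[:5]:
    print(gd.geom, gd.getUserData())
```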
2 changes: 1 addition & 1 deletion docs/tutorial/flink/sql.md
@@ -122,7 +122,7 @@ The first EPSG code EPSG:4326 in `ST_Transform` is the source CRS of the geometr

The second EPSG code EPSG:3857 in `ST_Transform` is the target CRS of the geometries. It is the most common meter-based CRS.

-This `ST_Transform` transform the CRS of these geomtries from EPSG:4326 to EPSG:3857. The details CRS information can be found on [EPSG.io](https://epsg.io/)
+This `ST_Transform` transform the CRS of these geometries from EPSG:4326 to EPSG:3857. The details CRS information can be found on [EPSG.io](https://epsg.io/)

!!!note
Read [SedonaSQL ST_Transform API](../../../api/flink/Function/#st_transform) to learn different spatial query predicates.
6 changes: 3 additions & 3 deletions docs/tutorial/rdd.md
@@ -216,7 +216,7 @@ objectRDD.CRSTransform(sourceCrsCode, targetCrsCode, false)
`false` in CRSTransform(sourceCrsCode, targetCrsCode, false) means that it will not tolerate Datum shift. If you want it to be lenient, use `true` instead.

!!!warning
-CRS transformation should be done right after creating each SpatialRDD, otherwise it will lead to wrong query results. For instace, use something like this:
+CRS transformation should be done right after creating each SpatialRDD, otherwise it will lead to wrong query results. For instance, use something like this:
```Scala
var objectRDD = new PointRDD(sc, pointRDDInputLocation, pointRDDOffset, pointRDDSplitter, carryOtherAttributes)
objectRDD.CRSTransform("epsg:4326", "epsg:3857", false)
@@ -410,7 +410,7 @@ val result = JoinQuery.SpatialJoinQuery(objectRDD, queryWindowRDD, usingIndex, s
FROM city, superhero
WHERE ST_Contains(city.geom, superhero.geom);
```
-Find the super heros in each city
+Find the superheroes in each city

### Use spatial partitioning

@@ -502,7 +502,7 @@ The output format of the distance join query is [here](#output-format_2).
FROM city, superhero
WHERE ST_Distance(city.geom, superhero.geom) <= 10;
```
-Find the super heros within 10 miles of each city
+Find the superheroes within 10 miles of each city

## Save to permanent storage

2 changes: 1 addition & 1 deletion docs/tutorial/sql-r.md
@@ -50,7 +50,7 @@ modified_polygon_sdf <- polygon_sdf %>%
```


-Notice that all of the above can open up many interesting possiblities. For
+Notice that all of the above can open up many interesting possibilities. For
example, one can extract ML features from geospatial data in Spark
dataframes, build a ML pipeline using `ml_*` family of functions in
`sparklyr` to work with such features, and if the output of a ML model
2 changes: 1 addition & 1 deletion docs/tutorial/sql.md
@@ -178,7 +178,7 @@ The first EPSG code EPSG:4326 in `ST_Transform` is the source CRS of the geometr

The second EPSG code EPSG:3857 in `ST_Transform` is the target CRS of the geometries. It is the most common meter-based CRS.

-This `ST_Transform` transform the CRS of these geomtries from EPSG:4326 to EPSG:3857. The details CRS information can be found on [EPSG.io](https://epsg.io/)
+This `ST_Transform` transform the CRS of these geometries from EPSG:4326 to EPSG:3857. The details CRS information can be found on [EPSG.io](https://epsg.io/)

The coordinates of polygons have been changed. The output will be like this:

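The sample output is truncated by the hunk boundary. A hedged PySpark sketch of the transform itself (view and column names are assumptions):

```python
# assumes a Sedona-enabled SparkSession `spark` and a view `spatialdf`
# with a geometry column `countyshape` in EPSG:4326 (lon/lat degrees)
df_3857 = spark.sql(
    "SELECT ST_Transform(countyshape, 'epsg:4326', 'epsg:3857') AS countyshape "
    "FROM spatialdf"
)
df_3857.show(5, truncate=False)
```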
6 changes: 3 additions & 3 deletions docs/tutorial/viz.md
@@ -5,7 +5,7 @@ SedonaViz provides native support for general cartographic design by extending S
SedonaViz offers Map Visualization SQL. This gives users a more flexible way to design beautiful map visualization effects including scatter plots and heat maps. SedonaViz RDD API is also available.

!!!note
-All SedonaViz SQL/DataFrame APIs are explained in [SedonaViz API](../../api/viz/sql). Please see [Viz exmaple project](https://github.com/apache/sedona/tree/master/examples/viz)
+All SedonaViz SQL/DataFrame APIs are explained in [SedonaViz API](../../api/viz/sql). Please see [Viz example project](https://github.com/apache/sedona/tree/master/examples/viz)

## Why scalable map visualization?

@@ -14,7 +14,7 @@ Data visualization allows users to summarize, analyze and reason about data. Gua
SedonaViz encapsulates the main steps of map visualization process, e.g., pixelize, aggregate, and render, into a set of massively parallelized GeoViz operators and the user can assemble any customized styles.

## Visualize SpatialRDD
-This tutorial mainly focuses on explaining SQL/DataFrame API. SedonaViz RDD example can be found in Please see [Viz exmaple project](https://github.com/apache/sedona/tree/master/examples/viz)
+This tutorial mainly focuses on explaining SQL/DataFrame API. SedonaViz RDD example can be found in Please see [Viz example project](https://github.com/apache/sedona/tree/master/examples/viz)

## Set up dependencies
1. Read [Sedona Maven Central coordinates](../setup/maven-coordinates.md)
@@ -108,7 +108,7 @@ LATERAL VIEW explode(ST_Pixelize(ST_Transform(shape, 'epsg:4326','epsg:3857'), 2
This will give you a 256*256 resolution image after you run ST_Render at the end of this tutorial.

!!!warning
-We highly suggest that you should use ST_Transform to transfrom coordiantes to a visualization-specific coordinate sysmte such as epsg:3857. Otherwise you map may look distorted.
+We highly suggest that you should use ST_Transform to transform coordiantes to a visualization-specific coordinate system such as epsg:3857. Otherwise you map may look distorted.

### Aggregate pixels

2 changes: 1 addition & 1 deletion python/tests/core/test_rdd.py
@@ -335,7 +335,7 @@ def test_crs_transformed_spatial_range_query(self):
object_rdd, range_query_window, False, False
)

-def test_crs_tranformed_spatial_range_query_using_index(self):
+def test_crs_transformed_spatial_range_query_using_index(self):
object_rdd = PointRDD(
sparkContext=self.sc,
InputLocation=point_rdd_input_location,