Commit ce5f8b7

Typos clean-up & IDE warning fixes (#573)
1 parent 69befbc commit ce5f8b7


48 files changed: +83 -144 lines changed

CHANGELOG.md (+3 -3)

@@ -94,7 +94,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Simplified flow statuses within Flow System (no more Queued or Scheduled status)
  - Extended flow start conditions with more debug information for UI needs
  - Simplified flow cancellation API:
- - Cancelling in Waiting/Running states is accepted, and aborts the flow and it's associated tasks
+ - Cancelling in Waiting/Running states is accepted, and aborts the flow, and it's associated tasks
  - Cancelling in Waiting/Running states also automatically pauses flow configuration

  ## [0.162.1] - 2024-02-28
@@ -142,7 +142,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

  ## [0.157.0] - 2024-02-12
  ### Added
- - Complete support for `arm64` architecture (including M-series Apple silicon)
+ - Complete support for `arm64` architecture (including M-series Apple Silicon)
  - `kamu-cli` now depends on multi-platform Datafusion, Spark, Flink, and Jupyter images allowing you to run data processing at native CPU speeds
  ### Changed
  - Spark engine is upgraded to latest version of Spark 3.5
@@ -152,7 +152,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

  ## [0.156.3] - 2024-02-09
  ### Added
- - Native support for `arm64` architecture (including M-series Apple silicon) in `kamu-cli` and `kamu-engine-datafusion`
+ - Native support for `arm64` architecture (including M-series Apple Silicon) in `kamu-cli` and `kamu-engine-datafusion`
  - Note: Flink and Spark engine images still don't provide `arm64` architecture and continue to require QEMU
  ### Changed
  - Flow system scheduling rules improved to respect system-wide throttling setting and take last successful run into account when rescheduling a flow or after a restart

DEVELOPER.md (+1 -1)

@@ -79,7 +79,7 @@ When needing to test against a specific official release you can install it unde
  curl -s "https://get.kamu.dev" | KAMU_ALIAS=kamu-release sh
  ```

- New to Rust? Check out these [IDE configuration tip](#ide-tips).
+ New to Rust? Check out these [IDE configuration tips](#ide-configuration).


  ### Configure Podman as Default Runtime (Recommended)

images/demo/jupyter/kamu-start-hook.sh (+1 -1)

@@ -4,7 +4,7 @@ set -eo pipefail

  # Generate Kamu Node auth token if GitHub access token is provided
  if [ -n "${GITHUB_TOKEN}" ] && [ -n "${GITHUB_LOGIN}" ] && [ -n "${KAMU_JWT_SECRET}" ] && [ -n "${KAMU_NODE_URL}" ]; then
- kamu_token=$(kamu system generate-token --gh-login ${GITHUB_LOGIN} --gh-access-token ${GITHUB_TOKEN})
+ kamu_token=$(kamu system generate-token --gh-login "${GITHUB_LOGIN}" --gh-access-token "${GITHUB_TOKEN}")
  kamu login --user --access-token "${kamu_token}" "${KAMU_NODE_URL#odf+}"
  fi
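
A note on the change above: the only functional difference is quoting. Unquoted, `${GITHUB_LOGIN}` and `${GITHUB_TOKEN}` would undergo word splitting and glob expansion before reaching `kamu`, so a value containing whitespace or `*` could turn into several arguments. A minimal standalone sketch of the difference (the value here is hypothetical, not from the repo):

```bash
#!/usr/bin/env bash
# Hypothetical value containing a space, to show what unquoted expansion does
login="octo cat"

printf '[%s] ' $login;   echo    # unquoted: split into two arguments -> [octo] [cat]
printf '[%s] ' "$login"; echo    # quoted: passed as a single argument -> [octo cat]
```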

images/demo/user-home/01 - Kamu Basics (COVID-19 example)/01 - Introduction.ipynb (+1 -1)

@@ -524,7 +524,7 @@
  "\n",
  "<div class=\"alert alert-block alert-warning\">\n",
  "\n",
- "Note that if you just type `df` in a cell - you will get an error. That's because by default this kernel executes operations in the remore PySpark environment. To access `df` you need to use `%%local` cell command which will execute code in this local Python kernel.\n",
+ "Note that if you just type `df` in a cell - you will get an error. That's because by default this kernel executes operations in the remote PySpark environment. To access `df` you need to use `%%local` cell command which will execute code in this local Python kernel.\n",
  " \n",
  "</div>\n",
  "\n",

images/demo/user-home/01 - Kamu Basics (COVID-19 example)/02 - Collaboration.ipynb (+1 -1)

@@ -59,7 +59,7 @@
  "- Decentralized storage like IPFS, Arweave (see next tutorial on \"Web3 Data\")\n",
  "- Or even some old FTP server (see [full list](https://docs.kamu.dev/node/deploy/storage/))\n",
  "\n",
- "As a reporitory for this demo we will use [**Kamu Node**](https://docs.kamu.dev/node/) - you can think of it as a small server on top of some storage (AWS S3 or Minio in this case) that speaks ORF protocol and provides a bunch of cool additional features, like highly optimized uploads/downloads, dataset search, and even executing remote SQL queries.\n",
+ "As a repository for this demo we will use [**Kamu Node**](https://docs.kamu.dev/node/) - you can think of it as a small server on top of some storage (AWS S3 or Minio in this case) that speaks ORF protocol and provides a bunch of cool additional features, like highly optimized uploads/downloads, dataset search, and even executing remote SQL queries.\n",
  "\n",
  "<div class=\"alert alert-block alert-success\">\n",
  "So let's add the node as a repository:\n",

images/demo/user-home/01 - Kamu Basics (COVID-19 example)/03 - Trust.ipynb (+1 -1)

@@ -244,7 +244,7 @@
  "- [Learning materials](https://docs.kamu.dev/cli/get-started/learning-materials/)\n",
  "- and [external datasets](https://github.com/kamu-data/kamu-contrib).\n",
  "\n",
- "All examples are conviniently located in `~/XX - Other Examples` directory of this Jupyter server.\n",
+ "All examples are conveniently located in `~/XX - Other Examples` directory of this Jupyter server.\n",
  "\n",
  "Also, please tell us what you think about this demo:\n",
  "- by [joining our Discord](https://discord.gg/nU6TXRQNXC)\n",

images/demo/user-home/01 - Kamu Basics (COVID-19 example)/init-chapter-3.sh (+1 -1)

@@ -4,7 +4,7 @@ set -e
  rm -rf .kamu
  kamu init

- kamu repo add kamu-node ${KAMU_NODE_URL}
+ kamu repo add kamu-node "${KAMU_NODE_URL}"
  kamu pull kamu-node/kamu/covid19.british-columbia.case-details --no-alias
  kamu pull kamu-node/kamu/covid19.ontario.case-details
  kamu add datasets/canada*.yaml

images/demo/user-home/02 - Web3 Data (Ethereum trading example)/init-chapter-3.sh (+2 -2)

@@ -17,7 +17,7 @@ kamu add \
  datasets/account.transactions.yaml

  kamu pull account.transactions account.tokens.transfers
- kamu pull --set-watermark `date --iso-8601=s` account.transactions
- kamu pull --set-watermark `date --iso-8601=s` account.tokens.transfers
+ kamu pull --set-watermark "$(date --iso-8601=s)" account.transactions
+ kamu pull --set-watermark "$(date --iso-8601=s)" account.tokens.transfers

  kamu pull --all
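
Here, and in `init-chapter-all.sh` below, the legacy backtick command substitution is replaced with the `$( ... )` form and the result is quoted, so the watermark timestamp is always passed to `kamu pull` as a single argument. A small illustrative sketch, separate from the scripts themselves:

```bash
#!/usr/bin/env bash
set -e

# Capture the ISO-8601 timestamp once; $(...) nests cleanly and is easier to
# read than backticks, and the quotes protect the value from word splitting.
watermark="$(date --iso-8601=s)"

echo "Would set watermark to: ${watermark}"
# e.g. kamu pull --set-watermark "${watermark}" account.transactions
```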

images/demo/user-home/02 - Web3 Data (Ethereum trading example)/init-chapter-all.sh (+2 -2)

@@ -13,7 +13,7 @@ kamu pull "${REPO_BASE_URL}co.alphavantage.tickers.daily.spy"
  kamu add -r datasets/

  kamu pull account.transactions account.tokens.transfers
- kamu pull --set-watermark `date --iso-8601=s` account.transactions
- kamu pull --set-watermark `date --iso-8601=s` account.tokens.transfers
+ kamu pull --set-watermark "$(date --iso-8601=s)" account.transactions
+ kamu pull --set-watermark "$(date --iso-8601=s)" account.tokens.transfers

  kamu pull --all

resources/cli-reference.md (+5 -5)

@@ -653,7 +653,7 @@ Push local data into a repository

  Use this command to share your new dataset or new data with others. All changes performed by this command are atomic and non-destructive. This command will analyze the state of the dataset at the repository and will only upload data and metadata that wasn't previously seen.

- Similarly to git, if someone else modified the dataset concurrently with you - your push will be rejected and you will have to resolve the conflict.
+ Similarly to git, if someone else modified the dataset concurrently with you - your push will be rejected, and you will have to resolve the conflict.

  **Examples:**

@@ -695,7 +695,7 @@ Use this command to rename a dataset in your local workspace. Renaming is safe i

  **Examples:**

- Renaming is often useful when you pull a remote dataset by URL and it gets auto-assigned not the most convenient name:
+ Renaming is often useful when you pull a remote dataset by URL, and it gets auto-assigned not the most convenient name:

  kamu pull ipfs://bafy...a0da
  kamu rename bafy...a0da my.dataset
@@ -827,7 +827,7 @@ Manage set of remote aliases associated with datasets
  * `add` — Adds a remote alias to a dataset
  * `delete` — Deletes a remote alias associated with a dataset

- When you pull and push datasets from repositories kamu uses aliases to let you avoid specifying the full remote referente each time. Aliases are usually created the first time you do a push or pull and saved for later. If you have an unusual setup (e.g. pushing to multiple repositories) you can use this command to manage the aliases.
+ When you pull and push datasets from repositories kamu uses aliases to let you avoid specifying the full remote reference each time. Aliases are usually created the first time you do a push or pull and saved for later. If you have an unusual setup (e.g. pushing to multiple repositories) you can use this command to manage the aliases.

  **Examples:**

@@ -947,7 +947,7 @@ Executes an SQL query or drops you into an SQL shell

  **Options:**

- * `--url <URL>` — URL of a running JDBC server (e.g jdbc:hive2://example.com:10000)
+ * `--url <URL>` — URL of a running JDBC server (e.g. jdbc:hive2://example.com:10000)
  * `-c`, `--command <CMD>` — SQL command to run
  * `--script <FILE>` — SQL script file to execute
  * `--engine <ENG>` — Engine type to use for this SQL session
@@ -1201,7 +1201,7 @@ There are two types of compactions: soft and hard.

  Soft compactions produce new files while leaving the old blocks intact. This allows for faster queries, while still preserving the accurate history of how dataset evolved over time.

- Hard compactions rewrite the history of the dataset as if data was originally written in big batches. They allow to shrink the history of a dataset to just a few blocks, reclaim the space used by old data files, but at the expense of history loss. Hard compactions will rewrite the metadata chain, changing block hashes. Therefore they will **break all downstream datasets** that depend on them.
+ Hard compactions rewrite the history of the dataset as if data was originally written in big batches. They allow to shrink the history of a dataset to just a few blocks, reclaim the space used by old data files, but at the expense of history loss. Hard compactions will rewrite the metadata chain, changing block hashes. Therefore, they will **break all downstream datasets** that depend on them.

  **Examples:**

resources/schema.gql (+1 -1)

@@ -258,7 +258,7 @@ type Dataset {
  """
  alias: DatasetAlias!
  """
- Returns the kind of a dataset (Root or Derivative)
+ Returns the kind of dataset (Root or Derivative)
  """
  kind: DatasetKind!
  """

src/adapter/graphql/src/extensions.rs (+1 -1)

@@ -84,7 +84,7 @@ struct ErrorBacktraceFormatter<'a>(&'a dyn std::error::Error);

  impl<'a> std::fmt::Display for ErrorBacktraceFormatter<'a> {
  fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
- // Uses the inner-most captured backtrace
+ // Uses the innermost captured backtrace
  let mut error = Some(self.0);
  let mut backtrace = None;
  while let Some(e) = error {

src/adapter/graphql/src/queries/datasets/dataset.rs (+1 -1)

@@ -78,7 +78,7 @@ impl Dataset {
  self.dataset_handle.alias.clone().into()
  }

- /// Returns the kind of a dataset (Root or Derivative)
+ /// Returns the kind of dataset (Root or Derivative)
  async fn kind(&self, ctx: &Context<'_>) -> Result<DatasetKind> {
  let dataset = self.get_dataset(ctx).await?;
  let summary = dataset

src/adapter/http/src/api_error.rs (+3 -2)

@@ -34,8 +34,9 @@ use kamu::domain::*;
  ///
  /// A conversion between the domain error and [`ApiError`] has to exist. We on
  /// purpose avoid [From] and [Into] traits and using [`IntoApiError`] instead as
- /// we want this conversion to be explicit - it's too easy to put a questionmark
- /// operator on a fallible operation without thinking what it will actually do.
+ /// we want this conversion to be explicit - it's too easy to put a question
+ /// mark operator on a fallible operation without thinking what it will actually
+ /// do.
  ///
  /// Note that in between handlers different errors have different meaning, e.g.
  /// an absence of a dataset in one handler should lead to `404 Not Found`, while

src/app/cli/src/app.rs (+2 -2)

@@ -176,7 +176,7 @@ pub fn prepare_dependencies_graph_repository(
  .add_value(current_account_subject)
  .add::<kamu::domain::auth::AlwaysHappyDatasetActionAuthorizer>()
  .add::<kamu::DependencyGraphServiceInMemory>()
- // Don't add it's own initializer, leave optional dependency uninitialized
+ // Don't add its own initializer, leave optional dependency uninitialized
  .build();

  let dataset_repo = special_catalog_for_graph.get_one().unwrap();
@@ -288,7 +288,7 @@ pub fn configure_base_catalog(

  b.add::<accounts::AccountService>();

- // No Github login possible for single-tenant workspace
+ // No GitHub login possible for single-tenant workspace
  if multi_tenant_workspace {
  b.add::<kamu_adapter_oauth::OAuthGithub>();
  }

src/app/cli/src/cli_parser.rs (+5 -5)

@@ -805,7 +805,7 @@ pub fn cli() -> Command {
  r#"
  Use this command to share your new dataset or new data with others. All changes performed by this command are atomic and non-destructive. This command will analyze the state of the dataset at the repository and will only upload data and metadata that wasn't previously seen.

- Similarly to git, if someone else modified the dataset concurrently with you - your push will be rejected and you will have to resolve the conflict.
+ Similarly to git, if someone else modified the dataset concurrently with you - your push will be rejected, and you will have to resolve the conflict.

  **Examples:**

@@ -850,7 +850,7 @@ pub fn cli() -> Command {

  **Examples:**

- Renaming is often useful when you pull a remote dataset by URL and it gets auto-assigned not the most convenient name:
+ Renaming is often useful when you pull a remote dataset by URL, and it gets auto-assigned not the most convenient name:

  kamu pull ipfs://bafy...a0da
  kamu rename bafy...a0da my.dataset
@@ -1012,7 +1012,7 @@ pub fn cli() -> Command {
  ])
  .after_help(indoc::indoc!(
  r#"
- When you pull and push datasets from repositories kamu uses aliases to let you avoid specifying the full remote referente each time. Aliases are usually created the first time you do a push or pull and saved for later. If you have an unusual setup (e.g. pushing to multiple repositories) you can use this command to manage the aliases.
+ When you pull and push datasets from repositories kamu uses aliases to let you avoid specifying the full remote reference each time. Aliases are usually created the first time you do a push or pull and saved for later. If you have an unusual setup (e.g. pushing to multiple repositories) you can use this command to manage the aliases.

  **Examples:**

@@ -1108,7 +1108,7 @@ pub fn cli() -> Command {
  Arg::new("url")
  .long("url")
  .value_name("URL")
- .help("URL of a running JDBC server (e.g jdbc:hive2://example.com:10000)"),
+ .help("URL of a running JDBC server (e.g. jdbc:hive2://example.com:10000)"),
  Arg::new("command")
  .short('c')
  .long("command")
@@ -1291,7 +1291,7 @@ pub fn cli() -> Command {

  Soft compactions produce new files while leaving the old blocks intact. This allows for faster queries, while still preserving the accurate history of how dataset evolved over time.

- Hard compactions rewrite the history of the dataset as if data was originally written in big batches. They allow to shrink the history of a dataset to just a few blocks, reclaim the space used by old data files, but at the expense of history loss. Hard compactions will rewrite the metadata chain, changing block hashes. Therefore they will **break all downstream datasets** that depend on them.
+ Hard compactions rewrite the history of the dataset as if data was originally written in big batches. They allow to shrink the history of a dataset to just a few blocks, reclaim the space used by old data files, but at the expense of history loss. Hard compactions will rewrite the metadata chain, changing block hashes. Therefore, they will **break all downstream datasets** that depend on them.

  **Examples:**

src/app/cli/src/commands/log_command.rs (+3 -3)

@@ -27,7 +27,7 @@ pub struct LogCommand {
  dataset_repo: Arc<dyn DatasetRepository>,
  dataset_action_authorizer: Arc<dyn auth::DatasetActionAuthorizer>,
  dataset_ref: DatasetRef,
- outout_format: Option<String>,
+ output_format: Option<String>,
  filter: Option<String>,
  limit: usize,
  output_config: Arc<OutputConfig>,
@@ -47,7 +47,7 @@ impl LogCommand {
  dataset_repo,
  dataset_action_authorizer,
  dataset_ref,
- outout_format: outout_format.map(ToOwned::to_owned),
+ output_format: outout_format.map(ToOwned::to_owned),
  filter: filter.map(ToOwned::to_owned),
  limit,
  output_config,
@@ -88,7 +88,7 @@ impl Command for LogCommand {
  .await?;

  let mut renderer: Box<dyn MetadataRenderer> = match (
- self.outout_format.as_deref(),
+ self.output_format.as_deref(),
  self.output_config.is_tty && self.output_config.verbosity_level == 0,
  ) {
  (None, true) => Box::new(PagedAsciiRenderer::new(id_to_alias_lookup, self.limit)),

src/app/cli/src/commands/new_dataset_command.rs (+2 -2)

@@ -105,7 +105,7 @@ impl NewDatasetCommand {
  - date
  - city
  # Lets you manipulate names of the system columns to avoid conflicts
- # or use names better suited for yout data.
+ # or use names better suited for your data.
  # See: https://docs.kamu.dev/odf/reference/#setvocab
  - kind: SetVocab
  eventTimeColumn: date
@@ -149,7 +149,7 @@ impl NewDatasetCommand {
  population + 1 as population
  from `com.example.city-populations`
  # Lets you manipulate names of the system columns to avoid
- # conflicts or use names better suited for yout data.
+ # conflicts or use names better suited for your data.
  # See: https://docs.kamu.dev/odf/reference/#setvocab
  - kind: SetVocab
  eventTimeColumn: date

src/app/cli/src/commands/system_diagnose_command.rs (+5 -5)

@@ -43,7 +43,7 @@ pub struct SystemDiagnoseCommand {
  dataset_repo: Arc<dyn DatasetRepository>,
  verification_svc: Arc<dyn VerificationService>,
  container_runtime: Arc<ContainerRuntime>,
- is_in_workpace: bool,
+ is_in_workspace: bool,
  run_info_dir: PathBuf,
  }

@@ -52,14 +52,14 @@ impl SystemDiagnoseCommand {
  dataset_repo: Arc<dyn DatasetRepository>,
  verification_svc: Arc<dyn VerificationService>,
  container_runtime: Arc<ContainerRuntime>,
- is_in_workpace: bool,
+ is_in_workspace: bool,
  run_info_dir: PathBuf,
  ) -> Self {
  Self {
  dataset_repo,
  verification_svc,
  container_runtime,
- is_in_workpace,
+ is_in_workspace,
  run_info_dir,
  }
  }
@@ -92,7 +92,7 @@ impl Command for SystemDiagnoseCommand {
  }),
  ];
  // Add checks which required workspace initialization
- if self.is_in_workpace {
+ if self.is_in_workspace {
  diagnostic_checks.push(Box::new(CheckWorkspaceConsistent {
  dataset_repo: self.dataset_repo.clone(),
  verification_svc: self.verification_svc.clone(),
@@ -110,7 +110,7 @@ impl Command for SystemDiagnoseCommand {
  }
  }

- if !self.is_in_workpace {
+ if !self.is_in_workspace {
  writeln!(out, "{}", style("Directory is not kamu workspace").yellow())?;
  }
  Ok(())
