Verify spelling fixes pass check-spelling
jsoref authored Jun 10, 2024
2 parents 0ccc7a6 + eb7db70 commit 8acb7e4
Showing 23 changed files with 74 additions and 74 deletions.
24 changes: 12 additions & 12 deletions .github/workflows/main.yml
@@ -43,7 +43,7 @@ name: Flow-IPC pipeline
# - All of the above but with certain run-time sanitizers (as of this writing ASAN/LSAN, UBSAN, TSAN,
# and possibly clang-MSAN) enabled at build time as well. (RelWithDebInfo is a sufficient base build type
# for a sanitizer-enabled build; a subset of compilers -- as opposed to all of them -- is also sufficient
-# for pragamtic reasons.)
+# for pragmatic reasons.)
# - doc-and-release:
# - Summary: 1, generate documentation from the source code (using Doxygen et al) and make it conveniently
# available. 2, update GitHub Release and GitHub Pages web site automatically with any relevant info, namely
@@ -52,7 +52,7 @@ name: Flow-IPC pipeline
# *testing the ability to generate it*. It's a subtle difference, but it's important to note it, because
# if we wanted to test the ins and outs of doc generation in various environments then we could have a much more
# complex and longer pipeline -- and perhaps we should... but that's not the goal *here*.
-# - Output: Make it available as workflow artfiact for download via GitHub Actions UI.
+# - Output: Make it available as workflow artifact for download via GitHub Actions UI.
# Hence human can simply download it, whether it's for the `main` tip or a specific release version.
# - Side effect: (On pushes/merges to `main` branch *only*) Check-in the *generated docs* back into `main`
# itself, so that (1) they're available for local perusal by any person cloning/pulling `main` and (2) ditto
@@ -75,7 +75,7 @@ name: Flow-IPC pipeline
# with a side of official releases being accessible and comprehensively presented in customary ways.
# - TODO: It *could* be argued that those 2 goals are related but separate, and perhaps they should be 2 separate
# jobs. However, at least as of this writing, there's definite overlap between them, and combining the 2
-# makes pragamtic sense. It's worth revisiting periodically perhaps.
+# makes pragmatic sense. It's worth revisiting periodically perhaps.

on:
# Want to merge to development tip? Should probably pass these builds/tests and doc generation first (the latter
@@ -270,7 +270,7 @@ jobs:
- id: release
conan-profile-build-type: Release
conan-profile-jemalloc-build-type: Release
-# Leaving no-lto at default (false); full-on-optimized-no-debug is the quentessential LTO use case.
+# Leaving no-lto at default (false); full-on-optimized-no-debug is the quintessential LTO use case.
- id: relwithdebinfo
conan-profile-build-type: RelWithDebInfo
conan-profile-jemalloc-build-type: Release
@@ -379,7 +379,7 @@ jobs:

# We concentrate on clang sanitizers; they are newer/nicer; also MSAN is clang-only. So gcc ones excluded.
# Attention! Excluding some sanitizer job(s) (with these reasons):
-# - MSAN: MSAN protects against reads of ununitialized memory; it is clang-only (not gcc), unlike the other
+# - MSAN: MSAN protects against reads of uninitialized memory; it is clang-only (not gcc), unlike the other
# *SAN. Its mission overlaps at least partially with UBSAN's; for example for sure there were a couple of
# uninitialized reads in test code which UBSAN caught. Its current state -- if not excluded -- is as
# follows: 1, due to (as of this writing) building dependencies, including the capnp compiler binary used
@@ -476,7 +476,7 @@ jobs:
# test). The proper technique is: 1, which of the suppression contexts (see above) are relevant?
# (Safest is to specify all contexts; as you'll see just below, it's fine if there are no files in a given
# context. However it would make code tedious to specify that way everywhere; so it's fine to skip contexts where
-# we know that these days there are no suppresisons.) Let the contexts' dirs be $DIR_A, $DIR_B, .... Then:
+# we know that these days there are no suppressions.) Let the contexts' dirs be $DIR_A, $DIR_B, .... Then:
# 2, `{ cat $DIR_A/${{ env.san-suppress-cfg-in-file1 }} $DIR_A/${{ env.san-suppress-cfg-in-file2 }} \
# $DIR_B/${{ env.san-suppress-cfg-in-file1 }} $DIR_B/${{ env.san-suppress-cfg-in-file2 }} \
# ... \
@@ -635,7 +635,7 @@ jobs:
# hidden configure-script-generated binary abort => capnp binary build fails.
# - capnp binary fails MSAN at startup; hence capnp-compilation of our .capnp schemas fails.
# We have worked around all these, so the thing altogether works. It's just somewhat odd and entropy-ridden;
-# and might cause maintanability problems over time, as it has already in the past.
+# and might cause maintainability problems over time, as it has already in the past.
# The TODO is to be more judicious about it
# and only apply these things to the libs/executables we want it applied. It is probably not so simple;
# but worst-case it should be possible to use something like build-type-cflags-override to target our code;
@@ -689,7 +689,7 @@ jobs:
# As it stands, whatever matrix compiler/build-type is chosen applies not just to our code (correct)
# and 3rd party libraries we link like lib{boost_*|capnp|kj|jemalloc} (semi-optional but good) but also
# unfortunately any items built from source during "Install Flow-IPC dependencies" step that we then
-# use during during the build step for our own code subsequently. At a minimum this will slow down
+# use during the build step for our own code subsequently. At a minimum this will slow down
# such programs. (For the time being we accept this as not-so-bad; to target this config at some things
# but not others is hard/a ticket.) In particular, though, the capnp compiler binary is built this way;
# and our "Build targets" step uses it to build key things (namely convert .capnp schemas into .c++
@@ -720,7 +720,7 @@ jobs:
# sanitizers' suppressions. (Note, though, that MSAN does not have a run-time suppression
# system; only these ignore-lists. The others do also have ignore-lists though.
# The format is totally different between the 2 types of suppression.)
-# Our MSAN support is budding compared to UBSAN/ASAN/TSAN; so just specify the one ingore-list file
+# Our MSAN support is budding compared to UBSAN/ASAN/TSAN; so just specify the one ignore-list file
# we have now. TODO: If/when MSAN support gets filled out like the others', then use a context system
# a-la env.setup-tests-env.
cat ${{ github.workspace }}/flow/src/sanitize/msan/ignore_list_${{ matrix.compiler.name }}.cfg \
@@ -793,7 +793,7 @@ jobs:
# just to our code or 3rd party libraries but also any other stuff built
# (due to setting C[XX]FLAGS in Conan profile). Targeting just the exact
# stuff we want with those is hard and a separate project/ticket.
-# In the meantime -fno-sanitize-recover causes completely unrealted program
+# In the meantime -fno-sanitize-recover causes completely unrelated program
# halts during the very-early step, when building dependencies including
# capnp; some autotools configure.sh fails crazily, and nothing can work
# from that point on due to dependencies-install step failing. So at this
@@ -804,7 +804,7 @@ jobs:

# Here, as in all other tests below, we assemble a suppressions file in case this is a sanitized
# run; and we follow the procedure explained near setup-tests-env definition. To reiterate: to avoid
-# tedium, but at the cost of mantainability of this file (meaning if a suppressions context is added then
+# tedium, but at the cost of maintainability of this file (meaning if a suppressions context is added then
# a few lines would need to be added here), we only list those contexts where *any* sanitizer has
# *any* suppression; otherwise we skip it for brevity. `find . -name 'suppressions*.cfg` is pretty useful
# to determine their presence in addition to whether the test itself has its specific suppressions of any kind.
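
A minimal standalone sketch of the suppression-assembly technique described in the comments above. The directory names, file names, and the choice of LSAN here are illustrative assumptions only, not the workflow's actual `env.*` values or paths:

~~~bash
#!/bin/bash
# Hypothetical sketch: concatenate whatever per-context suppression files exist
# into one combined file, then point the sanitizer's run-time options at it.
# Context directories and file names below are made up for illustration.
contexts="flow/src/sanitize/asan test/suite/sanitize/asan"
out=/tmp/san-suppressions.cfg

: > "$out"                                # Start with an empty combined file.
for dir in $contexts; do
  for f in "$dir"/suppressions.cfg "$dir"/suppressions-test.cfg; do
    [ -f "$f" ] && cat "$f" >> "$out"     # A context with no files is fine: nothing is added.
  done
done

# Hand the combined file to the sanitizer at run time (LSAN shown as an example;
# UBSAN/TSAN use their own *_OPTIONS variables with the same suppressions= syntax).
export LSAN_OPTIONS="suppressions=$out"
~~~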
@@ -1087,7 +1087,7 @@ jobs:
# The following [Exercise mode] tests follow the instructions in bin/transport_test/README.txt.
# Note that the creation of ~/bin/ex_..._run and placement of executables there, plus
# /tmp/var/run for run-time files (PID files and similar), is a necessary consequence of
-# the ipc::session safety model for estabshing IPC conversations (sessions).
+# the ipc::session safety model for establishing IPC conversations (sessions).

- name: Prepare IPC-session safety-friendly run-time environment for [transport_test - Exercise mode]
if: |
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -92,7 +92,7 @@ The master branch in each repo is called `main`. Thus any contribution will inv
We have some automated CI/CD pipelines. Namely `flow`, being special as a self-contained project, has the
pipeline steps in `flow/.github/workflows/main.yml` -- this is Flow's dedicated CI/CD pipeline; and `ipc`,
covering Flow-IPC as an overall monolithic project, similarly has Flow-IPC's CI/CD pipeline steps in
-`.github/worksflows/main.yml`. Therefore:
+`.github/workflows/main.yml`. Therefore:
- Certain automated build/test/doc-generation runs occur when:
- creating a PR against `flow` repo;
- updating that PR;
@@ -152,7 +152,7 @@ and checked-in using the `ipc/` pipeline. (Search for `git push` in the two `ma
We have already mentioned this above.

The above steps for *locally* generating the documentation are provided only
-so you can locally test soure code changes' effects on the resulting docs.
+so you can locally test source code changes' effects on the resulting docs.
Locally generating and verifying docs, after changing source code, is a good idea.
However it's also possible (and for some people/situations preferable) to skip it.
The CI/CD pipeline will mandatorily generate the docs, when a PR is created or updated, as we explained above.
4 changes: 2 additions & 2 deletions INSTALL.md
@@ -99,7 +99,7 @@ To build Flow-IPC (including Flow):
but the basics are as follows. CMake is very flexible and powerful; we've tried not to mess with that principle
in our build script(s).
1. Choose a tool. `ccmake` will allow you to interactively configure aspects of the build system, including
-showing docs for various knobs our CMakeLists.txt (and friends) have made availale. `cmake` will do so without
+showing docs for various knobs our CMakeLists.txt (and friends) have made available. `cmake` will do so without
asking questions; you'll need to provide all required inputs on the command line. Let's assume `cmake` below,
but you can use whichever makes sense for you.
2. Choose a working *build directory*, somewhere outside the present `ipc` distribution. Let's call this
@@ -160,7 +160,7 @@ The documentation consists of:
- (minor) comments, about the build, in `CMakeLists.txt`, `*.cmake`, `conanfile.py` (in various directories
including this one where the top-level `CMakeLists.txt` lives);
- (major/main) documentation directly in the comments throughout the source code; these have been,
-and can be again, conviently generated using certain tools (namely Doxygen and friends), via the
+and can be again, conveniently generated using certain tools (namely Doxygen and friends), via the
above-shown `make ipc_doc_public ipc_doc_full flow_doc_public flow_doc_full` command.
- The generated documentation consists of:
- (Flow-IPC proper) a clickable guided Manual + Reference.
2 changes: 1 addition & 1 deletion README.md
@@ -136,7 +136,7 @@ Observations (tested using decent server-grade hardware):
- Also significantly more RAM might be used at points.
- For very small messages the two techniques perform similarly: ~100 microseconds.

-The code for this, when using Flow-IPC, is straighforward. Here's how it might look on the client side:
+The code for this, when using Flow-IPC, is straightforward. Here's how it might look on the client side:

~~~cpp
// Specify that we *do* want zero-copy behavior, by merely choosing our backing-session type.
10 changes: 5 additions & 5 deletions src/doc/manual/b-api_overview.dox.txt
@@ -103,7 +103,7 @@ Having obtained a `Session`, the application can open transport channels (and, i
>
> // NOTE: Upon opening session, capabilities of `session` on either side are **exactly the same**.
> // Client/server status matters only when establishing the IPC conversation;
-> // the conversation itself once established is arbitrariy and up to you fully.
+> // the conversation itself once established is arbitrary and up to you fully.
> ~~~
>
> Open channel(s) in a session:
@@ -311,12 +311,12 @@ All transport APIs at this layer, as well the structured layer (see below), have
> chan.send_blob(...); // No problem.
> ~~~

-All those subleties about different types of pipes in a `Channel` bundle completely disappear when one deals with structured-data `struc::Channel`s. They are a higher-layer abstraction and will leverage whatever `transport::Channel` it adapts. In addition to handling capnp-encoded structured data and SHM-backed zero-copy, it also provides basics like request/response, request-method multiplexing, and a bit more. So let's get into that.
+All those subtleties about different types of pipes in a `Channel` bundle completely disappear when one deals with structured-data `struc::Channel`s. They are a higher-layer abstraction and will leverage whatever `transport::Channel` it adapts. In addition to handling capnp-encoded structured data and SHM-backed zero-copy, it also provides basics like request/response, request-method multiplexing, and a bit more. So let's get into that.

@anchor api_overview_transport_struc
Transport (structured)
----------------------
-While a `Channel` transports blobs and/or native handles, it is likely the Flow-IPC user will want to be able to transmit schema-based structured data, gaining the benefits of that approach including arbitrary data-structuree complexity and forward/backward-compatibility. [capnp (Cap'n Proto)](https://capnproto.org) is the best-in-class third-party framework for schema-based structured data; ipc::transport's structured layer works by integrating with capnp.
+While a `Channel` transports blobs and/or native handles, it is likely the Flow-IPC user will want to be able to transmit schema-based structured data, gaining the benefits of that approach including arbitrary data-structure complexity and forward/backward-compatibility. [capnp (Cap'n Proto)](https://capnproto.org) is the best-in-class third-party framework for schema-based structured data; ipc::transport's structured layer works by integrating with capnp.

To deal with structured data instead of mere blobs (though a schema-based structure can, of course, itself store blobs such as images), one simply constructs an ipc::transport::struc::Channel, feeding it an `std::move()`d already-opened @link ipc::transport::Channel Channel@endlink. This is called **upgrading** an unstructured `Channel` to a `struc::Channel`. A key template parameter to `struc::Channel` is a capnp-generated root schema class of the user's choice. This declares, at compile-time, what data structures (messages) one can transmit via that `struc::Channel` (and an identically-typed counterpart `struc::Channel` in the opposing process).

@@ -523,7 +523,7 @@ At no point do you have to worry about naming a SHM pool, removing it from the f
> Session::Structured_channel<...> chan(...,
> Channel_base::S_SERIALIZE_VIA_SESSION_SHM, ...); // <-- CHANGED LINE.
> // --- CHANGE TO --v
-> // Now the the message can live *past* the session: until the Session_server is destroyed, meaning
+> // Now the message can live *past* the session: until the Session_server is destroyed, meaning
> // your process accepting incoming sessions exits. For example, suppose your session-server accepts
> // (sessions) from many session-client processes of one application over its lifetime; session-client A sends
> // a PutCache message containing a large file's contents to session-server which memorizes it (to serve later);
@@ -611,7 +611,7 @@ Further capabilities are outside our scope here; but the main point is: At a min
> }
> ~~~
>
-> Example of trasmitting a SHM-backed native data structure to another process follows. We transmit a handle through a capnp structured message here, but it can be done using any IPC mechanism whatsoever; even (e.g.) a file.
+> Example of transmitting a SHM-backed native data structure to another process follows. We transmit a handle through a capnp structured message here, but it can be done using any IPC mechanism whatsoever; even (e.g.) a file.
>
> Schema which includes a native-object-in-SHM field:
> ~~~{.capnp}
2 changes: 1 addition & 1 deletion src/doc/manual/c-setup.dox.txt
@@ -71,7 +71,7 @@ The simplest thing to do (though, again, we do not recommend it, as it's giving
The next simplest thing, and likely suitable at least for prototyping situations, is to output Flow-IPC logs to stdout and/or stderr. To do so construct a `flow::log::Simple_ostream_logger` (near the top of your application most likely), passing in the desired verbosity `enum` setting to its constructor's `Config` arg; plus `std::cout` and/or `std::cerr`. Then pass-in a pointer to this `Logger` throughout your application, when Flow-IPC requires a `Logger*` argument. Logs will go to stdout and/or stderr. (However beware that some log interleaving may occur if other parts of your application also log to the same stream concurrently -- unless they, too, use the same `Simple_ostream_logger` object for this purpose.)

If your application is not based on `flow::log` (which of course is very much a possibility) then you will, longer-term, want to instead hook up your log system of choice to Flow-IPC. Don't worry: this is not hard. You need to implement the `flow::log::Logger` interface which consists of basically two parts:
-- `bool should_log()` which determines whethere a given message (based on its severity `enum` and, possibly, `Component` input) should in fact be output (for example your `should_log()` might translate the input `flow::log::Sev` to your own verbosity setting and output `true` or `false` accordingly).
+- `bool should_log()` which determines whether a given message (based on its severity `enum` and, possibly, `Component` input) should in fact be output (for example your `should_log()` might translate the input `flow::log::Sev` to your own verbosity setting and output `true` or `false` accordingly).
- `void do_log()` which takes a pointer to the message string and metadata info (severity, file/line info among a few other things) and outputs some subset of this information as it sees fit. For example it might forward these to your own logging API -- perhaps prefixing the message with some indication it's coming from Flow-IPC.
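
For illustration, a minimal sketch of the two-method adapter shape just described follows. It deliberately does not inherit from the real `flow::log::Logger` (whose virtuals carry additional metadata arguments; consult the Flow reference for the exact signatures), and every name in it is hypothetical:

~~~cpp
// Hypothetical sketch of the should_log()/do_log() split described above.
// NOT the literal flow::log::Logger interface: the real virtuals take extra
// metadata (component, file/line, etc.); see the Flow reference for those.
#include <iostream>
#include <string>

enum class Sev { S_WARNING, S_INFO, S_TRACE }; // Stand-in for flow::log::Sev (most to least severe).

class My_log_adapter // Hypothetical bridge to "your" in-house log system.
{
public:
  explicit My_log_adapter(Sev verbosity) : m_verbosity(verbosity) {}

  // Cheap filter: emit only if the message is at least as severe as the configured ceiling.
  bool should_log(Sev sev) const { return sev <= m_verbosity; }

  // Forward the already-formatted message to the in-house sink (stderr here),
  // prefixed so it is clearly attributable to Flow-IPC.
  void do_log(Sev sev, const std::string& msg) const
  {
    std::cerr << "[Flow-IPC sev=" << static_cast<int>(sev) << "] " << msg << '\n';
  }

private:
  Sev m_verbosity;
};

int main()
{
  My_log_adapter log(Sev::S_INFO);
  if (log.should_log(Sev::S_WARNING)) { log.do_log(Sev::S_WARNING, "something noteworthy happened"); }
}
~~~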

Lastly, if your application *is* based on `flow::log` -- or you would consider making it be that way -- then we'd recommend the use of `flow::log::Async_file_logger`. This is a heavy-duty file logger: it performs log writing asynchronously in a separate thread and is rotation-friendly. (It even will, optionally, capture SIGHUP itself and reopen the file, so that your rotate daemon might rename the now-completed log file, moving it out of the way and archiving it or what-not.) `Async_file_logger` is meant for heavy-duty logging (and as of this writing may be gaining more features such as gzip-on-the-fly).