diff --git a/README.md b/README.md index f70633a1c..4f077cfff 100644 --- a/README.md +++ b/README.md @@ -29,9 +29,9 @@ solution before trying out the samples to ensure that all needed packages are in ## Releasing -Releasing of the NuGet package is done by GitHub actions CI from master branch when a new version is pushed. +Releasing of the NuGet package is done by GitHub actions CI from the main branch when a new version is pushed. -Releasing of docs is done by GitHub actions CI on each push to master branch. +Releasing of docs is done by GitHub actions CI on each push to the main branch. ## Support and community diff --git a/docs/library/CsvFile.fsx b/docs/library/CsvFile.fsx index c483148cb..69e3d5669 100644 --- a/docs/library/CsvFile.fsx +++ b/docs/library/CsvFile.fsx @@ -68,7 +68,7 @@ but please note that this will increase memory usage and should not be used in l ## Using CSV extensions -Now we look at a number of extensions that become available after +Now, we look at a number of extensions that become available after opening the `cref:T:FSharp.Data.CsvExtensionsModule` namespace. Once opened, we can write: * `row?column` uses the dynamic operator to obtain the column value named `column`; diff --git a/docs/library/CsvProvider.fsx b/docs/library/CsvProvider.fsx index 79d14ee99..47b1873ab 100644 --- a/docs/library/CsvProvider.fsx +++ b/docs/library/CsvProvider.fsx @@ -47,7 +47,7 @@ present on the columns of that sample. The column names are obtained from the fi -The type provider is located in the `FSharp.Data.dll` assembly. Assuming the package is referenged +The type provider is located in the `FSharp.Data.dll` assembly. Assuming the package is referenced we can access its namespace as follows: *) @@ -127,8 +127,8 @@ looks as follows: As you can see, the second and third columns are annotated with `metre` and `s`, respectively. To use units of measure in our code, we need to open the namespace with -standard unit names. Then we pass the `SmallTest.csv` file to the type provider as -a static argument. Also note that in this case we're using the same data at runtime, +standard unit names. Then, we pass the `SmallTest.csv` file to the type provider as +a static argument. Also, note that in this case, we're using the same data at runtime, so we use the `GetSample` method instead of calling `Load` and passing the same parameter again. *) @@ -173,11 +173,11 @@ meters per second against a value in kilometres per hour. ## Custom separators and tab-separated files -By default, the CSV type provider uses comma (`,`) as a separator. However, CSV -files sometime use a different separator character than `,`. In some European +By default, the CSV type provider uses a comma (`,`) as a separator. However, CSV +files sometimes use a different separator character than `,`. In some European countries, `,` is already used as the numeric decimal separator, so a semicolon (`;`) is used instead to separate CSV columns. The `CsvProvider` has an optional `Separators` static parameter -where you can specify what to use as separator. This means that you can consume +where you can specify what to use as a separator. This means that you can consume any textual tabular format. Here is an example using `;` as a separator: *) @@ -199,7 +199,7 @@ samples for the Statistical Computing language R. A short description of the dat If you are parsing a tab-separated file that uses `\t` as the separator, you can also specify the separator explicitly. 
However, if you're using an url or file that has the `.tsv` extension, the type provider will use `\t` by default. In the following example, -we also set `IgnoreErrors` static parameter to `true` so that lines with incorrect number of elements +we also set `IgnoreErrors` static parameter to `true` so that lines with an incorrect number of elements are automatically skipped (the sample file ([`data/MortalityNY.csv`](../data/MortalityNY.tsv)) contains additional unstructured data at the end): *) @@ -225,13 +225,13 @@ for r in mortalityNy.Rows do Finally, note that it is also possible to specify multiple different separators for the `CsvProvider`. This might be useful if a file is irregular and contains -rows separated by either semicolon or a colon. You can use: +rows separated by either a semicolon or a colon. You can use: `CsvProvider<"../data/AirQuality.csv", Separators=";,", ResolutionFolder=ResolutionFolder>`. ## Missing values It is quite common in statistical datasets for some values to be missing. If -you open the [`data/AirQuality.csv`](../data/AirQuality.csv) file you will see +you open the [`data/AirQuality.csv`](../data/AirQuality.csv) file, you will see that some values for the ozone observations are marked `#N/A`. Such values are parsed as float and will be marked with `Double.NaN` in F#. The values `NaN`, `NA`, `N/A`, `#N/A`, `:`, `-`, `TBA`, and `TBD` @@ -278,8 +278,8 @@ will be set to either `int`, `int64`, `decimal`, or `float`, in that order of pr If a value is missing in any row, by default the CSV type provider will infer a nullable (for `int` and `int64`) or an optional (for `bool`, `DateTime` and `Guid`). When a `decimal` would be inferred but there are missing values, we will infer a `float` instead, and use `Double.NaN` to represent those missing values. The `string` type is already inherently nullable, -so by default we won't generate a `string option`. If you prefer to use optionals in all cases, you can set the static parameter -`PreferOptionals` to `true`. In that case you'll never get an empty string or a `Double.NaN` and will always get a `None` instead. +so by default, we won't generate a `string option`. If you prefer to use optionals in all cases, you can set the static parameter +`PreferOptionals` to `true`. In that case, you'll never get an empty string or a `Double.NaN` and will always get a `None` instead. If you have other preferences, e.g. if you want a column to be a `float` instead of a `decimal`, you can override the default behaviour by specifying the types in the header column between braces, similar to what can be done to @@ -347,7 +347,7 @@ You don't need to override all the columns, you can skip the ones to leave as de For example, in the titanic training dataset from Kaggle ([`data/Titanic.csv`](../data/Titanic.csv)), if you want to rename the 3rd column (the `PClass` column) to `Passenger Class` and override the 6th column (the `Fare` column) to be a `float` instead of a `decimal`, you can define only that, and leave -the other columns blank in the schema (you also don't need to add all the trailing commas). +the other columns as blank in the schema (you also don't need to add all the trailing commas). *) type Titanic1 = @@ -383,7 +383,7 @@ You can even mix and match the two syntaxes like this `Schema="int64,DidSurvive, In addition to reading, `CsvProvider` also has support for transforming the row collection of CSV files. The operations available are `Filter`, `Take`, `TakeWhile`, `Skip`, `SkipWhile`, and `Truncate`. 
All these operations
-preserve the schema, so after transforming you can save the results by using one of the overloads of
+preserve the schema, so after transforming, you can save the results by using one of the overloads of
the `Save` method. You can also use the `SaveToString()` to get the output directly as a string.
*)

diff --git a/docs/library/HtmlCssSelectors.fsx b/docs/library/HtmlCssSelectors.fsx
index 7a2893d55..ad1ef5f9e 100644
--- a/docs/library/HtmlCssSelectors.fsx
+++ b/docs/library/HtmlCssSelectors.fsx
@@ -31,7 +31,7 @@ This article demonstrates how to use HTML CSS selectors to browse the DOM of par
We use the `cref:T:FSharp.Data.HtmlDocument` type and associated `cref:T:FSharp.Data.HtmlDocumentModule` module and `cref:T:FSharp.Data.HtmlDocumentExtensions` extensions.
-Usage of CSS selectors is a very natural way to parse HTML when we come from Web developments.
+Using CSS selectors is a very natural way to parse HTML when coming from web development.
The HTML CSS selectors are based on the [JQuery selectors](https://api.jquery.com/category/selectors/).
To use CSS selectors, reference the FSharp.Data package. You then need to open `FSharp.Data` namespace, which
automatically exposes extension methods that implement the CSS selectors.
@@ -50,7 +50,7 @@ let doc = HtmlDocument.Load(googleUrl)
(*** include-fsi-merged-output ***)
(**
To make sure we extract search results only, we will parse links in the `
<div>` with id `search`.
-Then we can , for example, use the direct descendants selector to select another `<div>
` with the
+Then we can, for example, use the direct descendants selector to select another `<div>
` with the
id `ires`. The CSS selector to do so is `div#search > div#ires`:
*)
let links =
@@ -70,7 +70,7 @@ let links =
The rest of the selector (written as `li.g > div.s`) skips the first 4 sub-results targeting GitHub
pages, so we only extract proper links.
-Now we might want the pages titles associated with their URLs. To do this, we can use the `List.zip` function:
+Now, we might want the page titles associated with their URLs. To do this, we can use the `List.zip` function:
*)

let searchResults =
@@ -85,7 +85,7 @@ let searchResults =
## Practice 2: Search F# books on Google Books

We will parse links of the Google Books web site, searching for `F#`. After downloading the document,
-we simply ensure to match good links with their CSS's styles and DOM's hierachy. In case of Google Books,
+we simply make sure to match the right links by their CSS styles and DOM hierarchy. In the case of Google Books,
we need to look for `<div>
` with `class` set to `g`, then for `<h3>
` with CSS class `r` and then for all `` elements: *) let fsys = "https://www.google.com/search?tbm=bks&q=F%23" @@ -107,7 +107,7 @@ You can also refer to the table below for a complete list of supported selectors ### Attribute Contains Prefix Selector -Finds all links with an english hreflang attribute. +Finds all links with an English hreflang attribute. *) let englishDoc = HtmlDocument.Parse( diff --git a/docs/library/HtmlParser.fsx b/docs/library/HtmlParser.fsx index 41b75fe9b..2138d7f7d 100644 --- a/docs/library/HtmlParser.fsx +++ b/docs/library/HtmlParser.fsx @@ -38,7 +38,7 @@ independently of the actual HTML Type provider. open FSharp.Data (** -The following example uses Google to search for `FSharp.Data` then parses the first set of +The following example uses Google to search for `FSharp.Data` and then parses the first set of search results from the page, extracting the URL and Title of the link. We use the `cref:T:FSharp.Data.HtmlDocument` type. @@ -52,7 +52,7 @@ let results = HtmlDocument.Load("http://www.google.co.uk/search?q=FSharp.Data") (** Now that we have a loaded HTML document we can begin to extract data from it. -Firstly we want to extract all of the anchor tags `a` out of the document, then +Firstly, we want to extract all of the anchor tags `a` out of the document, then inspect the links to see if it has a `href` attribute, using `cref:M:FSharp.Data.HtmlDocumentExtensions.Descendants`. If it does, extract the value, which in this case is the url that the search result is pointing to, and additionally the `InnerText` of the anchor tag to provide the name of the web page for the search result @@ -70,7 +70,7 @@ let links = (** Now that we have extracted our search results you will notice that there are lots of -other links to various Google services and cached/similar results. Ideally we would +other links to various Google services and cached/similar results. Ideally, we would like to filter these results as we are probably not interested in them. At this point we simply have a sequence of Tuples, so F# makes this trivial using `Seq.filter` and `Seq.map`. diff --git a/docs/library/HtmlProvider.fsx b/docs/library/HtmlProvider.fsx index 7aac19812..032242f8f 100644 --- a/docs/library/HtmlProvider.fsx +++ b/docs/library/HtmlProvider.fsx @@ -63,7 +63,7 @@ The generated type provides a type space of tables that it has managed to parse Each type's name is derived from either the id, title, name, summary or caption attributes/tags provided. If none of these entities exist then the table will simply be named `Tablexx` where xx is the position in the HTML document if all of the tables were flattened out into a list. The `Load` method allows reading the data from a file or web resource. We could also have used a web URL instead of a local file in the sample parameter of the type provider. -The following sample calls the `Load` method with an URL that points to a live version of the same page on wikipedia. +The following sample calls the `Load` method with an URL that points to a live version of the same page on Wikipedia. *) // Download the table for the 2017 F1 calendar from Wikipedia let f1Calendar = F1_2017.Load(F1_2017_URL).Tables.Calendar @@ -95,7 +95,7 @@ be parsed as dates) while other columns are inferred as the correct type where p ### Parsing Nuget package stats -This small sample shows how the HTML Type Provider can be used to scrape data from a website. In this example we analyze the download counts of the FSharp.Data package on NuGet. 
+This small sample shows how the HTML Type Provider can be used to scrape data from a website. In this example, we analyze the download counts of the FSharp.Data package on NuGet. Note that we're using the live URL as the sample, so we can just use the default constructor as the runtime data will be the same as the compile time data. *) @@ -107,7 +107,7 @@ type NugetStats = HtmlProvider<"https://www.nuget.org/packages/FSharp.Data"> // load the live package stats for FSharp.Data let rawStats = NugetStats().Tables.``Version History of FSharp.Data`` -// helper function to analyze version numbers from nuget +// helper function to analyze version numbers from Nuget let getMinorVersion (v: string) = System .Text @@ -120,7 +120,7 @@ let getMinorVersion (v: string) = ) .Value -// group by minor version and calculate download count +// group by minor version and calculate the download count let stats = rawStats.Rows |> Seq.groupBy (fun r -> getMinorVersion r.Version) diff --git a/docs/library/Http.fsx b/docs/library/Http.fsx index a69fecd1a..a84c09469 100644 --- a/docs/library/Http.fsx +++ b/docs/library/Http.fsx @@ -126,7 +126,7 @@ Http.RequestString( (** ## Getting extra information -Note that in the previous snippet, if you don't specify a valid API key, you'll get a (401) Unathorized error, +Note that in the previous snippet, if you don't specify a valid API key, you'll get a (401) Unauthorized error, and that will throw an exception. Unlike when using `WebRequest` directly, the exception message will still include the response content, so it's easier to debug in F# interactive when the server returns extra info. diff --git a/docs/library/JsonProvider.fsx b/docs/library/JsonProvider.fsx index de9d5acd7..03301a21a 100644 --- a/docs/library/JsonProvider.fsx +++ b/docs/library/JsonProvider.fsx @@ -142,7 +142,7 @@ for item in People.GetSamples() do (** The inferred type for `items` is a collection of (anonymous) JSON entities - each entity -has properties `Name` and `Age`. As `Age` is not available for all records in the sample +has properties `Name` and `Age`. As `Age` is unavailable for all records in the sample data set, it is inferred as `option`. The above sample uses `Option.iter` to print the value only when it is available. @@ -171,11 +171,11 @@ between the two options. This is similar to the handling of heterogeneous arrays Note that we have a `GetSamples` method because the sample is a JSON list. If it was a JSON object, we would have a `GetSample` method instead. -#### More complex object type on root level +#### More complex object type on the root level If you want the root type to be an object type, not an array, but -you need more samples at root level, you can use the `SampleIsList` parameter. -Applied to the previous example this would be: +you need more samples at the root level, you can use the `SampleIsList` parameter. +Applied to the previous example, this would be: *) @@ -226,7 +226,7 @@ In the previous example, `Code` is inferred as a `float`, even though it looks more like it should be a `string`. (`4E5` is interpreted as an exponential float notation instead of a string) -Now let's enable inline schemas: +Now, let's enable inline schemas: *) open FSharp.Data.Runtime.StructuralInference @@ -274,7 +274,7 @@ unit if the sample contains other values... ## Loading WorldBank data -Now let's use the type provider to process some real data. We use a data set returned by +Now, let's use the type provider to process some real data. 
We use a data set returned by [the WorldBank](https://data.worldbank.org), which has (roughly) the following structure: [lang=js] @@ -286,7 +286,7 @@ Now let's use the type provider to process some real data. We use a data set ret "country": {"id":"CZ","value":"Czech Republic"}, "value":"16.6567773464055","decimal":"1","date":"2010"} ] ] -The response to a request contains an array with two items. The first item is a record +The response to a request contains an array of two items. The first item is a record with general information about the response (page, total pages, etc.) and the second item is another array which contains the actual data points. For every data point, we get some information and the actual `value`. Note that the `value` is passed as a string @@ -339,7 +339,7 @@ it to print the result only when the data point is available. ## Parsing Twitter stream -We now look on how to parse tweets returned by the [Twitter API](http://dev.twitter.com/). +We now look at how to parse tweets returned by the [Twitter API](http://dev.twitter.com/). Tweets are quite heterogeneous, so we infer the structure from a _list_ of inputs rather than from just a single input. To do that, we use the file [`data/TwitterStream.json`](../data/TwitterStream.json) (containing a list of tweets) and pass an optional parameter `SampleIsList=true` which tells the @@ -437,7 +437,7 @@ newIssue.JsonValue.Request "https://api.github.com/repos/fsharp/FSharp.Data/issu You can use the types created by JSON type provider in a public API of a library that you are building, but there is one important thing to keep in mind - when the user references your library, the type -provider will be loaded and the types will be generated at that time (the JSON provider is not +provider will be loaded, and the types will be generated at that time (the JSON provider is not currently a _generative_ type provider). This means that the type provider will need to be able to access the sample JSON. This works fine when the sample is specified inline, but it won't work when the sample is specified as a local file (unless you distribute the samples with your library). diff --git a/docs/library/JsonValue.fsx b/docs/library/JsonValue.fsx index 0e81a833c..c904110a7 100644 --- a/docs/library/JsonValue.fsx +++ b/docs/library/JsonValue.fsx @@ -68,7 +68,7 @@ of extensions that become available after opening the `cref:T:FSharp.Data.JsonEx module. Once opened, we can write: * `value.AsBoolean()` returns the value as boolean if it is either `true` or `false`. - * `value.AsInteger()` returns the value as integer if it is numeric and can be + * `value.AsInteger()` returns the value as an integer if it is numeric and can be converted to an integer; `value.AsInteger64()`, `value.AsDecimal()` and `value.AsFloat()` behave similarly. * `value.AsString()` returns the value as a string. diff --git a/docs/library/WorldBank.fsx b/docs/library/WorldBank.fsx index 225c15d7f..66ea649ed 100644 --- a/docs/library/WorldBank.fsx +++ b/docs/library/WorldBank.fsx @@ -85,7 +85,7 @@ WorldBank.GetDataContext() (** The above snippet specified "World Development Indicators" as the name of the data -source (a collection of commonly available indicators) and it set the optional argument +source (a collection of commonly available indicators) and it sets the optional argument `Asynchronous` to `true`. 
As a result, properties such as `Gross capital formation (% of GDP)` will now have a type `Async<(int * int)[]>` meaning that they represent an asynchronous computation that can be started and will eventually diff --git a/docs/library/XmlProvider.fsx b/docs/library/XmlProvider.fsx index cc4daac1c..96ff7e1a9 100644 --- a/docs/library/XmlProvider.fsx +++ b/docs/library/XmlProvider.fsx @@ -33,7 +33,7 @@ in a statically typed way. We first look at how the structure is inferred and th demonstrate the provider by parsing an RSS feed. The XML Type Provider provides statically typed access to XML documents. -It takes a sample document as an input (or document containing a root XML node with +It takes a sample document as an input (or a document containing a root XML node with multiple child nodes that are used as samples). The generated type can then be used to read files with the same structure @@ -215,8 +215,8 @@ unit if the sample contains other values... (** ## Processing philosophers -In this section we look at an example that demonstrates how the type provider works -on a simple document that lists authors that write about a specific topic. The +In this section, we look at an example that demonstrates how the type provider works +on a simple document that lists authors who write about a specific topic. The sample document [`data/Writers.xml`](../data/Writers.xml) looks as follows: [lang=xml] @@ -240,7 +240,7 @@ let authors = (** When initializing the `XmlProvider`, we can pass it a file name or a web URL. -The `Load` and `AsyncLoad` methods allows reading the data from a file or from a web resource. The +The `Load` and `AsyncLoad` methods allow reading the data from a file or from a web resource. The `Parse` method takes the data as a string, so we can now print the information as follows: *) @@ -260,7 +260,7 @@ for author in topic.Authors do (*** include-fsi-merged-output ***) (** -The value `topic` has a property `Topic` (of type `string`) which returns the value +The value `topic` has a property `Topic` (of type `string`), which returns the value of the attribute with the same name. It also has a property `Authors` that returns an array with all the authors. The `Born` property is missing for some authors, so it becomes `option` and we need to print it using `Option.iter`. @@ -285,7 +285,7 @@ Consider for example, the following sample (a simplified version of
      </div>
    </div>

-Here, a `<div>
` element can contain other `<div>
` elements and it is quite clear that
+Here, a `<div>
` element can contain other `<div>
` elements, and it is quite clear that
they should all have the same type - we want to be able to write a recursive function that processes
`<div>
` elements. To make this possible, you need to set an optional parameter
`Global` to `true`:
*)

let html = Html.GetSample()

(**
When the `Global` parameter is `true`, the type provider _unifies_ all elements
of the same name. This means that all `<div>
` elements have the same type (with a union
-of all attributes and all possible children nodes that appear in the sample document).
+of all attributes and all possible child nodes that appear in the sample document).
The type is located under a type `Html`, so we can write a `printDiv` function
that takes `Html.Div` and acts as follows:
@@ -319,17 +319,17 @@ printDiv html
(**
The function first prints all text included as `<span>` (the element never has any
-attributes in our sample, so it is inferred as `string`), then it recursively prints
+attributes in our sample, so it is inferred as `string`), and then it recursively prints
the content of all `<div>
` elements. If the element does not contain nested elements, then we print the `Value` (inner text). ## Loading Directly from a File or URL -In many cases we might want to define schema using a local sample file, but then directly +In many cases, we might want to define schema using a local sample file, but then directly load the data from disk or from a URL either synchronously (with `Load`) or asynchronously (with `AsyncLoad`). -For this example I am using the US Census data set from `https://api.census.gov/data.xml`, a sample of +For this example, I am using the US Census data set from `https://api.census.gov/data.xml`, a sample of which I have used here for `../data/Census.xml`. This sample is greatly reduced from the live data, so that it contains only the elements and attributes relevant to us: @@ -370,7 +370,7 @@ let apiLinks = (** This US Census data is an interesting dataset with this top level API returning hundreds of other -datasets each with their own API. Here we use the Census data to get a list of titles and URLs for +datasets each with their own API. Here, we use the Census data to get a list of titles and URLs for the lower level APIs. *) @@ -403,7 +403,7 @@ let cacheJanitor () = (** ## Reading RSS feeds -To conclude this introduction with a more interesting example, let's look how to parse an +To conclude this introduction with a more interesting example, let's look at how to parse an RSS feed. As discussed earlier, we can use relative paths or web addresses when calling the type provider: *) @@ -530,7 +530,7 @@ printfn "%s was born in %d" turing.Surname turing.BirthDate.Year (** The properties of the provided type are derived from the schema instead of being inferred from samples. -Usually a schema is not specified as plain text but stored in a file like +Usually, a schema is not specified as plain text but stored in a file like [`data/po.xsd`](../data/po.xsd) and the uri is set in the `Schema` parameter: *) @@ -545,7 +545,7 @@ type RssXsd = XmlProvider (** -The schema is expected to define a root element (a global element with complex type). +The schema is expected to define a root element (a global element with a complex type). In case of multiple root elements: *) @@ -611,7 +611,7 @@ type FooSequence = """> (** -here a valid xml element is parsed as an instance of the provided type, with two properties corresponding to `bar`and `baz` elements, where the former is an array in order to hold multiple elements: +here a valid xml element is parsed as an instance of the provided type, with two properties corresponding to `bar` and `baz` elements, where the former is an array in order to hold multiple elements: *) let fooSequence = @@ -741,12 +741,12 @@ is still available. An important design decision is to focus on elements and not on complex types; while the latter may be valuable in schema design, our goal is simply to obtain an easy and safe way to access xml data. -In other words the provided types are not intended for domain modeling (it's one of the very few cases +In other words, the provided types are not intended for domain modeling (it's one of the very few cases where optional properties are preferred to sum types). Hence, we do not provide types corresponding to complex types in a schema but only corresponding -to elements (of course the underlying complex types still affect the shape of the provided types +to elements (of course, the underlying complex types still affect the shape of the provided types but this happens only implicitly). 
-Focusing on element shapes let us generate a type that should be essentially the same as one +Focusing on element shapes lets us generate a type that should be essentially the same as one inferred from a significant set of valid samples. This allows a smooth transition (replacing `Sample` with `Schema`) when a schema becomes available. diff --git a/docs/tutorials/JsonToXml.fsx b/docs/tutorials/JsonToXml.fsx index 3fca72e1a..b3a522bf9 100644 --- a/docs/tutorials/JsonToXml.fsx +++ b/docs/tutorials/JsonToXml.fsx @@ -64,7 +64,7 @@ produce a different value). Converting XML to JSON ---------------------- -Although XML and JSON are quite similar formats, there is a number of subtle differences. +Although XML and JSON are quite similar formats, there are a number of subtle differences. In particular, XML distinguishes between _attributes_ and _child elements_. Moreover, all XML elements have a name, while JSON arrays or records are anonymous (but records have named fields). Consider, for example, the following XML: @@ -77,7 +77,7 @@ have named fields). Consider, for example, the following XML: The JSON that we produce will ignore the top-level element name (`channel`). It produces -a record that contains a unique field for every attribute and a name of a child element. +a record that contains a unique field for every attribute and the name of a child element. If an element appears multiple times, it is turned into an array: [lang=js] @@ -89,10 +89,10 @@ If an element appears multiple times, it is turned into an array: As you can see, the `item` element has been automatically pluralized to `items` and the array contains two record values that consist of the `value` attribute. -The conversion function is a recursive function that takes a `XElement` and produces +The conversion function is a recursive function that takes an `XElement` and produces `cref:T:FSharp.Data.JsonValue`. It builds JSON records (using `JsonValue.Record`) and arrays (using `JsonValue.Array`). All attribute values are turned into `JsonValue.String` - the -sample does not imlement more sophisticated conversion that would turn numeric +sample does not implement a more sophisticated conversion that would turn numeric attributes to a corresponding JSON type: *)
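For orientation, the conversion that follows this paragraph in the tutorial has roughly the shape
sketched below. This is an illustrative sketch only, not the file's exact code: the `fromXml` name,
the naive `+ "s"` pluralization, and the decision to ignore element text content are assumptions
made for the example.

open System.Xml.Linq
open FSharp.Data

let rec fromXml (xml: XElement) =
    // All attribute values are turned into string-valued record fields
    let attrs =
        [ for attr in xml.Attributes() ->
              attr.Name.LocalName, JsonValue.String attr.Value ]

    // Group child elements by name; a repeated element becomes a (pluralized) array
    let children =
        xml.Elements()
        |> Seq.groupBy (fun e -> e.Name.LocalName)
        |> Seq.map (fun (name, elems) ->
            match Seq.toList elems with
            | [ single ] -> name, fromXml single
            | many -> name + "s", JsonValue.Array [| for e in many -> fromXml e |])
        |> Seq.toList

    JsonValue.Record(List.toArray (attrs @ children))

// For example, a `channel` element with repeated `item` children, as in the text above,
// has its `item` elements pluralized into an `items` array:
let sample =
    XElement.Parse """<channel lang="en"><item value="First" /><item value="Second" /></channel>"""

fromXml sample
// {"lang": "en", "items": [{"value": "First"}, {"value": "Second"}]}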