diff --git a/previews/PR80/customprocessing.html b/previews/PR80/customprocessing.html
deleted file mode 100644
index 388cfc50..00000000
--- a/previews/PR80/customprocessing.html
+++ /dev/null
@@ -1,29 +0,0 @@

5. Custom pre- and post-processing · Literate.jl

5. Custom pre- and post-processing

Since all packages are different and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way with line filtering, there might be situations where you need to hook into the generation manually and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that transform the content.

All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.

preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.

Example: Adding current date

As an example, let's say we want to splice the date of generation into the output. We could, of course, update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:

# # Example
-# This example was generated DATEOFTODAY
-
-x = 1 // 3

where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:

using Dates # Date and now come from the Dates standard library
function update_date(content)
-    content = replace(content, "DATEOFTODAY" => Date(now()))
-    return content
-end

which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:

Literate.markdown("input.jl", "outputdir"; preprocess = update_date)

Example: Replacing include calls with included code

Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could, for example, be used in the test suite of your package.

We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and changes may be made to them, and we want these changes to also be reflected in the documentation.

A very easy way to do this is to use preprocess to replace the include statements with the content of the included files. First, create a runnable .jl file following the format of Literate:

# # Replace includes
-# This is an example to replace `include` calls with the actual file content.
-
-include("file1.jl")
-
-# Cool, we just saw the result of the above code snippet. Here is one more:
-
-include("file2.jl")

Let's say we have saved this file as examples.jl. Then, define a suitable pre-processing function:

function replace_includes(str)
-
-    included = ["file1.jl", "file2.jl"]
-
-    # Here the path loads the files from their proper directory,
-    # which may not be the directory of the `examples.jl` file!
-    path = "directory/to/example/files/"
-
-    for ex in included
-        content = read(path*ex, String)
-        str = replace(str, "include(\"$(ex)\")" => content)
-    end
-    return str
-end

(of course, replace the entries of included with your own files)

Finally, you simply pass this function to e.g. Literate.markdown as

Literate.markdown("examples.jl", "path/to/save/markdown";
-                  name = "markdown_file_name", preprocess = replace_includes)

and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!

This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.

diff --git a/previews/PR80/customprocessing/index.html b/previews/PR80/customprocessing/index.html
new file mode 100644
index 00000000..bb2b03b1
--- /dev/null
+++ b/previews/PR80/customprocessing/index.html
@@ -0,0 +1,29 @@

5. Custom pre- and post-processing · Literate.jl

5. Custom pre- and post-processing

Since all packages are different and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way with line filtering, there might be situations where you need to hook into the generation manually and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that transform the content.

All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.

preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
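As a sketch of what a notebook postprocess function could look like (this assumes the dictionary follows the standard Jupyter nbformat layout with a top-level "metadata" entry; the function name and the metadata field are hypothetical):

function tag_notebook(nb)
    metadata = get!(nb, "metadata", Dict{String,Any}()) # assumed nbformat-style "metadata" entry
    metadata["comment"] = "generated by Literate.jl"    # hypothetical metadata field
    return nb
end

Literate.notebook("input.jl", "outputdir"; postprocess = tag_notebook)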

Example: Adding current date

As an example, let's say we want to splice the date of generation into the output. We could, of course, update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:

# # Example
+# This example was generated DATEOFTODAY
+
+x = 1 // 3

where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:

using Dates # Date and now come from the Dates standard library
function update_date(content)
+    content = replace(content, "DATEOFTODAY" => Date(now()))
+    return content
+end

which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:

Literate.markdown("input.jl", "outputdir"; preprocess = update_date)

Example: Replacing include calls with included code

Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could, for example, be used in the test suite of your package.

We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and changes may be made to them, and we want these changes to also be reflected in the documentation.

A very easy way to do this is to use preprocess to replace the include statements with the content of the included files. First, create a runnable .jl file following the format of Literate:

# # Replace includes
+# This is an example to replace `include` calls with the actual file content.
+
+include("file1.jl")
+
+# Cool, we just saw the result of the above code snippet. Here is one more:
+
+include("file2.jl")

Let's say we have saved this file as examples.jl. Then, define a suitable pre-processing function:

function replace_includes(str)
+
+    included = ["file1.jl", "file2.jl"]
+
+    # Here the path loads the files from their proper directory,
+    # which may not be the directory of the `examples.jl` file!
+    path = "directory/to/example/files/"
+
+    for ex in included
+        content = read(path*ex, String)
+        str = replace(str, "include(\"$(ex)\")" => content)
+    end
+    return str
+end

(of course, replace the entries of included with your own files)

Finally, you simply pass this function to e.g. Literate.markdown as

Literate.markdown("examples.jl", "path/to/save/markdown";
+                  name = "markdown_file_name", preprocess = replace_includes)

and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!

This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
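A hypothetical sketch of how such a make.jl step could look (the paths and file names below are made up, and the loop reuses the replace_includes function defined above):

using Literate

for file in ("examples.jl",)
    Literate.markdown(joinpath(@__DIR__, "..", "examples", file),
                      joinpath(@__DIR__, "src", "generated");
                      preprocess = replace_includes)
end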

diff --git a/previews/PR80/documenter.html b/previews/PR80/documenter.html
deleted file mode 100644
index 8b065ba6..00000000
--- a/previews/PR80/documenter.html
+++ /dev/null
@@ -1,12 +0,0 @@

6. Interaction with Documenter.jl · Literate.jl

6. Interaction with Documenter.jl

Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, based on the assumption that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:

Literate.markdown:

  • The default code fence will change from
    ```julia
    -# code
    -```
    to Documenter's @example blocks:
    ```@example $(name)
    -# code
    -```
  • The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
    ```@meta
    -EditURL = "$(relpath(inputfile, outputdir))"
    -```

Literate.notebook:

  • Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
  • Documenter style markdown math
    ```math
    -\int f dx
    -```
    is replaced with notebook compatible
    $$
    -\int f dx
    -$$

Literate.script:

  • Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
diff --git a/previews/PR80/documenter/index.html b/previews/PR80/documenter/index.html
new file mode 100644
index 00000000..c4a144fe
--- /dev/null
+++ b/previews/PR80/documenter/index.html
@@ -0,0 +1,12 @@

6. Interaction with Documenter.jl · Literate.jl

6. Interaction with Documenter.jl

Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, based on the assumption that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:

Literate.markdown:

  • The default code fence will change from
    ```julia
    +# code
    +```
    to Documenter's @example blocks:
    ```@example $(name)
    +# code
    +```
  • The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
    ```@meta
    +EditURL = "$(relpath(inputfile, outputdir))"
    +```

Literate.notebook:

  • Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
  • Documenter style markdown math
    ```math
    +\int f dx
    +```
    is replaced with notebook compatible
    $$
    +\int f dx
    +$$

Literate.script:

  • Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
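If you do not want any of these Documenter-specific transformations, they can be turned off by passing documenter = false to the generators (it defaults to true), for example:

Literate.markdown("input.jl", "outputdir"; documenter = false)
Literate.notebook("input.jl", "outputdir"; documenter = false)
Literate.script("input.jl", "outputdir"; documenter = false)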
diff --git a/previews/PR80/fileformat.html b/previews/PR80/fileformat.html
deleted file mode 100644
index 0c2f7556..00000000
--- a/previews/PR80/fileformat.html
+++ /dev/null
@@ -1,18 +0,0 @@

2. File Format · Literate.jl

2. File Format

The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own, and that it is simple to include them in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.

2.1. Syntax

The basic syntax is simple:

  • lines starting with # are treated as markdown,
  • all other lines are treated as julia code.

Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for those you can instead use ##, which will render as # in the output.

Let's look at a simple example:

# # Rational numbers
-#
-# In julia rational numbers can be constructed with the `//` operator.
-# Lets define two rational numbers, `x` and `y`:
-
-## Define variable x and y
-x = 1//3
-y = 2//5
-
-# When adding `x` and `y` together we obtain a new rational number:
-
-z = x + y

In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:

  • The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
  • The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.

For simple use this is all you need to know. The following additional special syntax can also be used:

There are also some default convenience replacements that will always be performed, see Default Replacements.

2.2. Filtering Lines

It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:

  • #md: line exclusive to markdown output,
  • #nb: line exclusive to notebook output,
  • #jl: line exclusive to script output,
  • #src: line exclusive to the source code and thus filtered out unconditionally.

Lines starting with one of these tokens are filtered out in the preprocessing step.

Tip

The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.

Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:

#md # ```@docs
-#md # Literate.markdown
-#md # Literate.notebook
-#md # Literate.script
-#md # ```

The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we simply remove the leading #md from the lines. Beware that the space after the tag is also removed.

The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:

using Test                      #src
-@test result == expected_result #src

2.3. Default Replacements

The following convenience "macros" are always expanded:

  • @__NAME__

    expands to the name keyword argument to Literate.markdown, Literate.notebook and Literate.script (defaults to the filename of the input file).

  • @__REPO_ROOT_URL__

    expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master and is convenient when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module.

  • @__NBVIEWER_ROOT_URL__

    expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder) where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.

  • @__BINDER_ROOT_URL__

    expands to https://mybinder.org/v2/gh/$(ENV["TRAVIS_REPO_SLUG"])/$(branch)?filepath=$(folder) where branch/folder is the branch and folder where Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder-badge in e.g. the HTML output you can use:

    [![Binder](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/path/to/notebook.ipynb)
Note

@__REPO_ROOT_URL__ and @__NBVIEWER_ROOT_URL__ work for documentation built with DocumentationGenerator.jl, but @__BINDER_ROOT_URL__ does not, since mybinder.org requires the files to be located inside a git repository.

diff --git a/previews/PR80/fileformat/index.html b/previews/PR80/fileformat/index.html
new file mode 100644
index 00000000..2a66fb55
--- /dev/null
+++ b/previews/PR80/fileformat/index.html
@@ -0,0 +1,18 @@

2. File Format · Literate.jl

2. File Format

The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own, and that it is simple to include them in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.

2.1. Syntax

The basic syntax is simple:

  • lines starting with # are treated as markdown,
  • all other lines are treated as julia code.

Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for those you can instead use ##, which will render as # in the output.

Let's look at a simple example:

# # Rational numbers
+#
+# In julia rational numbers can be constructed with the `//` operator.
+# Lets define two rational numbers, `x` and `y`:
+
+## Define variable x and y
+x = 1//3
+y = 2//5
+
+# When adding `x` and `y` together we obtain a new rational number:
+
+z = x + y

In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:

  • The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
  • The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.

For simple use this is all you need to know. The following additional special syntax can also be used:

There are also some default convenience replacements that will always be performed, see Default Replacements.

2.2. Filtering Lines

It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:

  • #md: line exclusive to markdown output,
  • #nb: line exclusive to notebook output,
  • #jl: line exclusive to script output,
  • #src: line exclusive to the source code and thus filtered out unconditionally.

Lines starting with one of these tokens are filtered out in the preprocessing step.

Tip

The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
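For instance, a negated token could be used like this (a small illustrative sketch, not taken from the manual):

#!nb # This markdown line is included in the markdown and script output,
#!nb # but filtered out when generating the notebook.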

Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:

#md # ```@docs
+#md # Literate.markdown
+#md # Literate.notebook
+#md # Literate.script
+#md # ```

The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we simply remove the leading #md from the lines. Beware that the space after the tag is also removed.

The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:

using Test                      #src
+@test result == expected_result #src

2.3. Default Replacements

The following convenience "macros" are always expanded:

  • @__NAME__

    expands to the name keyword argument to Literate.markdown, Literate.notebook and Literate.script (defaults to the filename of the input file).

  • @__REPO_ROOT_URL__

    expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master and is convenient when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module.

  • @__NBVIEWER_ROOT_URL__

    expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder) where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.

  • @__BINDER_ROOT_URL__

    expands to https://mybinder.org/v2/gh/$(ENV["TRAVIS_REPO_SLUG"])/$(branch)?filepath=$(folder) where branch/folder is the branch and folder where Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder-badge in e.g. the HTML output you can use:

    [![Binder](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/path/to/notebook.ipynb)
Note

@__REPO_ROOT_URL__ and @__NBVIEWER_ROOT_URL__ work for documentation built with DocumentationGenerator.jl, but @__BINDER_ROOT_URL__ does not, since mybinder.org requires the files to be located inside a git repository.

diff --git a/previews/PR80/generated/example.ipynb b/previews/PR80/generated/example.ipynb
index b3bd6b18..f07bb8e7 100644
--- a/previews/PR80/generated/example.ipynb
+++ b/previews/PR80/generated/example.ipynb

[Two output cells with embedded SVG plots were regenerated: a line plot of the series y1 and y2, with x-axis ticks 0, 5, 10, 15 and y-axis ticks -1.0 to 1.0. The raw SVG diff is omitted here.]

diff --git a/previews/PR80/generated/example.html b/previews/PR80/generated/example/index.html
similarity index 79%
rename from previews/PR80/generated/example.html
rename to previews/PR80/generated/example/index.html
index 7f00e4b9..85e034c0 100644
--- a/previews/PR80/generated/example.html
+++ b/previews/PR80/generated/example/index.html
@@ -1,6 +1,6 @@

7. Example · Literate.jl

7. Example

This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.

It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.

Basic syntax

The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:

x = 1//3
-y = 2//5
2//5

In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.

It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):

  • This line starts with #md and is thus only visible in the markdown output.

The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.

x + y
11//15
x * y
2//15

Output Capturing

Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.

Note

Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.

function foo()
+7. Example · Literate.jl

7. Example

This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.

It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.

Basic syntax

The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:

x = 1//3
+y = 2//5
2//5

In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.

It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):

  • This line starts with #md and is thus only visible in the markdown output.

The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.

x + y
11//15
x * y
2//15

Output Capturing

Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.

Note

Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.

function foo()
     println("This string is printed to stdout.")
     return [1, 2, 3, 4]
 end
@@ -16,114 +16,114 @@
 plot(x, [y1, y2])
[Embedded SVG plot regenerated: a line plot of the series y1 and y2, with x-axis ticks 0, 5, 10, 15 and y-axis ticks -1.0 to 1.0. The raw SVG diff is omitted here.]

Custom processing

It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:

z = 1.0 + 2.0im
1.0 + 2.0im

Documenter.jl interaction

In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:

\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]

using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.

This page was generated using Literate.jl.

+

Custom processing

It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:

z = 1.0 + 2.0im
1.0 + 2.0im

Documenter.jl interaction

In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:

\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]

using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.

This page was generated using Literate.jl.

diff --git a/previews/PR80/generated/name.html b/previews/PR80/generated/name.html
deleted file mode 100644
index a0249a04..00000000
--- a/previews/PR80/generated/name.html
+++ /dev/null
@@ -1,2 +0,0 @@

Rational numbers · Literate.jl

Rational numbers

In julia rational numbers can be constructed with the // operator. Let's define two rational numbers, x and y:

x = 1//3
1//3
y = 2//5
2//5

When adding x and y together we obtain a new rational number:

z = x + y
11//15
diff --git a/previews/PR80/generated/name/index.html b/previews/PR80/generated/name/index.html
new file mode 100644
index 00000000..51f2c046
--- /dev/null
+++ b/previews/PR80/generated/name/index.html
@@ -0,0 +1,2 @@

Rational numbers · Literate.jl

Rational numbers

In julia rational numbers can be constructed with the // operator. Let's define two rational numbers, x and y:

x = 1//3
1//3
y = 2//5
2//5

When adding x and y together we obtain a new rational number:

z = x + y
11//15
diff --git a/previews/PR80/index.html b/previews/PR80/index.html
index 5e98c6d7..9ba68da0 100644
--- a/previews/PR80/index.html
+++ b/previews/PR80/index.html
@@ -1,2 +1,2 @@

1. Introduction · Literate.jl

1. Introduction

Welcome to the documentation for Literate – a simplistic package for Literate Programming.

What?

Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.

The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!

The public interface consists mainly of three functions (Literate.markdown, Literate.notebook and Literate.script), all of which take the same script file as input, but generate different output.

Why?

Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.

It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format, since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.

It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.

Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.

+1. Introduction · Literate.jl

1. Introduction

Welcome to the documentation for Literate – a simplistic package for Literate Programming.

What?

Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.

The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!

The public interface consists mainly of three functions (Literate.markdown, Literate.notebook and Literate.script), all of which take the same script file as input, but generate different output.
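All three are called in the same way; a minimal sketch (the file paths here are hypothetical):

using Literate
Literate.markdown("examples/example.jl", "docs/src/generated") # markdown page, e.g. for Documenter
Literate.notebook("examples/example.jl", "docs/src/generated") # Jupyter notebook
Literate.script("examples/example.jl", "docs/src/generated")   # plain julia script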

Why?

Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.

It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format, since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.

It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.

Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.

diff --git a/previews/PR80/outputformats.html b/previews/PR80/outputformats.html
deleted file mode 100644
index 0d44883d..00000000
--- a/previews/PR80/outputformats.html
+++ /dev/null
@@ -1,38 +0,0 @@

4. Output Formats · Literate.jl

4. Output Formats

When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:

# # Rational numbers
-#
-# In julia rational numbers can be constructed with the `//` operator.
-# Lets define two rational numbers, `x` and `y`:
-
-x = 1//3
-#-
-y = 2//5
-
-# When adding `x` and `y` together we obtain a new rational number:
-
-z = x + y

and see how this is rendered in each of the output formats.

4.1. Markdown Output

The (default) markdown output of the source snippet above is as follows

```@meta
-EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
-```
-
-# Rational numbers
-
-In julia rational numbers can be constructed with the `//` operator.
-Lets define two rational numbers, `x` and `y`:
-
-```@example name
-x = 1//3
-```
-
-```@example name
-y = 2//5
-```
-
-When adding `x` and `y` together we obtain a new rational number:
-
-```@example name
-z = x + y
-```

We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.

Some of the output rendering can be controlled with keyword arguments to Literate.markdown:

Literate.markdownFunction
Literate.markdown(inputfile, outputdir; kwargs...)

Generate a markdown file from inputfile and write the result to the directory outputdir.

Keyword arguments:

  • name: name of the output file, excluding .md; name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
  • preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
  • documenter: boolean that tells if the output is intended for use with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
  • codefence: A Pair of opening and closing code fence. Defaults to
    "```@example $(name)" => "```"
    if documenter = true and
    "```julia" => "```"
    if documenter = false.
  • credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
source

4.2. Notebook Output

The (default) notebook output of the source snippet can be seen here: notebook.ipynb.

We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:

Literate.notebookFunction
Literate.notebook(inputfile, outputdir; kwargs...)

Generate a notebook from inputfile and write the result to outputdir.

Keyword arguments:

  • name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
  • preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
  • execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
  • documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
  • credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
source

Notebook metadata

Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows

%% optional ignored text [type] {optional metadata JSON}

Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.

4.3. Script Output

The (default) script output of the source snippet above is as follows

x = 1//3
-
-y = 2//5
-
-z = x + y

We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:

Literate.scriptFunction
Literate.script(inputfile, outputdir; kwargs...)

Generate a plain script file from inputfile and write the result to outputdir.

Keyword arguments:

  • name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
  • preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
  • documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
  • keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
  • credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
source
diff --git a/previews/PR80/outputformats/index.html b/previews/PR80/outputformats/index.html
new file mode 100644
index 00000000..19536d0d
--- /dev/null
+++ b/previews/PR80/outputformats/index.html
@@ -0,0 +1,38 @@

4. Output Formats · Literate.jl

4. Output Formats

When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:

# # Rational numbers
+#
+# In julia rational numbers can be constructed with the `//` operator.
+# Lets define two rational numbers, `x` and `y`:
+
+x = 1//3
+#-
+y = 2//5
+
+# When adding `x` and `y` together we obtain a new rational number:
+
+z = x + y

and see how this is rendered in each of the output formats.

4.1. Markdown Output

The (default) markdown output of the source snippet above is as follows

```@meta
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
+```
+
+# Rational numbers
+
+In julia rational numbers can be constructed with the `//` operator.
+Lets define two rational numbers, `x` and `y`:
+
+```@example name
+x = 1//3
+```
+
+```@example name
+y = 2//5
+```
+
+When adding `x` and `y` together we obtain a new rational number:
+
+```@example name
+z = x + y
+```

We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.

Some of the output rendering can be controlled with keyword arguments to Literate.markdown:

Literate.markdownFunction
Literate.markdown(inputfile, outputdir; kwargs...)

Generate a markdown file from inputfile and write the result to the directory outputdir.

Keyword arguments:

  • name: name of the output file, excluding .md; name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
  • preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
  • documenter: boolean that tells if the output is intended for use with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
  • codefence: A Pair of opening and closing code fence. Defaults to
    "```@example $(name)" => "```"
    if documenter = true and
    "```julia" => "```"
    if documenter = false.
  • credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
source
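For example, to fall back to plain julia code fences instead of @example blocks, the codefence keyword argument could be overridden like this (a sketch with a hypothetical input file):

Literate.markdown("input.jl", "outputdir";
                  name = "example", codefence = "```julia" => "```")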

4.2. Notebook Output

The (default) notebook output of the source snippet can be seen here: notebook.ipynb.

We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:

Literate.notebookFunction
Literate.notebook(inputfile, outputdir; kwargs...)

Generate a notebook from inputfile and write the result to outputdir.

Keyword arguments:

  • name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
  • preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
  • execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
  • documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
  • credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
source
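For example, to generate the notebook without executing it (a sketch with a hypothetical input file):

Literate.notebook("input.jl", "outputdir"; execute = false)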

Notebook metadata

Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows

%% optional ignored text [type] {optional metadata JSON}

Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.

4.3. Script Output

The (default) script output of the source snippet above is as follows

x = 1//3
+
+y = 2//5
+
+z = x + y

We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:

Literate.scriptFunction
Literate.script(inputfile, outputdir; kwargs...)

Generate a plain script file from inputfile and write the result to outputdir.

Keyword arguments:

  • name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
  • preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
  • documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
  • keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
  • credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
source
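For example, to keep the markdown lines as comments in the generated script (a sketch with a hypothetical input file):

Literate.script("input.jl", "outputdir"; keep_comments = true)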
diff --git a/previews/PR80/pipeline.html b/previews/PR80/pipeline.html
deleted file mode 100644
index bd08ad54..00000000
--- a/previews/PR80/pipeline.html
+++ /dev/null
@@ -1,32 +0,0 @@

3. Processing pipeline · Literate.jl

3. Processing pipeline

The generation of output follows the same pipeline for all output formats:

  1. Pre-processing
  2. Parsing
  3. Document generation
  4. Post-processing
  5. Writing to file

3.1. Pre-processing

The first step is pre-processing of the input file. The file is read into a String, and the first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.

The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.

3.2. Parsing

After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code, according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:

# # Rational numbers                                                     <- markdown
-#                                                                        <- markdown
-# In julia rational numbers can be constructed with the `//` operator.   <- markdown
-# Lets define two rational numbers, `x` and `y`:                         <- markdown
-                                                                         <- code
-## Define variable x and y                                               <- code
-x = 1 // 3                                                               <- code
-y = 2 // 5                                                               <- code
-                                                                         <- code
-# When adding `x` and `y` together we obtain a new rational number:      <- markdown
-                                                                         <- code
-z = x + y                                                                <- code

In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:

# # Rational numbers                                                     ┐
-#                                                                        │
-# In julia rational numbers can be constructed with the `//` operator.   │ markdown
-# Lets define two rational numbers, `x` and `y`:                         ┘
-                                                                         ┐
-## Define variable x and y                                               │
-x = 1 // 3                                                               │
-y = 2 // 5                                                               │ code
-                                                                         ┘
-# When adding `x` and `y` together we obtain a new rational number:      ] markdown
-                                                                         ┐
-z = x + y                                                                ┘ code

In the last parsing step all empty leading and trailing lines for each chunk are removed, but empty lines within the same block are kept. The leading # tokens are also removed from the markdown chunks. Finally we would end up with the following 4 chunks:

Chunk #1:

# Rational numbers
-
-In julia rational numbers can be constructed with the `//` operator.
-Lets define two rational numbers, `x` and `y`:

Chunk #2:

# Define variable x and y
-x = 1 // 3
-y = 2 // 5

Chunk #3:

When adding `x` and `y` together we obtain a new rational number:

Chunk #4:

z = x + y

It is then up to the Document generation step to decide how these chunks should be treated.

Custom control over chunk splits

Sometimes it is convenient to control manually how the chunks are split, for example if you want to split a block of code into two, such that the parts end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":

x = 1 // 3
-y = 2 // 5
-#-
-z = x + y

The example above would result in two consecutive code-chunks.

Tip

The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.

It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.

3.3. Document generation

After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output Formats sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:

  • Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
  • Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
  • Script output: markdown chunks are discarded, code chunks are printed as-is.

3.4. Post-processing

When the document has been generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.

3.5. Writing to file

The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user-supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents are created as part of the build process rather than being files in the repo.

diff --git a/previews/PR80/pipeline/index.html b/previews/PR80/pipeline/index.html
new file mode 100644
index 00000000..6262af2d
--- /dev/null
+++ b/previews/PR80/pipeline/index.html
@@ -0,0 +1,32 @@

3. Processing pipeline · Literate.jl

3. Processing pipeline

The generation of output follows the same pipeline for all output formats:

  1. Pre-processing
  2. Parsing
  3. Document generation
  4. Post-processing
  5. Writing to file

3.1. Pre-processing

The first step is pre-processing of the input file. The file is read into a String, and the first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.

The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
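As a rough illustration of the idea (this is not Literate's actual implementation, and it ignores end-of-line tokens and negated tokens), the normalization and markdown-targeted filtering amount to something like:

content = replace(content, "\r\n" => "\n")       # normalize CRLF line endings to LF
content = replace(content, r"^#md "m => "")      # keep #md lines, dropping the token itself
content = replace(content, r"^#nb .*\n?"m => "") # drop notebook-only lines entirely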

3.2. Parsing

After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code, according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:

# # Rational numbers                                                    <- markdown
#                                                                       <- markdown
# In julia rational numbers can be constructed with the `//` operator.  <- markdown
# Lets define two rational numbers, `x` and `y`:                        <- markdown
                                                                        <- code
## Define variable x and y                                              <- code
x = 1 // 3                                                              <- code
y = 2 // 5                                                              <- code
                                                                        <- code
# When adding `x` and `y` together we obtain a new rational number:     <- markdown
                                                                        <- code
z = x + y                                                               <- code
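
As a rough illustration of the categorization rule, here is a simplified Julia sketch (categorize is a made-up name, and the sketch ignores special tokens such as the #- chunk splitter):

# A line is markdown if its first non-whitespace token is a single `#`;
# `##` marks a regular code comment, and everything else is code.
function categorize(line)
    s = lstrip(line)    # leading whitespace is allowed before the `#`
    return startswith(s, "#") && !startswith(s, "##") ? :markdown : :code
end

# e.g. `categorize.(split(content, '\n'))`, with `content` holding the pre-processed
# source from the previous step, reproduces the markdown/code tags shown above.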

In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:

# # Rational numbers                                                    ┐
#                                                                       │
# In julia rational numbers can be constructed with the `//` operator.  │ markdown
# Lets define two rational numbers, `x` and `y`:                        ┘
                                                                        ┐
## Define variable x and y                                              │
x = 1 // 3                                                              │
y = 2 // 5                                                              │ code
                                                                        ┘
# When adding `x` and `y` together we obtain a new rational number:     ] markdown
                                                                        ┐
z = x + y                                                               ┘ code
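
A minimal sketch of this grouping step could look as follows (illustrative names, not Literate's implementation):

# Collect consecutive lines with the same tag (:markdown or :code) into chunks.
function group_chunks(lines, tags)
    chunks = Vector{Tuple{Symbol,Vector{String}}}()
    for (line, tag) in zip(lines, tags)
        if isempty(chunks) || chunks[end][1] != tag
            push!(chunks, (tag, [String(line)]))   # the tag changed: start a new chunk
        else
            push!(chunks[end][2], line)            # same tag: append to the current chunk
        end
    end
    return chunks
end

Applied to the categorized lines above this would produce the four groups indicated by the brackets.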

In the last parsing step, all leading and trailing empty lines are removed from each chunk, but empty lines within a chunk are kept. The leading # tokens are also removed from the markdown chunks. Finally, we end up with the following 4 chunks (a small sketch of this trimming step follows the chunk listing):

Chunk #1:

# Rational numbers

In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:

Chunk #2:

# Define variable x and y
x = 1 // 3
y = 2 // 5

Chunk #3:

When adding `x` and `y` together we obtain a new rational number:

Chunk #4:

z = x + y
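
The trimming and de-commenting just described can be sketched as follows (simplified, with made-up names; the real implementation handles more cases):

# Drop empty lines at the start and end of a chunk (interior empty lines are kept)
# and strip the leading `#` token from markdown chunks.
function finalize_chunk(tag, lines)
    lo = something(findfirst(!isempty, lines), 1)
    hi = something(findlast(!isempty, lines), length(lines))
    trimmed = lines[lo:hi]
    return tag === :markdown ? [replace(l, r"^\s*#\s?" => "") for l in trimmed] : trimmed
end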

It is then up to the Document generation step to decide how these chunks should be treated.

Custom control over chunk splits

Sometimes it is convenient to be able to manually control how the chunks are split, for example if you want to split a block of code into two such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":

x = 1 // 3
y = 2 // 5
#-
z = x + y

The example above would result in two consecutive code-chunks.

Tip

The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.

It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.

3.3. Document generation

After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens (a small sketch of the markdown case follows the list):

  • Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
  • Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
  • Script output: markdown chunks are discarded, code chunks are printed as-is.
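
For the markdown case, the default code fence (with documenter = true) is the Documenter @example block, so a code chunk is wrapped roughly as in the following sketch (illustrative only; name and chunk are placeholder values, and the actual writing is done by Literate.markdown):

name  = "example"                          # the `name` keyword argument
chunk = ["x = 1 // 3", "y = 2 // 5"]       # a code chunk from the parsing step
fence = "```@example $(name)" => "```"     # default code fence when documenter = true
println(fence.first, '\n', join(chunk, '\n'), '\n', fence.second)

With documenter = false the fence instead defaults to a plain ```julia block.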

3.4. Post-processing

When the document has been generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.

3.5. Writing to file

The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl), where outputdir is the output directory supplied by the user (for example docs/generated) and name is a user-supplied filename. It is recommended to add the output directory to .gitignore, since the idea is that the documents are generated as part of the build process rather than being files checked into the repo.
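
As a small illustration of the resulting paths (the values are placeholders):

outputdir = "docs/generated"             # output directory passed to the generator
name      = "example"                    # the `name` keyword argument
joinpath(outputdir, name * ".md")        # -> "docs/generated/example.md"    (Literate.markdown)
joinpath(outputdir, name * ".ipynb")     # -> "docs/generated/example.ipynb" (Literate.notebook)
joinpath(outputdir, name * ".jl")        # -> "docs/generated/example.jl"    (Literate.script)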

      diff --git a/previews/PR80/search_index.js b/previews/PR80/search_index.js index 0c94f7be..1f58326d 100644 --- a/previews/PR80/search_index.js +++ b/previews/PR80/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"pipeline.html#**3.**-Processing-pipeline-1","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"","category":"section"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The generation of output follows the same pipeline for all output formats:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Pre-processing\nParsing\nDocument generation\nPost-processing\nWriting to file","category":"page"},{"location":"pipeline.html#Pre-processing-1","page":"3. Processing pipeline","title":"3.1. Pre-processing","text":"","category":"section"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The next step is to perform all of the built-in default replacements. CRLF style line endings (\"\\r\\n\") are replaced with LF line endings (\"\\n\") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience \"macros\" described in Default Replacements is expanded.","category":"page"},{"location":"pipeline.html#Parsing-1","page":"3. Processing pipeline","title":"3.2. Parsing","text":"","category":"section"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Lets consider the example from the previous section with each line categorized:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"# # Rational numbers <- markdown\n# <- markdown\n# In julia rational numbers can be constructed with the `//` operator. <- markdown\n# Lets define two rational numbers, `x` and `y`: <- markdown\n <- code\n## Define variable x and y <- code\nx = 1 // 3 <- code\ny = 2 // 5 <- code\n <- code\n# When adding `x` and `y` together we obtain a new rational number: <- markdown\n <- code\nz = x + y <- code","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"In the next step the lines are grouped into \"chunks\" of markdown and code. This is done by simply collecting adjacent lines of the same \"type\" into chunks:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"# # Rational numbers ┐\n# │\n# In julia rational numbers can be constructed with the `//` operator. 
│ markdown\n# Lets define two rational numbers, `x` and `y`: ┘\n ┐\n## Define variable x and y │\nx = 1 // 3 │\ny = 2 // 5 │ code\n ┘\n# When adding `x` and `y` together we obtain a new rational number: ] markdown\n ┐\nz = x + y ┘ code","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"In the last parsing step all empty leading and trailing lines for each chunk are removed, but empty lines within the same block are kept. The leading # tokens are also removed from the markdown chunks. Finally we would end up with the following 4 chunks:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Chunks #1:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"# Rational numbers\n\nIn julia rational numbers can be constructed with the `//` operator.\nLets define two rational numbers, `x` and `y`:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Chunk #2:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"# Define variable x and y\nx = 1 // 3\ny = 2 // 5","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Chunk #3:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"When adding `x` and `y` together we obtain a new rational number:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Chunk #4:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"z = x + y","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"It is then up to the Document generation step to decide how these chunks should be treated.","category":"page"},{"location":"pipeline.html#Custom-control-over-chunk-splits-1","page":"3. Processing pipeline","title":"Custom control over chunk splits","text":"","category":"section"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as \"chunk-splitters\":","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"x = 1 // 3\ny = 2 // 5\n#-\nz = x + y","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The example above would result in two consecutive code-chunks.","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"tip: Tip\nThe rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"It is also possible to use #+ as a chunk splitter. 
The difference between #+ and #- is that #+ enables Documenter's \"continued\"-blocks, see the Documenter manual.","category":"page"},{"location":"pipeline.html#Document-generation-1","page":"3. Processing pipeline","title":"3.3. Document generation","text":"","category":"section"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is describe in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following is happening:","category":"page"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),\nNotebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,\nScript output: markdown chunks are discarded, code chunks are printed as-is.","category":"page"},{"location":"pipeline.html#Post-processing-1","page":"3. Processing pipeline","title":"3.4. Post-processing","text":"","category":"section"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"When the document is generated the user, again, has the option to hook-into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.","category":"page"},{"location":"pipeline.html#Writing-to-file-1","page":"3. Processing pipeline","title":"3.5. Writing to file","text":"","category":"section"},{"location":"pipeline.html#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than beeing files in the repo.","category":"page"},{"location":"documenter.html#Interaction-with-Documenter-1","page":"6. Interaction with Documenter.jl","title":"6. Interaction with Documenter.jl","text":"","category":"section"},{"location":"documenter.html#","page":"6. Interaction with Documenter.jl","title":"6. Interaction with Documenter.jl","text":"Literate can be used for any purpose, it spits out regular markdown files, and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) supports a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code have been written with Documenter.jl in mind. So lets take a look at what will happen if we set documenter = true:","category":"page"},{"location":"documenter.html#[Literate.markdown](@ref):-1","page":"6. Interaction with Documenter.jl","title":"Literate.markdown:","text":"","category":"section"},{"location":"documenter.html#","page":"6. Interaction with Documenter.jl","title":"6. 
Interaction with Documenter.jl","text":"The default code fence will change from\n```julia\n# code\n```\nto Documenters @example blocks:\n```@examples $(name)\n# code\n```\nThe following @meta block will be added to the top of the markdown page, which redirects the \"Edit on GitHub\" link on the top of the page to the source file rather than the generated .md file:\n```@meta\nEditURL = \"$(relpath(inputfile, outputdir))\"\n```","category":"page"},{"location":"documenter.html#[Literate.notebook](@ref):-1","page":"6. Interaction with Documenter.jl","title":"Literate.notebook:","text":"","category":"section"},{"location":"documenter.html#","page":"6. Interaction with Documenter.jl","title":"6. Interaction with Documenter.jl","text":"Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.\nDocumenter style markdown math\n```math\n\\int f dx\n```\nis replaced with notebook compatible\n$$\n\\int f dx\n$$","category":"page"},{"location":"documenter.html#[Literate.script](@ref):-1","page":"6. Interaction with Documenter.jl","title":"Literate.script:","text":"","category":"section"},{"location":"documenter.html#","page":"6. Interaction with Documenter.jl","title":"6. Interaction with Documenter.jl","text":"Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.","category":"page"},{"location":"customprocessing.html#Custom-pre-and-post-processing-1","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"","category":"section"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default \"transformation\" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.","category":"page"},{"location":"customprocessing.html#Example:-Adding-current-date-1","page":"5. 
Custom pre- and post-processing","title":"Example: Adding current date","text":"","category":"section"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"# # Example\n# This example was generated DATEOFTODAY\n\nx = 1 // 3","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, lets define the preprocess function, for example","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"function update_date(content)\n content = replace(content, \"DATEOFTODAY\" => Date(now()))\n return content\nend","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"which would replace every occurrence of \"DATEOFTODAY\" with the current date. We would now simply give this function to the generator, for example:","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Literate.markdown(\"input.jl\", \"outputdir\"; preprocess = update_date)","category":"page"},{"location":"customprocessing.html#Example:-Replacing-include-calls-with-included-code-1","page":"5. Custom pre- and post-processing","title":"Example: Replacing include calls with included code","text":"","category":"section"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl following the format of Literate","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"# # Replace includes\n# This is an example to replace `include` calls with the actual file content.\n\ninclude(\"file1.jl\")\n\n# Cool, we just saw the result of the above code snippet. 
Here is one more:\n\ninclude(\"file2.jl\")","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"function replace_includes(str)\n\n included = [\"file1.jl\", \"file2.jl\"]\n\n # Here the path loads the files from their proper directory,\n # which may not be the directory of the `examples.jl` file!\n path = \"directory/to/example/files/\"\n\n for ex in included\n content = read(path*ex, String)\n str = replace(str, \"include(\\\"$(ex)\\\")\" => content)\n end\n return str\nend","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"(of course replace included with your respective files)","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Finally, you simply pass this function to e.g. Literate.markdown as","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Literate.markdown(\"examples.jl\", \"path/to/save/markdown\";\n name = \"markdown_file_name\", preprocess = replace_includes)","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!","category":"page"},{"location":"customprocessing.html#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by literate via this make.jl file to generate the markdown and code cells of the documentation.","category":"page"},{"location":"outputformats.html#Output-Formats-1","page":"4. Output Formats","title":"4. Output Formats","text":"","category":"section"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"When the source is parsed, and have been processed it is time to render the output. We will consider the following source snippet:","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"import Markdown\nMarkdown.parse(\"```julia\\n\" * rstrip(read(\"outputformats.jl\", String)) * \"\\n```\")","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"and see how this is rendered in each of the output formats.","category":"page"},{"location":"outputformats.html#Markdown-Output-1","page":"4. Output Formats","title":"4.1. Markdown Output","text":"","category":"section"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"The (default) markdown output of the source snippet above is as follows","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. 
Output Formats","text":"import Markdown\nfile = joinpath(@__DIR__, \"../src/generated/name.md\")\nstr = \"````markdown\\n\" * rstrip(read(file, String)) * \"\\n````\"\nrm(file)\nMarkdown.parse(str)","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block have been added, that sets the EditURL variable. This is used by Documenter to redirect the \"Edit on GitHub\" link for the page, see Interaction with Documenter.","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"Some of the output rendering can be controlled with keyword arguments to Literate.markdown:","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"Literate.markdown","category":"page"},{"location":"outputformats.html#Literate.markdown","page":"4. Output Formats","title":"Literate.markdown","text":"Literate.markdown(inputfile, outputdir; kwargs...)\n\nGenerate a markdown file from inputfile and write the result to the directory outputdir.\n\nKeyword arguments:\n\nname: name of the output file, excluding .md; name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.\npreprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.\ndocumenter: boolean that tells if the output is intended to use with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.\ncodefence: A Pair of opening and closing code fence. Defaults to\n\"```@example $(name)\" => \"```\"\nif documenter = true and\n\"```julia\" => \"```\"\nif documenter = false.\ncredit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.\n\n\n\n\n\n","category":"function"},{"location":"outputformats.html#Notebook-Output-1","page":"4. Output Formats","title":"4.2. Notebook Output","text":"","category":"section"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"The (default) notebook output of the source snippet can be seen here: notebook.ipynb.","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"Literate.notebook","category":"page"},{"location":"outputformats.html#Literate.notebook","page":"4. Output Formats","title":"Literate.notebook","text":"Literate.notebook(inputfile, outputdir; kwargs...)\n\nGenerate a notebook from inputfile and write the result to outputdir.\n\nKeyword arguments:\n\nname: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. 
Defaults to the filename of inputfile.\npreprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.\nexecute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.\ndocumenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the the manual section on Interaction with Documenter.\ncredit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.\n\n\n\n\n\n","category":"function"},{"location":"outputformats.html#Notebook-metadata-1","page":"4. Output Formats","title":"Notebook metadata","text":"","category":"section"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"%% optional ignored text [type] {optional metadata JSON}","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.","category":"page"},{"location":"outputformats.html#Script-Output-1","page":"4. Output Formats","title":"4.3. Script Output","text":"","category":"section"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"The (default) script output of the source snippet above is as follows","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"import Markdown\nfile = joinpath(@__DIR__, \"../src/generated/outputformats.jl\")\nstr = \"```julia\\n\" * rstrip(read(file, String)) * \"\\n```\"\nrm(file)\nMarkdown.parse(str)","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:","category":"page"},{"location":"outputformats.html#","page":"4. Output Formats","title":"4. Output Formats","text":"Literate.script","category":"page"},{"location":"outputformats.html#Literate.script","page":"4. Output Formats","title":"Literate.script","text":"Literate.script(inputfile, outputdir; kwargs...)\n\nGenerate a plain script file from inputfile and write the result to outputdir.\n\nKeyword arguments:\n\nname: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.\npreprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.\ndocumenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the the manual section on Interaction with Documenter.\nkeep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. 
Defaults to false.\ncredit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.\n\n\n\n\n\n","category":"function"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"EditURL = \"https://github.com/fredrikekre/Literate.jl/blob/master/examples/example.jl\"","category":"page"},{"location":"generated/example.html#**7.**-Example-1","page":"7. Example","title":"7. Example","text":"","category":"section"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"(Image: ) (Image: )","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter have generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.","category":"page"},{"location":"generated/example.html#Basic-syntax-1","page":"7. Example","title":"Basic syntax","text":"","category":"section"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"The basic syntax for Literate is simple, lines starting with # is interpreted as markdown, and all the other lines are interpreted as code. Here is some code:","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"x = 1//3\ny = 2//5","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"This line starts with #md and is thus only visible in the markdown output.","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"x + y","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"x * y","category":"page"},{"location":"generated/example.html#Output-Capturing-1","page":"7. Example","title":"Output Capturing","text":"","category":"section"},{"location":"generated/example.html#","page":"7. Example","title":"7. 
Example","text":"Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"note: Note\nNote that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of \"This string is printed to stdout.\" is hidden.","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"function foo()\n println(\"This string is printed to stdout.\")\n return [1, 2, 3, 4]\nend\n\nfoo()","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"using Plots\nx = range(0, stop=6π, length=1000)\ny1 = sin.(x)\ny2 = cos.(x)\nplot(x, [y1, y2])","category":"page"},{"location":"generated/example.html#Custom-processing-1","page":"7. Example","title":"Custom processing","text":"","category":"section"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"z = 1.0 + 2.0im","category":"page"},{"location":"generated/example.html#documenter-interaction-1","page":"7. Example","title":"Documenter.jl interaction","text":"","category":"section"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"int_Omega nabla v cdot nabla u mathrmdOmega = int_Omega v f mathrmdOmega","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"using Documenters math syntax. Documenters syntax is automatically changed to \\begin{equation} ... \\end{equation} in the notebook output to display correctly.","category":"page"},{"location":"generated/example.html#","page":"7. Example","title":"7. Example","text":"This page was generated using Literate.jl.","category":"page"},{"location":"fileformat.html#**2.**-File-Format-1","page":"2. File Format","title":"2. File Format","text":"","category":"section"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"The source file format for Literate is a regular, commented, julia (.jl) scripts. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. 
include, to make sure the examples stay up do date with other changes in your package.","category":"page"},{"location":"fileformat.html#Syntax-1","page":"2. File Format","title":"2.1. Syntax","text":"","category":"section"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"The basic syntax is simple:","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"lines starting with # are treated as markdown,\nall other lines are treated as julia code.","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines is treated as markdown we can not use that for regular julia comments, for this you can instead use ##, which will render as # in the output.","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"Lets look at a simple example:","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"# # Rational numbers\n#\n# In julia rational numbers can be constructed with the `//` operator.\n# Lets define two rational numbers, `x` and `y`:\n\n## Define variable x and y\nx = 1//3\ny = 2//5\n\n# When adding `x` and `y` together we obtain a new rational number:\n\nz = x + y","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).\nThe script is \"self-explanatory\", i.e. the markdown lines works as comments and thus serve as good documentation on its own.","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"For simple use this is all you need to know. The following additional special syntax can also be used:","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,\n#- (#+): tag to manually control chunk-splits, see Custom control over chunk splits.","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"There is also some default convenience replacements that will always be performed, see Default Replacements.","category":"page"},{"location":"fileformat.html#Filtering-Lines-1","page":"2. File Format","title":"2.2. Filtering Lines","text":"","category":"section"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of \"tokens\" that can be used to mark the purpose of certain lines:","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. 
File Format","text":"#md: line exclusive to markdown output,\n#nb: line exclusive to notebook output,\n#jl: line exclusive to script output,\n#src: line exclusive to the source code and thus filtered out unconditionally.","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"Lines starting with one of these tokens are filtered out in the preprocessing step.","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"tip: Tip\nThe tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"#md # ```@docs\n#md # Literate.markdown\n#md # Literate.notebook\n#md # Literate.markdown\n#md # ```","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simple remove the leading #md from the lines. Beware that the space after the tag is also removed.","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"using Test #src\n@test result == expected_result #src","category":"page"},{"location":"fileformat.html#Default-Replacements-1","page":"2. File Format","title":"2.3. Default Replacements","text":"","category":"section"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"The following convenience \"macros\" are always expanded:","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"@__NAME__\nexpands to the name keyword argument to Literate.markdown, Literate.notebook and Literate.script (defaults to the filename of the input file).\n@__REPO_ROOT_URL__\nexpands to https://github.com/$(ENV[\"TRAVIS_REPO_SLUG\"])/blob/master and is a convenient way to use when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module.\n@__NBVIEWER_ROOT_URL__\nexpands to https://nbviewer.jupyter.org/github/$(ENV[\"TRAVIS_REPO_SLUG\"])/blob/gh-pages/$(folder) where folder is the folder that Documenter.deploydocs deploys too. 
This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.\n@__BINDER_ROOT_URL__\nexpands to https://mybinder.org/v2/gh/$(ENV[\"TRAVIS_REPO_SLUG\"])/$(branch)?filepath=$(folder) where branch/folder is the branch and folder where Documenter.deploydocs deploys too. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder-badge in e.g. the HTML output you can use:\n[![Binder](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/path/to/notebook.inpynb)","category":"page"},{"location":"fileformat.html#","page":"2. File Format","title":"2. File Format","text":"note: Note\n@__REPO_ROOT_URL__ and @__NBVIEWER_ROOT_URL__ works for documentation built with DocumentationGenerator.jl but @__BINDER_ROOT_URL__ does not, since mybinder.org requires the files to be located inside a git repository.","category":"page"},{"location":"index.html#**1.**-Introduction-1","page":"1. Introduction","title":"1. Introduction","text":"","category":"section"},{"location":"index.html#","page":"1. Introduction","title":"1. Introduction","text":"Welcome to the documentation for Literate – a simplistic package for Literate Programming.","category":"page"},{"location":"index.html#What?-1","page":"1. Introduction","title":"What?","text":"","category":"section"},{"location":"index.html#","page":"1. Introduction","title":"1. Introduction","text":"Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to \"clean\" the source from all metadata, and produce a pure Julia script.","category":"page"},{"location":"index.html#","page":"1. Introduction","title":"1. Introduction","text":"The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!","category":"page"},{"location":"index.html#","page":"1. Introduction","title":"1. Introduction","text":"The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:","category":"page"},{"location":"index.html#","page":"1. Introduction","title":"1. Introduction","text":"Literate.markdown: generates a markdown file\nLiterate.notebook: generates an (optionally executed) notebook\nLiterate.script: generates a plain script file, removing all metadata and special syntax.","category":"page"},{"location":"index.html#Why?-1","page":"1. Introduction","title":"Why?","text":"","category":"section"},{"location":"index.html#","page":"1. Introduction","title":"1. Introduction","text":"Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people wants to RTFM, others want to explore the package interactively in, for example, a notebook, and some people wants to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.","category":"page"},{"location":"index.html#","page":"1. Introduction","title":"1. Introduction","text":"It is quite common that packages have \"example notebooks\" to showcase the package. 
Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very \"rich\" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.","category":"page"},{"location":"index.html#","page":"1. Introduction","title":"1. Introduction","text":"It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.","category":"page"},{"location":"index.html#","page":"1. Introduction","title":"1. Introduction","text":"Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.","category":"page"},{"location":"generated/name.html#","page":"Rational numbers","title":"Rational numbers","text":"EditURL = \"https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl\"","category":"page"},{"location":"generated/name.html#Rational-numbers-1","page":"Rational numbers","title":"Rational numbers","text":"","category":"section"},{"location":"generated/name.html#","page":"Rational numbers","title":"Rational numbers","text":"In julia rational numbers can be constructed with the // operator. Lets define two rational numbers, x and y:","category":"page"},{"location":"generated/name.html#","page":"Rational numbers","title":"Rational numbers","text":"x = 1//3","category":"page"},{"location":"generated/name.html#","page":"Rational numbers","title":"Rational numbers","text":"y = 2//5","category":"page"},{"location":"generated/name.html#","page":"Rational numbers","title":"Rational numbers","text":"When adding x and y together we obtain a new rational number:","category":"page"},{"location":"generated/name.html#","page":"Rational numbers","title":"Rational numbers","text":"z = x + y","category":"page"}] +[{"location":"pipeline/#**3.**-Processing-pipeline-1","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"","category":"section"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The generation of output follows the same pipeline for all output formats:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Pre-processing\nParsing\nDocument generation\nPost-processing\nWriting to file","category":"page"},{"location":"pipeline/#Pre-processing-1","page":"3. Processing pipeline","title":"3.1. Pre-processing","text":"","category":"section"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The next step is to perform all of the built-in default replacements. CRLF style line endings (\"\\r\\n\") are replaced with LF line endings (\"\\n\") to simplify internal processing. 
Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience \"macros\" described in Default Replacements is expanded.","category":"page"},{"location":"pipeline/#Parsing-1","page":"3. Processing pipeline","title":"3.2. Parsing","text":"","category":"section"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Lets consider the example from the previous section with each line categorized:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"# # Rational numbers <- markdown\n# <- markdown\n# In julia rational numbers can be constructed with the `//` operator. <- markdown\n# Lets define two rational numbers, `x` and `y`: <- markdown\n <- code\n## Define variable x and y <- code\nx = 1 // 3 <- code\ny = 2 // 5 <- code\n <- code\n# When adding `x` and `y` together we obtain a new rational number: <- markdown\n <- code\nz = x + y <- code","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"In the next step the lines are grouped into \"chunks\" of markdown and code. This is done by simply collecting adjacent lines of the same \"type\" into chunks:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"# # Rational numbers ┐\n# │\n# In julia rational numbers can be constructed with the `//` operator. │ markdown\n# Lets define two rational numbers, `x` and `y`: ┘\n ┐\n## Define variable x and y │\nx = 1 // 3 │\ny = 2 // 5 │ code\n ┘\n# When adding `x` and `y` together we obtain a new rational number: ] markdown\n ┐\nz = x + y ┘ code","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"In the last parsing step all empty leading and trailing lines for each chunk are removed, but empty lines within the same block are kept. The leading # tokens are also removed from the markdown chunks. Finally we would end up with the following 4 chunks:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Chunks #1:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"# Rational numbers\n\nIn julia rational numbers can be constructed with the `//` operator.\nLets define two rational numbers, `x` and `y`:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Chunk #2:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"# Define variable x and y\nx = 1 // 3\ny = 2 // 5","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Chunk #3:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"When adding `x` and `y` together we obtain a new rational number:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. 
Processing pipeline","text":"Chunk #4:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"z = x + y","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"It is then up to the Document generation step to decide how these chunks should be treated.","category":"page"},{"location":"pipeline/#Custom-control-over-chunk-splits-1","page":"3. Processing pipeline","title":"Custom control over chunk splits","text":"","category":"section"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as \"chunk-splitters\":","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"x = 1 // 3\ny = 2 // 5\n#-\nz = x + y","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The example above would result in two consecutive code-chunks.","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"tip: Tip\nThe rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's \"continued\"-blocks, see the Documenter manual.","category":"page"},{"location":"pipeline/#Document-generation-1","page":"3. Processing pipeline","title":"3.3. Document generation","text":"","category":"section"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is describe in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following is happening:","category":"page"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),\nNotebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,\nScript output: markdown chunks are discarded, code chunks are printed as-is.","category":"page"},{"location":"pipeline/#Post-processing-1","page":"3. Processing pipeline","title":"3.4. Post-processing","text":"","category":"section"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"When the document is generated the user, again, has the option to hook-into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.","category":"page"},{"location":"pipeline/#Writing-to-file-1","page":"3. Processing pipeline","title":"3.5. 
Writing to file","text":"","category":"section"},{"location":"pipeline/#","page":"3. Processing pipeline","title":"3. Processing pipeline","text":"The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user-supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents are generated as part of the build process rather than being files in the repo.","category":"page"},{"location":"documenter/#Interaction-with-Documenter-1","page":"6. Interaction with Documenter.jl","title":"6. Interaction with Documenter.jl","text":"","category":"section"},{"location":"documenter/#","page":"6. Interaction with Documenter.jl","title":"6. Interaction with Documenter.jl","text":"Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:","category":"page"},{"location":"documenter/#[Literate.markdown](@ref):-1","page":"6. Interaction with Documenter.jl","title":"Literate.markdown:","text":"","category":"section"},{"location":"documenter/#","page":"6. Interaction with Documenter.jl","title":"6. Interaction with Documenter.jl","text":"The default code fence will change from\n```julia\n# code\n```\nto Documenter's @example blocks:\n```@example $(name)\n# code\n```\nThe following @meta block will be added to the top of the markdown page, which redirects the \"Edit on GitHub\" link at the top of the page to the source file rather than the generated .md file:\n```@meta\nEditURL = \"$(relpath(inputfile, outputdir))\"\n```","category":"page"},{"location":"documenter/#[Literate.notebook](@ref):-1","page":"6. Interaction with Documenter.jl","title":"Literate.notebook:","text":"","category":"section"},{"location":"documenter/#","page":"6. Interaction with Documenter.jl","title":"6. Interaction with Documenter.jl","text":"Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.\nDocumenter style markdown math\n```math\n\\int f dx\n```\nis replaced with notebook-compatible\n$$\n\\int f dx\n$$","category":"page"},{"location":"documenter/#[Literate.script](@ref):-1","page":"6. Interaction with Documenter.jl","title":"Literate.script:","text":"","category":"section"},{"location":"documenter/#","page":"6. Interaction with Documenter.jl","title":"6. Interaction with Documenter.jl","text":"Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.","category":"page"},{"location":"customprocessing/#Custom-pre-and-post-processing-1","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"","category":"section"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. 
Custom pre- and post-processing","text":"Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default \"transformation\" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.","category":"page"},{"location":"customprocessing/#Example:-Adding-current-date-1","page":"5. Custom pre- and post-processing","title":"Example: Adding current date","text":"","category":"section"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"# # Example\n# This example was generated DATEOFTODAY\n\nx = 1 // 3","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, lets define the preprocess function, for example","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"function update_date(content)\n content = replace(content, \"DATEOFTODAY\" => Date(now()))\n return content\nend","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"which would replace every occurrence of \"DATEOFTODAY\" with the current date. We would now simply give this function to the generator, for example:","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. 
Custom pre- and post-processing","text":"Literate.markdown(\"input.jl\", \"outputdir\"; preprocess = update_date)","category":"page"},{"location":"customprocessing/#Example:-Replacing-include-calls-with-included-code-1","page":"5. Custom pre- and post-processing","title":"Example: Replacing include calls with included code","text":"","category":"section"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl following the format of Literate","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"# # Replace includes\n# This is an example to replace `include` calls with the actual file content.\n\ninclude(\"file1.jl\")\n\n# Cool, we just saw the result of the above code snippet. Here is one more:\n\ninclude(\"file2.jl\")","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"function replace_includes(str)\n\n included = [\"file1.jl\", \"file2.jl\"]\n\n # Here the path loads the files from their proper directory,\n # which may not be the directory of the `examples.jl` file!\n path = \"directory/to/example/files/\"\n\n for ex in included\n content = read(path*ex, String)\n str = replace(str, \"include(\\\"$(ex)\\\")\" => content)\n end\n return str\nend","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"(of course replace included with your respective files)","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Finally, you simply pass this function to e.g. Literate.markdown as","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"Literate.markdown(\"examples.jl\", \"path/to/save/markdown\";\n name = \"markdown_file_name\", preprocess = replace_includes)","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. 
Custom pre- and post-processing","text":"and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!","category":"page"},{"location":"customprocessing/#","page":"5. Custom pre- and post-processing","title":"5. Custom pre- and post-processing","text":"This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by literate via this make.jl file to generate the markdown and code cells of the documentation.","category":"page"},{"location":"outputformats/#Output-Formats-1","page":"4. Output Formats","title":"4. Output Formats","text":"","category":"section"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"When the source is parsed, and have been processed it is time to render the output. We will consider the following source snippet:","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"import Markdown\nMarkdown.parse(\"```julia\\n\" * rstrip(read(\"outputformats.jl\", String)) * \"\\n```\")","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"and see how this is rendered in each of the output formats.","category":"page"},{"location":"outputformats/#Markdown-Output-1","page":"4. Output Formats","title":"4.1. Markdown Output","text":"","category":"section"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"The (default) markdown output of the source snippet above is as follows","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"import Markdown\nfile = joinpath(@__DIR__, \"../src/generated/name.md\")\nstr = \"````markdown\\n\" * rstrip(read(file, String)) * \"\\n````\"\nrm(file)\nMarkdown.parse(str)","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block have been added, that sets the EditURL variable. This is used by Documenter to redirect the \"Edit on GitHub\" link for the page, see Interaction with Documenter.","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"Some of the output rendering can be controlled with keyword arguments to Literate.markdown:","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"Literate.markdown","category":"page"},{"location":"outputformats/#Literate.markdown","page":"4. Output Formats","title":"Literate.markdown","text":"Literate.markdown(inputfile, outputdir; kwargs...)\n\nGenerate a markdown file from inputfile and write the result to the directory outputdir.\n\nKeyword arguments:\n\nname: name of the output file, excluding .md; name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.\npreprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.\ndocumenter: boolean that tells if the output is intended to use with Documenter.jl. Defaults to true. 
See the manual section on Interaction with Documenter.\ncodefence: A Pair of opening and closing code fence. Defaults to\n\"```@example $(name)\" => \"```\"\nif documenter = true and\n\"```julia\" => \"```\"\nif documenter = false.\ncredit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.\n\n\n\n\n\n","category":"function"},{"location":"outputformats/#Notebook-Output-1","page":"4. Output Formats","title":"4.2. Notebook Output","text":"","category":"section"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"The (default) notebook output of the source snippet can be seen here: notebook.ipynb.","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"Literate.notebook","category":"page"},{"location":"outputformats/#Literate.notebook","page":"4. Output Formats","title":"Literate.notebook","text":"Literate.notebook(inputfile, outputdir; kwargs...)\n\nGenerate a notebook from inputfile and write the result to outputdir.\n\nKeyword arguments:\n\nname: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.\npreprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.\nexecute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.\ndocumenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the the manual section on Interaction with Documenter.\ncredit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.\n\n\n\n\n\n","category":"function"},{"location":"outputformats/#Notebook-metadata-1","page":"4. Output Formats","title":"Notebook metadata","text":"","category":"section"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"%% optional ignored text [type] {optional metadata JSON}","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.","category":"page"},{"location":"outputformats/#Script-Output-1","page":"4. Output Formats","title":"4.3. 
Script Output","text":"","category":"section"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"The (default) script output of the source snippet above is as follows","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"import Markdown\nfile = joinpath(@__DIR__, \"../src/generated/outputformats.jl\")\nstr = \"```julia\\n\" * rstrip(read(file, String)) * \"\\n```\"\nrm(file)\nMarkdown.parse(str)","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:","category":"page"},{"location":"outputformats/#","page":"4. Output Formats","title":"4. Output Formats","text":"Literate.script","category":"page"},{"location":"outputformats/#Literate.script","page":"4. Output Formats","title":"Literate.script","text":"Literate.script(inputfile, outputdir; kwargs...)\n\nGenerate a plain script file from inputfile and write the result to outputdir.\n\nKeyword arguments:\n\nname: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.\npreprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.\ndocumenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the the manual section on Interaction with Documenter.\nkeep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.\ncredit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.\n\n\n\n\n\n","category":"function"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"EditURL = \"https://github.com/fredrikekre/Literate.jl/blob/master/examples/example.jl\"","category":"page"},{"location":"generated/example/#**7.**-Example-1","page":"7. Example","title":"7. Example","text":"","category":"section"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"(Image: ) (Image: )","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter have generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.","category":"page"},{"location":"generated/example/#Basic-syntax-1","page":"7. Example","title":"Basic syntax","text":"","category":"section"},{"location":"generated/example/#","page":"7. Example","title":"7. 
Example","text":"The basic syntax for Literate is simple, lines starting with # is interpreted as markdown, and all the other lines are interpreted as code. Here is some code:","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"x = 1//3\ny = 2//5","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"This line starts with #md and is thus only visible in the markdown output.","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"x + y","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"x * y","category":"page"},{"location":"generated/example/#Output-Capturing-1","page":"7. Example","title":"Output Capturing","text":"","category":"section"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"note: Note\nNote that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of \"This string is printed to stdout.\" is hidden.","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"function foo()\n println(\"This string is printed to stdout.\")\n return [1, 2, 3, 4]\nend\n\nfoo()","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"using Plots\nx = range(0, stop=6π, length=1000)\ny1 = sin.(x)\ny2 = cos.(x)\nplot(x, [y1, y2])","category":"page"},{"location":"generated/example/#Custom-processing-1","page":"7. Example","title":"Custom processing","text":"","category":"section"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"It is possible to give Literate custom pre- and post-processing functions. 
For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"z = 1.0 + 2.0im","category":"page"},{"location":"generated/example/#documenter-interaction-1","page":"7. Example","title":"Documenter.jl interaction","text":"","category":"section"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"int_Omega nabla v cdot nabla u mathrmdOmega = int_Omega v f mathrmdOmega","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"using Documenters math syntax. Documenters syntax is automatically changed to \\begin{equation} ... \\end{equation} in the notebook output to display correctly.","category":"page"},{"location":"generated/example/#","page":"7. Example","title":"7. Example","text":"This page was generated using Literate.jl.","category":"page"},{"location":"fileformat/#**2.**-File-Format-1","page":"2. File Format","title":"2. File Format","text":"","category":"section"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"The source file format for Literate is a regular, commented, julia (.jl) scripts. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up do date with other changes in your package.","category":"page"},{"location":"fileformat/#Syntax-1","page":"2. File Format","title":"2.1. Syntax","text":"","category":"section"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"The basic syntax is simple:","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"lines starting with # are treated as markdown,\nall other lines are treated as julia code.","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines is treated as markdown we can not use that for regular julia comments, for this you can instead use ##, which will render as # in the output.","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"Lets look at a simple example:","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"# # Rational numbers\n#\n# In julia rational numbers can be constructed with the `//` operator.\n# Lets define two rational numbers, `x` and `y`:\n\n## Define variable x and y\nx = 1//3\ny = 2//5\n\n# When adding `x` and `y` together we obtain a new rational number:\n\nz = x + y","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. 
We note a couple of things:","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).\nThe script is \"self-explanatory\", i.e. the markdown lines work as comments and thus serve as good documentation on their own.","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"For simple use, this is all you need to know. The following additional special syntax can also be used:","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,\n#- (#+): tag to manually control chunk-splits, see Custom control over chunk splits.","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"There are also some default convenience replacements that will always be performed, see Default Replacements.","category":"page"},{"location":"fileformat/#Filtering-Lines-1","page":"2. File Format","title":"2.2. Filtering Lines","text":"","category":"section"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of \"tokens\" that can be used to mark the purpose of certain lines:","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"#md: line exclusive to markdown output,\n#nb: line exclusive to notebook output,\n#jl: line exclusive to script output,\n#src: line exclusive to the source code and thus filtered out unconditionally.","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"Lines starting with one of these tokens are filtered out in the preprocessing step.","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"tip: Tip\nThe tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"#md # ```@docs\n#md # Literate.markdown\n#md # Literate.notebook\n#md # Literate.script\n#md # ```","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we simply remove the leading #md from the lines. Beware that the space after the tag is also removed.","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. 
For example, if the source file is included in the test suite, we might want to add a @test at the end without this showing up in the outputs:","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"using Test #src\n@test result == expected_result #src","category":"page"},{"location":"fileformat/#Default-Replacements-1","page":"2. File Format","title":"2.3. Default Replacements","text":"","category":"section"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"The following convenience \"macros\" are always expanded:","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"@__NAME__\nexpands to the name keyword argument to Literate.markdown, Literate.notebook and Literate.script (defaults to the filename of the input file).\n@__REPO_ROOT_URL__\nexpands to https://github.com/$(ENV[\"TRAVIS_REPO_SLUG\"])/blob/master and is convenient to use when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module.\n@__NBVIEWER_ROOT_URL__\nexpands to https://nbviewer.jupyter.org/github/$(ENV[\"TRAVIS_REPO_SLUG\"])/blob/gh-pages/$(folder) where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.\n@__BINDER_ROOT_URL__\nexpands to https://mybinder.org/v2/gh/$(ENV[\"TRAVIS_REPO_SLUG\"])/$(branch)?filepath=$(folder) where branch/folder is the branch and folder where Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder badge in e.g. the HTML output you can use:\n[![Binder](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/path/to/notebook.ipynb)","category":"page"},{"location":"fileformat/#","page":"2. File Format","title":"2. File Format","text":"note: Note\n@__REPO_ROOT_URL__ and @__NBVIEWER_ROOT_URL__ work for documentation built with DocumentationGenerator.jl, but @__BINDER_ROOT_URL__ does not, since mybinder.org requires the files to be located inside a git repository.","category":"page"},{"location":"#**1.**-Introduction-1","page":"1. Introduction","title":"1. Introduction","text":"","category":"section"},{"location":"#","page":"1. Introduction","title":"1. Introduction","text":"Welcome to the documentation for Literate – a simplistic package for Literate Programming.","category":"page"},{"location":"#What?-1","page":"1. Introduction","title":"What?","text":"","category":"section"},{"location":"#","page":"1. Introduction","title":"1. Introduction","text":"Literate is a package that generates markdown pages (for e.g. Documenter.jl) and Jupyter notebooks from the same source file. There is also an option to \"clean\" the source of all metadata and produce a pure Julia script.","category":"page"},{"location":"#","page":"1. Introduction","title":"1. Introduction","text":"The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is write a commented julia script!","category":"page"},{"location":"#","page":"1. Introduction","title":"1. Introduction","text":"The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:","category":"page"},{"location":"#","page":"1. Introduction","title":"1. 
Introduction","text":"Literate.markdown: generates a markdown file\nLiterate.notebook: generates an (optionally executed) notebook\nLiterate.script: generates a plain script file, removing all metadata and special syntax.","category":"page"},{"location":"#Why?-1","page":"1. Introduction","title":"Why?","text":"","category":"section"},{"location":"#","page":"1. Introduction","title":"1. Introduction","text":"Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people wants to RTFM, others want to explore the package interactively in, for example, a notebook, and some people wants to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.","category":"page"},{"location":"#","page":"1. Introduction","title":"1. Introduction","text":"It is quite common that packages have \"example notebooks\" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very \"rich\" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.","category":"page"},{"location":"#","page":"1. Introduction","title":"1. Introduction","text":"It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.","category":"page"},{"location":"#","page":"1. Introduction","title":"1. Introduction","text":"Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.","category":"page"},{"location":"generated/name/#","page":"Rational numbers","title":"Rational numbers","text":"EditURL = \"https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl\"","category":"page"},{"location":"generated/name/#Rational-numbers-1","page":"Rational numbers","title":"Rational numbers","text":"","category":"section"},{"location":"generated/name/#","page":"Rational numbers","title":"Rational numbers","text":"In julia rational numbers can be constructed with the // operator. Lets define two rational numbers, x and y:","category":"page"},{"location":"generated/name/#","page":"Rational numbers","title":"Rational numbers","text":"x = 1//3","category":"page"},{"location":"generated/name/#","page":"Rational numbers","title":"Rational numbers","text":"y = 2//5","category":"page"},{"location":"generated/name/#","page":"Rational numbers","title":"Rational numbers","text":"When adding x and y together we obtain a new rational number:","category":"page"},{"location":"generated/name/#","page":"Rational numbers","title":"Rational numbers","text":"z = x + y","category":"page"}] }
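To tie the three generators together, here is a hedged sketch of how they might be called from a docs build script. The file paths and the name value are hypothetical; only keyword arguments documented in the docstrings above are used:

```julia
using Literate

# Hypothetical paths: a Literate source file and the Documenter source directory.
inputfile = joinpath(@__DIR__, "src", "example.jl")
outputdir = joinpath(@__DIR__, "src", "generated")

# Markdown output for Documenter (documenter = true is the default).
Literate.markdown(inputfile, outputdir; name = "example")

# Notebook output; execute defaults to true, so output cells are populated.
Literate.notebook(inputfile, outputdir; name = "example")

# Plain script output, keeping markdown lines as comments.
Literate.script(inputfile, outputdir; name = "example", keep_comments = true)
```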