Fix formatting on code samples in docs #851

Merged · 1 commit · Sep 20, 2022
Docs/Primer_01.md: 4 changes (2 additions, 2 deletions)
@@ -81,7 +81,7 @@ If you don't know about Parallel.For it is a function that provides a really eas
that takes an index. Then the function is called from some thread with an index. There are no guarantees
about what core an index is run on, or what order the threads are run, but you get a **very** simple
interface for running parallel functions.
-```C#
+```c#
using System;
using System.Threading.Tasks;

@@ -103,7 +103,7 @@ public static class Program
}
```
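For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of Parallel.For (class and variable names are illustrative; this is not the primer's full sample, which the diff truncates above):

```c#
using System;
using System.Threading.Tasks;

public static class ParallelForSketch
{
    public static void Main()
    {
        int[] data = new int[10];

        // The lambda is the "function that takes an index"; the runtime decides
        // which thread (and core) runs each index, in no guaranteed order.
        Parallel.For(0, data.Length, i =>
        {
            data[i] = i * i;
        });

        Console.WriteLine(string.Join(", ", data));
    }
}
```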
Running the same program as a kernel is **very** similar:
-```C#
+```c#
using ILGPU;
using ILGPU.Runtime;
using ILGPU.Runtime.CPU;
Docs/Tutorial_01.md: 2 changes (1 addition, 1 deletion)
@@ -180,7 +180,7 @@ For a single device: context.GetPreferredDevice(preferCPU);

For multiple devices: context.GetPreferredDevices(preferCPU, matchingDevicesOnly);
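A minimal sketch of the single-device call (assuming the ILGPU 1.x API used throughout these docs; the class and variable names here are illustrative) might look like this, with the tutorial's full sample following below:

```c#
using System;
using ILGPU;
using ILGPU.Runtime;

public static class DeviceSelectionSketch
{
    public static void Main()
    {
        using Context context = Context.CreateDefault();

        // Ask for the "best" device (GPU if available), then build an accelerator from it.
        using Accelerator accelerator = context
            .GetPreferredDevice(preferCPU: false)
            .CreateAccelerator(context);

        Console.WriteLine($"Selected accelerator: {accelerator.Name}");
    }
}
```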

-```C#
+```c#
using System;
using ILGPU;
using ILGPU.Runtime;
Docs/Tutorial_02.md: 2 changes (1 addition, 1 deletion)
@@ -70,7 +70,7 @@ All device side memory management happens in the host code through the MemoryBuffer
The sample goes over the basics of managing memory via MemoryBuffers. There will be far more
in-depth memory management in the later tutorials.
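A minimal sketch of that host-to-device round trip (using only the Allocate1D and GetAsArray1D calls shown in these tutorials; names are illustrative), ahead of the tutorial's full sample below:

```c#
using System;
using ILGPU;
using ILGPU.Runtime;

public static class MemoryBufferSketch
{
    public static void Main()
    {
        using Context context = Context.CreateDefault();
        using Accelerator accelerator = context
            .GetPreferredDevice(preferCPU: false)
            .CreateAccelerator(context);

        // Host -> device: Allocate1D copies the initial host array into device memory.
        using MemoryBuffer1D<int, Stride1D.Dense> deviceData =
            accelerator.Allocate1D(new int[] { 0, 1, 2, 3, 4 });

        // Device-side buffer for kernel output, allocated by element count.
        using MemoryBuffer1D<int, Stride1D.Dense> deviceOutput =
            accelerator.Allocate1D<int>(5);

        // Device -> host: copy a buffer back as an ordinary array.
        int[] hostData = deviceData.GetAsArray1D();
        Console.WriteLine(string.Join(", ", hostData));
    }
}
```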

-```C#
+```c#
using System;

using ILGPU;
Docs/Tutorial_03.md: 14 changes (7 additions, 7 deletions)
@@ -5,7 +5,7 @@ In this tutorial we actually do work on the GPU!
I think the easiest way to explain this is by taking the simplest example I can think of and decomposing it.

This is a modified version of the sample from Primer 01.
-```C#
+```c#
using ILGPU;
using ILGPU.Runtime;
using System;
@@ -57,22 +57,22 @@ public static class Program
## The following parts already have detailed explanations in other tutorials:

#### [Context and an accelerator.](Tutorial_01.md)
-```C#
+```c#
Context context = Context.CreateDefault();
Accelerator accelerator = context.GetPreferredDevice(preferCPU: false)
.CreateAccelerator(context);
```
Creates an Accelerator using GetPreferredDevice to hopefully get the "best" device.

#### [Some kind of data and output device memory](Tutorial_02.md)
-```C#
+```c#
MemoryBuffer1D<int, Stride1D.Dense> deviceData = accelerator.Allocate1D(new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 });
MemoryBuffer1D<int, Stride1D.Dense> deviceOutput = accelerator.Allocate1D<int>(10_000);
```

Loads some example data into the device memory, using dense striding (elements stored contiguously, one after another).

-```C#
+```c#
int[] hostOutput = deviceOutput.GetAsArray1D();
```

@@ -82,7 +82,7 @@ After we run the kernel we need to get the data as host memory to use it in CPU
Ok now we get to the juicy bits.

#### The kernel function definition.
-```C#
+```c#
static void Kernel(Index1D i, ArrayView<int> data, ArrayView<int> output)
{
output[i] = data[i % data.Length];
@@ -122,7 +122,7 @@ try to avoid branches<sup>1</sup> and code that would change in different kernel
to avoid is threads that are running different instructions; this is called divergence.
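As a hypothetical side-by-side sketch (not from the tutorial) of what divergence looks like in kernel code, using the same kernel signature as above:

```c#
using ILGPU;

public static class DivergenceSketch
{
    // Branching version: threads whose data differs take different paths, so threads
    // running together may diverge onto different instructions.
    static void BranchingKernel(Index1D i, ArrayView<int> data, ArrayView<int> output)
    {
        if (data[i] % 2 == 0)
            output[i] = data[i] * 2;
        else
            output[i] = data[i] + 1;
    }

    // Arithmetic version: every thread executes the same instructions regardless of its data.
    static void BranchlessKernel(Index1D i, ArrayView<int> data, ArrayView<int> output)
    {
        int isEven = 1 - (data[i] & 1);             // 1 when data[i] is even, 0 when odd
        output[i] = isEven * (data[i] * 2)
                  + (1 - isEven) * (data[i] + 1);   // select a result without branching
    }
}
```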

#### The loaded instance of a kernel.
-```C#
+```c#
Action<Index1D, ArrayView<int>, ArrayView<int>> loadedKernel =
accelerator.LoadAutoGroupedStreamKernel<Index1D, ArrayView<int>, ArrayView<int>>(Kernel);
```
@@ -136,7 +136,7 @@ explicitly compile it.
If you are having issues compiling code, try testing with the CPUAccelerator.
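A hedged sketch of that debugging fallback, using only the device-selection call shown above (preferring the CPU device yields a CPUAccelerator, where ordinary .NET debugging works):

```c#
using ILGPU;
using ILGPU.Runtime;

public static class CpuDebugSketch
{
    public static void Main()
    {
        using Context context = Context.CreateDefault();

        // Prefer the CPU device so kernels run on the CPUAccelerator,
        // which is far easier to step through than device code.
        using Accelerator accelerator = context
            .GetPreferredDevice(preferCPU: true)
            .CreateAccelerator(context);
    }
}
```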

#### The actual kernel call and device synchronize.
-```C#
+```c#
loadedKernel((int)deviceOutput.Length, deviceData.View, deviceOutput.View);
accelerator.Synchronize();
```
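Putting the walkthrough together, a minimal end-to-end sketch assembled from the snippets above (class and variable names are illustrative; the tutorial's actual sample may differ in details):

```c#
using System;
using ILGPU;
using ILGPU.Runtime;

public static class KernelWalkthroughSketch
{
    // The kernel from the walkthrough: one thread per output element.
    static void Kernel(Index1D i, ArrayView<int> data, ArrayView<int> output)
    {
        output[i] = data[i % data.Length];
    }

    public static void Main()
    {
        using Context context = Context.CreateDefault();
        using Accelerator accelerator = context
            .GetPreferredDevice(preferCPU: false)
            .CreateAccelerator(context);

        using MemoryBuffer1D<int, Stride1D.Dense> deviceData =
            accelerator.Allocate1D(new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 });
        using MemoryBuffer1D<int, Stride1D.Dense> deviceOutput =
            accelerator.Allocate1D<int>(10_000);

        // Compile and load the kernel for this accelerator.
        Action<Index1D, ArrayView<int>, ArrayView<int>> loadedKernel =
            accelerator.LoadAutoGroupedStreamKernel<Index1D, ArrayView<int>, ArrayView<int>>(Kernel);

        // Launch one kernel instance per output element, then wait for completion.
        loadedKernel((int)deviceOutput.Length, deviceData.View, deviceOutput.View);
        accelerator.Synchronize();

        // Copy the result back to host memory for CPU-side use.
        int[] hostOutput = deviceOutput.GetAsArray1D();
        Console.WriteLine(hostOutput[9_999]);
    }
}
```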