From 3059d17afd6156cc1b91a72263076b00b55234a3 Mon Sep 17 00:00:00 2001
From: Dawid Wysocki
Date: Sat, 17 Sep 2022 13:53:47 +0200
Subject: [PATCH] Docs: fix formatting on code samples

---
 Docs/Primer_01.md   |  4 ++--
 Docs/Tutorial_01.md |  2 +-
 Docs/Tutorial_02.md |  2 +-
 Docs/Tutorial_03.md | 14 +++++++-------
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/Docs/Primer_01.md b/Docs/Primer_01.md
index b36196c38..4bd35a764 100644
--- a/Docs/Primer_01.md
+++ b/Docs/Primer_01.md
@@ -81,7 +81,7 @@ If you don't know about Parallel.For it is a function that provides a really eas
 that takes an index. Then the function is called from some thread with an index. There are no guarantees
 about what core an index is run on, or what order the threads are run, but you get a **very** simple
 interface for running parallel functions.
 
-```C#
+```c#
 using System;
 using System.Threading.Tasks;
@@ -103,7 +103,7 @@ public static class Program
 }
 ```
 Running the same program as a kernel is **very** similar:
-```C#
+```c#
 using ILGPU;
 using ILGPU.Runtime;
 using ILGPU.Runtime.CPU;
diff --git a/Docs/Tutorial_01.md b/Docs/Tutorial_01.md
index 00303fe85..8a81993d3 100644
--- a/Docs/Tutorial_01.md
+++ b/Docs/Tutorial_01.md
@@ -180,7 +180,7 @@ For a single device: context.GetPreferredDevice(preferCPU);
 
 For multiple devices: context.GetPreferredDevices(preferCPU, matchingDevicesOnly);
 
-```C#
+```c#
 using System;
 using ILGPU;
 using ILGPU.Runtime;
diff --git a/Docs/Tutorial_02.md b/Docs/Tutorial_02.md
index 332c4c554..8d86e7a97 100644
--- a/Docs/Tutorial_02.md
+++ b/Docs/Tutorial_02.md
@@ -70,7 +70,7 @@ All device side memory management happens in the host code through the MemoryBuf
 The sample goes over the basics of managing memory via MemoryBuffers.
 There will be far more in depth memory management in the later tutorials.
 
-```C#
+```c#
 using System;
 using ILGPU;
 
diff --git a/Docs/Tutorial_03.md b/Docs/Tutorial_03.md
index 902cb6f9f..836f28ad6 100644
--- a/Docs/Tutorial_03.md
+++ b/Docs/Tutorial_03.md
@@ -5,7 +5,7 @@
 In this tutorial we actually do work on the GPU!
 I think the easiest way to explain this is taking the simplest example I can think of and decomposing it.
 This is a modified version of the sample from Primer 01.
-```C#
+```c#
 using ILGPU;
 using ILGPU.Runtime;
 using System;
@@ -57,7 +57,7 @@
 ## The following parts already have detailed explainations in other tutorials:
 
 #### [Context and an accelerator.](Tutorial_01.md)
-```C#
+```c#
 Context context = Context.CreateDefault();
 Accelerator accelerator = context.GetPreferredDevice(preferCPU: false)
                           .CreateAccelerator(context);
@@ -65,14 +65,14 @@ Accelerator accelerator = context.GetPreferredDevice(preferCPU: false)
 ```
 
 Creates an Accelerator using GetPreferredDevice to hopefully get the "best" device.
 
 #### [Some kind of data and output device memory](Tutorial_02.md)
-```C#
+```c#
 MemoryBuffer1D<int, Stride1D.Dense> deviceData = accelerator.Allocate1D(new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 });
 MemoryBuffer1D<int, Stride1D.Dense> deviceOutput = accelerator.Allocate1D<int>(10_000);
 ```
 
 Loads some example data into the device memory, using dense striding.
-```C#
+```c#
 int[] hostOutput = deviceOutput.GetAsArray1D();
 ```
@@ -82,7 +82,7 @@ After we run the kernel we need to get the data as host memory to use it in CPU
 Ok now we get to the juicy bits.
 #### The kernel function definition.
-```C#
+```c#
 static void Kernel(Index1D i, ArrayView<int> data, ArrayView<int> output)
 {
     output[i] = data[i % data.Length];
 }
@@ -122,7 +122,7 @@ try to avoid branches1 and code that would change in different kernel to avoid
 is threads that are running different instructions, this is called divergence.
 
 #### The loaded instance of a kernel.
-```C#
+```c#
 Action<Index1D, ArrayView<int>, ArrayView<int>> loadedKernel =
     accelerator.LoadAutoGroupedStreamKernel<Index1D, ArrayView<int>, ArrayView<int>>(Kernel);
 ```
@@ -136,7 +136,7 @@ explicitly compile it.
 If you are having issues compiling code try testing with the CPUAccelerator.
 
 #### The actual kernel call and device synchronize.
-```C#
+```c#
 loadedKernel((int)deviceOutput.Length, deviceData.View, deviceOutput.View);
 accelerator.Synchronize();
 ```