mod: Beginner Guide: rewrote the gamma correction section #349

30 changes: 8 additions & 22 deletions src/content/docs/current/Guides/Your First Shader/0_intro.mdx
@@ -24,7 +24,7 @@ This tutorial is based on one written by [saada2006](https://saada2006.github.io

You will need:
- A suitable text editor for modifying shader code. One popular choice is [Visual Studio Code](https://code.visualstudio.com/), but any program that can edit text files will do; [Notepad++](https://notepad-plus-plus.org/) is a simpler alternative.
- A computer capable of running OpenGL 3.3. If your computer was manufactured in the last 15 years it likely supports OpenGL 3.3. Note that whilst OpenGL works on macOS, Apple will not update the drivers to support any version past OpenGL 4.1 and some things may not work correctly.
- A willingness to learn. Shaders are hard, and you can't just pick them up overnight. Do not expect to complete this tutorial and become the next Sonic Ether.
- An instance of Minecraft with Iris, Optifine, or Oculus installed. Since this tutorial is in the Iris documentation, it is assumed you are using Iris.

@@ -35,32 +35,18 @@ In Iris, it is also useful to enable debug mode. You can do this by pressing <kb

## OpenGL and GLSL

Minecraft is built using OpenGL, a high-level graphics API. OpenGL itself is not a piece of software; rather, it is a specification for creating computer graphics that hardware vendors are responsible for implementing. OpenGL uses GLSL (the OpenGL Shading Language), which is the language Minecraft shader packs are written in. GLSL is syntactically similar to C but adds many extra features specifically designed for working with graphics.

Old versions of Minecraft used an ancient version of OpenGL: OpenGL 2.0. For this reason, most older shader packs were written using GLSL version 120. Since Minecraft 1.17, however, the game uses OpenGL 3.2, which targets GLSL version 150. Shader loader mods like Iris and Optifine allow the use of any OpenGL/GLSL version the user's hardware supports. Since OpenGL 3.3 requires 'DX10 class' hardware, any computer released in the last decade should support it, so this is what we will be using. OpenGL 3.3 uses GLSL version 330.
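To give a feel for the syntax, here is a small illustrative GLSL function (the names are made up for this example, not taken from any real pack):

```glsl
// GLSL looks like C, but has built-in vector types and graphics helpers.
// vec3 holds three floats, commonly used for RGB colors or 3D positions.
vec3 applyTint(vec3 color, vec3 tint, float strength) {
    // mix() is a GLSL built-in that linearly interpolates between two values
    return mix(color, color * tint, strength);
}
```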

For the next part it is important to note that since most of the game's development was done on OpenGL 2.0, the rendering pipeline the game uses does not take advantage of modern hardware features.

## Rendering the blocks

Now with that out of the way, we can focus on how Minecraft actually does its rendering. Minecraft is a voxel game, so it does not follow the rendering style present in most games. First of all, Minecraft has to render a huge number of blocks of many different types. Issuing a separate draw call for each block would be terrible for performance, so Minecraft batches vertices into chunks, making each chunk its own draw call. To texture the blocks, Minecraft uses a texture atlas.

Lighting in Minecraft is a bit different from how it is done in other games. Minecraft needs to support an arbitrary number of light sources, with the features of old OpenGL versions, and have decent performance on slow hardware like iGPUs or the GT 710. There also needs to be occlusion detection for the lights; that is, a light behind a wall must not light up what is in front of the wall. Doing this the “normal” way would require storing all lights in a texture and keeping a texture atlas of shadow maps, one per light. This doesn't support area lighting, so lighting from blocks like glowstone would look bad up close, and it would be insanely costly. Imagine how slow rendering the nether would be, since every lava block would need to be processed. Minecraft needs a different approach.

Some of you who play Minecraft will know that each block has a lighting level, which comes from both torches and how exposed the block is to the sky. Minecraft reuses this information for lighting the blocks. Each vertex has a `vec2` attribute known as the “lightmap coordinates”. The x value represents lighting from blocks like torches and glowstone, while the y value represents how much the vertex is exposed to the sky. These values range from 0 to 15 in older versions of Minecraft, but can reach the 200s in newer versions.
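As a sketch of how this attribute reaches a shader: in a gbuffers vertex shader, the lightmap coordinate arrives as a legacy texture coordinate, following the same pattern this tutorial later uses for texture coordinates (the variable name `lmcoord` is just a common convention):

```glsl
out vec2 lmcoord;

void main() {
    gl_Position = ftransform();
    // gl_TextureMatrix[1] remaps the raw lightmap values into the
    // [0, 1] range suitable for sampling a texture.
    lmcoord = (gl_TextureMatrix[1] * gl_MultiTexCoord1).xy;
}
```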

The lightmap alone is not enough to light the block. It somehow has to be converted to a lighting color, which is then multiplied by the block color to obtain the final color that gets displayed on your screen. Minecraft by default uses the lightmap coordinates (after doing math to move them to the [0, 1] range) as texture coordinates to look up a lighting color value from a lightmap texture in the fragment shader. The lighting color value gets multiplied by the block color and then displayed on your screen. See the [Optifine documentation on this](https://github.com/sp614x/optifine/blob/15ef31064323d7e1c5959ab8f9e8a260f0750124/OptiFineDoc/doc/shaders.txt#L263) for more details. We won't be using the lightmap coordinates to look up a color from the lightmap texture; instead, we will use them to calculate a light level.
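A minimal sketch of that vanilla-style lookup in a fragment shader might look like this (`lightmapTex` and the `in` variable names are illustrative, not the actual uniform names):

```glsl
uniform sampler2D lightmapTex;

in vec2 lmcoord;     // lightmap coordinates, remapped to [0, 1]
in vec4 blockColor;  // the block's base color

out vec4 fragColor;

void main() {
    // Look up the lighting color from the lightmap texture...
    vec3 lightColor = texture(lightmapTex, lmcoord).rgb;
    // ...and multiply it into the block color.
    fragColor = vec4(blockColor.rgb * lightColor, blockColor.a);
}
```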

## How Shaders Work

To understand how shaders work, let's look at the shader pipeline. One type of program you will work with a lot is the 'fullscreen pass'. These are passes which run for every pixel on the screen. They are the simplest form of shader program, and are very useful for post-processing effects, or anything which doesn't require information about what's not on screen.

Iris and Optifine also provide you with what are known as 'gbuffers' passes. There are different gbuffers passes for different things, here are a few examples:
- `gbuffers_terrain` - all solid terrain
- `gbuffers_water` - all translucent terrain
- `gbuffers_textured` - particles
- `gbuffers_entities` - entities
A "shader" is, by definition, any program that is executed on the GPU. For Minecraft, these shader programs (also known as passes) can be sorted into two distinct categories: composite passes, which execute for the entire screen, and gbuffers passes, which execute only for specific geometry. Each individual program is composed of several shader stages, two of which are required: a vertex shader, which executes once for each vertex of the geometry, and a fragment shader, which executes once for every pixel that covers the geometry. You can optionally include a compute, geometry, and/or tessellation stage for each pass.
> **Collaborator:** I wouldn't call them composite passes, rather fullscreen passes, and probably put both that and gbuffers in quotes

> **Author:** My language here was taken directly from the Iris Docs. Notably, there, they're called "composite-style" and "gbuffers-style" passes. I agree with both of your points, though.


These passes run for every vertex of every item rendered onscreen. This allows us to get information about blocks and entities, and store them in textures for later use in fullscreen passes.
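As an illustrative sketch (assuming Iris's `/* RENDERTARGETS */` directive and the `gtexture` atlas sampler), a gbuffers fragment shader writing data into a buffer for later passes could look like:

```glsl
uniform sampler2D gtexture; // the block texture atlas

in vec2 texcoord;

/* RENDERTARGETS: 0 */
layout(location = 0) out vec4 outColor0;

void main() {
    // Write the sampled albedo into a color buffer, where a later
    // fullscreen pass can read it back as a texture.
    outColor0 = texture(gtexture, texcoord);
}
```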

The final type of pass is the shadow pass, which runs before all other passes. This pass renders all terrain from the perspective of the sun/moon to a few buffers we can access later, known as the 'shadow maps'. The main shadow map stores how far away the closest thing it can see is. From this, we can check if something is further from the sun than the closest thing it can see, and if it is, it must be in shadow. We will cover this later on in the tutorial.
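The comparison described above can be sketched in a few lines (names like `shadowPos` and `bias` are placeholders; projecting a position into the shadow map's coordinate space is covered later):

```glsl
// How far away is the closest surface the sun can see here?
float closestDepth = texture(shadowtex0, shadowPos.xy).r;

// If this fragment is further from the sun than that surface,
// something is between it and the sun, so it is in shadow.
float shadow = (shadowPos.z > closestDepth + bias) ? 0.0 : 1.0;
```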
:::tip[CPU vs GPU]
Odds are, all your previous code has been executed on the CPU. Shaders are instead executed on the GPU, also known as a graphics card or video card. CPUs are great at performing complex sequential tasks, whereas GPUs are great at executing simple tasks in parallel. In fact, modern high-end graphics cards are capable of performing *trillions* of calculations per second! You can read more about the differences [here](https://www.intel.com/content/www/us/en/products/docs/processors/cpu-vs-gpu.html).
:::

For a full list of programs and what they do, see the [Iris Docs](https://shaders.properties/current/reference/programs/overview/).
28 changes: 15 additions & 13 deletions src/content/docs/current/Guides/Your First Shader/1_composite.mdx
@@ -1,16 +1,16 @@
---
title: Your First Effect
> **Collaborator:** Suggested change: `title: Your First Effect` → `title: Your First Post Processing Effect`

> **Author:** I didn't do this because of [screenshot]. That's on a 14-in laptop in full screen, which I think is how a decent amount of people will view the tutorial. "Your First Shader Effect" could be a good alternative, or you could just accept this.

description: Write a basic post processing effect in the `composite` pass.
sidebar:
label: Your First Effect
> **Collaborator:** Suggested change: `label: Your First Effect` → `label: Your First Post Processing Effect`

order: 2
---

:::caution[Warning]
This tutorial is still being developed. Some statements may be incorrect, and things may change in the future. Got any feedback? [Comment on the tracking issue](https://github.com/IrisShaders/DocsPage/issues/327).
:::

## Setting up the file structure
## Setting Up the File Structure
Minecraft shaders require a specific structure of files in the right places to load code. While it's important to understand this structure, to save time, we will be working with the Base 330 pack from shaderLABS. Download it from [here](https://github.com/shaderLABS/Base-330), and extract it into your `shaderpacks` folder. You should have the following structure.

```
@@ -34,7 +34,7 @@ It is always important to respect the license associated with code when you use

When you select the shader in the shader selection screen, you should not see any errors in the logs.

## The `composite` pass
## The `composite` Pass
For this shader, we will be using the first `composite` pass. This is a full screen pass which runs just after all gbuffers programs have rendered.

First, let's open `composite.vsh`. This is the *vertex shader* for the `composite` program. Since `composite` is a fullscreen pass, this actually just renders a single quad (a rectangular polygon) which exactly covers the screen. This means that your fullscreen passes are technically running on 3D geometry! Specifically, the vertex shader will run four times, once for each corner of this quad.
@@ -56,13 +56,13 @@ Let's analyse this.
#version 330 compatibility
```

This tells the shader what version of GLSL to use, as covered earlier.
Every shader's first line must be a version declaration. GLSL has two profiles: `compatibility` and `core`. Iris is better at patching the `compatibility` profile, so in the context of Minecraft there's no benefit to using `core`.

```glsl
out vec2 texcoord;
```

This is a variable declaration, but a special one. The `out` keyword means that the value will be passed to the fragment shader. The fragment shader can then have a corresponding `in` declaration which allows it to receive this value. In older versions of GLSL, both `in` and `out` were written as the `varying` keyword, and you will still sometimes hear these values called 'varyings'.
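The two sides of the pair look like this (an illustrative sketch, not code from the pack):

```glsl
// In the vertex shader: declare with 'out' and write a value.
out vec2 texcoord;

// In the fragment shader: the matching declaration uses 'in',
// with the same name and type, to receive the (interpolated) value.
in vec2 texcoord;
```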

:::note[Note]
Values passed from vertex shaders to fragment shaders are interpolated! This means that if one vertex has a value of 0, and another has a value of 1, then a pixel halfway between these two vertices will have a value of 0.5. This can be prevented by adding the `flat` keyword to the `in` and `out` declarations.
@@ -72,20 +72,23 @@
void main() {
```

The `main` function is the code that is run when the shader is invoked, just like in C.

```glsl
gl_Position = ftransform();
```

This transforms the position of the vertex from model space to clip space. The `ftransform` function is deprecated, but Iris patches it to the equivalent modern code. For more information, see the Iris docs on [coordinate spaces](/current/how-to/coordinate_spaces).

```glsl
texcoord = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;
```

This gives us the 'texture coordinate' of the current vertex. This is more commonly known as the 'UV', and it is used so that the fragment shader knows where on the screen it is. These texture coordinates range from (0, 0) at the bottom left of the texture to (1, 1) at the top right.

:::tip[Swizzling]
In the previous line of code, you might have noticed some weird syntax: `.xy`. This is an operation unique to shading languages known as *swizzling*. You can read more about swizzling [here](https://www.khronos.org/opengl/wiki/Data_Type_(GLSL)#Swizzling).
:::

Let's open `composite.fsh`. This is the *fragment shader*. It runs for every pixel on the screen.

@@ -139,18 +142,17 @@ color = texture(colortex0, texcoord);

This reads the value in `colortex0` at position `texcoord` and stores it in `color`. For more info see [the OpenGL docs](https://registry.khronos.org/OpenGL-Refpages/gl4/html/texture.xhtml).

## Making It Grayscale
A color is grayscale when its r, g, and b components all have the same value. We can compute this value by taking the dot product of the original color and `vec3(1.0/3.0)`. This is mathematically equivalent to multiplying each channel by 1/3 and summing the products, i.e. averaging the three channels. If we set all three channels of `color` to this value, our game will be converted to grayscale.

:::tip[What's a dot product?]
If you don't know what a dot product is, you're probably new to linear algebra as well. We recommend studying up on it (or at least learning about unfamiliar concepts throughout the chapters on your own) as from this point on everything you do will be in relation to it somehow. A great place to get started is [3Blue1Brown's series of tutorials](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab).
:::
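Concretely, with some example channel values, the dot product works out like this:

```glsl
// dot() multiplies the vectors component-wise and sums the results:
// dot(vec3(0.9, 0.6, 0.3), vec3(1.0/3.0))
//   = 0.9/3.0 + 0.6/3.0 + 0.3/3.0
//   = 0.3 + 0.2 + 0.1
//   = 0.6   // the average of the three channels
```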

So, after we get the value of `color`, we can do:
```glsl
float grayscale = dot(color.rgb, vec3(1.0 / 3.0));
color.rgb = vec3(grayscale);
```

Your screen should now look like this!