Enabled media fader for partially submerged clip volume. #99

Open

wants to merge 1 commit into master
Conversation

TrajansRow
Collaborator

Only applies to shader renderer.

@Hopper262
Member

I know I've been sitting on this forever, sorry about that. I was hoping to refactor the faders and was going to review this patch just before that work. Given my schedule, that refactor won't happen any time soon, so I'll make time to review this in December. I do appreciate the improvement and look forward to getting it in place!

@TrajansRow
Collaborator Author

While this patch works and is an improvement, I have since thought of a few reasons why you might not want to merge it.

For one, it looks jarring when partially submerged in water, because the transmission spectrum depicted by the surface texture (turquoise) doesn't match that of the subsurface media fader (which is unnaturally blue).

Secondly, the math blows up if the player uses a physics model that permits viewing straight up or down: in that orientation the near plane becomes parallel to the media plane, and the intersection of two parallel planes cannot be calculated.
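
To make the degenerate case concrete: the direction of the intersection line of two planes is the cross product of their normals, which vanishes when the planes are parallel, so any such routine needs an explicit guard. A minimal sketch, using hypothetical types rather than code from this PR:

```cpp
#include <cmath>
#include <optional>

// Hypothetical minimal vector type, not an Aleph One type.
struct Vec3 { float x, y, z; };
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 operator*(float s, Vec3 v) { return { s * v.x, s * v.y, s * v.z }; }
static Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

struct Plane { Vec3 n; float d; };   // points p satisfying dot(n, p) == d
struct Line  { Vec3 point, dir; };

// Plane/plane intersection.  Returns nothing when the normals are
// (anti)parallel -- the case hit when the view pitches straight up or down
// and the near plane becomes parallel to the horizontal media plane.
std::optional<Line> intersect(const Plane& a, const Plane& b)
{
    Vec3 u = cross(a.n, b.n);        // direction of the intersection line
    float len2 = dot(u, u);
    if (len2 < 1e-8f)
        return std::nullopt;         // parallel: no unique intersection line

    // Point on the line closest to the origin.
    Vec3 p = (1.0f / len2) * (a.d * cross(b.n, u) + b.d * cross(u, a.n));
    return Line{ p, u };
}
```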

Most importantly, I think it would be easier to solve this problem in the shader and not use faders for media at all. For that purpose, the math in this PR that calculates the media/znear intersection in device coordinates could be useful. If that's something worth pursuing, there are a couple of approaches I can think of, the simplest of which I prototyped earlier this year.
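
For reference, the device-coordinate calculation can be boiled down to something like the sketch below: given the camera height, pitch, vertical FOV, and near distance, it returns the NDC y of the waterline, which a shader could take as a uniform and tint only the fragments below it. This is a hedged sketch under those assumptions with hypothetical names, not the code from this PR; note that the straight-up/straight-down problem shows up here as cos(pitch) going to zero.

```cpp
#include <cmath>
#include <optional>

// Hypothetical helper: world z is up, the media surface is the horizontal
// plane z == media_z, and the projection is a symmetric perspective with
// vertical field of view fovy and near distance znear.
std::optional<float> media_waterline_ndc_y(float eye_z, float media_z,
                                           float pitch,   // radians, + is up
                                           float fovy,    // radians
                                           float znear)
{
    const float cos_pitch = std::cos(pitch);
    if (std::fabs(cos_pitch) < 1e-5f)
        return std::nullopt;   // looking straight up/down: planes are parallel

    // Height on the near plane (in view units) where world z equals media_z;
    // the near plane's centre sits at world height eye_z + znear * sin(pitch).
    const float vy = (media_z - eye_z - znear * std::sin(pitch)) / cos_pitch;

    // Normalise by the near plane's half-height to get device coordinates.
    const float half_height = znear * std::tan(0.5f * fovy);
    return vy / half_height;   // values outside [-1, 1] mean off-screen
}
```

A result outside [-1, 1] means the waterline is off-screen, so the frame is either entirely above or entirely below the media and no split is needed.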

@Hopper262
Member

Splitting the media out does seem like a good idea. I'd been planning to move all the faders into a shader pipeline: similar to bloom, we'd render to texture and write GLSL instead of the current limited OpenGL operations. We'd be able to fix the horrible blue then, among other issues, but your method would be more interesting for media specifically.
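
In outline, that pipeline is the standard render-to-texture pattern. A rough sketch (hypothetical names, assuming GLEW for the framebuffer entry points, not actual Aleph One code): draw the world into an offscreen texture, then run one full-screen GLSL pass that applies all active faders with arbitrary per-pixel math.

```cpp
#include <GL/glew.h>   // assumed loader for the FBO entry points

// Placeholders for existing pieces of the renderer; declarations only.
extern void draw_world();             // the current world-rendering path
extern void draw_fullscreen_quad();   // helper drawing a screen-covering quad
extern GLuint fader_program;          // compiled GLSL program with the fader math

static GLuint scene_tex = 0, scene_fbo = 0;

// Create an offscreen colour target the size of the window
// (depth attachment omitted for brevity).
void create_scene_target(int width, int height)
{
    glGenTextures(1, &scene_tex);
    glBindTexture(GL_TEXTURE_2D, scene_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &scene_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, scene_fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, scene_tex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void render_frame()
{
    // 1. Render the world into the offscreen texture instead of the screen.
    glBindFramebuffer(GL_FRAMEBUFFER, scene_fbo);
    draw_world();

    // 2. One full-screen pass: sample the scene texture and apply every
    //    active fader (including a per-pixel media tint) in GLSL.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glUseProgram(fader_program);
    glBindTexture(GL_TEXTURE_2D, scene_tex);
    draw_fullscreen_quad();
}
```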

Now that Apple has officially deprecated OpenGL, though, I'd like to prioritize moving to a different graphics system over GL-specific refactoring. We could go to Vulkan, but bgfx seems more friendly so that's my leading candidate. Both should work for iOS as well, hopefully making it easier to share code with that port. My current plan is that 1.4 will focus on that task.
