Rolling Scanline Simulation (future improvements) #16373
The current problem is that we don't -know- a good way to improve it that doesn't have fairly bad artifacting or other major issues of its own. I personally think the rolling scan feature, as it is now, will scare people off BFI, thinking it's an entirely useless/broken feature. But I didn't want to stand in the way of merging either, as it isn't my place, and as this code should not inhibit the existing full-frame BFI/shader sub-frame code paths from working as intended. The best mitigations we know of for this feature's issues are hiding the joint lines behind scanlines in CRT filters, and overlapping the rolling scan sections with brightness adjustment, which trades some of the tearing problem for horizontal strips of reduced motion blur reduction -- itself a pretty apparent visual artifact. Also, a front-end solution like this won't be aware of what shaders are in use, and the screen resolution and Hz being used will also change where those rolling scan joint lines land in the image. So any front-end code, or any shader specifically meant to be used in conjunction with this feature, would need to account for a LOT of different joint line possibilities. If anyone can provide a solution where the artifacting is minimal enough to compete with the existing full-frame BFI, which has zero inherent artifacting (other than strobing itself being a little annoying, obviously), I am all for it though. There are a few side benefits to the rolling scan method over full-frame BFI when/if it works well. This is where @mdrejhon would be very handy. :) |
For the record, I find a double ON to be much less obtrusive than a double OFF flicker. |
Did you mean this response for my last reply on the previous PR regarding the 120Hz BFI workaround? |
Yeah, I just put it here instead of there so we could close the lid on that one and continue discussion of improvements here. |
A sub-frame shader solution (to that 120Hz workaround) wouldn't be able to inject an 'extra' sub-frame like a driver solution could. But I still think it might be better to 'hide' a feature that purposefully injects noticeable, annoying artifacting in a shader rather than as a front-end option. So you'd maybe do something more like a (100-0)-(100-0)-(50-50)-(0-100)-(0-100) style phase shift on a framecount%(adjustable number of how long you want between phase shifts). And keep in mind framecount intentionally doesn't increment on sub-frames, or sub-frames would mess with anything older that looks at framecount but isn't sub-frame aware. The 50-50 transition frame might be a less noticeable/annoying transition than just a straight flip like 100-0-0-100, trading some of the very noticeable change in instantaneous average brightness for some transient motion blur -- still annoying, but maybe a -little- less distracting. |
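A minimal sketch of that phase-shift idea in C-style logic (names, structure, and the period constant are illustrative assumptions, not RetroArch code):

```c
/* Hypothetical sketch of the phase-shifted 120 Hz BFI idea above: two
 * sub-frames per 60 fps frame, with a 50-50 transition frame inserted
 * every PHASE_PERIOD frames instead of a hard 100-0 -> 0-100 flip.
 * Note: framecount increments per frame, NOT per sub-frame, matching
 * the behavior described above. */
#include <stdbool.h>

#define PHASE_PERIOD 600  /* frames between phase shifts (adjustable) */

/* Returns brightness in [0.0, 1.0] for a given frame and sub-frame (0 or 1). */
static float bfi_brightness(unsigned framecount, unsigned subframe)
{
    unsigned pos     = framecount % PHASE_PERIOD;
    bool     phase_b = (framecount / PHASE_PERIOD) & 1; /* flips each period */

    if (pos == PHASE_PERIOD - 1)
        return 0.5f;                               /* 50-50 transition frame */
    if (!phase_b)
        return (subframe == 0) ? 1.0f : 0.0f;      /* 100-0 phase */
    return (subframe == 0) ? 0.0f : 1.0f;          /* 0-100 phase */
}
```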
Hi Roc-Y, I presume this only happens when rolling scanline is turned on?

> I don't know why this causes wide black bands in the shader I developed, but I think if the Rolling Scanline Simulation feature only handled the native resolution (e.g. 256*244), then my shader would behave normally.
> 20240817_003458.jpg: https://github.com/user-attachments/assets/546e0f9c-5d53-4801-a4f1-ca496e18e89b
|
There are no black lines after turning off the shader. It seems that as long as the resolution is enlarged in the shader, there will be black lines. It has nothing to do with whether it is a CRT shader. |
BTW, in fast horizontal scrolling, there can be tearing artifacts with rolling-scan. You need motion sufficiently fast (about 8 retropixels/frame or faster, which produces 2-pixel offsets for a 4-segment sharp-boundary rolling scan). This can be fixed by using alphablend overlaps. However, gamma-correcting the overlaps so that all pixels emit the same number of photons is challenging. So is adding fadebehind effects (so that a short-shutter photo of the rolling scan looks more similar to a short-shutter photo of a CRT). And even LCD GtG distorts the alphablend overlaps. So alphablend overlaps work best on OLEDs of a known gamma (doing gamma correction, and disabling ABL). For LCD, sharp-boundary rolling scan is better (tolerating the tearing artifacts during fast platformers). Then again, using HDR ABL is wonderful, because you can convert SDR into HDR and use the 25% window size to make the rolling-scan strobe much brighter. This improves a lot if you use an 8-segment rolling scan (60fps at 480Hz OLED) to reduce the HDR window size per refresh cycle, allowing HDR OLED to have a much brighter picture during rolling BFI! Also, I have a TestUFO version of rolling scan BFI under development that simulates the behavior of a CRT beam more accurately (including the phosphor fadebehind effect). Related: #10757 |
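A small sketch of why overlap blending must be gamma-corrected, assuming a simple 2.2 power-law display gamma (a simplification; real displays need their measured curve):

```c
/* If two overlapping scan segments each emit gamma-space 0.5, the photon
 * total is 2 * 0.5^2.2 ~= 0.44, visibly dimmer than one full-brightness
 * pass. The correct "half the photons" value is computed in linear light: */
#include <math.h>

static float half_photons(float gamma_value)
{
    float linear = powf(gamma_value, 2.2f);   /* gamma -> linear light   */
    float half   = 0.5f * linear;             /* split photons per pass  */
    return powf(half, 1.0f / 2.2f);           /* linear -> gamma space   */
}
/* half_photons(1.0f) ~= 0.73: each overlapping pass should output ~73%
 * in gamma space so the photon sum satisfies the Talbot-Plateau average. */
```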
@songokukakaroto I've been working on subframe BFI shaders that you can try. There's a 240 Hz rolling scan one, but it's not great. As mdrejhon mentioned, the gamma issue means the fades aren't perfect. |
I have a surprise coming December 24, 2024 -- the world's most accurate CRT electron beam simulator shader, a "Temporal HLSL" complete enough to close BFIv3 (except for the beamrace part) if integrated into RetroArch. It looks better than this simulation. MIT license. @hizzlekizzle, you probably can port it quickly into RetroArch. You're welcome. |
@mdrejhon sounds exciting. I can't wait to check it out :D |
Even MAME could port it in, if they wish -- @cuavas put a spike in it (mamedev/mame#6762) because they said it was a pipe dream. Ah well, RetroArch code contribution it is!
It's a reality today. I just finished working on a new GLSL shader for the most accurate CRT electron beam simulator I've ever seen, especially when run on a 240Hz OLED. I'll publicize it with an MIT source license via a GitHub repo you can fork or port on 24th December 2024 -- my release date. When run on a 240Hz-480Hz OLED to emulate a 60Hz tube, it looks good enough that my shader might become the genesis of CRT-replacement OLEDs for the arcade machines of the 2030s, when it's cheaper to buy these soon-cheaper OLEDs than to source nearly-extinct tubes 10 years from now. |
Sounds great! |
Sneak preview! Slow-mo version of the realtime shader. These are genuine screenshots of it running in real time doing 60fps at 240Hz, played back in slow motion. Phosphor trailing is adjustable, for brightness-vs-motion-clarity. Still works at 2x input Hz (min ~100Hz to emulate a PAL CRT, min ~120Hz to emulate an NTSC CRT) and up, and scales infinitely (>1000Hz+). Yes, it looks like a slow motion video of a CRT tube. But these are PrintScreen screenshots! |
It's out now! I've published the article and released an MIT-licensed open source shader, with a Shadertoy animation demo (for 240Hz). Settings can easily be adjusted for 120Hz, 144Hz, 165Hz, 360Hz, 480Hz, or 540Hz! Please implement it into RetroArch. Pretty please. |
Discussion also at #10757 |
Porting to RetroArch was a breeze. It's available here now: https://github.com/libretro/slang-shaders/blob/master/subframe-bfi/shaders/crt-beam-simulator.slang and will show up in the online updater in a few minutes/hours. I replaced the edge-blended version I had made with it, since this one is superior in every way lol. |
That was damn fast! Nice Christmas surprise. And I can combine CRT filters simultaneously too? Neat! You should rename the menu entry in RetroArch if possible, to at least catch attention that it is a new, better shader. Also, eventually I'll add a phase offset, since I can reduce the latency of this CRT simulator by probably 1 frameslice (1 native refresh cycle) simply by adding +time as a constant. I need to experiment with my changes in Shadertoy in the coming week (it's Christmas). But it's absurdly fantastic to see an actual deployment the same day I released my simulation shader! Which releases will have it? PC, Mac, Linux? Can it also be ported to the mobile app for 120Hz OLED iPhone/iPad too? I notice that the Shadertoy works great on those, even if not as good as 240Hz.

TechSpot readers: EDIT: TechSpot posted some publicity that contained a permalink to this comment. If you're looking for the original main copy of the shader, which will get an improved version in January 2025, please go to my repository: www.github.com/blurbusters/crt-beam-simulator |
I just asked our Apple guy and he says the subframe stuff is available on nightly builds for iOS but will be included in the upcoming release. It doesn't persist, so you have to re-enable it on each launch, which is a drag, but nothing worth doing is ever easy :) But yeah, Mac/Win/Lin should be covered. Thanks for working on this and for designing (and licensing) it around sharing and easy integration into other projects. It was a breeze to port thanks to that foresight and generosity. |
Tim made one of the most important contributions to keep it bright and seam-free (variable-MPRT algorithm). Niche algorithms tend to be ignored by the display industry, so it's nice we could BYOA (Bring Your Own Algorithm) straight into RetroArch, just supply generic Hz, and the software can do the rest. And nice that you kept the LCD Saver Mode (maybe add a boolean toggle for it). OLEDs do not require that, and I kind of prefer it be done at the application level to avoid the slewing latency effect [0...1 native Hz]. Not a biggie for 240-480Hz, but turning it off will create constant latency for evenly-divisible refresh rates. |
Done! libretro/slang-shaders#668 I'm having fun running my subframes up higher than my monitor can push and setting the "FPS Divisor" up accordingly. It looks just like slow-motion camera footage of CRTs. You can get some pretty believable slo-mo captures by pairing it with p_mailn's metaCRT. |
We'd need to see a log of it failing to load to even guess, I'm afraid. This sort of issue is usually handled more effectively via forum/discord/subreddit, though, if you can pop over to one of those. |
How do you load this in RetroArch? When I load the presets nothing happens. I have a 240Hz LCD monitor; what other options must I change to make it work? |
@Tasosgemah Enable shader sub-frames in the settings. |
Thanks, it works now. But I assume my monitor isn't good enough for it, because even though the motion blur is reduced, it looks really bad. All the colors are very dark, and there's some minor ghosting, some noticeable transparent horizontal stripes, and random flickering that comes and goes. |
@Tasosgemah Something else must be wrong; I have a 160Hz monitor set to 120Hz for this and it looks super clean, and I experience none of this. How many Shader Sub-Frames did you enable? |
6-Bit TN Panel Users
Fix your clipping first: www.testufo.com/blacks and www.testufo.com/whites
Then use an odd native:simulated Hz ratio, e.g. 180Hz instead of 240Hz on your 240Hz 6-bit TN panel. This gets around LCD inversion issues and FRC-patterned-dithering issues.
Aha! 6-bit panel processing. Most TN panels do 6bpc processing with even-number synchronous FRC (an even-numbered-refresh-cycle temporal dithering) that will unfortunately be incompatible with going band-free with this specific version of the algorithm. I can't fix that limitation, although I might make an optimization to try to dither around it in the future. It will require major modifications to the photon-budgeting algorithm to bypass the 6bpc panel problem, but I will see what I can do over the long term. The problem is that I would have to dither spatially, not between adjacent CRT refresh cycles. As a workaround, try odd Hz: try using 180Hz on your 240Hz panel. That will alternate phases. Use ToastyX or NVIDIA Control Panel to create a 180Hz mode, which will allow the panel to temporally dither its 6bpc behavior over opposite-polarity refresh cycles, and the banding should be MUCH less at odd-numbered native:simulated Hz ratios on 6bpc panels (the 64-shades-per-refresh-cycle problem on TN panels). The 180Hz compromise is something you will have to live with until you upgrade to IPS or OLED. |
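A toy model of the inversion-phase argument above (a simplification for illustration; real panel FRC firmware differs): the LCD flips drive polarity every native refresh, so with an even native:simulated ratio the bright slice of each simulated Hz always lands on the same polarity, while an odd ratio alternates it.

```c
#include <stdio.h>

int main(void)
{
    for (int ratio = 3; ratio <= 4; ratio++)   /* 180Hz vs 240Hz at 60fps */
    {
        printf("ratio %d: bright-frame polarity:", ratio);
        for (int sim_frame = 0; sim_frame < 6; sim_frame++)
        {
            int native_frame = sim_frame * ratio;  /* bright slice index */
            printf(" %c", (native_frame & 1) ? '-' : '+');
        }
        printf("\n");   /* ratio 3: + - + - + -   ratio 4: + + + + + +   */
    }
    return 0;
}
```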
@Skyyblaze Like I said, we don't have any way of forcing a shader through the menu currently, especially not in a way that wouldn't interfere with other shaders, so we'll likely keep it just in the shader pipeline for now. People are comfortable loading shaders this way and to do otherwise would likely cause confusion and other problems. But we'll see... The shader can handle fractional intervals for any arbitrary framerate, but RetroArch's subframe implementation only works with integer intervals currently. It would be nice to get it working with any full-speed refresh rate, but it would probably require rethinking the entire implementation. |
RetroArch's CRT shader is 100% immune to BFI LCD retention, with a conditional lag tradeoff
Will you at least make the LCD_SAVER setting ("LCD Saver") toggleable, to reduce latency on OLEDs by up to 1/60sec at 60Hz? The latency only appears automatically when the shader detects exact-integer (native:emulated) Hz ratios, to prevent image retention on LCDs. Decoupling the synchronous nature will also make it easier to eliminate the latency of the anti-burn-in system built into this shader. And you will be able to more easily beam-race with the shader (using VSYNC OFF frameslice beam racing, similar to how WinUAE does it with lagless VSYNC). Also, there's no anti-burn-in-logic latency for odd divisors, and you can disable the setting for OLED (since OLEDs don't need the LCD_SAVER, which adds up to almost 1/60sec latency during even-divisor native:simulated Hz ratios) -- kind of a clever trick; I might improve the shader to do this. It's just the pesky LCD voltage inversion thing that only happens at exact integer-number native:simulated Hz ratios. It's not as quick or as aggressive as monolithic BFI, and it may take hours to get image retention on LCDs; it's ironic that OLEDs are immune to image retention with this algorithm, but LCDs have some minor image retention (if this algorithm is run for hours with LCD_SAVER=false). RetroArch is safe because LCD_SAVER is always true in the current build, as a hardcoded constant you cannot turn off. To avoid the LCD_SAVER latency (it automatically kicks in only at even-numbered native:emulated Hz ratios), use an odd divisor (e.g. 180Hz) or disable LCD_SAVER.
Albeit, as a hacky workaround, I might instead reverse the slew direction (1.001 vs 0.999 shift) early in a simulated refresh cycle, so that you have at most 1 native Hz of latency from the LCD-anti-burn-in logic, rather than 1 emulated Hz of latency. In other words, the internal native:emulated Hz slew would bounce back and forth like a sinewave over a screen height of (native/emulated) Hz. You won't notice it when it's banding-free, only when there's banding, and the banding will move vertically less than 1mm/sec in the LCD_SAVER mode for most refresh rates. |
After your previous suggestion, I moved the LCD_ANTI_RETENTION bool to a runtime parameter, so that should be toggle-able now. Is there anything else that needs to be done to disable the LCD_SAVER function? |
Nope! That's all you need to do. Turning off the setting reduces input latency instantly: you replace a slow +[0...16.7ms] slewing effect with a constant latency.

How LCD Saver works to prevent retention
The fact that I made the CRT simulator work at any non-integer native:emulated Hz ratio was the piggyback I used to prevent LCD image retention. It's on by default, and the menu setting is self-explanatory ("uh oh, I shouldn't touch it if I want to save my LCD"). That's why RetroTink menus name it "LCD Saver", a brief self-explanatory menu setting. "LCD Anti Retention" also does the same thing. So that means if your emulator is at 60Hz and the native display is 240Hz, the CRT simulator runs at ~59.985Hz (from the 0.001 added to the 4-refresh-cycle divisor, aka 240/4.001 instead of 240/4). It's a neat way to avoid a random extra BFI frame (which flickers), since it slews so slowly. |
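The slew arithmetic from that explanation, in code form (illustrative variable names, not the shader's actual code):

```c
#include <stdio.h>

int main(void)
{
    double native_hz    = 240.0;
    double divisor      = 4.0;     /* native:emulated Hz ratio          */
    double lcd_saver    = 0.001;   /* tiny nudge; 0.0 when disabled     */
    double simulated_hz = native_hz / (divisor + lcd_saver);

    /* ~59.985 Hz: drifts the black-frame phase slowly past the LCD's
       voltage-inversion pattern, preventing image retention.           */
    printf("simulated CRT Hz: %.3f\n", simulated_hz);
    return 0;
}
```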
I created a 180Hz profile and made sure it's working. The result isn't better. I'm still getting the stripes; the only difference is that they are now static instead of moving upwards. The Shadertoy preview is also worse now because it's flickering intensely. |
The slow drift in banding is due to the LCD_SAVER automatically kicking in (only when you have even native:simulated Hz ratios).
Dang, I'm sorry to hear that. I can only do so much to try to overcome a display's limitations in its FRC logic. I might make a global mode (e.g. like a globally-flashed CRT simulator). For me, that's a reasonably simple change. This will spread the banding globally, full screen, to the point where it should be much less objectionable. I'll add a GLOBAL_FLASH vs RASTER_SCAN option; it just behaves like a variable-MPRT phosphor-BFI instead of CRT scanning. A non-existent display, but a potential compromise for your LCD. It's a 1-hour edit for me to invent a non-existent global-flash CRT tube; let me see if I can create a new Shadertoy for ya, and see if it fixes the banding. (...Inventing non-existent displays in a software shader for the win...) Many people (replying to me on Bluesky and on X) are saying it looks fantastic on many 240Hz FastIPS displays. |
Wow, thanks a ton for taking the time to help with my monitor. All this time I assumed it was a good monitor; I mean, it served me awesomely until now :( I guess it's not the monitor's fault; it's quite old, and it was great for its time I guess. I'm just glad to know it's not a user issue but a hardware issue. |
@hizzlekizzle Alright, thanks for the explanation. I can see the reasoning and it makes sense. In light of this, could I suggest something else? How about a dedicated "Prepend CRT Beam Simulation" entry in the Shader section of the Quick Menu, next to "Prepend Shader" and "Append Shader"? That way users could quickly add the shader to whatever config they have right now without always having to navigate through folder structures. This is the only thing that annoys me, as I often like to try out different shaders to see improvements or try newly added ones. |
Easy Suggestion For Shader-SEQUENCE-Order
EDIT: changed the word "priority" to "SEQUENCE" to fix a misunderstanding.
As a simple compromise: a single flag called "Suggested Shader SEQUENCE" (ShaderProperties.SEQUENCE = 50). Shaders that should come first would have the lowest numbers, and shaders that can come in any order would have 1000 (magic number) or null/undefined. Any shaders that must go in between can have SEQUENCE numbers 2 to 999. I don't know if the CRT Simulator should get ShaderProperties.SEQUENCE == 1, but it should get a pretty low number. Or you can have a few numbers (1 = should be on the original-resolution emulator buffer, 2 = anywhere, 3 = scaling/warping stages, 4 = post-process)
The "Think Like A CRT" Sequence-Of-Processing PrinciplePicture adjustments (brightness/color/saturation) affects signal or yoke electronics first. Then it goes into the electron gun. Then electron gun hits the phosphor mask. Process the shaders in the same temporal sequence for realism. Picture adjustment filters should be done first ("Think like a CRT" sequence principle), since it's done in the electronics of a CRT before it hits the electron gun. Besides, picture adjustments are fully incompatible with the variable-MPRT algorithm, so the picture adjustments should be committed to the low-rez emulator framebuffer before the CRT simulation stage. Or spread it out over [0...1000] (kinda like old-fashioned BASIC line numbers) so you can insert shaders in between safely. Obviously, some thinking needed what shader deserves what order. Whenever a user orders shaders in the wrong order, a reddish background appears under the shaders that are sorted in the wrong SEQUENCE order. It won't block the user, just remind the user to re-optimize the order of priorities. User can choose to violate recommended shader order or keep as is. Though it risks mis-thinking orders, but at least we'd only flag with a color coding ("Performance may be affected with this shader order" pops up as a tagline at top or bottom, while showing a highlighter background behind the offending shader); not preventing user. The Simple Sort Check SystemSimpe sort check. If sequence numbers are not in sorted order, flag (highlight) but not prevent the user from keeping as-is. This will allow us to re-optimize the order later, by editing or removing the shader SEQUENCE value. Meaning, workflow is UNMODIFIED. It'd just be that the out-of-order specific shader would show a highlighted background when a specific row is not in its suggested-sequence order. The shaders will still process as-is, like it already does today, just that it "flags" the user Easy? 1-day work? |
@mdrejhon Not sure if I understand you correctly, and I'm not going to pretend I'm a GPU expert (at least not to the degree @Themaister is), but I think you can't prioritize shaders running on the GPU in any way. Or at least not that I'm aware of from the RetroArch side/perspective. You CAN set the priority of CPU threads from a program, but not GPU. I'd love to be corrected if this is possible, but from my understanding the GPU makes all those internal decisions and determines how the workload gets spread on whatever cores it uses for those programs. Regarding RetroArch, it doesn't really use a lot of threads, mostly to avoid unnecessary threading overhead. There are a few threads but not a lot: I think the task queue lives on a separate thread when threading is enabled; audio can be put on a separate thread but it's optional and not the default (and not recommended IIRC); video can be put on another thread but again optional and not the default (and not recommended since it can give judder -- I hear it gives a speedup on the Switch though, which makes it a worthy tradeoff). Basically, running video and audio on the main thread in RetroArch seems preferable. In most cases, subsystems like XInput and the GPU driver already run their own threads behind the scenes when you use these APIs. Like if you set up an OpenGL context, Nvidia will spin up a render thread for it behind the scenes. |
You misunderstood my semantics. I meant sequence, not priority: "Item X should be processed before Item Y". I didn't mean thread-priority semantics, but the serial take-turns principle. Editing my post to fix the confusion. Sorry. Please re-read my post. Thank you! Apologies. |
So you mean like giving certain shader presets (or shaders) a certain order/sequence, and then when appending/prepending shaders in RetroArch, there could be an option where it takes into account that order/sequence and then decides intelligently how the shader should be stacked on the current stack, whether it should be prepended or appended? Something to that effect? |
Exactly
Rule 1: Think Like A CRT in Sequencing
Rule 2: Think Like A CRT in Sequencing
CRT filters that also simultaneously process certain adjustments might wish to decouple that into a different stage (or automatically disable it when the CRT beam simulator is activated). They can stay, as the simulator can still simulate electron beam behaviors too (not just the signal processing stage), so the CRT picture is affected by picture processing (before the beam) and by how the phosphor behaves (after the beam hits the phosphor). The variable-MPRT behavior will somewhat interfere with filter-stage-processed picture adjustments (distorted colors, amplified banding). Also, it's less compute to do picture adjustments on 320x240 framebuffers than on the high-resolution final scaled framebuffer anyway. I am away from my main gaming rig, but if you automate it, you can use the new ".SEQUENCE" to auto-sort. Or, if users are allowed to sort manually, simply use my "flag incorrect sort" suggestion whenever the shaders' .SEQUENCE flags (a new property saved to all shaders) are not in ascending sort order.

Rule 3: Think Like A CRT in Sequencing
/ufo-alien-intelligence-tip-given 👽 |
Well, one thing you could do is play around with shader stacks in RetroArch, see what things work best, and once you arrive at some good results, you can then start devising some kind of numbering/ordering system for a lot of these shaders. RetroArch is quite flexible in how it lets you mix and match shaders, not only can you keep adding passes but there's also the 'append preset/'prepend preset' feature, so you can keep adding to the stack until you hit the limit. |
What do you think of my flagging idea instead? Users can sort the shaders any way they like; it'd just be a highlighter like what I wrote about (please reread that part if you don't understand my suggestion). |
Well I think first you need hard data and verifiable results on how certain things should be stacked or laid out for best results. We can only arrive at such test results through experimentation. I think you're in the best possible position to come up with some results for that. Based upon that analysis and data we can then proceed further. The specific implementation details up until now are not really important, I think first figuring out what things work best and how a system could be built around it based on verifiable tests is best. |
Hmm any idea why loading the CRT-Beam-Simulation shader causes N64 games to go black/have no video-output? I'm running ParaLLEl with Angrylion for the GFX plugin. |
@mdrejhon libretro/slang-shaders@dde0a17 For instance here's some presets I made using RetroArch's built-in in-menu shader stacking system. It's very easy to do this. So I'd suggest just running through nearly all of slang_shader's shaders (there's a lot of them all with their own purpose and intent), and then seeing what shaders work best in conjunction with crt-beam and then working out some kind of numbering scheme or whatever you had in mind. Not sure if I'm making sense or not but I think that could lead us to the best and most immediate results. |
Exactly why I made my suggestion -- we'd crowdsource the data
(Updated for clarity)
|
I don't have time; this was an unfunded project. Even though it has taken me hundreds of hours of spare-time work since 2022, re-testing with tons of shaders gets challenging. Can't we just crowdsource using my suggestion? We can simply keep these new properties unset/undefined (most of them) and gradually fill in the .SEQUENCE property values as time passes, over the next several months. I will contribute some of it. But could you at least pave the road to make testing easier? It's just a single property value (.SEQUENCE) and an automatic flag (indicator) for sorted true/false (telling me it's not in the recommended sequence). You wouldn't enforce it. Maybe you're asking me to make a pull request for the .SEQUENCE value processor and the warning-message displayer (shown if the SEQUENCE numbers are not in sorted order). It would be a very minor modification. It would not affect the workflow, just a notification indicator to help quickly test sequences. That's exactly why I made my suggestion: to help me do it, and to help crowdsource the data.
Then at beginning:
This will help me research sort order more easily + help crowdsource the data. It's always more efficient to process shaders while the buffer is small (CRT-simulate the original 320x240 emulator framebuffer as one example) rather than when it's big (CRT-simulate a 3840x2160 framebuffer post-CRT-filter). You wouldn't disallow that, but you'd have a warning indicator light, a changed background color in an out-of-sequence shader, or some warning message appear somewhere, to guide the tinkerer and experimenter (and save a lot of time). Basically, don't interfere with users, but notify users of a potentially suboptimal sequence.
|
Are you available on Discord by the way? Or do you hang out on our Discord servers? I have a relatively high end TV, I'd be quite willing to discuss with you specific things to setup to see how much we can push this feature on this device. I think some of that entails setting up the right settings for the TV. I don't have a lot of that knowledge so it'd be best if I could contact you directly. |
[Exchanged handles privately; removed contact info as notifications are overflowing from the viral social media on Bluesky and X concurrently -- seems people are amazed at the Blur Busters achievement] Chatting on Discord, it appears it's a question best for @Themaister, because he's the GPU expert; so I'll defer to him on what he thinks of my timesaver idea (the 2 hours of work to reduce the sequence-experimentation workload by 90%, since one can simply add the sequence numbers piecemeal, one shader at a time, over months -- and it's permanently documented). I helped @LibretroAdmin improve colors on OLED; using SDR apparently produces better colors than HDR on his specific model -- this is just a side effect of the math being optimized for Adobe sRGB, and some TVs really do a bad job of compressing SDR inside HDR. I wish TVs would give me better APIs to access linearized HDR for the bottom end of the HDR curve (up to the window size specified, etc.). Now, new problem: black clipping and white clipping at the display level or GPU control panel level creates banding. So, calibrate your brightness/contrast first! (RetroArch brightness/contrast, done before CRT simulation, doesn't cause banding problems, as long as it's done before CRT simulation -- so color-process your emulator framebuffer before piping it through the CRT simulator.) Now resuming an idea I got:
I have an idea: even the 16 colors of an 8-bit machine theoretically turn into 1 billion colors when you have the phosphorfade + brightness-cascade algorithm. Therefore, you get banding if you get any clipping in your Adobe sRGB colorspace. |
<Technical> Big wall of text warning
Select Simulation: [Plasma | DLP | CRT | CRT-VRR | Fast LCD | Slow LCD | OLED | 35mm projector] Yep. I have all the algorithms. It can be done. Just supply me with 1000Hz + direct access to the nits value per pixel. I already produce examples of primitive Wright Brothers tests limited by limited Hz; there's TestUFO Interlacing, TestUFO DLP Colorwheel (run at 360Hz or it's annoying; wave your hand in front of it), TestUFO Variable-Blur BFI (run at 240Hz), and I can even do CRT-style 30fps-at-60Hz double images via pure software BFI on any LCD/OLED. Obviously, I will port the CRT simulator to TestUFO soon. However, once we hit ~1000Hz, the number of algorithms I can do literally expands geometrically. This fully functioning CRT beam simulator is my micdrop here, after all...

Recalibrate to avoid black/white clipping!
Every single Adobe sRGB color from RGB(0,0,0) through RGB(255,255,255) must not be clipped, so you may be able to eliminate the banding by making sure you've precalibrated your display. It may not solve 100% of the banding, but give it a try: raise Brightness and reduce Contrast.

10-bit Helps 8-Bit in CRT Simulator
Now, if you can do a 10bit pipeline, yes, use it where possible, even if only at the display end. This will slightly reduce math rounding errors during the mathematics inside the CRT simulator; the extra precision reduces banding issues because of how the CRT simulator upconverts the limited retro palettes to almost infinite possibilities, due to the phosphorfades and per-channel cascades that can blend differently for R versus G versus B (e.g. the brightness-cascade algorithm will ghost G only to the next Hz if R and B are dim... behaving kinda like the original tube). My CRT shader seems to look better inside a 10bit SDR processing pipeline (in an offline DirectX app), reducing the quantization errors caused by the gamma2linear() and linear2gamma() that are done twice per subpixel per native Hz, which skews the Talbot-Plateau theorem very slightly (e.g. viewing test patterns through the CRT simulator will appear to be roughly 7-bit color instead of 8-bit color due to math rounding errors building up).

Understanding Gamma vs Linear
Ideally, if you successfully calibrate all the bands out and use an exact integer native:emulated divisor, then unit-testing by mathing together the pixel values of the emulated CRT frames, dividing by the native:emulated ratio, and stretching to 0..255 would yield the pixel values of the original emulator framebuffer. But it won't, due to the 8bit math rounding errors, you see... Ah well, one cannot win universally.

RGB(64,64,64) is not HALF the photons of RGB(128,128,128) due to Gamma Curves
For example, you can never get a perfect 25%-linear grey with 8bit. RGB(127,127,127) is not half the brightness of RGB(254,254,254) in photon count, due to the gamma curve. So you necessarily have quantization errors, especially when doing two gamma curve computes per subpixel per refresh cycle, as I need to know linear for the Talbot-Plateau Theorem, can't do that in gamma space, and there's no clean math formula to successfully linearize HDR on displays that apply their own tonemappings... But upconverting Adobe sRGB to 10bit, then piping through a 10bit CRT simulator, outputting over 10bit HDMI to a 10bit OLED, can look real clean, since the 10bit stays really close to the original 8bit values despite the quantization errors building up in the CRT simulator...
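A toy demo of that rounding-error buildup (not the shader's real pipeline; assumes a plain 2.2 power gamma): quantizing the linear value on each gamma-to-linear roundtrip collapses dark gray levels together, and a wider linear buffer keeps more of them distinct.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    for (int bits = 8; bits <= 10; bits++)
    {
        int levels    = (1 << bits) - 1;
        int survivors = 0;
        for (int v = 0; v < 256; v++)
        {
            double c    = v / 255.0;
            double lin  = pow(c, 2.2);               /* gamma2linear()   */
            lin         = round(lin * levels) / levels; /* quantize      */
            double back = pow(lin, 1.0 / 2.2);       /* linear2gamma()   */
            if ((int)round(back * 255.0) == v)
                survivors++;                         /* level preserved  */
        }
        printf("%2d-bit linear buffer: %d of 256 levels survive\n",
               bits, survivors);
    }
    return 0;
}
```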
However, despite the quantization errors, the CRT simulator still looks kickass. Let me tell you more...

HDR Math Is Horribly Opaque
I wish 10bit HDR were easy enough to do math on, but I want display manufacturers to improve their ability to communicate HDR metadata to me, so I can properly apply the Talbot-Plateau theorem to different refresh cycles in a temporally predictable way. This is still a math miracle nonetheless, almost an E=mc^2 simplification (look at how tiny the shader is).

The Bedrock Of CRT Algorithm: The Easy Talbot-Plateau Theorem
The Talbot-Plateau Theorem is beautifully simple (this Theorem is the bedrock of the CRT algorithm -- study up, buddy!): flash something twice as bright for half as long, and it's the same average brightness. And that's why I need the CRT simulator to run in linear colorspace, properly subdividing brightness over multiple Hz in a "photon-count management" style system, going variable-MPRT. Easy peasy once you understand it; it's only high school math -- but most don't get it until they realize how simple the Talbot-Plateau Theorem is. That's why www.blurbusters.com/area51 is now textbook reading at display manufacturers.

The Photon Budgeting Algorithm
Credit: Timothy Lottes for this sheer brilliance; he calls it "brightness redistribution". I call it "photon budgeting". Imagine you have 100 mL of water you need to pour into four separate 25mL glasses that are served once a minute. The restaurant server can only serve one 25mL glass per minute. But somebody ordered 60mL of water fast. You pour 25mL into each of the first two glasses, followed by 10mL into the third glass.
Exactly! Use photons instead of water. So if I want to emit a 60% linear white in one refresh cycle, but convert it to lower motion blur over four refresh cycles, I have to compress by serving the photons quickly. That's how a 60%-linear white gets photon-budgeted into four refresh cycles for 60fps at 240Hz, as the sketch below shows.
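A minimal sketch of the photon-budgeting idea in the water-glass metaphor (illustrative, not the shader's actual code; GAIN_VS_BLUR is the knob described in the next paragraph):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double target       = 0.60; /* linear white wanted per emulated frame */
    int    slices       = 4;    /* 60 fps at 240 Hz native                */
    double gain_vs_blur = 1.0;  /* 0.5 = half brightness, shorter pulse   */

    /* Total photons this emulated Hz must emit, in units where one
       native refresh cycle can emit at most 1.0 (fully lit pixel).       */
    double budget = target * slices * gain_vs_blur;

    for (int i = 0; i < slices; i++)
    {
        double emit = fmin(budget, 1.0);   /* serve photons fastest-first */
        budget -= emit;
        printf("native cycle %d: linear %.2f\n", i, emit);
    }
    /* Prints 1.00, 1.00, 0.40, 0.00: same photon total as a steady 60%
       white, but the light is concentrated into ~2.4 cycles -> shorter
       MPRT -> less motion blur, per Talbot-Plateau averaging.            */
    return 0;
}
```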
Of course, this is per-pixel (time-correct relative to the raster, which sweeps top to bottom). Remember, RGB(174,174,174) has almost exactly 40% of the photons of RGB(255,255,255), because of the gamma curve formula, and I do 2 gamma curve computes per subpixel per refresh cycle. Shaders are math performance miracles. Now you understand the brightness-cascade algorithm: it reduces motion blur at the same brightness level. Now, that said, GAIN_VS_BLUR is often lower; often you have a value of 0.5, so you can compress even more. Instead of 60% linear white, you're serving only 30% linear white, which means:
Voila: exactly half as bright, BUT removing more than half the blur!
Works even better at larger native:emulated Hz ratios, especially 3-4, than at 1, but you can see the magic going on: the CRT simulator is brighter than classic BFI because of this math cleverness. Almost as if you're breaking the laws of physics (violating the Talbot-Plateau Theorem), but not really -- you are selectively giving dimmer pixels a lower-motion-blur advantage, simply because you can brighten (even clip to 255!!!) to help serve the photons faster, shortening pulsewidths for lower motion blur. It averages out, ala the Talbot-Plateau Theorem. Sorry, nothing faster than light here (as quantum teleportation seems like, but not quite), but a very clever cheat that still complies with the laws of physics, getting MORE than 50% average blur reduction while only losing half the brightness. In some cases, I'm even seeing 75% blur reduction with only a 50% reduction in brightness, simply because a lot of pixels in a dark dungeon game are easy to reduce the blur of; they can be made 4x brighter briefly without clipping the brights out (clipped brights simply cascade to extra Hz, meaning more blur for brighter pixels). So understanding the Talbot-Plateau theorem combined with understanding gamma curves lets you do this CRT emulation magic.

Plasma simulator shader possible / DLP simulator shader possible
Not bothering with this because CRT is the gold standard; just saying plasma subfield shaders are definitely indeedy possible, complete with Christmas-dots, noise, and plasma-contouring effects, temporally correct. The identical math magic can assist in massively brightening early experiments in plasma subfield emulators on a 600Hz OLED, and even DLP subfield emulators on a 1440Hz OLED, and so on. Optional noisy dither and optional rainbow artifacts too, if one wished (ugh!) -- piece of cake for me. But the CRT tube is still the gold standard; I targeted that first, obviously. Generic Hz + shader for the win, in BYOA (Bring Your Own Algorithm) approaches! I am in 30 research papers, www.blurbusters.com/area51, which you may need to watch as I have more skunkworks projects coming out in the new year. I was not paid to do the shader; this is a hobby part of my biz. But I do work with some display manufacturers on contract from time to time... Gained lots of skillz that led to this CRT electron beam flying-spot simulation breakthrough! Generic brute Hz is the dawn of the BYOA (Bring Your Own Algorithm) age.

Open source display or television next, maybe?
Give me 1000Hz-ish, direct access to the HDR-to-nits lookup table, and I can have a menu: Select Simulation: [Plasma | DLP | CRT | CRT-VRR | Fast LCD | Slow LCD | OLED | 35mm projector]. I already have temporal formulas for ALL of them. Yes. GtG simulators, subfield simulators, interlace simulators, strobe sequences, etc. Just add Hz. Watch your sports broadcast in OLED mode, but watch your movies in the 35mm projector simulator. Play your video games in the 60Hz or 120Hz or 85Hz or whatever-Hz CRT simulator. It's all just simple Blur Busters math trickery. User choice for the win? (Manufacturers, please make it happen. Christmas gift 2026 for me, pretty please: give me 1000Hz + direct access to your brightness lookup tables. We are already at 480Hz, and 1000Hz OLEDs are already in the lab, hitting market in 2027. I now have enough networking contacts to provide (display+temporal) genius to pull this off. I don't want to be a display manufacturer; I just want to do algorithms for an open source TV.)
The same shaders running in an open source TV could be ported anywhere else, like RetroTink. It's just shader compute. I can make all sorts of algorithms happen if I only had the Hz and direct access to nits-to-HDR-value lookup tables for the Talbot-Plateau Theorem (see how surprisingly simple the Theorem is from the restaurant-server metaphor above?). Next best thing, we can just do it in our own shaders: BYOA (Bring Your Own Algorithm).

Brainstorm: Per-Channel Phosphor Fade
(Brainstorming... Long term, we should have separately adjustable phosphor speeds for different channels, but for now, we optimized on the brightness-cascade trick (variable-MPRT trick). This would create green ghosting for moving whites if I made green a slower phosphor than red/blue. Also, maybe in the future, a y = ax^2 + bx + c "S-shaped curve", to allow the combination of prioritizing bright-first while still having a phosphor decay in the 2nd Hz onwards, etc.)
</Technical> |
Is anybody else experiencing the same colour separation as I attempt to demonstrate in this video (filmed at 60fps on an iPhone)? Most visible around the 17-second mark. To my eyes in person, there is no black banding or visible banding at all; everything looks solid and as expected, and all colours look accurate when static (albeit the whole screen has that almost sub-perceptual hint of flicker that you always get with BFI). But during moments of fast lateral motion, there is a kind of separation of colour channels. It's very noticeable in this clip, which reflects pretty well what I see with my own eyes (except the colours are pure and not overblown or saturated as they appear in this video). Setup is a Dell AW2725DF. Tried at 360Hz and 240Hz. HDR off. sRGB colour space selected in the monitor. Using Vulkan in RA with just the shader on its own; it's not masked by putting a CRT shader over the top (in fact it becomes even more obvious). Tried messing about with gamma settings, LCD saver on/off, different SNES cores, different video modes on my monitor, different video processors in RA, pretty much everything I could find, and I'm finding that if I make the image look stable and crisp when static (e.g. no artifacts), then it exhibits this behaviour when moving quickly. It also almost looks like a rolling-shutter kind of jelly artifact at times; even when there's no colour separation, there's line doubling on vertical lines, seemingly right at the currently ongoing "roll" point. It's also not always present in the same part of the image; it feels like a rolling band that travels vertically throughout the height of the screen over the course of maybe... 45-60 seconds or so. So it's only present when running along the bottom of this level in SMW for 20 seconds or so, then it's totally gone for another cycle, so to speak. Is this kind of an expected side effect? I mean, we are doing rolling frame refreshes, so rolling frame artifacts kind of seem like an expected outcome, almost. Can somebody that is experiencing this working as well as they'd imagine on an OLED try running left and right on the first level of SMW (USA) for a minute or two, and see if they have the same kind of artifacts pop up? (Or if the Chief knows what this is being caused by, feel free to inform me!) |
That's normal; original CRT tubes flickered. It should flicker less than BFI or strobe backlights, and if your display is not doing any weirdness (e.g. PWM) then it should flicker approximately similarly to an original tube (for the same viewing distance and same tube brightness).
A minor jelly artifact is unavoidable with slowly-scanning-out granularized refresh cycles; you need a large native:simulated Hz ratio to make it look like normal CRT scan skew. I will add a scan-velocity adjustment to reduce the jelly artifact; give me about one week.
It will support infinite scan velocity, like a global-refresh CRT tube (...inventing nonexistent displays for the win, in the BYOA - Bring Your Own Algorithm approach...). You will just have to hang tight until I add these improvements to my shader, and until RetroArch adds a "Scan Mode: Normal/Global" setting to the CRT simulator. The global mode will have zero jelly effect, at the compromise of slightly more flicker (because it behaves like a phosphor-style variable-MPRT black frame insertion with phosphor fade), ala Timothy Lottes' math brilliance.
It travels vertically simply because it's trying to avoid LCD image retention. Turn off the LCD Saver setting (coming to the next version of RetroArch) if you want to make that stationary, or use an odd-number native:simulated Hz ratio. It's the algorithm preventing LCD image retention (scientific explanation).
TL;DR:
How it works: LCD Saver intentionally kicks the CRT Hz off by 0.001 relative to the native:emulated ratio, creating ~59.985Hz for 60.000fps on 120.00Hz/240.00Hz/360.00Hz LCDs (yes, you heard me right), to slowly slew the black frames out of phase with the LCD voltage-polarity inversion algorithm in a clever way. This works because my CRT algorithm does not require native:emulated ratios to be an integer, as long as it's at least 2.0f or more -- yes, it's a float -- even though RetroArch was designed only for integer ratios (understandable; it's not an easy workflow). If you're using an OLED, turn the LCD Saver setting off; it's coming to the next version of RetroArch. If you're using an LCD, use odd-numbered native:emulated ratios like 180Hz or 300Hz (this will stop the drift, since LCD Saver automatically turns off internally). As a rule of thumb, the jelly effect starts to appear with scrolling/panning/turning speeds that are faster in pixels-per-frame than the native:emulated ratio. If your native:emulated ratio is only 4 (60fps at 240Hz) and your motion is going more than 4 pixels per frame, that's when the jelly effect appears (divergence effects). That's to be expected, unless I add motion compensation or AI algorithms or interpolate-within-scanout, which is kind of a no-no for most retro purists (but some might ask for it). It's funny: I fully understand the display science & physics of what's going on. All I can say is, throw as much native:emulated Hz at it as you can, and that will massively help (at least until RetroArch chokes trying to keep up; 2ms time budgets at 480Hz are hard, and we've already got 1000Hz OLEDs in the lab).

Upcoming solution for jelly effect
The good news is that I do have a solution for the jelly effect, as a tradeoff against flicker. It will be a "GLOBALFLICKER_VERSUS_SCANJELLY" slider of sorts, a scan-velocity adjustment, like a faster vertical deflector on a CRT tube that can be adjusted to infinity (global-refresh CRT). It's a workaround for excessively low native:emulated Hz ratios until everybody has 1000Hz displays. The jelly effect happens because your analog moving eyes are in a different position during each of these emulated CRT frameslices. As you move your eyes from one edge to the other edge of the screen, 960 pixels/sec at "60Hz" is 16 pixels per simulated Hz, but we only have 4 slices, so we've got double-imaging at 4-pixel separation (like an advanced multilayered meta version of the old CRT 30fps-at-60Hz effect, simply due to native:emulated Hz ratios not being high enough quite yet); see the arithmetic sketch after the TestUFO note below.

The color separation problem
I believe Timothy Lottes has a solution whereby the color separation during the jelly effect can be reduced, without eliminating the jelly effect fully. I might be able to port his changes too, as a configurable option. Normally it just looks like phosphor ghosting for bright color channels if you blur it enough (no square pixels), but with square pixels, the color separation artifacts DO get ugly. (So you may wish to blur your pixels slightly when using my CRT simulator, using the various filter shaders.) Fix ETA: early 2025 (January 2025, I hope).

Reminders of Best Practices
TestUFO Jelly Effect Test
Want to test the jelly effect on your old 60Hz real CRT tubes? See for yourself: www.testufo.com/scanskew It was not very noticeable because every scanline imperceptibly shifted horizontally relative to your gaze, as the display scanned downwards while your eyes tracked perpendicular to the scanout. But with low native:emulated ratios, the jelly effect is slightly stairstepped/quantized (although softened by the soft overlaps, and softened even further by non-square pixels). |
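The double-image rule of thumb from the comment above, as arithmetic (values taken from that explanation):

```c
#include <stdio.h>

int main(void)
{
    double speed_px_per_sec = 960.0;  /* eye-tracking / scroll speed     */
    double emulated_hz      = 60.0;
    int    slices           = 4;      /* native:emulated ratio, 240/60   */

    double px_per_frame  = speed_px_per_sec / emulated_hz;   /* 16 px    */
    double separation_px = px_per_frame / slices;            /*  4 px    */

    printf("per-frame step %.0f px -> image separation %.0f px\n",
           px_per_frame, separation_px);
    return 0;
}
```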
BTW, after CES I will release an improved shader that includes:
A lot of these are to help you get around various itty-bitty display-specific limitations. So much craziness trying to stomp all of these out; I now understand all the mechanisms creating these problems. Such fun thinking in Talbot-Plateau Law wrapped into 3D matrix math (width x height x time)... To borrow an oft-used retro pop phrase: "All your blurs are belong to us" 👽 |
@pxdl, I'm having the exact same problem but I cannot find a place to enable sub-frames; where is that option? Never mind, fixed it. For anyone else in the future: if you've had RetroArch a while, do a fresh install. I was missing the newer menu options. |
I have a new troubleshooting HOWTO: For CRT Simulator Artifacts: Fix banding / flicker / chroma ghosting |
Inviting all stakeholders @MajorPainTheCactus @Ophidon @mdrejhon and others.
This is to discuss further improving the initial groundwork done in this PR - #16282