Add automated tests using Meta XR Simulator #150
Conversation
Neat, I think the only thing we need to be careful of is that we don't put ourselves in a jam when upstream changes in Godot cause false positives in testing, and we find ourselves unable to merge features because someone changed something in the rendering engine and missed the impact on XR (sadly, that happens all too often).
Since we can play them back and automatically record the new outcome, I think we could fairly easily make a script that just re-records all the `.vrs` files (see the sketch below).
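Purely as a sketch of that idea: the loop below walks a directory for `.vrs` recordings and re-runs each one. The actual way to launch the XR Simulator in replay-and-re-record mode isn't specified in this thread, so `XR_SIM_RERECORD_CMD` is a hypothetical placeholder for it, and the `tests/` layout is likewise assumed:

```python
# Hypothetical batch re-record script. XR_SIM_RERECORD_CMD stands in for
# whatever command replays a recording's inputs through the XR Simulator
# and captures a fresh .vrs; that invocation isn't shown in this thread.
import os
import pathlib
import shlex
import subprocess

rerecord_cmd = shlex.split(os.environ["XR_SIM_RERECORD_CMD"])

for recording in sorted(pathlib.Path("tests").rglob("*.vrs")):
    # Replay the stored inputs and overwrite the expected outcome.
    subprocess.run([*rerecord_cmd, str(recording)], check=True)
```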
I spent a little more time trying to make SwiftShader work, but still haven't managed it. So that I don't lose track of it, this is the error that appears in the console when Godot tries to start an OpenXR session using SwiftShader:
We then get these messages from Godot on the following lines:
However, Godot itself (even the editor) seems to run fine with SwiftShader, albeit very slowly. :-) It's the XR Simulator that seems to have a problem with SwiftShader.
I've updated the PR to point at Godot 4.2-beta1, and removed the dependency on PR #149. So, as soon as this passes tests, it'll finally be ready for review and merging!
Force-pushed from 3391e76 to 5d63c83.
This one is passing CI and ready for review now!
We discussed how the Vulkan code in the XR Simulator didn't like SwiftShader, and how we could check it and maybe create a minimal reproduction project (pull request) for someone to look at.
The goal of the PR is to add the structure for automated tests running on GitHub Actions using Meta XR Simulator.

With the XR Simulator, it's possible to record inputs and periodically take screenshots, both of which (the inputs and the images) get saved to a `.vrs` file. Then we can have the XR Simulator play back those inputs, taking screenshots at the same moments and recording into a new `.vrs` file. Then, using pyvrs and a Python script provided with the XR Simulator, we can compare the screenshots, generate diffs, and decide whether they match or not based on a similarity threshold (see the sketch below).
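As a rough illustration of that comparison step, here's a minimal Python sketch. It assumes pyvrs's `SyncVRSReader` API for reading records and image blocks; the pixel-equality metric and the 0.99 threshold are stand-ins for whatever the script shipped with the XR Simulator actually does:

```python
# Minimal sketch of comparing screenshots from two .vrs files.
# Assumes pyvrs (pip install pyvrs); the simple pixel-equality metric
# and the 0.99 threshold are illustrative, not the simulator's own.
import numpy as np
import pyvrs

def screenshots(path):
    """Yield screenshot frames from a .vrs file as numpy arrays."""
    reader = pyvrs.SyncVRSReader(path)
    for record in reader.filtered_by_fields(record_types={"data"}):
        for block in record.image_blocks:
            yield np.asarray(block)

def frames_match(expected, actual, threshold=0.99):
    """Accept a frame pair if enough pixels are identical."""
    if expected.shape != actual.shape:
        return False
    return np.mean(expected == actual) >= threshold

baseline = list(screenshots("expected.vrs"))
replay = list(screenshots("actual.vrs"))
passed = len(baseline) == len(replay) and all(
    frames_match(e, a) for e, a in zip(baseline, replay)
)
print("PASS" if passed else "FAIL")
```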
Ultimately, I'd like to have a sample project and a `.vrs` file for testing each feature that can be tested via the XR Simulator (which includes passthrough, along with the Scene API and Spatial Anchors, using synthetic environments, which is pretty cool). However, the goal of this PR is just to figure out the structure, including one example `.vrs` file.

This is marked as a draft, because there are still a few things to figure out: