
[Spike] Define MR library to use #587

vjpixel opened this issue Dec 5, 2024 · 4 comments

@vjpixel

vjpixel commented Dec 5, 2024

We need to update our AR viewer to an MR viewer. This will deprecate AR.js.

Tasks:

  • Create a spreadsheet with potential libraries with pros/cons
  • Test every potential solution
  • Present proposed solution
@vjpixel vjpixel added enhancement New feature or request feature labels Dec 5, 2024
@vjpixel vjpixel added this to the 2.0.0 Evolution to MR milestone Dec 5, 2024
@rodrigocam

Viability of a Jandig Version for Meta Horizon OS

Jandig, as a platform for augmented reality (AR) experiences, fundamentally relies on access to a camera feed with sufficient resolution. Unfortunately, Meta currently does not allow developers direct access to the raw camera output on its headsets.

Instead, Meta offers a "camera passthrough feature," where the headset processes the raw image internally and provides developers with a filtered output—a rendered virtual environment. This allows users to see their physical surroundings while wearing the headset, but it prevents our platform from utilizing object tracking or image detection, which are essential for our current approach to AR.
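
For context, if we stay on the web stack, a WebXR "immersive-ar" session (as in Meta Quest Browser) is the entry point: the headset composites our content over the passthrough view, but nothing in this path hands the raw camera frames to our code, which is exactly the limitation above. A minimal TypeScript sketch, assuming the @types/webxr typings:

```typescript
// Minimal sketch: start a passthrough ("immersive-ar") WebXR session.
// The camera image is composited by the headset and never exposed to JS,
// so marker/image detection has nothing to read from.
async function startPassthroughSession(): Promise<XRSession | undefined> {
  if (!navigator.xr) return undefined;
  if (!(await navigator.xr.isSessionSupported("immersive-ar"))) return undefined;
  return navigator.xr.requestSession("immersive-ar", {
    requiredFeatures: ["local-floor"],
  });
}
```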

On September 25, 2024, Meta announced that it plans to release an enhanced passthrough API sometime this year, featuring object tracking and additional capabilities. While this update could potentially enable the features we need, there is no confirmed release date yet.

Workaround Strategy

Since we cannot rely on pattern or image detection to place virtual content dynamically, we can instead create an experience where users manually set up their exhibition, positioning each artwork within the scene beforehand. However, a key limitation is that users will not see "markers" once they remove the headset.

Proposed Experience Flow

  1. The user creates virtual content (2D or 3D, with or without audio).
  2. The user organizes an exhibition containing multiple content objects.
  3. The user begins setting up the physical exhibition:
    • Wear the headset.
    • Open the Jandig app.
    • Select the pre-created exhibition from a menu.
    • Enter "editing mode," where each content object appears as a geometric placeholder (e.g., a cube or frame).
    • Position each object by placing it on a surface.
  4. The user finalizes the setup and saves the scene.
  5. Subsequent visitors enter "view mode" and experience the exhibition as configured by the previous user.
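
To make the editing-mode steps concrete, here is a rough sketch of how placing a placeholder on a surface could work with the WebXR Hit Test module (supported in Meta Quest Browser). The Placement type and both function names are hypothetical, not existing Jandig code:

```typescript
// Hypothetical record of where one content object was placed.
interface Placement {
  contentId: string;
  position: [number, number, number]; // meters, in the local reference space
  orientation: [number, number, number, number]; // unit quaternion (x, y, z, w)
}

// Created once after the session starts; assumes "hit-test" was requested
// as a session feature. The source follows the viewer's gaze.
async function createHitTestSource(session: XRSession): Promise<XRHitTestSource> {
  const viewerSpace = await session.requestReferenceSpace("viewer");
  // requestHitTestSource is optional in the typings because the module
  // may be unavailable; we assume it is present here.
  return session.requestHitTestSource!({ space: viewerSpace });
}

// Called inside the frame callback when the user confirms a placement:
// record where the gaze ray hit a real surface.
function placeOnSurface(
  frame: XRFrame,
  hitSource: XRHitTestSource,
  refSpace: XRReferenceSpace,
  contentId: string
): Placement | null {
  const pose = frame.getHitTestResults(hitSource)[0]?.getPose(refSpace);
  if (!pose) return null;
  const { x, y, z } = pose.transform.position;
  const q = pose.transform.orientation;
  return { contentId, position: [x, y, z], orientation: [q.x, q.y, q.z, q.w] };
}
```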

Downside

Our current experience sparks curiosity in our users by mixing a physical marker with its virtual content. This new approach breaks that pairing: with no physical markers for the artworks to rely upon, only the physical surroundings remain.

Conclusion

Even though this workaround does not provide a fully seamless AR experience, it allows us to start developing for Horizon OS and gain valuable experience with the platform. By the time Meta’s enhanced passthrough feature is released, we will have already laid much of the groundwork, making it easier to integrate object tracking and refine the experience. This marks the beginning of our journey toward making Jandig a key player in Horizon OS's AR ecosystem.

@vjpixel

vjpixel commented Feb 10, 2025 via email

@rodrigocam

Looks great! We also have another option for the setup experience:

  1. As exhibitions are public, anyone can set one up.
  2. The exhibition itself doesn't store the positions of the placed objects.
  3. We store the positions only locally, on the headset device.

I came up with this experience when I imagined wanting to set up an exhibition in my own room, say, that I hadn't created and had no permission to edit. It mirrors how things work today: anyone can print an exhibition's markers and put them up at home.
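
A minimal sketch of what "only locally on the headset device" could look like on the web stack: placements keyed by exhibition id in the in-headset browser's localStorage. The key scheme is made up, and Placement is the illustrative type from the sketch earlier in the thread:

```typescript
// Device-local persistence: the shared exhibition record never changes;
// each headset keeps its own layout. Key scheme is illustrative only.
function savePlacements(exhibitionId: string, placements: Placement[]): void {
  localStorage.setItem(
    `jandig-mr-layout:${exhibitionId}`,
    JSON.stringify(placements)
  );
}

function loadPlacements(exhibitionId: string): Placement[] {
  const raw = localStorage.getItem(`jandig-mr-layout:${exhibitionId}`);
  return raw ? (JSON.parse(raw) as Placement[]) : [];
}
```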

@vjpixel

vjpixel commented Feb 11, 2025

That's an exciting use case! I'll put it in my own words to ensure I understand it.

A curator wants to set up a specific exhibition in a space but did not create it, so they lack permission to edit it.

Anyone being able to set up an exhibition works well when a small audience is visiting. But in an exhibition with a large audience, someone may think moving the artworks is part of the experience and change the setup the curator intended.

So, we should keep the permissions required to edit an exhibition, and add a Clone button so someone can remix an MR exhibit into another space without changing the original.

I created a "fork" of step B based on the use case that the curator wants to set up an exhibition created by another curator.

B2. In a headset, the curator (the UX itself still needs to be figured out; I'm including a step-by-step so we can map all the elements that need to be created as we improve the UX):

  1. Opens the Jandig app. The app opens the last exhibition (AR or MR) visited.
  2. Clicks the hamburger button. The app shows the list of MR exhibitions and, at the bottom, two buttons: AR Exhibitions and Login.
  3. Clicks Login and sees two buttons: Setup Exhibit and Clone Exhibit.
  4. Selects Clone Exhibit. The app shows all MR exhibits (created by any user).
  5. Selects an MR exhibition from the list. A dialogue box asks them to Name your exhibit, with a floating keyboard.
  6. Inputs the name and clicks OK. The app opens the camera view with all the contents laid out in a grid 2 meters away from the curator. The last items in the grid are boxes labeled "Save," "Save and Exit," and "Exit Without Saving."
  7. Pulls the trigger while "touching" an object to "hold" it and move it.
  8. Clicks Save to store the current positions.
  9. Clicks Save and Exit or Exit Without Saving when finished.
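
If it helps the discussion, here is one hypothetical shape for the record that steps 4 through 9 imply; every field name is an assumption, not existing Jandig schema:

```typescript
// Hypothetical data model for the B2 flow: a clone references the original
// exhibit's contents but owns its own name and placements, so the original
// curator's setup is never touched. Placement is the illustrative type
// sketched earlier in the thread.
interface ClonedExhibit {
  id: string;
  name: string;             // entered in step 6 ("Name your exhibit")
  sourceExhibitId: string;  // the MR exhibit picked in step 5
  ownerId: string;          // the cloning curator; only they may edit the clone
  placements: Placement[];  // written by "Save" / "Save and Exit" (steps 8-9)
}
```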

What do you think?

@Kimberlyrenno, I wonder if it would be better to enable curators to clone in the CMS instead of using the MR editor.

Fun fact: I consider the hamburger button in step 2 to be an element that can itself be positioned when the exhibition is set up.
