The speakers discuss the future of text and the benefits of using the Vision Pro for research and development, including access to PDF libraries and WebXR. They emphasize the importance of privacy, the use of Markdown, and the potential for 3D visualizations of text files. They also mention a return to the spirit of the cyberpunk period and the importance of collaboration and AI in the digital age. They express interest in a virtual wall or library proposal, as well as their involvement in various projects, including side projects. (Msub2): Mhmm. Okay. Alright. I think we can get started. And, of course, for the people that would usually be here but couldn't make it, I'm streaming and recording everything, so there'll be VODs for everyone to watch later. So if you guys wanna get started, I think we can go ahead and kick on. (Dene Grigar): Great. Thank you. So I'm going to kick it off by introducing Frode and myself. I'm Dene Grigar from Washington State University Vancouver. I'm located right across the Columbia River from Portland, Oregon, so on the West Coast. And Frode Hegland is living in London, so he is many hours away from us. I think it's about 8 o'clock at night for him in London. Indeed. And it's Easter Sunday, and he's got children, so they're probably running crazy in the house with (Frode Hegland): eggs. Yeah. (Dene Grigar): So the title of our project is The Future of Text in XR, and we are co-PIs on this project. It was funded by the Sloan Foundation for about a quarter of a million dollars, and what we're trying to do is develop and experiment with how to produce text that academics can use in open-source environments like WebXR (VR, XR, AR, extended realities) so that we can work in these spaces without proprietary software. So we really want to learn how to read, manipulate, navigate, and create in space. We think this is going to change the way we think and work in the future. We are also going to draw people together on a project; we're putting together a symposium in November that Frode will talk more about today. And we also publish a book annually on this topic. So I'm gonna turn this over to Frode to take us through the formal slides that he's developed for us. (Frode Hegland): Yeah. Thank you, Dene. So, yeah, we're on opposite sides of the world. I was lucky enough to come to Washington State to meet the team. Hang on, my cursor went funny on one of the screens here. Okay, I gotta be careful how I move it. Okay, now I'm back. Which was absolutely amazing. So anyway, I know I'm preaching to the choir, so this bit's gonna be real brief, but this is how we see the world. What is at stake is basically the future of thought and communication. This is the only time in human history when we are about to start working in XR. That's how we see it. And the opportunity is to dream again. There was so much hypertextual innovation in the sixties and seventies. And once we had flat computer screens with the PCs, people knew, quote unquote, what they were, and there hasn't been much innovation. Now we're all asking, what can this new environment be? The user group: academics. That's who we're funded to work for. We think it's a great initial user group because they are supposed to communicate in clear, specific ways, though, of course, other thinkers are not excluded.
By the way, if anybody wants to interrupt, there's absolutely no problem with a question for clarification or whatever it might be. It's not a long presentation. So the end goal is to allow a user to put on an XR headset of whatever brand, access their PDF library, and read and interact with the documents in XR. The user should feel that this is a place they can work, that this can be their thinking cap. Talking about PDFs there: they're not ideal, but that is the initial starting document for our user community. So to achieve the goal, we have three approaches. One is developing software, which we're gonna talk about a bit more. One is the connective tissue of metadata, how we can go between systems. And finally, and maybe more importantly, dialogue: meetings, symposium, and books. This is how we see things at the moment: the benefits and detriments of using the Vision Pro. We're talking about the Vision Pro not because we have anything against the Quest. The Quest is definitely brilliant for WebXR, probably better than the Vision at the moment. But because of some of its technological advancements, it represents some aspects of the future. The better text rendering, which is phenomenal, and the fantastic passthrough, which allows memory palaces to exist, are really good. And for me, as a developer in this project and outside of it, porting between iOS and visionOS is quite cheap, which is amazing. Detriments that we've noticed so far working in Vision: there's no window management. You can do amazing things with screens all over the place, but they disappear. Really poor keyboard-and-eyes interface: very often when I'm working, and I'm using a keyboard and trackpad in XR, suddenly I look at something, and that's where the cursor wants to be. There's also no WebXR passthrough, really, for when we're in WebXR, and many other version 1 bugs, but this is version 1. On the WebXR side, and I'm sure you would all agree with this, it's an open standard. It's kind of the default VR space. It's standard on many platforms, and it has all the benefits of browser-based distribution; it's not just an app store. And it is a gateway to technologies. It's a view into WebGL experiences, which can run anywhere, not just in a headset. And it can leverage all the web APIs, which is phenomenal. So PDF is... sorry, PDF is the reality of what we have to deal with, but HTML is better. So we're trying to extract HTML documents for better interactions in this environment. By the way, I just noticed the sound here is a bit spatialized, so maybe I should just turn and talk to you guys this way. Does that sound better for you guys? Does it make a difference? Okay. So if I was sharing in a different way, this would be a bit easier to see, but this is recorded within the Vision Pro. I was sitting in Olympia in London. It's really short; I'll play it again. It was very pleasant. A very traditional rectangular writing area, nothing fancy there, but the space kind of allowed my eyes to relax, which is nice. And because I have the headset, because of the research that Dene and I do (we have one in Britain and one in America), we go out in public and use these things every once in a while to see how people react. In this case here, the British Library: you know, a few glances, not very much. And when I took the headset off, the German students to my left said, is that the Apple Vision one? How do they even know that this exists? It's kind of an amazing thing.
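As a rough illustration of the kind of first step the PDF-to-HTML extraction Frode mentions would involve (this is not the project's actual pipeline), here is a minimal sketch using the pdfjs-dist library to pull the text runs out of a PDF in the browser. The file name is a placeholder, and worker setup and error handling are omitted.

```js
// Sketch: extract the text runs from a PDF with pdf.js (pdfjs-dist).
// 'paper.pdf' is a placeholder; GlobalWorkerOptions.workerSrc setup omitted.
import * as pdfjsLib from 'pdfjs-dist';

async function extractText(url) {
  const doc = await pdfjsLib.getDocument(url).promise;
  const pages = [];
  for (let n = 1; n <= doc.numPages; n++) {
    const page = await doc.getPage(n);
    const content = await page.getTextContent();
    // Each item is a positioned text run; joining them loses layout,
    // which is exactly why recovering structure (headings, references)
    // from PDF is the hard part of going to HTML.
    pages.push(content.items.map((item) => item.str).join(' '));
  }
  return pages;
}

extractText('paper.pdf').then((pages) => console.log(pages[0]));
```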
This is no longer esoteric technology. Also, here I felt I was sitting too close to people on the side, so that's one of those things we wouldn't have thought about. Because I'm doing this in a window, I'm just gonna make this a bit larger. So this is just a few seconds of working downstairs the other day. I have a PDF in front of me to read. Entertainment on top, because what I'm doing was quite boring. At the bottom, I am writing in a word processor. Separately, I have messages around, the spatial things that you all know about. But as I've written underneath there, in private, I don't have to worry about how we look to other people. However, it is spatial but forward-oriented. This is a term that Dene only recently put to me, and I'm very grateful, because that's what it is. It's spatial, but you're still expected to face one specific direction. So, the software that we've developed natively: one is a word processor, and it allows you basically to have a huge screen, which is a bit obvious. But it's the same word processing data as if you use the Mac and the iOS version, so you have more control there. Early days; we're building. If you wanna try it, please tell me, you know, I'll get you a code. Similar with the Reader, a PDF viewer interface, tack sharp because it's a native app, and we've gone a lot further than what you can see in this tiny little video. You can now choose to have not just a two-page spread but an all-page spread, for instance. So we're doing some very basic things there. In the Future Text Lab, which is the exciting research we're doing now, we're focused on academic reading in year one and authoring in year two, although, of course, it goes a little bit back and forth. You can't always do one without the other. And at the end, I'm gonna give you a link so you can experience this yourself. And here is a different kind of 360. This is not forward-oriented; it's 360. Anyway, hang on, getting ahead of myself. This is a 360 environment, but it's expected that you're sitting at your desk and swiveling around in a chair, because... hang on, sorry, computer had an issue. Because it's hard to walk around unless you have a dedicated space for VR. Right? There's no passthrough. So you go to our website, you get a circle, and the tiny writing says tap to enter. Once you do, this sphere will live on your hand for extra control. So here we are entering the environment. We're currently focused on references, because that's kind of unique for academics. Tap the little sphere on the hand. One of the controls you have is scaling the text you're reading. If you pinch your three smaller fingers, you get a pointer that you can then use to move this around anywhere in a very large cylinder that you're inside. You can then have further controls by pinching your fingers, such as detaching a reference like this. And just to really show you the hand thing, yeah, here we go. This is what you do. You make a pointing gesture, and the actual laser pointer, excuse me, does not come out of your fingers, which was an early experiment, because then you can't do a pinch. It comes out of your hand. So here we have another piece, and then we can do cool things like find in document. So it finds where that citation is used in the document. So these are quite rudimentary interactions, but I think it shows a really interesting path forward for how people can go through and interact with the reference as a first-class object in a document and not something boring at the end.
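For readers curious how the pinch gestures Frode describes are typically detected on the web, here is a minimal sketch using the WebXR Hand Input API. The distance threshold and the grab action are illustrative assumptions, not the Future Text Lab's actual code; it assumes a session requested with hand-tracking and a reference space obtained elsewhere.

```js
// Sketch: detecting a thumb-to-index pinch with the WebXR Hand Input API.
// Assumes navigator.xr.requestSession('immersive-vr',
//   { requiredFeatures: ['hand-tracking'] }) and a reference space from
// session.requestReferenceSpace('local'). Threshold is an arbitrary example.
const PINCH_DISTANCE_M = 0.02; // ~2 cm between fingertip joints

function isPinching(frame, hand, refSpace) {
  const thumb = frame.getJointPose(hand.get('thumb-tip'), refSpace);
  const index = frame.getJointPose(hand.get('index-finger-tip'), refSpace);
  if (!thumb || !index) return false; // joints may be untracked this frame
  const a = thumb.transform.position;
  const b = index.transform.position;
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z) < PINCH_DISTANCE_M;
}

function onXRFrame(frame, refSpace) {
  for (const source of frame.session.inputSources) {
    if (source.hand && isPinching(frame, source.hand, refSpace)) {
      // e.g. detach the reference card the hand ray is pointing at
    }
  }
}
```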
Any questions on that particular thing? Okay. So, the way we try to connect all these things together, the connective tissue, is Visual-Meta. It's based on a very simple principle. In a paper book, one of the first pages says what the book is. We use BibTeX style, and we put it in an appendix at the back of the document. So we have the same kind of information, plus a lot more. This is Visual-Meta from a real, normal document. It's got all the headings encoded, who wrote what sections, all the citations; all of that good stuff is in there. And the really key thing is up here on the top left, in pure, normal English: it says what it is. The original intent for that was simply that some human on the other side of the world who doesn't know us could then use this metadata to extract it for their reading system, whether it's for a library or for a single document. But what we've started experimenting with as well is acknowledging that this introduction paragraph here, which says what it is, can be used as a prompt for an LLM. So that means that we can be much more liberal, but the data is in here, because this little thing needs to say what it is. So if that little thing says "the following is spatial data using this, that, and the other," from our testing so far, it'll just be understood. It's astounding, these LLMs. I think LLMs are gonna be a very interesting aspect of AI, as it's not controversial at all; I'm sure we'd all agree on that. A last sentence on the metadata: Vint Cerf put it really beautifully in the Communications of the ACM. He wrote that Visual-Meta adds an exploitable, self-contained self-awareness within so many objects in this universe and increases their enduring referenceability. We're trying to do a connected environment that may or may not have an Internet connection happening; those servers go down. So, a brief overview of some current future design thinking. If you have any comments on this, please do stop and tell us. First of all, one of the premises we made, which will probably break, is that for reading normal text, it's nice to have a flat background. Otherwise, it gets very arty and difficult to read. That artiness can be very valuable in some situations, but not all. So imagine you have a document rendered based on HTML. It looks like this. And then you tap on the bottom, you get the outline or table of contents on the right, so you can easily grab it and jump to different parts of the document. Imagine also, on the left, you can write your notes while you're reading the document, which are stored with the document. You can have notes for specific sections, so that's quite fun. Of course, you can have both of them at the same time, or you can hide them. You can basically choose how you wanna read. This is not the most three-dimensional work at all; we're trying to sometimes go crazy and sometimes look at what's useful. In this model, you can also tap at the bottom to go to references, to have a similar interaction to what you saw earlier. They're only represented as a single slide here. And then something I think is kind of cool: you can choose to go to the library either by touching the sphere on your arm or by tapping on the title. And in the library, you have different connected views. You can see, maybe, depending on the resolution, on the bottom here, you can choose to just list things, but you can also have a map like you're seeing now. And if we then tap connect and show, you get two tabs coming out where you can specify what types of things are shown in the library.
Is it persons, titles, locations? What is it? And, also, why are these lines connecting? Are they based on comments, highlights, entities, and so on? This is to be connected via JSON, using the Visual-Meta approach, to an external library on your Linux, Mac, Windows, iOS, Android, whatever other device you have, so that this is your real library that you would work on day to day. And, of course, not just in a little box: you can also have it in AR, and here is a picture. This is Dene sitting to the right here, in the back looking forward. It's a little playful, but I really like the idea of driving or flying through our information. So that's why I thought it was fun to show a bit of AR in that sense. So, Dene, any summary from you, anything you want to add before we do the last two slides? (Dene Grigar): No, but I think it's important to note that this is meant to be not a game or entertainment. This is a really different use of VR, and it's a reality that a lot of folks, you know, that I run with don't really think about. In my program, I oversee a program called Creative Media and Digital Culture, and we make games with VR and have been doing so for quite a while. And I began my career working in virtual reality with MUDs and MOOs and things like that. So, interestingly, this is a totally different use than what a lot of my colleagues are using XR for. And it's not a product we're trying to develop. We're trying to explore: what is the future of this? Can we use it? And I'm in the middle of producing case studies to think about how I'm already working with computer technology to do the kinds of things I do in my normal academic life, and how this environment will change that, and what I need to do and how I need to rethink the way I work, potentially, in this environment. So much of what we're doing is based on this kind of mindset of how XR can change the way academics work. So, Frode, go ahead, and let's go on to the rest of this. (Frode Hegland): Yeah. Absolutely. That is the key point. And the last two slides: one is futureoftext.org. And, something I think is a bit pathetic: if you just search for "the future of text," you'll find us, which, you know, we're a tiny research group, and of course it's nice. However, why isn't Google, Apple, Microsoft, or someone like them doing it to a big degree? Anyway, here you can learn about our symposium, which you're welcome to attend. And on this last slide, I just want to highlight the dialogue we have, all of which you are invited to join. We have Monday meetings for two hours. They are 8 in the morning Pacific time, which is 4 o'clock UK time. We have the annual symposium. The last two have been in London; the next two will be in Washington State. Dene has arranged an incredible venue for us, and really, really interesting people are going to be there, as has been the case for over 10 years now of an annual symposium on the future of text. We've also been publishing a series of books on the future of text. We've got four of them done so far; we started doing one a year. They are compilations, or companions, or whatever you wanna call them, of interesting people and their perspectives. So if you want to submit something to this, please get in touch. And the metadata that I talked about: if you're interested in that, I would be more than pleased to have that dialogue. And, I don't know if you can read it, but futuretextlab.info is the project website.
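To make the Visual-Meta idea Frode describes more concrete, here is a minimal sketch of how a reading system might pull a BibTeX-style Visual-Meta appendix out of a document's extracted plain text. The wrapper markers and the simple key-value parsing below are assumptions for illustration; consult the actual Visual-Meta documentation for the real format.

```js
// Sketch: locating a Visual-Meta-style appendix in extracted document text.
// START/END markers and the `key = {value}` parsing are assumed here for
// illustration; the real spec is BibTeX-style but may differ in detail.
const START = '@{visual-meta-start}';
const END = '@{visual-meta-end}';

function extractVisualMeta(fullText) {
  const start = fullText.indexOf(START);
  const end = fullText.indexOf(END);
  if (start === -1 || end === -1 || end < start) return null; // no appendix

  const block = fullText.slice(start + START.length, end);
  const fields = {};
  // Collect simple BibTeX-ish `key = {value}` pairs.
  for (const match of block.matchAll(/(\w+)\s*=\s*\{([^}]*)\}/g)) {
    fields[match[1]] = match[2];
  }
  return fields; // e.g. { author: '...', title: '...', ... }
}
```

The point of the plain-English introduction Frode mentions is that even a system that has never seen this format, human or LLM, can read the preamble and know how to treat the block that follows.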
My email address is frode@hegland.com, and Dene's is dgrigar@wsu.edu. So thank you very much for your attention. (Dene Grigar): I've been dropping items into the chat, so I dropped in the URLs to the specific project and then also to The Future of Text, which is the main site. I also dropped in some information about Vint Cerf, who's gonna be talking at our conference, at our symposium, and we are looking to bring people to join us and expand. Part of what we're supposed to be doing with the Sloan Foundation grant is building the community, bringing more people together to think about how to build, you know, creative environments like this that are not proprietary, that are open source, and this is a perfect organization to work with, I think, in this regard. (Frode Hegland): Yeah. Absolutely. (Dene Grigar): Do you folks have any questions? (Msub2): Not a specific question, but I just wanted to say, I'm very glad that you guys were able to present today. It's not often that we have presentations on topics that are, like, more academically focused, especially in the WebXR space. I found this really interesting. (Dene Grigar): Thanks. This is the first presentation we've made. We had our three-month mark this past week, so we're just starting to have something to talk about and show. So this is our first presentation together, and we thank you for inviting us. (Msub2): Yeah. Absolutely. I guess, maybe a technical question, just about the current demos that you have. What are they built on? Is it just Three.js? Are you using WebXR layers at all? Because I know, especially with a focus on text, you know, WebXR quad layers would give you an additional sort of bump in resolution that I think would be very helpful. (Dene Grigar): Three.js and JSON are the two? (Frode Hegland): Yeah. We try to not be too tied down to one. As Dene says, it is basically Three.js. Everything that is web-based is WebXR, obviously. But outside of that, we try to experiment. We are not building one commercial project. Mhmm. So sometimes we'll go off, you know; we have a main coder who is very, very good. We also have a community of people who will do crazy stuff, and it really varies, and we have a good dialogue about what people suggest makes sense. (Dene Grigar): We had a really interesting meeting this past week. When we started this project, we thought Apple Vision Pro would be the way to go because it has so many features that lend themselves to, I think, academic use. Right? And it's also not so focused on games. It really is wide open. It's not defined. It's still pretty, I think, pretty oblique about where it's headed in its development. That said, WebXR does not work well in Vision Pro. So we're in Quest 3 and thinking that that's really the better place for it. And that was something we talked about this week. And, also, the other thing is that academics use PDFs. Right? I hate PDFs. I really hate them, yet everything I deal with is in a PDF. Yeah. And so PDFs are not the most conducive things to read in a VR environment. So we're moving towards working with HTML documents, which we think will be a lot more interesting and visually appealing. So we'll see how that goes. That's the direction we're going right now. (Frode Hegland): Yep.
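For context on Msub2's quad-layer suggestion above: the WebXR Layers API lets the compositor sample a quad layer's texture directly, which is why text drawn into one stays sharper than text rendered through the projection layer. A minimal sketch follows; the sizes are arbitrary examples, and it assumes `session`, `gl`, `refSpace`, and a `projectionLayer` were already set up, with the session requested with the `layers` feature.

```js
// Sketch: a WebXR quad layer for crisp text, per the WebXR Layers API.
// Assumes an existing session (requested with { optionalFeatures: ['layers'] }),
// a WebGL context `gl`, a reference space `refSpace`, and a projection layer.
const binding = new XRWebGLBinding(session, gl);

const textLayer = binding.createQuadLayer({
  space: refSpace,
  viewPixelWidth: 1024,  // backing texture resolution (example values)
  viewPixelHeight: 1024,
  width: 0.5,            // quad size in meters; see the spec for exact semantics
  height: 0.5,
});

// The compositor draws the quad layer alongside the projection layer.
session.updateRenderState({ layers: [projectionLayer, textLayer] });

// Then, each frame, render the page of text into the layer's sub-image:
// const subImage = binding.getSubImage(textLayer, frame);
// ...bind subImage.colorTexture to a framebuffer and draw the text into it.
```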
(Msub2): Oh, and one last question. Would it be possible to get, like, a link to a copy of these slides, or something, for future reference, that I can post alongside the VOD? Because I know some slides were a little tough to make out on the video. (Frode Hegland): Yeah. I can do that. Do you prefer PDF, or should I record a video? (Msub2): Whatever is easiest for you. (Frode Hegland): Oh, do I click this or that button? Absolutely. It doesn't make any difference. (Dene Grigar): Do you already have it on YouTube? Why not just give him that link? Share that link with him. (Frode Hegland): Oh, it's been updated. It's always being updated. Yeah. If you copy down my email address, just please email me after this, frode@hegland.com, and I'll just, yeah, I'll upload it. Sure thing. Thanks. So, yesterday, one of my wife's best friends had a birthday, a very nice surprise party at a place called Beaverbrook here in England. It turns out to be Lord Beaverbrook's, who did the aircraft production for World War II. It was this grand place, and one of the people I was talking to is quite high up in the consultancy world, one of the big companies, and it was a very interesting discussion, talking from a very different perspective than what Dene and I are looking at, which is very specific to academics. You know, he wanted: what's the economics behind it? How could you commercialize it? He was looking very much at how you could save costs and all these things, and he gave some examples of what they're doing. And I tried to then explain to him that we're trying to do the opposite. We're trying to expand things and augment things and make it bigger. But even at this point, even with involved people, when someone says, should I buy a headset and try to work in it, our answer at this point, unless it's for research, is no. It's not there yet, which is a bit sad in a way, but it's also super exciting. It's an entirely open opportunity, and I'm so grateful to be here with you guys today, because Apple's not gonna fix this. Microsoft, Meta, Google: no. As a community of people who care, especially within the knowledge tools community, we have to do this ourselves. Oh, yeah. I see, Jin, you said the AVP feels like a fancy dev kit yet. I agree. And, you know, Alan Kay used to say that we should spend $50,000 on our computer if we're researchers, because you're buying the future. And that's absolutely what this is. To me, the Apple Vision Pro feels like the Macintosh, the very first Macintosh. So much potential, but not something you can do all your work on yet. Yeah. Absolutely. (Dene Grigar): The one thing I've found in working with the Apple: of course, there are not many apps for it. Right? So you can download some, but not many. Right? There's so much more for Quest. But the nice thing about it is that it provides an experience. That is what it does, I think, better than anything else right now. I mean, it doesn't make any sense to make Keynote slides while you're in a headset; there's nothing that you gain from that. But you can gain experiences, and that's something that it's done very well. (Speaker 4): Yeah. Maybe I'm trying to be the devil's advocate here. (Frode Hegland): Hello, advocate. (Speaker 4): Sorry? (Frode Hegland): I just said hello, advocate. Okay. (Speaker 4): I've been developing web applications for a couple of years. And every time I do that, I start to curse at iOS and start to curse at Apple.
What makes you think that Apple will not destroy your fancy WebXR ecosystem on Apple Vision Pro, which actually is not there yet? They sort of allow WebXR, as far as I know from Diego from the A-Frame community. So that would be my first question. Apple is not really known for treating open-source WebXR development nicely. (Frode Hegland): So I can answer that. First of all, we don't know. However, we have very strong reason to believe that they are taking this seriously, because we're working with Apple on this. We're working with the Apple Vision Pro team, and their commitment to WebXR is very, very strong. What hampers them more in this case is their obsession, in a good way, with privacy. So, you know, releasing APIs and making things possible is taking a long time, but this is definitely a conscious strategy from Apple: this is a new world, and it can't just be owned by apps. So, I am an absolute Apple fanboy for the products. When it comes to the company itself, I'm perfectly happy to say in a public forum: I can't stand it. You know, it is a very, very commercial company. There's a reason they're worth so much money. So we are in no way tying ourselves to Apple. We're very grateful for their direct involvement. However, the key thing we're trying to do here is, like Dene said, the experiences. As we say in our project, we need to experiment to experience. You can't tell what it's like living in a forest by looking at the forest from the outside, obviously; same thing. So our books, our symposium, the write-ups, the videos, the demos, that's what matters. You know, we're not gambling our life on one specific code thing that'll work forever. That was a bit murky, but I hope it helps a little bit. Yeah. Dene, please. (Dene Grigar): Yeah. Let me add to that too. As I say to our team, we have a fairly large development team. I'm sitting here right now in my office at home, and I have six devices across two desks. And I pivot among all six devices, and that does not include my headsets. I have the Quest 2 here at home and my Vision Pro. And I'm looking at all these and using them during the day as I'm working. And what I'm really wanting to do is to get rid of all of this stuff and have one thing, just one thing that I can work with. And even if that means it's augmented with a keyboard and a trackpad, that's fine. It makes no sense to have this money that I've got invested to do what I have to do on a daily basis. It's crazy. So I think there's an impetus for folks like me that do this kind of research to build something that's available to us, but also that can be used by a lot of other people in an easy way using the web, which is such a gift. Right? So there's a lot of impetus to build this, and Apple sees that, which is interesting. Know that I have a lab at the university where I have 85 Macintoshes dating back to 1977 that I use in my research. And I do a lot of preservation work, conservation work, for games and digital media, born-digital media. And I've been using Macs since the late eighties. So I can't say I'm a fangirl, but so much of what has been produced for creativity has come from Apple. And I find myself needing Apple for the work that I'm given, or the archives that are shared with me, to fix and maintain. Yeah. (Speaker 3): I think we're all waiting for the Apple... the iPhone moment. (Speaker 4): Okay. You're an Apple fangirl, and he's an Apple fanboy?
(Dene Grigar): Actually, I'm not an Apple fangirl. I mean, I think that's what I was trying to say. I'm not denominational. Right? I don't privilege one over the other, but every time I get an archive given to me, 99% of it is built on Apple. The artists are building on Apple. They didn't build on DOS back in the eighties. They were building on Apple products in the eighties and from the nineties onward. And so I get these archives, and I find that I need to have the computer that, you know, existed when these works were produced, so I can see the works in their original glory, so that when I preserve them or reconstruct them, I can do so with some validity. I know what I'm doing. I know what the work really looks like. So the privileging by the creative people that I work with has been Apple, which is what has driven this collection I have of Apple products. Does that make sense? (Speaker 4): Absolutely. Yeah. I'm an Apple fan myself. Before the nineties, when they got to be strange, and then they got better, and then they were the richest people in the world. But I totally agree with your assessment on that. My second question would be: when you refer to JSON, you probably mean JavaScript Object Notation? (Dene Grigar): Yes. I got autocorrected. Ah, it's important. I like Jason, though. It was Jason that made it. Right? Jason made JSON. (Speaker 4): I always get the song too. But my third question would be: why not prefer Markdown over HTML? (Dene Grigar): Oh, that's a good question. We have people on the team that would prefer that. Frode, you wanna talk about that? (Frode Hegland): Yeah. The thing is, I'm not a fan of Markdown after learning what Markdown really is in practical reality. I have two pieces of software that are my own, Author and Reader. Author was trying to export in Markdown and import in Markdown. Turns out that some of the commands, like bold and stuff, are quite industry standard, but most of what you need isn't. So it just really became a mess. And, personally, I like having things clean and there. I don't necessarily understand how to have things in a different place. Now, in some cases, it's entirely appropriate. CSS obviously works. And I talked to the inventor of CSS only two weeks ago, to try to rope him into working with us in XR. So that's an example of where it's useful. But the reality of it is, we work with what we have, which is basically academic repositories and publishers. So it's mostly PDF. When we can, HTML, clearly better. So that's the starting position. We don't want to be in a situation where we say we're not gonna work with it because it's not good enough. We just have to accept it. Now, what we really hope to do in the community is go far, far beyond that. I had a meeting with one of the large computer professional organizations a few months back, where we talked about augmenting the PDF using Visual-Meta. And one of the people there said, no, PDF is rubbish, old-fashioned; we should go beyond it. I said, I agree on every point. What should we use as the standard to do that? And no one had any answers. So all that says to me is, if someone has something to suggest, we absolutely have to try to figure out how that works. But the key thing is, despite, you know, liking the Apple headset and whatnot, it has to be transferable. You know, it has to be open. It has to be clear. (Speaker 4): I totally agree. (Frode Hegland): You know, there's a lot at stake here.
This is, as I said in the introduction, and I really wanna highlight it: in the near future, let's say in 10 years, everybody will have a headset in their bag. But whether it's going to be useful to work with, like a thinking cap, which is how we think of it now, or if it's going to be for watching movies, playing games, and so on, that's entirely up to people like us. And it's not gonna make as much money, but it will move us forward. (Dene Grigar): Jin just wrote something in the chat. Jin, can you talk about this? "I'm interested in how we can recycle existing docs and knowledge bases into 3D formats." Can you talk about this a little bit more? (Jin_dnakvr): Yeah. There's a presentation on it on YouTube as well; I'll link the video later. But I could also just spawn something as, like, an example of such. Essentially, it's a project where we're using Markdown and then we're just converting that into glTF files in a way that is also optimized. Because if you just do it, kind of, like, as textures, you know, it would be way too big of a file. So we atlas the fonts, and then we also do some instancing as well, so that you can read everything in a way that's clear and sharp. And I've got a sort of graphic here I could just share, and I could drop an example glTF as well. (Msub2): Is this the thing that... I forget if it was... (Jin_dnakvr): Avaer's, which I think it was. (Msub2): It was Avaer's. (Frode Hegland): Yeah. glTF is definitely something we're looking at. And, right now, we're also trying to figure out what should be in 3D and what isn't useful in 3D, which is a very interesting discussion. (Dene Grigar): Yeah. I love that idea, Jin, because, you know, we teach 3D modeling in our program. Our students are very comfortable with turning things, you know, born-digital objects, into 3D, but I don't wanna work in 2D in a 3D space. That just seems counterproductive. So what does an essay look like that's three-dimensional in 3D space? What's the value of that? (Jin_dnakvr): Well, we're kind of going for different template layouts. You know, it's like you can maybe do a sort of horizontal or a grid, you know, expansion of a multi-page document. You can do something vertical. But then I'd like to experiment with maybe trees of conversation and whatnot. Like, if you were to try and describe a GitHub discussion or thread where people are responding to each other. (Speaker 4): Jin answered this right now in the chat: one advantage of 3D text could be, for example, Kanban boards. (Frode Hegland): Hang on. Hang on. I can't hear you very well. I'm gonna have to go closer to you. Where are you? The spatial audio is cool, but it's not always... (Msub2): If you type in "slash audio mode 0," I believe that disables the spatialized audio. (Frode Hegland): Okay. Thank you. Right. Yeah. Please continue. I apologize. (Speaker 4): You're referring to me? (Frode Hegland): Yeah, I think you were talking and I just couldn't hear you well, so I just had to change the audio, but now I can hear you all very well. Thank you. (Speaker 4): What's the trick? Okay. I'm not sure if Jin is speaking, but one advantage of 3D text, could it be, for example, Kanban boards, where you put your virtual post-its on a 3D Kanban board surface? (Dene Grigar): Mhmm.
(Frode Hegland): Yeah. Uh-huh. Absolutely. Sorry. (Dene Grigar): And, Jin, what were you saying? Because you had another... you were talking about something else. Visual... the 3D visualizations of some sort. (Jin_dnakvr): Yeah. Can you hear me? (Frode Hegland): Yeah. We can hear you. Okay. (Jin_dnakvr): Cool. Yeah. Here's an example. I just found one of the text files, finally. It's upside down. What I was saying is, layouts, you know, so, like, horizontal, vertical, but then I wanna experiment... (Speaker 4): Can we get rid of the spatial audio? (Msub2): Slash audio mode 0, in chat. (Jin_dnakvr): Yeah. There we go. But if you walk up to this, you can kind of see how crisp the text looks. And the file size is kept really low because of the atlasing and instancing. So, how many draw calls is it? It is 14 draw calls, probably the images. Yeah. I wanna experiment with different types of... let me grab another image as to, like, how things can look visualized. Okay. (Dene Grigar): Yeah. (Frode Hegland): That's why we think in terms of libraries, by the way: broken links are too ever-present and too annoying and too dangerous. (Jin_dnakvr): Alright. Here's another image, then. (Dene Grigar): Okay. (Jin_dnakvr): So, you know, something that's linear, like a document, versus if you were to do text-to-glTF of, like, a discussion or a thread, it would, you know, take a whole different shape or dynamic. So, just finding ways to kind of parse JSON, and Markdown to JSON, to create new sorts of visualizations of it in space. And also, like, you can go beyond the page. As you can see, these images can also go a little bit on the z-axis and come a little bit out of the page, and it would be cool to, like, pluck things out, you know. If it's a glTF file where the images are nodes, then you could just kind of, like, grab those. (Dene Grigar): Yeah. So I built a virtual library museum that's called The NEXT. And the idea behind it was that the work that we're putting in there, the kind of work that we're collecting, is interactive, participatory, and experiential works. Right? There are a lot of old Flash works and things like that. And I've written another little grant that allows us to do some VR visualizations of the space. Right now, it's still 2D, but we do have a visualization space where you can actually see some of the physical artifacts that are in the boxes in my lab, that are now online, that you can actually manipulate with your hands in the 3D space, but it's still not satisfactory. Right? Because we wanna be 3D in a 3D environment. So, I love what I'm seeing here, because this is what we're visualizing even for the future of text: what does it mean to read in a 3D space? What does text need to look like? What does a library look like in this space? (Jin_dnakvr): Yeah. And I'm really interested in, like, creating war rooms, or, you know, like, if you were to wanna create a research hub, you could have a whole bunch of different pieces of text around you. Maybe there could be an AI agent that also, using the eye tracking, can kind of tell what you're interested in and kind of expand, you know; you can discuss topics with it, and maybe with a vector database as well, you could kinda just talk with your knowledge base. (Frode Hegland): That is something... sorry. Please, Dene.
(Dene Grigar): I was gonna say, have you ever been in a CAVE, you know, the CAVE environments that were really popular in the nineties and 2000s? (Jin_dnakvr): I've seen pictures. (Dene Grigar): Yeah. I mean, they cost millions of dollars to build. Right? They're very expensive. But what I've always wanted was a CAVE environment for text, where you can actually manipulate the text inside that space. That's where I'm headed in my head for the project we're doing, but, you know, what you're saying makes exact sense to me too. (Frode Hegland): So, my academic affiliation, in addition to Dene's amazing Washington State University lab, is with the University of Southampton here in the UK. We had a discussion, some of us, two weeks ago on something similar to this, and one of the guys has made an LLM that represents a philosopher, a specific philosopher, that is built on that philosopher's contributions, not just about this person. And it's not supposed to answer questions about what that philosopher thinks or something that is known. It's supposed to be a provocation, to help both that philosopher and us ask questions about crazy things, but, based on that perspective, give us new things. So, similar to what you're talking about, here's what we're looking at. Right now we're being very careful, Dene and I, to really focus on XR, not just suddenly go into AI space, but we're looking at a fun thing where you have this philosopher as an agent, an LLM, whatever the language might be. I actually want to call it Ellen, like the lady's name, Ellen, for LLM. Anyway, that's just stupid. Right? Being in your environment and being aware of what's going on in the environment. So if you're looking at even a traditional document, very similar to what you said, this thing might even speak and say: from my perspective, what you're reading right now doesn't make sense because of these other things. So the idea would be to populate a space with different views, graphical views like you showed, which are excellent and which we need to work on, and a few different types of these agents that are in different spaces. So when you go through knowledge of whatever shape or form or content, you get these perspectives. And my dream, personally, is to allow you to build that yourself, because... you're talking about kind of a murder wall from a TV show, or a Kanban? Absolutely, 100%. Imagine you build a 3D version of that, which is also a memory palace, and it helps you understand a certain thing, and you have different rooms for different aspects of your work. Because then, say you have a furious dialogue with a friend. You have a big argument over something, and you're both wearing headsets, and you say to your friend that they're looking at it wrong, and you do some kind of a gesture in XR to collapse all of that into a thing. And the question is, what is that thing that you can then throw to your friend, which will expand in their point of view, so they see, quite literally, your mental point of view on the issue? And that will, of course, also be influenced by whatever AI agents or LLMs or whatever they have in their view. So instead of just arguing isolated strings of knowledge, we can argue and have discussions based on whole transmitted perspectives. Sorry. That took a while. (Jin_dnakvr): That's good. Yeah.
Kinda reminds me, you know, when the whole World Wide Web was emerging, technologists were using it to collaborate with one another. And I think spatial tech is going to evolve much in the same way, where we find use cases where we can use it for collaboration and build tools to kind of, you know, improve that information bandwidth between nodes. (Speaker 4): Yes. (Frode Hegland): Yeah. Absolutely. I've been lucky enough to have a few conversations with Tim Berners-Lee, and I think he is much wiser than how he generally comes across; he has just such deep perspective. We have to get back to making this knowledge-connected world for deep dialogue and rich interactions, and not just watching, you know, TV. (Dene Grigar): We're moving towards the convergence. Right? I mean, AI is driving some of this too. How do we converge these technologies that have been sitting there? The web, as it's currently being used and expressed, is not fully what the web can be. We're not really using the web to its full capacity. So how can we leverage what's there to build a better environment? What is AI? How is that going to work with us? What about XR? How do these things come together so that we can start to rethink the way we're working and thinking? I mean, it's gonna change the way we function. (Frode Hegland): Yeah. Yeah. That's the important thing, and I also believe that the fact that AI and VR are coming into maturity, useful maturity, at the same time is crucially important, because if we don't have VR, we can't deal with the AI in an intelligent way. (Dene Grigar): I think we're hearkening back. I've been laughingly saying we're hearkening back to the cyberpunk period. Right? The eighties of Neuromancer, and Synners, like Pat Cadigan's. Those works were looking at cybernetic space as something that was, you know, a consensual hallucination, as Gibson called it, but also ripe, just completely ripe, for redefining what it is to be human. Right? And I think that's what makes it interesting for me: to see this return, and then, what have we learned in these 30, you know, about 40 years, that we can rethink all of this with? (Frode Hegland): Yep. That's the key. I'm sorry. I've just been told that our guests are leaving, so I am afraid I have to leave. Too many family friends to insult. I'm so grateful for this time, and I would strongly invite you to join us whenever you want. If you join us on a Monday, there's no reason for you to be there every Monday. If you can be there for an hour, half an hour, you know, if you have to leave early, just say bye. It's not a very formal thing. It is extremely casual. It is by people who love the potential of this technology like you guys do. (Dene Grigar): I'm gonna drop the URL to the site for everybody in here, so just give me a second. (Frode Hegland): Thank you very much. Thank you. I'm gonna have to run. People are running out the door. Bye, guys. And happy Easter, and thank you very much for today. (Msub2): Happy Easter. Thank you so much for coming. (Dene Grigar): That's the Zoom chat, so you can meet me there anytime. So thank you, folks, for your time, and I hope to see you again. I'll be in your Discord. So I'll be hanging out there with you folks, and I'll come to these events if you keep me posted. Mhmm. (Msub2): Sounds good. (Speaker 3): Nice. I've got a question.
(Speaker 3): Yeah. Hi, Ken. Is that you? (Jin_dnakvr): Is this you in the pic? (Speaker 3): That's you? No. I don't think so. (Jin_dnakvr): Okay. Never mind. (Speaker 3): No. Okay. Yeah. No. Interesting. (Msub2): What were you asking about, Carlos? (Speaker 3): Yeah. I was asking why I can't log in to Hubs to get my avatar that I already have. (Msub2): Oh, so... (Speaker 3): Is that locked out? (Msub2): So, we used to host this on Mozilla's Hubs server, but this is actually hosted on my own instance of Hubs. (Speaker 3): I see. I got it. So I can't log in to it. Okay. (Msub2): Yeah. I still need to... I think, theoretically, I should be able to turn on user registrations for this now, because I think it's in a pretty stable state. (Speaker 3): Okay. (Msub2): But, yeah, that's why some stuff you might not have access to or be familiar with. (Speaker 3): Yeah. Okay. No problem. Everybody, happy Easter. (Msub2): Yeah. Happy Easter. (Speaker 3): I don't celebrate it, but I respect it for the people that do care about it. Jin, how are you? It's been a while. (Jin_dnakvr): Good. It was a refreshing discussion that we had today, and Avaer is here now; I invited him when I told him I'm showing his project. (Speaker 3): You haven't been to the MetaRick crew in a while? (Jin_dnakvr): I haven't been to a bunch of things. I've been just, like, really heads-down with these avatar projects, and I'm, like, drowning in avatar assets. (Speaker 3): Such as wearables and everything else? (Jin_dnakvr): It's a lot of stuff. Tens of thousands of assets. (Msub2): Yeah. I saw when we realized you had made the mistake on Twitter. I was just like, oof. (Jin_dnakvr): The mistake? (Msub2): The Arweave thing. (Jin_dnakvr): Oh, yeah, the off-by-one. I fixed it. It cost $300, but I fixed it. And it was a good learning lesson. I thought I would be wasting block space, but now you could do name collisions and overwrite stuff. That's cool. (Speaker 3): Learn from your mistakes. (Jin_dnakvr): It's all school supplies. In fact, the school supplies were donated to me, so that's cool. And the founder of Arweave reached out, so he's actually really interested in, like, metaverse preservation and these little side projects I'm working on. And, you know, it's building tools to kind of create something like the Wayback Machine, but for virtual worlds. (Msub2): Yeah. It was cool to hear Dene did, like, preservation with, like, old Apple stuff too, because I was also briefly interested in that for a time. I got this very old, very obscure QuickTime Macromedia Director game from Japan that was, like, made for, god, what is it, Mac OS 9? Mac OS 8? I think it was released in, like, 1999, and there's, like, no Internet record of it anywhere. So that's, like, one of my favorite, I guess, acquisitions from when I was still in my kind of rare retro game collection phase. And it's on Macintosh Garden now, so it's no longer obscure. (Jin_dnakvr): I just got a MacBook. I don't really see the hype. I mean, it's pretty good hardware, I guess. (Speaker 3): Did you get the M3? (Jin_dnakvr): Yeah. Mhmm. (Msub2): Oh, wow. (Jin_dnakvr): It's cool, I guess. But $2,500? No.
No. I got the cheapest MacBook Air. (Speaker 3): You got the... (Dene Grigar): The $1,800 one? (Jin_dnakvr): No. It was, like, $1,400, with 16 gigs of RAM. Mhmm. And, yeah, I needed it for Vision Pro type stuff. Mhmm. And, yeah, Xcode eats up a lot of storage. (Speaker 4): So, Jin, you do have an Apple Vision Pro? (Jin_dnakvr): Yeah. I just got a head strap for it, because it is kind of heavy. I feel the weight for sure, so I needed to get an aftermarket thing that just got delivered last week. (Speaker 3): Which one did you get? (Jin_dnakvr): Panda. Yeah. (Speaker 3): It has this strap that goes over the top of the head? (Jin_dnakvr): Yeah. Yeah. I just gotta hook it up and try that out. But, I mean, I thought that, you know, the killer thing with it would be the virtual display from the MacBook Air. But, compared to Moonlight and Sunshine, which let you do that from kind of any PC or Linux, Moonlight and Sunshine are just really good as well. It's just that when you have a MacBook Air connected to the Vision Pro, you can use the keyboard and mouse on, like, any screen you're looking at. It has, like, cursor sharing and keyboard sharing across everything. Mhmm. I'm just kind of taking note of all these little quality-of-life things. It is a good productivity-type setup, but I do like what I see from people posting WebXR battle stations, and the Quest 3 with, like, live editing in AR. Mhmm. Mhmm. That's something that we don't really have with the Vision Pro. I hope we get immersive AR and multi-app type things, like what Rik is working on with the overlay browser. Mhmm. It's fixed-position only, but I really hope that, you know, that kinda brings back some of those Exokit ideas we tried back in the day. (Msub2): It's important to remember with that proposal, too, that when the overlay browser is open, your input is, like, fixed to the browser until it's dismissed again. (Jin_dnakvr): Yeah. So what's the good part of that? I mean, it's just, like, you know, bringing up a fixed-positional WebXR overlay... what are the use cases of that versus something that is, you know, volumetric? (Msub2): I don't know, like, if he had a specific use case in mind for it. The one thing that, off the top of my head, I can imagine is, like, you know, for an ecommerce sort of thing: you could be, like, in the virtual shop or whatever, and then you could interact with something, and then that calls the virtual overlay up for you to actually do, like, the payment interaction. (Jin_dnakvr): Oh, like a browser? (Msub2): Yeah. Because before, you would have to be kicked out of it, if you're in headset, and then interact with the flat 2D panel. And with this, now that you can use the browser while in an immersive experience, that kind of streamlines that a little bit. Of course, the other thing about that, though, is that there are, like, no virtual shops to really do this in at the moment. (Jin_dnakvr): I know. Yeah. (Msub2): Rebuff Reality had a cyber shop thing for the longest time, but it's not active anymore. You could inspect all the different wearables and stuff. I contracted with them way back. That was, like, one of my very first WebXR gigs. (Jin_dnakvr): There was this other one.
What was it? (Msub2): It was on the new tab... (Jin_dnakvr): From Hubs. What was it? Either way, like, one thing I like about the Vision Pro is it's bringing multi-apps back into a kind of consciousness. Mhmm. And now we have, like, these analogies; like, we can refer to volumes, and people all kind of get it. Whereas, talking about multi-apps back in the day, it felt so niche and misunderstood. And it has kindled an exciting new wave of interest. And I think, you know, whatever the proposal is right now, eventually we'll kind of see, like, you know, how volumes are just gonna be better. And if the browser, even if it's behind a flag, if we're gonna just experiment with some of those ideas again, I would do so many more demo videos and, like, grants and stuff to just show what's possible, because it's, like, the best path for a positive-sum future between WebXR and native. They're complementary. They don't have to compete. You suddenly get new distribution, you know, where your WebXR gadgets can be overlays with all these other programs. (Msub2): It'd be really cool to see a refresh of Metachromium, but, like, based on OpenXR-type stuff. Because, I remember, from what I looked at, Avaer was hooking into, like, OpenVR stuff specifically to do the kind of overlays, like hooking into, like, the depth buffers and other stuff, I think. But I know there's, like, an OpenXR overlay extension, but I think it's only supported on Monado, actually. (Jin_dnakvr): I heard someone is working on something based on Electron, just adding WebXR support to an Electron app. (Msub2): Yeah. Because, I mean, we know that's possible. Right? Ever since James did that one experiment with it, where you had to, like, dig into, like, Windows ACLs to figure out why Chrome wasn't letting the WebXR experience happen. (Jin_dnakvr): Do you think that's maybe, like, easier for maintenance than, like, maintaining Chromium? Because compiling Chromium is... (Msub2): It's quite a beast. It is a chore. Yeah. I remember doing patches for, like, WebXR Chromium stuff on my computer. I have a Ryzen 5 2600, 32 gigs of RAM, and, like, a from-scratch Chromium compile takes, like, an hour and a half. (Speaker 3): Oof. Yeah. (Speaker 4): The James... who is the James you're referring to? (Msub2): Oh, James Baicoianu, JanusWeb. (Jin_dnakvr): Yeah. He wrote a post on Electron and WebXR that's still kind of, like, the most-cited thing for Electron and WebXR. Mhmm. (Speaker 3): What do you guys think of Frame? What Gabriel's doing? (Msub2): I like Frame. I like the stuff that he's doing with it. We ran, not last month, I think, but the month before that, the meetup in Frame, because I hadn't gotten my Hubs instance stood up completely yet, and I was pretty impressed. (Speaker 3): It seems to work fairly well, Mhmm, for what it is. What is all the turmoil with Hubs now? That they basically wanna make money now, is that correct? (Msub2): Pretty much. So Hubs, which, honestly, I'm shocked lasted as long as it did after they laid off their XR team of three people in the mass layoff... Mhmm. So now, I guess, they're finally deciding to exit, just as they were working on, like, some pretty major new features. Like, they were working on a whole, like, node-based interactive scripting thing, behavior graphs, for Hubs.
That sounds like it would have been, like, really cool, but, yeah, all of it's getting kind of relegated to the community now. A lot of questions are still kind of up in the air about, like, what will happen with ownership of Hubs and the code, and, like, what's gonna happen to the community Discord and all of that. The main thing that they've been doing the last little bit is, like, helping people transition to Community Edition. Community Edition requires Kubernetes, so, obviously, it's not the most approachable thing for a lot of people. Like, even myself, I consider myself, like, fairly proficient with web stuff, but Kubernetes is a headache. I ended up going with, like, a managed solution after trying and failing to set up a cluster myself on, like, a VPS. So they've got, like, instructions for, you know, getting it on the major cloud providers and whatnot. I think they just recently released a tool that lets people download their data from, like, Mozilla's hosted server. Mhmm. And they've already sort of discontinued allowing people to subscribe to, like, their old AWS Hubs Cloud things. So, really, right now, we're just waiting to hear, like, what does Mozilla want to do with Hubs now? There's a group of people that have already started kind of, like, forming a kind of community effort around it, to, like, start maintaining things, but it's still very early days for that. And I think there's not gonna be a ton of movement on that until they actually communicate with Mozilla and sort of figure out, like, what can the community have and what will Mozilla not give them. They're probably gonna have to rename it, I would imagine. And then... (Speaker 3): Rebrand. (Msub2): Rebrand. Yeah. So goodbye to the duck. (Speaker 3): Msub, do I know you? (Msub2): As in? (Speaker 3): As in, on Discord or something. Have we interacted before? (Msub2): I'm not sure. I am Msub2 on the Discord, but I don't know your... okay... handle offhand. (Speaker 3): Okay. I'm Carlos, and I deal with Matt Cool and with Michael Moran and Dom, and... what was the other guy's name? A couple of other people that were part of this group. And then I ran into Michael Moran, and he told me about all the upheaval with Hubs and so forth. (Msub2): Mhmm. (Speaker 3): And so, I'm just sad to see it go in the direction it is, but I understand. People just realize you have to have these servers running and you have to pay the bill, and so you need to make money. (Msub2): Yeah. (Speaker 3): So it is. (Msub2): It's unfortunate, but even when I saw the news, I was disappointed, but at the same time, not surprised. (Speaker 3): Mhmm. Yeah. Neither was I. Yeah. I mean, I've seen it actually with a bunch of platforms: Altspace, High Fidelity, Wave, VR or XR, whatever. They all come and go. Well, I'm gonna bow out. I just wanted to stop in. Jin, I saw your post, and I always kinda keep an eye out, because you're always doing very interesting things. Wish you luck. Do not drown. If you need a life saver, I'm happy to send you one, even if it's candy. (Jin_dnakvr): Appreciate it. (Speaker 3): And hope to see you around MetaRick's. MetaRick is working with a couple of actors trying to set up a new play, and working hard on the world. (Jin_dnakvr): In The Architect? (Speaker 3): Yeah. With Deirdre and with Steve Brusco, or... I can't even pronounce his last name.
But Deirdre Lyons (Jin_dnakvr): Interesting. (Speaker 3): who did Respite, and is part of the Ferryman Collective, which is a bunch of actors that do VR projects. Oh, (Jin_dnakvr): interesting. (Speaker 3): Yeah. And MetaRick is actually now Quest compatible. Remember, in his world, he had a prison that you went into automatically if you were on Quest? Do you recall that? (Jin_dnakvr): Yes. (Speaker 3): Well, he's moved away from that, and now he's super-optimizing worlds so that you can go in with a Quest. So (Jin_dnakvr): That's good. I mean, if he's doing virtual production, the Quest 3 has that inside-out tracking (Speaker 3): That's right. (Jin_dnakvr): for full body. So it's like you can have anybody be a remote actor with a Quest 3. (Speaker 3): Yeah. That's nice. Do you have the Quest 3 too? (Jin_dnakvr): No. No. I feel like I should, because it's so good for WebXR. It's just that (Speaker 3): You can buy it on Facebook Marketplace for really cheap and not pay tax on it. I've bought a lot of stuff off of Facebook Marketplace, and one of the things I did get was a Quest 2. I did buy the Quest 3 brand new when it came out in October. But, yeah, there's a lot of people that buy them and then don't want them. And they'll sell them for 300, 350, which is below market price. So (Jin_dnakvr): Oh, good tip. (Speaker 3): Yeah. Look into that. (Jin_dnakvr): Yeah. I do buy a lot of secondhand-type stuff. (Frode Hegland): Like (Speaker 3): Yeah. If it works, you know, why not? Exactly. (Msub2): Yeah. I kinda already assumed you had a Quest 3, to be honest, Jin. (Jin_dnakvr): I had a Quest 2, and I kept reading that it wasn't that big of an upgrade. No. (Speaker 3): It's pretty good. Oh, and a (Jin_dnakvr): Quest Pro. So I bought the Quest Pro, and I'm like, (Speaker 3): No. That was a mistake. Yeah. (Frode Hegland): No. The (Msub2): The pass-through of the Quest 3 is, like, a night-and-day difference from the Quest Pro. And apparently it got even better in the most recent update too. Like, the exposure has been adjusted, so it's, like, even easier to read your phone in pass-through now. (Speaker 3): Right. And, also, they added 2 more speakers on each side of the head. So I went into VRChat and went into one of these music events, and I could go, wow, that really sounds good now. And you have to get a different strap, obviously, which I did. And the strap I got has a halo that goes over the top of your head, so a lot of the weight is taken off, you know, the back of your head and your face. And then I have it tethered to a machine so I can do PCVR. But I'm really happy with it. (Jin_dnakvr): That's what I'm kind of stuck on, because all these devices... I wanna do streaming to any of my headsets, Quest or a Vision Pro, but it always requires booting Windows. And I'm a Linux person. Mhmm. And now I'm just thinking, like, maybe my best move is to virtualize Windows with GPU pass-through. And it's a whole thing that I'd have to do. And right now, I'm just kind of in an archiving sort of mindset. Just dealing a lot with metadata and all these other things that I will eventually wanna build into a cool museum or library in a spatial context, especially with what ... is working on with text-to ...
But, yeah, it's gonna take me a while to get all the assets prepped. I'm doing a lot with, also, like, making sure that the provenance is there, like, injecting the metadata into the glTF files as well. There's a lot that goes into, sort of, like, preservation and archiving. (Speaker 3): Absolutely. (Msub2): You mentioned streaming to headset. You're talking about, like, streaming VR content type? (Jin_dnakvr): ALVR-type stuff. (Msub2): Yeah. (Jin_dnakvr): Have you tried it? (Msub2): I mean, ALVR kind of works. (Jin_dnakvr): It's getting way better, but, yeah, from a Linux host, still a pain. Maybe it's just the headset, like, or the router. There's so many different things I hear, you know? It's, like, finicky. It's, like, you know, if your microwave is on, if someone else is using the... you know. I will maybe get, like, a separate router, something for my desktop. I have (Speaker 3): Or just (Jin_dnakvr): Wi-Fi 6E. (Msub2): Enclose your room in a Faraday cage, and then there's no interference. (Jin_dnakvr): Yeah. (Speaker 3): Jin, one project that I'm involved with is in the chat right now. I want you to have a look at it. It's a very ambitious project, spearheaded by a guy by the name of Julian Reyes. He's out in the Bay Area. (Jin_dnakvr): That's a beautiful-looking architecture. Oh, that's AI. (Speaker 3): Yeah, he's got a lot of fingers in all kinds of projects. One of the things he thrives on is DJing. In fact, he's coming to Texas, where I am, for the eclipse, to do this mega concert with a bunch of DJs for 3 days or something like that. But he's spearheading this, and I'm helping him with doing video in-world. And what he wants to do is put, like, embassies in all the different platforms and then link them together. (Jin_dnakvr): Obviously (Speaker 3): Yeah. Obviously, it doesn't work right now for many of the platforms, but he's trying to set up the infrastructure, a solid system, that he can then take to people like VRChat or Rec Room or so forth, and the developers there might give him the time of day, to maybe find a way to integrate the embassy inside of the platform, to where you could move from one platform to another kind of thing. I mean, that's what we're all trying, I think, or a lot of us are trying to do: to have one place that you can move around in seamlessly. Right? (Jin_dnakvr): Yeah. (Speaker 3): Ready Player Me kind of thing. But anyway (Jin_dnakvr): I have a proposal for a museum of avatars that I have someone potentially interested in helping to fund. But this proposal, it kinda reminds me of what I'm looking at here, except it's more ambitious, having it in lots of different platforms. It's a really cool project. I think there is some alignment between an avatar museum and this one. (Speaker 3): Yeah. Yeah. What is it that you say you're drowning in, avatars or assets? What can you tell me, in 20-plus words? (Jin_dnakvr): Well, same thing, you know? I mean, I'm archiving wearables, and I'm also, like, helping some collections create VRMs. So I'm literally dealing with, like, thousands of assets to create avatars with, and converting a whole bunch of assets, and, like, yeah, just dealing a lot with avatars and wearables lately. (Speaker 3): And so those are the components that it takes to create an avatar, the assets that you need to do that. (Jin_dnakvr): Yeah.
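On Jin's earlier point about injecting metadata into the glTF files: glTF 2.0 allows arbitrary application-specific data in "extras" objects, so provenance can travel inside the asset itself. Here is a minimal sketch for a JSON .gltf file (a binary .glb or .vrm would need its JSON chunk extracted first); the provenance field names and URL are invented for illustration, not a standard.

```typescript
// add-provenance.ts - stamp provenance metadata into a .gltf (JSON) file.
// glTF 2.0 permits an "extras" object on most properties, including
// "asset". The provenance fields below are illustrative assumptions.
import { readFileSync, writeFileSync } from 'node:fs';

const path = process.argv[2] ?? 'model.gltf';
const gltf = JSON.parse(readFileSync(path, 'utf8'));

gltf.asset ??= { version: '2.0' };
gltf.asset.extras = {
  ...gltf.asset.extras, // keep anything already recorded
  provenance: {
    creator: 'unknown',                    // fill in from your records
    source: 'https://example.com/archive', // hypothetical origin URL
    archivedAt: new Date().toISOString(),
  },
};

writeFileSync(path, JSON.stringify(gltf, null, 2));
console.log(`Wrote provenance extras to ${path}`);
```

Because extras survive round-trips through spec-compliant tooling, this is one low-friction way to keep archival records attached to the asset rather than in a separate database.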
But there's so much more I'm involved in, like, the tools to build them. Like, Character Studio is something that M3 is working on, an open-source avatar builder. Mhmm. But it's become more of a Swiss Army knife. Like, we can also use it to kind of construct all these avatars. And then, so, yeah, for creating, like, a lot of avatars all at once, you kind of, like, gather all the ingredients and then the different layers, and then you have to build tools to do quality assurance. And there's a lot else you have to kind of make sure of: that they have the right shaders, that the armatures are good and consistent. And (Speaker 3): and then make (Jin_dnakvr): sure that, yeah. (Speaker 3): This is Blender and Unity and whatever other (Jin_dnakvr): Unity, JavaScript, bash scripts, a lot of little micro-tools to kinda help organize data. Yeah. (Speaker 3): Are you getting paid to do this, or is this a project of your own? (Jin_dnakvr): Half and half. The avatar project's paid, but Mhmm. The archiving stuff is a side project. (Speaker 3): Mhmm. (Jin_dnakvr): It is kind of community-supported a bit, though. Like, some donations flew in a couple years ago. (Speaker 3): Mhmm. Do you have a Patreon for that, or are you using another system? (Jin_dnakvr): Another system. I can link it. It's called Juicebox, and we kind of need to refresh it, but it has a way of... it's kinda like Patreon, but for crypto. (Speaker 3): Right. Well, you could send Julian a connect on LinkedIn, and he's very, very good at responding on LinkedIn. Oh, never mind. Well, I can put you in touch with him. And if you look over his website, or the project website, and then see how you guys align with each other, I'm happy to put you in touch with him and bring you into the project, if it (Jin_dnakvr): fits. What's the timeline for this museum? (Speaker 3): Well, right now, if you look at the team, all those people are volunteers. And so there are people that are building the assets, others that are doing the PR stuff. I'm doing the video, and blah, blah, blah. And all the descriptions are there. So just have a look at it, and you can reach out to me on Discord, ping me and say, hey, I'm ready to look into this, or not, you know? I just think you're one of the more talented people in the XR environment. (Jin_dnakvr): Oh, Keyframe. (Speaker 3): Yeah. You know him? (Jin_dnakvr): Yeah. He shows up at OMI meetings. (Speaker 3): Yeah. (Jin_dnakvr): Oh, Ivo. Oh, yeah. I know these people. (Speaker 3): Yeah. I know you do. It's a small community. (Jin_dnakvr): Yeah. Oh, this is (Speaker 3): cool. Okay. And Msub, is that what you go by, Msub2? (Msub2): Yeah. Msub2, or just M2 for short. (Speaker 3): Okay. And what's your thing? What is it that you do? (Msub2): What I do? So, I mean, my day job, I work for Zesty. We do Zesty Market, which is basically, like, banner ads for WebXR experiences; it helps devs monetize their virtual spaces. We're on a lot of the heyVR.io games out there, like, you know, barista express, archery dungeon, archery training. A lot of the stuff that's on the new tab page has this SDK integration. And then, as for my side projects, I do a lot of open-source contribution stuff. I do a little bit of browser hacking. Recently, I've been working on Servo, implementing support for the Gamepad API.
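To give a flavor of the QA micro-tooling Jin describes, here is a hedged sketch that checks a VRM 0.x avatar's humanoid bone mapping. The required-bone list is a small illustrative subset (real VRM mandates more), it assumes the glTF JSON has already been extracted (a .vrm is binary glTF), and none of this is taken from Character Studio itself.

```typescript
// qa-avatar.ts - minimal QA pass over a VRM-style avatar's glTF JSON.
// Checks that a few humanoid bones are mapped. Uses the VRM 0.x layout
// (extensions.VRM.humanoid.humanBones); VRM 1.0 ("VRMC_vrm") differs.
import { readFileSync } from 'node:fs';

// Illustrative subset; the full VRM humanoid spec requires more bones.
const REQUIRED = ['hips', 'spine', 'head', 'leftUpperArm', 'rightUpperArm'];

const gltf = JSON.parse(
  readFileSync(process.argv[2] ?? 'avatar.gltf', 'utf8'),
);
const humanBones: Array<{ bone: string }> =
  gltf.extensions?.VRM?.humanoid?.humanBones ?? [];

const mapped = new Set(humanBones.map((b) => b.bone));
const missing = REQUIRED.filter((bone) => !mapped.has(bone));

if (missing.length > 0) {
  console.error(`FAIL: missing humanoid bones: ${missing.join(', ')}`);
  process.exit(1);
}
console.log('OK: required humanoid bones are mapped.');
```

Run over thousands of assets in a loop, checks like this (plus shader and material audits) are what keeps a big batch conversion consistent.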
And I'd like to get back to their WebXR implementation at some point too, but it's complicated. Moon Rider doesn't use Zesty because Diego doesn't wanna monetize it. (Speaker 3): Mhmm. (Msub2): I think Elijah has asked him, like, a couple times, and that's the answer that Diego gives back. It's just supposed to be an example project. He doesn't wanna put ads on it, which, (Frode Hegland): fair enough. (Jin_dnakvr): It should be an example project for monetization too. (Frode Hegland): Like Yeah. (Speaker 3): Well, that's what (Jin_dnakvr): that would be. Advertising for WebXR in, like, a huge way. Hey, there's money to be made in this ecosystem. Like, yeah. But that was a big intervention with Diego. (Speaker 3): That was a big problem with the 2 guys from Neos. One wanted to make money and the other one didn't. And so they split it up, and now they've got... Mhmm. So, anyway, I'm going to bug out. It was good talking to you all and catching up. Again, happy Easter and all that kind of stuff, and thanks for the hospitality. (Msub2): Yeah. Happy Easter. (Jin_dnakvr): Thank you. Likewise. Cheers. Get back to work. Mhmm. See y'all on Discord. Cheers. (Msub2): Alright. See you.
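A closing note on the Gamepad API work mentioned above: it is the same standard web API a WebXR page can poll every frame, so a plain browser-side sketch shows what Servo has to support. Nothing here is Servo-specific; it uses only the standard events and navigator.getGamepads().

```typescript
// gamepad-poll.ts - poll connected gamepads once per animation frame.
// Uses only the standard Gamepad API that browsers (and now Servo) expose.
window.addEventListener('gamepadconnected', (e: GamepadEvent) => {
  console.log(`Gamepad connected: ${e.gamepad.id}`);
});

function poll(): void {
  // getGamepads() returns a snapshot; entries can be null.
  for (const pad of navigator.getGamepads()) {
    if (!pad) continue;
    pad.buttons.forEach((button, i) => {
      if (button.pressed) console.log(`${pad.id}: button ${i} pressed`);
    });
  }
  requestAnimationFrame(poll);
}
requestAnimationFrame(poll);
```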