Mesh representation change suggestions, to allow subdivision surfaces, more efficient storage and more clarity #1362
Comments
Thanks @gnagyusa. I'll start with the quick answer for point 3, and let others chime in on some of the design philosophy on the other points. The texture set number is specified per texture reference in the material, via the texCoord property of textureInfo (it defaults to 0, i.e. TEXCOORD_0).
But ecosystem support for this is still under development. I've been told ThreeJS is currently limited in its support for selecting texture coordinate sets per texture.
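For reference, a minimal material fragment selecting the second texture coordinate set for the base color map could look like the sketch below (the texture index 0 is illustrative):

"materials": [
  {
    "pbrMetallicRoughness": {
      "baseColorTexture": {
        "index": 0,      // illustrative texture index
        "texCoord": 1    // sample TEXCOORD_1 instead of the default TEXCOORD_0
      }
    }
  }
]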
Thanks @emackey. That's awesome about 3)! I figured there must be a way, but I guess I missed it somehow :)
Thanks @gnagyusa for the thorough feedback!
1) I think that multiple-indices support could be implemented as an extension to the current spec. Since it would require CPU processing before uploading data to the GPU, I'd expect that not all clients (especially on the web) would implement it. Draco mesh compression addresses the size concern by using quantization, prediction, and entropy coding.
2) VEC4 (VEC3 + sign) TANGENT data has been added exclusively for normal maps, so tangent space is defined by the NORMAL and TANGENT attributes, with the bitangent reconstructed from the cross product and the sign component (see the sketch after this comment). At that point, the core spec didn't have (and still doesn't have) notions of LODs, low/high-frequency maps, or anisotropy. Adding these features is certainly possible; this should be done as app-specific extensions first.
4) I'm afraid that simply replacing all "UV" occurrences with "ST" would cause more confusion, especially among first-time readers. Nevertheless, pull requests harmonizing terms and language are always welcome!
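For context, a minimal sketch of that bitangent reconstruction, with hypothetical helper names (the VEC4 tangent's w component is the +1/-1 sign):

// Sketch: reconstruct the bitangent from a glTF NORMAL and VEC4 TANGENT attribute.
// Helper names are illustrative, not taken from any particular loader.
type Vec3 = [number, number, number];
type Vec4 = [number, number, number, number];

function cross(a: Vec3, b: Vec3): Vec3 {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

// bitangent = cross(normal, tangent.xyz) * tangent.w
function bitangent(normal: Vec3, tangent: Vec4): Vec3 {
  const t: Vec3 = [tangent[0], tangent[1], tangent[2]];
  const b = cross(normal, t);
  return [b[0] * tangent[3], b[1] * tangent[3], b[2] * tangent[3]];
}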
Thank you @lexaknyazev for the info.
1) We can decide later whether we want to deprecate the current format and mandate specifying the index list for each attribute, or keep both options.
2) Fair enough. I think extensions for such core features might "pollute" glTF too much, though. It might be better to add them to the standard step by step, in a more or less backward-compatible way.
4) Fair enough :)
Thank you!
1) "attributes":
[ // <-------
{
"POSITION": 0,
"indices": 3, // POSITION has its own index list
},
{
"NORMAL": 1, // NORMAL doesn't specify its own index list, so it uses the shared one below
},
{
"TEXCOORD_0": 2, // TEXCOORD_0 also uses the shared index list
}
],
"indices": 4 or a map of maps: "attributes":
{
"POSITION": {
"values": 0, // index of the accessor with vertex data
"indices": 3, // index of the accessor with indices data
},
"NORMAL": {
"values": 1, // NORMAL doesn't specify its own index list, so it uses the shared one below
},
"TEXCOORD_0": {
"values": 2, // TEXCOORD_0 also uses the shared index list
}
},
"indices": 4 The former would require breaking the schema (thus completely impossible within glTF 2.x lifecycle), while the latter could be made somewhat-compatible with the current design by using JSON-schema polymorphism (so it could be done in theory with glTF 2.1, also please see the spec about "attributes":
{
"POSITION": {
"values": 0, // index of the accessor with vertex data
"indices": 3, // index of the accessor with indices data
},
"NORMAL": 1, // NORMAL doesn't specify its own index list, so it uses the shared one below
"TEXCOORD_0": 2, // TEXCOORD_0 also uses the shared index list,
},
"indices": 4 Proposed features (like edges or subdiv) seem to be oriented more towards interchange / DCC use cases rather than primary glTF goal - runtime delivery. |
Hi @lexaknyazev
We could even use a map to indicate that it's an aggregate type (i.e. a "C struct") of a VEC2 and a VEC3, for example along the lines of the sketch below.
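A hypothetical accessor using such a map for its type might look like this (the field names "ST" and "TANGENT" and the overall shape are purely illustrative, not part of the glTF spec):

{
  "bufferView": 0,
  "componentType": 5126,   // FLOAT
  "count": 1024,
  "type": {
    "ST": "VEC2",          // texture coordinates (S, T)
    "TANGENT": "VEC3"      // texture-space tangent packed with its texcoords
  }
}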
This almost looks like a C struct definition, so it would be intuitive for most engineers, although it would be more complex to parse than just saying "type": "VEC2_VEC3" or "type": "ST_TANGENT". And we might want to specify "componentType" separately for the fields if we go down this road, which would complicate things even further. So this approach might be overkill at this point.
For 1) (separate indices for position vs. normals/texcoords): is there a way to render meshes like this that have hard creases on the GPU without converting back to what the current spec says (i.e. duplicating the vertices for each face before uploading to the GPU)? It seems like an incredible waste of transmission space, as well as GPU memory, for renderers that don't use OpenGL.
On current GPUs, you can only send a single index list (e.g. via glDrawElements()), so you potentially have to waste a lot of RAM by replicating vertex data whenever any of the vertex attributes differ.
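That replication is roughly the conversion every exporter or importer has to perform today. A minimal sketch, assuming a hypothetical multi-indexed input with positions and normals only, that welds identical (position index, normal index) pairs and duplicates the rest:

// Flatten per-attribute index lists into a single index list plus duplicated
// vertex streams, as current GPUs expect. Input shapes are hypothetical.
interface MultiIndexedMesh {
  positions: Float32Array;        // xyz triplets
  normals: Float32Array;          // xyz triplets
  positionIndices: Uint32Array;   // one entry per face corner
  normalIndices: Uint32Array;     // same length as positionIndices
}

function flatten(mesh: MultiIndexedMesh) {
  const remap = new Map<string, number>();   // (posIdx, normIdx) -> new vertex index
  const positions: number[] = [];
  const normals: number[] = [];
  const indices: number[] = [];

  for (let corner = 0; corner < mesh.positionIndices.length; corner++) {
    const p = mesh.positionIndices[corner];
    const n = mesh.normalIndices[corner];
    const key = `${p}/${n}`;
    let v = remap.get(key);
    if (v === undefined) {
      v = positions.length / 3;
      remap.set(key, v);
      positions.push(mesh.positions[3 * p], mesh.positions[3 * p + 1], mesh.positions[3 * p + 2]);
      normals.push(mesh.normals[3 * n], mesh.normals[3 * n + 1], mesh.normals[3 * n + 2]);
    }
    indices.push(v);
  }
  return {
    positions: new Float32Array(positions),
    normals: new Float32Array(normals),
    indices: new Uint32Array(indices),
  };
}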
I was going to adopt glTF 2.0 as a substitute for OBJ for global illumination scenes, because of its completeness, but I really need quad faces, not only for subdivision surfaces, but also for FEM meshes and for radiosity solutions computed on quads. The fact that quads are not supported in glTF 2.0 is a show-stopper for me to adopt it. I hope this will be added at some point, but in the meantime I have no other option but to use other formats.
I agree. The lack of support for n-gons (quads etc.) is a show-stopper for me too. It also prevents using subdivision surfaces, which are a standard feature in most renderers now.
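For completeness, the usual workaround today is to triangulate quads at export time, since glTF primitives are triangle-based; a minimal sketch assuming a flat list of quad corner indices (the layout is hypothetical):

// Split each quad face (4 indices) into two triangles along the a-c diagonal.
function triangulateQuads(quadIndices: Uint32Array): Uint32Array {
  const tris = new Uint32Array((quadIndices.length / 4) * 6);
  for (let q = 0, t = 0; q < quadIndices.length; q += 4, t += 6) {
    const [a, b, c, d] = [quadIndices[q], quadIndices[q + 1], quadIndices[q + 2], quadIndices[q + 3]];
    tris.set([a, b, c, a, c, d], t);  // quad a-b-c-d becomes triangles a-b-c and a-c-d
  }
  return tris;
}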
Hello. My name is Gabor Nagy. I was one of the original two designers of Collada, and I started using glTF 2.0 a few months ago, in EQUINOX-3D.
As you know, we also use it at Facebook, where I'm a 3D graphics lead.
The format is great! It's super easy to parse, and the spec is nice and clear, but if I may, I'd like to make a couple of suggestions that would improve flexibility and clarity, and would allow for new features, like subdivision surfaces:
1) It would be awesome to support separate index arrays for POSITION, NORMAL, etc.
Currently, exporters have to store multiple copies of vertex positions in many cases, producing a "disconnected polygon soup" rather than a clean, connected mesh.
Besides increasing file sizes, this prevents the easy mesh-vertex identity checks that are needed for subdivision surfaces, saving mesh edge data, closedness tests, etc.
Instead of just comparing integer indices, import tools have to compare float triplets for equality (kind of a dirty business, with epsilons and sign checks :)) to determine vertex identity.
E.g. if 12 polygons share a vertex (position) but the normals are different, the vertex position (3 floats) must be replicated 12 times. That's 144 bytes (12 copies × 3 floats × 4 bytes) instead of 12 bytes for the same mesh vertex.
While this is how current GPUs need the data, and it's usually ok to waste RAM on potentially thousands of duplicated vertex positions, it can be a problem when the data is transmitted over the internet, especially on mobile platforms with bandwidth caps and extra fees.
Often, vertices need to be split or otherwise rearranged on input (e.g. if normals are missing, and hard normals must be generated), so the vertex array will change anyway, before it gets to the GPU.
Also, future generations of GPUs may allow separate index lists for vertex positions, normals, etc.
The current format:
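Presumably something like the standard glTF 2.0 primitive layout, with illustrative accessor indices:

"attributes":
{
  "POSITION": 0,
  "NORMAL": 1,
  "TEXCOORD_0": 2
},
"indices": 4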
This alternative would be the best of both worlds: it would allow a separate position index array, while still allowing the use of a single shared index array as well:
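Presumably the same per-attribute form discussed in the comments earlier in this thread, where POSITION carries its own index accessor and the other attributes fall back to the shared index list (accessor indices illustrative):

"attributes":
{
  "POSITION": {
    "values": 0,      // accessor with vertex positions
    "indices": 3      // POSITION has its own index list
  },
  "NORMAL": 1,        // uses the shared index list below
  "TEXCOORD_0": 2     // uses the shared index list below
},
"indices": 4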
Non-repeating vertex positions would allow us to store mesh edge attributes, like "hardness", which is needed for (creased) Catmull-Clark subdivision.
2) It would be great to have a clear separation between texture-space tangents (used for normal mapping) and geometric tangents (used for anisotropic shaders).
There are two texcoord sets (TEXCOORD_0 and TEXCOORD_1), but there's only one TANGENT semantic, which would imply that it's for geometric tangents; yet all the examples I've seen use it for texture-space tangents.
Ideally, texture-space tangents should be packed with their corresponding texcoords. Texcoords could be either VEC2 (S, T) or VEC5 (S, T, TgX, TgY, TgZ), which should be indicated in the accessor.
The current system seems a bit confusing and incomplete.
For example, what if a model uses an anisotropic shader with a normal map, and thus needs both geometric tangents and texture-space tangents that are different?
Or what if there are two normal maps (e.g. a low-frequency one plus a detail one)? An anisotropic shader plus two normal maps would need three tangent sets.
It's not clear which tangents should be stored in the single TANGENT semantic. And what about the other two tangent sets? There doesn't seem to be a way to store them, at all.
This is why Collada had different semantics for geometric tangents and texture-space tangents.
3) I couldn't find a way to specify which texcoord set should be used for a particular texture, when rendering a mesh.
The PBR material allows up to 5 textures (baseColorTexture, metallicRoughnessTexture, emissiveTexture, occlusionTexture, normalTexture), but there are only up to 2 texcoord sets.
Do all 5 textures have to use the same texcoord set, via TEXCOORD_0? But then what is TEXCOORD_1 for?
I see that textures refer to samplers, but samplers only specify filtering and wrapping options, not the texcoord set to be used.
4) A minor thing: texcoords are referred to as "UV", but the proper names for texture coordinates in OpenGL are S and T.
U and V are generic, "natural surface parameters" that may or may not be used for texture mapping.
An unfortunate confusion in the industry, like some folks calling mesh bitangents "binormals" :)
Thank you, and please keep up the great work on this awesome new standard!