
KHR_mesh_quantization #1673

Merged (7 commits) on Dec 26, 2019

Conversation

zeux
Contributor

@zeux commented Sep 22, 2019

This is a draft of KHR_mesh_quantization (formerly known as KHR_quantized_geometry), which allows geometry attributes to use 8-bit and 16-bit types so that encoders can choose the optimal tradeoff between quality and memory/transmission size.

The extension is specified to make implementations trivial; it is "de facto" supported by Babylon.JS and Three.JS in that files using 8/16-bit attributes can be deserialized by the respective loaders without further changes. I'd expect that, in general, any compliant loader can support this with minimal effort.

Rendered version (updated 11/13)

Open questions:

  • Is the number of possible attribute types for POSITION / TEXCOORD too large? Given an arbitrary dequantization transform, some of these are redundant, but a wider set of types allows some scenes to be encoded more easily. For JavaScript libraries, it might be easier to assume that the POSITION attribute never uses normalized storage, because that allows all heavy lifting to be done in the JS engine, since typed arrays can handle the integer -> floating point conversion. RESOLVED: It seems better to leave the spec a bit open here. There are probably cases where some engines may decide not to support a specific format natively, but they can decode data into floating-point data on the fly - a correctable inefficiency in a specific library shouldn't dictate the spec's constraints.

  • Some loaders, such as Three.JS, currently recompute absolute values of morph target attributes by adding the delta to the base attribute. Should we require that base+delta fits within the base component range? RESOLVED: Three.JS can adapt to either use deltas as is or decode into a wider type.

  • Does the extension need any more detail around any other areas? This is my first extension so I'm likely missing some details, please feel free to suggest changes/clarifications.
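Regarding the first question above, the JS-side fallback for non-normalized integer positions is essentially a one-liner. A minimal sketch (variable names are illustrative, not from the spec):

```javascript
// Non-normalized 16-bit positions can be widened to floats on load;
// Float32Array.from performs the integer -> floating point conversion.
const quantizedPositions = new Int16Array([0, 1234, -32000]);
const floatPositions = Float32Array.from(quantizedPositions);
```

All 16-bit integers are exactly representable in float32, so this conversion is lossless.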

Closes #1670

@lexaknyazev
Member

Hi @zeux, the PR looks good overall!

OpenGL ES 2.0 and OpenGL ES 3.0 have slightly different semantics for signed normalized attribute types (WebGL 1.0 contexts may manifest either).

| Type  | Signed Integer | OpenGL ES 2.0 | OpenGL ES 3.0+ |
|-------|----------------|---------------|----------------|
| Byte  | -128           | -1.0          | -1.0           |
| Byte  | -127           | -0.992156     | -1.0           |
| Byte  | 0              | 0.003922      | 0.0            |
| Byte  | 127            | 1.0           | 1.0            |
| Short | -32768         | -1.0          | -1.0           |
| Short | -32767         | -0.999969     | -1.0           |
| Short | 0              | 0.000015      | 0.0            |
| Short | 32767          | 1.0           | 1.0            |

Which of these two approaches should be used to produce assets with this extension?
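For reference, the two decode conventions in the table above can be written out as follows (function names are mine; `bits` is 8 for BYTE, 16 for SHORT):

```javascript
// Signed-normalized decoding as specified by OpenGL ES 2.0:
// f = (2c + 1) / (2^bits - 1); zero is not exactly representable.
function decodeSnormES2(c, bits) {
  return (2 * c + 1) / (Math.pow(2, bits) - 1);
}

// Signed-normalized decoding as specified by OpenGL ES 3.0+:
// f = max(c / (2^(bits-1) - 1), -1.0); zero is preserved and the
// most negative integer clamps to -1.0.
function decodeSnormES3(c, bits) {
  return Math.max(c / (Math.pow(2, bits - 1) - 1), -1.0);
}
```

For example, `decodeSnormES2(-127, 8)` gives -0.992156... while `decodeSnormES3(-127, 8)` gives exactly -1.0, matching the table.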

@zeux
Contributor Author

zeux commented Sep 23, 2019

That's a very good point. I was assuming that the behavior is as specified in GLES3/GL3 spec (0 is preserved, -128/-32768 is clamped to -1.0) - this matches the glTF conventions that are already present in the spec for animation data, and is likely to be the only convention supported by most hardware.

So the expectation is that the decoders follow the ES3 conventions; would it make sense to explicitly specify this in "decoding quantized data" section by copying the section from the animation part of the main spec?

@zeux
Contributor Author

zeux commented Sep 23, 2019

We can also try to reduce the likelihood of encountering this issue by trimming down the supported formats:

  • Removing signed normalized formats from POSITION attributes (normalization can be incorporated into dequant transform if desired)
  • Removing signed normalized formats from TEXCOORD attributes (they are mostly included for completeness, I'd expect most encoders to use unsigned formats for TEXCOORD)

Normal/tangent data really needs signed normalized formats but for these attributes, the extra error introduced by the difference between ES2 and ES3 decoding isn't critical.

The caveat is that for morph targets, if normalization is used for positions I'd expect normalization to also be used for morph deltas - so being resistant to ES2 hardware requires unnormalized storage for everything in this case in practice.

@lexaknyazev
Member

would it make sense to explicitly specify this in "decoding quantized data"

Since the decoding almost always happens in hardware, it would make more sense to specify this as an encoding process.

The caveat is that for morph targets...

Agree. Without signed and zero deltas, morph targets aren't very useful.

There are a few minor issues with the text of the extension. Would you be okay with me pushing some edits to it?

@zeux
Contributor Author

zeux commented Sep 23, 2019

Since the decoding almost always happens in hardware, it would make more sense to specify this as an encoding process.

Sure - that makes sense. I'll add this, and I'll include an implementation note for decoding stating that legacy ES2-class hardware can decode signed normalized values with a minor precision loss.

There are a few minor issues with the text of the extension. Would you be okay with me pushing some edits to it?

Absolutely

- Added @lexaknyazev to contributor list following the text fixes suggested in PR discussion

- Fixed a couple of spelling errors

- Clarified wording on data alignment - BYTE TEXCOORDs need to be aligned to 4 bytes as well, so there's no need to highlight VEC3.
@zeux
Contributor Author

zeux commented Sep 30, 2019

Updated the PR with a few small fixes and an encoding/decoding specification guide for encoders. Please let me know if anything else is amiss, and feel free to push edits if that's easier.

@zeux
Contributor Author

zeux commented Oct 25, 2019

Sorry to bump this but I don’t really know what the process is like and want to make sure I am doing everything I need to. Is there anything I can do to move this forward?

@zeux
Contributor Author

zeux commented Nov 10, 2019

Any thoughts about this @donmccurdy @lexaknyazev ? I am not sure what the next steps are. I was considering supporting this extension in the validator, but it's not clear to me what state this is in. Please let me know if there's further work on this text required from my side.

@donmccurdy
Contributor

glTF 1.0 had a quantization vendor extension (WEB3D_quantized_attributes), as you mention in #1670, implemented in at least some Fraunhofer IGD and Cesium tools.

@pjcozzi or @mlimper would you, or others you work with, have time to offer feedback on this proposal? It seems nice that this extension does not require any additional JSON metadata, but I'm also not opposed to adding that metadata if there were reasons it was necessary in the previous extension that I've missed. In particular, are we losing anything important regarding real-world scale by baking the transform into the node hierarchy?

@zeux thanks for working on this! If you have the time and would be able to pull together some numbers on compression results in realistic assets, that might be more motivating to everyone than a validator implementation. We'll want to implement it in the validator eventually too of course, but it isn't a requirement for the extension to be ratified.

I think I'm in favor of bringing quantization into glTF 2.0, in one form or another.

@zeux
Contributor Author

zeux commented Nov 10, 2019

Sure thing! There are some numbers specified in the PR note but I'll post results for a few glTF sample models as well in a bit.

Re: having a separate quantization matrix, there are minor consequences for encoding efficiency on some meshes for not having a separate matrix. I would prefer to keep it separate from this proposal - in my mind, it's sufficient to enable the full set of data types (as proven by gltfpack) - it doesn't prevent us from, separately, shipping an extension that defines dequant transforms. As noted, the benefit of not using dequant transforms is a straightforward integration story; additionally, specifying extra data for each JSON accessor carries a significant size impact (which can be noticeable on scenes with many small meshes, where the JSON size can be on par with binary data size). Finally, we already are baking various coordinate system transformations into the node hierarchy - glTF doesn't allow specification of unit scale or coordinate system handedness, and exporters in practice encode this as extra nodes in the transform graph. Thus not using separate metadata here is consistent and helps keep the format simple.

extensions/2.0/Khronos/KHR_quantized_geometry/README.md (review thread, outdated and resolved)

|Name|Accessor Type(s)|Component Type(s)|Description|
|----|----------------|-----------------|-----------|
|`POSITION`|`"VEC3"`|`5126`&nbsp;(FLOAT)<br>`5120`&nbsp;(BYTE)<br>`5120`&nbsp;(BYTE)&nbsp;normalized<br>`5122`&nbsp;(SHORT)<br>`5122`&nbsp;(SHORT)&nbsp;normalized|XYZ vertex position displacements|
Member


Why only signed types here? Given that target weights can be negative, some use cases may benefit from the better precision of unsigned types.

Contributor Author


Hmm - I'm up for adding this, but I don't have specific use cases in mind. True, target weights can be negative, but the weight is applied to the entire target - so to be able to store unsigned deltas, every single component of every single vertex must have a consistent sign and I'm not sure this can happen in practice.

Two further review threads on extensions/2.0/Khronos/KHR_quantized_geometry/README.md (outdated, resolved)
@mlimper
Contributor

mlimper commented Nov 10, 2019

@pjcozzi or @mlimper would you, or others you work with, have time to offer feedback on this proposal? It seems nice that this extension does not require any additional JSON metadata, but I'm

Sure, thanks for reaching out - and thanks, @zeux, for the great work!

Even though our old 1.0 extension had explicit dequantization transforms, I'm perfectly fine with having them as part of the regular node hierarchy; that all makes sense. I wondered if there could be issues with LOD, but the existing extension for that also works on the node level, luckily (not on the mesh level)... so, all good.

In order to reduce fragmentation, I'd vote against introducing another extension for explicit dequantization transforms - the current 'KHR_quantized_geometry' is lean and does the job perfectly, and it could possibly even be added to the core spec for the next glTF version (2.1)? :-)

Just one question, more related to how things fit together:

How does 'KHR_quantized_geometry' work in combination with 'KHR_draco_mesh_compression'? I'd suppose it should be possible to use both - Draco for efficient transmission and 'KHR_quantized_geometry' for compact runtime storage and faster rendering. In that case, I guess the Draco decoder must be able to input/output a properly quantized attribute, instead of floating-point data. Conceptually, it shouldn't be a big issue - I think Draco quantizes geometry internally anyway; maybe one can even save a few float/integer conversions by specifying that input and output are quantized data (although the Draco geometry encoder could use non-aligned data with odd bit sizes, so it may still need to convert coordinates, for example, from 12 bits per component internally to 16 bits per component in the output, or something like that). Not sure if that would all work out of the box with the existing interfaces - I didn't take the time to have a look - maybe someone else has a better idea of this topic?

@zeux
Contributor Author

zeux commented Nov 10, 2019

Here are some numbers for the vertex/index buffer data (this extension doesn't affect index data storage but it's presented here to get a full picture of the total size impact for geometry data) for a few models from glTF samples.

In the interest of fairness, I present three numbers:

  • raw: no quantization, 16b joint index storage - this is what you generally get as an exporter output
  • baseline: quantization allowed by the spec today; joint weights are quantized to 8 bits per component, texture coordinates are quantized to 16 bits per component (assuming 0..1 range), joint indices are stored in 8 bits if <=256 joints are used
  • quantized: quantization allowed by this extension; positions are quantized to 16 bits per component, normals/tangents are quantized to 8 bits per component.

All sizes are in bytes of GPU storage - I'm intentionally separating transmission size since you could use Draco today to reduce that (although the morph target example is interesting in that Draco doesn't support morph targets so there the transmission size can't be reduced by glTF extensions available today). There's a link between quantization and compression - I intend to propose a MESHOPT_compression extension that will benefit from operating on quantized data - however, for the sake of this extension let's focus on in-memory size.

FlightHelmet.gltf: 94722 triangles, 60366 vertices (PBR, no skinning, no morph)
index data: 568 322 bytes
vertex data, raw: 2 897 568 bytes
vertex data, baseline: 2 656 104 bytes
vertex data, quantized: 1 207 320 bytes

BrainStem.gltf: 61666 triangles, 34159 vertices (no PBR, skinning, no morph)
index data: 369 996 bytes
vertex data, raw: 1 636 032 bytes
vertex data, baseline: 1 090 688 bytes
vertex data, quantized: 681 680 bytes

Buggy.gltf: 531955 triangles, 412855 vertices (no PBR, no skinning, no morph)
index data: 5 317 242 bytes
vertex data, raw: 9 895 896 bytes
vertex data, baseline: 9 895 896 bytes
vertex data, quantized: 4 947 948 bytes

Alien.gltf: 25222 triangles, 13429 vertices (PBR, skinning, morph)
index data: 151 332 bytes
vertex data, raw: 1 677 112 bytes
vertex data, baseline: 1 408 532 bytes
vertex data, quantized: 685 244 bytes
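As a sanity check, the FlightHelmet per-vertex sizes implied by these numbers can be reproduced by hand, assuming a POSITION + NORMAL + TANGENT + TEXCOORD_0 layout with 4-byte-aligned attributes (the layout is my assumption, not stated in the comment):

```javascript
const vertices = 60366;

// raw: float32 everywhere
// pos 3x4 + normal 3x4 + tangent 4x4 + uv 2x4 = 48 bytes/vertex
const raw = vertices * (12 + 12 + 16 + 8);

// quantized: 16-bit positions (3x2 padded to 8 for alignment),
// 8-bit normals (3x1 padded to 4), 8-bit tangents (4x1),
// 16-bit texcoords (2x2) = 20 bytes/vertex
const quantized = vertices * (8 + 4 + 4 + 4);
```

With this layout, `raw` and `quantized` come out to exactly the 2 897 568 and 1 207 320 bytes reported above.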

@lexaknyazev
Member

@zeux
Added a few comments, please take a look.

Since this extension is proposed with KHR prefix, it should have at least one exporter implementation and one importer (renderer) implementation before going through formal Khronos ratification.

On a side note, have you considered ES3+ float16 and {u,s}int10_10_10_2 formats? The latter seems to be a better option for tangents than sint8.

@mlimper

Draco geometry encoder could use non-aligned data with odd bit sizes, so it may still need to convert coordinates, for example, from 12 bit per component internally to 16 bit per component in the output, or something like that

Existing interfaces should be able to provide proper data types and quantization params. It would be up to the pipeline tools to inject transformation nodes correctly.

zeux added 2 commits November 10, 2019 16:20
- should => must
- WebGL restrictions => platform differences
@zeux
Contributor Author

zeux commented Nov 11, 2019

Added a few comments, please take a look.

Thanks! Addressed all but one, re: unsigned deltas - I'm ambivalent on this. On one hand, it doesn't seem to cost us anything to add it; on the other hand, I don't see use cases where this would be interesting. In my mind this is a bit similar to unsigned quaternion components in animation transforms (I think I asked about this before) - in theory you could have an animation where all quaternion components have a consistent sign, but in practice it's extremely unlikely that such animations are interesting enough to special-case.

Since this extension is proposed with KHR prefix, it should have at least one exporter implementation and one importer (renderer) implementation before going through formal Khronos ratification.

This makes sense. What's the process here - is it the case that extensions are first merged as drafts, and then ratified following the implementation availability? FWIW in terms of exporters, gltfpack produces files that are ready to use this extension; on the renderer side both Three.JS and Babylon.JS at this point should be ready to support this extension - I've been submitting a few fixes here and there with the last Three.JS fix for morph targets landing recently.

On a side note, have you considered ES3+ float16 and {u,s}int10_10_10_2 formats? The latter seems to be a better option for tangents than sint8.

In my experience, float16 isn't that interesting for quantization - or rather, it's simple to use, which is great, but for mesh component data it gives you ~10 bits of uniformly distributed precision for any fixed size range, with better precision closer to 0 which is rarely useful for component data. So it's usually more efficient to use fixed-point storage.

10_10_10_2 formats are indeed fantastic for storing normal/tangent data. Unfortunately, WebGL 1.0 doesn't support them, so it's not as practical for the domains where glTF is most likely to be used :(
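For comparison, here is a quick sketch of the snorm8 round-trip used for normal components, following the ES3-style decode convention discussed earlier (function names are mine):

```javascript
// Encode a float in [-1, 1] as an 8-bit signed normalized integer.
function encodeSnorm8(f) {
  return Math.max(-127, Math.min(127, Math.round(f * 127)));
}

// ES3-style decode: c / 127, clamped to -1.0 at the low end.
function decodeSnorm8(c) {
  return Math.max(c / 127, -1.0);
}

// Worst-case per-component error is half a quantization step,
// 1/254 ~= 0.0039 - small relative to typical shading error.
const err = Math.abs(0.7071 - decodeSnorm8(encodeSnorm8(0.7071)));
```

A 10-bit component would shrink that worst-case step by a factor of ~8, which is why 10_10_10_2 is attractive where the hardware supports it.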

@lexaknyazev
Member

In my mind this is a bit similar to unsigned quaternion components (I think I asked about this before) in animation transforms

IIRC, they were added just for completeness.

is it the case that extensions are first merged as drafts, and then ratified following the implementation availability?

Yes. If there are no more community comments, we can merge this on the next WG call (this Wednesday). /cc @pjcozzi

on the renderer side both Three.JS and Babylon.JS at this point should be ready to support this extension

While we cannot demand runtime behavior from the implementations, it would be better for the ecosystem if renderers actually check for the extension presence rather than simply allow new attribute formats.

Unfortunately, WebGL 1.0 doesn't support them, so it's not as practical for domains where glTF is most likely to be used in

As WebGL 2.0 adoption grows (slowly but it does), we can define another required extension (KHR_accessor_10_10_10_2?) that would further extend allowed accessor.componentType values and the attributes table in this extension.
/cc @donmccurdy @bghgary

@zeux
Contributor Author

zeux commented Nov 11, 2019

While we cannot demand runtime behavior from the implementations, it would be better for the ecosystem if renderers actually check for the extension presence rather than simply allow new attribute formats.

Yeah - this is actually the case for BabylonJS - that is, it's "ready" to support this extension as in it basically works, but actually needs a change to accept KHR_quantized_geometry as a required extension. Three.JS doesn't generally speaking check for required extensions but it seems fine to more explicitly declare support there as well.

Neither renderer seems to explicitly differentiate between the formats (i.e. neither renderer validates the acceptable set of formats accessors have). This is where I think the validator comes in - once this gets merged, I will make a validator change that conditionally allows the use of the extended format set when this extension is listed as required.

@lexaknyazev
Member

@zeux
This extension is already implemented in the 2.0.0-dev.3.0 validator release.

@lexaknyazev
Member

float16 ... for mesh component data it gives you ~10 bits of uniformly distributed precision for any fixed size range, with better precision closer to 0 which is rarely useful for component data.

A reasonable use case could be position displacements when they do not fit into the [-1 .. 1] range (instead of float32, as this extension currently suggests). Anyway, this extension should not go there.

@donmccurdy
Contributor

If a KHR_accessors_10_10_10_2 extension were added in the future, is it reasonable to assume that it, plus KHR_quantized_geometry, would together allow its use for normal and tangent data? Or would that require a new KHR_quantized_geometry2 extension? Hoping for the former, if it's possible to provide for that now.

Our naming convention so far is — or this is my impression, not explicitly stated anywhere — <PREFIX>_<scope>_<feature>. From that perspective I might suggest KHR_mesh_quantization instead.

@zeux
Contributor Author

zeux commented Nov 13, 2019

I don’t think we will need a separate extension for 10_2 - I think we can introduce an extension that simultaneously introduces support for the new formats and their use, similarly to KHR_instancing.

The name of this extension is inspired by preexisting glTF 1.0 extension mentioned in the PR.

@donmccurdy
Contributor

I'd vote to prioritize consistency with existing Khronos glTF 2.0 extensions over consistency with a prior vendor extension from glTF 1.0, but would be happy to get more feedback on the name. @lexaknyazev or @bghgary?

@lexaknyazev
Member

Since the only ratified extension names that don't align with this convention are Draco and SpecGloss (both written way before any sort of consistency issues came up), I lean towards a formalized naming scheme for all future KHR extensions.

The spec schema has a meshes root-level array, so that's one way to see the scope of this extension. Another is to say that this extension extends the allowed accessor formats that are linked from mesh.primitive.attributes rather than the mesh object itself.

So it could be called one of (a pattern, to avoid enumerating all combinations):

  • KHR_{mesh|attributes|mesh_attributes}_{quantization|quantized}

@zeux
Contributor Author

zeux commented Nov 13, 2019

I'm coming around to mesh_quantization. It's short and to the point - given the assumed guidelines, it fits much better. "Quantized" as a suffix doesn't work too well, and I don't think "attributes" is particularly necessary to highlight here.

@bghgary
Contributor

bghgary commented Nov 14, 2019

This doesn't block merging but I think it would be good to start adding glTF-Asset-Generator tests before ratifying the specification so that we are sure we've covered the bases.

@zeux
Contributor Author

zeux commented Nov 14, 2019

My understanding is that merging precedes ratification? I can look into expanding the asset generator after this is merged as a draft.

@bghgary
Contributor

bghgary commented Nov 14, 2019

My understanding is that merging precedes ratification?

Yes. We talked about this in the WG call this morning. We can merge as draft before ratification.

@zeux changed the title from KHR_quantized_geometry to KHR_mesh_quantization on Nov 14, 2019
@zeux
Contributor Author

zeux commented Nov 14, 2019

As there's general support for following the established naming guidelines, I've updated this PR to use the name KHR_mesh_quantization. The extension text is unchanged except for a minor wording tweak re: geometry -> mesh.

@zeux
Contributor Author

zeux commented Nov 22, 2019

FWIW, support for this draft was merged to three.js and Babylon.js. Encoding support is available in gltfpack (master only; a numbered release will follow the next three.js release to avoid compatibility issues). I have glTF-Asset-Generator support on my todo list, which I should get to in a few weeks.

It would be great to merge this draft unless further adjustments are needed so that there’s a more stable version implementations can refer to.

@bghgary
Contributor

bghgary commented Dec 2, 2019

@lexaknyazev If you approve, I think you can merge it.

@emackey
Member

emackey commented Dec 2, 2019

There's no schema file here. Even KHR_materials_unlit has a schema.

@zeux
Contributor Author

zeux commented Dec 2, 2019

@emackey My recollection is that it was unclear how to add a schema definition for this - do you have a suggestion? Note that there's no extra JSON structure here.

@zeux
Contributor Author

zeux commented Dec 3, 2019

Right - I looked it up and remember this now. This extension relaxes the requirements on accessors referenced through mesh primitives. These requirements in the baseline spec are not encoded into the schema (my guess is that it's impossible to express this constraint?), so no schema adjustments should be necessary for this either - of course, this needs to be validated, but that's what the validator does irrespective of the schema.

@emackey
Member

emackey commented Dec 3, 2019

Even just making a nearly-verbatim copy of the unlit schema would suffice. It helps me in that it allows VSCode to recognize the extension name for autocomplete and tooltips.

Also which object gets extended with this? The mesh objects, not the glTF root object, right? Is that specified somewhere?

@zeux
Contributor Author

zeux commented Dec 3, 2019

Even just making a nearly-verbatim copy of the unlit schema would suffice. It helps me in that it allows VSCode to recognize the extension name for autocomplete and tooltips.

I'm happy to include this, but I don't quite understand the consequences. Is it correct to do this even though there are no extra JSON objects that this extension declares?

Also which object gets extended with this? The mesh objects, not the glTF root object, right? Is that specified somewhere?

The extension text says "When KHR_mesh_quantization extension is supported, the set of types used for storing mesh attributes is expanded according to the table below." - I'm happy to call this out more specifically somehow, but the extension requirement (it being present in extensionsRequired) expands the list of types allowed by the base specification. I'm not sure if this is what you were asking - the extension extends the behavior of mesh attribute accessors, but not the mesh object specifically.

@donmccurdy
Contributor

donmccurdy commented Dec 3, 2019

Also which object gets extended with this?

No objects are extended. The presence of...

"extensionsUsed": ["KHR_mesh_quantization"],
"extensionsRequired": ["KHR_mesh_quantization"],

...is the only schema artifact of this extension, other than relaxing some core spec requirements.

Does that seem OK? An empty object could be attached at the scene or root level, but I don't think this has the same benefit as the empty objects used by KHR_materials_unlit, where there was a need to identify which specific materials were unlit. It's not clear that assets need to identify specific meshes that have quantization baked in.

@emackey
Member

emackey commented Dec 3, 2019

No objects are extended.

Does that seem OK?

I guess so, although it had me confused. If that's the case, then even the boilerplate schema can't be applied, and no objects are allowed to be extended by this extension. This makes it very different from Unlit, Draco, and all previous multi-vendor & Khronos extensions. But it's OK.

The extension does still need to appear in both extensionsUsed and extensionsRequired, not just the required list alone.

@emackey
Member

emackey commented Dec 3, 2019

The other way I could imagine this working is more similar to Draco, where particular meshes are called out as having the quantized extension, and the extension object itself could hold references to the quantized attributes, leaving the possibility for non-quantized fallback attributes on the same mesh. But this adds some complexity, for the sake of people wanting to store both quantized and non-quantized data in the same file, and I don't know if there are any such people.

@zeux
Contributor Author

zeux commented Dec 3, 2019

The extension does still need to appear in both extensionsUsed and extensionsRequired, not just the required list alone.

Yeah - that's correct. Would it help to highlight this as well as the fact that there's no extra JSON objects allowed by this extension more explicitly in the text?

But this adds some complexity, for the sake of people wanting to store both quantized and non-quantized data in the same file, and I don't know if there are any such people.

Yeah - the desire here was to minimize complexity; the baseline spec already allows quantization for some attributes, just not all of them. Additionally, for Draco, the extension interplay would become unclear if this extension declared separate objects.

@emackey
Member

emackey commented Dec 3, 2019

Would it help to highlight this as well as the fact that there's no extra JSON objects

Specifically, that no glTF object is actually extended by this extension. No "extensions": {...} object actually uses this extension, even when it is called out in the extensionsUsed and extensionsRequired arrays.

If one were to write a script to automatically gather up a refreshed list of extensionsUsed from various extensions objects around the file, the script wouldn't find this one being used anywhere.

@zeux
Contributor Author

zeux commented Dec 3, 2019

If one were to write a script to automatically gather up a refreshed list of extensionsUsed from various extensions objects around the file, the script wouldn't find this one being used anywhere.

Correct, however I'm not sure this property is unique to this extension. For example, KHR_image_ktx2, which is being standardized (... and considered for removal from the proposal, from what I know, but still), shares it - it allows images to use the image/ktx2 mime type without requiring a JSON blob. I'm not sure a tool like this can be implemented in a general, forward-looking way? In this specific case, it is possible to make such a tool work by making sure that required extensions are kept in the used list after pruning, I suppose.

@donmccurdy
Contributor

@lexaknyazev OK to merge this as a draft, or did you plan to review further?

@donmccurdy merged commit 92f59a0 into KhronosGroup:master Dec 26, 2019