Description
I have trained the MeshTransformer on 200 different meshes from the chair category of ShapeNet, after decimation and filtering to meshes with fewer than 400 vertices and faces. The MeshTransformer reached a loss very close to 0.
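For context, the filtering step was roughly the following (a sketch using trimesh; my actual preprocessing script differs in details):

```python
import trimesh

def keep_mesh(path, max_vertices = 400, max_faces = 400):
    # Keep only decimated meshes small enough for the transformer's sequence length
    mesh = trimesh.load(path, force = 'mesh')
    return len(mesh.vertices) < max_vertices and len(mesh.faces) < max_faces
```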
But when I call the `generate` method of the MeshTransformer, I get very bad results.
From left to right: ground truth, autoencoder output, and MeshTransformer generations at temperatures of 0, 0.1, 0.7, and 1. This is with meshgpt-pytorch version 0.3.3.
Note: the MeshTransformer was not conditioned on text or anything else, so the output is not supposed to look exactly like the sofa, but it barely looks like a chair. We can guess the backrest and the legs, but that's it.
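For reference, generation is invoked roughly like this (a minimal sketch; the constructor arguments are placeholder values taken from the README, not my exact training config):

```python
import torch
from meshgpt_pytorch import MeshAutoencoder, MeshTransformer

# Placeholder hyperparameters for illustration, not my exact training config
autoencoder = MeshAutoencoder(num_discrete_coors = 128)
transformer = MeshTransformer(autoencoder, dim = 512, max_seq_len = 768)

# ... load the trained weights from the uploaded checkpoints here ...

# Unconditional sampling at the temperatures shown in the renders above
for temperature in (0.0, 0.1, 0.7, 1.0):
    with torch.no_grad():
        out = transformer.generate(temperature = temperature)
    # depending on the version, `out` is the face coordinates
    # or a (face_coordinates, face_mask) tuple
```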
Initially I thought that there might have been an error with the KV cache, so here are the results with `cache_kv=False` (the same sampling call, just with the cache disabled, as sketched below):
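A minimal sketch of that call, assuming the same `transformer` as above:

```python
# Same unconditional sampling (here at one of the temperatures above),
# but bypassing the key/value cache
out = transformer.generate(temperature = 0.7, cache_kv = False)
```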
And this one is with meshgpt-pytorch version 0.2.11:
When I trained on a single chair with a version before 0.2.11, the `generate` method was able to create a coherent chair (from left to right: ground truth, autoencoder output, `meshtransformer.generate()` output).
Why are the generated results so bad even though the transformer loss was very low?
I have uploaded the autoencoder and MeshTransformer checkpoints (on version 0.3.3), as well as 10 data samples, here: https://file.io/nNsfTyHX4aFB
Also, a quick question: why rewrite the transformer from scratch rather than use the HuggingFace GPT2 transformer?