How to load model and inference CViT #1035
Let me briefly introduce the purpose of these suffixes: `.pdparams` stores the trained model parameters (weights), `.pdopt` stores the optimizer state, and `.pdstates` stores training states such as evaluation metrics. None of them is an inference model.
So if you want to run inference, you first need to export the CVit model to inference model files, i.e. `.json` and `.pdiparams`, which can be done following https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/ns_cvit/#__tabbed_1_4. If you need to export your own model trained by yourself, you can add extra CLI args:
# load your pretrained model and export to json+pdiparams
python ns_cvit.py mode=export INFER.pretrained_model_path=/path/to/your_model.pdparams
# do inference
python ns_cvit.py mode=infer
Note that we only provide the ns_cvit_small_8x8 pretrained model, so if you use another model config, please train it yourself and specify the corresponding pretrained model path.
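For reference, here is a minimal sketch of how an exported `.json` + `.pdiparams` pair can be loaded with the Paddle Inference Python API. The file names, input names, and shapes below are assumptions for illustration, not taken from the ns_cvit example:

```python
import numpy as np
import paddle.inference as paddle_infer

# File names are placeholders; use the .json/.pdiparams produced by `mode=export`.
config = paddle_infer.Config("ns_cvit_small_8x8.json", "ns_cvit_small_8x8.pdiparams")
# config.enable_use_gpu(256, 0)  # uncomment to run on GPU 0
predictor = paddle_infer.create_predictor(config)

# Every input of the exported model must be fed before run().
# Print the real input names first; the shape below is only an example.
input_names = predictor.get_input_names()
print("model inputs:", input_names)

feeds = {
    input_names[0]: np.zeros([1, 10, 128, 128, 2], dtype="float32"),
}
for name, arr in feeds.items():
    handle = predictor.get_input_handle(name)
    handle.reshape(list(arr.shape))
    handle.copy_from_cpu(arr)

predictor.run()

for name in predictor.get_output_names():
    out = predictor.get_output_handle(name).copy_to_cpu()
    print(name, out.shape)
```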
Hi,
`Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.`
Check if your paddlepaddle-gpu version is develop (nightly-build) or 3.0.0-b2.
Hi, `test data (1035, 10, 128, 128, 2) (1035, 4, 128, 128, 1) Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.` Thank you.
Hi, I am doing inference right now using my dataset. The problem is that my input has 2 channels and my output has 1 channel. I have a problem in this part; I feel the model somehow assumes 2 channels when it is created. This line might be the reason:
You can do inference by adapting the eval code, just replace
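A minimal sketch of what adapting the eval code could look like, using a stand-in network and a placeholder path (the real CVit model is built inside ns_cvit.py, so construct that instead so the parameter names match):

```python
import paddle

# Stand-in network; in practice build the same CVit model/config used for training.
model = paddle.nn.Linear(4, 4)

# .pdparams holds the trained weights and can be loaded back into a Python model
# for evaluation-style inference; it is not an exported inference model, so
# paddle.inference cannot consume it directly.
# (.pdopt = optimizer state, .pdstates = training states; not needed here.)
state_dict = paddle.load("/path/to/your_model.pdparams")  # placeholder path
model.set_state_dict(state_dict)
model.eval()

# Forward pass on your own data (placeholder shape).
x = paddle.zeros([1, 4], dtype="float32")
with paddle.no_grad():
    y = model(x)
print(y.shape)
```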
Hi,
I am trying to do inference using CViT. From training, I only have files with the extensions .pdopt, .pdparams, and .pdstates. How do I load these files, since the inference method seems to need a .json file?
Thanks