
Can it run on an Intel CPU or GPU? #17

Open
ziyanxzy opened this issue Jul 24, 2024 · 35 comments
@ziyanxzy

Hi, I see it can run on a Mac M1 CPU, but can it run on an Intel CPU?

@aihacker111 (Owner)

The ONNX model can run on any CPU; the TensorRT model needs a GPU. @ziyanxzy

@ziyanxzy (Author)

Can we use an Intel GPU, e.g. an Intel Arc A770?

@aihacker111 (Owner)

@ziyanxzy You can test it, and if you find any bugs, please let me know.

@ziyanxzy (Author)

How can I set it to run on the CPU?

@ziyanxzy (Author)

Because I find it always runs with TensorRT instead of on the CPU.

@aihacker111 (Owner)

@ziyanxzy You have to remove the --run_time flag; it will then run the ONNX model.

@ziyanxzy (Author)

(screenshot)
I did that, but it still does not run on the CPU.

@aihacker111 (Owner)

That is running with onnxruntime-gpu; if you want CPU only, you have to install requirements-cpu.txt.

@ziyanxzy (Author)

Yes, I installed requirements-cpu.txt, but onnxruntime-gpu is in the requirements.
(screenshot)

@aihacker111 (Owner)

@ziyanxzy So sorry, just remove it and use onnxruntime only; I'll update the file later.

@ziyanxzy (Author)

If I uninstall onnxruntime, I guess it cannot run on the CPU without any code change?
(screenshot)

@aihacker111 (Owner)

@ziyanxzy onnxruntime is for the CPU and onnxruntime-gpu is for the GPU.
You need to pip uninstall onnxruntime-gpu and then install again with pip install onnxruntime (CPU only).
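After swapping the packages, a quick sanity check can confirm which build is active. This is a minimal sketch, not project code; `has_module` is a helper introduced here for illustration, and the provider check only runs if onnxruntime is actually installed:

```python
from importlib import util

def has_module(name: str) -> bool:
    """Return True if the named module can be imported."""
    return util.find_spec(name) is not None

# Both onnxruntime and onnxruntime-gpu install a module named
# `onnxruntime`, so inspect the available execution providers instead.
if has_module("onnxruntime"):
    import onnxruntime
    # The CPU-only wheel reports ['CPUExecutionProvider']; the GPU
    # wheel would additionally list CUDA/TensorRT providers.
    print(onnxruntime.get_available_providers())
```

If both packages end up installed at once, the import can pick up the GPU build, so uninstalling onnxruntime-gpu first matters.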

@ziyanxzy (Author)

Oh, thank you! But where is the output video? I cannot find the output.

@aihacker111 (Owner)

@ziyanxzy It's in the animations folder; it is created automatically.
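As a rough sketch of the auto-create behavior described here (the real logic lives in fast_live_portrait_pipeline.py, and the output filename below is hypothetical):

```python
import os

# Create the output folder if it does not exist yet; this mirrors the
# "auto create" behavior the owner describes above.
output_dir = "animations"
os.makedirs(output_dir, exist_ok=True)

# Hypothetical output filename, for illustration only.
output_path = os.path.join(output_dir, "output.mp4")
print(f"Output will be saved to: {output_path}")
```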

@ziyanxzy (Author)

Yes, I used this command, but it does not create an animations folder:
python run_live_portrait.py --driving_video "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/driving/d0.mp4" --source_image "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/examples/source/s9.jpg" --task ['video']

@aihacker111 (Owner)

@ziyanxzy Have you checked the source code? The folder is generated automatically there. Also, I'm adding a new feature and fixing the requirements.txt file, so you can git clone again.

@aihacker111 (Owner)

@ziyanxzy We don't have issues with it; please capture your project tree for me.

@ziyanxzy (Author)

(screenshot)
It still does not generate anything; I am using the example driving video and image.

@aihacker111 (Owner)

aihacker111 commented Jul 30, 2024

@ziyanxzy Please capture the inside of your Efficient-Live-Portrait folder; otherwise, check here to make sure where the path points.
File: fast_live_portrait_pipeline.py
(screenshot, 2024-07-30)
If it runs successfully, it will print the path where the output is saved.

@aihacker111 (Owner)

(screenshots, 2024-07-30)

I'm running on Colab and locally with no issues.

@ziyanxzy (Author)

(screenshot)

@aihacker111 (Owner)

Show me the log when it finishes running.

@aihacker111 (Owner)

@ziyanxzy --task is not a list; it's an option: image, video, or webcam. Use --task image, not --task ['image'].
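The mistake above can be reproduced with a minimal sketch. This is a hypothetical reconstruction of the --task option as an argparse string restricted to three choices; the repo's actual parser may differ in detail:

```python
import argparse

# --task is a plain string limited to three choices, not a list.
parser = argparse.ArgumentParser()
parser.add_argument("--task", choices=["image", "video", "webcam"])

ok = parser.parse_args(["--task", "video"])  # accepted
print(ok.task)

try:
    # Writing --task ['video'] on the command line passes a bracketed
    # string, which is not one of the allowed choices, so argparse
    # exits with an error.
    parser.parse_args(["--task", "['video']"])
    rejected = False
except SystemExit:
    rejected = True
```

So the working form of the earlier command simply ends in --task video.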

@ziyanxzy (Author)

OK, I see. I read the code and found that the task is not a list but a str. Now it succeeds! Thank you very much. But I think it's a little easy to confuse, because your example shows a list. ^^

@aihacker111 (Owner)

@ziyanxzy Haha, I'll fix it in the README; so sorry, my friend.

@aihacker111 (Owner)

@ziyanxzy If you run with TensorRT, please make sure you have the latest Ubuntu and packages; it cannot run on Windows with TensorRT. Note this, my friend.

@ziyanxzy (Author)

It doesn't matter. Will it run on Windows in the future? I also want to try running on an Intel iGPU, and to try deployment.

@aihacker111 (Owner)

@ziyanxzy If you have Windows, please help me build the TensorRT plugin again on Windows and open a pull request. Thank you.

@aihacker111 (Owner)

aihacker111 commented Jul 30, 2024

@ziyanxzy BTW, I don't like Windows much, haha 😂 🤣

@ziyanxzy (Author)

If I use TensorRT it will run on CUDA, but I want to run on the Windows CPU or the iGPU in my PC. I think running on the CPU will not take too long. Haha.
I also don't like Windows!

@aihacker111 (Owner)

@ziyanxzy Cool! Can you send me the validated runtime speed on your computer when you finish? I want to add a validation table to the README to make the speeds clear.

@ziyanxzy (Author)

Let me try on the Windows CPU. I think if we want to run on the iGPU on Windows, the model may need to be converted to OpenVINO. I'll try the Windows CPU first and give you the runtime speed. ^^ Thank you. BTW, why does ONNX run much faster on the CPU than PyTorch does? I tried running the PyTorch model on the CPU, but it takes a long time!

@aihacker111 (Owner)

@ziyanxzy Cool, I don't have plans to convert to OpenVINO because I mostly use macOS, but if you convert it successfully, can you open a pull request on my code to add the feature?

@aihacker111 (Owner)

@ziyanxzy Because ONNX Runtime is built to run on the CPU with a C++ back-end API and PyTorch is not; PyTorch will take all the cores of your CPU to run, while ONNX takes only about half the cores.
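CPU thread usage is also controllable rather than fixed. As a minimal sketch using the standard OMP_NUM_THREADS environment variable, which both PyTorch and ONNX Runtime honor when it is set before the framework is imported (the "half the cores" figure here just mirrors the behavior described above):

```python
import os

# Cap CPU threads to half the available cores. This must happen
# before importing torch or onnxruntime for it to take effect.
half_cores = max(1, (os.cpu_count() or 2) // 2)
os.environ["OMP_NUM_THREADS"] = str(half_cores)
print(f"Capped to {half_cores} threads")
```

Both frameworks also expose their own APIs for this (e.g. thread-count settings on the session or runtime), so the environment variable is just the lowest-common-denominator knob.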

@ziyanxzy (Author)

I think the inference time on the CPU depends on the CPU: on a better CPU it takes about 4 min 49 s, but on a worse one it takes about 12 min 25 s.
