Can it run on an Intel CPU or GPU? #17
Hi, I see it can run on a Mac M1 CPU, but can it run on an Intel CPU or not?
Comments
@ziyanxzy The ONNX model can run on any CPU; the TensorRT model needs an NVIDIA GPU.
Can we use an Intel GPU, such as the Arc A770?
@ziyanxzy You can test it, and if you find any bugs, please let me know.
How can I set it to run on the CPU? I find that it always runs with TensorRT instead of the CPU.
@ziyanxzy You have to remove the --run_time flag; it will then run with the ONNX model.
That runs with onnxruntime-gpu; if you want CPU only, you have to install requirements-cpu.txt instead.
@ziyanxzy So sorry, just remove onnxruntime-gpu and install plain onnxruntime instead; I'll update the requirements later.
@ziyanxzy onnxruntime is for CPU and onnxruntime-gpu is for GPU.
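As a quick way to confirm which backend your install actually uses, here is a minimal sketch (assuming only that an onnxruntime package is installed; "model.onnx" is a placeholder path, not a file from this repo):

    # With the CPU-only "onnxruntime" package this prints ['CPUExecutionProvider'];
    # with "onnxruntime-gpu" it also lists CUDA (and TensorRT, if its libraries are found).
    import onnxruntime as ort

    print(ort.get_available_providers())

    # Force CPU inference explicitly, regardless of which package is installed:
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])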
Oh, thank you! But where is the output video? I cannot find the output.
@ziyanxzy It's in the animations folder, which is created automatically.
Yes, I used this command, but it did not create an animations folder: python run_live_portrait.py --driving_video "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/driving/d0.mp4" --source_image "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/examples/source/s9.jpg" --task ['video']
@ziyanxzy Have you checked the source code? The folder is generated automatically there. I'm also adding a new feature and fixing the requirements.txt file, so you can git clone again.
@ziyanxzy We haven't had issues with it; please capture your project tree for me.
@ziyanxzy Please capture the inside of your Efficient-Live-Portrait folder; if not, you need to check here to make sure where the path points.
Show me the log when it finishes running.
@ziyanxzy --task is not a list; it's an option: image, video, or webcam. Use --task image, not --task ['image'].
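Applied to the command from the earlier comment, the corrected invocation would be:

    python run_live_portrait.py --driving_video "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/driving/d0.mp4" --source_image "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/examples/source/s9.jpg" --task video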
OK, I see. I read the code and found out that the task is not a list but a str. Now it succeeds! Thank you very much. But I think it is a little easy to confuse, because your example shows a list. ^^
@ziyanxzy Haha, I'll fix it in the README, so sorry my friend.
@ziyanxzy If you run with TensorRT, please make sure you have the latest Ubuntu and packages; it cannot run on Windows with TensorRT. Note this, my friend.
It doesn't matter. Will it run on Windows in the future? I also want to try running on an Intel iGPU, and deployment.
@ziyanxzy If you have Windows, please help me build the TensorRT plugin again on Windows and open a pull request. Thank you!
@ziyanxzy BTW, I don't like Windows much, haha 😂 🤣
If I use TensorRT it will run on CUDA, but I want to run on my PC's Windows CPU or iGPU. If I use the CPU to run, I think it won't take too long, haha.
@ziyanxzy Cool! Can you send me your computer's validated runtime speed when you're finished? I want to add a validation table to the README to make the speeds clear.
Let me try on a Windows CPU. I think if we want to run on an iGPU on Windows, we may need to convert the model to OpenVINO. I will try the Windows CPU first and give you the runtime speed. ^^ Thank you. BTW, why does ONNX on the CPU run much faster than PyTorch on the CPU? I tried to run the PyTorch model on the CPU, but it takes a long time!
@ziyanxzy Cool. I don't have a plan to convert to OpenVINO because I mostly use macOS, but if you convert it successfully, can you open a pull request on my code to add the feature?
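For anyone trying the iGPU route, here is a minimal conversion sketch (assuming the openvino package, 2023.1 or later; "live_portrait.onnx" is a placeholder file name, and this is untested with this repo's models):

    # Convert an exported ONNX model to OpenVINO IR and compile it for an Intel iGPU.
    import openvino as ov

    model = ov.convert_model("live_portrait.onnx")
    ov.save_model(model, "live_portrait.xml")   # writes .xml + .bin IR files

    # "GPU" targets the Intel integrated GPU; use "CPU" as a fallback device.
    compiled = ov.Core().compile_model(model, "GPU")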
@ziyanxzy Because ONNX Runtime is built to run on the CPU with a C++ backend API, and PyTorch is not; PyTorch will take all the cores of your CPU to run, while ONNX takes only half the cores.
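If you want to control that directly for a fair comparison, here is a minimal sketch (assuming onnxruntime and torch are installed; the thread count of 4 is illustrative, not a recommendation from this thread, and "model.onnx" is a placeholder path):

    # Pin both runtimes to the same explicit CPU thread count.
    import onnxruntime as ort
    import torch

    torch.set_num_threads(4)          # limit PyTorch's intra-op CPU threads

    opts = ort.SessionOptions()
    opts.intra_op_num_threads = 4     # limit onnxruntime's intra-op threads
    session = ort.InferenceSession("model.onnx",
                                   sess_options=opts,
                                   providers=["CPUExecutionProvider"])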
I think the inference time on the CPU depends on the CPU: when I run on a better CPU it takes about 4 min 49 s, but on the worse one it takes about 12 min 25 s.