Add Intel's IPEX-LLM #4019
NikosDi started this conversation in Feature Ideas
Replies: 1 comment
-
https://jan.ai/docs/local-engines/llama-cpp It looks like the local engine functionality was moved into the cortex llama.cpp wrapper during development?
-
Hi.
I think it would be useful to add more GPU-accelerated engines beyond NVIDIA CUDA.
Intel's IPEX-LLM (an LLM acceleration library built on Intel Extension for PyTorch) is a good example.