Issues: Mozilla-Ocho/llamafile
#591 Bug: mlock is failing and llama-server is outdated for very long time. [bug, high severity] opened Oct 16, 2024 by BoQsc
#589 Bug: llamafiler /v1/embeddings endpoint does not return model name [bug, low severity] opened Oct 14, 2024 by wirthual
#588 Bug: --path Option Broken When Pointing to a Folder [bug, high severity] opened Oct 13, 2024 by gorkem
#587 Bug: binary called ape in PATH breaks everything [bug, high severity] opened Oct 13, 2024 by step21
#584 Bug: Phi3.5-mini-instruct Q4 K L gguf based llamafile CuDA error AMD iGPU [bug, high severity] opened Oct 10, 2024 by eddan168
#583 Feature Request: /v1/models endpoint for further openai api compatibility [enhancement] opened Oct 10, 2024 by quantumalchemy
#580 Bug: install: cannot stat 'o/x86_64/stable-diffusion.cpp/main': No such file or directory [bug, high severity] opened Oct 6, 2024 by toby3d
#579 Bug: APE is running on WIN32 inside WSL - whisperfile - zsh [bug, high severity] opened Oct 4, 2024 by baptistecs
#567 Feature Request: document change to default context window in 0.8.13 [enhancement] opened Sep 26, 2024 by cbowdon
#560 Bug: Segmentation fault re-running after installing NVIDIA CUDA. [bug, medium severity] opened Sep 5, 2024 by 4kbyte
#548 Feature Request: Add support for Raspberry Pi Ai Kit [request to lend support] opened Aug 20, 2024 by beingminimal
#547 Bug: ggml-rocm.so not found in llamafile 0.8.13 [bug, medium severity] opened Aug 20, 2024 by winstonma
#537 Bug: malloc: *** error for object 0x600003310600: pointer being freed was not allocated opened Aug 13, 2024 by groovecoder
#533 Bug: The token generation speed is slower compared to the upstream llama.cpp project [bug, medium severity] opened Aug 13, 2024 by BIGPPWONG
#532 Bug: unknown argument: --threads-batch-draft [bug, medium severity] opened Aug 9, 2024 by moisestohias
#516 Bug: llama 3.1 and variants fail with error "wrong number of tensors; expected 292, got 291" [bug, high severity] opened Jul 30, 2024 by camAtGitHub
#515 Feature Request: Support for microsoft/Phi-3-vision-128k-instruct [enhancement] opened Jul 30, 2024 by azhuvath
#512 Bug: Unable to load Mixtral-8x7B-Instruct-v0.1-GGUF on Amazon Linux with AMD EPYC 7R13 [bug, critical severity] opened Jul 28, 2024 by rpchastain