Issues: meta-llama/codellama
Issues list
#236: [Feature Request] Make model files available in pte format (Executorch) (opened Jun 6, 2024 by rcontesti)
#231: Missing CUDA library files are causing a crash when I start torchrun (opened May 2, 2024 by ichibrosan)
#222: Code Llama 7b model prints random words when I ask it to write a script to add two numbers (opened Apr 1, 2024 by palaniappan1791)
#220: Code Llama provides C++ code that does not successfully perform a basic mathematical function (opened Mar 26, 2024 by wvaughn409)
#213: Followed all the README instructions, but when I run one of the example.py files it gets stuck (opened Mar 13, 2024 by redonhalimaj)
#195: Questions about downloading the Code Llama model and using GPT4All (opened Jan 31, 2024 by jasonsu123)
ProTip! Exclude everything labeled bug with -label:bug.