
Commit

quick readme update
CRD716 committed May 3, 2023
1 parent f11c0f9 commit 8dc342c
Showing 1 changed file with 2 additions and 3 deletions.
README.md: 5 changes (2 additions, 3 deletions)
@@ -20,12 +20,11 @@ The main goal of `llama.cpp` is to run the LLaMA model using 4-bit integer quant
 - Apple silicon first-class citizen - optimized via ARM NEON and Accelerate framework
 - AVX2 support for x86 architectures
 - Mixed F16 / F32 precision
-- 4-bit integer quantization support
+- 4 & 8 bit integer quantization support
 - Runs on the CPU
 
 The original implementation of `llama.cpp` was [hacked in an evening](https://github.com/ggerganov/llama.cpp/issues/33#issuecomment-1465108022).
-Since then, the project has improved significantly thanks to many contributions. This project is for educational purposes and serves
-as the main playground for developing new features for the [ggml](https://github.com/ggerganov/ggml) library.
+Since then, the project has improved significantly thanks to many contributions. This project is for educational purposes and serves as the main playground for developing new features for the [ggml](https://github.com/ggerganov/ggml) library.
 
 **Supported platforms:**
 
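The bullet touched by this diff refers to ggml's block-wise integer quantization of model weights. As a rough sketch of the idea only — the block size, struct layout, and rounding below are simplified assumptions for illustration, not the actual ggml Q4_0/Q8_0 kernels — a 4-bit scheme stores one scale per block of weights and packs two signed 4-bit values into each byte:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a simplified 4-bit block quantizer in the spirit of
 * ggml's 4-bit formats. Each block of 32 floats is stored as one float
 * scale plus 16 bytes holding two 4-bit values each. */

#define QBLOCK 32

typedef struct {
    float   scale;              /* per-block scale factor              */
    uint8_t quants[QBLOCK / 2]; /* 32 x 4-bit values, packed in pairs  */
} block_q4;

/* Quantize one block of QBLOCK floats to 4-bit integers in [-7, 7]. */
static void quantize_block_q4(const float *x, block_q4 *out) {
    float amax = 0.0f;
    for (int i = 0; i < QBLOCK; ++i) {
        float a = fabsf(x[i]);
        if (a > amax) amax = a;
    }
    out->scale = amax / 7.0f;
    float inv = out->scale != 0.0f ? 1.0f / out->scale : 0.0f;

    for (int i = 0; i < QBLOCK; i += 2) {
        int q0 = (int)roundf(x[i]     * inv);
        int q1 = (int)roundf(x[i + 1] * inv);
        /* store with a +8 offset so both nibbles are non-negative */
        out->quants[i / 2] = (uint8_t)((q0 + 8) | ((q1 + 8) << 4));
    }
}

/* Dequantize back to floats (lossy: that is the memory/accuracy trade-off). */
static void dequantize_block_q4(const block_q4 *in, float *x) {
    for (int i = 0; i < QBLOCK; i += 2) {
        int q0 = (in->quants[i / 2] & 0x0F) - 8;
        int q1 = (in->quants[i / 2] >> 4)   - 8;
        x[i]     = q0 * in->scale;
        x[i + 1] = q1 * in->scale;
    }
}

int main(void) {
    float src[QBLOCK], dst[QBLOCK];
    for (int i = 0; i < QBLOCK; ++i) src[i] = sinf((float)i);

    block_q4 b;
    quantize_block_q4(src, &b);
    dequantize_block_q4(&b, dst);

    for (int i = 0; i < 4; ++i)
        printf("x[%d] = % .4f  ->  % .4f\n", i, src[i], dst[i]);
    return 0;
}
```

An 8-bit variant follows the same pattern with one signed byte per weight, spending a bit more memory for lower quantization error.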
