Bump llama-cpp-python from 0.2.27 to 0.2.32 (#305)
Bumps [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
from 0.2.27 to 0.2.32.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/abetlen/llama-cpp-python/blob/main/CHANGELOG.md">llama-cpp-python's
changelog</a>.</em></p>
<blockquote>
<h2>[0.2.32]</h2>
<ul>
<li>feat: Update llama.cpp to
ggerganov/llama.cpp@504dc37</li>
<li>fix: from_json_schema oneof/anyof bug by <a
href="https://github.com/jndiogo"><code>@​jndiogo</code></a> in
d3f5528ca8bcb9d69d4f27e21631e911f1fb9bfe</li>
<li>fix: pass chat handler not chat formatter for huggingface
autotokenizer and tokenizer_config formats by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
24f39454e91cf5dddbc4b6041aead4accc7c7a2d</li>
<li>feat: Add add_generation_prompt option for jinja2chatformatter by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
7f3209b1eb4ad3260ba063801fab80a8c25a2f4c</li>
<li>feat: Add Jinja2ChatFormatter by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
be09318c26add8674ce494ae7cc480cce72a4146</li>
<li>feat: Expose gguf model metadata in metadata property by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
5a34c57e5479e50c99aba9b38218cc48e6560b81</li>
</ul>
<h2>[0.2.31]</h2>
<ul>
<li>feat: Update llama.cpp to
ggerganov/llama.cpp@a5cacb2</li>
<li>fix: Mirostat sampling now passes correct type to ctypes and tracks
state during generation by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
3babe3512cb95743108f2b595210c38ed6f1b904</li>
<li>fix: Python3.8 support in server by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
141293a75b564a8699e0acba1da24d9aa1cf0ab1</li>
</ul>
<h2>[0.2.30]</h2>
<ul>
<li>feat: Update llama.cpp to
ggerganov/llama.cpp@57e2a7a</li>
<li>feat(server): Add ability to load chat format from huggingface
autotokenizer or tokenizer_config.json files by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
b8fc1c7d83ad4a9207c707ba1d954fe580286a01</li>
<li>feat: Integration of Jinja2 Templating for chat formats by <a
href="https://github.com/teleprint-me"><code>@​teleprint-me</code></a>
in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/875">#875</a></li>
<li>fix: Offload KQV by default by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
48c3b77e6f558a9899de0e1155c7dc0c7958d8e8</li>
<li>fix: Support Accept text/event-stream in chat and completion
endpoints, resolves <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1083">#1083</a>
by <a href="https://github.com/aniljava"><code>@​aniljava</code></a> in
<a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1088">#1088</a></li>
<li>fix(cli): allow passing n_ctx=0 to openAI API server args to use
model n_ctx_train field per <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1015">#1015</a>
by <a href="https://github.com/K-Mistele"><code>@​K-Mistele</code></a>
in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1093">#1093</a></li>
</ul>
<h2>[0.2.29]</h2>
<ul>
<li>feat: Update llama.cpp to
ggerganov/llama.cpp@4483396</li>
<li>feat: Add split_mode option by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
84615adbc6855c8384807c42f0130f9a1763f99d</li>
<li>feat: Implement GGUF metadata KV overrides by <a
href="https://github.com/phiharri"><code>@​phiharri</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1011">#1011</a></li>
<li>fix: Avoid &quot;LookupError: unknown encoding: ascii&quot; when
open() called in a destructor by <a
href="https://github.com/yieldthought"><code>@​yieldthought</code></a>
in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1012">#1012</a></li>
<li>fix: Fix low_level_api_chat_cpp example to match current API by <a
href="https://github.com/aniljava"><code>@​aniljava</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1086">#1086</a></li>
<li>fix: Fix Pydantic model parsing by <a
href="https://github.com/DeNeutoy"><code>@​DeNeutoy</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1087">#1087</a></li>
</ul>
<h2>[0.2.28]</h2>
<ul>
<li>feat: Update llama.cpp to
ggerganov/llama.cpp@6efb8eb</li>
<li>feat: Add ability to pass in penalize_nl param by <a
href="https://github.com/shankinson"><code>@​shankinson</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1068">#1068</a></li>
<li>fix: print_grammar to stderr by <a
href="https://github.com/turian"><code>@​turian</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1052">#1052</a></li>
</ul>
</blockquote>
</details>
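Of the changes quoted above, the one most visible from application code is the `metadata` property added in 0.2.32, which surfaces the GGUF key/value metadata of a loaded model. A minimal sketch of how it might be used — the model path is hypothetical, and the available keys depend on the GGUF file:

```python
from llama_cpp import Llama

# Hypothetical local path; any GGUF model file should work.
llm = Llama(model_path="./models/example.Q4_K_M.gguf")

# New in 0.2.32: GGUF key/value metadata exposed as a dict of strings.
for key, value in llm.metadata.items():
    print(f"{key} = {value}")
```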
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/2ce0b8aa2c2f81d999bbb2a7246a9f221f9d52d0"><code>2ce0b8a</code></a>
Bump version</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/d3f5528ca8bcb9d69d4f27e21631e911f1fb9bfe"><code>d3f5528</code></a>
fix: from_json_schema oneof/anyof bug. Closes <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1097">#1097</a></li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/8eefdbca03d005095f6645d4e5b42b982af9daf0"><code>8eefdbc</code></a>
Update llama.cpp</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/88fbccaaa3416f38552d80d84c71fdb40c7c477a"><code>88fbcca</code></a>
docs: Add macosx wrong arch fix to README</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/24f39454e91cf5dddbc4b6041aead4accc7c7a2d"><code>24f3945</code></a>
fix: pass chat handler not chat formatter for huggingface autotokenizer
and t...</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/7f3209b1eb4ad3260ba063801fab80a8c25a2f4c"><code>7f3209b</code></a>
feat: Add add_generation_prompt option for jinja2chatformatter.</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/ac2e96d4b4610cb9cd9b0c978c76ece6567f5c02"><code>ac2e96d</code></a>
Update llama.cpp</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/be09318c26add8674ce494ae7cc480cce72a4146"><code>be09318</code></a>
feat: Add Jinja2ChatFormatter</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/5a34c57e5479e50c99aba9b38218cc48e6560b81"><code>5a34c57</code></a>
feat: Expose gguf model metadata in metadata property</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/833a7f1a86f2136df5f75c1bd62d2e4d5adaa439"><code>833a7f1</code></a>
Bump version</li>
<li>Additional commits viewable in <a
href="https://github.com/abetlen/llama-cpp-python/compare/v0.2.27...v0.2.32">compare
view</a></li>
</ul>
</details>
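The `Jinja2ChatFormatter` added in be09318, together with the `add_generation_prompt` option from 7f3209b, can also be exercised directly. A rough sketch, assuming the constructor takes a template string plus the special tokens as keyword arguments — the toy template and token strings below are purely illustrative, and the exact signature should be checked against the 0.2.32 source:

```python
from llama_cpp.llama_chat_format import Jinja2ChatFormatter

# Toy chat template for illustration only; real templates usually
# ship in a model's tokenizer_config.json.
template = (
    "{% for message in messages %}"
    "<|{{ message['role'] }}|>{{ message['content'] }}{{ eos_token }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|assistant|>{% endif %}"
)

formatter = Jinja2ChatFormatter(
    template=template,
    eos_token="</s>",
    bos_token="<s>",
    add_generation_prompt=True,  # option added in 0.2.32
)

result = formatter(messages=[{"role": "user", "content": "Hello!"}])
print(result.prompt)
```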
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=llama-cpp-python&package-manager=pip&previous-version=0.2.27&new-version=0.2.32)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
dependabot[bot] authored Jan 22, 2024
1 parent 57ae356 · commit 2315820
Showing 2 changed files with 6 additions and 4 deletions.
8 changes: 5 additions & 3 deletions poetry.lock

Some generated files are not rendered by default, so the poetry.lock diff is not shown here.

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -14,7 +14,7 @@ selfsign = "scripts.gen_certs:entrypoint"
 python = "^3.11"
 pydantic = "^2.5.3"
 fastapi = "^0.109.0"
-llama-cpp-python = "^0.2.27"
+llama-cpp-python = "^0.2.32"
 huggingface-hub = "0.20.1"
 duckdb = "^0.9.1"
 uvicorn = "^0.25.0"
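For anyone applying the same bump by hand: Poetry's caret constraint `^0.2.32` resolves to `>=0.2.32, <0.3.0`, so later 0.2.x releases can be picked up with `poetry update llama-cpp-python` without editing `pyproject.toml` again.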
