Releases: superlinear-ai/raglite
v0.7.0
What's Changed
- feat: make importing faster by @lsorber in #86
- fix: avoid conflicting chunk ids by @joachim-Heirbrant-SL in #93
- feat: add ability to directly insert Markdown content into the database by @ThomasDelsart in #96
- feat: make llama-cpp-python an optional dependency by @rchretien in #97
- feat: migrate from poetry-cookiecutter to substrate by @rchretien in #98
- chore: upgrade scaffolding by @lsorber in #105
- fix: revert pandoc extra name by @lsorber in #106
- docs: improve inline comments by @lsorber in #107
- fix: lazily raise module not found for optional deps by @lsorber in #109
- feat: compute optimal sentence boundaries by @lsorber in #110
- fix: fix CLI entrypoint regression by @lsorber in #111
- feat: replace post-processing with declarative optimization by @lsorber in #112
New Contributors
- @joachim-Heirbrant-SL made their first contribution in #93
- @ThomasDelsart made their first contribution in #96
- @rchretien made their first contribution in #97
Full Changelog: v0.6.2...v0.7.0
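The optional-dependency changes in #97 and #109 follow a common Python pattern: defer the import of an optional package and raise a helpful `ModuleNotFoundError` only when the feature is actually used, so the base install stays lean. A minimal sketch of that pattern (the `require_optional` helper and the extra name in the message are illustrative, not RAGLite's actual code):

```python
import importlib
from types import ModuleType


def require_optional(module_name: str, extra: str) -> ModuleType:
    """Import an optional dependency lazily, with an actionable error message.

    Callers invoke this at the point of use rather than at the top of the
    module, so missing optional packages only fail the code paths that need
    them.
    """
    try:
        return importlib.import_module(module_name)
    except ModuleNotFoundError as error:
        raise ModuleNotFoundError(
            f"`{module_name}` is an optional dependency; "
            f"install it with `pip install raglite[{extra}]` to use this feature."
        ) from error
```

A feature that needs llama-cpp-python would then call `require_optional("llama_cpp", "llama-cpp-python")` inside the function body instead of importing it at module level.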
v0.6.2
v0.6.1
v0.6.0
What's Changed
- chore: update _extract.py by @eltociear in #70
- feat: improve sentence splitting by @lsorber in #72
- feat: add streaming tool use to llama-cpp-python by @lsorber in #71
- feat: upgrade from xx_sent_ud_sm to SaT by @lsorber in #74
- feat: add support for Python 3.12 by @lsorber in #69
- chore: cruft update by @lsorber in #76
New Contributors
- @eltociear made their first contribution in #70
Full Changelog: v0.5.1...v0.6.0
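The sentence-splitting work in #72 and #74 replaced the spaCy model xx_sent_ud_sm with SaT (Segment any Text, from the wtpsplit project). To illustrate the problem being solved, here is a naive rule-based splitter; it is a hypothetical stand-in, and RAGLite's actual model-based segmentation is far more robust to abbreviations, quotes, and multilingual text:

```python
import re


def split_sentences(text: str) -> list[str]:
    """Naive rule-based sentence splitter (illustrative stand-in for SaT).

    Splits after ., !, or ? when followed by whitespace and an uppercase
    letter, which avoids breaking on patterns such as decimal numbers.
    """
    parts = re.split(r"(?<=[.!?])\s+(?=[A-Z])", text.strip())
    return [part for part in parts if part]
```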
v0.5.1
v0.5.0
What's Changed
- style: reduce httpx log level by @lsorber in #59
- feat: let LLM choose whether to retrieve context by @lsorber in #62
- fix: support pgvector v0.7.0+ by @undo76 in #63
- docs: add GitHub star history to README by @MattiaMolon in #65
- feat: add MCP server by @lsorber in #67
New Contributors
- @MattiaMolon made their first contribution in #65
Full Changelog: v0.4.1...v0.5.0
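Letting the LLM choose whether to retrieve context (#62) typically works by exposing retrieval as a tool the model may or may not call, rather than always prepending search results to the prompt. A sketch of such a tool definition in the OpenAI-style function-calling schema (the tool name and fields are illustrative, not RAGLite's exact definitions):

```python
# A retrieval tool the LLM can invoke; the model decides per message whether
# the user's question actually needs external context before calling it.
search_tool = {
    "type": "function",
    "function": {
        "name": "search_knowledge_base",
        "description": (
            "Search the knowledge base for context relevant to the user's "
            "question. Only call this when external context is needed."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query."},
            },
            "required": ["query"],
        },
    },
}
```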
v0.4.1
v0.4.0
What's Changed
- feat: improve late chunking and optimize pgvector settings by @lsorber in #51
  - Add a workaround for #24 to make the embedder's context size user-definable instead of fixed at 512 tokens.
  - Increase the default embedder context size to 1024 tokens (larger sizes degrade bge-m3's performance).
  - Upgrade llama-cpp-python to the latest version.
  - Test rerankers more robustly with Kendall's rank correlation coefficient.
  - Optimize pgvector's settings.
  - Offer better control of oversampling in hybrid and vector search.
  - Upgrade to PostgreSQL 17.
Full Changelog: v0.3.0...v0.4.0
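Oversampling in hybrid search means asking each retriever for more candidates than the caller wants, then fusing and truncating, so good results ranked lower by one retriever can still surface. A sketch of the idea using reciprocal rank fusion, a common fusion method for hybrid search (function names, the `oversample` parameter, and the retriever interface are illustrative, not RAGLite's API):

```python
from typing import Callable


def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked result lists with reciprocal rank fusion (RRF)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, chunk_id in enumerate(ranking, start=1):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


def hybrid_search(
    query: str,
    retrievers: list[Callable[[str, int], list[str]]],
    num_results: int = 10,
    oversample: int = 4,
) -> list[str]:
    """Oversample each retriever, then fuse with RRF and truncate."""
    candidates = oversample * num_results
    rankings = [retrieve(query, candidates) for retrieve in retrievers]
    return reciprocal_rank_fusion(rankings)[:num_results]
```

A larger `oversample` factor trades extra retrieval work for a better chance that relevant chunks survive fusion into the final top-k.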
v0.3.0
v0.2.1
What's Changed
- fix: add fallbacks for model info by @undo76 in #44
- fix: improve unpacking of keyword search results by @lsorber in #46
- fix: upgrade rerankers and remove flashrank patch by @lsorber in #47
- fix: improve structured output extraction and query adapter updates by @emilradix in #34
New Contributors
- @undo76 made their first contribution in #44
- @emilradix made their first contribution in #34
Full Changelog: v0.2.0...v0.2.1
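Adding fallbacks for model info (#44) is usually a matter of trying a chain of metadata keys and ending in a safe default, so an unknown model degrades gracefully instead of crashing. A minimal sketch of that pattern (the function, key names, and default value are hypothetical, not RAGLite's actual lookup):

```python
def get_context_size(model_id: str, model_info: dict, default: int = 2048) -> int:
    """Look up a model's context size, falling back to a safe default.

    Tries each candidate key in order and returns the default when the model
    is unknown or none of the keys hold a usable value.
    """
    info = model_info.get(model_id, {})
    for key in ("max_input_tokens", "max_tokens"):
        value = info.get(key)
        if value:
            return value
    return default
```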