Releases · lofcz/LlmTornado
Release v3.1.33
What's Changed
- anthropic: support prompt caching (see the sketch after this list)
- improve vector stores, fix a potential failure to dispose HttpResult
- gemini-2.0-flash-thinking-exp, gemini-2.0-flash-exp
- adding test case for custom chunking strategy
- Update README.md
- Update README.md
- fix
- Update project versions to v3.1.32
- minor changes
- adding self-cleaning test-cases
- fixing issues from PR review
- fixing issues from PR review
- fixing XML docs and pointing created_at to the created property in the base result class
- adding rest of the endpoints for VectorStoreFile
- fixing JSON converter
- adding VectorStoreFile object and converter for chunking strategy
- adding vector store to the base API + changing JSON tag
- adding base for the VectorStore endpoint
- using Path.Join for file navigation rather than the Windows-specific \ separator
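
Anthropic prompt caching and the new VectorStore/VectorStoreFile endpoints are the substantive items above. The sketch below shows roughly how a long, cached system prompt could be set up with the fluent chat API; the constructor overload, the `Claude35` model path, and especially the cache-control surface are assumptions based on the repository's demo style rather than a confirmed API.

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

// Anthropic-only client; the key is read from the environment.
TornadoApi api = new TornadoApi(LLmProviders.Anthropic, Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY")!);

Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.Anthropic.Claude35.Sonnet
    // Hypothetical: opting the request (or a message) into Anthropic's prompt cache is
    // done through a vendor-specific setting; the caching test cases added in this
    // release are the authoritative reference for the exact property.
});

// A long, rarely changing system prompt is the typical candidate for prompt caching.
chat.AppendSystemMessage(await File.ReadAllTextAsync("reference_manual.txt"));
chat.AppendUserInput("Summarize chapter 3 of the manual above.");

Console.WriteLine(await chat.GetResponse());
```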
Updated Projects
LlmTornado.Demo.csproj -> 0.0.1
LlmTornado.csproj -> 3.1.33
Release v3.1.32
What's Changed
- add ollama streaming demo (see the sketch after this list)
- add ollama demo
- Update dotnet.yml
- Update dotnet.yml
- Update dotnet.yml
- fix
- exclude forks from status checking
- fix ci
- Pruned .gitignore
- Update .gitignore
- Update project versions to v3.1.31
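
The two Ollama demos cover the self-hosted path. Below is a small sketch, assuming the Uri-based `TornadoApi` constructor targets a locally running Ollama server and that a free-form model name can be passed via the `ChatModel` string constructor; the port and model tag are placeholders.

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;

// Point the client at a locally running Ollama instance (default port 11434).
TornadoApi api = new TornadoApi(new Uri("http://localhost:11434"));

Conversation chat = api.Chat.CreateConversation(new ChatModel("llama3.1"));
chat.AppendUserInput("Why is the sky blue?");

// Streaming variant, mirroring the "ollama streaming demo" above:
// tokens are written to the console as they arrive.
await chat.StreamResponse(Console.Write);
```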
Updated Projects
LlmTornado.Demo.csproj -> 0.0.1
LlmTornado.csproj -> 3.1.32
Release v3.1.31
What's Changed
- add release script
- fix
- fix
- setup ci
- llama-3.3-70b-versatile, llama-3.1-8b-instant, llama-guard-3-8b
Updated Projects
LlmTornado.csproj -> 3.1.31
LlmTornado.Demo.csproj -> 0.0.1
3.1.30
Features
- The SSE implementation for OpenAI and Anthropic has been reworked, fixing some rare issues with mid-response tool calls (see the sketch below).
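
For context on what a mid-response tool call looks like from the caller's side, here is a sketch of a streamed conversation that also declares a tool. The `Tool`/`ToolFunction` construction, the `ChatStreamEventHandler` property names, and the handler signatures are reproduced from memory of the repository's demos and may not match the current API exactly.

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.ChatFunctions;
using LlmTornado.Code;

TornadoApi api = new TornadoApi(LLmProviders.OpenAi, Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.O,
    Tools =
    [
        // Assumption: the JSON schema is passed as an anonymous object, demo-style.
        new Tool(new ToolFunction("get_weather", "Fetches the current weather", new
        {
            type = "object",
            properties = new { location = new { type = "string" } },
            required = new[] { "location" }
        }))
    ]
});

chat.AppendUserInput("What's the weather like in Prague?");

// Plain tokens keep flowing to MessageTokenHandler, while a tool call that arrives
// mid-stream is surfaced through FunctionCallHandler, which is the path the SSE rework hardens.
await chat.StreamResponseRich(new ChatStreamEventHandler
{
    MessageTokenHandler = token =>
    {
        Console.Write(token);
        return Task.CompletedTask;
    },
    FunctionCallHandler = calls =>
    {
        calls.ForEach(call => call.Result = new FunctionResult(call, new { temperature = "15 °C" }));
        return Task.CompletedTask;
    }
});
```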
3.1.26
Breaking changes
- LlmTornado was refactored to drop the `enums.net` dependency. Please install `LlmTornado.Contrib` to get back some `FunctionCall` helper methods.
Models
- OpenAI: `gpt-4o-2024-11-20`
Features
- Network-level safe APIs now expose the `HttpContent` of the outbound request and other details.
- Strict JSON mode is now supported in Google APIs (see the sketch after this list).
- Library dependencies are now marked as ranges for increased compatibility.
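
Strict JSON mode on the Google side presumably mirrors the existing OpenAI structured-output path. A sketch under that assumption follows; the `ChatRequestResponseFormats.StructuredJson` helper, the Gemini model path, and the schema shape are illustrative, not authoritative.

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

TornadoApi api = new TornadoApi(LLmProviders.Google, Environment.GetEnvironmentVariable("GOOGLE_API_KEY")!);

Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.Google.Gemini.Gemini15Flash,
    // Assumption: the same response-format helper used for OpenAI structured outputs
    // now also applies to Google models.
    ResponseFormat = ChatRequestResponseFormats.StructuredJson("weather_report", new
    {
        type = "object",
        properties = new
        {
            city = new { type = "string" },
            temperature_c = new { type = "number" }
        },
        required = new[] { "city", "temperature_c" },
        additionalProperties = false
    })
});

chat.AppendUserInput("Give me a weather report for Prague as JSON.");
Console.WriteLine(await chat.GetResponse());
```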
3.0.18
Breaking changes
- The audio endpoint (transcription, translation) is undergoing an extensive rework to bring it up to the `chat` endpoint standard.
- Some of the old methods had the `Async` suffix removed.
Models
- OpenAI: `gpt-4o-audio-preview`
- Anthropic: `claude-3-5-sonnet-20241022`, `claude-3-5-haiku-20241022`
- Groq: `whisper-large-v3-turbo`, `distil-whisper-large-v3-en`, `whisper-large-v3`
Features
- Chat `audio` modality is now fully supported, including streaming and smart compression. All modes (audio in, text out; text in, audio out; audio in, audio out; mixed) are supported (see the sketch after this list).
- Mapped missing bits from OpenAI, including the new `usage` details and the `store` and `metadata` fields.
- Extensive audio rework; see the new demos for more details.
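
A rough sketch of a text-in, audio-out request against `gpt-4o-audio-preview` is below. Essentially every audio-specific name in it (`Modalities`, `ChatModelModalities`, `ChatRequestAudio`, the voice and format enums, `GetResponseRich`) is an assumption modeled on the OpenAI API shape; the new audio demos are the authoritative reference.

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

TornadoApi api = new TornadoApi(LLmProviders.OpenAi, Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

// Text in, audio out. Property and enum names below are assumptions, not confirmed API.
Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.OAudioPreview,
    Modalities = [ChatModelModalities.Text, ChatModelModalities.Audio],
    Audio = new ChatRequestAudio(ChatAudioRequestKnownVoices.Alloy, ChatAudioRequestFormats.Wav)
});

chat.AppendUserInput("Read this sentence aloud: the quick brown fox jumps over the lazy dog.");

// The rich response carries the audio block(s) alongside any text content.
ChatRichResponse response = await chat.GetResponseRich();
```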
3.1.5
3.1.0
Breaking changes
- `ChatModel.Cohere.CommandRPlus` -> `ChatModel.Cohere.Claude3.CommandRPlus`
Features
- Support Google Gemini models (text, [parallel] tools, streaming); see the sketch after this list.
- Improve performance when dealing with rich responses (images).
- Support missing Cohere models: `command`, `command-nightly`, `command-light`, and `command-light-nightly`.
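
With Gemini joining OpenAI, Anthropic and Cohere, the multi-provider constructor is the natural entry point. The sketch below assumes the `ProviderAuthentication` list overload and a `ChatModel.Google.Gemini.*` path, both best-effort rather than confirmed for this exact version.

```csharp
using LlmTornado;
using LlmTornado.Chat.Models;
using LlmTornado.Code;

// One client, several providers; the model picked per conversation decides where the call is routed.
TornadoApi api = new TornadoApi(new List<ProviderAuthentication>
{
    new ProviderAuthentication(LLmProviders.OpenAi, Environment.GetEnvironmentVariable("OPENAI_API_KEY")!),
    new ProviderAuthentication(LLmProviders.Google, Environment.GetEnvironmentVariable("GOOGLE_API_KEY")!)
});

// Routed to Google because a Gemini model is selected.
string? geminiAnswer = await api.Chat.CreateConversation(ChatModel.Google.Gemini.Gemini15Pro)
    .AppendSystemMessage("You are a concise assistant.")
    .AppendUserInput("Name three moons of Jupiter.")
    .GetResponse();

Console.WriteLine(geminiAnswer);
```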
3.0.17
- fix parallel tool calls setting
3.0.16
- improved Anthropic streaming
- support disabling parallel tool calling (see the sketch below)
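
To illustrate the parallel tool-call setting from 3.0.16/3.0.17: a sketch assuming the flag surfaces as a `ParallelToolCalls` boolean on `ChatRequest`, mapping to OpenAI's `parallel_tool_calls`; the property name and the two-argument `ToolFunction` constructor are assumptions.

```csharp
using LlmTornado;
using LlmTornado.Chat;
using LlmTornado.Chat.Models;
using LlmTornado.ChatFunctions;
using LlmTornado.Code;

TornadoApi api = new TornadoApi(LLmProviders.OpenAi, Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.O,
    Tools = [new Tool(new ToolFunction("get_time", "Returns the current time in a city"))],
    // Assumption: with the flag disabled, at most one tool call is returned per
    // assistant turn, even when the prompt invites several.
    ParallelToolCalls = false
});

chat.AppendUserInput("What time is it in Prague and in Tokyo?");
Console.WriteLine(await chat.GetResponse());
```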