Releases: smkrv/ha-text-ai
v2.1.2
v2.1.2 Release Notes:
Bug Fixes
- Resolved UI-level token limit calculation bug
- Maintains full functional compatibility with v2.1.1
Previous Version Features (v2.1.1)
For more details on the integration, check out the discussion on the Home Assistant Community forum
🔄 Major Architectural Changes
- Complete refactoring of token handling mechanism
- Elimination of custom token calculation approach
- Direct `max_tokens` parameter passing to LLM APIs
🎯 Key Technical Improvements
- Enhanced cross-provider compatibility
- Expanded support for large-context language models
- Robust and predictable token limit management
- Significant codebase simplification
- Full DeepSeek provider integration
Provider Updates
DeepSeek — NEW Integration
DeepSeek is a cutting-edge AI provider specializing in advanced language models optimized for both conversational and reasoning tasks. This integration brings:
- High-performance model inference
- Cost-effective API endpoints
- Enterprise-grade reliability
- Flexible deployment options
Full Changelog: v2.1.1...v2.1.2
v2.1.1
v2.1.1 - Token Handling & DeepSeek Provider Integration
For more details on the integration, check out the discussion on the Home Assistant Community forum
🔄 Major Architectural Changes
- Complete refactoring of token handling mechanism
- Elimination of custom token calculation approach
- Direct `max_tokens` parameter passing to LLM APIs (see the sketch after this list)
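To illustrate the simplification, here is a minimal sketch of direct `max_tokens` passing, assuming an OpenAI-compatible async client; the function and client names are illustrative, not the integration's actual code:

```python
# Hedged sketch: the configured limit is forwarded as-is to the provider,
# with no custom token arithmetic on the integration side.
from openai import AsyncOpenAI

client = AsyncOpenAI(api_key="YOUR_API_KEY", base_url="https://api.example.com/v1")


async def async_generate(prompt: str, max_tokens: int) -> str:
    """Send a prompt and let the API enforce the response token limit."""
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,  # passed straight through to the LLM API
    )
    return response.choices[0].message.content or ""
```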
🎯 Key Technical Improvements
- Enhanced cross-provider compatibility
- Expanded support for large-context language models
- Robust and predictable token limit management
- Significant codebase simplification
- Full DeepSeek provider integration
🙏 Community Acknowledgments
Heartfelt gratitude to @estiens for identifying token handling complexities and providing comprehensive feedback that drove these critical improvements (#1).
Provider Updates
DeepSeek — NEW Integration
DeepSeek is a cutting-edge AI provider specializing in advanced language models optimized for both conversational and reasoning tasks. This integration brings:
- High-performance model inference
- Cost-effective API endpoints
- Enterprise-grade reliability
- Flexible deployment options
New Model Support
DeepSeek
- deepseek-chat (DeepSeek-V3) — NEW Model
  A state-of-the-art conversational AI model designed for natural, context-aware dialogues. Features include:
  - Enhanced context retention
  - Multi-turn conversation support
  - Emotion-aware responses
  - Multi-language capabilities
- deepseek-reasoner (DeepSeek-R1) — NEW Model
  A specialized reasoning engine optimized for:
  - Complex problem-solving
  - Logical inference tasks
  - Structured data analysis
  - Multi-step reasoning workflows
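Both models are served through DeepSeek's OpenAI-compatible API, so a quick way to try them outside Home Assistant is a few lines of Python (a hedged sketch: the base URL and parameters follow DeepSeek's public documentation, not the integration's internal provider code):

```python
# Hedged sketch: calling the two new DeepSeek models via the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

for model in ("deepseek-chat", "deepseek-reasoner"):
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize today's sensor readings."}],
        max_tokens=512,
    )
    print(model, "->", reply.choices[0].message.content)
```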
Full Changelog: v2.1.0...v2.1.1
v2.1.0
v2.1.0 Release Notes:
Fixed incorrect version display in the integration instance
Full Changelog: v2.0.9...v2.1.0
v2.0.9
Version increment
This is a routine version update without substantial modifications to the codebase.
Version bumped to v2.0.9 for maintenance purposes.
Full Changelog: v2.0.8...v2.0.9
v2.0.8
fix: Display only last Q&A in sensor state to prevent data truncation
- Show only the latest question and answer in sensor state
- Keep full conversation history in attributes
- Fix truncation issues in Home Assistant UI
- Maintain backwards compatibility
- No configuration changes required
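For context, Home Assistant truncates an entity's state at 255 characters, which is why long conversations were being cut off; attributes have no such limit. A rough sketch of the resulting state/attributes split (class and attribute names are illustrative, not the integration's actual code):

```python
# Illustrative sketch of the state/attributes split described above.
from homeassistant.components.sensor import SensorEntity


class ConversationSensor(SensorEntity):
    """Expose only the latest exchange as state; keep full history in attributes."""

    def __init__(self) -> None:
        self._history: list[dict[str, str]] = []

    @property
    def native_value(self) -> str | None:
        # The state must stay within Home Assistant's 255-character limit.
        if not self._history:
            return None
        last = self._history[-1]
        return f"Q: {last['question']} | A: {last['answer']}"[:255]

    @property
    def extra_state_attributes(self) -> dict:
        # Attributes are not length-limited, so the full conversation lives here.
        return {"conversation_history": self._history}
```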
Full Changelog: v2.0.7-beta...v2.0.8
v2.0.7-beta
Optimization and Performance Improvements
🚀 Key Enhancements
- JSON Handling: Optimized JSON file history management for improved efficiency
- Memory Management: Added comprehensive memory usage validation during initialization
- Disk Space Monitoring: Implemented free disk space verification to prevent potential storage-related issues
- Performance Optimization: Streamlined code structure, enhanced parallel request processing, and improved overall system responsiveness
- Token Estimation: Refined heuristic token counting method for more accurate conversation tracking
🔍 Technical Details
- Enhanced error handling and resource management
- Improved system resilience through proactive checks
- More precise token usage estimation
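The resource checks and the token heuristic are simple in principle; below is a hedged sketch of both, where the 4-characters-per-token ratio and the free-space threshold are illustrative assumptions rather than the integration's exact values:

```python
# Hedged sketch: proactive resource checks plus a rough token estimate.
import shutil


def estimate_tokens(text: str) -> int:
    """Heuristic: roughly four characters per token for English-like text."""
    return max(1, len(text) // 4)


def has_free_disk_space(path: str, minimum_bytes: int = 50 * 1024 * 1024) -> bool:
    """Skip writing history when free space falls below the threshold."""
    return shutil.disk_usage(path).free >= minimum_bytes
```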
🛠 Recommended Update
Recommended for all users seeking improved system stability and performance.
Full Changelog: v2.0.5-beta...v2.0.7-beta
❌ v2.0.6-broken
❌ v2.0.6 — broken
v2.0.5-beta
🔄 Changes v2.0.5
🆕 New Features
- Conversation history now stored in files instead of Home Assistant states
- Added configurable history size limit (default: 100 entries)
- Added automatic cleanup of old history entries
💪 Improvements
- Reduced memory usage by moving history to file storage
- Better handling of large conversation histories
- Fixed async history updates
- Updated translations
- Fixed minor bugs
- Updated documentation
📝 Notes
- History is now stored in `.storage/ha_text_ai_history/`
- No configuration changes required
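A minimal sketch of the file-backed history pattern described above; the file layout and helper name are assumptions, and only the storage directory and the 100-entry default come from these notes:

```python
# Hedged sketch: JSON-file history with a size cap and automatic trimming.
import json
from pathlib import Path

HISTORY_DIR = Path(".storage/ha_text_ai_history")
MAX_ENTRIES = 100  # documented default history size limit


def append_history(instance_id: str, entry: dict) -> None:
    """Append one Q&A entry and trim the file to the newest MAX_ENTRIES."""
    HISTORY_DIR.mkdir(parents=True, exist_ok=True)
    path = HISTORY_DIR / f"{instance_id}.json"
    history = json.loads(path.read_text(encoding="utf-8")) if path.exists() else []
    history.append(entry)
    path.write_text(
        json.dumps(history[-MAX_ENTRIES:], ensure_ascii=False, indent=2),
        encoding="utf-8",
    )
```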
Full Changelog: v2.0.4-beta...v2.0.5-beta
v2.0.4-beta
v2.0.4-beta Release Notes:
feat(localization): Expand multilingual support
- Added translations for:
  - Chinese (zh)
  - Serbian (sr)
  - Italian (it)
  - Hindi (hi)
  - Spanish (es)
- Fixed minor bugs
- Improved language coverage
Full Changelog: v2.0.3-beta...v2.0.4-beta
v2.0.3-beta
v2.0.3-beta Release Notes:
- Fixed sensor naming: sensors are now created correctly according to the Sensor Naming Convention.
- Added additional endpoint availability checks during operation.
- Improved Anthropic integration stability.
- Implemented more accurate token counting (still heuristic, but noticeably more precise).
- Enhanced max_tokens handling.
- Improved error descriptions and hints.
- Expanded sensor attribute list.
- Other bugfixes.
Important: If you previously created automations or sensors with special characters in their names, please recreate them. Unfortunately, it is not possible to automatically re-import them with the corrected names.
ⓘ Sensors created according to the documentation description will continue to work correctly.
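For anyone curious what the corrected names look like, one plausible sketch of the normalization uses Home Assistant's own `slugify` helper (the exact mechanism inside the integration is an assumption):

```python
# Hedged sketch: how special characters get normalized into a valid entity ID.
from homeassistant.util import slugify

instance_name = "My GPT-4 Assistant (kitchen)!"
entity_id = f"sensor.ha_text_ai_{slugify(instance_name)}"
print(entity_id)  # e.g. sensor.ha_text_ai_my_gpt_4_assistant_kitchen
```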
Full Changelog: v2.0.2-beta...v2.0.3-beta