Add Llama 3.1 and Mixtral 8x7B for Groq #1065
Conversation
PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.
PR Reviewer Guide 🔍
PR Code Suggestions ✨
lgtm
User description
PR Type: enhancement
Description
Added the following Groq models to pr_agent/algo/__init__.py:
- groq/mixtral-8x7b-32768
- groq/llama-3.1-8b-instant
- groq/llama-3.1-70b-versatile
- groq/llama-3.1-405b-reasoning
Changes walkthrough 📝
__init__.py (pr_agent/algo/__init__.py)
Add new Groq models to the configuration dictionary: mixtral-8x7b-32768, llama-3.1-8b-instant, llama-3.1-70b-versatile, and llama-3.1-405b-reasoning.
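For context, here is a minimal sketch of the kind of change this walkthrough describes: new entries added to the model configuration dictionary in pr_agent/algo/__init__.py. The dictionary name (MAX_TOKENS) and the token limits shown are assumptions for illustration, not the exact values from the diff.

```python
# Sketch only: the dictionary name and the token limits below are assumed
# for illustration; the actual values are defined by the PR diff.
MAX_TOKENS = {
    # ... existing model entries ...
    "groq/mixtral-8x7b-32768": 32768,       # Mixtral 8x7B on Groq (32k context)
    "groq/llama-3.1-8b-instant": 8192,      # assumed token limit
    "groq/llama-3.1-70b-versatile": 8192,   # assumed token limit
    "groq/llama-3.1-405b-reasoning": 8192,  # assumed token limit
}
```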