
Add Azure Llama client support #872

Merged: 4 commits into main, Oct 10, 2024

Conversation

clavedeluna (Contributor)

Overview

Codemodder can run codemods with Azure Llama models.

Description

  • This is a breaking change: `context.llm_client` was renamed to `context.openai_llm_client` for clarity.
  • Both clients can now be enabled at the same time.
  • It's not necessary to add the model to the ModelRegistry, since the model used is encoded in whatever endpoint is set in CODEMODDER_AZURE_LLAMA_ENDPOINT.
I tested this out with:
        from azure.ai.inference.models import SystemMessage, UserMessage

        response = self.azure_llama_llm_client.complete(
            messages=[
                SystemMessage(content="You are a helpful assistant."),
                UserMessage(content="How many feet are in a mile?"),
            ]
        )

and got the expected response

(Pdb) print(response.choices[0].message.content)
There are 5,280 feet in a mile.
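
As a rough sketch of how the two environment variables mentioned above could gate the new client (the function name `load_azure_llama_config` and the returned dict shape are hypothetical, not from the PR's actual code):

```python
import os

# Hypothetical sketch: read the two Azure Llama environment variables from
# the PR and validate that they are configured together. Written as a plain
# function over a mapping so the check is easy to test without touching the
# real environment.
def load_azure_llama_config(env=os.environ):
    key = env.get("CODEMODDER_AZURE_LLAMA_API_KEY")
    endpoint = env.get("CODEMODDER_AZURE_LLAMA_ENDPOINT")
    # Exactly one of the two being set is a misconfiguration.
    if bool(key) != bool(endpoint):
        raise ValueError(
            "CODEMODDER_AZURE_LLAMA_API_KEY and CODEMODDER_AZURE_LLAMA_ENDPOINT "
            "must be set together"
        )
    if not key:
        return None  # Azure Llama client disabled
    return {"api_key": key, "endpoint": endpoint}
```

With both variables set, this returns the credentials to hand to the client; with neither, the client is simply left disabled.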

Close #871

@drdavella (Member) left a comment:
LGTM but I'm not sure what's up with the failing tests.


azure_llama_key = os.getenv("CODEMODDER_AZURE_LLAMA_API_KEY")
azure_llama_endpoint = os.getenv("CODEMODDER_AZURE_LLAMA_ENDPOINT")
if bool(azure_llama_key) ^ bool(azure_llama_endpoint):

Member:
This is my fault because I did this originally but != is definitely a lot clearer here 😅
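
For what it's worth, `^` (XOR) and `!=` agree on every pair of boolean operands, so both spellings express "exactly one of the two is set". A quick sanity check:

```python
# On bools, XOR (^) and inequality (!=) have identical truth tables,
# so either works for detecting that exactly one env var is configured.
for a in (True, False):
    for b in (True, False):
        assert (a ^ b) == (a != b)

# Example: key set but endpoint missing -> misconfiguration detected.
key_set, endpoint_set = True, False
assert key_set != endpoint_set
```

The difference is purely readability; `!=` signals intent to readers who don't think of booleans as bits.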

clavedeluna (Contributor, author):
Yeah, I kinda agree, but I also got very used to reading this and it's nice :)

@drdavella (Member):

> This is a breaking change because I thought it best to change context.llm_client > context.openai_llm_client for clarity

I agree. I don't think there's any need for deprecation periods at this point; we should just remember to bump major versions.

@clavedeluna clavedeluna added this pull request to the merge queue Oct 10, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Oct 10, 2024

sonarcloud bot commented Oct 10, 2024

Quality Gate passed

Issues
1 New issue
0 Accepted issues

Measures
0 Security Hotspots
0.0% Coverage on New Code
0.0% Duplication on New Code

See analysis details on SonarCloud

@clavedeluna clavedeluna added this pull request to the merge queue Oct 10, 2024
Merged via the queue into main with commit 7a41a84 Oct 10, 2024
13 checks passed
@clavedeluna clavedeluna deleted the llama-client branch October 10, 2024 16:36
Successfully merging this pull request may close these issues.

Add client for Azure-hosted Llama models
2 participants