google-genai [feature]: Support Gemini system messages #5069

Closed
1 task done
afirstenberg opened this issue Apr 11, 2024 · 14 comments · Fixed by #7235
Labels
auto:enhancement A large net-new component, integration, or chain. Use sparingly. The largest features

Comments

@afirstenberg
Contributor

Privileged issue

  • I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.

Issue Content

See langchain-ai/langchain-google#129

https://ai.google.dev/docs/system_instructions
https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini#request_body (for reference only)

System instructions aren't implemented for all models, so also add convertSystemMessageToHumanContent to stay compatible with the Python package. Also see the warning message generated at https://github.com/langchain-ai/langchain-google/blob/62ba6ca5e39174d467e47c83119c8bb4275128f6/libs/vertexai/langchain_google_vertexai/chat_models.py#L175
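For context, the requested mapping — hoisting a leading system message into Gemini's systemInstruction request field, and rejecting system messages anywhere else — might look like this sketch in plain TypeScript. The `Message` and `GeminiRequest` shapes below are illustrative stand-ins, not the actual package types:

```typescript
// Illustrative stand-ins for the real message/request types.
interface Message {
  role: "system" | "human" | "ai";
  content: string;
}

interface GeminiRequest {
  systemInstruction?: { parts: { text: string }[] };
  contents: { role: "user" | "model"; parts: { text: string }[] }[];
}

// Hoist a leading system message into the systemInstruction field and
// map the remaining messages onto Gemini's user/model roles.
function toGeminiRequest(messages: Message[]): GeminiRequest {
  const request: GeminiRequest = { contents: [] };
  for (let i = 0; i < messages.length; i += 1) {
    const msg = messages[i];
    if (msg.role === "system") {
      if (i !== 0) {
        throw new Error(
          "System messages are only permitted as the first passed message."
        );
      }
      request.systemInstruction = { parts: [{ text: msg.content }] };
    } else {
      request.contents.push({
        role: msg.role === "ai" ? "model" : "user",
        parts: [{ text: msg.content }],
      });
    }
  }
  return request;
}
```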

@dosubot dosubot bot added the auto:enhancement A large net-new component, integration, or chain. Use sparingly. The largest features label Apr 11, 2024
@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Jul 11, 2024
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Jul 18, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Jul 18, 2024
@jacoblee93 jacoblee93 reopened this Jul 19, 2024
@shannon-ab
Contributor

Hi @afirstenberg @jacoblee93 we are students from the University of Toronto and we are interested in contributing to this issue. Would it be possible for us to take it on? If so, are there any specific guidelines or suggestions you would like us to follow as we work on it?

@jacoblee93
Collaborator

jacoblee93 commented Oct 15, 2024

Hey @shannon-ab! I'm not sure the status of this but you can check out general guidelines here:

https://github.com/langchain-ai/langchainjs/blob/main/CONTRIBUTING.md

But in general, feel free to give it your best shot and let us refine it!

@shannon-ab
Contributor

Hi @jacoblee93! Thank you for your response and the guidelines. We have an issue analysis document that outlines the code changes for this issue which you can find here: https://docs.google.com/document/d/1YDUxJVI9NPu7nPObFppPfkdn1ZSR5BsosZdUkuDNB6I/edit?tab=t.0#heading=h.6w32w3mbbyvh
We plan to have a PR ready by early to mid-November. I saw from the attached guidelines that we should assign the issue to ourselves once we start working on it; however, I am unable to assign it to myself. Could you please assign this issue to me? Thanks again!

@afirstenberg
Contributor Author

@shannon-ab - Great to see your group taking this on, and it is good to see you have a preliminary design document. Some notes on your analysis:

  • PaLM has been decommissioned (see https://ai.google.dev/palm_docs/deprecation). There is no need to reference it anymore.
  • In general, it may be useful to see how the google-common package is doing things. (It uses REST rather than Google's library, and it aims to support both the AI Studio and the Vertex AI endpoints, but it may provide some insight into how to address some of these issues.)
  • I'm not sure GeminiRole is used anywhere in the genai package. Is it?
  • I'm pretty sure that formatSystemInstruction() will need to collapse system instructions found anywhere in the chat, since other models support more than one.

Looks good so far! Looking forward to reviewing the work.

@shannon-ab
Contributor

Hi @afirstenberg, thanks for taking the time to look over our draft. We will update our planned implementation in the coming days after discussing with the rest of the group, and we will post the next draft under this issue too. Once again, thank you, and have a great rest of your day!

@chrisnathan20
Contributor

Hi @afirstenberg, I'm a groupmate of @shannon-ab, and we were wondering what the acceptance criteria are for implementing this feature.

We tried to reproduce the error message in langchain-ai/langchain-google#129, but we are not sure whether it is still reproducible given that the issue is already closed.

@chrisnathan20
Contributor

chrisnathan20 commented Oct 28, 2024

Hi @afirstenberg, I just wanted to follow up on my previous message regarding the acceptance criteria for implementing this feature. I understand you might be busy, but my groupmates and I are still trying to determine if the issue in langchain-ai/langchain-google#129 is reproducible, as we're unsure due to its closure.

We’d appreciate any guidance or clarification you can provide when you have the time. Thanks so much for your time and help!

@afirstenberg
Contributor Author

langchain-ai/langchain-google#129 references the Python version of LangChain. As far as I know, support for system messages in the @langchain/google-genai package is still unimplemented. (It is already implemented in the @langchain/google-common package and its sub-packages, but google-genai is separate.)

@chrisnathan20
Contributor

Hi @jacoblee93, our group (together with @shannon-ab) is looking to take on this issue now that we confirmed the status of this, could you assign this issue to me?

@chrisnathan20
Contributor

chrisnathan20 commented Oct 30, 2024

Hi @afirstenberg, I hope you've been having a great week. I have some follow-up questions regarding this issue,

  1. In the current behavior of genai's integration, system messages are prepended to the next user message, which I assume is a temporary stopgap that treats all models as lacking system instruction support. For models that don't support system instructions, would it be more consistent to align with the Google Common approach, where a system message is converted into a user message followed by an 'OK' model response, or should I leave the behavior as is?

  2. From what I understand, system messages are treated as just another role and concatenated to contents until they are finally removed:

```typescript
async formatContents(
  input: BaseMessage[],
  _parameters: GoogleAIModelParams
): Promise<GeminiContent[]> {
  const inputPromises: Promise<GeminiContent[]>[] = input.map((msg, i) =>
    this.api.baseMessageToContent(
      msg,
      input[i - 1],
      this.useSystemInstruction
    )
  );
  const inputs = await Promise.all(inputPromises);
  return inputs.reduce((acc, cur) => {
    // Filter out the system content
    if (cur.every((content) => content.role === "system")) {
      return acc;
    }
    // Combine adjacent function messages
    if (
      cur[0]?.role === "function" &&
      acc.length > 0 &&
      acc[acc.length - 1].role === "function"
    ) {
      acc[acc.length - 1].parts = [
        ...acc[acc.length - 1].parts,
        ...cur[0].parts,
      ];
    } else {
      acc.push(...cur);
    }
    return acc;
  }, [] as GeminiContent[]);
}
```

    I assume that before removal, they are saved as part of the systemInstructions field, as it influences the model’s behavior as expected. Could you point me to where this is implemented in the Google Common library?
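The google-common fallback described in point 1 above — rewriting a system message as a user turn followed by an 'OK' model turn, for models without native system-instruction support — could be sketched like this. The `Content` type here is an illustrative stand-in, not the actual package type:

```typescript
// Illustrative stand-in for the real content type.
interface Content {
  role: "user" | "model" | "system";
  parts: { text: string }[];
}

// Fallback for models without native system-instruction support:
// rewrite each system entry as a user turn plus an "OK" model turn.
function convertSystemToHumanContent(contents: Content[]): Content[] {
  const result: Content[] = [];
  for (const content of contents) {
    if (content.role === "system") {
      result.push({ role: "user", parts: content.parts });
      result.push({ role: "model", parts: [{ text: "OK" }] });
    } else {
      result.push(content);
    }
  }
  return result;
}
```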

@afirstenberg
Contributor Author

@chrisnathan20

  1. For models that don't support system instructions, would it be more consistent to align with the Google Common approach, where system messages are converted into a user message followed by an 'OK' model response or should I leave the behavior as is?

No, you should leave the behavior as is.
(The GenAI and Google Common packages are separate and take different approaches.)

  2. I assume that before removal, they are saved as part of the systemInstructions field, as it influences the model’s behavior as expected. Could you point me to where this is implemented in the Google Common library?

The formatData() method in the AbstractGoogleLLMConnection class is what creates all the different fields that are being sent in the request to Gemini. It calls this.formatSystemInstruction() to create the system instruction fields itself.

```typescript
async formatData(
  input: MessageType,
  parameters: GoogleAIModelRequestParams
): Promise<GeminiRequest> {
  const contents = await this.formatContents(input, parameters);
  const generationConfig = this.formatGenerationConfig(input, parameters);
  const tools = this.formatTools(input, parameters);
  const toolConfig = this.formatToolConfig(parameters);
  const safetySettings = this.formatSafetySettings(input, parameters);
  const systemInstruction = await this.formatSystemInstruction(
    input,
    parameters
  );
  // ...
```

The formatSystemInstruction() method in the ChatConnection class is the actual implementation that is typically used.

```typescript
async formatSystemInstruction(
  input: BaseMessage[],
  _parameters: GoogleAIModelParams
): Promise<GeminiContent> {
  if (!this.useSystemInstruction) {
    return {} as GeminiContent;
  }
  let ret = {} as GeminiContent;
  for (let index = 0; index < input.length; index += 1) {
    const message = input[index];
    if (message._getType() === "system") {
      // For system types, we only want it if it is the first message;
      // if it appears anywhere else, it should be an error.
      if (index === 0) {
        // eslint-disable-next-line prefer-destructuring
        ret = (
          await this.api.baseMessageToContent(message, undefined, true)
        )[0];
      } else {
        throw new Error(
          "System messages are only permitted as the first passed message."
        );
      }
    }
  }
  return ret;
}
```

@chrisnathan20
Contributor

Hi @afirstenberg, thanks for the detailed response!

Here's the updated implementation plan for this feature. We are targeting to have the initial PR ready by next week:
https://docs.google.com/document/d/1UolNPrfg9QfXvdH698bkpWaqt6BYBYFOZnaGbLE_Zbo/edit?usp=sharing

However, our group still has some questions about the following statement from the issue description, particularly the requirement for convertSystemMessageToHumanContent to be compatible with Python:

This isn't implemented for all models, so also add convertSystemMessageToHumanContent to be compatible with Python. Also see the warning message generated at https://github.com/langchain-ai/langchain-google/blob/62ba6ca5e39174d467e47c83119c8bb4275128f6/libs/vertexai/langchain_google_vertexai/chat_models.py#L175

Could you clarify how this would apply in the context of google-genai? Comparing the two chat_models.ts files, we noticed that the genai version does not include a ChatConnection class that extends AbstractGoogleLLMConnection. We were wondering whether this statement is in scope for this issue or whether it was carried over from the google-common equivalent of this issue.

Once again, we appreciate the time you have taken to guide us in fulfilling this feature. Have a great rest of your week!

@afirstenberg
Contributor Author

@chrisnathan20 - I'll review the implementation plan shortly.

The paragraph you quoted has two parts:

  • The part about convertSystemMessageToHumanContent should be followed, since it will match the Generative AI package on Python.
  • The part about the warning message is advisory only and specifically related to the warning message, not to the implementation in the ChatConnection class.

@chrisnathan20
Contributor

Hi @afirstenberg, I hope you are doing well.

I have updated the implementation plan to better match the Gen AI package on Python.
https://docs.google.com/document/d/1UolNPrfg9QfXvdH698bkpWaqt6BYBYFOZnaGbLE_Zbo/edit?usp=sharing

Key Updates:

  • The pseudocode in the implementation plan no longer looks like pseudocode, because we successfully implemented it.
  • Confirmed that running yarn test still passes all tests.
  • The PR will be made on Wednesday or Thursday, once all the new test cases are implemented and passing.

Given that the PR will be ready soon, you can review the PR directly if that works better for you. However, if you see anything in the implementation plan that needs to be revised, please do not hesitate to let us know, and we will make the change before opening the PR.

Thanks!
