
Commit

Prepare implementation IBM WatsonxAI to langchain community package. (#10)

Prepare implementation IBM WatsonxAI to langchain community package.

Provide solutions for:

- LLMs
- embeddings
- tests
FilipZmijewski authored Oct 1, 2024
1 parent 0f370ba commit 2a964ef
Showing 19 changed files with 2,188 additions and 8 deletions.
166 changes: 166 additions & 0 deletions docs/core_docs/docs/integrations/llms/ibm.mdx
@@ -0,0 +1,166 @@
# @langchain/community/llm/ibm

This is an integration for the LangChain.js community with Watsonx AI by IBM, through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/community @langchain/core
```

## LLMs

This package contains the `WatsonxLLM` class, which is the recommended way to interface with the Watsonx series of models.

To use, install the requirements and configure your environment depending on what type of authentication you will be using.

## IAM authentication

```bash
export WATSONX_AI_AUTH_TYPE=iam
export WATSONX_AI_APIKEY=<YOUR-APIKEY>
```

## Bearer token authentication

```bash
export WATSONX_AI_AUTH_TYPE=bearertoken
export WATSONX_AI_BEARER_TOKEN=<YOUR-BEARER-TOKEN>
```

## CP4D authentication

```bash
export WATSONX_AI_AUTH_TYPE=cp4d
export WATSONX_AI_USERNAME=<YOUR_USERNAME>
export WATSONX_AI_PASSWORD=<YOUR_PASSWORD>
export WATSONX_AI_URL=<URL>
```

Once these are set in your environment variables and the object is initialized, authentication will proceed automatically.

Authentication can also be accomplished by passing these values as parameters to a new instance.

## IAM authentication

```typescript
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const props = {
  version: "YYYY-MM-DD",
  serviceUrl: "<SERVICE_URL>",
  projectId: "<PROJECT_ID>",
  watsonxAIAuthType: "iam",
  watsonxAIApikey: "<YOUR-APIKEY>",
};
const instance = new WatsonxLLM(props);
```

## Bearer token authentication

```typescript
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const props = {
  version: "YYYY-MM-DD",
  serviceUrl: "<SERVICE_URL>",
  projectId: "<PROJECT_ID>",
  watsonxAIAuthType: "bearertoken",
  watsonxAIBearerToken: "<YOUR-BEARERTOKEN>",
};
const instance = new WatsonxLLM(props);
```

## CP4D authentication

```typescript
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const props = {
  version: "YYYY-MM-DD",
  serviceUrl: "<SERVICE_URL>",
  projectId: "<PROJECT_ID>",
  watsonxAIAuthType: "cp4d",
  watsonxAIUsername: "<YOUR-USERNAME>",
  watsonxAIPassword: "<YOUR-PASSWORD>",
  watsonxAIUrl: "<URL>",
};
const instance = new WatsonxLLM(props);
```

## Loading the model

You might need to adjust model parameters for different models or tasks. For more details on the parameters, refer to IBM's documentation.

```typescript
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const props = {
  decoding_method: "sample",
  max_new_tokens: 100,
  min_new_tokens: 1,
  temperature: 0.5,
  top_k: 50,
  top_p: 1,
};
const instance = new WatsonxLLM({
  version: "YYYY-MM-DD",
  serviceUrl: process.env.API_URL,
  projectId: "<PROJECT_ID>",
  spaceId: "<SPACE_ID>",
  idOrName: "<DEPLOYMENT_ID>",
  modelId: "<MODEL_ID>",
  ...props,
});
```

Note:

- You must provide a `spaceId`, `projectId`, or `idOrName` (deployment id) in order to proceed.
- Use the correct `serviceUrl` for the region of your provisioned service instance.
- You need to specify the model you want to use for inference through `modelId`.
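
The first requirement above can be checked up front before constructing the client. A minimal sketch; the helper name `hasWatsonxScope` is ours and is not part of the SDK:

```typescript
// Hypothetical helper: verify that at least one scope identifier
// (projectId, spaceId, or idOrName) has been provided.
function hasWatsonxScope(props: {
  projectId?: string;
  spaceId?: string;
  idOrName?: string;
}): boolean {
  return Boolean(props.projectId ?? props.spaceId ?? props.idOrName);
}
```

For example, `hasWatsonxScope({ projectId: "<PROJECT_ID>" })` is true, while passing an empty object is false.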

## Overriding props

Props passed at initialization persist for the whole life cycle of the object; however, you may override them for a single method call by passing a second argument, as below:

```typescript
const result = await instance.invoke("Print hello world.", {
  modelId: "<NEW_MODEL_ID>",
  parameters: {
    max_new_tokens: 20,
  },
});
console.log(result);
```

## Text generation

```typescript
const result = await instance.invoke("Print hello world.");
console.log(result);

const results = await instance.generate([
  "Print hello world.",
  "Print bye, bye world!",
]);
console.log(results);
```
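
Unlike `invoke`, `generate` returns an `LLMResult` rather than a plain string: each input prompt maps to a list of generations. A sketch of reading the completion texts back, assuming the standard LangChain.js result shape (the `extractTexts` helper is ours):

```typescript
// Minimal shape of one generation in an LLMResult.
interface GenerationLike {
  text: string;
}

// Hypothetical helper: take the first completion text for each prompt.
function extractTexts(result: { generations: GenerationLike[][] }): string[] {
  return result.generations.map((perPrompt) => perPrompt[0].text);
}
```

For example, `extractTexts(await instance.generate([...]))` yields one string per input prompt.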

## Streaming

```typescript
const result = await instance.stream("Print hello world.");
for await (const chunk of result) {
  console.log(chunk);
}
```

## Tokenization

This package has its own custom `getNumTokens` implementation, which returns the exact number of tokens that would be used.

```typescript
const tokens = await instance.getNumTokens("Print hello world.");
console.log(tokens);
```
130 changes: 130 additions & 0 deletions docs/core_docs/docs/integrations/text_embedding/ibm.mdx
@@ -0,0 +1,130 @@
# @langchain/community/embeddings/ibm

This is an integration for the LangChain.js community with Watsonx AI by IBM, through their SDK.

## Installation

```bash npm2yarn
npm install @langchain/community @langchain/core
```

## Embeddings

This package contains the `WatsonxEmbeddings` class, which is the recommended way to interface with the Watsonx series of models.

To use, install the requirements and configure your environment depending on what type of authentication you will be using.

## IAM authentication

```bash
export WATSONX_AI_AUTH_TYPE=iam
export WATSONX_AI_APIKEY=<YOUR-APIKEY>
```

## Bearer token authentication

```bash
export WATSONX_AI_AUTH_TYPE=bearertoken
export WATSONX_AI_BEARER_TOKEN=<YOUR-BEARER-TOKEN>
```

## CP4D authentication

```bash
export WATSONX_AI_AUTH_TYPE=cp4d
export WATSONX_AI_USERNAME=<YOUR_USERNAME>
export WATSONX_AI_PASSWORD=<YOUR_PASSWORD>
export WATSONX_AI_URL=<URL>
```

Once these are set in your environment variables and the object is initialized, authentication will proceed automatically.

Authentication can also be accomplished by passing these values as parameters to a new instance.

## IAM authentication

```typescript
import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";

const props = {
  version: "YYYY-MM-DD",
  serviceUrl: "<SERVICE_URL>",
  projectId: "<PROJECT_ID>",
  watsonxAIAuthType: "iam",
  watsonxAIApikey: "<YOUR-APIKEY>",
};
const instance = new WatsonxEmbeddings(props);
```

## Bearer token authentication

```typescript
import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";

const props = {
  version: "YYYY-MM-DD",
  serviceUrl: "<SERVICE_URL>",
  projectId: "<PROJECT_ID>",
  watsonxAIAuthType: "bearertoken",
  watsonxAIBearerToken: "<YOUR-BEARERTOKEN>",
};
const instance = new WatsonxEmbeddings(props);
```

## CP4D authentication

```typescript
import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";

const props = {
  version: "YYYY-MM-DD",
  serviceUrl: "<SERVICE_URL>",
  projectId: "<PROJECT_ID>",
  watsonxAIAuthType: "cp4d",
  watsonxAIUsername: "<YOUR-USERNAME>",
  watsonxAIPassword: "<YOUR-PASSWORD>",
  watsonxAIUrl: "<URL>",
};
const instance = new WatsonxEmbeddings(props);
```

## Loading the model

```typescript
import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";

const instance = new WatsonxEmbeddings({
  version: "YYYY-MM-DD",
  serviceUrl: process.env.API_URL,
  projectId: "<PROJECT_ID>",
  spaceId: "<SPACE_ID>",
  idOrName: "<DEPLOYMENT_ID>",
  modelId: "<MODEL_ID>",
});
```

Note:

- You must provide a `spaceId`, `projectId`, or `idOrName` (deployment id) in order to proceed.
- Use the correct `serviceUrl` for the region of your provisioned service instance.
- You need to specify the model you want to use for inference through `modelId`.

## Embeddings

This package supports embeddings models; you can proceed with the following code snippet.

```typescript
import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";

const instance = new WatsonxEmbeddings({
  version: "YYYY-MM-DD",
  serviceUrl: process.env.API_URL,
  projectId: "<PROJECT_ID>",
  spaceId: "<SPACE_ID>",
  idOrName: "<DEPLOYMENT_ID>",
  modelId: "<MODEL_ID>",
});

const result = await instance.embedQuery("Hello world!");
console.log(result);
```
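
Returned vectors are typically compared with cosine similarity. A plain helper with no SDK dependency (the function name is ours, not part of this package):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// e.g. compare the embeddings of two queries:
// const [vecA, vecB] = await Promise.all([
//   instance.embedQuery("Hello world!"),
//   instance.embedQuery("Goodbye world!"),
// ]);
// console.log(cosineSimilarity(vecA, vecB));
```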
10 changes: 10 additions & 0 deletions examples/src/embeddings/ibm.ts
@@ -0,0 +1,10 @@
import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";

const instance = new WatsonxEmbeddings({
  version: "YYYY-MM-DD",
  serviceUrl: process.env.WATSONX_AI_SERVICE_URL as string,
  projectId: process.env.WATSONX_AI_PROJECT_ID,
});

const result = await instance.embedQuery("Hello world!");
console.log(result);
30 changes: 30 additions & 0 deletions examples/src/llms/ibm.ts
@@ -0,0 +1,30 @@
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const props = {
  decoding_method: "sample",
  max_new_tokens: 100,
  min_new_tokens: 1,
  temperature: 0.5,
  top_k: 50,
  top_p: 1,
};
const instance = new WatsonxLLM({
  version: "2024-05-31",
  serviceUrl: process.env.WATSONX_AI_SERVICE_URL as string,
  projectId: process.env.WATSONX_AI_PROJECT_ID,
  ...props,
});

const result = await instance.invoke("Print hello world.");
console.log(result);

const results = await instance.generate([
  "Print hello world.",
  "Print bye, bye world!",
]);
console.log(results);

const stream = await instance.stream("Print hello world.");
for await (const chunk of stream) {
  console.log(chunk);
}
8 changes: 8 additions & 0 deletions libs/langchain-community/.gitignore
@@ -162,6 +162,10 @@ embeddings/hf_transformers.cjs
embeddings/hf_transformers.js
embeddings/hf_transformers.d.ts
embeddings/hf_transformers.d.cts
embeddings/ibm.cjs
embeddings/ibm.js
embeddings/ibm.d.ts
embeddings/ibm.d.cts
embeddings/jina.cjs
embeddings/jina.js
embeddings/jina.d.ts
@@ -254,6 +258,10 @@ llms/hf.cjs
llms/hf.js
llms/hf.d.ts
llms/hf.d.cts
llms/ibm.cjs
llms/ibm.js
llms/ibm.d.ts
llms/ibm.d.cts
llms/llama_cpp.cjs
llms/llama_cpp.js
llms/llama_cpp.d.ts
