
Commit 7b35d2b

chore(internal): temporarily remove some code for migration (#853)
Parent: eb603e9

141 files changed (+46, −58,065 lines)


.gitignore (−4 lines)

```diff
@@ -6,8 +6,4 @@ dist
 /deno
 /*.tgz
 .idea/
-tmp
-.pack
-ecosystem-tests/deno/package.json
-ecosystem-tests/*/openai.tgz
 
```

README.md (+10, −226 lines)

````diff
@@ -2,11 +2,9 @@
 
 [![NPM version](https://img.shields.io/npm/v/openai.svg)](https://npmjs.org/package/openai)
 
-This library provides convenient access to the OpenAI REST API from TypeScript or JavaScript.
+This library provides convenient access to the OpenAI REST API from server-side TypeScript or JavaScript.
 
-It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).
-
-To learn how to use the OpenAI API, check out our [API Reference](https://platform.openai.com/docs/api-reference) and [Documentation](https://platform.openai.com/docs).
+The REST API documentation can be found [on platform.openai.com](https://platform.openai.com/docs). The full API of this library can be found in [api.md](api.md).
 
 ## Installation
 
@@ -26,7 +24,7 @@ import OpenAI from 'https://deno.land/x/openai@v4.47.1/mod.ts';
 
 ## Usage
 
-The full API of this library can be found in [api.md file](api.md) along with many [code examples](https://github.com/openai/openai-node/tree/master/examples). The code below shows how to get started using the chat completions API.
+The full API of this library can be found in [api.md](api.md).
 
 <!-- prettier-ignore -->
 ```js
@@ -55,18 +53,14 @@ import OpenAI from 'openai';
 
 const openai = new OpenAI();
 
-async function main() {
-  const stream = await openai.chat.completions.create({
-    model: 'gpt-4',
-    messages: [{ role: 'user', content: 'Say this is a test' }],
-    stream: true,
-  });
-  for await (const chunk of stream) {
-    process.stdout.write(chunk.choices[0]?.delta?.content || '');
-  }
+const stream = await openai.chat.completions.create({
+  messages: [{ role: 'user', content: 'Say this is a test' }],
+  model: 'gpt-3.5-turbo',
+  stream: true,
+});
+for await (const chatCompletionChunk of stream) {
+  console.log(chatCompletionChunk);
 }
-
-main();
 ```
 
 If you need to cancel a stream, you can `break` from the loop
@@ -97,196 +91,6 @@ main();
 
 Documentation for each method, request param, and response field are available in docstrings and will appear on hover in most modern editors.
 
-> [!IMPORTANT]
-> Previous versions of this SDK used a `Configuration` class. See the [v3 to v4 migration guide](https://github.com/openai/openai-node/discussions/217).
-
-### Polling Helpers
-
-When interacting with the API some actions such as starting a Run and adding files to vector stores are asynchronous and take time to complete. The SDK includes
-helper functions which will poll the status until it reaches a terminal state and then return the resulting object.
-If an API method results in an action which could benefit from polling there will be a corresponding version of the
-method ending in 'AndPoll'.
-
-For instance to create a Run and poll until it reaches a terminal state you can run:
-
-```ts
-const run = await openai.beta.threads.runs.createAndPoll(thread.id, {
-  assistant_id: assistantId,
-});
-```
-
-More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle)
-
-### Bulk Upload Helpers
-
-When creating and interacting with vector stores, you can use the polling helpers to monitor the status of operations.
-For convenience, we also provide a bulk upload helper to allow you to simultaneously upload several files at once.
-
-```ts
-const fileList = [
-  createReadStream('/home/data/example.pdf'),
-  ...
-];
-
-const batch = await openai.vectorStores.fileBatches.uploadAndPoll(vectorStore.id, fileList);
-```
-
-### Streaming Helpers
-
-The SDK also includes helpers to process streams and handle the incoming events.
-
-```ts
-const run = openai.beta.threads.runs
-  .stream(thread.id, {
-    assistant_id: assistant.id,
-  })
-  .on('textCreated', (text) => process.stdout.write('\nassistant > '))
-  .on('textDelta', (textDelta, snapshot) => process.stdout.write(textDelta.value))
-  .on('toolCallCreated', (toolCall) => process.stdout.write(`\nassistant > ${toolCall.type}\n\n`))
-  .on('toolCallDelta', (toolCallDelta, snapshot) => {
-    if (toolCallDelta.type === 'code_interpreter') {
-      if (toolCallDelta.code_interpreter.input) {
-        process.stdout.write(toolCallDelta.code_interpreter.input);
-      }
-      if (toolCallDelta.code_interpreter.outputs) {
-        process.stdout.write('\noutput >\n');
-        toolCallDelta.code_interpreter.outputs.forEach((output) => {
-          if (output.type === 'logs') {
-            process.stdout.write(`\n${output.logs}\n`);
-          }
-        });
-      }
-    }
-  });
-```
-
-More information on streaming helpers can be found in the dedicated documentation: [helpers.md](helpers.md)
-
-### Streaming responses
-
-This library provides several conveniences for streaming chat completions, for example:
-
-```ts
-import OpenAI from 'openai';
-
-const openai = new OpenAI();
-
-async function main() {
-  const stream = await openai.beta.chat.completions.stream({
-    model: 'gpt-4',
-    messages: [{ role: 'user', content: 'Say this is a test' }],
-    stream: true,
-  });
-
-  stream.on('content', (delta, snapshot) => {
-    process.stdout.write(delta);
-  });
-
-  // or, equivalently:
-  for await (const chunk of stream) {
-    process.stdout.write(chunk.choices[0]?.delta?.content || '');
-  }
-
-  const chatCompletion = await stream.finalChatCompletion();
-  console.log(chatCompletion); // {id: "…", choices: […], …}
-}
-
-main();
-```
-
-Streaming with `openai.beta.chat.completions.stream({…})` exposes
-[various helpers for your convenience](helpers.md#events) including event handlers and promises.
-
-Alternatively, you can use `openai.chat.completions.create({ stream: true, … })`
-which only returns an async iterable of the chunks in the stream and thus uses less memory
-(it does not build up a final chat completion object for you).
-
-If you need to cancel a stream, you can `break` from a `for await` loop or call `stream.abort()`.
-
-### Automated function calls
-
-We provide the `openai.beta.chat.completions.runTools({…})`
-convenience helper for using function tool calls with the `/chat/completions` endpoint
-which automatically call the JavaScript functions you provide
-and sends their results back to the `/chat/completions` endpoint,
-looping as long as the model requests tool calls.
-
-If you pass a `parse` function, it will automatically parse the `arguments` for you
-and returns any parsing errors to the model to attempt auto-recovery.
-Otherwise, the args will be passed to the function you provide as a string.
-
-If you pass `tool_choice: {function: {name: …}}` instead of `auto`,
-it returns immediately after calling that function (and only loops to auto-recover parsing errors).
-
-```ts
-import OpenAI from 'openai';
-
-const client = new OpenAI();
-
-async function main() {
-  const runner = client.beta.chat.completions
-    .runTools({
-      model: 'gpt-3.5-turbo',
-      messages: [{ role: 'user', content: 'How is the weather this week?' }],
-      tools: [
-        {
-          type: 'function',
-          function: {
-            function: getCurrentLocation,
-            parameters: { type: 'object', properties: {} },
-          },
-        },
-        {
-          type: 'function',
-          function: {
-            function: getWeather,
-            parse: JSON.parse, // or use a validation library like zod for typesafe parsing.
-            parameters: {
-              type: 'object',
-              properties: {
-                location: { type: 'string' },
-              },
-            },
-          },
-        },
-      ],
-    })
-    .on('message', (message) => console.log(message));
-
-  const finalContent = await runner.finalContent();
-  console.log();
-  console.log('Final content:', finalContent);
-}
-
-async function getCurrentLocation() {
-  return 'Boston'; // Simulate lookup
-}
-
-async function getWeather(args: { location: string }) {
-  const { location } = args;
-  // … do lookup …
-  return { temperature, precipitation };
-}
-
-main();
-
-// {role: "user", content: "How's the weather this week?"}
-// {role: "assistant", tool_calls: [{type: "function", function: {name: "getCurrentLocation", arguments: "{}"}, id: "123"}
-// {role: "tool", name: "getCurrentLocation", content: "Boston", tool_call_id: "123"}
-// {role: "assistant", tool_calls: [{type: "function", function: {name: "getWeather", arguments: '{"location": "Boston"}'}, id: "1234"}]}
-// {role: "tool", name: "getWeather", content: '{"temperature": "50degF", "preciptation": "high"}', tool_call_id: "1234"}
-// {role: "assistant", content: "It's looking cold and rainy - you might want to wear a jacket!"}
-//
-// Final content: "It's looking cold and rainy - you might want to wear a jacket!"
-```
-
-Like with `.stream()`, we provide a variety of [helpers and events](helpers.md#events).
-
-Note that `runFunctions` was previously available as well, but has been deprecated in favor of `runTools`.
-
-Read more about various examples such as with integrating with [zod](helpers.md#integrate-with-zod),
-[next.js](helpers.md#integrate-wtih-next-js), and [proxying a stream to the browser](helpers.md#proxy-streaming-to-a-browser).
-
 ## File uploads
 
 Request parameters that correspond to file uploads can be passed in many different forms:
@@ -361,26 +165,6 @@ Error codes are as followed:
 | >=500 | `InternalServerError` |
 | N/A | `APIConnectionError` |
 
-## Microsoft Azure OpenAI
-
-To use this library with [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview), use the `AzureOpenAI`
-class instead of the `OpenAI` class.
-
-> [!IMPORTANT]
-> The Azure API shape differs from the core API shape which means that the static types for responses / params
-> won't always be correct.
-
-```ts
-const openai = new AzureOpenAI();
-
-const result = await openai.chat.completions.create({
-  model: 'gpt-4-1106-preview',
-  messages: [{ role: 'user', content: 'Say hello!' }],
-});
-
-console.log(result.choices[0]!.message?.content);
-```
-
 ### Retries
 
 Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
````
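The removed "Polling Helpers" section above described `…AndPoll` methods that repeatedly check a resource's status until it reaches a terminal state. As a rough, hypothetical sketch of what such a helper does under the hood (the `RunStatus` names, `fetchStatus` callback, and timing defaults here are illustrative assumptions, not the SDK's actual implementation):

```typescript
// Generic polling sketch: fetch a status until it is terminal, with a fixed
// interval and an attempt cap standing in for a real helper's maxWait option.
type RunStatus = 'queued' | 'in_progress' | 'completed' | 'failed';

const TERMINAL: RunStatus[] = ['completed', 'failed'];

async function pollUntilTerminal(
  fetchStatus: () => Promise<RunStatus>,
  intervalMs = 10,
  maxAttempts = 50,
): Promise<RunStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (TERMINAL.includes(status)) return status;
    // Wait before re-checking, so we don't hammer the backend.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('polling timed out');
}

// Simulated backend that reports in_progress twice, then completed.
let checks = 0;
const fakeFetch = async (): Promise<RunStatus> =>
  ++checks < 3 ? 'in_progress' : 'completed';

pollUntilTerminal(fakeFetch).then((status) => console.log(status)); // logs 'completed'
```

A real helper would presumably also surface the full resource object and honor per-request polling options, but the retry loop above is the core of the pattern.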

api.md (−26 lines)

```diff
@@ -81,7 +81,6 @@ Methods:
 - <code title="delete /files/{file_id}">client.files.<a href="./src/resources/files.ts">del</a>(fileId) -> FileDeleted</code>
 - <code title="get /files/{file_id}/content">client.files.<a href="./src/resources/files.ts">content</a>(fileId) -> Response</code>
 - <code title="get /files/{file_id}/content">client.files.<a href="./src/resources/files.ts">retrieveContent</a>(fileId) -> string</code>
-- <code>client.files.<a href="./src/resources/files.ts">waitForProcessing</a>(id, { pollInterval = 5000, maxWait = 30 * 60 * 1000 }) -> Promise&lt;FileObject&gt;</code>
 
 # Images
 
@@ -208,10 +207,6 @@ Methods:
 - <code title="get /vector_stores/{vector_store_id}/files/{file_id}">client.beta.vectorStores.files.<a href="./src/resources/beta/vector-stores/files.ts">retrieve</a>(vectorStoreId, fileId) -> VectorStoreFile</code>
 - <code title="get /vector_stores/{vector_store_id}/files">client.beta.vectorStores.files.<a href="./src/resources/beta/vector-stores/files.ts">list</a>(vectorStoreId, { ...params }) -> VectorStoreFilesPage</code>
 - <code title="delete /vector_stores/{vector_store_id}/files/{file_id}">client.beta.vectorStores.files.<a href="./src/resources/beta/vector-stores/files.ts">del</a>(vectorStoreId, fileId) -> VectorStoreFileDeleted</code>
-- <code>client.beta.vectorStores.files.<a href="./src/resources/beta/vector-stores/files.ts">createAndPoll</a>(vectorStoreId, body, options?) -> Promise&lt;VectorStoreFile&gt;</code>
-- <code>client.beta.vectorStores.files.<a href="./src/resources/beta/vector-stores/files.ts">poll</a>(vectorStoreId, fileId, options?) -> Promise&lt;VectorStoreFile&gt;</code>
-- <code>client.beta.vectorStores.files.<a href="./src/resources/beta/vector-stores/files.ts">upload</a>(vectorStoreId, file, options?) -> Promise&lt;VectorStoreFile&gt;</code>
-- <code>client.beta.vectorStores.files.<a href="./src/resources/beta/vector-stores/files.ts">uploadAndPoll</a>(vectorStoreId, file, options?) -> Promise&lt;VectorStoreFile&gt;</code>
 
 ### FileBatches
 
@@ -225,19 +220,6 @@ Methods:
 - <code title="get /vector_stores/{vector_store_id}/file_batches/{batch_id}">client.beta.vectorStores.fileBatches.<a href="./src/resources/beta/vector-stores/file-batches.ts">retrieve</a>(vectorStoreId, batchId) -> VectorStoreFileBatch</code>
 - <code title="post /vector_stores/{vector_store_id}/file_batches/{batch_id}/cancel">client.beta.vectorStores.fileBatches.<a href="./src/resources/beta/vector-stores/file-batches.ts">cancel</a>(vectorStoreId, batchId) -> VectorStoreFileBatch</code>
 - <code title="get /vector_stores/{vector_store_id}/file_batches/{batch_id}/files">client.beta.vectorStores.fileBatches.<a href="./src/resources/beta/vector-stores/file-batches.ts">listFiles</a>(vectorStoreId, batchId, { ...params }) -> VectorStoreFilesPage</code>
-- <code>client.beta.vectorStores.fileBatches.<a href="./src/resources/beta/vector-stores/file-batches.ts">createAndPoll</a>(vectorStoreId, body, options?) -> Promise&lt;VectorStoreFileBatch&gt;</code>
-- <code>client.beta.vectorStores.fileBatches.<a href="./src/resources/beta/vector-stores/file-batches.ts">poll</a>(vectorStoreId, batchId, options?) -> Promise&lt;VectorStoreFileBatch&gt;</code>
-- <code>client.beta.vectorStores.fileBatches.<a href="./src/resources/beta/vector-stores/file-batches.ts">uploadAndPoll</a>(vectorStoreId, { files, fileIds = [] }, options?) -> Promise&lt;VectorStoreFileBatch&gt;</code>
-
-## Chat
-
-### Completions
-
-Methods:
-
-- <code>client.beta.chat.completions.<a href="./src/resources/beta/chat/completions.ts">runFunctions</a>(body, options?) -> ChatCompletionRunner | ChatCompletionStreamingRunner</code>
-- <code>client.beta.chat.completions.<a href="./src/resources/beta/chat/completions.ts">runTools</a>(body, options?) -> ChatCompletionRunner | ChatCompletionStreamingRunner</code>
-- <code>client.beta.chat.completions.<a href="./src/resources/beta/chat/completions.ts">stream</a>(body, options?) -> ChatCompletionStream</code>
 
 ## Assistants
 
@@ -282,8 +264,6 @@ Methods:
 - <code title="post /threads/{thread_id}">client.beta.threads.<a href="./src/resources/beta/threads/threads.ts">update</a>(threadId, { ...params }) -> Thread</code>
 - <code title="delete /threads/{thread_id}">client.beta.threads.<a href="./src/resources/beta/threads/threads.ts">del</a>(threadId) -> ThreadDeleted</code>
 - <code title="post /threads/runs">client.beta.threads.<a href="./src/resources/beta/threads/threads.ts">createAndRun</a>({ ...params }) -> Run</code>
-- <code>client.beta.threads.<a href="./src/resources/beta/threads/threads.ts">createAndRunPoll</a>(body, options?) -> Promise&lt;Threads.Run&gt;</code>
-- <code>client.beta.threads.<a href="./src/resources/beta/threads/threads.ts">createAndRunStream</a>(body, options?) -> AssistantStream</code>
 
 ### Runs
 
@@ -301,12 +281,6 @@ Methods:
 - <code title="get /threads/{thread_id}/runs">client.beta.threads.runs.<a href="./src/resources/beta/threads/runs/runs.ts">list</a>(threadId, { ...params }) -> RunsPage</code>
 - <code title="post /threads/{thread_id}/runs/{run_id}/cancel">client.beta.threads.runs.<a href="./src/resources/beta/threads/runs/runs.ts">cancel</a>(threadId, runId) -> Run</code>
 - <code title="post /threads/{thread_id}/runs/{run_id}/submit_tool_outputs">client.beta.threads.runs.<a href="./src/resources/beta/threads/runs/runs.ts">submitToolOutputs</a>(threadId, runId, { ...params }) -> Run</code>
-- <code>client.beta.threads.runs.<a href="./src/resources/beta/threads/runs/runs.ts">createAndPoll</a>(threadId, body, options?) -> Promise&lt;Run&gt;</code>
-- <code>client.beta.threads.runs.<a href="./src/resources/beta/threads/runs/runs.ts">createAndStream</a>(threadId, body, options?) -> AssistantStream</code>
-- <code>client.beta.threads.runs.<a href="./src/resources/beta/threads/runs/runs.ts">poll</a>(threadId, runId, options?) -> Promise&lt;Run&gt;</code>
-- <code>client.beta.threads.runs.<a href="./src/resources/beta/threads/runs/runs.ts">stream</a>(threadId, body, options?) -> AssistantStream</code>
-- <code>client.beta.threads.runs.<a href="./src/resources/beta/threads/runs/runs.ts">submitToolOutputsAndPoll</a>(threadId, runId, body, options?) -> Promise&lt;Run&gt;</code>
-- <code>client.beta.threads.runs.<a href="./src/resources/beta/threads/runs/runs.ts">submitToolOutputsStream</a>(threadId, runId, body, options?) -> AssistantStream</code>
 
 #### Steps
 
```
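The method listings removed from api.md include streaming helpers (`stream`, `createAndRunStream`, `submitToolOutputsStream`) that return a stream supporting `.on(…)` event subscriptions on top of an async iterable, as shown in the README's removed "Streaming Helpers" section. A minimal sketch of that wrapper pattern, with invented names (`TextStream`, `fakeDeltas`) rather than the SDK's actual types:

```typescript
// Wrap an async iterable of text deltas in a tiny emitter so callers can both
// subscribe to events and retrieve the accumulated final text.
type Listener = (delta: string) => void;

class TextStream {
  private listeners: Listener[] = [];

  constructor(private source: AsyncIterable<string>) {}

  // Register a delta listener; returns `this` so calls can be chained,
  // mirroring the `.stream(...).on(...)` style of the removed helpers.
  on(event: 'textDelta', listener: Listener): this {
    this.listeners.push(listener);
    return this;
  }

  // Drain the source, notifying listeners of each delta, then resolve to the
  // concatenated text (akin to a `final...()`-style accessor).
  async done(): Promise<string> {
    let full = '';
    for await (const delta of this.source) {
      full += delta;
      this.listeners.forEach((l) => l(delta));
    }
    return full;
  }
}

// Simulated chunk source standing in for a network stream.
async function* fakeDeltas() {
  yield 'Hello';
  yield ', ';
  yield 'world';
}

new TextStream(fakeDeltas())
  .on('textDelta', (delta) => process.stdout.write(delta))
  .done()
  .then((full) => console.log('\nfinal:', full)); // final: Hello, world
```

The real `AssistantStream` dispatches many typed events (tool calls, message deltas, and so on); this sketch only shows the underlying iterate-and-notify shape.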
