
OpenAI Node v4 improperly polyfills fetch #202

Closed
AaronFriel opened this issue Jul 19, 2023 · 2 comments
Labels
bug Something isn't working

Comments

@AaronFriel

Describe the bug

Using this library, openai@4.0.0-beta.5 (and confirmed on a few other beta versions) improperly polyfills fetch with node-fetch, which has different semantics from the fetch built into Node v18 and above.

This is one of the causes of this issue:

The type error looks like this:

TypeError: responseBodyStream.pipeThrough is not a function
    at AIStream (file:///app/node_modules/ai/dist/index.mjs:135:29)
    at Module.OpenAIStream (file:///app/node_modules/ai/dist/index.mjs:172:18)
    at renderCompletion (file:///app/index.js:19:34)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///app/index.js:33:3

And it's due to node-fetch returning a Node.js Readable stream as the response body, rather than a web-standard ReadableStream, so the body lacks methods like pipeThrough.
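The difference is directly observable. Below is a minimal sketch using only Node built-ins (not the actual internals of either library) to illustrate the shape mismatch: a Node.js Readable, which is what node-fetch hands back as the response body, has no pipeThrough, while the web-standard ReadableStream returned by Node v18's built-in fetch does:

```javascript
import { Readable } from 'node:stream';

// node-fetch exposes the response body as a Node.js Readable stream:
const nodeFetchStyleBody = Readable.from(['data: ...\n']);

// Node v18's built-in fetch exposes it as a web-standard ReadableStream
// (global in Node v18+):
const builtinFetchStyleBody = new ReadableStream({
  start(controller) {
    controller.enqueue('data: ...\n');
    controller.close();
  },
});

console.log(typeof nodeFetchStyleBody.pipeThrough);    // undefined
console.log(typeof builtinFetchStyleBody.pipeThrough); // function
```

Any consumer that assumes a web-standard body, as the ai package's OpenAIStream does, will throw the TypeError above when handed the node-fetch variant.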

To Reproduce

Proof of concept repo

Clone https://github.com/AaronFriel/openai-vercel-ai-bug
Run node ./index.js; you should see:

$ node index.js
Catching 💥
TypeError: responseBodyStream.pipeThrough is not a function
    at AIStream (file:///app/node_modules/ai/dist/index.mjs:135:29)
    at Module.OpenAIStream (file:///app/node_modules/ai/dist/index.mjs:172:18)
    at renderCompletion (file:///app/index.js:19:34)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///app/index.js:33:3
💥 caught!
Should the OpenAI library polyfill fetch, or should it use the global fetch on Node v18 and above?


The OpenAI library should use the global fetch on Node v18 and above, rather than polyfilling fetch.
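One way that recommendation could look in practice (a hypothetical sketch, not the library's actual resolution logic): prefer globalThis.fetch whenever the runtime provides it, and fall back to a polyfill only on older runtimes.

```javascript
// Hypothetical helper (not part of the openai package): resolve which
// fetch implementation to use, preferring the runtime's global fetch.
function resolveFetch(polyfillFetch) {
  // Node v18+, browsers, and edge runtimes all provide a global fetch
  // with web-standard semantics (ReadableStream response bodies).
  if (typeof globalThis.fetch === 'function') {
    return globalThis.fetch;
  }
  // Older runtimes without a global fetch fall back to the supplied
  // polyfill (e.g. node-fetch).
  return polyfillFetch;
}

const fetchImpl = resolveFetch(() => {
  throw new Error('no fetch available');
});
```

On Node v18 and above, resolveFetch would return the built-in fetch, avoiding the stream mismatch entirely.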

On Next.js

1. Configure a Next.js app

Create a new app, or copy the example app at https://github.com/vercel-labs/ai/blob/main/examples/next-openai

2. Create a server route handler for OpenAI

In a route handler, instantiate an OpenAI client and return a stream. Do not set runtime = 'edge'; we want to see how this behaves on Node.js on the server.

(Modified from https://github.com/vercel-labs/ai/blob/main/examples/next-openai/app/api/chat/route.ts)

// ./app/api/chat/route.ts
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'

// Create an OpenAI API client (that's edge friendly!)
const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)

// IMPORTANT! DO NOT SET THE RUNTIME
// export const runtime = 'edge'

export async function POST(req: Request) {
  // Extract the `prompt` from the body of the request
  const { messages } = await req.json()

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messages.map((message: any) => ({
      content: message.content,
      role: message.role
    }))
  })

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response)
  // Respond with the stream
  return new StreamingTextResponse(stream)
}

3. Observe a TypeError on using the route

TypeError: responseBodyStream.pipeThrough is not a function
    at AIStream (file:///app/node_modules/ai/dist/index.mjs:135:29)
    at Module.OpenAIStream (file:///app/node_modules/ai/dist/index.mjs:172:18)
    at renderCompletion (file:///app/index.js:19:34)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///app/index.js:33:3

Code snippets

import * as ai from 'ai';
import { OpenAI } from 'openai';

import * as fs from 'fs/promises';

async function renderCompletion(client, content) {
  const res = await client.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: [
      {
        role: 'system',
        content:
          'You are an expert software engineer, with deep Node.js and web development experience. You are succinct, brief, and to the point.',
      },
      {
        role: 'user',
        content,
      },
    ],
  });

  for await (const message of ai.OpenAIStream(res.response)) {
    process.stdout.write(message);
  }
  process.stdout.write('\n');
}

try {
  // This API client uses the node-fetch polyfill:
  const brokenClient = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  });

  console.log('Catching 💥');
  await renderCompletion(brokenClient, 'PING');
} catch (error) {
  console.log(error);
  console.log(`💥 caught!`);

  // This API client uses the built-in fetch support in Node v18:
  const workingClient = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    fetch: globalThis.fetch,
  });

  const currentSourceCode = await fs.readFile('./index.js', 'utf-8');
  const question = `Should the OpenAI library polyfill fetch, or should it use the global fetch on Node v18 and above?`;
  const prompt = `
\`\`\`javascript
${currentSourceCode}
\`\`\`

That code threw this error, as a result of the OpenAI API client named \`brokenClient\`.

The \`workingClient\` however works.

\`\`\`
${error}
\`\`\`

${question}
`;
  console.log(`${question}\n\n`);
  await renderCompletion(workingClient, prompt);
}

OS

Linux

Node version

v18.16.1

Library version

openai v4.0.0-beta.5

@AaronFriel AaronFriel added the bug Something isn't working label Jul 19, 2023
@AaronFriel (Author)

For an opinionated perspective:

https://www.builder.io/blog/stop-polyfilling-fetch-in-your-npm-package

@rattrayalex (Collaborator)

Thanks for opening this!

We intentionally use node-fetch on Node instead of the built-in fetch, as the built-in is still marked Experimental and (I believe) does not yet support connection pooling.

As I'm sure you noticed, the critical line ai.OpenAIStream(res.response) is where the problem arises, and this library actually deprecates res.response; it'll be removed before this package leaves beta, partly due to these sorts of incompatibilities.

Vercel plans to add support for you to simply pass res directly here, so it'll look like this: ai.OpenAIStream(res) – and they'll read from our stream's async iterator directly, rather than from the raw Response, which should address your use case. I'll bump this issue with their team.

For posterity (if others come here with similar issues), as you noted in the issue, as a workaround you can pass the global fetch function like so:

const client = new OpenAI({
  fetch: globalThis.fetch,
});
