Describe the bug
Using this library, openai@4.0.0-beta.5 (and confirmed on a few other beta versions) improperly polyfills fetch with node-fetch, which has different semantics than the built-in fetch in Node v18 and above.

The type error looks like this:
TypeError: responseBodyStream.pipeThrough is not a function
at AIStream (file:///app/node_modules/ai/dist/index.mjs:135:29)
at Module.OpenAIStream (file:///app/node_modules/ai/dist/index.mjs:172:18)
at renderCompletion (file:///app/index.js:19:34)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///app/index.js:33:3
And it's due to node-fetch not behaving idiomatically: its Response body is a Node.js Readable stream rather than a WHATWG ReadableStream, so methods such as pipeThrough simply don't exist on it.
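The mismatch is easy to observe directly. Here is a minimal sketch (assuming node-fetch v3 in an ESM module; the URL is only a placeholder):

import nodeFetch from 'node-fetch';

// node-fetch returns a Node.js Readable body, which has .pipe() but no .pipeThrough():
const polyfilled = await nodeFetch('https://example.com');
console.log(typeof polyfilled.body.pipeThrough); // 'undefined'

// Node v18's built-in fetch returns a WHATWG ReadableStream body:
const native = await fetch('https://example.com');
console.log(typeof native.body.pipeThrough); // 'function'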
To Reproduce

Proof of concept repo

1. Clone https://github.com/AaronFriel/openai-vercel-ai-bug
2. Run node ./index.js; you should see:

$ node index.js
Catching 💥
TypeError: responseBodyStream.pipeThrough is not a function
at AIStream (file:///app/node_modules/ai/dist/index.mjs:135:29)
at Module.OpenAIStream (file:///app/node_modules/ai/dist/index.mjs:172:18)
at renderCompletion (file:///app/index.js:19:34)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///app/index.js:33:3
💥 caught!
Should the OpenAI library polyfill fetch, or should it use the global fetch on Node v18 and above?
The OpenAI library should use the global fetch on Node v18 and above, rather than polyfilling fetch.
On Next.js

1. Configure a next.js app

Create an app or copy the app dir here: https://github.com/vercel-labs/ai/blob/main/examples/next-openai

2. Create a server route handler for OpenAI

In a route handler, instantiate an OpenAI client and return a stream. Do not set runtime = 'edge'; we want to see how this behaves on Node.js on the server. (Modified from https://github.com/vercel-labs/ai/blob/main/examples/next-openai/app/api/chat/route.ts)
// ./app/api/chat/route.ts
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'

// Create an OpenAI API client (that's edge friendly!)
const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)

// IMPORTANT! DO NOT SET THE RUNTIME
// export const runtime = 'edge'

export async function POST(req: Request) {
  // Extract the `messages` from the body of the request
  const { messages } = await req.json()

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messages.map((message: any) => ({
      content: message.content,
      role: message.role
    }))
  })

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response)

  // Respond with the stream
  return new StreamingTextResponse(stream)
}
3. Observe a TypeError on using the route
TypeError: responseBodyStream.pipeThrough is not a function
at AIStream (file:///app/node_modules/ai/dist/index.mjs:135:29)
at Module.OpenAIStream (file:///app/node_modules/ai/dist/index.mjs:172:18)
at renderCompletion (file:///app/index.js:19:34)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///app/index.js:33:3
Code snippets
import * as ai from 'ai';
import { OpenAI } from 'openai';
import * as fs from 'fs/promises';

async function renderCompletion(client, content) {
  const res = await client.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: [
      {
        role: 'system',
        content:
          'You are an expert software engineer, with deep Node.js and web development experience. You are succinct, brief, and to the point.',
      },
      { role: 'user', content },
    ],
  });
  for await (const message of ai.OpenAIStream(res.response)) {
    process.stdout.write(message);
  }
  process.stdout.write('\n');
}

try {
  // This API client uses the node-fetch polyfill:
  const brokenClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  console.log('Catching 💥');
  await renderCompletion(brokenClient, 'PING');
} catch (error) {
  console.log(error);
  console.log(`💥 caught!`);

  // This API client uses the built-in fetch support in Node v18:
  const workingClient = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    fetch: globalThis.fetch,
  });

  const currentSourceCode = await fs.readFile('./index.js', 'utf-8');
  const question = `Should the OpenAI library polyfill fetch, or should it use the global fetch on Node v18 and above?`;
  const prompt = `\`\`\`javascript
${currentSourceCode}
\`\`\`

That code threw this error, as a result of the OpenAI API client named \`brokenClient\`. The \`workingClient\` however works.

\`\`\`
${error}
\`\`\`

${question}`;

  console.log(`${question}\n\n`);
  await renderCompletion(workingClient, prompt);
}
OS
Linux
Node version
v18.16.1
Library version
openai v4.0.0-beta.5
We intentionally use node-fetch on Node instead of the built-in fetch, as the built-in fetch is still marked Experimental and (I believe) does not yet support connection pooling.
As I'm sure you noticed, the critical line ai.OpenAIStream(res.response) is where the problem arises, and this library actually deprecates res.response; it'll be removed before this package leaves beta, partly due to these sorts of incompatibilities.
Vercel plans to add support for passing res directly here, so it'll look like ai.OpenAIStream(res) – and they'll read from our stream's async iterator directly, rather than from the raw Response, which should address your use case. I'll bump this issue with their team.
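In the meantime, you can sidestep the raw Response entirely and consume the SDK stream's async iterator yourself. A sketch, assuming the v4 beta's streaming chunk shape (choices[0].delta.content):

import { OpenAI } from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const res = await client.chat.completions.create({
  model: 'gpt-3.5-turbo',
  stream: true,
  messages: [{ role: 'user', content: 'PING' }],
});

// `res` is async-iterable; each chunk carries an incremental delta.
for await (const chunk of res) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}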
For posterity (if others come here with similar issues): as you noted in the issue, you can work around this by passing the global fetch function like so:
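import { OpenAI } from 'openai';

// Workaround (mirrors the issue's workingClient): hand the client
// Node v18's built-in fetch instead of the node-fetch polyfill.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  fetch: globalThis.fetch,
});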