Let's see how we can update our previous generateText example to use streaming.
Open core/stream-text.ts. You should see the following code already:
core/stream-text.ts
import { openai } from "@ai-sdk/openai";import { generateText } from "ai";import dotenv from "dotenv";dotenv.config();async function main() { const result = await generateText({ model: openai("gpt-4o"), prompt: "Tell me a joke.", }); console.log(result.text);}main().catch(console.error);
First, swap the generateText function for streamText, updating both the import and the call.
core/stream-text.ts
import { openai } from "@ai-sdk/openai";import { streamText } from "ai";import dotenv from "dotenv";dotenv.config();async function main() { const result = awaitstreamText({ model: openai("gpt-4o"), prompt: "Tell me a joke.", }); console.log(result.text);}main().catch(console.error);
Next, replace your console.log with a for await...of loop that iterates over the result's textStream.
core/stream-text.ts
import { openai } from "@ai-sdk/openai";import { streamText } from "ai";import dotenv from "dotenv";dotenv.config();async function main() { const result = await streamText({ model: openai("gpt-4o"), prompt: "Tell me a joke.", }); for await (const textPart of result.textStream) { } }main().catch(console.error);
Finally, write each text part to the console as it arrives.
core/stream-text.ts
import { openai } from "@ai-sdk/openai";import { streamText } from "ai";import dotenv from "dotenv";dotenv.config();async function main() { const result = await streamText({ model: openai("gpt-4o"), prompt: "Tell me a joke.", }); for await (const textPart of result.textStream) { process.stdout.write(textPart); }}main().catch(console.error);
Note: you are using process.stdout.write rather than console.log because console.log appends a newline after every call, which would break each streamed chunk onto its own line.
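To illustrate the difference, here is a minimal sketch using hypothetical hard-coded chunks (standing in for what the stream might deliver) rather than a real stream:

// Hypothetical chunks, standing in for streamed text parts.
const chunks = ["Why did", " the scarecrow", " win an award?"];

// console.log appends a newline after every call, so each chunk
// lands on its own line:
//   Why did
//    the scarecrow
//    win an award?
for (const chunk of chunks) console.log(chunk);

// process.stdout.write emits each chunk exactly as given, so the
// chunks join into one continuous line:
//   Why did the scarecrow win an award?
for (const chunk of chunks) process.stdout.write(chunk);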
Run the script in the terminal, and see what happens.
npx tsx core/stream-text.ts
You should see a joke stream into the console, just like in ChatGPT!
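As a side note, here is a small sketch assuming a recent version of the AI SDK: the streamText result also exposes the full response as a promise, so after the loop finishes you can still await the complete text in one piece if you need it:

// Placed after the for await loop inside main().
// Hypothetical addition; the exact result shape may vary between AI SDK versions.
const fullText = await result.text;
console.log("\nFull response:", fullText);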