Could not resolve "axios/lib/core/settle"

I’m trying to follow this tutorial: OpenAI GPT function calling with JavaScript and Cloudflare Workers · Cloudflare Workers docs

However, when I went to deploy I got this error:

Build failed with 4 errors:

node_modules/@vespaiach/axios-fetch-adapter/index.js:2:19: ERROR: Could not resolve "axios/lib/core/settle"
node_modules/@vespaiach/axios-fetch-adapter/index.js:3:21: ERROR: Could not resolve "axios/lib/helpers/buildURL"
node_modules/@vespaiach/axios-fetch-adapter/index.js:4:26: ERROR: Could not resolve "axios/lib/core/buildFullPath"
node_modules/@vespaiach/axios-fetch-adapter/index.js:5:62: ERROR: Could not resolve "axios/lib/utils"

package.json:

  "dependencies": {
    "@vespaiach/axios-fetch-adapter": "^0.3.1",
    "itty-router": "^4.0.23",
    "openai": "^4.12.1"
  }

How can this be resolved?


I ran into the same issue. Try this instead.
As before, set the API key secret:

npx wrangler secret put OPENAI_API_KEY

For local development, create a new file .dev.vars in your Worker project and add this line:
OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>"
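If you use the newer module syntax instead of the service-worker syntax shown below, the secret arrives on the env parameter rather than as a global. A minimal sketch, just to illustrate where the key ends up (the handler body is not part of the tutorial):

export default {
	async fetch(request, env) {
		// env.OPENAI_API_KEY comes from `wrangler secret put` in production
		// and from .dev.vars when running `wrangler dev` locally
		return new Response(env.OPENAI_API_KEY ? "key loaded" : "key missing");
	},
};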

New code -
(no need to import fetchAdapter; the openai v4 SDK uses the Fetch API natively, so the axios adapter that causes the build errors isn't needed)

import { OpenAI } from "openai";

addEventListener('fetch', event => {
	event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
	// OPENAI_API_KEY is the secret set above; in service-worker syntax it is a global
	const openai = new OpenAI({ apiKey: OPENAI_API_KEY });
	// Example request, just to confirm the SDK works without the fetch adapter
	const chatCompletion = await openai.chat.completions.create({
		model: 'gpt-3.5-turbo',
		messages: [{ role: 'user', content: 'Hello!' }],
	});
	return new Response(chatCompletion.choices[0].message.content);
}
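Since the adapter is no longer imported, the build errors should go away on their own, but you can also drop it from package.json to keep things tidy. Roughly, based on the versions in the question:

  "dependencies": {
    "itty-router": "^4.0.23",
    "openai": "^4.12.1"
  }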

Hope this helps!


Here’s a much improved version:

import { Hono } from "hono";
import { OpenAI } from "openai";

// Type the Worker bindings so c.env.OPENAI_API_KEY is known to TypeScript
type Bindings = { OPENAI_API_KEY: string };

export default function (app: Hono<{ Bindings: Bindings }>) {
    app.get('/', async (c) => {
        const openai = new OpenAI({
            apiKey: c.env.OPENAI_API_KEY,
            baseURL: "<url or gateway url>",
        });

        // For embeddings (e.g. to run a vector search on the question)
        const question = '<your query>';
        const embeddings = await openai.embeddings
            .create({ input: question, model: "text-embedding-ada-002" });

        // For chat
        const systemPrompt = '<your system prompt>';
        const matches = '<context from your db>'; // e.g. results found with the embeddings above
        const chatCompletion = await openai.chat.completions
            .create({
                model: 'gpt-4',
                messages: [
                    { role: 'system', content: matches },
                    { role: 'system', content: systemPrompt },
                    { role: 'user', content: question },
                ],
            })
            .catch((err) => {
                if (err instanceof OpenAI.APIError) {
                    console.log(`OpenAI returned an API Error: ${err}`);
                }
                throw err;
            });

        return c.json(chatCompletion);
    });
}
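To actually run this as a Worker, the module above still has to be attached to a Hono app in your entry point. A minimal sketch - the file name and the registerRoutes/"./routes" import path are assumptions for illustration, not from the post above:

import { Hono } from "hono";
import registerRoutes from "./routes"; // the `export default function (app)` module above

const app = new Hono<{ Bindings: { OPENAI_API_KEY: string } }>();
registerRoutes(app);

// A Hono app exposes a fetch handler, so it can be the Worker's default export
export default app;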