- You’re migrating from OpenAI and want minimal code changes
- You’re using frameworks built for OpenAI (LangChain, LlamaIndex, Vercel AI SDK)
- You want a simple chat interface for browser automation
- You need streaming responses
- You prefer the familiar OpenAI SDK patterns
Choose the native REST/WebSocket API instead if you need:

- Multi-task sessions with persistent browser state
- Manual browser control and takeover
- Video streaming
- Fine-grained session management
The OpenAI-compatible endpoint creates a new session for each request and auto-terminates after completion. For multi-task workflows, use the native REST API.
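Because each request gets a fresh browser session, dependent steps should be combined into a single prompt rather than split across requests. A minimal sketch (the `asSingleTask` helper is hypothetical, not part of any SDK):

```typescript
// Hypothetical helper: folds dependent steps into one numbered prompt,
// since browser state does not persist between OpenAI-compatible requests.
function asSingleTask(steps: string[]): string {
  return steps.map((step, i) => `${i + 1}. ${step}`).join('\n');
}

const prompt = asSingleTask([
  'Go to example.com',
  'Click the first article link',
  'Summarize the page'
]);
// Pass `prompt` as the user message content of a single
// chat.completions.create call instead of issuing three calls.
```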
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://connect.webrun.ai/v1',
  apiKey: 'YOUR_API_KEY' // Your WebRun API key
});

const completion = await client.chat.completions.create({
  model: 'enigma-browser-1',
  messages: [
    { role: 'user', content: 'Go to google.com and search for Anthropic' }
  ]
});

console.log(completion.choices[0].message.content);
```
Response:
```json
{
  "id": "chatcmpl-a1b2c3d4",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "enigma-browser-1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Successfully searched for Anthropic on Google. The first result is the official Anthropic website at anthropic.com, which describes Claude as a next-generation AI assistant..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12450,
    "completion_tokens": 3200,
    "total_tokens": 15650
  }
}
```
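The `usage` block follows the standard OpenAI shape, so existing token-tracking code carries over unchanged. A small hypothetical helper that formats the usage numbers from the response above:

```typescript
// Shape of the standard OpenAI usage block returned by the endpoint.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Hypothetical helper: formats usage for logging or cost tracking.
function summarizeUsage(usage: Usage): string {
  return `${usage.total_tokens} tokens (${usage.prompt_tokens} prompt + ${usage.completion_tokens} completion)`;
}

console.log(summarizeUsage({ prompt_tokens: 12450, completion_tokens: 3200, total_tokens: 15650 }));
// → 15650 tokens (12450 prompt + 3200 completion)
```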
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  modelName: "enigma-browser-1",
  openAIApiKey: "YOUR_API_KEY",
  configuration: { baseURL: "https://connect.webrun.ai/v1" }
});

const response = await model.invoke([
  new HumanMessage("Go to amazon.com and search for wireless keyboards. List the top 3 results with prices.")
]);

console.log(response.content);
```
With Streaming:
```typescript
const stream = await model.stream([
  new HumanMessage("Search Google for Anthropic and summarize the first result")
]);

for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
```
With Chains:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  modelName: "enigma-browser-1",
  openAIApiKey: "YOUR_API_KEY",
  configuration: { baseURL: "https://connect.webrun.ai/v1" }
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a web research assistant. Extract structured data from websites."],
  ["human", "{task}"]
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke({
  task: "Go to example.com and extract all product names and prices"
});

console.log(result);
```
```typescript
import { OpenAI } from "llamaindex";

const llm = new OpenAI({
  model: "enigma-browser-1",
  apiKey: "YOUR_API_KEY",
  // Client options such as baseURL belong in additionalSessionOptions,
  // which is passed to the underlying OpenAI client.
  additionalSessionOptions: { baseURL: "https://connect.webrun.ai/v1" }
});

const response = await llm.chat({
  messages: [
    { role: "user", content: "Navigate to github.com/anthropics and list the top 5 repositories" }
  ]
});

console.log(response.message.content);
```
With Agent:
```typescript
import { OpenAI, OpenAIAgent } from "llamaindex";

const llm = new OpenAI({
  model: "enigma-browser-1",
  apiKey: "YOUR_API_KEY",
  additionalSessionOptions: { baseURL: "https://connect.webrun.ai/v1" }
});

const agent = new OpenAIAgent({
  llm,
  systemPrompt: "You are a browser automation assistant. Help users extract data from websites."
});

const response = await agent.chat({
  message: "Go to Product Hunt and find the top 3 products today"
});

console.log(response.response);
```
```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

// baseURL and apiKey are provider-level options, so create a custom
// provider instance rather than passing them to the model factory.
const openai = createOpenAI({
  baseURL: 'https://connect.webrun.ai/v1',
  apiKey: 'YOUR_API_KEY'
});

const { text } = await generateText({
  model: openai('enigma-browser-1'),
  prompt: 'Go to news.ycombinator.com and summarize the top 5 stories'
});

console.log(text);
```
With Streaming:
```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

const openai = createOpenAI({
  baseURL: 'https://connect.webrun.ai/v1',
  apiKey: 'YOUR_API_KEY'
});

const { textStream } = await streamText({
  model: openai('enigma-browser-1'),
  prompt: 'Search Google for "AI browser automation" and summarize the results'
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```
In a Next.js Route Handler:
```typescript
// app/api/research/route.ts
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { task } = await req.json();

  const openai = createOpenAI({
    baseURL: 'https://connect.webrun.ai/v1',
    apiKey: process.env.ENIGMA_API_KEY
  });

  const result = await streamText({
    model: openai('enigma-browser-1'),
    prompt: task
  });

  return result.toAIStreamResponse();
}

// Usage in a component:
// const response = await fetch('/api/research', {
//   method: 'POST',
//   body: JSON.stringify({ task: 'Go to example.com and extract all links' })
// });
```
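On the client, the route handler returns a streamed text body that can be read incrementally. A sketch of a reader using the Web Streams API (the `readTextStream` helper is hypothetical; in a React component you would typically reach for the AI SDK's React hooks instead):

```typescript
// Hypothetical helper: accumulates a streamed text response body,
// e.g. (await fetch('/api/research', ...)).body.
async function readTextStream(body: ReadableStream<Uint8Array>): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact.
    text += decoder.decode(value, { stream: true });
  }
  return text;
}
```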