

Mohammad Alhabil · December 17, 2025 · 10 min read

Say Goodbye to AI Complexity with TanStack AI! The SDK That Ends Vendor Lock-in 🤖

If you've ever tried building a chatbot or integrating AI into your application, you've almost certainly hit the same pain points: vendor lock-in, fragile streaming code, and tool-calling boilerplate. Today, there's finally a solution that addresses all of them.


The Problems We All Face with AI Integration

The Vendor Lock-in Nightmare

You build your entire application around OpenAI's API. Everything works great. Then Claude releases a better model, or you discover Gemini offers better pricing for your use case.

The problem? You have to rewrite everything from scratch. Your code is tightly coupled to OpenAI's specific API structure, response formats, and error handling patterns.

// You're stuck with this
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Even the types are OpenAI-specific
async function chat(messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[]) {
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages,
    // OpenAI-specific options everywhere
  });
  return response.choices[0].message;
}

// Want to switch to Claude? Rewrite everything! 😱

Streaming Complexity and Type Safety Issues

Real-time streaming is essential for good UX, but implementing it properly is a nightmare, as the sketch after this list shows:

  • No type safety for streamed responses
  • Manual chunk parsing and state management
  • Tool calling becomes exponentially more complex
  • Different providers have different streaming formats
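
To see why, here's a rough sketch of what consuming OpenAI's raw SSE stream by hand looks like without an SDK (the endpoint and chunk shape are OpenAI-specific, and `messages` is assumed to already be in scope):

const res = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ model: 'gpt-4o', messages, stream: true }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // SSE events are separated by blank lines; a network chunk can split one mid-event
  const events = buffer.split('\n\n');
  buffer = events.pop() ?? '';

  for (const event of events) {
    const data = event.replace(/^data: /, '').trim();
    if (!data || data === '[DONE]') continue;
    const parsed = JSON.parse(data); // untyped - no help from the compiler here
    process.stdout.write(parsed.choices[0]?.delta?.content ?? '');
  }
}

And every provider formats these chunks differently, so none of this parsing code transfers when you switch.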

Tool Calling Boilerplate

Every AI provider handles function calling differently. You end up writing tons of adapter code just to make tools work consistently across providers.
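
To illustrate, here's roughly how the same weather tool has to be declared for OpenAI versus Anthropic (shapes follow each provider's public API; the embedded JSON Schema is identical, but the wrappers are not):

// OpenAI nests the schema under `function.parameters`...
const openaiTool = {
  type: 'function',
  function: {
    name: 'getWeather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: { location: { type: 'string' } },
      required: ['location'],
    },
  },
};

// ...while Anthropic expects a flat `input_schema`
const anthropicTool = {
  name: 'getWeather',
  description: 'Get current weather for a location',
  input_schema: {
    type: 'object',
    properties: { location: { type: 'string' } },
    required: ['location'],
  },
};

The responses diverge too: OpenAI returns tool calls in a `tool_calls` array, while Anthropic emits `tool_use` content blocks, so you end up writing adapters in both directions.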


The Solution: TanStack AI

TanStack AI is a lightweight, type-safe SDK from the team behind TanStack Query (React Query) that provides a unified interface across multiple LLM providers. It's designed to give you professional-grade AI experiences without the usual complexity.

Why TanStack AI?

The TanStack team has a decade of experience building tools that developers love and that age well. They follow a simple philosophy: your code, your infrastructure, your choice - with no vendor lock-in and no proprietary formats.


Key Features That Make TanStack AI Stand Out

Full Type Safety with Zod Inference

Every schema and tool is validated by TypeScript using Zod schema inference, meaning zero runtime errors from type mismatches. Your IDE catches mistakes at compile time, not in production.

Example: Type-Safe Chat Function

import { chat } from '@tanstack/ai';
import { openai } from '@tanstack/ai-openai';
import { z } from 'zod';

// Define your message schema with full type safety
const MessageSchema = z.object({
  role: z.enum(['user', 'assistant', 'system']),
  content: z.string(),
});

// The exact TypeScript type is inferred from the schema
type Message = z.infer<typeof MessageSchema>;

// TypeScript knows exactly what's valid
const result = await chat({
  adapter: openai(),
  model: 'gpt-4o', // ✅ Type-checked against available models
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
  // ❌ TypeScript error if you pass invalid options
});

Provider-Specific Type Safety

Different providers offer unique features that aren't available across all models - TanStack AI provides per-model type safety through providerOptions:

const result = await chat({
  adapter: openai(),
  model: 'o3-mini', // Reasoning model
  messages: [{ role: 'user', content: 'Hello!' }],
  providerOptions: {
    // ✅ Your IDE knows 'reasoning' is available for this model
    reasoning: { effort: 'high' }
  }
});

// Switch to a different model
const result2 = await chat({
  adapter: openai(),
  model: 'gpt-4o', // Standard model
  messages: [{ role: 'user', content: 'Hello!' }],
  providerOptions: {
    // ❌ TypeScript error! 'reasoning' not available for this model
    reasoning: { effort: 'high' }
  }
});

Built-in Streaming Support

Real-time streaming is built into the core, so users see responses as they're generated.

Server-Side Streaming Example

import { chat } from '@tanstack/ai';
import { anthropic } from '@tanstack/ai-anthropic';

export async function POST(req: Request) {
  const { messages } = await req.json();
  
  // Streaming is automatic - just return the async generator
  const stream = chat({
    adapter: anthropic(),
    model: 'claude-sonnet-4-20250514',
    messages,
    as: 'stream', // Stream mode
  });
  
  // Convert to Response with proper SSE format
  const encoder = new TextEncoder(); // reuse one encoder instead of allocating per chunk
  return new Response(
    new ReadableStream({
      async start(controller) {
        for await (const chunk of stream) {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify(chunk)}\n\n`)
          );
        }
        controller.close();
      }
    }),
    {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
      }
    }
  );
}

Client-Side with React

import { useChat } from '@tanstack/ai-react';
import { fetchServerSentEvents } from '@tanstack/ai-client';

function ChatInterface() {
  const { messages, input, setInput, submit, isLoading } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
  });
  
  return (
    <div className="flex flex-col h-screen">
      <div className="flex-1 overflow-y-auto p-4">
        {messages.map((msg, i) => (
          <div key={i} className={msg.role === 'user' ? 'text-right' : 'text-left'}>
            <div className="inline-block p-3 rounded-lg bg-gray-100">
              {msg.content}
            </div>
          </div>
        ))}
      </div>
      
      <form onSubmit={(e) => { e.preventDefault(); submit(); }}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type a message..."
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading}>
          {isLoading ? 'Sending...' : 'Send'}
        </button>
      </form>
    </div>
  );
}

Isomorphic Tools - Define Once, Use Everywhere

Using toolDefinition(), you define a tool once and provide environment-specific implementations with .server() or .client(). No duplicate code, complete type safety.

Tool Definition Example

import { chat, toolDefinition } from '@tanstack/ai';
import { openai } from '@tanstack/ai-openai';
import { z } from 'zod';

// Define the tool interface once
const weatherTool = toolDefinition({
  name: 'getWeather',
  description: 'Get current weather for a location',
  inputSchema: z.object({
    location: z.string().describe('City name'),
    unit: z.enum(['celsius', 'fahrenheit']).default('celsius'),
  }),
  outputSchema: z.object({
    temperature: z.number(),
    conditions: z.string(),
    humidity: z.number(),
  }),
});

// Server implementation
const weatherToolServer = weatherTool.server({
  async execute({ location, unit }) {
    // Call your weather API (encode user-supplied values)
    const data = await fetch(
      `https://api.weather.com/current?city=${encodeURIComponent(location)}&unit=${unit}`
    ).then(r => r.json());
    
    return {
      temperature: data.temp,
      conditions: data.conditions,
      humidity: data.humidity,
    };
  },
});

// Use in AI chat
const stream = chat({
  adapter: openai(),
  model: 'gpt-4o',
  messages: [{ role: 'user', content: "What's the weather in Paris?" }],
  tools: [weatherToolServer], // Tool executes automatically!
});

Client Implementation (for browser-only operations)

// Client-side tool (runs in browser)
const getUserLocationTool = toolDefinition({
  name: 'getUserLocation',
  description: 'Get user\'s current location',
  inputSchema: z.object({}),
  outputSchema: z.object({
    latitude: z.number(),
    longitude: z.number(),
  }),
}).client({
  async execute() {
    return new Promise((resolve, reject) => {
      navigator.geolocation.getCurrentPosition(
        (position) => resolve({
          latitude: position.coords.latitude,
          longitude: position.coords.longitude,
        }),
        reject
      );
    });
  },
});

Switch Providers Without Code Changes

Support for OpenAI, Anthropic, Ollama, and Google Gemini out of the box - switch providers at runtime without code changes.

Adapter Switching Example

import { chat } from '@tanstack/ai';
import { openai } from '@tanstack/ai-openai';
import { anthropic } from '@tanstack/ai-anthropic';
import { gemini } from '@tanstack/ai-google';

// Same interface, different providers!
const providers = {
  openai: openai(),
  anthropic: anthropic(),
  gemini: gemini(),
};

async function getAIResponse(provider: keyof typeof providers, messages) {
  return chat({
    adapter: providers[provider], // ✅ Switch at runtime
    model: provider === 'openai' ? 'gpt-4o' :
           provider === 'anthropic' ? 'claude-sonnet-4-20250514' :
           'gemini-2.0-flash-exp',
    messages,
  });
}

// Use OpenAI
await getAIResponse('openai', messages);

// Switch to Claude - same code!
await getAIResponse('anthropic', messages);

// Try Gemini - no rewrites!
await getAIResponse('gemini', messages);

Environment-Based Configuration

// Load provider from environment
const getAdapter = () => {
  const provider = process.env.AI_PROVIDER || 'openai';
  
  switch(provider) {
    case 'anthropic':
      return anthropic();
    case 'gemini':
      return gemini();
    default:
      return openai();
  }
};

const stream = chat({
  adapter: getAdapter(), // Determined at runtime
  model: process.env.AI_MODEL || 'gpt-4o',
  messages,
});

Tool Approval Flow

Built-in support for tool approval workflows - if the AI suggests a sensitive operation (payment, deletion, modification), you can require user approval before execution.

Approval Flow Example

import { toolDefinition } from '@tanstack/ai';
import { z } from 'zod';

const deleteUserTool = toolDefinition({
  name: 'deleteUser',
  description: 'Delete a user account',
  inputSchema: z.object({
    userId: z.string(),
  }),
}).server({
  // Tool requires approval before execution
  requiresApproval: true,
  
  async execute({ userId }) {
    await database.users.delete(userId);
    return { success: true };
  },
});

// In your chat handler
const stream = chat({
  adapter: openai(),
  model: 'gpt-4o',
  messages,
  tools: [deleteUserTool],
  
  // Handle approval requests
  onToolApprovalRequest: async (tool, input) => {
    // Show confirmation dialog to user
    const confirmed = await showConfirmDialog(
      `The AI wants to delete user ${input.userId}. Allow?`
    );
    
    return confirmed; // true = execute, false = reject
  },
});

Seamless React Integration

The useChat hook manages messages, loading states, and streaming automatically without boilerplate.

Complete Chat Component

'use client';

import { useChat } from '@tanstack/ai-react';
import { fetchServerSentEvents } from '@tanstack/ai-client';

export function ChatWindow() {
  const {
    messages,
    input,
    setInput,
    submit,
    isLoading,
    error,
    stop,
  } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
  });
  
  return (
    <div className="flex flex-col h-screen max-w-2xl mx-auto">
      {/* Messages Display */}
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${
              message.role === 'user' ? 'justify-end' : 'justify-start'
            }`}
          >
            <div
              className={`max-w-[80%] rounded-lg p-4 ${
                message.role === 'user'
                  ? 'bg-blue-500 text-white'
                  : 'bg-gray-100 text-gray-900'
              }`}
            >
              {message.content}
            </div>
          </div>
        ))}
        
        {/* Loading Indicator */}
        {isLoading && (
          <div className="flex justify-start">
            <div className="bg-gray-100 rounded-lg p-4">
              <div className="flex space-x-2">
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce" />
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce delay-75" />
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce delay-150" />
              </div>
            </div>
          </div>
        )}
      </div>
      
      {/* Error Display */}
      {error && (
        <div className="bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded">
          Error: {error.message}
        </div>
      )}
      
      {/* Input Form */}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          if (input.trim()) {
            submit();
          }
        }}
        className="border-t p-4"
      >
        <div className="flex gap-2">
          <input
            type="text"
            value={input}
            onChange={(e) => setInput(e.target.value)}
            placeholder="Type your message..."
            disabled={isLoading}
            className="flex-1 rounded-lg border border-gray-300 px-4 py-2 focus:outline-none focus:ring-2 focus:ring-blue-500"
          />
          
          {isLoading ? (
            <button
              type="button"
              onClick={stop}
              className="px-6 py-2 bg-red-500 text-white rounded-lg hover:bg-red-600"
            >
              Stop
            </button>
          ) : (
            <button
              type="submit"
              disabled={!input.trim()}
              className="px-6 py-2 bg-blue-500 text-white rounded-lg hover:bg-blue-600 disabled:opacity-50 disabled:cursor-not-allowed"
            >
              Send
            </button>
          )}
        </div>
      </form>
    </div>
  );
}

How to Get Started

Installation

Install the core packages and your chosen provider:

npm install @tanstack/ai @tanstack/ai-react @tanstack/ai-openai

Or with other providers:

npm install @tanstack/ai @tanstack/ai-anthropic
npm install @tanstack/ai @tanstack/ai-google
npm install @tanstack/ai @tanstack/ai-ollama

Server Setup

Create your chat endpoint:

// app/api/chat/route.ts (Next.js App Router)
import { chat } from '@tanstack/ai';
import { openai } from '@tanstack/ai-openai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  
  const stream = chat({
    adapter: openai(),
    model: 'gpt-4o',
    messages,
    as: 'stream',
  });
  
  // Return SSE stream
  const encoder = new TextEncoder(); // reuse one encoder instead of allocating per chunk
  return new Response(
    new ReadableStream({
      async start(controller) {
        for await (const chunk of stream) {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify(chunk)}\n\n`)
          );
        }
        controller.close();
      }
    }),
    {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
      }
    }
  );
}

Client Setup

Use the React hook in your components:

'use client';

import { useChat } from '@tanstack/ai-react';
import { fetchServerSentEvents } from '@tanstack/ai-client';

export default function ChatPage() {
  const { messages, input, setInput, submit, isLoading } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
  });
  
  return (
    <div>
      {/* Your chat UI here */}
    </div>
  );
}

That's it! You have a fully functional, type-safe AI chat with streaming support.
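
Before building out the UI, you can smoke-test the endpoint directly. Here's a minimal sketch (it assumes the dev server runs on localhost:3000 and a Node 18+ runtime, where response bodies are async-iterable):

// Quick smoke test for the /api/chat SSE endpoint
const res = await fetch('http://localhost:3000/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ messages: [{ role: 'user', content: 'Hi!' }] }),
});

// Print raw `data: {...}` SSE events as they stream in
for await (const chunk of res.body!.pipeThrough(new TextDecoderStream())) {
  process.stdout.write(chunk);
}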


Advanced Features

Automatic Tool Execution

The SDK automatically detects tool calls from the model, executes the tool functions, feeds the results back into the conversation, and lets the model continue:

import { chat, maxIterations } from '@tanstack/ai';
import { openai } from '@tanstack/ai-openai';

const stream = chat({
  adapter: openai(),
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: "What's the weather in Paris and New York?" }
  ],
  tools: [weatherToolServer, currencyToolServer], // server implementations (currency tool defined like the weather tool)
  
  // Control the agent loop
  agentLoopStrategy: maxIterations(5),
});

// SDK handles the entire flow:
// 1. Model requests tool calls
// 2. Tools execute automatically
// 3. Results added to conversation
// 4. Model continues with new context
// 5. Process repeats until completion or max iterations

Multimodal Support

Send images, audio, video, and documents as part of your messages:

const stream = chat({
  adapter: anthropic(),
  model: 'claude-sonnet-4-20250514',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        {
          type: 'image',
          source: {
            type: 'base64',
            media_type: 'image/jpeg',
            data: base64ImageData,
          }
        }
      ]
    }
  ],
});

Custom Connection Adapters

Build custom adapters for WebSockets or any other transport:

import type { ConnectionAdapter } from '@tanstack/ai-client';

// Keep the socket in scope so abort() can reach it
let ws: WebSocket | null = null;

const wsAdapter: ConnectionAdapter = {
  async *connect(messages, data) {
    ws = new WebSocket('wss://your-ai-api.com/chat');

    // Send messages
    ws.send(JSON.stringify({ messages, data }));

    // Yield chunks as they arrive (streamWebSocket is your own helper
    // that exposes socket messages as an async iterable)
    for await (const chunk of streamWebSocket(ws)) {
      yield chunk;
    }
  },

  abort() {
    ws?.close();
  },
};

// Use custom adapter
const chat = useChat({
  connection: wsAdapter,
});

Important Note: Alpha Status

TanStack AI is currently in alpha stage. This means:

What to Expect:

  • 🚀 Rapid development and new features
  • ⚠️ Breaking changes possible between versions
  • 📚 Documentation improving continuously
  • 🐛 Some bugs and rough edges

Recommended Use Cases:

  • ✅ Personal projects and experimentation
  • ✅ Learning and exploring AI integration
  • ✅ Prototypes and proof-of-concepts
  • ❌ Production applications with tight deadlines
  • ❌ Mission-critical systems

Stay Updated: Always check the changelog before upgrading, and join the TanStack Discord for announcements.


TanStack AI vs Vercel AI SDK

TanStack AI takes the opposite approach to Vercel's AI SDK: it prioritizes open, portable tooling over platform-specific optimization.

Key Differences

Philosophy:

  • Vercel AI SDK: Optimized for Vercel's platform, rapid iteration on features
  • TanStack AI: Framework-agnostic, works everywhere, no platform lock-in

Type Safety:

  • Vercel AI SDK: Flexible typing allows passing options that may not apply to your model
  • TanStack AI: Per-model type safety with zero runtime overhead

Maturity:

  • Vercel AI SDK: Production-ready, battle-tested, comprehensive docs
  • TanStack AI: Alpha stage, rapidly evolving, exciting future

When to Choose Which:

  • Choose Vercel AI SDK if you need production-ready features today
  • Choose TanStack AI if you're thinking about the next two years and value portability over platform convenience

The Bottom Line

TanStack AI solves the fundamental problems in AI development:

  • No More Vendor Lock-in - switch providers without rewriting code
  • Type Safety First - catch errors at compile time, not in production
  • Streaming Made Simple - real-time responses without complexity
  • Isomorphic Tools - define once, run anywhere
  • Framework Agnostic - works with React, Solid, vanilla JS, and more
  • Open Source - no hidden fees, no service lock-in, community-driven

TanStack AI is a pure open-source ecosystem of libraries and standards—not a service. It connects you directly to AI providers with no middleman, no service fees, and no vendor lock-in.

The Future is Open

With backing from sponsors like Cloudflare and Prisma, and the proven track record of the TanStack team, TanStack AI represents the future of AI development - open, portable, and developer-friendly.



Topics covered

TanStack AI, AI SDK, AI Integration, TypeScript, React, AI Models, LLM Integration, AI Providers, OpenAI, Anthropic, Claude, Gemini, Ollama, GPT, TanStack AI tutorial, AI SDK comparison, Vercel AI alternative, React AI integration, AI tool calling, AI chatbot development, streaming AI responses


Written by Mohammad Alhabil

Frontend Developer & Software Engineer passionate about building beautiful and functional web experiences. I write about React, Next.js, and modern web development.