
Using the OpenAI SDK

This guide shows you how to use the official OpenAI SDK with Liona, allowing you to integrate AI capabilities while maintaining security and cost control. Liona works as a drop-in replacement for OpenAI’s API, requiring minimal changes to your existing code.

JavaScript/TypeScript Integration

The OpenAI JavaScript/TypeScript SDK is fully compatible with Liona. Follow these steps to integrate it into your application.

Install the OpenAI SDK

If you haven’t already, install the OpenAI SDK using npm, yarn, or pnpm:

```shell
npm install openai
# or
yarn add openai
# or
pnpm add openai
```

Initialize the OpenAI client

Initialize the OpenAI client using your Liona access key instead of your OpenAI API key:

```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your_liona_access_key_here', // Your Liona access key
  baseURL: 'https://api.liona.ai/v1/provider/openai',
});
```

Use the OpenAI SDK as normal

Now you can use the OpenAI SDK exactly as you would normally, with all the same methods and parameters:

```javascript
async function generateText() {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Explain quantum computing in simple terms.' },
    ],
    temperature: 0.7,
  });

  console.log(completion.choices[0].message.content);
}
```

Client-side usage (browser)

One of Liona’s key benefits is enabling secure client-side usage of the OpenAI SDK. You can safely use your Liona access key directly in browser-based applications:

```javascript
// In a React component or other client-side code
import { useState } from 'react';
import OpenAI from 'openai';

function AiComponent() {
  const [result, setResult] = useState('');

  async function handleSubmit(userInput) {
    // Safe to use in client-side code with Liona!
    const openai = new OpenAI({
      apiKey: 'your_liona_access_key_here', // Your Liona access key
      baseURL: 'https://api.liona.ai/v1/provider/openai',
      dangerouslyAllowBrowser: true, // Still required by the OpenAI SDK
    });

    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: userInput }],
    });

    setResult(completion.choices[0].message.content);
  }

  // Component rendering...
}
```
Note

The dangerouslyAllowBrowser flag is still required by the OpenAI SDK, but using a Liona access key makes this approach secure because your actual OpenAI API key is never exposed.

Handle rate limits and policy errors

When using Liona, you should handle rate-limit errors (HTTP 429) and "policy limit exceeded" responses:

```javascript
async function callOpenAI() {
  try {
    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: 'Hello' }],
    });
    return completion.choices[0].message.content;
  } catch (error) {
    if (error.status === 429 || error.message.includes('policy limit exceeded')) {
      console.log('Rate limit or policy limit reached. Please try again later.');
      // Handle gracefully - perhaps show a user-friendly message
    } else {
      console.error('Error:', error);
    }
  }
}
```
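To recover automatically from transient 429s, you can wrap calls in a small retry helper. This is a sketch, not part of the Liona or OpenAI APIs; `withRetry` and its defaults are hypothetical:

```javascript
// Generic retry helper with exponential backoff for HTTP 429 errors.
// Any other error (or exhausted retries) is rethrown to the caller.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status !== 429 || attempt >= retries) throw error;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

You could then call `withRetry(() => openai.chat.completions.create({ ... }))` so short-lived rate limits resolve without surfacing an error to the user.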

Python Integration

The OpenAI Python SDK also works seamlessly with Liona.

Install the OpenAI Python package

If you haven’t already, install the OpenAI Python package:

```shell
pip install openai
# or
poetry add openai
```

Initialize the OpenAI client

Initialize the OpenAI client with your Liona access key:

```python
from openai import OpenAI

client = OpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
)
```

Use the OpenAI client as normal

You can now use the OpenAI client normally:

```python
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain how gravity works."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```

Handle rate limits and policy errors

Handle rate limits and policy errors in your Python applications:

```python
import openai
from openai import OpenAI

client = OpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
)

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
except openai.APIError as e:
    # Only APIStatusError subclasses carry status_code, so read it defensively.
    status = getattr(e, "status_code", None)
    if status == 429 or "policy limit exceeded" in str(e).lower():
        print("Rate limit or policy limit reached. Please try again later.")
    else:
        print(f"Error: {e}")
```

Additional Integration Options

Environment Variables

You can use environment variables to configure the OpenAI SDK:

```shell
# JavaScript
OPENAI_API_KEY=your_liona_access_key_here
OPENAI_BASE_URL=https://api.liona.ai/v1/provider/openai

# Python
OPENAI_API_KEY=your_liona_access_key_here
OPENAI_BASE_URL=https://api.liona.ai/v1/provider/openai
```
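If you prefer passing configuration explicitly rather than relying on the SDK's own environment handling, a small helper can read these variables and fall back to the default Liona endpoint. This is a sketch; `lionaClientOptions` is a hypothetical name, not part of either SDK:

```javascript
// Builds the options object for `new OpenAI(...)` from environment variables.
// The fallback URL assumes the default Liona OpenAI provider endpoint.
function lionaClientOptions(env = process.env) {
  if (!env.OPENAI_API_KEY) {
    throw new Error('OPENAI_API_KEY is not set');
  }
  return {
    apiKey: env.OPENAI_API_KEY,
    baseURL: env.OPENAI_BASE_URL ?? 'https://api.liona.ai/v1/provider/openai',
  };
}
```

Usage would then be `const openai = new OpenAI(lionaClientOptions());`, which fails fast with a clear message when the key is missing.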

Using with Next.js

For Next.js applications using server components:

```javascript
// In app/api/route.js or similar
import OpenAI from 'openai';
import { NextResponse } from 'next/server';

export async function POST(request) {
  const { prompt } = await request.json();

  const openai = new OpenAI({
    apiKey: process.env.LIONA_ACCESS_KEY,
    baseURL: 'https://api.liona.ai/v1/provider/openai',
  });

  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: prompt }],
  });

  return NextResponse.json({ result: completion.choices[0].message.content });
}
```

Common Issues and Troubleshooting

Rate Limits and Policies

If you encounter rate limits or permission errors, check:

  1. The policy assigned to your user in Liona
  2. Your current usage against the set limits
  3. Whether the specific model is allowed by your policy
Tip

You can check your usage and limits in the Liona dashboard under the “Usage” section.

Error Response Codes

Liona preserves OpenAI’s error structure while adding additional context:

  • HTTP 429: Rate limit or policy limit exceeded
  • HTTP 403: Unauthorized access to a specific model or feature
  • HTTP 401: Invalid or expired access key
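The codes above can be turned into user-facing messages with a simple mapping. A sketch; the wording is illustrative and is not returned by Liona itself:

```javascript
// Maps the Liona/OpenAI HTTP error statuses above to UI-friendly messages.
function describeLionaError(status) {
  switch (status) {
    case 429:
      return 'Rate limit or policy limit exceeded - please try again later.';
    case 403:
      return 'Your policy does not allow this model or feature.';
    case 401:
      return 'Invalid or expired Liona access key.';
    default:
      return `Unexpected error (HTTP ${status}).`;
  }
}
```

A catch block can call this with `error.status` (JavaScript) or `e.status_code` (Python) instead of branching inline at every call site.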

Next Steps

Now that you’ve integrated the OpenAI SDK with Liona, you might want to review the policies and usage limits for your users in the Liona dashboard, or connect additional providers.
