Using Langchain

This guide shows you how to use Langchain with Liona, allowing you to integrate AI capabilities while maintaining security and cost control. Liona works as a drop-in replacement for various AI providers’ APIs, requiring minimal changes to your existing Langchain code.

JavaScript/TypeScript Integration

Langchain’s JavaScript/TypeScript library is fully compatible with Liona. Follow these steps to integrate it into your application.

Install Langchain

If you haven’t already, install the Langchain packages using npm, yarn, or pnpm:

npm install langchain @langchain/openai @langchain/anthropic
# or
yarn add langchain @langchain/openai @langchain/anthropic
# or
pnpm add langchain @langchain/openai @langchain/anthropic

Configure Langchain with Liona

Initialize Langchain models using your Liona access key instead of provider API keys:

import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

// For OpenAI models
const openaiModel = new ChatOpenAI({
  apiKey: "your_liona_access_key_here", // Your Liona access key
  modelName: "gpt-4",
  configuration: {
    baseURL: "https://api.liona.ai/v1/provider/openai", // Passed to the underlying OpenAI client
  },
});

// For Anthropic models
const anthropicModel = new ChatAnthropic({
  apiKey: "your_liona_access_key_here", // Same Liona access key
  modelName: "claude-3-opus-20240229",
  anthropicApiUrl: "https://api.liona.ai/v1/provider/anthropic",
});

Use Langchain as normal

Now you can use Langchain exactly as you would normally:

async function generateWithLangchain() {
  // Simple question-answering with OpenAI
  const openaiResponse = await openaiModel.invoke(
    "Explain quantum computing in simple terms."
  );
  console.log("OpenAI response:", openaiResponse.content);

  // Simple question-answering with Anthropic
  const anthropicResponse = await anthropicModel.invoke(
    "Explain machine learning in simple terms."
  );
  console.log("Anthropic response:", anthropicResponse.content);
}
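
The function above is only defined; to actually run it you would call it wherever you trigger generation, for example:

// Run the example (top-level await also works in ESM modules)
generateWithLangchain().catch(console.error);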

Client-side usage (browser)

You can safely use your Liona access key directly in browser-based Langchain applications:

// In a React component or other client-side code
import { ChatOpenAI } from "@langchain/openai";
import { useState } from "react";

function LangchainComponent() {
  const [result, setResult] = useState("");

  async function handleSubmit(userInput) {
    // Safe to use in client-side code with Liona!
    const model = new ChatOpenAI({
      apiKey: "your_liona_access_key_here", // Your Liona access key
      modelName: "gpt-4",
      configuration: {
        baseURL: "https://api.liona.ai/v1/provider/openai",
      },
    });
    const response = await model.invoke(userInput);
    setResult(response.content);
  }

  // Component rendering...
}

Handle rate limits and policy errors

When using Liona with Langchain, implement proper error handling:

import { ChatOpenAI } from "@langchain/openai";

async function callLangchain() {
  const model = new ChatOpenAI({
    apiKey: "your_liona_access_key_here",
    modelName: "gpt-4",
    configuration: {
      baseURL: "https://api.liona.ai/v1/provider/openai",
    },
  });

  try {
    const response = await model.invoke("Explain neural networks.");
    return response.content;
  } catch (error) {
    if (error.status === 429 || error.message.includes("policy limit exceeded")) {
      console.log("Rate limit or policy limit reached. Please try again later.");
      // Handle gracefully - perhaps show a user-friendly message
    } else {
      console.error("Error:", error);
    }
  }
}
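
If you want calls to recover automatically instead of failing on the first 429, one option is to retry with exponential backoff before surfacing the error. Here is a minimal sketch; the retry count and delay values are illustrative choices, not Liona defaults:

// Retries only on 429-style errors, with 1s, 2s, 4s, ... delays.
async function invokeWithBackoff(model, input, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await model.invoke(input);
    } catch (error) {
      const retryable = error.status === 429;
      if (!retryable || attempt === maxRetries) throw error;
      // Wait before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}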

Python Integration

Langchain’s Python library also works seamlessly with Liona.

Install Langchain Python packages

Install the required Langchain packages:

pip install langchain langchain-openai langchain-anthropic
# or
poetry add langchain langchain-openai langchain-anthropic

Configure Langchain with Liona

Initialize Langchain with your Liona access key:

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# For OpenAI models
openai_model = ChatOpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
    model_name="gpt-4",
)

# For Anthropic models
anthropic_model = ChatAnthropic(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/anthropic",
    model_name="claude-3-opus-20240229",
)

Use Langchain as normal

You can now use Langchain normally:

from langchain_core.messages import HumanMessage

# Using OpenAI models
openai_response = openai_model.invoke("Explain quantum entanglement simply.")
print(f"OpenAI response: {openai_response.content}")

# Using Anthropic models
anthropic_response = anthropic_model.invoke("Explain blockchain simply.")
print(f"Anthropic response: {anthropic_response.content}")

# Using with messages
messages = [
    HumanMessage(content="Explain the concept of recursion in programming.")
]
response = openai_model.invoke(messages)
print(response.content)

Handle rate limits and policy errors

Handle rate limits and policy errors in your Python applications:

from langchain_openai import ChatOpenAI
import openai

model = ChatOpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
    model_name="gpt-4",
)

try:
    response = model.invoke("What is artificial intelligence?")
    print(response.content)
except openai.APIError as e:
    if getattr(e, "status_code", None) == 429 or "policy limit exceeded" in str(e).lower():
        print("Rate limit or policy limit reached. Please try again later.")
    else:
        print(f"Error: {e}")

Advanced Usage

Using with Chains and Agents

Liona works seamlessly with Langchain’s chains and agents:

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Initialize with Liona
llm = ChatOpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
    model_name="gpt-4",
)

# Create a simple chain
prompt = PromptTemplate.from_template("Explain {concept} in simple terms.")
chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain
result = chain.invoke({"concept": "quantum computing"})
print(result["text"])
The same chain in JavaScript/TypeScript:

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { LLMChain } from "langchain/chains";

const llm = new ChatOpenAI({
  apiKey: "your_liona_access_key_here",
  modelName: "gpt-4",
  configuration: {
    baseURL: "https://api.liona.ai/v1/provider/openai",
  },
});

const prompt = PromptTemplate.fromTemplate("Explain {concept} in simple terms.");
const chain = new LLMChain({ llm, prompt });

const result = await chain.invoke({ concept: "quantum computing" });
console.log(result.text);

Environment Variables

You can use environment variables with Langchain:

# JavaScript & Python (for OpenAI)
OPENAI_API_KEY=your_liona_access_key_here
OPENAI_BASE_URL=https://api.liona.ai/v1/provider/openai

# JavaScript & Python (for Anthropic)
ANTHROPIC_API_KEY=your_liona_access_key_here
ANTHROPIC_BASE_URL=https://api.liona.ai/v1/provider/anthropic

Then in your code:

// The model will automatically use the environment variables
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ modelName: "gpt-4" });
# The model will automatically use the environment variables
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model_name="gpt-4")
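
One caveat for Node.js: the variables must be present in process.env before the model is constructed. If you keep them in a local .env file, you could load it first with a package such as dotenv (an assumption about your tooling, not a Liona requirement):

// Load .env into process.env before constructing any models
// (assumes the dotenv package is installed: npm install dotenv)
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ modelName: "gpt-4" });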

Common Issues and Troubleshooting

Rate Limits and Policies

If you encounter rate limits or permission errors, check:

  1. The policy assigned to your user in Liona
  2. Your current usage against the set limits
  3. Whether the specific model is allowed by your policy

💡 Tip: You can check your usage and limits in the Liona dashboard under the “Usage” section.

Error Response Codes

Liona preserves the underlying provider’s error structure while adding context of its own; a minimal handling sketch follows the list:

  • HTTP 429: Rate limit or policy limit exceeded
  • HTTP 403: Unauthorized access to a specific model or feature
  • HTTP 401: Invalid or expired access key
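
Because these are ordinary HTTP status codes, you can branch on them in one place. A minimal TypeScript sketch, assuming the thrown error exposes a numeric status field the way the OpenAI-compatible SDKs’ APIError does (adjust for your client):

// Map Liona/provider HTTP errors to user-facing messages.
function describeLionaError(error: { status?: number; message?: string }): string {
  switch (error.status) {
    case 429:
      return "Rate limit or policy limit exceeded - try again later.";
    case 403:
      return "Your policy does not allow this model or feature.";
    case 401:
      return "Invalid or expired access key - check your Liona settings.";
    default:
      return `Unexpected error: ${error.message ?? "unknown"}`;
  }
}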

Next Steps

Now that you’ve integrated Langchain with Liona, you might want to explore the rest of the Liona documentation.
