
Using LangGraph

This guide shows you how to use LangGraph with Liona, allowing you to build and orchestrate complex AI workflows while maintaining security and cost control. Liona works as a drop-in replacement for the underlying LLM providers’ APIs, requiring minimal changes to your existing LangGraph code.

Python Integration

LangGraph is primarily available as a Python library and works seamlessly with Liona.

Install LangGraph

Install the required LangGraph packages along with the language model integrations:

pip install langgraph langchain langchain-openai langchain-anthropic
# or
poetry add langgraph langchain langchain-openai langchain-anthropic
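
To confirm that the packages are installed, you can print their versions (importlib.metadata is part of the Python standard library):

from importlib.metadata import version

# Print the installed versions of the core packages
print(version("langgraph"), version("langchain-openai"), version("langchain-anthropic"))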

Configure LangGraph with Liona

Initialize language models within your LangGraph workflows using your Liona access key:

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

# Initialize language models with Liona
openai_model = ChatOpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
    model="gpt-4",
)

anthropic_model = ChatAnthropic(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/anthropic",
    model="claude-3-opus-20240229",
)
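
To verify that requests route through Liona before building a full graph, a one-off test call is enough. A minimal sketch; the prompt text is purely illustrative:

# Quick sanity check that calls route through the Liona proxy
reply = openai_model.invoke("Reply with the single word: pong")
print(reply.content)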

Create and Use LangGraph Workflows

You can now use LangGraph as usual with your Liona-configured language models:

from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage

# Define state
class AgentState(TypedDict):
    messages: Annotated[Sequence[HumanMessage | AIMessage | SystemMessage], add_messages]

# Define nodes (functions that update the state)
def research_with_gpt4(state: AgentState) -> AgentState:
    """Research step using GPT-4."""
    messages = state["messages"]
    response = openai_model.invoke(messages)
    return {"messages": [response]}

def analyze_with_claude(state: AgentState) -> AgentState:
    """Analysis step using Claude."""
    messages = state["messages"]
    response = anthropic_model.invoke(messages)
    return {"messages": [response]}

# Build graph
workflow = StateGraph(AgentState)
workflow.add_node("research", research_with_gpt4)
workflow.add_node("analyze", analyze_with_claude)

# Add edges
workflow.add_edge("research", "analyze")
workflow.set_entry_point("research")
workflow.set_finish_point("analyze")

# Compile the graph
app = workflow.compile()

# Run the workflow
result = app.invoke({
    "messages": [
        SystemMessage(content="You are a helpful research assistant."),
        HumanMessage(content="Provide information about quantum computing advancements."),
    ]
})

# Print the final messages
for message in result["messages"]:
    print(f"{message.type}: {message.content}")
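
Compiled graphs also support streaming, which is useful for watching a multi-step workflow progress node by node. A minimal sketch using the compiled graph’s stream method:

# Stream the graph's progress; each item reflects state as nodes execute
for chunk in app.stream({
    "messages": [HumanMessage(content="Summarize recent quantum computing advancements.")]
}):
    print(chunk)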

Handle Rate Limits and Policy Errors

Add proper error handling to your LangGraph workflows:

import openai
from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages

# Define state
class AgentState(TypedDict):
    messages: Annotated[Sequence[HumanMessage | AIMessage | SystemMessage], add_messages]
    error: str

# Initialize the language model with Liona
model = ChatOpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
    model="gpt-4",
)

# Define node with error handling
def process_with_llm(state: AgentState) -> AgentState:
    messages = state["messages"]
    try:
        response = model.invoke(messages)
        return {"messages": [response], "error": ""}
    except openai.APIError as e:
        if getattr(e, "status_code", None) == 429 or "policy limit exceeded" in str(e).lower():
            error_msg = "Rate limit or policy limit reached. Please try again later."
        else:
            error_msg = f"Error: {str(e)}"
        # The add_messages reducer keeps existing messages, so return only the error
        return {"messages": [], "error": error_msg}

# Define error handling node
def handle_error(state: AgentState) -> AgentState:
    if state["error"]:
        return {
            "messages": [AIMessage(content=f"Workflow encountered an error: {state['error']}")],
            "error": "",
        }
    return state

# Build graph with error handling
workflow = StateGraph(AgentState)
workflow.add_node("process", process_with_llm)
workflow.add_node("error_handler", handle_error)

# Route to the error handler when an error is set, otherwise finish
workflow.add_conditional_edges(
    "process",
    lambda state: "error_handler" if state["error"] else END,
)
workflow.add_edge("error_handler", END)
workflow.set_entry_point("process")

# Compile the graph
app = workflow.compile()
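
If you would rather retry transient rate-limit errors than surface them into graph state, a small backoff wrapper can sit in front of any model call. A minimal sketch; the attempt count and delays are arbitrary, and invoke_with_backoff is a hypothetical helper, not part of LangGraph:

import time
import openai

def invoke_with_backoff(llm, messages, max_attempts=3):
    """Retry 429 responses with exponential backoff (illustrative policy)."""
    for attempt in range(max_attempts):
        try:
            return llm.invoke(messages)
        except openai.APIStatusError as e:
            if e.status_code == 429 and attempt < max_attempts - 1:
                time.sleep(2 ** attempt)  # wait 1s, then 2s, ...
            else:
                raise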

Advanced LangGraph Usage

Multi-Agent Workflows with Liona

You can create multi-agent workflows using different providers through Liona:

from typing import TypedDict, List, Annotated
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages

# Initialize models with Liona
openai_llm = ChatOpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
    model="gpt-4",
)

anthropic_llm = ChatAnthropic(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/anthropic",
    model="claude-3-opus-20240229",
)

# Create tools (requires the `wikipedia` package)
wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
tools = [wikipedia]

# Create the researcher as a tool-calling agent; the prompt needs an
# agent_scratchpad placeholder for the agent's intermediate tool calls
researcher_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a research agent who gathers information. Use tools to find facts."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
researcher_agent = create_openai_functions_agent(openai_llm, tools, researcher_prompt)
researcher = AgentExecutor(agent=researcher_agent, tools=tools)

# The analyst uses no tools, so it can call the Anthropic model directly
analyst_system = "You are an analysis agent who reviews and synthesizes research findings."

# State definition
class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], add_messages]
    next: str

# Define nodes
def researcher_node(state: AgentState) -> AgentState:
    """Researcher agent node."""
    query = state["messages"][-1].content
    result = researcher.invoke({"input": query})
    return {"messages": [AIMessage(content=result["output"])], "next": "analyst"}

def analyst_node(state: AgentState) -> AgentState:
    """Analyst agent node."""
    response = anthropic_llm.invoke([("system", analyst_system), *state["messages"]])
    return {"messages": [response], "next": END}

# Create graph
workflow = StateGraph(AgentState)
workflow.add_node("researcher", researcher_node)
workflow.add_node("analyst", analyst_node)

# Route each node according to the "next" field in state
workflow.add_conditional_edges("researcher", lambda state: state["next"])
workflow.add_conditional_edges("analyst", lambda state: state["next"])
workflow.set_entry_point("researcher")

# Compile the graph
app = workflow.compile()

# Run the workflow
result = app.invoke({
    "messages": [HumanMessage(content="Research recent breakthroughs in fusion energy.")],
    "next": "researcher",
})

Environment Variables

You can use environment variables with your LangGraph workflows:

# For OpenAI
OPENAI_API_KEY=your_liona_access_key_here
OPENAI_BASE_URL=https://api.liona.ai/v1/provider/openai

# For Anthropic
ANTHROPIC_API_KEY=your_liona_access_key_here
ANTHROPIC_BASE_URL=https://api.liona.ai/v1/provider/anthropic

Then in your code:

# The models will automatically use the environment variables
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

openai_model = ChatOpenAI(model="gpt-4")
anthropic_model = ChatAnthropic(model="claude-3-opus-20240229")
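
If you keep these variables in a .env file, you can load them at startup with python-dotenv (an extra dependency, not required by LangGraph):

from dotenv import load_dotenv

# Load OPENAI_API_KEY, OPENAI_BASE_URL, ANTHROPIC_API_KEY, ANTHROPIC_BASE_URL
# from a local .env file before any models are constructed
load_dotenv()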

Persistent Graphs with Liona

For long-running workflows, you can add persistence by compiling your LangGraph graph with a checkpointer:

import sqlite3
from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.sqlite import SqliteSaver  # pip install langgraph-checkpoint-sqlite

# Define state
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]

# Initialize model with Liona
model = ChatOpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
    model="gpt-4",
)

# Create a basic graph
builder = StateGraph(AgentState)
builder.add_node("generate", lambda state: {"messages": [model.invoke(state["messages"])]})
builder.set_entry_point("generate")
builder.add_edge("generate", END)

# Compile with persistence (checkpoints are stored in a local SQLite file)
checkpointer = SqliteSaver(sqlite3.connect("checkpoints.db", check_same_thread=False))
graph = builder.compile(checkpointer=checkpointer)

# Run the graph on a thread; the thread_id identifies the persisted conversation
config = {"configurable": {"thread_id": "governance-thread"}}
result = graph.invoke(
    {"messages": [HumanMessage(content="Tell me about AI governance.")]},
    config,
)

# Later, continue the same thread; prior messages are restored from the checkpoint
updated_result = graph.invoke(
    {"messages": [HumanMessage(content="What are the key challenges?")]},
    config,
)
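
You can inspect the persisted state of a thread at any time with the compiled graph’s get_state method:

# Inspect the latest checkpoint for this thread
snapshot = graph.get_state(config)
for message in snapshot.values["messages"]:
    print(f"{message.type}: {message.content}")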

Common Issues and Troubleshooting

Rate Limits and Policies

If you encounter rate limits or permission errors in your LangGraph workflows, check:

  1. The policy assigned to your user in Liona
  2. Your current usage against the set limits
  3. Whether the specific model is allowed by your policy
💡 Tip: You can check your usage and limits in the Liona dashboard under the “Usage” section.

Error Response Codes

Liona preserves the underlying provider’s error structure while adding additional context:

  • HTTP 429: Rate limit or policy limit exceeded
  • HTTP 403: Unauthorized access to a specific model or feature
  • HTTP 401: Invalid or expired access key
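
For example, you can branch on these status codes directly when calling a Liona-configured model. A minimal sketch, assuming the OpenAI SDK’s APIStatusError (which carries a status_code attribute); model and messages come from your own setup:

import openai

try:
    response = model.invoke(messages)
except openai.APIStatusError as e:
    if e.status_code == 429:
        print("Rate limit or policy limit exceeded; retry later.")
    elif e.status_code == 403:
        print("Your policy does not allow this model or feature.")
    elif e.status_code == 401:
        print("Check that your Liona access key is valid and not expired.")
    else:
        raise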

Debugging LangGraph Workflows

For complex workflows, use Liona’s request tracing:

# Make sure to enable request tracing in your Liona dashboard
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key="your_liona_access_key_here",
    base_url="https://api.liona.ai/v1/provider/openai",
    model="gpt-4",
    # Add debug headers so requests show up in Liona's trace view
    default_headers={"X-Liona-Debug": "true"},
)

# Rest of your LangGraph setup...

Next Steps

Now that you’ve integrated LangGraph with Liona, you might want to review the policies assigned to your users and monitor usage in the Liona dashboard to fine-tune cost controls.
