
Find the complete code on GitHub

You can find the complete code for this integration on OVHcloud's GitHub.

LangChain

LangChain is a powerful framework for developing applications powered by large language models (LLMs). It provides tools for building chains, agents, and complex workflows with LLMs.

OpenAI Compatible

Since OVHcloud AI Endpoints is OpenAI-compatible, you can use LangChain's ChatOpenAI class as-is: simply point the API base URL (the openai_api_base parameter) at AI Endpoints and pass your OVHcloud API key.

Python SDK

Installation

Install LangChain and required dependencies:

pip install langchain langchain-openai

For specific integrations, you may also need:

# For community integrations
pip install langchain-community

# For document loaders
pip install pypdf chromadb

# For text splitting
pip install tiktoken

Setting Up Authentication

Set your OVHcloud API key as an environment variable:

export OVHCLOUD_API_KEY='your-api-key'

Or use a .env file with python-dotenv:

pip install python-dotenv

Then load it at the start of your script:

from dotenv import load_dotenv
import os

load_dotenv()  # reads OVHCLOUD_API_KEY from a local .env file
api_key = os.getenv("OVHCLOUD_API_KEY")

Usage

Basic Chat Completion

from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    temperature=0.7,
    max_tokens=200,
)

response = llm.invoke("Explain what serverless computing is in one paragraph.")
print(response.content)
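
Because ChatOpenAI implements LangChain's Runnable interface, the same object can also process several prompts in a single batch call; a minimal sketch reusing the llm defined above:

# Run several prompts in one call; returns one AIMessage per prompt
questions = [
    "Define IaaS in one sentence.",
    "Define PaaS in one sentence.",
]
responses = llm.batch(questions)
for r in responses:
    print(r.content)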

Streaming Responses

Stream responses for real-time output:

from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    streaming=True,
)

for chunk in llm.stream("Write a short poem about cloud computing."):
    print(chunk.content, end="", flush=True)
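
For async applications, the same model object exposes astream (and ainvoke); a minimal sketch reusing the streaming llm defined above:

import asyncio

async def main():
    # Asynchronously stream tokens as they arrive
    async for chunk in llm.astream("Write a short poem about cloud computing."):
        print(chunk.content, end="", flush=True)

asyncio.run(main())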

Chat with Message History

Build conversational applications with context:

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
import os

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
)

messages = [
    SystemMessage(content="You are a helpful AI assistant specialized in cloud computing."),
    HumanMessage(content="What is OVHcloud?"),
    AIMessage(content="OVHcloud is a European cloud provider offering various cloud services including AI Endpoints."),
    HumanMessage(content="What models are available on AI Endpoints?"),
]

response = llm.invoke(messages)
print(response.content)
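
To keep context across turns, append each model reply and the next user message to the list before calling the model again; a minimal sketch continuing the conversation above:

# Continue the conversation by extending the message history
messages.append(response)  # the AIMessage returned above
messages.append(HumanMessage(content="Which of those models is best suited for code generation?"))
follow_up = llm.invoke(messages)
print(follow_up.content)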

Prompt Templates

Use templates for reusable prompts:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
import os

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that translates {input_language} to {output_language}."),
    ("human", "{text}"),
])

chain = prompt | llm

response = chain.invoke({
    "input_language": "English",
    "output_language": "French",
    "text": "Hello, how are you?"
})

print(response.content)
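
To debug a template, you can render it without calling the model; format_messages returns the exact message list the LLM would receive:

# Render the template without calling the model
rendered = prompt.format_messages(
    input_language="English",
    output_language="French",
    text="Hello, how are you?",
)
for message in rendered:
    print(f"{message.type}: {message.content}")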

Chains with LCEL (LangChain Expression Language)

Build complex workflows using LCEL:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
import os

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    temperature=0.7,
)

# Define prompts
idea_prompt = ChatPromptTemplate.from_template(
    "Generate a creative startup idea for: {industry}"
)

analysis_prompt = ChatPromptTemplate.from_template(
    "Analyze the following startup idea and provide pros and cons:\n\n{idea}"
)

# Build chain
chain = (
    {"idea": idea_prompt | llm | StrOutputParser()}
    | analysis_prompt
    | llm
    | StrOutputParser()
)

result = chain.invoke({"industry": "sustainable fashion"})
print(result)
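
To forward a field of the original input alongside an intermediate result, the dict step can mix sub-chains with itemgetter; a sketch (the combined_prompt template here is illustrative):

from operator import itemgetter

# Illustrative prompt that needs both the original industry and the generated idea
combined_prompt = ChatPromptTemplate.from_template(
    "For the {industry} industry, list the main risks of this idea:\n\n{idea}"
)

chain = (
    {
        "idea": idea_prompt | llm | StrOutputParser(),
        "industry": itemgetter("industry"),  # pass the original input through
    }
    | combined_prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke({"industry": "sustainable fashion"}))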

Structured Output with Pydantic

Generate structured data with validation:

from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field
import os

# Define output structure
class ProductReview(BaseModel):
    product_name: str = Field(description="Name of the product")
    rating: int = Field(description="Rating from 1 to 5", ge=1, le=5)
    pros: list[str] = Field(description="List of positive aspects")
    cons: list[str] = Field(description="List of negative aspects")
    recommendation: str = Field(description="Final recommendation")

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    temperature=0.3,
)

parser = PydanticOutputParser(pydantic_object=ProductReview)

prompt = ChatPromptTemplate.from_template(
    "Analyze this product review and extract structured information.\n"
    "{format_instructions}\n\n"
    "Review: {review}"
)

chain = prompt | llm | parser

review_text = """
I've been using the EcoBottle 2000 for three months. 
It keeps drinks cold for 24 hours and is made from recycled materials.
However, it's quite heavy and the lid is difficult to clean.
Overall, great for eco-conscious consumers who don't mind the weight.
"""

result = chain.invoke({
    "review": review_text,
    "format_instructions": parser.get_format_instructions()
})

print(f"Product: {result.product_name}")
print(f"Rating: {result.rating}/5")
print(f"Pros: {', '.join(result.pros)}")
print(f"Cons: {', '.join(result.cons)}")
print(f"Recommendation: {result.recommendation}")

Function/Tool Calling

Enable LLMs to use tools and functions:

from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
import os

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    temperature=0,
)

# Define tools
@tool
def get_current_weather(location: str) -> str:
    """Get the current weather for a given location."""
    # Simulated weather data
    weather_data = {
        "Paris": "Sunny, 22°C",
        "London": "Cloudy, 18°C",
        "New York": "Rainy, 15°C",
    }
    return weather_data.get(location, f"Weather data not available for {location}")

@tool
def calculate(expression: str) -> str:
    """Calculate a mathematical expression."""
    try:
        # Demo only: eval() on untrusted input is unsafe; use a proper expression parser in production
        result = eval(expression)
        return f"The result is: {result}"
    except Exception as e:
        return f"Error calculating: {str(e)}"

tools = [get_current_weather, calculate]

# Create agent prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# Create agent
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Use the agent
result = agent_executor.invoke({
    "input": "What's the weather in Paris and what is 234 * 567?"
})

print(result["output"])

Retrieval-Augmented Generation (RAG)

Build RAG applications to query your own documents:

Simple RAG Pipeline

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA
import os

# Initialize LLM
llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    temperature=0,
)

# Initialize embeddings (using OVHcloud AI Endpoints)
embeddings = OpenAIEmbeddings(
    model="BGE-M3",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    check_embedding_ctx_length=False,  # send raw text rather than tiktoken token IDs
)

# Load and split documents
loader = TextLoader("your_document.txt")
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
)
splits = text_splitter.split_documents(documents)

# Create vector store
vectorstore = Chroma.from_documents(
    documents=splits,
    embedding=embeddings,
)

# Create retrieval chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
)

# Query the documents
query = "What are the main topics discussed in the document?"
result = qa_chain.invoke(query)
print(result["result"])

Advanced RAG with Custom Prompts

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain_core.prompts import PromptTemplate
import os

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    temperature=0.2,
)

embeddings = OpenAIEmbeddings(
    model="BGE-M3",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    check_embedding_ctx_length=False,  # send raw text rather than tiktoken token IDs
)

# Load PDF documents
loader = PyPDFLoader("your_document.pdf")
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
)
splits = text_splitter.split_documents(documents)

# Create vector store
vectorstore = Chroma.from_documents(
    documents=splits,
    embedding=embeddings,
    persist_directory="./chroma_db"
)

# Custom prompt template
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.

Context:
{context}

Question: {question}

Helpful Answer:"""

QA_CHAIN_PROMPT = PromptTemplate(
    input_variables=["context", "question"],
    template=template,
)

# Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(
        search_type="similarity",
        search_kwargs={"k": 4}
    ),
    return_source_documents=True,
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT}
)

# Query with source documents
query = "What are the key findings in the document?"
result = qa_chain.invoke({"query": query})

print("Answer:", result["result"])
print("\nSource Documents:")
for i, doc in enumerate(result["source_documents"], 1):
    print(f"\n{i}. {doc.page_content[:200]}...")

Conversational RAG

Build a chatbot that remembers conversation history:

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
import os

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    temperature=0.3,
)

embeddings = OpenAIEmbeddings(
    model="BGE-M3",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    check_embedding_ctx_length=False,  # send raw text rather than tiktoken token IDs
)

# Load and process documents
loader = TextLoader("your_document.txt")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(documents)

# Create vector store
vectorstore = Chroma.from_documents(documents=splits, embedding=embeddings)

# Initialize memory
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer"
)

# Create conversational chain
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    memory=memory,
    return_source_documents=True,
)

# Interactive conversation
print("Chat with your documents (type 'exit' to quit):")
while True:
    query = input("\nYou: ")
    if query.lower() == 'exit':
        break

    result = qa_chain.invoke({"question": query})
    print(f"\nAssistant: {result['answer']}")

Embeddings

Generate embeddings for semantic search and RAG:

from langchain_openai import OpenAIEmbeddings
import os

embeddings = OpenAIEmbeddings(
    model="BGE-M3",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    check_embedding_ctx_length=False,  # send raw text rather than tiktoken token IDs
)

# Embed single text
text = "OVHcloud AI Endpoints provides serverless access to LLMs."
embedding = embeddings.embed_query(text)
print(f"Embedding dimension: {len(embedding)}")

# Embed multiple documents
texts = [
    "Serverless computing eliminates infrastructure management.",
    "AI models require significant computational resources.",
    "Vector databases enable semantic search capabilities.",
]
document_embeddings = embeddings.embed_documents(texts)
print(f"Generated {len(document_embeddings)} embeddings")

Agents

Build autonomous agents that can use tools and make decisions:

from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.tools import Tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
import os

llm = ChatOpenAI(
    model="gpt-oss-120b",
    openai_api_key=os.environ.get("OVHCLOUD_API_KEY"),
    openai_api_base="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    temperature=0,
)

# Define tools
def search_database(query: str) -> str:
    """Search the company database for information."""
    # Simulated database search
    return f"Database results for '{query}': Found 3 relevant entries."

def send_email(input_str: str) -> str:
    """Send an email. Expects input formatted as 'recipient|subject'."""
    recipient, subject = input_str.split("|", 1)
    return f"Email sent to {recipient} with subject '{subject}'"

tools = [
    Tool(
        name="SearchDatabase",
        func=search_database,
        description="Search the company database for information. Input should be a search query."
    ),
    Tool(
        name="SendEmail",
        func=send_email,
        description="Send an email. Input should be: recipient|subject"
    ),
]

# Create prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant with access to tools. Use them when necessary."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# Create agent
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=3
)

# Execute task
result = agent_executor.invoke({
    "input": "Search for customer data on Project Phoenix and send a summary email to team@company.com"
})

print(result["output"])

Going Further

Community and Support

  • GitHub: report issues on OVHcloud's GitHub
  • Discord: join the OVHcloud Discord and ask questions in the #ai-endpoint channel