Find the complete code on GitHub
You can find the code used in this integration directly on OVHcloud's GitHub.

LlamaIndex¶
LlamaIndex is a leading framework for building LLM-powered agents over your data, using workflows to orchestrate models and tools.
Python¶
Installation¶
Getting started
You can find more information on how to get started with LlamaIndex in their Installation and Setup documentation.
First, install the `llama-index` and `llama-index-llms-ovhcloud` packages. The OVHcloud LLM integration is documented in the LlamaIndex API reference: https://developers.llamaindex.ai/python/framework-api-reference/llms/ovhcloud/
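The two packages named above can be installed with pip:

```shell
pip install llama-index llama-index-llms-ovhcloud
```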
Usage¶
Create a new file such as main.py and paste the following code. Don't forget to add your AI Endpoints API key in the `api_key` parameter, or set the `OVHCLOUD_API_KEY` environment variable instead.
```python
from llama_index.llms.ovhcloud import OVHcloud
from llama_index.core.llms import ChatMessage

llm = OVHcloud(
    model="gpt-oss-120b",
    api_key="",  # or set the OVHCLOUD_API_KEY environment variable
)

# Text completion
response = llm.complete("The capital of France is")
print(response.text)

# Chat completion
messages = [
    ChatMessage(role="system", content="You are a helpful assistant"),
    ChatMessage(role="user", content="What is the capital of France?"),
]
response = llm.chat(messages)
print(response)
```
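If you prefer not to hardcode the key in `api_key`, you can export the environment variable before running the script. A minimal sketch for a POSIX shell, where the placeholder key is yours to fill in:

```shell
# Replace the placeholder with your actual AI Endpoints API key
export OVHCLOUD_API_KEY="<your-api-key>"
```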
Execute the script using the following command:
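Assuming the script was saved as `main.py` as suggested above:

```shell
python main.py
```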
You should see the model's answers printed in your terminal: first the text completion, then the chat response.
TypeScript¶
Going further¶
You can explore the LlamaIndex Python documentation and TypeScript documentation to dive deeper into advanced features.
- Check out our Getting Started guide to learn more about OVHcloud AI Endpoints capabilities.
- Need support or want to chat? Join our community on Discord.
- Found a mistake or want to improve this guide? Feel free to open a Pull Request or Issue on GitHub.