Native LlamaIndex Integration

Context retrieval is a mainstay of LLM engineering. This latest integration brings Langfuse observability to LlamaIndex applications for simple tracing, monitoring, and evaluation of RAG pipelines.

We’re launching our latest major integration with the LlamaIndex framework. LlamaIndex is a darling of our community, and this integration has been our users’ most requested feature for a while. We're thrilled to publicly release the integration after getting great feedback from our initial beta users. Thanks again to everyone who contributed to this.


Based on the LlamaIndex Cookbook.

🦙 LlamaIndex, RAG and 🪢 Langfuse

LlamaIndex is a framework for augmenting LLMs with private data. The framework is extremely popular with developers (it is approaching 30k stars on GitHub) and is used across the space, from hobbyists to enterprises.

LLMs are trained on huge datasets. Popular models such as OpenAI's GPT, Anthropic's Claude, or Mistral are general purpose in nature: they are not trained for a specific use case or context but to provide answers to any input. Retrieval-augmented generation (RAG) steers large language models back toward a specific context. By adding private data sources and domain-specific context to LLM apps, RAG can significantly enhance the quality of an LLM’s responses.

RAG has proven to be a pragmatic and efficient way of working with LLMs. It is particularly suited to builders on the application layer who are tackling problems in a specific domain or context. It is already a widely used technique in our users’ projects, and we’re thrilled to more natively support its most popular framework.

LlamaIndex joins our roster of major integrations alongside OpenAI and the LLM framework LangChain (see the full list of integrations). We are committed to supporting the most popular ways developers build on top of LLMs, and open-source application frameworks in particular.

Thanks again to the team at LlamaIndex for their guidance, which helped make this integration feature-rich and stable in record time.

Integrating Langfuse with LlamaIndex

Langfuse integrates with LlamaIndex via LlamaIndex's global callback manager. This makes setup as easy as adding the following few lines to your LlamaIndex app:

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager
from langfuse.llama_index import LlamaIndexCallbackHandler
 
# Configure the handler with your Langfuse project keys
langfuse_callback_handler = LlamaIndexCallbackHandler(
    public_key="pk-lf-...",
    secret_key="sk-lf-...",
    host="https://cloud.langfuse.com"
)
 
# Register Langfuse with LlamaIndex's global callback manager
Settings.callback_manager = CallbackManager([langfuse_callback_handler])

That’s it. Now your LlamaIndex app will automatically send detailed traces to Langfuse.
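With the callback manager registered, regular LlamaIndex usage is traced automatically. As a minimal sketch (assuming a local data/ folder with a few documents; the query string is illustrative), building an index and running a query now produces a trace in Langfuse without any further code:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
 
# Load documents and build a vector index (assumes a local `data/` folder)
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
 
# Retrieval and LLM calls for this query are captured as a single trace
query_engine = index.as_query_engine()
response = query_engine.query("What does the document say about RAG?")
print(response)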

While easy to set up, the integration makes extensive use of Langfuse Tracing to capture sessions, users, tags, versions, and additional metadata. It is also fully interoperable with the Langfuse Python SDK.
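For example, trace attributes can be set on the handler itself. A short sketch, assuming the langfuse_callback_handler from the setup snippet above (the identifier values are illustrative):

# Attach user, session, and tags to subsequent traces
langfuse_callback_handler.set_trace_params(
    user_id="user-123",        # illustrative user identifier
    session_id="session-abc",  # groups related traces into one session
    tags=["production"],       # labels for filtering traces in the UI
)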

From there, users can naturally layer on other Langfuse features such as evaluations, datasets, or prompt management.
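For instance, traces created by the integration can be scored via the Python SDK, e.g. to record user feedback. A sketch, assuming your project keys are set as environment variables and your app has captured a trace ID (the placeholder value below is illustrative):

from langfuse import Langfuse
 
# Reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the environment
langfuse = Langfuse()
 
# Attach a score to an existing trace, e.g. from end-user feedback
langfuse.score(
    trace_id="trace-id-from-your-app",  # illustrative placeholder
    name="user-feedback",
    value=1,
)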

Dive In

Head to the Langfuse Docs or see an example integration in this end-to-end cookbook to dive straight in.
