Announcing Langfuse Integration for Inferable
We're excited to announce our native integration with Langfuse, an open-source observability and analytics platform designed specifically for LLM applications. This integration brings powerful monitoring, evaluation, and improvement capabilities to your Inferable deployments.
What is Langfuse?
Langfuse helps teams monitor and evaluate their LLM applications in production. Its detailed tracing and evaluation tools give you deep insight into how your application performs and behaves.
What does this mean for Inferable users?
Inferable users can now enable Langfuse tracing and evaluation in their clusters, gaining deeper insight into the behavior and performance of their AI applications.
Langfuse integration enables detailed telemetry for:
- Token consumption
- Tool calls (request count, latency, errors)
- LLM calls (request count, latency, errors)
- Run config eval scores
...and more
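As a rough sketch, the Langfuse SDKs conventionally read their credentials and host from environment variables. Whether your Inferable cluster picks these up directly, or expects them in its own configuration, is an assumption here; the documentation linked below has the authoritative settings.

```shell
# Standard Langfuse credential variables (from your Langfuse project settings).
# NOTE: it is an assumption that Inferable reads these directly — consult the
# Inferable docs for the exact configuration your cluster expects.
export LANGFUSE_PUBLIC_KEY="pk-lf-..."   # placeholder, not a real key
export LANGFUSE_SECRET_KEY="sk-lf-..."   # placeholder, not a real key
export LANGFUSE_HOST="https://cloud.langfuse.com"  # or your self-hosted URL
```

With credentials in place, traces for token consumption, tool calls, and LLM calls should appear in your Langfuse project as runs execute.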
The Langfuse integration is available now for all Inferable users. To learn more about configuring and using this integration, visit our detailed documentation.