Subcategory · AI Citation Index
Who AI is citing in AI Observability
Tracing, evals, and monitoring for LLM applications.
6 brands tracked · refreshed Apr 26, 2026
Brands to know
In this category
LangSmith
LangChain-native tracing and evaluation platform
Read brand profile →
Langfuse
Open-source LLM observability and prompt management
Read brand profile →
Arize AI
ML monitoring extended to LLM applications
Read brand profile →
Weights & Biases
Experiment tracking for model training and evals
Read brand profile →
Braintrust
Eval-first observability for prompt engineering
Read brand profile →
FAQ
AI Observability questions, answered
What is AI observability?
AI observability tools trace requests through LLM applications, surface token usage and latency, and run evals to catch regressions. Teams use them to debug hallucinations, optimize prompts, and monitor production inference costs across models.
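To make the idea concrete, here is a minimal sketch of the core mechanic these platforms share: wrapping an LLM call so each request emits a span recording latency and token counts. All names here (`Tracer`, `Span`, the stubbed `call_llm`) are illustrative, not any vendor's actual API; production tools wrap the model SDK client and ship spans to a backend instead of keeping them in memory.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    """One traced LLM call: what observability backends aggregate."""
    name: str
    latency_ms: float = 0.0
    prompt_tokens: int = 0
    completion_tokens: int = 0

@dataclass
class Tracer:
    spans: list = field(default_factory=list)

    def trace(self, name):
        """Decorator that times a call and records its token usage."""
        def wrap(fn):
            def inner(*args, **kwargs):
                start = time.perf_counter()
                result = fn(*args, **kwargs)
                self.spans.append(Span(
                    name=name,
                    latency_ms=(time.perf_counter() - start) * 1000,
                    prompt_tokens=result.get("prompt_tokens", 0),
                    completion_tokens=result.get("completion_tokens", 0),
                ))
                return result
            return inner
        return wrap

tracer = Tracer()

@tracer.trace("summarize")
def call_llm(prompt):
    # Stand-in for a real model call; crude whitespace "tokenizer" for demo only.
    return {"text": "summary...",
            "prompt_tokens": len(prompt.split()),
            "completion_tokens": 3}

call_llm("Summarize the quarterly report for the finance team")
span = tracer.spans[0]
print(span.name, span.latency_ms, span.prompt_tokens, span.completion_tokens)
```

Evals then run over collections of such spans: replay the recorded prompts against a new model or prompt version and diff the outputs to catch regressions before they ship.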
Which vendors are LangChain-affiliated?
LangSmith is built by the LangChain team and integrates natively with LangChain and LangGraph. Other platforms like Langfuse, Helicone, and Braintrust offer integrations but operate independently.
Do traditional ML monitoring platforms support LLMs?
Arize AI and Weights & Biases both started in classical ML monitoring and experiment tracking. They have added LLM-specific tracing, prompt versioning, and eval frameworks to their platforms in the past two years.
Are there open-source options?
Langfuse is open-source and self-hostable, offering tracing, datasets, and prompt management under an MIT license. Helicone also offers a self-hosted variant alongside its cloud service.
Related
More in AI Infrastructure
Want to know if AI cites your brand for AI Observability?
Free audit. ChatGPT, Perplexity, Gemini, Claude.
Run an audit →