
Langfuse - Open source tracing and analytics for LLM applications | Product Hunt

source link: https://www.producthunt.com/posts/langfuse


"We appreciate all your engagement & kind words. Looking forward to engaging with you all & hearing your thoughts on Langfuse.

If you want to give Langfuse a try, the easiest way is via our demo: https://langfuse.com/docs/demo"

The makers of Langfuse

@mwseibel - thank you for the hunt!

Hi Product Hunt community 👋👋👋

I’m Clemens, co-founder of Langfuse. Together with @marc_klingen and @max_deichmann, we're proud to launch our open source observability and analytics platform for LLM apps.

Getting to an MVP using LLMs is lightning fast. But productionizing an app and going from 80% to 100% is hard work. Langfuse is built to support makers on this journey with insights into their production data. We’ve experienced this ourselves as makers and built the product we wanted for ourselves.

We rethink observability & analytics from the ground up for LLM applications. Langfuse can visualize and analyze even complex LLM executions such as agents, nested chains, embedding retrieval, and tool usage.

📊 Log traces: Integrate via our SDKs (Python, JS/TS), the API, or 🦜🔗 Langchain to asynchronously log production traces and attach metadata.
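For illustration, here is a minimal sketch of what logging a trace can look like with the Python SDK. The names and signatures shown (Langfuse, trace, generation, end, flush) reflect one SDK version and are assumptions here; check the docs for your version:

```python
from langfuse import Langfuse

# API keys are created per project in the Langfuse UI (placeholder values)
langfuse = Langfuse(public_key="pk-lf-...", secret_key="sk-lf-...")

# One trace per end-to-end request, with arbitrary metadata attached
trace = langfuse.trace(name="summarize-article", metadata={"env": "production"})

# Log a single LLM call as a generation nested under the trace
generation = trace.generation(
    name="llm-call",
    model="gpt-3.5-turbo",
    input=[{"role": "user", "content": "Summarize this article: ..."}],
)
generation.end(output="The article argues that ...")

# Events are batched and sent asynchronously; flush before the process exits
langfuse.flush()
```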

🔍 Inspect & Debug: Use our visual UI to inspect and debug production traces.

💯 Score: Attach scores to traces through user feedback or manual scoring in our UI. Filter for high- and low-quality traces to improve your application quickly.
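As a sketch of how a thumbs-up/down from your own UI could be attached programmatically (the score method and its parameters are assumptions based on one version of the Python SDK):

```python
from langfuse import Langfuse

langfuse = Langfuse(public_key="pk-lf-...", secret_key="sk-lf-...")

# Suppose the user rated the response that a given trace produced.
# The trace id would come from the request that logged the trace.
langfuse.score(
    trace_id="trace-id-from-request",
    name="user-feedback",
    value=1,  # e.g. 1 = thumbs-up, 0 = thumbs-down
    comment="Helpful and accurate answer",
)

langfuse.flush()
```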

🔧 Dashboards
Cost: Track token counts and compute costs. Break down by user, feature, session, etc. to understand your unit economics.
Latency: Measure total latency and its constituents to improve UX.
Quality: Monitor output quality over time, users, and features to ensure your users see value.
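The per-user and per-session breakdowns depend on identifiers attached at logging time. A minimal sketch with the Python SDK (the user_id and session_id parameters and the metadata key are illustrative assumptions):

```python
from langfuse import Langfuse

langfuse = Langfuse(public_key="pk-lf-...", secret_key="sk-lf-...")

# User and session identifiers let the dashboards slice cost, latency,
# and quality per user, per session, and per feature.
trace = langfuse.trace(
    name="chat-completion",
    user_id="user-123",
    session_id="session-abc",
    metadata={"feature": "chat"},
)
```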

🏗 Open Source: We’re open source, model-agnostic, and our fully async SDKs can integrate with any LLM.

🏃 Get Started: We’re excited to hear your thoughts & feedback in the comments. To see Langfuse in action today, give our interactive demo a spin at: https://langfuse.com/docs/demo

Offer: Start using Langfuse Cloud for free today and track unlimited events (up to 1GB storage) in our free tier. You can find self-hosting instructions in our docs: https://langfuse.com/docs/deploy...

So excited to see Langfuse go live. We've been a happy user for 4 weeks now, and it offers the most detailed latency and analytics tooling on the market. Highly recommended for anyone working with complex chains or user-facing chat applications, where latency becomes crucial. Congrats, Clemens, Marc, and Max!
@david_paffenholz Thank you, David. It's been great fun to work with you and Ishan. You've built an amazing product at juicebox.work and we're super proud to be able to support you.
This looks like an absolutely brilliant idea for tracking and staying informed about the performance of LLM applications. I am in the process of building one and will certainly give it a try to measure my LLM app.
Feel free to reach out @ishwarjha once you integrate Langfuse with your project. Happy to help you make the most out of it.
This is something we were looking for, and it will be very useful for allowing our users and ourselves to track spend and compute in our upcoming AI product. Congrats Clemens and team!
@mehdidjabri Thank you, Mehdi. Appreciate it & looking forward to working together. Can't wait to see how we can help @iteration_x
