Continuously validate LLM-based applications throughout the entire lifecycle, from pre-deployment and internal experimentation to production: catch LLM hallucinations, track performance metrics, and flag potential pitfalls.
Thanks @kevin for hunting our LLM Evaluation solution!
Hey, Product Hunt community!
I am Shir, co-founder and CTO of Deepchecks. At Deepchecks, we've built a pretty special solution for LLM Evaluation and are thrilled to launch it today on Product Hunt!
When we launched our open-source testing package last year, we quickly received an overwhelming response, with over 3K stars and more than 900K downloads! After the launch of our NLP package in June, we noticed that a striking share of the feedback calls about it were really requests for help with evaluating LLM-based apps.
After creating an initial POC and getting feedback from various companies, we gained the confidence we needed to dive deeply into the LLM Evaluation space. And yes, it turns out it's a pretty big deal.
As we began working on the LLM Evaluation module, we arrived at an important learning: teams deploying LLM apps are struggling to answer these questions:
- Is it good? (accuracy, relevance, usefulness, groundedness in context, etc.)
- Is it problematic? (bias, toxicity, PII leakage, straying from company policy, etc.)
- How do we evaluate and compare versions that differ in their prompts, base models, or any other change in the pipeline?
- How do we efficiently build a process for automatically estimating the quality of LLM interactions and annotating them? (see the sketch right after this list)
- How do we manage the deployment lifecycle, from experimentation/development through staging/beta testing to production?
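To make the "automatically estimating quality" point concrete, here's a minimal, self-contained sketch of the kind of naive per-interaction checks such a process might run before a human reviews the annotations. The heuristics and field names are illustrative assumptions, not Deepchecks' actual implementation:

```python
# A minimal, illustrative sketch of automatic per-interaction quality checks.
# The heuristics and field names are assumptions for illustration only,
# NOT Deepchecks' actual implementation.
import re

def grounded_in_context(answer: str, context: str, threshold: float = 0.5) -> bool:
    """Rough groundedness proxy: the share of answer words found in the context."""
    answer_words = set(re.findall(r"\w+", answer.lower()))
    context_words = set(re.findall(r"\w+", context.lower()))
    if not answer_words:
        return False
    return len(answer_words & context_words) / len(answer_words) >= threshold

def leaks_pii(text: str) -> bool:
    """Toy PII check: flags email addresses and long digit runs (phone-like)."""
    return bool(re.search(r"[\w.+-]+@[\w-]+\.\w+", text) or re.search(r"\b\d{9,}\b", text))

def annotate(interaction: dict) -> dict:
    """Attach automatic annotations to one logged LLM interaction."""
    return {
        "grounded": grounded_in_context(interaction["output"], interaction["context"]),
        "pii_leak": leaks_pii(interaction["output"]),
    }

sample = {
    "input": "What is the refund window?",
    "context": "Refunds are accepted within 30 days of purchase.",
    "output": "You can get a refund within 30 days of purchase.",
}
print(annotate(sample))  # {'grounded': True, 'pii_leak': False}
```

In practice you'd swap these toy heuristics for model-based scorers, but the shape of the process (log, auto-annotate, then human review) stays the same.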
The Deepchecks LLM Evaluation solution helps you:
- Simply and clearly assess "How good is your LLM application?"
- Track and compare different combinations of prompts, models, and code.
- Gain direct visibility into how your LLM-based application is functioning.
- Reduce risk when deploying LLM-based applications.
- Simplify compliance with AI-related policies and regulations.
We're also hosting a launch event today at 8:30 AM PST; feel free to sign up to interact with the Deepchecks team and see a live demo: https://www.linkedin.com/events/...
Apply for Deepchecks LLM Evaluation access:
https://deepchecks.com/solutions...
Would appreciate any questions, and hope to see you there!
An innovative approach to evaluating language models. The detailed insights it provides are invaluable for improving model performance. Congrats on the launch!
I've been experimenting with LLM evaluation metrics on my own for a while now. This is a pretty good solution, will definitely try it out. How do you imagine the future of CI/CD for LLM applications?
@sakameister great question; this has been an open question for testing classic ML as well. I can imagine a process kind of like GitHub Actions that runs suites of tests, where some of the tests may need to verify that certain manual annotations happened. A rough sketch of such a gate is below.
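To illustrate, here's a minimal sketch of what such a gate could look like: score the candidate version on a fixed evaluation set, then block the deploy if any metric regresses below baseline or if too few sampled interactions were manually annotated. Metric names, scores, and thresholds are all hypothetical:

```python
# A hypothetical CI gate for an LLM app, in the spirit of a GitHub Actions
# step. All metric names, scores, and thresholds are made up for illustration;
# this is not an existing Deepchecks or GitHub Actions API.
import sys

BASELINE = {"groundedness": 0.90, "relevance": 0.85}
MIN_ANNOTATED = 0.80  # require 80% of sampled interactions to be manually annotated

def run_eval_suite(version: str) -> dict:
    """Placeholder: a real pipeline would score the candidate version against
    a fixed evaluation set; values are hardcoded so the sketch runs."""
    return {"groundedness": 0.91, "relevance": 0.88, "annotated_fraction": 0.75}

def main() -> int:
    scores = run_eval_suite("candidate")
    failures = [metric for metric, baseline in BASELINE.items() if scores[metric] < baseline]
    if scores["annotated_fraction"] < MIN_ANNOTATED:
        failures.append("annotated_fraction")  # the manual-annotation gate mentioned above
    if failures:
        print(f"Blocking deploy; failed checks: {failures}")
        return 1
    print("All evaluation gates passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```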
@matan_mishan Thanks for your question. Indeed, this is one of the most popular use cases :-) Question answering, customer support, etc. We enable logging the various steps in the interaction (e.g. the input, the information-retrieval step, the output, etc.), and common issues we find are things like: the output wasn't based on the retrieved information (a.k.a. an indication of hallucination), or the retrieved info isn't relevant to the question asked. A rough sketch of such step-by-step logging is below.
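For readers curious what logging those interaction steps might look like, here's a rough sketch under assumed field names (not the actual Deepchecks schema):

```python
# A rough sketch of logging the steps of a question-answering interaction.
# Field names and file format are illustrative assumptions, not the actual
# Deepchecks logging schema.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class InteractionLog:
    user_input: str
    retrieved_context: list[str] = field(default_factory=list)
    output: str = ""
    timestamp: float = field(default_factory=time.time)

def log_interaction(record: InteractionLog, path: str = "interactions.jsonl") -> None:
    """Append one interaction as a JSON line; a later evaluation pass can check,
    e.g., whether `output` is grounded in `retrieved_context` (hallucination)
    or whether the retrieved context is relevant to `user_input`."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_interaction(InteractionLog(
    user_input="How do I reset my password?",
    retrieved_context=["Passwords can be reset from Settings > Security."],
    output="Go to Settings > Security and choose 'Reset password'.",
))
```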
Congratulations on launching the Deepchecks LLM assessment! This is an incredible achievement and a testament to your team's dedication to the field. I can see how this will be a game-changer for many projects. Keep up the great work!