Openlayer
Test, fix, and improve your ML models
Michael Seibel
Openlayer — Slack or email alerts for when your AI fails
Openlayer provides observability, evaluation, and versioning tools for LLMs and machine learning products.
Replies
Vikas Nair
@mwseibel, thank you for the hunt! Hi Product Hunt 👋👋👋

We are Vikas, Gabe, and Rish from Openlayer. We're building a next-gen monitoring and eval platform that allows developers to keep track of how their AI is performing beyond token cost and latency.

Why? There is no comprehensive toolkit for AI reliability that gives developers insight into how and why their AI fails. Traditional software has a myriad of standardized tests and monitors, but ML development has lagged behind, often hinging on intuition and post-release user feedback.

What does Openlayer do?

🧠 Intelligent testing: You can set up all sorts of tests in Openlayer, ranging from data quality and drift checks to model performance expectations using rule-based metrics and GPT evaluation.

🔍 Log and debug: Log your production requests, and run your tests on them to make sure they're performing for your users as expected.

🚨 Alerts: Get proactive notifications in Slack and email when something's amiss, plus detailed reports that let you drill down into exactly what went wrong.

💼 For pre-release and production: Whether your model is in development or shipped, our platform stands watch, offering continuous analysis.

🚀 Seamless integration: Deploy our monitors with just a few lines of code and watch as Openlayer becomes the guardrail for your AI models. It's developer-first, hassle-free, and provides real-time insights.

🌏 Beyond LLMs: While we have specialized tools for language models, Openlayer's scope includes a wide range of AI tasks. Tabular data, classification, regression models – you name it, we can test it.

📊 Subpopulation analysis: Understand how your model interacts with different data demographics. Pinpoint where your AI excels and where improvement is needed.

🔒 Security-conscious: We're SOC 2 Type 2 compliant, with on-premise hosting options for those with stringent security requirements.

Using this toolset, you can set up all sorts of guardrails on your AI. For example, you might be building an application that relies on an LLM and want to guarantee the output doesn't contain PII or profanity. You can define a test for this in Openlayer (see the sketch below). Or, you might be developing a classification model that predicts fraud on your platform and want to guard against data drift over time. You can do this with Openlayer too.

Product Hunt offer: For those willing to give feedback on their experience using the product, we're giving out free (limited edition 😉) swag. Send an email to founders@openlayer.com after trying out the product to schedule a call!

We're eager for you to explore Openlayer and shape its evolution. Join for free and start testing your models and data!

◾ Sign up for free: https://app.openlayer.com/
◾ Join our community: https://discord.gg/t6wS2g6MMB
◾ Curious about diving deeper into Openlayer? Reach out to us at founders@openlayer.com to request an upgrade to your workspace.
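To make the PII guardrail example concrete, here is a minimal sketch of the kind of assertion such a test encodes. This is generic Python with illustrative regex patterns, not Openlayer's actual SDK or test API:

```python
import re

# Illustrative patterns only -- a real PII test would use a vetted detector.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def contains_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in an LLM output."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def pii_test(outputs: list[str]) -> bool:
    """Pass only if no output leaks PII -- the guardrail's assertion."""
    failures = [(i, hits) for i, out in enumerate(outputs) if (hits := contains_pii(out))]
    for i, hits in failures:
        print(f"output {i} flagged for: {', '.join(hits)}")
    return not failures

# Example: run the guardrail over a batch of logged LLM responses.
assert pii_test(["The capital of France is Paris."])
```

A production-grade test would swap the hand-rolled regexes for a vetted PII detector, but the pass/fail shape is the same: a predicate over logged model outputs that can gate releases or trigger alerts.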
Ed Roman
Congratulations! We're very grateful to be investors in Openlayer. They're solving critical challenges in monitoring and evaluating AI, and a big congratulations to them on their launch!
Saroj
@vikasnair: Congrats on the launch, team! The product looks amazing.
Iuliia Shnai
Congrats on the launch! Just did the onboarding roast of Openlayer; looks like a really essential service for developers working with LLMs :)
Sienna Adams
Joined the community! Excited to connect with other users. Any plans for webinars or community events to share best practices and use cases?
Vikas Nair
@siennaadamss_ Hey Sienna, we definitely plan to start hosting more events online and in-person in the coming months. Stay tuned on Discord, and thanks for joining the community!
Naveed Rehman
Haha, after the last few surprise downtimes, this product is a super must! And I can see it's got integration going all the way down to the core LLM level. Congratulations 🎊
Germán Merlo
Amazing job, Vikas! Everyone knows that testing and observability for LLM applications can be overwhelming - however, your creative solution makes it so much simpler. Well done!
Alex Pall
Congrats, Openlayer team! Everyone at Mantis, myself included, has been anxiously awaiting this day, when the world can see all the hard work you guys have been putting into building something truly beneficial for the AI/ML space!
Emlyn Thompson
Congratulations on the launch! Proud to be a part of your journey!
Vikas Nair
@emlyn_thompson Thanks Emlyn, happy to have you onboard!
Matt Williamson
Congrats Vikas!
Ruslan Pavlovich
Would be keen to try this for computer vision models
Vikas Nair
@ruslan_ponomarev We don't support CV at the moment, but definitely join our Discord for more updates on this in the new year! https://discord.gg/t6wS2g6MMB
Champa ghosh
Very good project.
Justin Woodbridge
Been using this since JS support landed. Super helpful during dev, and now for monitoring since we've launched our v1.
Vikas Nair
@justin_woodbridge Thanks Justin! Awesome to see you guys iterating so fast on your models
Chris Okorodudu
how do you handle data security for larger enterprises ?
Vikas Nair
@chris_okorodudu You can deploy Openlayer on-premise on your internal infrastructure, or host it on AWS/GCP/Azure or another cloud provider, so the container is air-gapped and data never leaves your servers. We're also SOC 2 compliant. We've built this to be secure from the ground up: we care a lot about data security, and many of our clients are enterprises with strict security requirements!
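For a rough picture of what an air-gapped, self-hosted deployment of this kind can look like, here is a hypothetical docker-compose sketch. The image name, port, and environment variables are placeholders, not Openlayer's actual on-prem distribution:

```yaml
# Hypothetical compose file -- image, port, and env vars are placeholders.
services:
  openlayer:
    image: your-registry.internal/openlayer:latest   # mirrored into your own registry
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://openlayer:${DB_PASSWORD}@db:5432/openlayer
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: openlayer
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data   # data stays on your own servers
volumes:
  pgdata:
```

The key property is that every container and volume lives inside your network boundary, so no model inputs or outputs transit a third-party service.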
Cool!
Ajay Mauli
Congrats to Openlayer and the team
Hugo Delattre
Congratulations on the launch! 😁
Shahmir Faisal
Congrats on the launch Vikas!
Eddie Forson
Congrats on the launch, Vikas and team! I haven't looked much into LLM monitoring tools, but it's something on my radar for the new year. I hear the term "eval" a lot these days, but how do we concretely do this if we're not an ML engineer or researcher? It would be great if you had some tutorials on how to do evals for the non-initiated.
Vikas Nair
@ed_forson Hey, thanks! We offer a ton of really useful tutorials that anybody can follow, even if you're not an ML engineer. You can check them out at https://docs.openlayer.com/docum...

The gist is that evals are tests that you run on your models and data. It's important to stress-test your AI to make sure the data it's trained on is clean, high-quality, and up-to-date. It's also important to set guardrails on the model's behavior, so it performs up to standard both while you're iterating during development and after you ship to users.

We make it super easy for anybody to set up tests, whether you're an ML engineer or you serve a less technical role in a team or company that leverages AI. You can use our simple UI to create all sorts of powerful tests that make sure your AI is highly performant and help you understand where it fails. After you create these tests in the UI, you can also connect Slack or email so your team gets notified whenever they fail (see the sketch below). As your developers iterate on the problems, they can keep track of the versions and see how they improve on their tests, all through our platform.

Check out the docs for more, and feel free to join our Discord and reach out to us or our community for more info about how to set up quality evals in your LLMOps stack! https://discord.gg/t6wS2g6MMB
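As a concrete taste of what one such eval can look like under the hood, here is a minimal sketch of a data drift test wired to a Slack alert. It uses a plain two-sample Kolmogorov-Smirnov test and Slack's standard incoming-webhook API; the feature values and webhook URL are placeholders, and this is generic Python rather than Openlayer's SDK:

```python
import json
import urllib.request

from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

# Placeholder: your Slack incoming-webhook URL.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def drift_test(reference: list[float], production: list[float], alpha: float = 0.05) -> bool:
    """Pass if the production feature distribution still matches the training reference."""
    statistic, p_value = ks_2samp(reference, production)
    return p_value >= alpha  # a low p-value suggests the distributions differ (drift)

def alert_slack(message: str) -> None:
    """Post a test-failure notification via a Slack incoming webhook."""
    payload = json.dumps({"text": message}).encode()
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request)

# Example: compare a logged production feature against its training distribution.
reference = [0.10, 0.20, 0.15, 0.30, 0.25, 0.20, 0.10, 0.35]
production = [0.80, 0.90, 0.85, 0.95, 0.70, 0.75, 0.90, 0.80]
if not drift_test(reference, production):
    # With a real webhook URL configured, this posts to your Slack channel.
    alert_slack("🚨 Drift test failed: production feature no longer matches training data.")
```

The pattern generalizes: any eval is a predicate over model inputs or outputs, and a failed predicate routes a notification to wherever the team already lives.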