Warehouse every AI request to your database. Query logs to analyze usage and costs, evaluate models, and generate datasets. Install the proxy with 2 lines of code. It’s free to get started, and you own your data.
Hey Product Hunt 👋 Emma and Chris here from Velvet. We built an AI gateway to warehouse OpenAI and Anthropic requests to your database. Engineers use Velvet logs to analyze usage and costs, evaluate models, and generate datasets.
Our first product was an AI SQL editor. It worked really well, but we had limited insights into what happened between our app and OpenAI. We started storing requests to our database, giving us the ability to query data directly. It gave us so much more control over developing our AI features.
We didn’t think of it as a product until one of our customers (Find AI) asked to use it. We warehoused over 3 million requests for them in the first week, logging up to 1,500 requests per second during their launch. Now we have many startups using Velvet daily for caching, evaluations, and analysis of opaque endpoints like the Batch API.
It's easy to get started! Once set up, we warehouse every request to your database.
Requests are formatted as JSON. You can include custom metadata in the request headers, like user ID, org ID, model ID, and version ID, so you can run complex SQL queries unique to your app: granularly calculate costs, evaluate new models, or identify datasets for fine-tuning.
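As a rough sketch of what a proxied request with custom metadata headers might look like: the gateway URL and the `velvet-*` header names below are assumptions for illustration, not values from Velvet's docs.

```python
import json
import urllib.request

# A standard OpenAI-style chat completion payload.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Build (but don't send) a request routed through a hypothetical gateway URL,
# with made-up metadata headers for later SQL filtering.
req = urllib.request.Request(
    "https://gateway.usevelvet.com/v1/chat/completions",  # assumed URL
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
        "velvet-user-id": "user_123",  # assumed header name
        "velvet-org-id": "org_456",    # assumed header name
    },
)
# In a real app you'd now call urllib.request.urlopen(req);
# note urllib capitalizes header names when storing them.
```

The only change from a direct OpenAI call is the base URL plus the extra headers, which is what makes a two-line integration plausible.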
We'd love to hear from you! Email team@usevelvet.com with any feedback or questions.
@emmalawler24 Congratulations on the launch! 🎉 I’m curious, how does Velvet handle data security and compliance with such high request volumes? Excited to see how it evolves!
Hey @devindsgbyq - thanks! For teams with large volumes of data in production, we warehouse requests to their own PostgreSQL instance. This resolves most security and compliance concerns since the company maintains full control of its data. Our app and infrastructure are built with Cloudflare, Supabase, Neon, Vercel, and OpenAI. All are SOC 2 Type II compliant.
Congratulations on launch! Our company Find AI has been an early customer of the AI gateway, and it's been a game-changer for our engineering team to interact with OpenAI.
Here are some of the things we do with the millions of requests we warehouse weekly with Velvet:
- Cost analysis: What's AI spend per user, per search, for logged out users, and in prod vs. development?
- Debugging: When a query does something weird, trace it back to the OpenAI call. We traced one query that occasionally returned gibberish, and realized that the temperature was just too high.
- Caching: we use this particularly on smoke tests, so we still query OpenAI when our prompt, model, etc. changes (to verify expected behavior), but skip paying when nothing has changed
- Testing model upgrades: This week we've been exporting requests made with gpt-4o, then re-running the exact queries against gpt-4o-mini. To our surprise, a couple of our classification queries had identical accuracy on the mini model, so we could switch immediately and save money.
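The cost-analysis queries described above are plain SQL once requests are in a database. Here's a minimal sketch using SQLite and an invented schema (`user_id`, `env`, `cost_usd` are illustrative column names, not Velvet's actual schema):

```python
import sqlite3

# Toy stand-in for a warehoused request log.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE requests (user_id TEXT, env TEXT, cost_usd REAL)")
con.executemany(
    "INSERT INTO requests VALUES (?, ?, ?)",
    [
        ("u1", "prod", 0.004),
        ("u1", "prod", 0.002),
        ("u2", "dev", 0.010),
    ],
)

# Per-user spend in production, the kind of breakdown described above.
rows = con.execute(
    """
    SELECT user_id, ROUND(SUM(cost_usd), 4) AS spend
    FROM requests
    WHERE env = 'prod'
    GROUP BY user_id
    ORDER BY spend DESC
    """
).fetchall()
print(rows)  # [('u1', 0.006)]
```

Against a real PostgreSQL instance the same GROUP BY pattern applies, with metadata headers (user ID, environment) supplying the columns to filter and group on.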
@philipithomas - thanks! Find AI has been a pivotal early design partner. Their engineering team uses Velvet daily, providing feedback on exactly what they need implemented to run and optimize a high-usage AI application. It's inspiring to see the Find AI product get better every day!
Congrats on the launch, and thank you! We are happy users of Velvet. It's not only very simple to set up and use, but also highly reliable and very well designed. Kudos team 🙌
@mehdidjabri - it's been great to see the revo.pm product evolve and scale. Excited to be part of your journey as you automate more product management workflows.
Awesome to see Velvet’s innovative approach to warehousing AI requests! 🎉 The ability to analyze usage and costs while owning your data is a game-changer for developers. The integration seems seamless with just two lines of code! Excited to see how this evolves and brings more insights to the AI space. Shoutout to @emmalawler24 and Chris for building something so valuable for startups! Definitely going to check out the sandbox demo!
I'm usually working with AI companies that use LLMs to improve their models; I'll tell them to check out Velvet. Seems like a useful tool for bringing transparency to the workflow of engineers. Congrats on the launch @emmalawler24 🤠
I see three killer features:
1. Integration in 2 lines means I can try it w/o much work
2. Hosting my own DB (obviously important)
3. Caching out of the box
The third feature is crazy important for nearly everyone building with LLMs. I can't think of a reason not to try Velvet - congrats on the launch, really excited to see where this goes
@glen_dsouza_1 - can you share what country you're accessing the website from? We're focusing on a small group of countries while the product is in beta, so we may need to expand coverage for you.
Congrats on the launch! Making our LLM requests actually queryable has been on our wishlist and isn't quite satisfied by logging/observability providers we know about. Excited to test it out 🔥
Congrats, Emma and Chris, on launching Velvet! 🚀 This AI gateway sounds like a game-changer for developers looking to optimize and analyze their usage of OpenAI and Anthropic. The ability to warehouse requests and query logs will definitely open new insights. Can't wait to see how it evolves! Upvoted!
Hey Emma and Chris,
Does using Velvet as a proxy introduce any noticeable latency?
For companies concerned about data security, what measures are in place to ensure the safety of the stored requests?
Congrats on the launch!
@kyrylosilin - thanks! Velvet’s proxy latency is nominal. And with our caching feature enabled, we can improve response times by more than 50%. You can read an article about our latency benchmarks here - www.usevelvet.com/articles/velve...
For products in production at scale, we warehouse requests directly to the team's PostgreSQL instance. This resolves most data security concerns since the company maintains full control of their data.
This is a game changer, Emma and Chris! Velvet simplifies AI request management significantly. The insights from warehousing those requests will empower developers to optimize their models and costs effectively. Excited to see where this goes! 👏
Velvet
Try a sandbox demo → usevelvet.com/sandbox (no signup needed)
Read the docs → docs.usevelvet.com
Get started with two lines of code 🧑🏻‍💻
Watch a demo video to learn more.