Blake Peeling

Banana - Serverless GPUs for Machine Learning inference


Banana provides inference hosting for ML models in three easy steps and a single line of code.

Stop paying for idle GPU time and deploy models to production instantly with our serverless GPU infrastructure.
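For a sense of what that looks like in practice, here is a minimal, hypothetical sketch of calling a deployed model from Python; the package name, function signature, and keys are illustrative assumptions, not confirmed from this page:

```python
# Hypothetical usage sketch: run inference against a model hosted on
# serverless GPUs. Names and signature are assumptions for illustration.
import banana_dev as banana

api_key = "YOUR_API_KEY"      # account credential (placeholder)
model_key = "YOUR_MODEL_KEY"  # identifies the deployed model (placeholder)
model_inputs = {"prompt": "A banana-themed haiku"}

# The "single line of code": GPUs spin up on demand behind this call.
out = banana.run(api_key, model_key, model_inputs)
print(out)
```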

Use Banana for scale. 🍌

Blake Peeling
Hi all 👋, makers Kyle + Erik + Sahil + Blake + Candice here! We're excited for you to try out Banana. Banana is an ML inference hosting platform on serverless GPUs.

Why did we build Banana? We used to operate an ML consulting firm that helped companies build & host models in production. Through this work we realized how difficult it was to get models deployed, and how incredibly expensive it was. Customer models had to run on a fleet of always-on GPUs that would get maybe 10% utilization, which felt like a really big money pit and a waste of compute.

During our time as consultants we built really efficient deployment infrastructure underneath us. Six months ago we pivoted to focus solely on productizing our deployment infra into a hosting platform for ML teams, one that would remove the pain of deployment and reduce the cost of hosting models.

That brings us to today. Banana provides inference hosting for ML models in three easy steps and a single line of code. We are 100% self-serve, meaning users can sign up and deploy to production without ever talking to our 🍌 team. And thanks to running on serverless GPUs, customers see their compute costs reduced by upwards of 90%. Try it out and let us know what you think!
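To put that utilization math in numbers, a rough back-of-envelope sketch; the hourly rate is a made-up placeholder, and only the 10% utilization and ~90% savings figures come from the comment above:

```python
# Illustrative cost comparison: always-on GPU vs. pay-per-use serverless.
# The $1.00/hr rate is a placeholder; 10% utilization is from the comment.
hourly_rate = 1.00                                    # $/hr, always-on GPU
utilization = 0.10                                    # fraction doing real work

always_on_monthly = hourly_rate * 24 * 30             # billed for every hour
serverless_monthly = always_on_monthly * utilization  # billed only when busy

savings = 1 - serverless_monthly / always_on_monthly
print(f"${always_on_monthly:.0f}/mo vs ${serverless_monthly:.0f}/mo -> {savings:.0%} saved")
# -> $720/mo vs $72/mo -> 90% saved
```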
Erik Dunteman
@bp_banana Hi friends :) Looking forward to a great launch day. Let us know any questions you may have.
Abishek Muthian
@bp_banana Congratulations on the launch! Banana could be addressing the need-gap 'Democratisation of AI/ML/DL hardware', posted on my problem-validation forum: https://needgap.com/problems/50-... You're welcome to explain how Banana addresses that problem there, so those who need it can find Banana easily.
Morgan Gallant
Been using Banana in production for a good bit now, nothing but great things to share! A few notes:

- Product is insanely good. Specifically, we use it for indexing jobs requiring a good bit of GPU compute. These jobs are huge, sometimes involving up to 1M inferences of a large NL model. Banana is perfect for this use case, as we can burst up to 10+ GPUs, only pay for the compute we use, and quickly scale back down to near zero.
- Team is very strong, super responsive to questions, and they're experts at deploying & scaling ML models. We often get advice and recommendations from their team on how to best do something, and it's been really appreciated!
- Lastly, velocity / speed of iteration has been ridiculous. They're moving really quickly, have an ambitious roadmap, and ship new features and improvements daily. It's been really cool to watch.

Would highly recommend anyone check them out!
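For illustration, a minimal sketch of the burst pattern described above; `run_inference` is a hypothetical stand-in for the hosted-model call, and the corpus here is a tiny placeholder for the ~1M-item job:

```python
# Sketch of the burst pattern: fan out a large indexing job and let the
# serverless backend scale GPUs up, then back down to near zero.
from concurrent.futures import ThreadPoolExecutor

def run_inference(doc: str) -> list[float]:
    # Placeholder: in practice this would call the deployed model's endpoint.
    return [float(len(doc))]

docs = [f"document {i}" for i in range(1_000)]  # stand-in corpus
with ThreadPoolExecutor(max_workers=32) as pool:
    embeddings = list(pool.map(run_inference, docs))
print(len(embeddings), "embeddings computed")
```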
Kyle Morris
@gallantlabs Thank you for your kind words & support, Morgan! Fantastic having you as a customer and inspiring seeing your progress as a team! Always a message away :)
Erik Dunteman
@gallantlabs Morgan is an awesome customer! Thanks for the love
Ryan Hoover
Did you invest, @turnernovak?
Erik Dunteman
@turnernovak @rrhoover Not sure he could handle as much potassium as us
Greg Priday
This looks seriously impressive. I'll be trying it out soon. Seems like this will be a huge step up over Google Cloud Run in terms of speed. I see on your roadmap that you're planning on moving to beefier GPUs in the future. Which GPUs are you running now? Also, from a technical perspective, I assume this works by moving models from CPU to GPU at inference time? Trying to wrap my head around how you're getting such fast cold starts.
Kyle Morris
@gregpriday Thanks for the support! The roadmap is now! We used to only do T4 GPUs, but now we also support A100 GPUs, which are yielding faster cold boots + inference + download speeds. To understand how Banana works, it may be easier to think of us as a compiler company: when you send us a model we do stuff under the hood to make it run faster/cheaper. CPU/GPU memory hacks are definitely involved (how we load memory, where, when). A key point is that none of our optimizations affect model outputs. This means we don't do weight quantization or dynamic layer/node pruning, which yield way smaller/faster models but do affect outputs.
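To illustrate the distinction, here is a small PyTorch sketch (a hypothetical example, not Banana's actual internals). Moving weights between CPU and GPU preserves outputs, while dynamic quantization, the kind of optimization Kyle says they avoid, perturbs them:

```python
# Memory-placement choices preserve outputs; quantization does not.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(16, 4)  # stand-in for a real model
x = torch.randn(1, 16)
ref = model(x)                   # reference output on CPU

if torch.cuda.is_available():
    # Moving weights to GPU and back is an output-preserving change.
    gpu_out = model.to("cuda")(x.to("cuda")).cpu()
    assert torch.allclose(ref, gpu_out, atol=1e-5)
    model.cpu()                  # move back before quantizing

# Dynamic int8 quantization shrinks the model but drifts the outputs.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print("max output drift:", (ref - quantized(x)).abs().max().item())
```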
Abhishek Bhargava
Love banana!! super excited to be a user soon :)
Erik Dunteman
@abhishek_bhargava do it, we've got the fastest servers in the whole wild west. any specific models you're looking at?
Abhishek Bhargava
@erikdoingthings mostly large language models! + inference using k-means clustering / indexing & potentially running LSTMs on the embeddings :)
Erik Dunteman
@abhishek_bhargava We like them large! When you do jump into Banana, if you implement in PyTorch you'll get some killer speedups on cold boot.
Blake Peeling
@abhishek_bhargava we can't wait to have you on Banana! thank you for the support :)
Kekayan Nanthakumar
Congratulations on the launch! 🚀 Super happy with the experience so far.
Erik Dunteman
@kekayann It's been awesome working with you the last few weeks :) Love the energy, and appreciate the support here!
Blake Peeling
@kekayann you rock!
Eric Jung
Are there any metrics around the cold boot? Does it depend on the model size, etc? And what does the quota look like on the max number of concurrency, model size, etc?
Erik Dunteman
@eric_j1 Yes! Cold boots vary based on model size, but a GPT-J model (which usually takes 20 minutes to load to GPU) comes live on our platform in 10 seconds. Most customers see 1-5s cold boots right now. We just upped max concurrency! We're provisioned for the average model (8 GB of GPU RAM) to spike to 200x concurrency, but we do have a soft cap of 10 to prevent customers from accidentally overscaling. We can adjust that for anyone who needs more :)
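For scale, a quick back-of-envelope using only the figures quoted in that reply:

```python
# Arithmetic on the quoted numbers; no new figures introduced.
baseline_load_s = 20 * 60   # GPT-J: ~20 minutes to load to GPU normally
cold_boot_s = 10            # quoted cold boot on the platform
print(f"{baseline_load_s / cold_boot_s:.0f}x faster cold boot")  # -> 120x

avg_model_gb = 8            # "average model" GPU RAM
burst_limit = 200           # provisioned concurrency headroom
soft_cap = 10               # default cap against accidental overscaling
print(f"{avg_model_gb} GB models can spike to {burst_limit}x (soft cap {soft_cap})")
```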
Farrukh Jadoon
Can't say enough good things about @erikdoingthings and this rockstar team. Having personally grappled with the problem of GPU provisioning and management for years (and having built an unsuccessful product around it), the way Banana has abstracted it is magical. Can see this becoming the DX standard for ML.
Erik Dunteman
@farrukh_jadoon1 It means a lot coming from a dev tools founder like you :) Really appreciate this
Hélène SAN
Hey, congrats on your launch! 🚀 We will be launching soon too (https://www.producthunt.com/upco...). I hope you'll support us as well. Cheers 😊
Erik Dunteman
@helene_san thanks for the support and best of luck in your launch! We'll see you then
Harsh Siriah
This looks awesome! Congratulations on the launch! 🚀
elena silenok
Can’t wait to explore!
Blake Peeling
@silenok thanks Elena!
Amit Mahapatra
Great product and a great team! Congratulations on the launch :)
Blake Peeling
@amit_mahapatra1 thank you very much Amit! :)
John Dagdelen
Banana is pretty great! I have been using banana.dev to run the backend ML processes for my AI-generated stock photo site, infinitestockphotos.com. The team has been super helpful and I'm really happy I went with them for serving my ML models. They also sent a frankly shocking number of bananas to my house, haha.
Erik Dunteman
@jdagdelen Thanks John! It's great working with you. Bananas are a usage perk ;) (house address obtained with customer consent of course)
Pranav Teegavarapu
this is awesome!!
Erik Dunteman
@pranavnt means a lot from the Kobra founder! Give it a whirl
Blake Peeling
@pranavnt thanks! we appreciate the <3
David Banys
On-demand + autoscaling + minimal cold start + GPUs?? The dream!
Erik Dunteman
@david_banys all those things :) thanks David
Joe Speiser
This rocks, sharing with all my devs friends now!
Sahil Chaudhary
@joe_speiser1 Thanks for the share!
Arielle Lok
this is poggers
Derek Pankaew
Wish this existed when we were doing GPU-heavy stuff!
Erik Dunteman
@derekpankaew yes, I'm sure this would have been solid for the YOLO models you were running at Next Fitness! With models that small, they would have scaled up in a matter of a couple seconds too :)
Nader Khalil
This is awesome!!! Serverless GPUs make so much sense
Blake Peeling
@naderlikeladder heck yea! thanks Nader :)