Lora is a local LLM for mobile, with performance comparable to GPT-4o-mini. It offers seamless SDK integration, guarantees complete privacy without data logging, and supports airplane mode. Try our app and build your own Lora-powered app.
👋 Hello, I'm Seunghwan
We're excited to introduce our new product, “Lora”.
☀️ What is Lora?
Lora is a local LLM that runs on your phone. It offers an SDK that lets you build a Lora-powered app with just one line of code.
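To give a rough idea of what that one-line integration could look like in an iOS app, here is a minimal Swift sketch. The module name `Lora` and the `Lora.shared.generate(prompt:)` call are placeholders assumed for illustration only, since the actual SDK interface isn't shown in this post.

```swift
// Illustrative sketch only: the module name `Lora` and the
// `Lora.shared.generate(prompt:)` call are assumed names,
// not the published SDK API.
import Lora

func summarize(_ text: String) async throws -> String {
    // The "one line": ask the on-device model for a completion.
    // No network request is made once the model has been downloaded.
    try await Lora.shared.generate(prompt: "Summarize in one sentence: \(text)")
}
```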
✨ Who Should Use Lora?
🛠️ Developers who have built LLM-based AI apps but struggle with high operating costs
🌍 PMs preparing for global launches but facing obstacles due to data regulations
📡 Anyone who needs to offer AI services where internet connectivity is limited
🔎 Key Features of Lora
✅ Performance comparable to GPT-4o-mini
✅ Simple SDK integration
✅ 100% private, no data saved
✅ Airplane mode support
🔥 Try a Lora-powered AI app for free!
We also launched “Lora”, a private AI assistant chatbot app powered by Lora. It's 100% free to use.
If you're interested in using it for development, please feel free to contact us and we can provide you with the SDK. A free plan is available, so don't hesitate to reach out.
🚀 Thrilled to Launch on Product Hunt!
We’d love your feedback to make Lora a “Wow!” product.
Questions or suggestions? DM me anytime.
Thank you!
@sonu_goswami2 Thank you for your support! We value real user feedback, so if you try it out and share your thoughts, we'd really appreciate it. Thanks again, and have a great day!
@parekh_tanmay Thank you so much for your kind words and support! 😊 Your encouragement truly means a lot to us and gives our team a huge boost on this special day. 🚀 We’ll keep pushing forward—really appreciate you being part of our journey! 🙌
@changhoi Hello! Thank you for your interest. Our demo app offers free and unlimited chat, but if you want to build an app of your own, you can use the Lora SDK, which comes in free and paid tiers. Every development tool is available in the free tier, but if you want to run your on-device LLM service without limits and with a performance analytics dashboard, a paid plan is the best option!
@changhoi Lora is free to use, with a maximum of 20,000 tokens per day; paid usage is charged to companies that develop AI services using our SDK 👷🏻♂️👷🏻♀️
@changhoi Glad you’re excited about it! 🚀
The free tier includes:
• Single Request Limit: 2,000 tokens
• Daily Request Limit: 20,000 tokens
• Basic support
With the paid version, you get:
✅ Unlimited Lora LLM Access – No token limits, so you can generate as much as you need.
📊 Performance Analytics Dashboard – Track and analyze AI performance across different devices.
🔧 Technical Support – Get direct assistance for development and implementation.
In short, the paid version provides unlimited token usage, device-specific performance insights, and dedicated technical support to optimize your experience!
Privacy and data protection are my top concerns when using AI tools. I appreciate Lora's 100% private, no-data-saved approach, and I'm curious: how do you ensure 100% privacy?
@evakk Hello! Thank you for reaching out, and I really appreciate your questions about privacy :) Lora doesn't collect ANY private data beyond your login, and it does not collect your chat data (you can verify this by enabling airplane mode!). Your chats are saved in your device's private storage, which no one else can access. If you want to delete them, you can do so permanently by clearing them or by deleting Lora. And once the model is downloaded, Lora never needs internet access again. In short, Lora does not collect any data, and it is designed so that it cannot. With Lora, you can talk about important things free from any threat of leaks or hacking. Thank you!
@evakk First of all, thank you for your deep interest in Lora! We’ve conducted interviews and found that many users are concerned about sensitive or legal information being sent to servers. By processing everything solely on the user’s mobile device, we ensure that customers can confidently utilize AI features without any privacy concerns!
@evakk The AI is installed directly on the user's mobile, ensuring that no data is transmitted externally. Additionally, we do not perform any data logging, guaranteeing 100% privacy.
Congratulations @hansol_nam and team! I just got it on TestFlight and tested Lora in flight mode without WiFi. Very impressive, and I won't be without a GPT on my next flight. Thanks!
@izakotim Thank you!! In our tests, ANY device with 8GB of RAM can run Lora (for example, the Galaxy Note 9 or OnePlus 5). It's just a matter of how long inference takes :) On our best test device, the iPhone 15 Pro, it takes only 1.2 seconds to start inference!
@izakotim Thank you so much for asking about minimum device specs! 😊 We're continuously optimizing our model to ensure the service works across a wide range of devices. 🚀
@seungwhan @hansol_nam @peekabooooo @woobeen_back Congratulations on the launch of Lora. I personally love the privacy aspect of this, which is now a huge concern and a compulsion for any enterprise. Going to try this out right away and will spread the word with my colleagues. Keeping my eyes peeled on how this product grows :)
@seungwhan @hansol_nam @peekabooooo @michael_talreja Thank you so much for the kind words! We're thrilled that you appreciate the privacy features of Lora—it’s a top priority for us, especially in today’s privacy-conscious world. We're confident that Lora will not only provide the security you need but also offer flexibility and power at your fingertips. We’re excited to have you try it out and can’t wait to hear your thoughts as you explore its potential. Thanks for spreading the word—together, we can make this product even better!
@michael_talreja Thank you so much for your support! 😊 Privacy is indeed a major concern, and we’re committed to making Lora a truly secure and enterprise-friendly solution. 🚀 Excited for you to try it out—really appreciate you spreading the word! Looking forward to sharing our progress with you. 🙌🔥
@jmarten Thank you so much for your kind words! 🎉 Your support truly means a lot! 😊 Next, we’re focusing on making Lora even faster, lighter, and more versatile while keeping privacy at its core. Excited for what’s ahead—stay tuned! 🚀🙌
@krazygaurav93 Really appreciate it! 😊 Excited to hear that you’re already thinking of use cases—can’t wait to see how you put it to work! 🚀 Let us know if you have any ideas or feedback. 🙌
@mssulthan Thank you! We're also working on more features, such as a datapack for RAG, which helps eliminate hallucinations and adds accurate information. Please stay tuned! I can't wait to show you Lora with even better features!
@sungman_cho1 Thank you for reaching out! Lora will be a great tool for anyone who wants to build an LLM service without any costs. Try our free & unlimited demo if you are interested :)
Interesting! As this is a local LLM, can I use it without internet? If so, it must be so helpful. A few weeks ago I visited Yosemite National Park, and there was no signal. It would be so helpful to be able to talk with an AI in those situations.
@psycoder That must have been really inconvenient with the unstable internet. We'll keep working on research and development to ensure the service works without problems even in remote areas with unstable internet. Thank you so much for your warm encouragement :)
@danilofiumi Thank you so much! 😊🎉 Privacy is at the core of what we do, and we’re thrilled that you love the concept! 🚀 Your support means a lot—excited for you to try it out! 🙌
Congrats on the launch, @seungwhan! Offering a local LLM with easy SDK integration sounds incredibly useful—how does Lora handle updates and improvements without relying on cloud-based processing?
@seungwhan @adamyork Hello! Thank you for the good question! Lora is designed to ensure seamless updates and continuous improvements while maintaining an on-device experience. Updates are delivered efficiently through lightweight patches, allowing users to keep their models up to date without depending on cloud-based processing. This approach not only preserves privacy and minimizes latency but also ensures that developers can easily integrate improvements via the SDK without interrupting workflows.
@adamyork Great question—thank you! 😊 We’re continuously improving the model and will enable dynamic downloads for updates. This way, users can leverage Lora’s local LLM on-device while still receiving the latest LLM improvements through patches, all without relying on cloud-based processing. 🚀
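To make the patch-based update flow more concrete, here is a rough Swift sketch of how an app might check for and apply a model update on-device. `LoraModelManager`, `updateAvailable()`, and `downloadPatch()` are hypothetical names used only for illustration; the real SDK surface isn't shown in this thread.

```swift
// Illustrative sketch only: `LoraModelManager`, `updateAvailable()`, and
// `downloadPatch()` are hypothetical names, not the published SDK API.
import Lora

func refreshModelIfNeeded() async throws {
    let manager = LoraModelManager.shared
    // Compare the locally installed model version with the latest release.
    if try await manager.updateAvailable() {
        // Fetch only the lightweight patch; inference itself stays on-device.
        try await manager.downloadPatch()
    }
}
```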
@suryansh_tiwari2 Really appreciate the love and support! 🙌🔥 So glad you’re enjoying Lora! Our team put in a lot of effort to make this happen, and hearing this means a lot. 🚀 Thanks for being a part of our journey!
Really excited for this! As someone who’s always been mindful of privacy and security, this approach feels like a game changer! 🔥 wishing you great success with the launch!!! 🚀👏
So excited to see what a masterpiece of excellence it will become! It’s incredible to see you Junghwan and your team bring such an innovative idea to life, and I truly admire your drive and vision. ☺️ I have no doubt that your dedication and creativity will lead to even more groundbreaking successes ahead. Wishing you all the best on this journey! Keep pushing boundaries and changing the game. You’ve got this!