Groq Chat
An LPU inference engine
6 reviews • 14 shoutouts • 366 followers
© 2025 Product Hunt
Chris Messina • 1yr ago
Groq® - Hyperfast LLM running on custom built GPUs
An LPU Inference Engine (LPU stands for Language Processing Unit™) is a new type of end-to-end processing unit system that provides the fastest inference, at ~500 tokens/second.
17 • 212
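A quick way to read the ~500 tokens/second figure is to convert it into per-token latency and end-to-end response time. The sketch below is a back-of-the-envelope estimate, assuming constant throughput and ignoring time-to-first-token; the response length of 250 tokens is an illustrative assumption, not a number from the launch post:

```python
# Back-of-the-envelope: what ~500 tokens/s means in practice.
# Assumes constant throughput; ignores time-to-first-token and network latency.

THROUGHPUT_TOKENS_PER_S = 500  # rate quoted in the launch post


def per_token_latency_ms(throughput: float) -> float:
    """Average milliseconds spent per generated token."""
    return 1000.0 / throughput


def response_time_s(num_tokens: int, throughput: float = THROUGHPUT_TOKENS_PER_S) -> float:
    """Seconds to stream a response of num_tokens tokens."""
    return num_tokens / throughput


print(per_token_latency_ms(THROUGHPUT_TOKENS_PER_S))  # 2.0 ms per token
print(response_time_s(250))                           # 0.5 s for a 250-token answer
```

At that rate a typical chat-length answer streams in well under a second, which is the "hyperfast" effect the launch demos highlight.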
Jonayed Tanjim • 1yr ago
Groq Chat - World's fastest Large Language Model (LLM)
This alpha demo lets you experience ultra-low-latency inference with the foundational LLM Llama 2 70B (created by Meta AI), running on the Groq LPU™ Inference Engine.
1 • 5