[LW24] Megaparse
Open-source File Parser optimized for LLM ingestion
Stan Girard
Megaparse is a file parser optimized for LLM ingestion. It can parse PDFs, DOCX, and PPTX into a format that is ideal for LLMs, all accessible from a Python package, an API, or a queue.
Replies
Stan Girard
Hi everyone,

Today I'd like to introduce you to the new Quivr project. It's a simple Python package and API that helps you take in documents such as PDFs, DOCX, PPTX, ... and turn them into Markdown.

It has several new abilities:
* OCR
* Vision models
* Table optimization in the extraction
* Open-source

You can use it in any of your products where you need to parse files before sending them to an LLM, or simply to store the output.

Here is how to get started:
* Go to https://github.com/QuivrHQ/MegaP...
* pip install megaparse
* Have fun

Give it a try! We'd love to hear your feedback and ideas in the comments.

This is part of the Supabase mega Launch Week -> https://launchweek.dev/HOME
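Once installed, usage looks roughly like the sketch below. The class and method names (`MegaParse`, `load`, `save`) are assumptions from memory of the repository README and may differ between versions, so check the GitHub page for the current API:

```python
# Hypothetical sketch -- verify names against the MegaParse README.
from megaparse import MegaParse

megaparse = MegaParse()                     # default parser configuration
markdown = megaparse.load("./example.pdf")  # parse the file to Markdown text
print(markdown)
megaparse.save("./example.md")              # write the Markdown to disk
```

The resulting Markdown can then be chunked and embedded, or passed straight into an LLM prompt.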
Ashit from Draftly.so
Congrats on the launch @stan_girard @amine_dirhoussi @chloe_daems Super helpful. We are working on a product that needs something similar, though we have already solved the PDF parsing problem. Quick question - do you plan to add Excel / spreadsheet support as well? That would be super helpful. Excited to give it a try!
Christophe Pasquier
Everyone who has gone through the pain of parsing slides and PDFs knows how big a problem this solves ;) GG team!
Stan Girard
@christophepas Thanks mate! Let me know if you are using it and I'll gladly help you improve it
Tom Shapland
There's such a huge need for this. It seems like every other week I meet someone who asks me how to get structured data from a PDF with LLMs.
Ioannis Tsiokos
Love it. Markdown is becoming the de facto standard for AI input processing, and proper conversion to it (without having to install a million packages) will be paramount.
Robin Philibert
Really nice! Open source, with OCR and table optimization, perfect for LLM workflows. Congrats to the team! 🙌
Michael Ohana
Awesome! How does it tackle tables in financial documents?
Stan Girard
@michaelohana This is a hard piece to tackle; we are currently working hard on improving tables. We are exploring some techniques - for example, combining LLM vision models with the current OCR, and passing the table into a dataframe. Would love to tell you more or help you with your use case. Ping me if needed on Twitter @_StanGirard
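The "table to dataframe" step mentioned above can be sketched as follows. This is not MegaParse's internal code - just a minimal illustration, using pandas, of turning a pipe-delimited Markdown table (the kind an extractor emits) into a typed DataFrame; the table contents are made up:

```python
import io
import pandas as pd

# Toy Markdown table, standing in for extractor output
md_table = """| Year | Revenue |
|------|---------|
| 2022 | 10.5 |
| 2023 | 12.1 |
"""

# Read the pipe-delimited rows, then clean up the Markdown framing
df = pd.read_csv(io.StringIO(md_table), sep="|", skipinitialspace=True)
df = df.iloc[:, 1:-1]                      # drop empty edge columns from the outer pipes
df.columns = [c.strip() for c in df.columns]
df = df[~df["Year"].str.contains("---")]   # drop the |---| separator row
df = df.apply(lambda col: col.str.strip()) # trim padding inside cells
df = df.astype({"Year": int, "Revenue": float})

print(df)
```

Once the numbers are typed, the usual financial-document checks (column sums, year-over-year deltas) become one-liners.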
Huzaifa Shoukat
Congrats on the launch! Megaparse looks like a game-changer for parsing docs into Markdown format. What types of files do you find it works best with?
Tony Tong
Megaparse is a really interesting tool for LLM data ingestion! 🔥 How does it handle parsing complex document structures, like multi-column layouts or mixed content (text, images, tables)? Does the OCR integration maintain accuracy across different fonts and handwriting? Also, how does the API handle large-scale batch processing—are there any optimizations for speed and efficiency with extensive datasets?
Tony Tong
Megaparse sounds super useful for prepping docs for LLMs! Love the flexibility with Python, API, or queue. Does it handle complex layouts or metadata well?
Tony Tong
Awesome tool with Megaparse! 📄✨ The ability to seamlessly parse PDFs, DOCX, and PPTX for LLM ingestion is a game-changer for data extraction. I'm curious—how does Megaparse handle complex document layouts or non-standard formats? For example, if a document has lots of embedded images or custom fonts, does it still maintain accuracy in parsing? Also, what kind of customization options do you offer for different document types or use cases?
Florian Buguet
Wow, this looks super handy for integrating document parsing into LLM workflows! 🚀 Love that it's open-source and includes OCR + table optimization—makes it a no-brainer for anyone working with complex document data. Can't wait to test it out! 🔥
Max Comperatore
stan this is sweet. thank you. will use. upvoted and starred
Stan Girard
@maxcompe Thanks mate! We worked hard on this one