Deepchecks Monitoring
p/deepchecks-monitoring
Open Source Monitoring for AI & ML
Shir Chorev
Deepchecks Testing Package — Open source tool for testing & validating your model & data
Find problems in your ML models and data before it’s too late!
With just a few lines of code, Deepchecks creates elaborate reports about your data’s integrity, distributions, and your model’s performance.
Check it out: https://github.com/deepchecks/deepchecks
Replies
Ashley Porciuncula
I love this! Good job team!
philip tannor
@ashleymarinep glad you love it! Would love to get some more detailed feedback once you have it...
Almog Baku
Really cool product. I've been following you guys for a while :) Good luck w/ the launch
Shir Chorev
@almogbaku Thanks for the ongoing support!
Nevo Alva
philip tannor
@nevo_alva much appreciated!
Nir Ben-Zvi
DeepChecks is a great product for conducting product-focused computer vision research. The library contains various tools for catching trained-model and dataset issues that would otherwise waste a *lot* of time.
Shir Chorev
@nir_benz Thanks for your support!
Arseny Kravchenko
Deepchecks solves a very important problem for the applied ML community - I wish them luck and rapid growth!
philip tannor
@arseny_info thank you so much! And great to see your work on Kaggle as well, can't wait to hear more detailed feedback
Clément Delangue
congrats for the launch team!
philip tannor
@clement_delangue thank you so much, and great to see you here! BTW, at Deepchecks we're huge fans of Hugging Face, so this is a good place to shout out that you can get the packages to work together - https://docs.deepchecks.com/stab.... Currently this works for computer vision models, and hopefully NLP will follow later in the year
Daniel Keyes
This is amazing! Kudos on building it open source 👏
philip tannor
@daniel_keyes thanks a million!
Ofir Bibi
This is a must run 🏃‍♂️🏃🏾‍♀️🏃🏼‍♀️ It's so simple to get a first report that anyone with a model should run it as a sanity check; it might find some issues or simply bring new insights. Well done @shirch, @ptannor and the Deepchecks team.
Shir Chorev
@ofirbibi Thanks for sharing your kind feedback!
philip tannor
@ofirbibi much appreciated - and coming from you means a lot. Can't wait for more detailed feedback!
Inbal Argov
Awesome tool. Seems like Deepchecks is becoming a thought leader in this space. Love the data integrity piece
Shir Chorev
@inbal_argov1 Thanks for your kind words! Would be happy to hear feedback about how you're using it... 🤩
philip tannor
@inbal_argov1 thank you so much, and flattering to hear that!
Eduard Hambardzumyan
This is Great 🔥 Looking forward to testing this with my team. Congrats on the launch!
Shir Chorev
@eduard_hambardzumyan Thanks 🙏, looking forward to your feedback 🤩
Roy Miara 🕶
Really important product in today's ML toolchain, really nice open source work. Good luck team!
Shir Chorev
@miararoy Excited that's the way you see it as well, thanks for sharing! 🤓
Dani Avitz
The testing package looks very comprehensive, and I see it achieved 1.8K GitHub stars in no time! Well done.
philip tannor
@dani_avitz1 thanks! And I can't tell you how much we appreciate the support we've been receiving from the community
Idan Basre
Very useful! Love the data integrity checks.
Shir Chorev
@idan_basre Thanks for sharing. We've found these checks surface surprising quality issues in your data. Feel free to open an issue with suggestions for additional checks!
Ori Kabeli
That's an awesome product. We used to run such tests manually on our team and recently migrated to using it, and the design of the library is delightful to work with! Congrats!
Shir Chorev
@ori_kabeli1 Great to hear! Looking forward to hearing more feedback 🙏🤩
Shauli Rozen
Thanks for sharing! The testing package has been super helpful for my team!
Tom Ahi Dror
Kudos team Deepchecks! It's a pleasure to see a repo with such great docs (: Just out of curiosity, how is this different than Great Expectations?
Shir Chorev
@tom_ad Thanks! ❤️ Docs are key for open source adoption 😃 (inviting the readers to check out swimm.io!)
About Great Expectations - first of all, it's a great tool and we love what they're doing! In short: Deepchecks is focused on everything about testing ML-related projects, and thus our checks (model evaluation, distribution and leakage checks, and also the data integrity ones) are tailor-built for these scenarios. One example - the "Conflicting Labels" data integrity check, which finds similar samples with different labels, is relevant only within the ML context.
Summing up a few usage-related differences:
- GE is meant for Data Engineers, where ML is not always involved, whereas our focus is on Data Scientists working on an ML project
- GE works great after you configure it thoroughly; Deepchecks is designed to give a solid out-of-the-box experience, though it can be improved further with configuration
- Deepchecks is also built for unstructured data (work in progress)
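The "Conflicting Labels" idea mentioned above can be illustrated with a simplified, stdlib-only sketch (this is not the actual Deepchecks implementation, which also handles near-duplicate samples): group samples by their feature values and flag any group that carries more than one distinct label.

```python
from collections import defaultdict

def conflicting_labels(samples):
    """Flag feature vectors that appear with more than one label.

    `samples` is an iterable of (features, label) pairs, where
    `features` is a tuple so it can serve as a dict key.
    Returns {features: labels} for the conflicting groups only.
    """
    labels_by_features = defaultdict(set)
    for features, label in samples:
        labels_by_features[features].add(label)
    return {f: ls for f, ls in labels_by_features.items() if len(ls) > 1}

data = [
    ((5.1, 3.5), "setosa"),
    ((5.1, 3.5), "versicolor"),  # same features, different label
    ((6.3, 2.9), "virginica"),
]
print(conflicting_labels(data))  # flags only the (5.1, 3.5) group
```

In a real ML pipeline such conflicts usually indicate labeling errors or under-specified features, which is why this check only makes sense in an ML context.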
Shir Chorev
@karimsaif Happy to hear you liked it!
Bohdan Romaniuk
It seems to me that this is an awesome solution for quick checks and for building an MVP
Shir Chorev
@bidkaromaniuk Would be happy to hear your usage scenarios and any additional suggestions! Link for opening new issues: https://github.com/deepchecks/de... (⭐️ to support)
philip tannor
🙏🙏🙏 Thank you all SO MUCH for the open source love. 🤩 If you like what we're doing, the best ways to support us are by starring our repo (https://github.com/deepchecks/de...) or by leaving feedback via our GitHub issues (such as opening a feature request or a bug report here - https://github.com/deepchecks/de...).
Shir Chorev
@ptannor Some of our best checks came from users' requests & ideas 😍!! (One simple & lovable example is the IsSingleValue check.) And starring the repo is a great way to show your support ⭐️🙏⭐️
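The idea behind IsSingleValue can be sketched in a few lines of stdlib-only Python (a simplified illustration, not the real Deepchecks check): report columns whose values never vary, since a constant feature carries no signal for a model.

```python
def single_value_columns(rows, columns):
    """Return names of columns holding exactly one distinct value.

    `rows` is a list of dicts mapping column name -> value.
    """
    return [
        col for col in columns
        if len({row[col] for row in rows}) == 1
    ]

rows = [
    {"age": 34, "country": "IL", "clicked": 0},
    {"age": 27, "country": "IL", "clicked": 1},
    {"age": 45, "country": "IL", "clicked": 0},
]
print(single_value_columns(rows, ["age", "country", "clicked"]))
# ['country']
```

Constant columns like this often slip in when a dataset is filtered (e.g. to one country), which is why a simple automated check catches them faster than manual inspection.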
Uri Eliabayev
Great open source project!
Shir Chorev
Thanks @urieli17! Appreciate your support greatly