Hyperparameter tuning is a way to find the best machine learning model. We make it ridiculously easy to run hyperparameter sweeps, ranging from simple algorithms like grid search to more modern approaches like Bayesian optimization and early stopping.
Hi everyone - I'm Lukas and I worked on this product. We built this tool so ML engineers can launch a hyperparameter search and visualize their results with three lines of code. We found that most ML practitioners think hyperparameter search is a good idea but don't do it consistently because it seems like a pain to set up. Do you do hyperparameter search? I'd love to hear about how you do it.
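For anyone curious what those three lines look like: here's a minimal sketch using the public `wandb` Python API (`wandb.sweep` and `wandb.agent`). The project name, metric name, parameter names, and training function are illustrative placeholders, not details from this thread; running it for real requires `pip install wandb` and `wandb login`.

```python
# Sweep configuration: "method" can be "grid", "random", or "bayes";
# the optional "early_terminate" block (Hyperband) stops unpromising
# runs early. All names below are placeholders for illustration.
sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "batch_size": {"values": [32, 64, 128]},
    },
    "early_terminate": {"type": "hyperband", "min_iter": 3},
}

def launch_sweep():
    """Register the sweep and start an agent (needs a logged-in wandb account)."""
    import wandb  # imported lazily so the config above can be read without wandb installed

    def train():
        run = wandb.init()          # the sweep controller injects chosen values into run.config
        run.log({"val_loss": 0.0})  # stand-in for a real training loop

    sweep_id = wandb.sweep(sweep_config, project="my-project")  # placeholder project name
    wandb.agent(sweep_id, function=train, count=10)             # run 10 trials
```

The same configuration can also live in a YAML file and be launched from the command line with `wandb sweep` and `wandb agent`.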
I've advised many companies trying to improve their machine learning results, and almost all of them have failed to do something as important as hyperparameter tuning. They usually leave it as a TODO or an afterthought, or underestimate the gains because they assume it will be too difficult to set up. Sweeps makes it dead easy, so they no longer have an excuse for untuned models!
PS: sweeps are free for everyone and we're super excited for you to try them.
If you give them a try, please let us know what you think! 😊 We're always trying to make them even better.
ML is core to our company, and we just switched to Weights and Biases from our own internal ML tooling. Massive upgrade, and it freed engineers up to improve model performance instead of building tools!
Hi, I am a developer on the sweeps product. The product simplifies the exploration of your hyperparameter search space. It is designed to be very flexible and work in different environments and workflows. Let me know if you have any questions or suggestions.
This is a great product. We have been using it at GitHub for the past 1.5 years. It is a powerful tool that enables many use cases. It is really easy to get started, and their API allows you to extend its capabilities further to fit your needs. Most importantly, their team is great to work with and they have been amazing business partners. Highly recommend!
I am a user of Sweeps. I was hooked after the first time I used it for hyperparameter tuning, which is genuinely a tiring process. I was so satisfied with the product that I decided to put together a blog post about it, including my personal commentary: https://www.wandb.com/articles/r....
I highly encourage all ML practitioners to try it.
Although our company is heavily invested in AWS (which already has some experiment visualisation/hyperparameter optimisation tools in SageMaker), I found W&B *much* easier to use, easier for sharing results with others, and much more in-depth in the visualisations it can give. Would highly recommend!
ML Visualization IDE