You no longer need to collect and label images or train an ML model to add computer vision to your project. Start from the finish line by using one of the dozens of models our community has already trained and shared via a simple API.
Since we launched Roboflow last year, over 50,000 developers have used it to build everything from blackjack apps[1] and duck hunt aim-bots[2] to drywall manufacturing quality monitors and oil pipeline leak alarms.
But, until now, getting started has been really painful: you had to collect and label images, train a custom machine learning model, and figure out how to package and deploy it.
With Roboflow Universe you can start from the finish line and benefit from the hard work that others in the community have already done. There are dozens of pre-trained models (from rock paper scissors[3] to face masks[4]) you can add to your project with a single line of code. And we'll be opening the floodgates to hundreds more user-created projects ranging from basketball trackers to insect identifiers over the coming weeks!
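To make that concrete, here's a minimal sketch of querying a Universe detection model with the `roboflow` Python package. The workspace, project slug, version number, image path, and API key below are placeholders; each model's Universe page shows the exact values to copy.

```python
# pip install roboflow
from roboflow import Roboflow

# Placeholders -- swap in the workspace, project slug, version, and
# API key shown on the model's Roboflow Universe page.
rf = Roboflow(api_key="YOUR_API_KEY")
model = rf.workspace("some-workspace").project("rock-paper-scissors").version(1).model

# Run inference on a local image and print the detected classes.
result = model.predict("my_image.jpg", confidence=40, overlap=30).json()
for prediction in result["predictions"]:
    print(prediction["class"], prediction["confidence"])
```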
As an example of what you can build, I used the public playing cards project[5] from Roboflow Universe to live-code, from start to finish in under 2 hours, a computer vision blackjack app you can try in your browser[1].
[1] https://github.com/roboflow-ai/b...
[2] https://blog.roboflow.com/comput...
[3] https://universe.roboflow.com/te...
[4] https://universe.roboflow.com/jo...
[5] https://universe.roboflow.com/au...
@dadior_chen it's certainly getting closer! Still needs a bit of code as glue to plug it into other things (e.g. "when my model sees a rabbit, play a sound to scare it away"[1]), but we are working on making this simpler and simpler. Stay tuned :)
[1] https://universe.roboflow.com/sa...
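To give a concrete picture of the glue I mean, here's a rough sketch: it polls a hosted detection endpoint with a saved camera frame and shells out to a sound player whenever a rabbit shows up. The model URL, class name, image path, and audio command are all placeholders you'd swap for your own setup.

```python
import base64
import subprocess
import time

import requests

# Placeholders -- substitute the rabbit model's URL and your API key
# from its Roboflow Universe page.
MODEL_URL = "https://detect.roboflow.com/rabbit-detector/1"
API_KEY = "YOUR_API_KEY"
IMAGE_PATH = "latest_frame.jpg"  # e.g. a frame your camera script saves

def rabbit_seen():
    # Base64-encode the latest frame and POST it to the hosted endpoint.
    with open(IMAGE_PATH, "rb") as f:
        img = base64.b64encode(f.read()).decode("utf-8")
    resp = requests.post(
        MODEL_URL,
        params={"api_key": API_KEY},
        data=img,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    resp.raise_for_status()
    # True if any prediction is a reasonably confident "rabbit".
    return any(
        p["class"] == "rabbit" and p["confidence"] > 0.5
        for p in resp.json()["predictions"]
    )

while True:
    if rabbit_seen():
        # Play a scare sound; swap in whatever audio command your OS has.
        subprocess.run(["aplay", "scare.wav"])  # assumption: Linux + aplay
    time.sleep(5)
```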