LLMs, or AI as everyone calls it, are changing the software market. Thousands of new LLM-powered products are being launched, but the core tech is largely the same since everyone is building on foundation models, so the architectures and application cases are mostly limited to:
- RAG (retrieval-augmented generation: search + reasoning + text generation)
- summarization
- entity extraction (names, organisations, locations, etc.)
- text classification (tone of voice, positive/negative sentiment, etc.)
- chatbots: chit-chat with dialogue history, optionally combined with RAG
- agentic pipelines (this one is pretty flexible)
- image recognition (a fairly recent addition with GPT-4o)
This fairly short tech stack is applied to thousands of use cases and datasets, giving birth to all the new apps and services. The big difference is that before, we typically built custom fine-tuned models to solve particular problems; we needed ML engineers and domain-specific data, and that was our moat. Today all these apps can be powered by GPT-4o and they will be fine.
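To make the point concrete, here is a minimal sketch of how several of the items above collapse into different prompts against the same foundation-model endpoint. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name and prompts are illustrative, not a production setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Three of the "architectures" above, differing only in the instruction.
TASKS = {
    "summarization": "Summarize the following text in two sentences.",
    "classification": "Classify the sentiment of the following text as positive, negative, or neutral.",
    "entity_extraction": "List all person, organisation, and location names mentioned in the following text.",
}

def run_task(task: str, text: str) -> str:
    """Same model, same call; only the system prompt changes per 'product'."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; swap for whatever you have access to
        messages=[
            {"role": "system", "content": TASKS[task]},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# e.g. run_task("classification", "Onboarding was painless and support replied in minutes.")
```

Swap the system prompt and you have a different "product", which is exactly why the moat question matters.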
It's fine to have specific tools for specific problems, market niches, and audiences (because an audience is also a moat), but is that a sustainable business model, or will only the giants survive eventually?
Curious to hear your thoughts.
Humva