What has been your biggest challenge while building a GenAI application?
Manouk Draisma
5 replies
Replies
Gurkaran Singh@thestarkster
Building a GenAI application has been quite the journey! One of the biggest challenges I faced was fine-tuning the algorithms to ensure accurate predictions. It's all about striking the perfect balance between data input and model output. Trust me, the struggle is real but oh-so-rewarding in the end!
LLMs can be stubborn.
Making them do what we want was definitely one of the biggest challenges.
With a plain chat interface you can iterate toward the result you want, but that's not efficient: we want to go from input to high-quality output without asking the user to refine it. Getting there was tricky. And once you do, you have to manage model deprecations and sometimes start over with a new version (at least with OpenAI).
What helped us was setting up an evaluation framework to compare prompts and assert certain outputs, and switching to Azure, which supports models for longer, to reduce the number of model changes.
LangWatch
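A minimal sketch of the kind of evaluation harness described above: prompt variants are run against a model and each output is asserted against a set of checks. The `call_model` stub and prompt/check names are hypothetical stand-ins; a real setup would call your actual LLM client here.

```python
# Minimal prompt-evaluation harness: run prompt variants against a model
# and assert properties of each output. `call_model` is a stub standing in
# for a real LLM API call (hypothetical, not a real client library).

def call_model(prompt: str) -> str:
    # Stub: a real implementation would call OpenAI/Azure here.
    return f"SUMMARY: {prompt.split(':', 1)[-1].strip()[:40]}"

PROMPT_VARIANTS = {
    "v1_terse": "Summarize: {text}",
    "v2_explicit": "Summarize the following text in one line: {text}",
}

# Each check is a (name, predicate) pair asserting a property of the output.
CHECKS = [
    ("has_prefix", lambda out: out.startswith("SUMMARY:")),
    ("non_empty", lambda out: len(out.strip()) > len("SUMMARY:")),
]

def evaluate(text: str) -> dict:
    """Run every prompt variant and record which checks pass."""
    results = {}
    for name, template in PROMPT_VARIANTS.items():
        output = call_model(template.format(text=text))
        results[name] = {check: pred(output) for check, pred in CHECKS}
    return results
```

Running `evaluate(...)` over a fixed test set on every prompt or model change gives you a regression signal instead of eyeballing chat outputs.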
AlphaCorp AI
First wall I hit: the output quality of the API is not the same as in the main chat apps (Claude, ChatGPT, etc.). The system message, temperature, and top_p are all important parameters that need to be tweaked to reach similar-quality responses. Out of the box, the OpenAI API, or any other large language model (LLM) API, will not perform the way ChatGPT does.
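The tuning surface mentioned above can be sketched as a small request builder. The parameter names follow the OpenAI Chat Completions API; the model name, default values, and system prompt are illustrative placeholders, not recommendations:

```python
# Build the request parameters that most affect output quality when moving
# from a chat app to the raw API: system message, temperature, top_p.
# Parameter names follow the OpenAI Chat Completions API; the values and
# system prompt below are illustrative placeholders.

def build_request(user_message: str,
                  system_prompt: str = "You are a concise, helpful assistant.",
                  temperature: float = 0.7,
                  top_p: float = 1.0) -> dict:
    """Return kwargs suitable for client.chat.completions.create(**kwargs)."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,
        "top_p": top_p,
    }

# Sweep a few temperature values to compare outputs side by side.
requests = [build_request("Explain RAG in one sentence.", temperature=t)
            for t in (0.0, 0.4, 0.8)]
```

Sweeping one parameter at a time like this makes it much easier to see which knob is actually moving the quality.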
The second challenge: of all the APIs, Gemini was the most frustrating to work with. One example: instead of returning an unfinished output when the MAX_TOKENS limit is reached, it simply errors out. I spent 3x more time on the Gemini API than on Mistral, OpenAI, Claude, or Bedrock.
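One way to defend against that behavior is to branch on the finish reason explicitly instead of assuming a usable completion. The finish-reason strings below mirror Gemini's enum names, but the handler itself is a generic sketch, not Gemini client code:

```python
# Sketch of defensive handling for truncated generations. With Gemini, a
# MAX_TOKENS stop can surface as an error or empty result rather than a
# partial completion, so branch on the finish reason explicitly.
# Finish-reason names mirror Gemini's enum values; the handler is generic.

def handle_generation(text, finish_reason: str) -> str:
    if finish_reason == "STOP":
        return text or ""
    if finish_reason == "MAX_TOKENS":
        # There may be no usable text here; fall back or retry with a
        # shorter prompt / larger max_output_tokens instead of crashing.
        if text:
            return text + " [truncated]"
        raise RuntimeError("Hit MAX_TOKENS with no partial output; retry needed")
    raise RuntimeError(f"Unexpected finish reason: {finish_reason}")
```

Centralizing this in one place also makes it easier to keep the same calling code across providers that truncate differently.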