Build AI assistants that can collaborate and access external tools, APIs, and real-time data.
With data validation, error recovery, real-time responses, and long-term memory, you can build reliable LLM-powered apps easily, without the hassle.
Hello Product Hunters 👋
Ever dream of having super-powered AI assistants in your app? But then reality hits: unexpected LLM outputs, complex integrations, data validation woes, and juggling server-side/client-side communication?
Introducing Scoopika! ✨
✅ Build custom AI agents that collaborate, use external tools & APIs, and access real-time data based on context.
✅ Say goodbye to data validation nightmares! Scoopika handles it all with full type safety.
✅ Stream agent responses seamlessly with 10+ built-in hooks for both server and client side. No sweat!
✅ Power agents with client-side actions so they can perform actions in the user's browser in real-time for a next-level user experience.
✅ Long-term memory, managed effortlessly with Serverless history stores & zero setup.
✅ Effortless integration with your tech stack. Flexibility is our middle name.
✅ Works with your own LLMs; all your API keys are kept safe on your servers and never shared with us.
---
What can agents do?
✅ Engage with users in natural language conversations, utilizing available tools to fetch data or perform actions.
✅ Generate validated and type-safe structured JSON data from any unstructured data.
✅ Collaborate with other agents! Connect them into multi-agent boxes, or pass an agent as a tool to another agent so they can call each other when needed.
✅ See the world! If the LLM powering your agent supports vision then your agent will too.
....and we're not done yet!
✅ Turn any function in your app into a tool your agent can use.
✅ Plug custom knowledge to your agents or connect a vector database for RAG.
✅ Scoopika doesn't inject extra data into your LLM prompts, so your costs stay in your hands with your own prompts.
✅ If your agent has access to a lot of tools, Scoopika selects the most suitable ones for each run so the context window stays under control.
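To make "turn any function into a tool" concrete, here's a minimal sketch of the general pattern: a plain app function paired with a JSON schema the LLM sees, plus a guard that validates LLM-provided arguments before calling it. All names here (`Tool`, `callTool`, `get_weather`) are hypothetical for illustration, not Scoopika's actual API.

```typescript
// Hypothetical sketch (NOT Scoopika's real API): wrapping an app function
// as a tool an LLM agent can call, described by a JSON schema.
type JsonSchema = {
  type: string;
  properties?: Record<string, { type: string; enum?: string[] }>;
  required?: string[];
};

type Tool = {
  name: string;
  parameters: JsonSchema;
  execute: (args: Record<string, unknown>) => unknown;
};

// Any plain function in your app...
function getWeather(args: Record<string, unknown>): string {
  return `Weather in ${String(args.city)}: sunny`;
}

// ...becomes a tool by attaching a schema describing its parameters.
const weatherTool: Tool = {
  name: "get_weather",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  execute: getWeather,
};

// Validate LLM-provided arguments before running the tool.
function callTool(tool: Tool, args: Record<string, unknown>): unknown {
  for (const key of tool.parameters.required ?? []) {
    if (!(key in args)) throw new Error(`Missing required parameter: ${key}`);
  }
  return tool.execute(args);
}

console.log(callTool(weatherTool, { city: "Paris" })); // prints: Weather in Paris: sunny
```

The key design point is that the schema travels with the function, so the framework can both advertise the tool to the LLM and reject malformed calls before your code runs.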
---
My Story (the Pain Point):
I was there, folks. Building an app with AI assistants that use LLMs with function-calling to perform actions for users... sounded amazing! But the reality? Unexpected LLM outputs, constant data validation struggles, and a tangled mess of managing LLM flow, history, and server-client interactions. Needless to say, the feature got scrapped.
That's why I built Scoopika! To make adding powerful AI assistants to your web apps a breeze. Focus on what matters: creating awesome user experiences! ✨
Go give it a try! It's open-source and free to use, with a Pro plan for extra features at 10% off.
This is just the start and many great things are coming in the future!
Thanks
Congratulations on the launch! Scoopika truly stands out with its seamless integration of AI-powered agents, making it incredibly useful for automation and data-driven decisions.
Congratulations on the launch of Scoopika, Kais. Your approach to simplifying the integration of AI assistants sounds very robust. I'm curious: how does Scoopika ensure data privacy, especially given its ability to access real-time data and connect various APIs?
@mashy Thank you so much! I would love to learn more about what you mean by "data privacy", as all of this data is never shared with us and is always kept on the developer's servers.
I upvoted because it offers an easy way to integrate AI agents into applications, making it accessible to add powerful AI features without extensive development effort. The ability for these agents to access external tools, APIs, and real-time data adds a lot of versatility.
How robust is the data validation and error recovery when deploying these AI agents? Ensuring reliability and minimizing disruptions are key to a smooth user experience.
@aman_wen Thanks! The data validation uses JSON schema, so the data is always validated. When the data is invalid, we handle it in a few ways:
1. Missing data: Scoopika communicates the missing required data back to the LLM, so the agent will either fix it or ask the user to provide it if they haven't already.
2. Invalid data: In a JSON schema you can define an `enum`, a list of valid values for a parameter. If the value the LLM outputs for that parameter does not match the enum list, we follow the same approach as with missing data and try to fix the error.
3. Invalid data type: If the LLM output is not a valid JSON object or the types are not valid, we throw an error, so you can catch these errors and handle them the way you want.
4. Tool execution error: Let's say the agent calls a tool and it fails. In this case we inform the agent about the error so it can communicate it to the user and apologize.
For server-side/client-side communication, expected errors are handled gracefully so the server recovers instead of crashing. Soon we'll be adding AI-powered error reports that you can keep an eye on from the platform.
If you have any other questions just let me know ;)