Introduction:
Thanks to state-of-the-art AI models, coding has become a conversational process. By defining your project's constraints and iterating in a back-and-forth dialogue with the AI, you can go from idea to product in a single day; without AI, the same work could take one to two weeks of trial and error.
Planning:
Before starting the project, I sought to identify the problem I wanted to solve. With the Babel project (github.com/eonist/babel), I aimed to use AI to translate localization strings for iPhone apps. I needed to localize my app soon, and the only alternative was to pay $150+ to services like Localise.com. How hard could it be? Famous last words 😅 (But not this time. 😏)
Identifying the Problem:
- High Cost: Existing solutions like Localise.com are priced at $150+ per month.
- Dependency on Third Parties: Relying on external providers can be risky—they may increase prices, shut down, or change their services unexpectedly.
- Lack of Free Online Hosting: Other solutions exist, but they don't offer free cloud-based translation hosting.
- Quality Concerns: Language translation is complex, and human translators still outperform even the best AI models.
Identifying the Solution:
- OpenAI's GPT-4 is affordable, costing as little as $0.01 to $1.00 for multiple language translations.
- We can trigger OpenAI to update translations whenever needed by creating an automated workflow with GitHub Actions (see the workflow sketch after this list).
- Hosting the project in a GitHub repository allows users to fork it and integrate it into their own app projects using their own OpenAI API key.
- The latest OpenAI models offer near-human translation quality, verified in various translation benchmarks.
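To make the GitHub Actions point concrete, here is a minimal sketch of what such a trigger could look like. The workflow file name, the path to the English strings file, and the `translate.py` script are illustrative assumptions, not the actual Babel setup.

```yaml
# Hypothetical .github/workflows/translate.yml; names and paths are assumptions.
name: Translate localization strings

on:
  workflow_dispatch:                       # run on demand from the Actions tab
  push:
    paths:
      - "en.lproj/Localizable.strings"     # or whenever the English source strings change

jobs:
  translate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Run translation script
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}  # each fork adds its own key as a secret
        run: python translate.py           # hypothetical script name
```

The `workflow_dispatch` trigger lets you run the job on demand, while the `push` path filter re-runs it whenever the English source strings change.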
Defining the Agentic Flow:
To build the agentic flow, I first researched the necessary components using PPLX.ai. Here are the steps I needed to accomplish for my AI wrapper app (a code sketch follows the list):
1. Read the English version into a list.
2. Send this list to OpenAI and request translations into the required languages.
3. Store the returned data in language files.
4. Perform basic testing with a unit test to ensure the translations have the correct format.
5. As a bonus, notify a Slack channel when the translations are finished, including a receipt from OpenAI detailing the cost of the batch job.
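Here is a minimal Python sketch of steps 1 through 3, assuming the English strings live in a standard `Localizable.strings` file and the official `openai` Python package is used; the paths, model name, and prompt are illustrative assumptions rather than the exact Babel implementation.

```python
# Hypothetical sketch of steps 1-3; not the actual Babel code.
import json
import os
import re

from openai import OpenAI  # assumes the official openai>=1.0 Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STRINGS_RE = re.compile(r'"(?P<key>[^"]+)"\s*=\s*"(?P<value>[^"]+)";')

def read_english_strings(path="en.lproj/Localizable.strings"):
    """Step 1: read the English .strings file into a dict of key -> text."""
    with open(path, encoding="utf-8") as f:
        return {m["key"]: m["value"] for m in STRINGS_RE.finditer(f.read())}

def translate(strings, language):
    """Step 2: ask OpenAI to translate all values into one target language."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": f"Translate the JSON values into {language}. "
                        "Return JSON with the same keys."},
            {"role": "user", "content": json.dumps(strings, ensure_ascii=False)},
        ],
    )
    return json.loads(response.choices[0].message.content)

def write_strings(translations, language):
    """Step 3: write the translated pairs back out as a .strings file."""
    os.makedirs(f"{language}.lproj", exist_ok=True)
    with open(f"{language}.lproj/Localizable.strings", "w", encoding="utf-8") as f:
        for key, value in translations.items():
            f.write(f'"{key}" = "{value}";\n')

if __name__ == "__main__":
    english = read_english_strings()
    for language in ["de", "fr", "ja"]:  # target languages are illustrative
        write_strings(translate(english, language), language)
```

Step 4 could then be a small pytest that re-parses each generated file with the same regex and asserts the keys match the English source, and step 5 a short script that posts the summary to a Slack webhook.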
The Coding Part:
- I asked PPLX.ai with the prompt: "In GitHub Actions, how do I perform steps 1, 2, 3, 4, and 5 one after the other?" (A sketch of that kind of chained workflow follows this list.)
- Then I stored the returned answers as issues in my Babel project.
- Next, I started to copy and paste the code into Cursor.
- Once all five parts were copied over, I began asking Cursor to improve the code. I didn't even upload it to GitHub yet; I wanted to get my bearings first and understand how everything worked.
- I'm not a native Python coder, nor am I familiar with the GitHub Actions YAML format or JavaScript. However, I knew how to articulate what I wanted to achieve in the five steps of my agentic AI flow.
- I then started to ask Cursor and O1 to comment on each part of the code and kept asking the AI if there was anything I could add or improve upon.
- The comments helped the AI understand my intent.
- If I didn't understand something, I went back to PPLX.ai and asked about the concepts I didn't fully grasp. After a few hours, I had the five steps coded up in five different GitHub Actions workflows.
- Then I took a long break before uploading the project with GitHub Desktop and began debugging errors.
- GitHub Actions immediately reported errors in the code, starting from the first step.
- So I copied the error and pasted it into Cursor and O1, asking them to solve the issue.
- O1 would then provide new code that I copied over and uploaded to GitHub.
- This process continued for a few hours, clearing one step after another, and soon enough, all five steps were complete.
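For context, the pattern for running several parts one after the other in GitHub Actions looks roughly like the single-job sketch below; Babel itself ended up with five separate workflow files, and the script names, test path, and Slack payload here are placeholder assumptions.

```yaml
# Minimal sketch: running the five parts one after another as steps in a single job.
# Babel itself splits them across separate workflow files; this shows only the basic idea.
name: Translation pipeline

on:
  workflow_dispatch:

jobs:
  pipeline:
    runs-on: ubuntu-latest
    env:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Read English strings and translate (steps 1-3)
        run: python translate.py            # hypothetical script name
      - name: Check translation format (step 4)
        run: python -m pytest tests/         # assumes a small pytest suite
      - name: Notify Slack (step 5)
        run: |
          curl -X POST -H 'Content-type: application/json' \
            --data '{"text":"Translations finished."}' "$SLACK_WEBHOOK_URL"
```

Because steps in a job run sequentially and the job stops at the first failure, the notification only goes out if the translation and the format check both succeed.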
Shipping:
With the code working as intended—you upload English text and receive translations in multiple languages—it was time to wrap up the project. I created a README.md document outlining how it could be used. The final step was to announce the project to the world by making a social media post on LinkedIn.
The launch of my first AI wrapper went exceptionally well. I posted it late on a Sunday, and by Wednesday, more than 14,000 people had viewed the LinkedIn post. 🚀
During this project, using O1 cost me approximately $50–$100 for 10–12 hours of non-stop coding. However, with DeepSeek R1, which rivals O1, the cost should now drop to $5–$10. Using top-tier models is essential for this kind of high-level coding.