Let me be clear: I am not a developer. I’m not a coder. I’m just a regular, computer-literate Operations Manager who, like many of you, spends half their day in Slack, sending messages that promptly disappear into a digital black hole. I was tired of watching my brilliant (heavy sarcasm) questions and perfectly reasonable (heavier sarcasm) requests vanish into the void, so I decided to do something about it. I was going to build a simple Slack bot to poke me about my own unanswered messages, the ones I had completely forgotten about.
What I naively thought would be a quick, pleasant little project turned into a four-hour marathon of learning, failing, and eventually succeeding with the help of some unexpected tools. This is the story of how I built my first cloud application.
The Idea: A Simple Nudge Bot
The concept was straightforward, or so I thought:
- Listen: The bot would listen to every message I sent.
- Wait: The bot (and I) would patiently wait for 12 hours for a response.
- Check: After its nap, it would check if the message had received a reply or a simple emoji reaction.
- Nudge: If the message was still being ignored, the bot would send me a private note.
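The check-and-nudge step boils down to one small decision: has 12 hours passed with no reply and no emoji reaction? A minimal sketch of that logic (the function and field names here are my own illustration, not the bot’s actual code):

```python
from datetime import datetime, timedelta, timezone

NUDGE_DELAY = timedelta(hours=12)

def should_nudge(message: dict, now: datetime) -> bool:
    """Decide whether a tracked message deserves a nudge.

    `message` is a stand-in record with the fields the bot would store:
    when the message was sent, how many thread replies it received,
    and how many emoji reactions it collected.
    """
    old_enough = now - message["sent_at"] >= NUDGE_DELAY
    ignored = message["reply_count"] == 0 and message["reaction_count"] == 0
    return old_enough and ignored

# Example: a message sent 13 hours ago with no replies or reactions
now = datetime(2024, 1, 2, 13, 0, tzinfo=timezone.utc)
msg = {"sent_at": now - timedelta(hours=13), "reply_count": 0, "reaction_count": 0}
print(should_nudge(msg, now))  # True: time to nudge
```

Everything else in the project, the listening, the storage, the scheduling, exists just to feed this one comparison.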
If you feel like you’ve seen something like this, you have. Gmail has a feature that will remind you to follow up on an email you sent a few days ago. My Pixel phone will do the same with messages that haven’t been replied to. Unfortunately, Slack doesn’t have this specific feature.
This seemed like a perfect task for a large language model (LLM). I was using a new tool that integrates with a model called Gemini, and I figured it could just spit out the flawless, production-ready code in a matter of minutes. I was wrong. Hilariously wrong.
A Journey of a Thousand Failed Deployments
The process was anything but smooth. In a four-hour coding session, I ran into about 30 failed deployments, roughly one every eight minutes if you’re keeping score. The LLM kept suggesting code that, while looking plausible on the surface, consistently blew up when I tried to deploy it. The errors were technical and confusing: Container Healthcheck failed (apparently my server wasn’t healthy enough for Google), Permission denied (because, you know, security is hard), MissingTargetException, and dispatch_failed. Each time, the LLM apologized profusely for its mishap while cheerily offering a new piece of code that was supposed to fix the last one, and each time, it failed for a brand new, equally obscure reason.
I was completely out of my depth. I had no prior experience with:
- Creating a bot from scratch.
- Building a Cloud Function to run serverless code.
- Using a Firebase database to store data.
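The Firebase piece turned out to be the simplest of the three: the bot only needs a tiny record per message. Here’s a sketch of the shape of that data, using a plain dict in place of a real Firestore client (the key format and field names are illustrative guesses, not the actual schema):

```python
# An in-memory stand-in for the Firestore collection the bot writes to.
# With the real google-cloud-firestore client, the write would look like
# db.collection("tracked_messages").document(doc_id).set(record).
tracked_messages: dict[str, dict] = {}

def track_message(channel: str, ts: str, text: str) -> str:
    """Record a freshly sent message so the scheduled check can find it later."""
    doc_id = f"{channel}:{ts}"  # Slack identifies a message by channel + timestamp
    tracked_messages[doc_id] = {
        "channel": channel,
        "ts": ts,
        "text": text,
        "reply_count": 0,
        "reaction_count": 0,
    }
    return doc_id

def mark_answered(doc_id: str) -> None:
    """Stop tracking a message once it gets a reply or a reaction."""
    tracked_messages.pop(doc_id, None)
```

Two functions and one collection; the hard part, as I was about to learn, was everything around them.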
I realized I needed help. That’s when I turned to a co-worker, an expert Integration and Python developer, for mentorship.
The Expert’s Intervention: A Simple, Crucial Pivot
My co-worker, bless their heart, quickly recognized that the core issue was architectural. The LLM was trying to force one function to do two very different, very incompatible things. The “Healthcheck failed” error, which had been plaguing me, was a perfect example of a misleading error message. The expert explained it simply: “Your scheduled bot isn’t a web server, so it can’t respond to a health check. The tool is just confused.”
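The fix my co-worker pointed toward was separating the two incompatible jobs: a web-facing function that answers health checks (and Slack’s event pings) immediately, and a separate scheduled function that does the slow checking with no web server involved. A sketch of that split, with both entry points reduced to plain functions (the names, paths, and payloads are illustrative, not the deployed code):

```python
import json

def http_entry_point(path: str) -> tuple[int, str]:
    """The web-facing function: respond fast so health checks pass.

    This is the piece that must answer HTTP probes; it never does
    long-running work itself.
    """
    if path == "/healthz":
        return 200, "ok"  # the health check my original design kept failing
    if path == "/slack/events":
        # Acknowledge immediately; Slack expects a reply within a few seconds.
        return 200, json.dumps({"ok": True})
    return 404, "not found"

def scheduled_entry_point(pending: list[dict]) -> list[dict]:
    """The scheduled function: no web server, so no health check to satisfy.

    A scheduler invokes this on a timer; it just scans the tracked
    messages and returns the ones still awaiting a nudge.
    """
    return [m for m in pending if m["reply_count"] == 0 and m["reaction_count"] == 0]
```

Once the scheduled work stopped pretending to be a web server, the misleading error disappeared.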
After multiple hours of failing with Gemini, my co-worker and I moved over to another LLM, Claude. Claude has a reputation as a stronger code generator, but since we were using all Google products, we had figured the original plan would work (not so much).
Claude came to the rescue and generated a refined and, more importantly, simplified version of the code, restructuring it so it would pass the health check and deploy correctly. This time, the code worked flawlessly.
Lessons Learned
With the code now functioning perfectly, I could return to Gemini, provide it with the working code, and use it for small, incremental updates. I discovered that Gemini had no issue making minor, precise edits to code that was already working. For instance, updating the nudge time from 24 hours to 12 hours was a breeze, as was creating a slash command in Slack to manually “nudge” my nudge bot.
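The slash command was a good fit for that incremental-edit workflow: Slack POSTs a small form payload to your bot, and the handler just needs to return a response. A minimal sketch of what such a handler might look like (the command name, payload fields, and response text are mine, not the bot’s actual code):

```python
def handle_slash_command(payload: dict, pending_count: int) -> dict:
    """Handle a hypothetical /nudge slash command from Slack.

    `payload` mimics the form fields Slack sends with a slash command;
    `pending_count` stands in for a lookup against the stored messages.
    """
    if payload.get("command") != "/nudge":
        return {"response_type": "ephemeral", "text": "Unknown command."}
    return {
        "response_type": "ephemeral",  # only the requesting user sees the reply
        "text": f"You have {pending_count} unanswered message(s) waiting for a nudge.",
    }
```

Because the handler is this small and self-contained, it was exactly the kind of change Gemini could bolt onto working code without breaking anything.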
This experience taught me several valuable lessons:
- LLMs are not magic: They can’t always produce a perfect, complex solution from scratch, especially for multi-part systems. They’re more like brilliant but easily distracted interns – sans the Starbucks addiction.
- Human expertise is invaluable: An expert developer can quickly diagnose architectural flaws that an LLM might miss. Mine also provided much-needed moral support when I was about to quit.
- Break it down: Solving a complex problem by splitting it into smaller, more manageable pieces is a fundamental programming principle that applies even when using an LLM.
In the end, my bot now works exactly as I want it to. I’m no longer a non-coder; I’m a person who has successfully built and deployed a real-world application. It took over four hours and a lot of trial and error, but the sense of accomplishment is worth every failed deployment.