
The Hidden Automation Detector: A Journey to Clarity

  • Writer: Sangamesh Gella
  • Sep 28
  • 7 min read

Updated: Oct 6

The Extension That Nobody Understood (The First Time)


Six months ago, I faced a problem that drove me insane. Every time I saved a Salesforce record, mysterious things happened. Triggers fired, flows executed, and validation rules kicked in. It was all invisible until something broke.


You know the drill. Save an Opportunity, and you get a random error message about a validation rule you've never heard of. Or worse, everything saves fine, but your flow didn't work as expected, and you have no clue why.


So I built a Chrome extension to fix it. I called it the Hidden Automation Detector. The idea was simple: show me exactly what automation runs when I save records.


However, my first version was a disaster.


I did what I always do: opened VS Code and started coding. No real planning; I thought, "I'll figure it out as I go." I built a basic content script to detect form submissions, added some debug log parsing, and created a pop-up interface.
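To give a sense of how naive that first version was, here's a hypothetical reconstruction (not the shipped code) of the kind of save detection it relied on: a MutationObserver waiting for Lightning's success toast to appear in the DOM.

```javascript
// Naive save detection: watch the DOM for Lightning's success toast.
// looksLikeSaveToast is a pure predicate so it can be tested in isolation.
function looksLikeSaveToast(text) {
  // Lightning toasts read like: Opportunity "Acme - 100 Widgets" was saved.
  return /\bwas (created|saved|updated)\b/i.test(text || "");
}

// Content-script side (browser only): fire a callback on each detected save.
function watchForSaves(onSave) {
  const observer = new MutationObserver((mutations) => {
    for (const m of mutations) {
      for (const node of m.addedNodes) {
        if (node.nodeType === 1 && looksLikeSaveToast(node.textContent)) {
          onSave(node.textContent.trim());
        }
      }
    }
  });
  observer.observe(document.body, { childList: true, subtree: true });
  return observer;
}
```

The obvious weakness, which users found immediately, is that toast text varies across orgs and languages, and plenty of automation runs on saves that never show a toast at all.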


Three weeks later, I had something that technically worked. It could capture debug logs and show some basic timeline info. I was proud of it.


Then I launched it and watched it fail spectacularly.


The Launch That Made Me Question Everything


In the first week, I had 12 downloads. Most people installed it, tried it once, and immediately uninstalled it.


The reviews were brutal: "Too complicated to set up," "Doesn't show what I expected," and "Interface is confusing."


The killer feedback came from a Salesforce developer who said, "I can see this could be useful, but I can't figure out how to make it work for my use case."


That hit me hard because they were right. I had built something that solved my specific problem in my way, without considering how anyone else might want to use it.


My Terrible Development Process (And Why It Failed)


Here's how I was "developing" before this wake-up call:


  1. Have a problem.

  2. Start coding immediately.

  3. Add features as they occur to me.

  4. Test on my own Salesforce org.

  5. Ship it and pray.


There was no user research, no precise requirements, and no systematic testing: just me, my assumptions, and a lot of hope.


When I talked to people who had tried the extension, the feedback was consistent: "I couldn't figure out what it was supposed to do or how it was supposed to help me debug my specific automation problems."


I had never clearly defined what problems I was solving for whom.


Discovering Agentic Development (And Getting Sceptical)


After licking my wounds for a few weeks, I began exploring more effective development approaches. That's when I discovered a concept called "agentic development," which utilises AI assistants to run the development process itself.


At first, it sounded like typical AI hype. "Let AI write your code!" Yeah, sure. I had tried GitHub Copilot; it's helpful for boilerplate but not much else.


But this was different. Instead of AI just autocompleting code, it was about AI managing entire development workflows while I focused on specifications and requirements.


The Tools I Discovered


  • Cursor is my primary IDE with AI pair programming.

  • Claude Code for command-line AI development.

  • MCP (Model Context Protocol) connects AI to my tools.

  • Linear for project management, connected via MCP.


The workflow looked insane: I'd describe what I wanted to build, Claude Code would create Linear tickets for technical requirements, implement the code, test it, and even update the tickets when done.


I was sceptical. But I was also desperate.


Rebuilding from Scratch (With Actual Requirements This Time)


Before touching any code, I did something I'd never done: I talked to Salesforce developers about their debugging problems.


Ten conversations later, I had a completely different understanding of what people actually needed:


  • Real-time visibility into automation execution without manually managing TraceFlags.

  • Visual timelines showing exactly what ran and when.

  • Multi-log support to see patterns across multiple saves.

  • Zero-setup experience that works with existing Salesforce sessions.

  • Detailed breakdowns of flows, triggers, validation rules, and SOQL queries.


Not just "show debug logs" but specific, actionable debugging workflows.


The /specify Phase: Writing Requirements That Actually Work


Using GitHub's spec-driven development approach, I started with /specify, describing precisely what I wanted.


This time, I was specific about user workflows, exact functionality, and measurable outcomes, no vague technical requirements.


The /plan Phase: Technical Architecture That Made Sense


Then I used /plan to define the technical approach.


Claude Code generated a comprehensive technical plan, including file structure, API patterns, security considerations, and performance optimisations. It was way more thorough than my usual "wing it" approach.


The /tasks Phase: Actually Managing the Project


Instead of my chaotic feature-adding, I got structured tasks in Linear:


  1. Authentication System - SID cookie detection, OAuth fallback, session management.

  2. Debug Log Management - TraceFlag automation, log collection, error handling.

  3. Timeline Parser - Regex patterns for flows, triggers, validation rules, SOQL.

  4. HUD Interface - Real-time overlay, recent activity detection, and user interaction.

  5. Modal Timeline - Detailed view, expandable sections, copy functionality.

  6. Multi-Log Support - Handle multiple recent logs with smart correlation.

  7. Testing & Polishing - Cross-org testing, performance optimisation, edge cases.


Each task had clear acceptance criteria and dependencies. This was revolutionary for someone like me who usually keeps adding features until something works.
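Task 1, for instance, hinges on reading the existing Salesforce session cookie instead of forcing a login. A minimal sketch of the idea, assuming Chrome's `cookies` permission and the common enhanced-domain naming pattern (real orgs, sandboxes included, need more branches than this):

```javascript
// Sketch for task 1: map a Lightning tab's hostname to the API host that
// owns the "sid" session cookie. Covers only the common enhanced-domain
// pattern; other domain shapes fall back to OAuth.
function apiHostFor(lightningHost) {
  const m = lightningHost.match(/^([\w-]+)\.lightning\.force\.com$/);
  if (m) return `${m[1]}.my.salesforce.com`;
  if (/\.my\.salesforce\.com$/.test(lightningHost)) return lightningHost;
  return null; // unrecognised domain: trigger the OAuth fallback
}

// Background service worker side (browser only, needs "cookies" permission):
function getSessionId(lightningHost) {
  return new Promise((resolve) => {
    const host = apiHostFor(lightningHost);
    if (!host) return resolve(null);
    chrome.cookies.get({ url: `https://${host}`, name: "sid" }, (cookie) =>
      resolve(cookie ? cookie.value : null)
    );
  });
}
```

This is what "zero-setup experience" cashes out to in practice: if the user is already logged in to Salesforce, the extension just works.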


The Implementation (Where Everything Actually Worked)


Here's where it got weird. I told Claude Code to implement the plan, and it generated the entire extension:


  • Content scripts with sophisticated DOM monitoring.

  • Background service worker handling Salesforce API calls.

  • An authentication system supporting multiple domain patterns.

  • Timeline parser with regex patterns for every automation type.

  • UI components that actually looked professional.


The core implementation took about four hours. Not because the AI was magic, but because I had done the hard work upfront: understanding the problem and clearly specifying the solution.


The Technical Deep Dive (For the Curious)


The most interesting piece was the automation parser, which ultimately had to handle some genuinely complex patterns.


The parser constructs a nested timeline of automation events, triggers that invoke flows, which in turn execute DML operations. It's the kind of regex-heavy, edge-case-riddled code that would have taken me weeks to get right.


Claude Code generated it in minutes and handled edge cases I hadn't even thought of.
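To make the nesting idea concrete, here's a heavily simplified sketch of the core trick, not the generated parser itself. Salesforce debug logs are pipe-delimited, with paired events like `CODE_UNIT_STARTED`/`CODE_UNIT_FINISHED`, so a stack turns the flat lines into a tree; the real parser handles far more event types and malformed logs.

```javascript
// Sketch of the nesting logic. The event names are real debug log events,
// but this covers only a fraction of what a full parser must handle.
const OPEN = /^(CODE_UNIT_STARTED|FLOW_START_INTERVIEW_BEGIN|SOQL_EXECUTE_BEGIN|DML_BEGIN)$/;
const CLOSE = /^(CODE_UNIT_FINISHED|FLOW_START_INTERVIEW_END|SOQL_EXECUTE_END|DML_END)$/;

function parseTimeline(logText) {
  const root = { event: "ROOT", children: [] };
  const stack = [root];
  for (const line of logText.split("\n")) {
    const parts = line.split("|");
    if (parts.length < 2) continue; // not an event line
    const event = parts[1];
    const detail = parts.slice(2).join("|");
    if (OPEN.test(event)) {
      // Opening event: attach to current parent, then descend into it.
      const node = { event, detail, children: [] };
      stack[stack.length - 1].children.push(node);
      stack.push(node);
    } else if (CLOSE.test(event)) {
      if (stack.length > 1) stack.pop(); // ascend to the parent
    } else if (event === "VALIDATION_RULE") {
      // Leaf event: record it without changing depth.
      stack[stack.length - 1].children.push({ event, detail, children: [] });
    }
  }
  return root;
}
```

Even this toy version shows why the problem is stack-shaped: a trigger that launches a flow that runs a query produces three levels of nesting from a flat stream of lines.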


What I Learned About My Own Process


The biggest revelation wasn't about AI capabilities. It was about my own terrible habits.


Before, I was using coding as a way to avoid thinking through problems properly.


Starting to build felt like progress, even when I didn't know what I was building toward.


The agentic development approach forced me to:


  • Understand user problems before proposing solutions.

  • Write specific requirements instead of keeping them in my head.

  • Think through edge cases and error conditions upfront.

  • Define what "done" actually means for each feature.


The AI didn't replace my judgment; it amplified it by handling the mechanical parts, allowing me to focus on critical thinking.


The Results (That Actually Matter)


I relaunched Hidden Automation Detector in September. It was a completely different story:


  • 10+ active users within a month.

  • 5-star average rating.

  • Reviews like "This is exactly what I needed for debugging flows" and "Finally, I can see what's happening when records save."

  • Approximately 85% of users remain active after 30 days.


More importantly, I received feedback like "I debugged a complex flow issue in 5 minutes instead of an hour" and "This shows me automation patterns I never would've noticed manually."


The extension wasn't more technically sophisticated than my first attempt. If anything, the user interface was more straightforward. But it solved real problems in ways people could immediately understand and use.


Why This Approach Actually Works


The magic isn't in AI generating code. It's in the process of forcing you to think before you build.


When I had to write specifications that AI could implement, I couldn't be vague. I had to be precise about:


  • What problems I was solving and for whom.

  • Exactly how the solution should behave.

  • Which edge cases needed handling.

  • How I'd measure success.


The AI handled tedious tasks, such as authentication flows, regular expression patterns, UI boilerplate, and error handling. This allowed me to focus on what truly mattered: understanding users and designing compelling experiences.


The Agentic Development Workflow in Practice


Here's what my daily development process looks like now:


Morning standup with Claude Code: "What did we accomplish yesterday? What are we working on today? Any blockers?"


Feature conversations: Instead of writing code, I describe what I want to achieve. Claude creates Linear tickets, estimates effort, and identifies dependencies.


Implementation: Claude Code writes the code, runs tests, commits to git, and updates tickets. I review and provide feedback.


Debugging: When something breaks, I describe the issue. Claude investigates, proposes fixes, and implements them.


It's like having an excellent developer on the team who never gets tired, doesn't have ego conflicts, and can implement ideas as fast as I can describe them.


What This Means for How We Build Software


I'm not saying AI writes perfect code (it doesn't). I'm not saying it replaces developers (it won't).


I'm saying the relationship between specification and implementation fundamentally changes when AI handles the translation.


Instead of thinking "how do I code this feature?", you think "how do I specify this feature clearly enough that the implementation will be correct?"


For Salesforce development specifically, this could change how we approach:


  • Flow Design: Describe business logic, generate Flow configurations.

  • Lightning Components: Specify user interactions, get production-ready code.

  • Integration Patterns: Define data flows and implement API connections.

  • Automation Testing: Describe test scenarios and generate test automation.


The highest-leverage skill isn't becoming a better coder. It's becoming better at understanding problems and translating them into specifications that can be built.


Try This If You're Building Anything


I'm genuinely curious: what's that project you've been putting off because it feels too complex to plan out properly? The Salesforce automation you know would help, but it seems too daunting to build. The Chrome extension idea you keep starting and abandoning?


The tools I used are mostly free:


  • Cursor (paid but reasonable).

  • Claude Code (free tier available).

  • GitHub's spec-kit (open source).

  • Linear (free for small teams).


Start with something you understand well enough to write clear and accurate requirements for. The key is being specific about what you want, not how to build it.


Try it and let me know what happens. Did writing specifications change how you thought about the problem? Did the AI implementation match what you had in your head? What worked and what felt off?


I'm still learning this approach myself, but it's already changed how I think about building software. Instead of coding my way to understanding, I'm understanding my way to better code.


Tell me about the Salesforce project you've been putting off. Drop a comment.


Maybe this approach is precisely what you need to ship something people actually want. In the next post, I'll break the process down in detail, with a real example and a video showing the exact workflow I use for development.


P.S. - If you want to try Hidden Automation Detector, it's on the Chrome Web Store. And if you try agentic development, definitely let people know how it goes. This stuff is moving fast, and we're all figuring it out together.



© 2025 by Sangamesh Gella. Powered and secured by Wix
