
How I Built a Salesforce Chrome Extension Using AI Agents (And Why Spec-Driven Development Actually Works)

  • Writer: Sangamesh Gella
  • Sep 13
  • 5 min read

Here's the thing about building software in 2025: I just spent four weeks creating a Chrome extension that provides real-time visibility into Salesforce automation execution, and I barely wrote any code myself.


No, this isn't some "AI will replace developers" hot take. This is about something way more interesting: agentic AI development, where AI handles the implementation while you focus on the specifications and business logic.


Illustration of a developer at a desk with glowing screens, a puzzle piece, and the title "Agentic AI Development: From Specs to Code."

What I Actually Built Using AI Agents


Let me start with what the extension does. If you've ever wondered, "What the hell is running when I save this Salesforce record?", you know the pain. Triggers fire, flows execute, validation rules kick in - and it's all invisible until something breaks.


My Hidden Automation Detector extension solves this by:

  • Monitoring record saves in real-time

  • Automatically managing Salesforce TraceFlags

  • Parsing debug logs into visual timelines

  • Showing precisely what automation executed and when


The technical architecture? A Chrome extension with content scripts, background service workers, SID cookie authentication, and a complete timeline parser. About 2,000 lines of JavaScript across 20+ files.
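To make that architecture concrete, a Manifest V3 layout for this kind of extension might look like the sketch below. The file names, match patterns, and permissions here are my assumptions for illustration, not the extension's actual manifest:

```json
{
  "manifest_version": 3,
  "name": "Hidden Automation Detector",
  "version": "1.0.0",
  "permissions": ["cookies", "storage"],
  "host_permissions": ["https://*.salesforce.com/*", "https://*.force.com/*"],
  "content_scripts": [
    {
      "matches": ["https://*.lightning.force.com/*", "https://*.my.salesforce.com/*"],
      "js": ["content.js"]
    }
  ],
  "background": { "service_worker": "background.js" }
}
```

The content script watches the page for record saves, while the service worker handles the privileged work (cookies, TraceFlags, log retrieval) that content scripts can't do directly.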


And here's the kicker - I didn't write most of it.


The Agentic Development Stack


My development setup looked like this:

  • Cursor as my primary IDE with AI pair programming

  • Claude Code for command-line agentic development

  • MCP (Model Context Protocol) connecting Claude Code to my tools

  • Rube as the MCP connector to Linear for task management

  • Linear for tracking features, bugs, and technical requirements

  • Perplexity for my research


Picture this workflow: I research what I intend to build and iterate toward an MVP, then have a conversation with Claude about a feature. Claude drafts the feature's specifications. I use a GitHub framework called Specify to break the work into pieces that are manageable for both me and the agent, according to the technical requirements, and I iterate on those specs. Claude Code then implements the code, tests it, and updates the tickets in Linear when done. When a feature is complete, I push it to GitHub and open a PR; Cubic Dev reviews it and flags potential gaps and improvements for me to address, and once those are handled - boom!


Spec-Driven Development in Practice


Traditional development: Write code, hope it works, fix bugs, refactor when requirements change.


Agentic development: Write specifications, let AI implement, verify against specs, iterate on requirements.


For the automation detector, my "specs" weren't formal documents. They were conversations like this:


Me: "I need to detect when users save records in Salesforce and immediately show what automation executed."


Claude Code: "I'll need to create a content script that monitors form submissions and button clicks, plus a way to parse debug logs. Should I create Linear tickets for the TraceFlag management and log parsing components?"


Me: "Yes, and make sure it works across both Classic and Lightning Experience."


Claude would then:

  1. Create detailed Linear tickets

  2. Plan the technical architecture

  3. Implement the features

  4. Test the implementation

  5. Update ticket status


The result? I spent my time on product decisions and business logic instead of debugging Chrome extension manifest files.


What Actually Worked (And What Didn't)


The Good:

  • Requirements stayed in sync with implementation because AI managed both

  • Complex parsing logic (like extracting automation timelines from debug logs) got implemented faster than I could have done manually

  • Edge cases got caught early because AI is paranoid about error handling


The Frustrating:

  • Sometimes, AI would over-engineer solutions when simple approaches worked fine.

  • I had to learn to be more precise with specifications - vague requirements produced code that was equally vague.

  • Debugging AI-generated code requires a different mindset than debugging your own code.


The Surprising:

  • The code quality was consistently better than my usual first drafts

  • Architecture decisions were more thoughtful because AI considers more patterns

  • Documentation actually got written (because I specified it)


The Authentication Challenge


The trickiest part was Salesforce authentication. The extension needed to work without requiring users to set up OAuth flows or API tokens.


My spec: "Use the existing Salesforce session cookie (SID) that's already in the browser."


Claude Code's response: "I'll implement SID cookie detection across multiple Salesforce domain patterns, with fallback authentication methods and proper error handling for expired sessions."


The implementation involved scanning for SID cookies across different domain variations (e.g., *.my.salesforce.com, *.lightning.force.com) and mapping them to Salesforce's various subdomain patterns.
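As a rough illustration of what that scanning involves, here's a minimal sketch (the function names are my own, not the extension's actual code, and it assumes the promise-based Manifest V3 `chrome.cookies` API):

```javascript
// Salesforce sets its session cookie ("sid") under several domain
// patterns, so the lookup has to try each one.
const SALESFORCE_DOMAIN_PATTERNS = [
  /\.my\.salesforce\.com$/,
  /\.lightning\.force\.com$/,
  /\.salesforce\.com$/,
  /\.force\.com$/,
];

function isSalesforceDomain(domain) {
  // Cookie domains may carry a leading dot, e.g. ".acme.my.salesforce.com"
  const normalized = domain.replace(/^\./, "");
  return SALESFORCE_DOMAIN_PATTERNS.some((p) => p.test("." + normalized));
}

async function findSessionCookie() {
  // Requires the "cookies" permission plus matching host_permissions
  const candidates = await chrome.cookies.getAll({ name: "sid" });
  return candidates.find((c) => isSalesforceDomain(c.domain)) || null;
}
```

The domain check is the part that's easy to get wrong by hand; enumerating the patterns explicitly is exactly the kind of edge-case coverage the AI handled systematically.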


Would I have thought through all those edge cases upfront? Probably not. Did the AI implementation handle them systematically? Absolutely.


Real-World Performance


Since deploying this extension:

  • 60% faster debugging when Salesforce automations misbehave

  • Zero manual TraceFlag management (the extension handles arming/disarming automatically)

  • Visual timeline parsing that would have taken me weeks to implement manually
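A quick aside on what "arming" a TraceFlag means mechanically: TraceFlags are records created through Salesforce's Tooling API. A sketch of that call might look like this (the helper names, API version, and session handling are my assumptions for illustration; the field names come from the TraceFlag object):

```javascript
// "Arming" logging means creating a TraceFlag record via the Tooling API.
function buildTraceFlagPayload(userId, debugLevelId, minutes = 30) {
  const now = Date.now();
  return {
    TracedEntityId: userId,       // the user whose activity to log
    DebugLevelId: debugLevelId,   // a DebugLevel record controlling verbosity
    LogType: "USER_DEBUG",
    StartDate: new Date(now).toISOString(),
    ExpirationDate: new Date(now + minutes * 60 * 1000).toISOString(),
  };
}

async function armTraceFlag(instanceUrl, sid, userId, debugLevelId) {
  const res = await fetch(
    `${instanceUrl}/services/data/v61.0/tooling/sobjects/TraceFlag/`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${sid}`, // the session ID doubles as a bearer token
        "Content-Type": "application/json",
      },
      body: JSON.stringify(buildTraceFlagPayload(userId, debugLevelId)),
    }
  );
  return res.json();
}
```

"Disarming" is the mirror image: deleting or letting the TraceFlag expire so the org isn't left generating debug logs indefinitely.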


But the bigger win is how this development approach scales. With Linear, MCP, Specify, and Claude Code wired together, I can iterate on features just by describing what I want differently. The AI updates the code, tests it, and manages the project tracking.


What This Means for Development


I'm not saying AI writes perfect code (it doesn't). I'm not saying it replaces developers (it won't).


I'm saying the relationship between specification and implementation fundamentally changes when AI handles the translation.


Instead of thinking "how do I implement this feature?", you think "how do I specify this feature clearly enough that the implementation will be correct?"


It's spec-driven development, but the specifications are conversations rather than documents.


The Technical Deep Dive


For the curious, here's what the automation detector actually parses from Salesforce debug logs:

// Regex patterns the parser applies line-by-line to debug log entries
const LOG_PATTERNS = {
  // Flow execution patterns
  flowStart: /FLOW_START_INTERVIEW_BEGIN.*?\|([^|]+)\|(.+)/,
  flowElementBegin: /FLOW_ELEMENT_BEGIN.*?\|([^|]+)\|([^|]+)/,

  // Trigger patterns
  triggerStart: /CODE_UNIT_STARTED.*?\|([^|]+trigger[^|]*)\|/i,
  triggerEnd: /CODE_UNIT_FINISHED.*?\|([^|]+trigger[^|]*)\|/i,

  // Validation rule patterns
  validationRule: /VALIDATION_RULE.*?\|([^|]+)\|/,
  validationFail: /VALIDATION_FAIL.*?\|([^|]+)/,
};

The parser builds a timeline of automation events with proper nesting for triggers that call flows that execute DML operations. It's the kind of regex-heavy, edge-case-riddled code that AI excels at generating.
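To illustrate the nesting problem, here's a heavily simplified sketch of how a stack-based timeline builder works (my own toy version with simplified patterns, not the extension's actual parser):

```javascript
// CODE_UNIT_STARTED lines open an event, CODE_UNIT_FINISHED lines close
// the most recent open one; a stack keeps nested units (a trigger that
// launches a flow) attached under their parent.
function parseTimeline(logLines) {
  const root = { name: "root", children: [] };
  const stack = [root];
  for (const line of logLines) {
    const m = line.match(/CODE_UNIT_STARTED\|(.+)$/);
    if (m) {
      const event = { name: m[1], children: [] };
      stack[stack.length - 1].children.push(event); // attach under open unit
      stack.push(event);
    } else if (line.includes("CODE_UNIT_FINISHED") && stack.length > 1) {
      stack.pop();
    }
  }
  return root.children;
}
```

The real log format carries timestamps, IDs, and far messier delimiters, which is why the production version needs the pattern table above plus a pile of edge-case handling.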


What's Next


I'm genuinely curious about where this development approach leads. If AI can handle implementation details this well, what happens when it can also:


  • Automatically refactor based on new requirements?

  • Suggest architectural improvements based on usage patterns?

  • Generate test cases from specifications?


The Hidden Automation Detector was my first serious project using full agentic development. It won't be my last.


Want to try the extension? It's available on the Chrome Web Store. Curious about the agentic development process? I'll be creating a series on it and will keep you posted.


For more details, you can watch the video below.


Drop a comment - I'm genuinely interested in hearing about others' experiences with AI-assisted development. In a subsequent blog, I will provide a deeper dive into the world of spec-driven development, using an actual example to help you understand it more easily.


The code isn't perfect, but the development experience was transformative. And in a world where requirements change faster than developers can implement them, maybe that's precisely what we need.


P.S. If you found this helpful, I write about Salesforce, AI tools, and productivity stuff that actually works: no fluff, no generic advice, just real experiences from the trenches. For more information, please visit my website's home page and subscribe. Thank you for reading this.



