Creating AI-driven content delivery
Content design at JPMorganChase was catching the blame for slow delivery on work we didn’t actually own. The bottleneck wasn’t writing. It was that every team had its own way of documenting standards, every LLM tool the firm was spinning up needed those standards to be machine-readable, and there was no shared format that could feed both.
I designed and shipped the internal documentation standard our LLMs and AI agents are now trained on. Over nine months, I wrote the rules, partnered with engineering on a Figma plugin and API integration, and rolled the tool out across 25 content design teams. The result: an 80% efficiency gain in content delivery, and a single source of truth that humans, models, and agents can all pull from.
Everyone had accepted this as the cost of doing the job. I didn’t want to.
The problem
Before the standard existed, getting a piece of content from a Figma file into our content management system took 6 to 8 hours per ticket on average. The hours were the visible cost. The harder cost was political: content design was the team holding the work, so content design was the team taking the heat for slow delivery.
The deeper problem was that no two teams documented the same way. Some used Confluence pages. Some used spreadsheets. Some used inline Figma comments. None of it was structured in a way an LLM could read reliably, which meant every AI tool the firm built had to be retrained on a different shape of input, every time.
Writing the rules
I wrote rules for the firm’s internal LLM suite: how content should be pulled from Figma, how tags should be applied, and how the resulting table should be structured so a model could parse it without ambiguity. The standard wasn’t a style guide. It was a schema.
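To make “schema, not style guide” concrete, here is a minimal sketch of the kind of record such a standard might define. Every field name below is hypothetical, chosen for illustration; the actual schema is internal to the firm.

```typescript
// A minimal sketch of the kind of record the standard defines.
// All field names are illustrative; the real schema is internal.
interface ContentRecord {
  id: string;             // stable key so models and agents can reference the string
  surfaceText: string;    // the copy exactly as it appears in the product
  intent: string;         // what the copy is meant to do, e.g. "confirm a payment"
  component: string;      // the UI element it belongs to, e.g. "primary button"
  tags: string[];         // labels applied per the standard's controlled vocabulary
  hierarchy: string[];    // screen > section > element path from the Figma file
  source: { figmaFileKey: string; nodeId: string }; // where it was pulled from
}
```

The point of fields like intent is that a model reading the record gets the purpose of the copy, not just its surface form.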
The hardest part wasn’t the technical build. It was deciding to take the work on. Content design is supposed to be downstream of product. Owning a firm-wide standard meant arguing for scope that wasn’t in my job description, with stakeholders who had their own ideas about what AI tooling should look like.
How content moves through the standard
The standard takes a content design ticket from Figma, pulls structured fields out of it, applies the tags and hierarchy the model needs, and writes the result to a Review table inside the chat interface. Content designers see the parsed output, confirm or correct it, and only then does it get sent to Confluence as the canonical record.
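A rough sketch of that flow, in TypeScript for illustration, reusing the ContentRecord shape above. parseTicket, presentReviewTable, and writeToConfluence are hypothetical stand-ins for the Figma plugin, the chat interface, and the Confluence integration; none of them are real firm APIs.

```typescript
// Hypothetical stand-ins for the internal integrations described above.
declare function parseTicket(figmaTicketUrl: string): Promise<ContentRecord[]>;
declare function presentReviewTable(
  records: ContentRecord[]
): Promise<{ approved: boolean; records: ContentRecord[] }>;
declare function writeToConfluence(records: ContentRecord[]): Promise<void>;

// The generate-and-confirm flow: parse, review, then publish.
async function deliverContent(figmaTicketUrl: string): Promise<void> {
  const records = await parseTicket(figmaTicketUrl);    // pull fields, tags, hierarchy
  const confirmed = await presentReviewTable(records);  // human checkpoint in the chat UI
  if (confirmed.approved) {
    await writeToConfluence(confirmed.records);         // only then: canonical record
  }
}
```

The design choice that matters is the gate in the middle: nothing reaches Confluence until a content designer has confirmed the parsed table.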
Outcomes
Measured over nine months of rollout:
An 80% efficiency gain in content delivery, against a baseline of 6 to 8 hours per ticket.
Adoption across 25 content design teams, using the tool on their live work.
A single machine-readable source of truth that humans, models, and agents all pull from.
What I learned
Three things stuck with me from this work:
The agent’s output was only as good as the rules I wrote. The model needed to know the intent of a piece of copy, not just its surface form. Treating the model as a reader with user needs is what produced reliable output. That’s a content design instinct, not an engineering one.
The two-step generate-and-confirm pattern wasn’t an obvious choice. Teams wanted speed. Engineering wanted parameter control. I shipped the version where users could review the table before sending it to Confluence, because content designers didn’t trust a one-click flow. The conversational copy is what got 25 teams to use the tool with their actual work.
Refusing to accept the existing bottleneck is what made the work possible. Content design owned the slow part of the process by default. Naming that out loud, and proposing a system to fix it, is what unlocked the partnership with engineering and the budget to build it.