Plugging my AI editor into Puck: how hard could it be?

Puck has 12k+ GitHub stars and a clean plugin system. I wanted to see if my editor's architecture could plug into it — and what I'd learn from trying.

  • AI
  • Editor
  • Puck
  • Open Source

The experiment

I’d been building my AI site editor with a chat-first interface — you describe what you want, the system generates structured operations, you review and approve. It works well for intent-driven changes. But I kept wondering how my architecture would hold up if I plugged it into a completely different editing UI.

Puck caught my eye. It’s an open-source visual editor for React with over 12,000 GitHub stars, a drag-and-drop canvas, and a plugin system. It’s clearly the visual editor the React community has rallied around. I wanted to test a specific question: is my operations pipeline actually editor-agnostic, or is it secretly coupled to my own chat UI?

If the architecture were clean, I should be able to swap in Puck as a visual editing surface and have it produce the same operations that flow through the same orchestrator. If it weren’t, I’d learn exactly where the coupling was hiding.

What I actually had to build

The integration came down to three adapter functions:

  • pageToPuckData() — converts my page documents into Puck’s data format so the canvas can render them
  • buildOpsFromPuckDiff() — converts Puck state changes back into my standard operations (add_block, update_block, remove_block, move_block)
  • createPuckConfig() — converts my block manifest into Puck’s field definitions

That’s it. No changes to the orchestrator, no changes to the site, no changes to the publishing pipeline. Puck produces state diffs, my adapter turns them into operations, and the orchestrator handles them the same way it handles operations from the chat editor.
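To make the diff-to-operations direction concrete, here’s a minimal sketch of what buildOpsFromPuckDiff could look like. The types (PuckItem, PuckLikeData, Operation) are hand-rolled stand-ins, not Puck’s real Data type or my actual code, and the move detection is deliberately naive:

```typescript
// Stand-in for Puck's data shape: an ordered list of components with props.
// (Hypothetical types -- Puck's real Data type is richer than this.)
type PuckItem = { id: string; type: string; props: Record<string, unknown> };
type PuckLikeData = { content: PuckItem[] };

type Operation =
  | { op: "add_block"; id: string; blockType: string; index: number; props: Record<string, unknown> }
  | { op: "update_block"; id: string; props: Record<string, unknown> }
  | { op: "remove_block"; id: string }
  | { op: "move_block"; id: string; index: number };

// Turn a before/after pair of Puck states into standard operations.
function buildOpsFromPuckDiff(before: PuckLikeData, after: PuckLikeData): Operation[] {
  const ops: Operation[] = [];
  const beforeById = new Map(before.content.map((c, i) => [c.id, { item: c, index: i }]));
  const afterIds = new Set(after.content.map((c) => c.id));

  // Removed blocks: present before, absent after.
  for (const { item } of beforeById.values()) {
    if (!afterIds.has(item.id)) ops.push({ op: "remove_block", id: item.id });
  }

  after.content.forEach((item, index) => {
    const prev = beforeById.get(item.id);
    if (!prev) {
      // New block: not in the previous state at all.
      ops.push({ op: "add_block", id: item.id, blockType: item.type, index, props: item.props });
      return;
    }
    // Props changed: cheap structural comparison, fine for a sketch.
    if (JSON.stringify(prev.item.props) !== JSON.stringify(item.props)) {
      ops.push({ op: "update_block", id: item.id, props: item.props });
    }
    // Naive move detection: compares absolute indices only.
    if (prev.index !== index) {
      ops.push({ op: "move_block", id: item.id, index });
    }
  });

  return ops;
}
```

The key property is that the output is the same Operation union the chat editor already emits, so everything downstream stays untouched.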

Chat editor  ────────────→  operations  ──→  Orchestrator  ──→  Site
Puck editor  ──→ adapter ──→  operations  ──→  Orchestrator  ──→  Site

The site never knows which editor produced the changes.
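That last point can be pictured as a reducer the orchestrator runs over incoming operations, with no notion of where they came from. The PageDoc shape and applyOperations name here are my stand-ins for illustration, not the project’s actual code:

```typescript
type Block = { id: string; type: string; props: Record<string, unknown> };
type PageDoc = { blocks: Block[] };

type Operation =
  | { op: "add_block"; id: string; blockType: string; index: number; props: Record<string, unknown> }
  | { op: "update_block"; id: string; props: Record<string, unknown> }
  | { op: "remove_block"; id: string }
  | { op: "move_block"; id: string; index: number };

// One code path for chat-generated and Puck-generated operations alike.
function applyOperations(page: PageDoc, ops: Operation[]): PageDoc {
  let blocks = [...page.blocks];
  for (const op of ops) {
    switch (op.op) {
      case "add_block":
        blocks.splice(op.index, 0, { id: op.id, type: op.blockType, props: op.props });
        break;
      case "update_block":
        blocks = blocks.map((b) => (b.id === op.id ? { ...b, props: { ...b.props, ...op.props } } : b));
        break;
      case "remove_block":
        blocks = blocks.filter((b) => b.id !== op.id);
        break;
      case "move_block": {
        const i = blocks.findIndex((b) => b.id === op.id);
        if (i >= 0) {
          const [moved] = blocks.splice(i, 1);
          blocks.splice(op.index, 0, moved);
        }
        break;
      }
    }
  }
  return { blocks };
}
```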

The part that surprised me

I registered the AI chat as a Puck sidebar plugin. So now you get both: drag a section into position on the canvas, then type “write the copy for this testimonial” in the chat sidebar. Visual editing for structure, AI for content. Both sharing the same undo/redo stack and the same publishing flow.

I didn’t plan this combination upfront — it just fell out of the architecture. The fact that operations are the shared language between both editors meant I could mix them without special handling.
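One way to picture the shared undo/redo stack: both editors push the state they produce onto a single snapshot history, so undo behaves identically no matter which surface made the change. Everything below (History, push, undo, redo) is a hypothetical sketch of that idea, not the project’s actual implementation:

```typescript
// Hypothetical snapshot-based history shared by the chat and Puck editors.
type Snapshot = { blocks: unknown[] };

class History {
  private past: Snapshot[] = [];
  private future: Snapshot[] = [];
  constructor(private present: Snapshot) {}

  // Called after any editor's operations are applied, regardless of source.
  push(next: Snapshot): void {
    this.past.push(this.present);
    this.present = next;
    this.future = []; // a fresh edit invalidates the redo branch
  }

  undo(): Snapshot {
    const prev = this.past.pop();
    if (prev) {
      this.future.push(this.present);
      this.present = prev;
    }
    return this.present;
  }

  redo(): Snapshot {
    const next = this.future.pop();
    if (next) {
      this.past.push(this.present);
      this.present = next;
    }
    return this.present;
  }

  current(): Snapshot {
    return this.present;
  }
}
```

Because snapshots don’t record which editor produced them, a chat edit can be undone from the Puck toolbar and vice versa.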

What I learned

The adapter layer is small. Three functions, no changes to the core. That told me the operations pipeline is genuinely decoupled from the editor UI — which was the whole point of the experiment.

Puck’s plugin system makes it embeddable. I’m running it inside my editor app, not as a standalone product. The plugin API let me wire in chat, history, and custom image pickers without forking anything.

It’s a self-hostable alternative to cloud-hosted editors. Puck itself is MIT-licensed. Embedding it means my users get a visual editor without a separate SaaS dependency — same principle as the rest of the project: open, composable, runs on your own infrastructure.

What’s still rough

Puck mode is experimental. Blocks render in the Puck canvas rather than the actual Next.js site, so there’s a visual fidelity gap. Nested layout zones aren’t supported yet. Image upload is basic.

But as an experiment in architectural flexibility, it worked better than I expected. I set out to test whether my pipeline was truly editor-agnostic. Turns out it is — and the combination of visual and AI editing is more useful than either one alone.