TL;DR
Interactive editing requires near-instant visual feedback. The original renderer required ~900ms to produce a full card image, which made iterative editing feel sluggish. By restructuring rendering as a deterministic incremental pipeline, the system reduced typical updates to ~80–150ms, allowing card changes to appear almost instantly as the user types.
- Rendering decomposed into deterministic pipeline stages.
- Intermediate stage caching eliminates unnecessary recomputation.
- Incremental updates enable sub-100ms interactive latency.
- Explicit render contexts isolate state between render passes.
Origin
I have always enjoyed designing game systems. As a kid I would draw cards on paper and play them with friends during lunch and recess. Years later I discovered Magic: The Gathering in college and quickly became fascinated with the design space surrounding the game.
Eventually I discovered the custom card community, where designers build entire fan-created sets using a desktop application called Magic Set Editor (MSE). While powerful for its time, the tool had not kept pace with modern editing workflows and frequently felt fragile during iterative design.
As my own custom set ideas grew more ambitious, the tooling itself became the bottleneck. Instead of continuing to work around those limitations, I started building the tool I wished existed: a card editor where every change instantly updates the rendered card.
Rendering Pipeline Overview
Rather than treating rendering as a single monolithic operation, the system models rendering as a sequence of deterministic stages.
Breaking rendering into stages allows intermediate results to be reused, dramatically reducing the work required for incremental updates.
Figure 1. The rendering pipeline assembling a card from structured data into a final image.
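The staged structure can be sketched as a list of pure functions, each caching its output by a hash of its input. This is a minimal illustration, not the actual implementation; the `Stage` and `Pipeline` names are hypothetical, and it assumes stage inputs are JSON-serializable so they can be hashed deterministically.

```python
import hashlib
import json

class Stage:
    """One deterministic pipeline stage: identical input yields identical output."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn
        self.cache = {}  # input hash -> cached stage output

    def run(self, inputs):
        # Hash the (JSON-serializable) input to detect repeated work.
        key = hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest()
        if key not in self.cache:
            self.cache[key] = self.fn(inputs)
        return self.cache[key]

class Pipeline:
    """Runs stages in order, feeding each stage's output to the next."""
    def __init__(self, stages):
        self.stages = stages

    def render(self, card_data):
        data = card_data
        for stage in self.stages:
            data = stage.run(data)
        return data
```

Because every stage is deterministic, a cache hit is always safe: replaying the stage would produce byte-identical output.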
The Problem
Real-time editing feels responsive only when updates complete in roughly 100 milliseconds or less. At ~900ms per full render, the original pipeline was nearly an order of magnitude too slow for interactive use.
Incremental Rendering
Most edits affect only a subset of rendering stages. The pipeline therefore recomputes only the stages affected by an edit.
For example, changing card text only invalidates the layout stage. Asset loading and glyph parsing results remain valid.
Figure 2. As the user types, only the typography layer is recomputed while previously rendered assets are reused.
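Invalidation can be expressed as a mapping from an edited field to the first affected stage; everything downstream of that stage re-runs, everything upstream is reused. The stage names and field mapping below are illustrative, not the project's actual schema.

```python
# Hypothetical stage order and edit-to-stage mapping.
STAGE_ORDER = ["load_assets", "parse_glyphs", "layout", "composite"]
INVALIDATES = {
    "text": "layout",       # text edits re-run layout and later stages only
    "art": "load_assets",   # new art invalidates the whole pipeline
}

def stages_to_rerun(edited_field):
    """Return the suffix of the pipeline that an edit invalidates."""
    first = INVALIDATES.get(edited_field, STAGE_ORDER[0])
    return STAGE_ORDER[STAGE_ORDER.index(first):]
```

Under this scheme a text edit skips asset loading and glyph parsing entirely, which is where most of the latency savings come from.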
Dynamic Text Layout
Magic cards use proportional fonts and embedded symbol glyphs. Layout must measure text segments and symbol images together.
"Tap {T}: Add {G}."
→
Text("Tap ")
Glyph(Tap)
Text(": Add ")
Glyph(GreenMana)
Text(".")
Each token is measured and positioned within the card text box before final composition.
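The tokenization step above can be sketched with a regular expression that splits rules text on `{...}` symbol codes. This is a simplified version: it emits the raw symbol code (e.g. `T`, `G`) and omits the mapping from codes to named glyphs like `GreenMana`, which the real layout stage would need.

```python
import re

SYMBOL = re.compile(r"\{([^}]+)\}")  # matches codes like {T} or {G}

def tokenize(rules_text):
    """Split rules text into ('Text', ...) and ('Glyph', ...) tokens."""
    tokens, pos = [], 0
    for m in SYMBOL.finditer(rules_text):
        if m.start() > pos:
            tokens.append(("Text", rules_text[pos:m.start()]))
        tokens.append(("Glyph", m.group(1)))
        pos = m.end()
    if pos < len(rules_text):
        tokens.append(("Text", rules_text[pos:]))
    return tokens
```

Each resulting token is then measured independently (text by font metrics, glyphs by image dimensions) before positioning.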
Execution Contexts
Early versions of the renderer allowed intermediate state to leak across render passes.
To prevent this, each render pass now operates within an explicit render context that owns its caches of pipeline stages and intermediate computations.
This isolation ensures deterministic results while enabling aggressive caching.
Design Lessons
- Performance constraints must shape architecture early.
- Deterministic pipelines simplify reasoning about complex rendering systems.
- Incremental recomputation dramatically improves interactive responsiveness.
- Explicit execution contexts prevent subtle state leaks between render passes.
Supporting the full vision of a modern card design platform would have required an entire ecosystem of tools including asset management, collaboration systems, and card databases. After reaching the core technical goal of sub-100ms rendering, I concluded that the opportunity cost of continuing the project outweighed the benefits.
Why This Matters for AI Systems
The principles behind this renderer map closely to modern AI systems. Model pipelines increasingly rely on incremental computation, structured intermediate representations, and deterministic execution.
- Incremental pipelines resemble token-stream generation pipelines.
- Structured intermediate stages mirror ML preprocessing pipelines.
- Execution contexts resemble deterministic evaluation environments.
- Latency-sensitive design mirrors real-time AI interfaces.