
How we made payment links faster
Every millisecond matters when someone is about to pay you. When a customer lands on a payment page, they’re making a snap judgment before they ever click “Place Order.” Does this feel safe? Is this legitimate? That judgment starts with how fast the page loads. A slow page makes users wonder whether their payment is secure, whether the business behind the page is real, and whether they’ll actually get what they paid for.
Speed isn’t a vanity metric. It’s the first trust signal your customer ever receives, and for payment links, it might be the most important one. We rebuilt how Moov’s payment links render, and the results speak for themselves:
| Metric (Mobile) | Before | After | Improvement |
|---|---|---|---|
| First Contentful Paint | 11.2s | 2.2s | -80% |
| Speed Index | 11.2s | 2.2s | -80% |
| Largest Contentful Paint | 16.7s | 4.4s | -74% |
On desktop, we’re now hitting 0.6s FCP, 0.6s Speed Index, 0.9s LCP, and a CLS of 0.019. That’s near-instant for a fully interactive payment form. Here’s a screenshot of our desktop Lighthouse results.

Here’s what we changed and why.
The problem
Our payment links were a React SPA. The server returned a minimal index.html shell with a JSON payload, the browser downloaded a JavaScript bundle, React started rendering, data was fetched, and then the user finally saw a payment form. The total uncached download was 4.2MB over the wire.
For a page whose entire job is to collect a payment, that architecture was working against us. Every byte and every round-trip sat between the customer and the “Pay” button.
A report from Portent found that an ecommerce site that loads in 1 second has a conversion rate 5x higher than one that loads in 10 seconds. That’s serious upside.
The fix: Server-side rendering with client-side hydration
We moved rendering out of the browser and onto the server. Instead of shipping a blank shell and a fat JS bundle, we now return fully-rendered HTML from a sidecar SSR service powered by Bun.
The request flow looks like this:
- Request hits our server
- Server fetches payment link data
- Bun-powered sidecar renders the React component tree to HTML
- Fully-formed page is returned in the initial response
- Client-side React hydrates for interactivity
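The shape of that flow can be sketched in a few lines. This is a hypothetical illustration, not Moov’s actual service: `fetchPaymentLink` and `renderPaymentPage` are stand-in names, and the real sidecar renders a React component tree via Bun’s built-in JSX support rather than concatenating strings.

```typescript
// Hypothetical SSR request flow: fetch data, render HTML server-side,
// and return a fully-formed page in a single response.
interface PaymentLink {
  merchant: string;
  amount: string; // pre-formatted, e.g. "$25.00"
}

// Stand-in for the call to the backend service that owns payment link data.
async function fetchPaymentLink(id: string): Promise<PaymentLink> {
  return { merchant: "Acme Co.", amount: "$25.00" };
}

// Server-side render: the browser receives finished markup, not a JS shell.
function renderPaymentPage(link: PaymentLink): string {
  return `<!doctype html>
<html>
  <body>
    <h1>${link.merchant}</h1>
    <p>Amount due: ${link.amount}</p>
    <div id="checkout"><!-- hydrated client-side for interactivity --></div>
  </body>
</html>`;
}

async function handleRequest(linkId: string): Promise<string> {
  // Data fetch and render happen server-side in one pass — no client waterfall.
  const link = await fetchPaymentLink(linkId);
  return renderPaymentPage(link);
}
```

The key property is that the first response already contains everything above the fold; hydration only has to attach event handlers.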
The browser gets something to paint immediately instead of staring at a blank page while it downloads, parses, and executes JavaScript.

Why Bun?
We evaluated a few approaches: Node-based SSR, edge rendering, and static pre-generation. Bun won on simplicity and raw speed. Built-in JSX support, native TypeScript support, and fast startup time made it a natural fit for a sidecar service that needs to render React components on every request without the overhead of a full Node runtime. And we didn’t need to change our build system or rewrite any code. It does the one thing we need, and it does it fast, with less memory usage than Node.
What helped move the needle
The SSR migration was the headline change, but a few things compounded to get us from 11s to 2.2s FCP on mobile and 0.6s FCP on desktop:
- Eliminating the render waterfall. In the SPA model, the browser had to download JS → execute JS → fetch data → render. With SSR, data fetching and rendering happen server-side in a single pass. The browser’s first response already contains the finished page.
- Smaller initial payload. The server returns HTML the browser can render immediately, not a JavaScript application. The hydration bundle is a fraction of the original SPA bundle because it only needs to attach event handlers to existing DOM — not rebuild it from scratch.
- Smarter chunking and lazy loading. We’ve always had chunking, lazy loading, and dead-code elimination in place, but we improved on it by chunking the codebase more aggressively and loading only the code needed for the current route. For example, we removed the payout page’s code from the payment page bundle, and vice versa.
- No layout shift. Because the server renders the final layout, the page doesn’t jump around as components mount and data arrives. Our CLS score reflects that.
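The route-splitting idea can be sketched as a chunk map. This is a hypothetical illustration, not our actual bundler config; in a real build, each loader would be a dynamic `import()` call that the bundler splits into its own chunk, so visiting the payment page never downloads payout code.

```typescript
// Hypothetical route-level code splitting: each route maps to a loader
// that fetches only that route's chunk on demand.
type RouteModule = { render: () => string };
type Loader = () => Promise<RouteModule>;

const routes: Record<string, Loader> = {
  // In the real app these would be dynamic import() calls, e.g.
  // "/pay": () => import("./payment-page"),
  // which the bundler turns into separate files.
  "/pay": async () => ({ render: () => "<form>payment form</form>" }),
  "/payout": async () => ({ render: () => "<form>payout form</form>" }),
};

async function loadRoute(path: string): Promise<string> {
  const loader = routes[path];
  if (!loader) return "<h1>Not found</h1>";
  const mod = await loader(); // only this route's chunk is fetched
  return mod.render();
}
```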
And of course the design of the page is a big factor. We load a high quality logo of the business as quickly as we can, using loading="eager" and decoding="async". The line items and amount of the payment are viewable immediately, and the only thing left to load is the payment form, which needs hints from the browser, like whether Apple Pay should be shown.
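As a rough illustration of the logo markup (hypothetical attributes and dimensions, not our actual template — note that explicit `width`/`height` also help CLS by reserving space before the image arrives):

```html
<!-- Fetch the merchant logo eagerly, but decode it off the main thread -->
<img src="/merchant/logo.png" alt="Acme Co. logo"
     loading="eager" decoding="async" width="96" height="96">
```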
A quick detour on product images. We scale the images to their rendered size, serving 2x assets on high-density screens. Based on the number of items added and the size class of the browser loading the page, we load the appropriate number of images, making sure to lazily load any that aren’t immediately in view.
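A sketch of what one of those product image tags might look like (hypothetical paths and sizes): `srcset` lets the browser pick the 1x or 2x variant for its screen density, and below-the-fold images opt into lazy loading.

```html
<!-- Hypothetical responsive product image: 1x/2x variants, lazy below the fold -->
<img src="/items/mug-480.jpg"
     srcset="/items/mug-480.jpg 1x, /items/mug-960.jpg 2x"
     alt="Coffee mug"
     loading="lazy" decoding="async">
```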

The tradeoff
SSR isn’t free. We now run a sidecar service that needs to stay healthy and fast. If the SSR service is slow, every page load is slow, since there’s no client-side fallback hiding the latency. To keep it fast, we run the Go backend service and the Bun SSR service as containers in the same Kubernetes pod, reducing networking overhead. The SSR service is only responsible for ingesting data from the backend service to create the React component tree and for serving the static assets.
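The pod layout can be pictured roughly like this (a hypothetical spec with made-up names and images, not our actual manifest): because both containers share the pod’s network namespace, the SSR service reaches the backend over loopback instead of a cluster network hop.

```yaml
# Hypothetical pod spec: Go backend and Bun SSR share one pod.
apiVersion: v1
kind: Pod
metadata:
  name: payment-links
spec:
  containers:
    - name: backend              # Go service: owns payment link data
      image: example/payment-backend:latest
      ports:
        - containerPort: 8080
    - name: ssr                  # Bun service: renders React to HTML
      image: example/payment-ssr:latest
      ports:
        - containerPort: 3000
      env:
        - name: BACKEND_URL
          value: "http://localhost:8080"  # same-pod loopback, no network hop
```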
Results
The mobile numbers tell the story. We cut First Contentful Paint by 80% and LCP by 74%. On desktop, FCP dropped to 0.6 seconds. That’s fast enough that the page feels like it was already loaded.
For payment links specifically, speed isn’t a nice-to-have. It’s the difference between a completed transaction and an abandoned one. If your checkout flow loads in 16 seconds on a phone, some percentage of those customers are gone.
We believe software should be beautiful and fast. We’re not done yet, but we’re proud of the incremental progress we’ve made.
Want to build with us? Join our team.





