
Rendering Strategies for the Agent Era

SSR, SSG, CSR, and ISR each make different tradeoffs for speed, SEO, and agent discoverability. Choosing wrong means agents can't read your site.

The rendering strategy you choose determines whether agents can read your site at all. GPTBot, ClaudeBot, Googlebot, and every other agent crawler fetches your URL and reads the HTML response. If that response is an empty `<div id="root"></div>` with a JavaScript bundle, agents see nothing. Your entire site is invisible.

The Core Problem

When an agent crawls your site, it sends an HTTP request and reads what comes back. It does not open a browser, execute JavaScript, or wait for React to hydrate. The HTML in the response is all it gets[1].

That makes your rendering strategy a discoverability decision, not just a performance one. Pick the wrong approach and you are invisible to every agent that matters.

SSR: Server-Side Rendering

Server-side rendering generates HTML on the server for every request. When an agent or a user requests a page, the server runs the application, fetches data, and returns fully formed HTML.

How it works: The browser requests a page and the server executes code, queries databases, and builds complete HTML. The browser displays the document immediately, then hydrates JavaScript for interactivity.

Strengths:

- Agents always receive complete content.
- Content can be personalized per request, including user-specific data, geo-targeting, and real-time pricing.
- No JavaScript execution is required to see the page.

Tradeoffs:

- Every request requires server compute, so high traffic means high server load.
- Time to First Byte (TTFB) depends on how fast your server can render the page.
- Pages cannot be cached at the CDN edge without additional configuration.

Best for: Dynamic pages with frequently changing content, such as product pages with live pricing, authenticated dashboards, and any page where content changes per request.

SSG: Static Site Generation

Static site generation builds every page at build time. The output is plain HTML files that can be served directly from a CDN with no server involved.

How it works: At build time, the framework generates HTML for every route and deploys the files to a CDN. When an agent or user requests a page, the CDN returns the pre-built HTML file with no server computation at request time.

Strengths:

- Fastest possible delivery, with no server computation, database queries, or rendering delay.
- Perfect CDN cacheability because every edge location serves the same file.
- Agents always receive complete content.

Tradeoffs:

- Content only updates when you rebuild and redeploy.
- Build times scale with page count. A site with 100,000 product pages means 100,000 pages generated at build time.
- Not suitable for personalized or frequently changing content.

Best for: Marketing pages, blog posts, documentation, and landing pages. Any content that changes on a publish cycle rather than per request.

CSR: Client-Side Rendering

Client-side rendering sends a minimal HTML shell and a JavaScript bundle. The browser downloads the JS, executes it, fetches data from APIs, and renders the page entirely in the browser.

How it works: The server returns a nearly empty HTML document. The browser downloads and executes JavaScript, which fetches data and renders the UI. The page appears only after all of this completes.
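Concretely, this is the shape of the response an agent receives from a CSR app (the shell below is an illustrative example of typical bundler output, not any specific framework's):

```javascript
// The complete HTML a CSR app returns; this is ALL an agent crawler sees.
const csrShell = `<!DOCTYPE html>
<html>
<head><title>My Store</title></head>
<body>
  <div id="root"></div>
  <script src="/static/js/bundle.js"></script>
</body>
</html>`;

// No headings, no product names, no structured data —
// only the <title> text is readable without executing the bundle.
```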

This is the biggest mistake brands make for agent discoverability. When GPTBot, ClaudeBot, or any other agent crawler requests a CSR page, it receives the empty shell. It cannot execute JavaScript. It sees no content, no products, no text, no structured data. Your site might as well not exist[1].

Googlebot is a partial exception. Google runs a secondary rendering pass using headless Chrome that can execute JavaScript[2]. But this rendering queue has lower priority, can take days to process, and does not help any non-Google agent. Relying on it is a gamble.

Strengths:

- Rich interactivity after initial load.
- Simple deployment with static files and an API.
- Well-suited for authenticated applications where SEO and agent access are irrelevant.

Tradeoffs:

- Invisible to agents. GPTBot, ClaudeBot, and others see an empty page.
- Slower perceived load time because users see a blank screen until JavaScript finishes.
- Poor Core Web Vitals, particularly LCP and CLS.

Best for: Internal tools, authenticated dashboards, and applications where agent discoverability does not matter. Never for public-facing content you want agents to find.

ISR: Incremental Static Regeneration

ISR combines the speed of static generation with the freshness of server rendering. Pages are built statically but regenerate in the background after a configurable time interval.

How it works: On first request, the page is generated and cached. Subsequent requests serve the cached version until the revalidation period expires. The next request after expiration triggers a background regeneration, serving the stale page instantly while the new version builds. Once ready, the new version replaces the old one in the cache.
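The mechanism is stale-while-revalidate caching. A minimal sketch (the `render` function, `revalidateMs` interval, and injectable clock are illustrative; real implementations also deduplicate concurrent rebuilds):

```javascript
// Minimal stale-while-revalidate cache, the mechanism behind ISR.
function createIsrCache(render, revalidateMs, now = Date.now) {
  const cache = new Map(); // route -> { html, builtAt }

  return async function getPage(route) {
    const entry = cache.get(route);

    if (!entry) {
      // Cold start: the first request to a new page pays the rendering cost.
      const html = await render(route);
      cache.set(route, { html, builtAt: now() });
      return html;
    }

    if (now() - entry.builtAt > revalidateMs) {
      // Stale: serve the cached page instantly, regenerate in the background.
      render(route).then((html) => cache.set(route, { html, builtAt: now() }));
    }
    return entry.html;
  };
}
```

Note that a stale page is still served immediately; only a *later* request sees the regenerated version. That is why freshness is bounded by the revalidation period rather than guaranteed per request.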

This concept originated in Next.js[3] but the pattern applies universally. Netlify, Cloudflare, and other platforms offer equivalent capabilities under different names.
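In Next.js, where the pattern originated, ISR is exposed as a route segment config. A sketch (the file path and 60-second interval are illustrative choices, not recommendations):

```javascript
// app/products/[id]/page.js — Next.js App Router
// Regenerate this page in the background at most once every 60 seconds.
export const revalidate = 60;
```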

Strengths:

- Static-speed delivery for most requests.
- Content stays fresh without full rebuilds.
- Agents always receive complete, pre-rendered HTML.
- Build times stay fast regardless of page count.

Tradeoffs:

- Content can be stale for up to one revalidation period.
- The first request to a new page incurs a rendering delay (cold start).
- Caching behavior is more complex to reason about.

Best for: E-commerce product pages, content sites with thousands of pages, and any site where content updates periodically but not per request. This is the default choice for most production sites.

Streaming SSR

Streaming SSR sends HTML to the browser in chunks as the server renders them, rather than waiting for the entire page to be ready.

How it works: The server starts sending HTML immediately, so critical content like navigation and above-the-fold text arrives first. Slower parts such as data-dependent sections and recommendations stream in as they become available, and the browser renders progressively.
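The flow can be sketched as an async generator that yields chunks as data resolves; a server would pipe each chunk to the response as it arrives. The `fetchReviews` data source and page content are illustrative.

```javascript
// Illustrative slow data source (reviews arrive after the shell).
async function fetchReviews() {
  return ["Great product!", "Works as advertised."];
}

// Streaming SSR sketch: yield HTML chunks as they become ready.
async function* streamProductPage() {
  // Shell and above-the-fold content go out immediately.
  yield `<!DOCTYPE html><html><body><h1>Example Widget</h1><p>$19.99</p>`;

  // Slow sections stream in only when their data resolves.
  const reviews = await fetchReviews();
  yield `<ul>${reviews.map((r) => `<li>${r}</li>`).join("")}</ul>`;

  yield `</body></html>`;
}

// A server would write each chunk to the response as it arrives, e.g.:
// for await (const chunk of streamProductPage()) res.write(chunk);
```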

Strengths:

- Faster TTFB than traditional SSR because the first byte arrives before the full page is rendered.
- Progressive rendering lets users see content sooner.
- Agents receive the complete HTML once the stream finishes.

Tradeoffs:

- Requires more complex server infrastructure.
- Some agents may not handle chunked transfer encoding well, though major crawlers do.
- Debugging is harder when content arrives in pieces.

Best for: Pages with mixed data sources where some queries are fast and others are slow, such as product pages where the title and price load instantly but reviews take longer.

Comparison Table

| Strategy | Agent Visible | Speed | Content Freshness | Best For |
| --- | --- | --- | --- | --- |
| SSR | Yes | Medium | Real-time | Dynamic, personalized pages |
| SSG | Yes | Fastest | Build-time only | Static content, marketing pages |
| CSR | No | Slow (perceived) | Real-time | Internal tools only |
| ISR | Yes | Fast | Near real-time | Product pages, large content sites |
| Streaming SSR | Yes | Fast (progressive) | Real-time | Mixed data sources |

Decision Framework

Start with one question: does the content on this page need to be visible to agents?

If yes (public content, product pages, marketing, blog), eliminate CSR immediately. Then choose based on how often the content changes:

  1. Content changes per request (pricing, personalization): use SSR or Streaming SSR.
  2. Content changes periodically (products, articles): use ISR.
  3. Content changes on a publish cycle (landing pages, docs): use SSG.

If no (internal dashboards, authenticated tools), CSR is fine. Agent visibility is irrelevant for private applications.
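The framework above is simple enough to encode directly; the function and its parameter names here are illustrative:

```javascript
// Encode the decision framework: agent visibility first, then change frequency.
function chooseRenderingStrategy({ agentVisible, changes }) {
  if (!agentVisible) return "CSR"; // private tools: agent visibility irrelevant
  switch (changes) {
    case "per-request":   return "SSR or Streaming SSR"; // pricing, personalization
    case "periodic":      return "ISR";                  // products, articles
    case "publish-cycle": return "SSG";                  // landing pages, docs
    default: throw new Error(`unknown change frequency: ${changes}`);
  }
}
```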

The Agent Visibility Test

Agents like GPTBot, ClaudeBot, and PerplexityBot cannot execute JavaScript[4]. The simplest way to test whether your rendering strategy works for them is to disable JavaScript in your browser and load the page. If you see your content, agents can see it too. If you see a blank page or a loading spinner, agents see exactly the same thing.

You can also test with `curl -s https://your-site.com | head -100`. If the output contains your actual content, such as product names, headings, and text, your rendering strategy works. If it contains only `<script>` tags and an empty `<div>`, agents see nothing.
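The check can also be scripted. The sketch below uses Node's built-in `fetch` (Node 18+) and a crude tag-stripping heuristic; the 200-character threshold is an arbitrary cutoff for illustration, not a crawler rule.

```javascript
// Approximate what a non-rendering agent extracts from your initial HTML.
function extractAgentText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

// Heuristic check: does the initial response carry meaningful content?
async function checkAgentVisibility(url, minChars = 200) {
  const res = await fetch(url); // no JS execution, just like a crawler
  const text = extractAgentText(await res.text());
  return { chars: text.length, visible: text.length >= minChars };
}
```

Run against a CSR shell, `extractAgentText` returns an empty string; run against a server-rendered page, it returns your actual headings and copy.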

How Site Scanner Helps

Site Scanner evaluates whether your pages return meaningful content in the initial HTML response. It flags pages that rely on client-side rendering for critical content and identifies where agents would see an empty page. The Performance dimension of your Site Score reflects how your rendering strategy affects load speed, Core Web Vitals, and agent accessibility.

See how your site scores.

Run a free scan at point11.ai to check your rendering strategy and 40+ other metrics.
