Point11

Rankings Methodology

The definitive benchmark for enterprise readiness in the agentic era. We measure whether AI agents can find you, transact with you, and trust you — the scorecard for the next decade of customer acquisition.

Scoring Framework

150 enterprise websites across 10 industries, scored 0–100 on five equally weighted pillars. McKinsey projects $1–5 trillion in agentic commerce by 2030 — our methodology captures the transition from traditional web performance to AI-driven customer acquisition.

Pillar                     | Weight | What It Measures
Brand Discovery & GEO      | 20%    | LLM visibility, structured data, AI crawl policies, GEO citation readiness
Agentic Commerce           | 20%    | MCP/A2A protocol support, product feeds, agent checkout, API access
On-Site AI Experience      | 20%    | Chat quality, voice AI, multimodal capabilities, agent DOM interaction
Modern Tech & Architecture | 20%    | Modern framework, legacy-free stack, CDN/edge delivery, real-time capability
Performance & Speed        | 20%    | Performance analysis, mobile perf, Core Web Vitals (LCP/INP/CLS/TTFB)
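With five pillars at 20% each, the overall score is a weighted sum that reduces to a simple average. A minimal sketch (the per-pillar sub-scores below are illustrative placeholders, not real data):

```python
# Composite 0-100 score from five equally weighted pillars.
PILLAR_WEIGHTS = {
    "Brand Discovery & GEO": 0.20,
    "Agentic Commerce": 0.20,
    "On-Site AI Experience": 0.20,
    "Modern Tech & Architecture": 0.20,
    "Performance & Speed": 0.20,
}

def composite_score(pillar_scores: dict[str, float]) -> float:
    """Weighted sum of 0-100 pillar scores -> 0-100 overall score."""
    assert abs(sum(PILLAR_WEIGHTS.values()) - 1.0) < 1e-9
    return round(sum(w * pillar_scores[p] for p, w in PILLAR_WEIGHTS.items()), 1)

print(composite_score({
    "Brand Discovery & GEO": 72,
    "Agentic Commerce": 40,
    "On-Site AI Experience": 65,
    "Modern Tech & Architecture": 88,
    "Performance & Speed": 90,
}))  # 71.0
```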

Brand Discovery & GEO

As AI models become the primary interface between consumers and brands, discoverability shifts from search engine indexing to LLM citation.

LLM Visibility
Presence and quality of llms.txt / llms-full.txt, the proposed standard for providing LLMs with structured site descriptions.
Structured Data
JSON-LD and schema.org coverage. Rich structured data drives accurate AI citations about products, services, and content.
AI Crawl Policy
robots.txt handling of GPTBot, ClaudeBot, Google-Extended. Sites allowing AI crawling signal agentic-era readiness.
GEO Readiness
FAQ schemas, expert authorship markup, and content optimized for extraction in AI-generated responses.
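The AI crawl-policy check above can be sketched with the standard library's robots.txt parser. The robots.txt body here is a hypothetical example; real collection fetches each site's live file:

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended"]

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin/
"""

def ai_crawl_policy(robots_body: str, url: str = "https://example.com/") -> dict[str, bool]:
    """True for each AI crawler allowed to fetch the given URL."""
    rp = RobotFileParser()
    rp.parse(robots_body.splitlines())
    return {bot: rp.can_fetch(bot, url) for bot in AI_CRAWLERS}

print(ai_crawl_policy(ROBOTS_TXT))
# {'GPTBot': False, 'ClaudeBot': True, 'Google-Extended': True}
```

Here GPTBot is blocked outright while ClaudeBot and Google-Extended fall through to the wildcard group, which only restricts /admin/.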

Agentic Commerce

AI agents will browse, compare, and transact on behalf of consumers. This pillar evaluates whether a site exposes the protocols, feeds, and APIs agents need.

Protocol Support
MCP server endpoints (/.well-known/mcp.json), A2A agent cards, and OpenAPI specs — standardized by Anthropic, Google, and the Linux Foundation.
Product Feeds
OpenAI product feed compliance, Google Merchant Center feeds, and catalog structure for AI shopping agents.
Agent Checkout
Agentic Commerce Protocol support, payment token handling, and programmatic checkout flows.
API Access
Programmatic endpoints for catalog queries, inventory checks, and order placement with agent-suitable auth.
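A minimal sketch of how the protocol checks above might be probed. The MCP well-known path and llms.txt location follow published conventions; the A2A agent-card and OpenAPI paths are assumptions, and `fetch_status` is injected so the sketch stays offline (real collection issues HTTP requests):

```python
from typing import Callable

PROBE_PATHS = {
    "mcp": "/.well-known/mcp.json",
    "a2a": "/.well-known/agent.json",   # assumed A2A agent-card location
    "openapi": "/openapi.json",          # assumed OpenAPI spec location
    "llms_txt": "/llms.txt",
}

def probe_protocols(base: str, fetch_status: Callable[[str], int]) -> dict[str, bool]:
    """Mark each protocol present if its endpoint returns HTTP 200."""
    return {name: fetch_status(base + path) == 200 for name, path in PROBE_PATHS.items()}

def fake_fetch(url: str) -> int:
    """Stand-in for real HTTP: only MCP and llms.txt exist on this site."""
    return 200 if url.endswith(("/mcp.json", "/llms.txt")) else 404

print(probe_protocols("https://example.com", fake_fetch))
# {'mcp': True, 'a2a': False, 'openapi': False, 'llms_txt': True}
```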

On-Site AI Experience

Customer-facing AI features — the most visible dimension of AI readiness.

Chat Quality
Conversational AI interfaces: context retention, site tool integration, modern LLM infrastructure vs. legacy decision-tree systems.
Voice AI
Voice agent detection via ElevenLabs, Vapi, and Retell signatures. Target: <800ms response latency.
Multimodal
Image, video, and 3D content within AI experiences — visual search, AR/VR previews, cross-modal adaptation.
Agent Interaction
DOM traversability, form fillability, and action APIs for external agent operation.
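One way to approximate the form-fillability signal is to count inputs an agent can target by name, id, or aria-label. A stdlib-only sketch with an illustrative HTML fragment:

```python
from html.parser import HTMLParser

class FormAudit(HTMLParser):
    """Count visible form inputs and how many carry a targetable attribute."""
    def __init__(self):
        super().__init__()
        self.inputs = 0
        self.labelled = 0

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") != "hidden":
            self.inputs += 1
            # name/id/aria-label gives an external agent something to target
            if a.get("aria-label") or a.get("name") or a.get("id"):
                self.labelled += 1

audit = FormAudit()
audit.feed('<form><input name="email"><input type="hidden" value="t"><input></form>')
print(audit.inputs, audit.labelled)  # 2 1
```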

Modern Tech & Architecture

Modern stack signals — the foundation for fast, secure, agent-compatible experiences.

Modern Framework
React, Next.js, Vue, Angular, Svelte — modern JS frameworks that support widget embedding and dynamic AI experiences.
Legacy-Free
No jQuery dependency, modern build tooling, and clean dependency trees that reduce conflict risk.
CDN & Edge
CDN delivery (Cloudflare, Fastly, Akamai, CloudFront), modern hosting (Vercel, Netlify), and CSP headers.
Real-Time
WebSocket support, streaming HTTP responses, and infrastructure for live AI interactions.

Performance & Speed

Performance audits and real-world Core Web Vitals — the speed metrics that matter for both users and AI agents.

Performance
Blended lab performance score, weighted 60% mobile / 40% desktop.
Mobile Perf
Mobile-specific performance — critical as mobile traffic dominates and agents increasingly operate on mobile-first APIs.
Core Web Vitals
LCP (≤ 2.5s), INP (≤ 200ms), CLS (≤ 0.1), TTFB (≤ 800ms) — standard UX metrics via CrUX, with lab-data fallback.
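The thresholds above translate directly into a pass/fail check, and the 60/40 blend from the Performance item is a two-term weighted average. A sketch (how pass/fail maps onto the 0-100 pillar score is not specified here):

```python
# "Good" thresholds for each Core Web Vital, as listed above.
CWV_GOOD = {"lcp_ms": 2500, "inp_ms": 200, "cls": 0.1, "ttfb_ms": 800}

def cwv_passes(field_p75: dict[str, float]) -> dict[str, bool]:
    """True where the p75 field metric is at or under the 'good' threshold."""
    return {m: field_p75[m] <= limit for m, limit in CWV_GOOD.items()}

def blended_perf(mobile: float, desktop: float) -> float:
    """Blended lab performance score: 60% mobile, 40% desktop."""
    return round(0.6 * mobile + 0.4 * desktop, 1)

print(cwv_passes({"lcp_ms": 2100, "inp_ms": 350, "cls": 0.04, "ttfb_ms": 620}))
# {'lcp_ms': True, 'inp_ms': False, 'cls': True, 'ttfb_ms': True}
print(blended_perf(70, 90))  # 78.0
```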

Data Sources & Collection

Detection Methods

  • HTML analysis — script tags, structured data, AI widget signatures
  • API probing — MCP, A2A, OpenAPI, llms.txt, product feeds
  • Header inspection — CDN, hosting platform, CSP, security
  • Protocol detection — WebSocket upgrades, SSE, streaming
  • CMS fingerprinting — WordPress, Drupal, Contentful, Sanity
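Header inspection can be sketched as a lookup against known response-header signatures. The signature table below is illustrative; real fingerprinting uses many more signals:

```python
# Illustrative header signatures: (header name, substring to look for).
CDN_SIGNATURES = {
    "cloudflare": ("server", "cloudflare"),
    "fastly": ("x-served-by", "cache-"),
    "akamai": ("server", "akamaighost"),
    "vercel": ("server", "vercel"),
}

def detect_cdn(headers: dict[str, str]) -> list[str]:
    """Return providers whose signature appears in the response headers."""
    h = {k.lower(): v.lower() for k, v in headers.items()}
    return [name for name, (key, needle) in CDN_SIGNATURES.items()
            if needle in h.get(key, "")]

print(detect_cdn({"Server": "cloudflare", "CF-Ray": "8c1a2b-IAD"}))  # ['cloudflare']
```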
Schedule: Daily cron at 8 AM UTC. Performance testing rotates ~20 sites/day (full cycle every 7–8 days). CrUX and fast collectors run daily for all sites. Monthly snapshots on the 1st power the public rankings.

Limitations

  • HTML-only AI detection — server-side features behind auth are not observed
  • Single-location lab tests — performance may vary by geography
  • CrUX availability — some sites lack sufficient traffic for field data
  • Protocol detection — existence verified, quality not fully evaluated
  • Voice AI — limited to known provider script signatures
  • Public-only — gated/internal features are not captured

Our detection is intentionally conservative — false negatives are more likely than false positives.

Version History

  • v3.0 — Feb 2026: Restructured to 5 equally weighted pillars. Added Modern Tech & Architecture (CDN, hosting, CMS detection). Merged CWV + Trust into Performance & Speed.
  • v2.0 — Feb 2026: Expanded to 6 pillars. Added Agentic Commerce, Real-Time Speed. Restructured AI Capabilities into On-Site AI Experience.
  • v1.0 — Jan 2026: Initial launch. 150 sites, 10 industries, 4 pillars.

Contact

Questions about methodology or your score? Reach us at rankings@point11.ai or request a review.