
How Claude Crawlers Work

ClaudeBot is Anthropic's web crawler, the engine that lets Claude learn about your brand. If it can't read your site, Claude can't learn about you.

One Crawler, Two Jobs

ClaudeBot crawls the public web for both training data and real-time retrieval[1]. Unlike OpenAI, which splits crawling across three separate bots, Anthropic uses a single crawler for everything. Block it and you lose both training and live answers.

What It Can Read

ClaudeBot is a plain HTTP fetcher. It reads headings, body text, JSON-LD structured data, image alt text, and anchor text from static HTML. It does not execute JavaScript, so content rendered client-side with frameworks like React, Vue, or Angular is invisible to it.
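To see what a plain HTTP fetcher is left with, here is a minimal sketch (Python standard library only; the sample pages are invented for illustration) that pulls out the same signals from static HTML and comes up empty on a client-rendered shell:

```python
from html.parser import HTMLParser

class StaticTextExtractor(HTMLParser):
    """Collects the signals a non-JS fetcher can see: headings,
    body text, image alt text, and embedded JSON-LD blocks."""
    def __init__(self):
        super().__init__()
        self.text = []
        self.json_ld = []
        self._in_json_ld = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and attrs.get("type") == "application/ld+json":
            self._in_json_ld = True
        if tag == "img" and attrs.get("alt"):
            self.text.append(attrs["alt"])

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_json_ld = False

    def handle_data(self, data):
        if self._in_json_ld and data.strip():
            self.json_ld.append(data.strip())
        elif not self._in_json_ld and data.strip():
            self.text.append(data.strip())

def visible_text(html):
    p = StaticTextExtractor()
    p.feed(html)
    return p.text, p.json_ld

# Server-rendered page: everything is in the initial response.
static_page = """
<h1>Acme Widgets</h1>
<p>Widgets from $9.</p>
<img src="w.png" alt="Blue widget">
<script type="application/ld+json">{"@type": "Product"}</script>
"""

# Client-rendered shell: the content only exists after JS runs.
spa_shell = '<div id="root"></div><script src="bundle.js"></script>'
```

Run over the static page, `visible_text` recovers the heading, body text, alt text, and JSON-LD; run over the empty React-style shell, it recovers nothing, which is what a non-JS crawler experiences.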

robots.txt Control

Block ClaudeBot entirely:

```
User-agent: ClaudeBot
Disallow: /
```

Or allow everything (the default when no rule exists):

```
User-agent: ClaudeBot
Allow: /
```
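To sanity-check a rule set before deploying it, Python's standard-library robots.txt parser can evaluate it. This is a sketch with a hypothetical rule set; real crawlers implement their own matching, so treat it as a quick check rather than a guarantee:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: allow ClaudeBot everywhere except /private/.
rules = """\
User-agent: ClaudeBot
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("ClaudeBot", "https://example.com/pricing"))    # True
print(parser.can_fetch("ClaudeBot", "https://example.com/private/x"))  # False
```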

Common Mistakes

  • Accidentally blocking ClaudeBot with a User-agent: * catch-all rule. Many robots.txt files have one, and unless ClaudeBot gets its own group, the catch-all's Disallow rules apply to it and silently keep Claude from ever seeing your content.
  • Relying on JavaScript to render key content. If your product descriptions, pricing, or FAQs only exist inside a client-side component, ClaudeBot will never see them.
  • Using User-agent: claude instead of User-agent: ClaudeBot. Matching is case-insensitive, but the token must be the crawler's full name; a partial string is ignored.
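The catch-all mistake above is easy to reproduce with Python's standard-library parser (a sketch with made-up rule sets; a dedicated ClaudeBot group takes precedence over the catch-all):

```python
from urllib.robotparser import RobotFileParser

def claudebot_allowed(robots_txt: str) -> bool:
    """Return True if the given robots.txt lets ClaudeBot fetch the homepage."""
    p = RobotFileParser()
    p.parse(robots_txt.splitlines())
    return p.can_fetch("ClaudeBot", "https://example.com/")

# A catch-all with no ClaudeBot-specific group blocks ClaudeBot too.
print(claudebot_allowed("User-agent: *\nDisallow: /"))  # False

# A dedicated ClaudeBot group overrides the catch-all.
print(claudebot_allowed(
    "User-agent: ClaudeBot\nAllow: /\n\nUser-agent: *\nDisallow: /"
))  # True
```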

Making Your Site Readable

ClaudeBot only sees what the server sends in the initial HTML response. Your most important content, including product names, pricing, and company description, needs to exist as static HTML rather than hydrated client-side.

JSON-LD structured data gives ClaudeBot explicit entity relationships it can parse without guessing. An llms.txt file goes further by providing the crawler a machine-readable map of what your site is and what matters most.
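As a sketch of the kind of JSON-LD a plain HTTP fetcher can parse directly, here is a minimal Organization block (the name, URL, and description are placeholders) that would sit in the page's head as static HTML:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "description": "Example Co sells blue widgets.",
  "sameAs": ["https://www.linkedin.com/company/example-co"]
}
</script>
```

Because the block is plain text in the initial HTML response, no rendering step is needed for a crawler to read it.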

How Scanner Helps

Scanner evaluates your site against the signals ClaudeBot uses: static HTML content, structured data, robots.txt configuration, and page speed. See also How Agent Crawlers Work.

Sources

  1. Anthropic: Web crawling and ClaudeBot


© 2026 Point11 · Patent Pending

Anthropic Crawler

A single crawler for training data, with no separate search or browsing bot yet.

ClaudeBot (Anthropic's only crawler)

Crawls pages to collect training data for Anthropic's Claude AI models. Unlike Google and OpenAI, Anthropic uses a single bot, so blocking it opts you out of all Claude-related crawling.

With the following rules in place:

```
User-agent: ClaudeBot
Disallow: /
```

  • AI model training: Blocked
  • Search indexing: N/A (no search bot)
  • Live browsing: N/A (no browsing bot)

Key insight: Anthropic currently operates a single crawler. Blocking ClaudeBot is an all-or-nothing decision; unlike with Google or OpenAI, there is no way to allow search while blocking training.