Intelligent Content Transformation
Our AI engine analyzes your pages and injects rich semantic enrichments -- Schema.org markup, FAQs, entity definitions, and more. AI systems don't just crawl your site, they understand it.
Key metrics
12x Enrichment
Large-Scale Model
16K Token Limit
Edge-Cached
Sub-50ms Cached
How AI optimization transforms your content
The AI pipeline identifies what's missing and fills the gaps automatically.
Feature 01
Content Gap Analysis -- AI-Powered Enrichment
Before (thin content)
<div class="product-page">
  <h1>Running Shoes</h1>
  <p>Great shoes for running.</p>
  <div class="price">$149.99</div>
  <button>Add to Cart</button>
</div>
<!-- ~200 tokens. No schema,
     no FAQs, no context. -->
After (AI enriched)
<div class="product-page">
  <h1>Running Shoes</h1>
  <p>Great shoes for running.</p>
  <div class="price">$149.99</div>
  <!-- AI ENRICHMENTS -->
  <script type="application/ld+json">
  { "@type": "Product",
    "name": "Running Shoes",
    "offers": { "price": "149.99" },
    "aggregateRating": { ... } }
  </script>
  <section class="faq">
    <h2>Frequently Asked Questions</h2>
    <!-- 10-15 AI-generated FAQs -->
  </section>
  <section class="entity-context">
    <h2>What are Running Shoes?</h2>
    <p>Running shoes are specialized
       footwear designed for...</p>
  </section>
</div>
<!-- ~2,400 tokens. 12x enriched. -->
Our AI engine analyzes your page content with deterministic settings to identify what's missing for AI systems to truly understand your content. The content gap analyzer then injects enrichments directly into the HTML -- invisible to human visitors, invaluable to AI crawlers.
Enrichments injected
- Schema.org LD+JSON structured data (Product, Article, FAQ, etc.)
- FAQ sections with 10-15 AI-generated questions and answers
- Entity expansions with Wikipedia-style definitions
- Semantic relationship mapping (alternatives, related items)
- "What is X?" contextual paragraphs for disambiguation
- Credibility markers and factual statistics
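As an illustration of the Schema.org injection listed above, here is a minimal Python sketch. The helper names and the injection point are assumptions for illustration, not Wrenda's actual pipeline:

```python
import json

def build_product_jsonld(name, price, currency="USD"):
    """Build a Schema.org Product block as an LD+JSON script tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "offers": {"@type": "Offer", "price": price, "priceCurrency": currency},
    }
    return '<script type="application/ld+json">' + json.dumps(data) + "</script>"

def inject_before_close(html, snippet, tag="</div>"):
    """Insert the enrichment just before the page container's closing tag."""
    idx = html.rfind(tag)
    return html[:idx] + snippet + html[idx:] if idx != -1 else html + snippet

page = '<div class="product-page"><h1>Running Shoes</h1></div>'
enriched = inject_before_close(page, build_product_jsonld("Running Shoes", "149.99"))
```

The same pattern extends to FAQ and entity-context sections: build the block, then splice it into the served HTML before the container closes.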
Transformation pipeline
Crawler Detected
GPTBot / ClaudeBot / PerplexityBot
Content Fetched
Origin HTML retrieved
Gap Analysis
AI identifies missing enrichments
Enriched & Cached
Schema + FAQs + context injected → edge cache
Feature 02
Content Rules -- Per-Path Configuration
Content rules let you fine-tune AI optimization on a per-path basis. Set custom AI prompts for different sections of your site, configure cache TTLs, scope rules to specific domains, and control exactly how the AI transforms each page type.
Configuration options
- Path Patterns -- Match URLs with glob patterns like /products/* or /blog/**
- Custom AI Prompts -- Tell the AI exactly how to enrich specific content types
- Cache TTL -- Configure per-rule caching from 5 minutes to 30 days (default 1h)
- Domain Scoping -- Apply rules to specific domains or tenant-wide
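Per-path rule selection can be sketched as glob matching plus a priority tiebreak. The rule shapes below mirror the configuration example; the matcher itself is an illustrative simplification (Python's `fnmatch` lets `*` cross `/` boundaries, unlike strict glob semantics where `*` and `**` differ):

```python
from fnmatch import fnmatch

def match_rule(path, rules):
    """Return the highest-priority active rule whose pattern matches the path."""
    candidates = [
        r for r in rules
        if r["is_active"] and fnmatch(path, r["path_pattern"])
    ]
    return max(candidates, key=lambda r: r["priority"], default=None)

rules = [
    {"name": "Product Pages", "path_pattern": "/products/*",
     "is_active": True, "priority": 10, "cache_ttl": 3600},
    {"name": "Catch-All", "path_pattern": "/*",
     "is_active": True, "priority": 1, "cache_ttl": 300},
]

rule = match_rule("/products/running-shoes", rules)
```

Here both rules match the product URL, but the higher `priority` value wins, so the product-specific prompt and TTL apply.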
Content rule configuration
{
  "name": "Product Pages",
  "path_pattern": "/products/*",
  "domain_id": "d_abc123",
  "is_active": true,
  "custom_prompt": "Focus on product specs, pricing comparisons, and buyer FAQs. Add Schema.org Product markup with offers, ratings, and availability.",
  "cache_ttl": 3600,
  "priority": 10
}
AI model parameters
- Model: Large-scale instruction-tuned LLM
- Temperature: 0.4 (low, for consistent output)
- Max Tokens: 4,000
- Pipeline: Content Gap Analyzer
- Cache: Edge-cached (configurable TTL)
- Fallback: Secondary AI provider (auto-failover)
Crawler-only activation
AI optimization only activates for recognized AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Anthropic-AI, Google-Extended, etc.). Human visitors always receive the original, unmodified page. Zero impact on user experience.
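The user-agent gate can be sketched as a single regex test. The pattern list here is a short illustrative subset -- the real list is larger and continuously updated:

```python
import re

# Illustrative subset of recognized AI crawler user-agent patterns.
AI_CRAWLER_PATTERNS = [
    r"GPTBot", r"ClaudeBot", r"PerplexityBot", r"anthropic-ai",
    r"Google-Extended", r"OAI-SearchBot", r"CCBot", r"Bytespider",
]
AI_CRAWLER_RE = re.compile("|".join(AI_CRAWLER_PATTERNS), re.IGNORECASE)

def is_ai_crawler(user_agent):
    """Only requests matching a known AI crawler pattern get enriched HTML."""
    return bool(AI_CRAWLER_RE.search(user_agent))
```

Everything else -- browsers, Googlebot, Bingbot -- falls through to the unmodified origin response.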
Four steps to AI-optimized content
From crawler detection to enriched response -- fully automated at the edge.
Step 01
AI crawler detected
Wrenda identifies the incoming request as an AI crawler by matching user-agent patterns. GPTBot, ClaudeBot, PerplexityBot, Google-Extended, and dozens more are recognized in under 1ms.
Step 02
Origin content fetched
The original HTML is fetched from your origin server. Static assets (images, CSS, JS) are automatically detected and skipped -- they pass through directly with CDN caching.
Step 03
AI analyzes content gaps
Our AI engine runs the content gap analyzer with deterministic settings. It identifies missing Schema.org markup, thin content sections, absent FAQs, and areas lacking semantic context.
Step 04
Enriched HTML served & cached
The enriched HTML is served to the crawler and cached at the edge with a configurable TTL. Subsequent requests for the same URL serve from cache in single-digit milliseconds.
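The serve-and-cache step amounts to a TTL-keyed store. A minimal in-memory sketch, assuming a single-node cache (a real edge cache is distributed across points of presence):

```python
import time

class EdgeCache:
    """Minimal TTL cache sketch: entries expire after their configured TTL."""

    def __init__(self):
        self._store = {}  # url -> (expires_at, html)

    def get(self, url):
        entry = self._store.get(url)
        if entry is None:
            return None
        expires_at, html = entry
        if time.monotonic() >= expires_at:
            del self._store[url]  # expired -> treat as a miss
            return None
        return html

    def put(self, url, html, ttl=3600):
        """Store enriched HTML; ttl comes from the matching content rule."""
        self._store[url] = (time.monotonic() + ttl, html)

cache = EdgeCache()
cache.put("/products/running-shoes", "<html>enriched</html>", ttl=60)
```

On a hit the enriched HTML is returned without re-running the AI pipeline, which is where the sub-50ms cached response time comes from.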
Optimized for every content type
AI optimization adapts to your content -- whether you sell products, publish articles, or host documentation.
E-Commerce Product Pages
Product pages get Schema.org Product markup with offers, ratings, and availability. AI-generated FAQs cover sizing, shipping, comparisons, and care instructions.
- Schema.org Product + Offer + AggregateRating
- Buyer FAQ sections (10-15 questions)
- Competitor comparisons and alternatives
Publisher Articles
Articles are enriched with Article schema, entity definitions for key terms, topical context paragraphs, and comprehensive FAQ sections for featured snippet eligibility.
- Schema.org Article + Author + DatePublished
- Entity expansions for technical terms
- Topical authority signals and citations
SaaS Documentation
Documentation pages receive TechArticle schema, code example annotations, API reference enrichments, and troubleshooting FAQs that AI assistants can surface directly.
- Schema.org TechArticle + HowTo markup
- API parameter and endpoint descriptions
- Troubleshooting FAQs for common issues
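The three profiles above can be summarized as a lookup from page type to the schema and enrichment blocks applied. This mapping is a hypothetical restatement of the lists above, not Wrenda's internal configuration:

```python
# Hypothetical page-type -> enrichment profile mapping.
ENRICHMENT_PROFILES = {
    "product": {"schema": ["Product", "Offer", "AggregateRating"],
                "extras": ["buyer_faq", "comparisons"]},
    "article": {"schema": ["Article", "Author", "DatePublished"],
                "extras": ["entity_expansion", "citations"]},
    "docs":    {"schema": ["TechArticle", "HowTo"],
                "extras": ["api_descriptions", "troubleshooting_faq"]},
}

def schema_types(page_type):
    """Schema.org types applied for a page type; empty if unrecognized."""
    return ENRICHMENT_PROFILES.get(page_type, {}).get("schema", [])
```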
See the enrichment in action
Send a request as GPTBot and see how Wrenda enriches the response automatically.
# Request as GPTBot -- triggers AI optimization
curl https://yoursite.com/products/running-shoes \
  -H "User-Agent: GPTBot/1.0"

# Response headers show optimization was applied
HTTP/2 200
X-AI-Optimized: true
X-Crawler-Action: optimize
X-Content-Enrichment: 12.4x
X-Cache: HIT
Content-Type: text/html; charset=utf-8

# Same URL as a regular browser -- no optimization
curl https://yoursite.com/products/running-shoes \
  -H "User-Agent: Mozilla/5.0 Chrome/120"

# Response: original page, untouched
HTTP/2 200
X-Crawler-Action: pass-through
Content-Type: text/html; charset=utf-8
~200 → 2,400
Token enrichment (12x)
16K tokens
Per-page output limit
< 50ms
Cached response time
AI Optimization FAQs
Common questions about how Wrenda transforms content for AI crawlers.
How does Wrenda detect AI crawlers?
Wrenda matches the request User-Agent against a continuously updated list of AI crawler patterns: GPTBot, ClaudeBot, anthropic-ai, PerplexityBot, OAI-SearchBot, Google-Extended, Bytespider, CCBot, Applebot-Extended, and more. Detection is per-request -- the same URL can be served as enriched HTML to GPTBot and the original page to a regular browser.
What does the optimize action actually change?
For pages routed through the optimize action, Wrenda injects Schema.org structured data, expands key entities with definitions, generates 10-15 FAQs from page context, and adds semantic context paragraphs. The result is dense, structured HTML that AI engines can tokenize and cite more easily -- without changing what humans or Googlebot see.
Will AI optimization affect SEO or human visitors?
No. Optimization is gated by user-agent, so only AI crawlers receive enriched HTML. Browsers, Googlebot, and Bingbot get pre-rendered or pass-through traffic -- whichever you configure per rule. Google does not penalize serving different content to AI crawlers.
Is this just Schema.org markup?
Schema markup is one piece of what Wrenda generates. The full pipeline also adds entity expansion, factual enrichment, FAQ blocks, and relationship maps -- written specifically for the way LLMs tokenize and weight content. It runs at request time on the edge, so changes to your origin propagate immediately without rebuilding.
How fast are optimized responses?
Transformation runs on the edge, close to your visitors. First-request latency for a long page is typically under 2 seconds; cached responses return in under 50ms. The default cache TTL is 1 hour, so repeat requests within that window return immediately.
How is AI optimization billed?
Each cache miss counts as one billable AI-optimization request against your plan limit (Starter 100K/month, Professional 500K, Enterprise 2M). Cache hits and pass-through traffic are free. Most sites see hit rates above 90% within a week.
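Since only cache misses are billable, effective crawler capacity follows from the plan limit and hit rate. A quick check using the stated Starter limit and the typical 90% hit rate (the formula is simple arithmetic, not an official billing calculator):

```python
def monthly_crawler_capacity(billable_limit, hit_rate):
    """Total servable crawler requests when only misses are billed.

    billable misses = total * (1 - hit_rate), so total = limit / (1 - hit_rate).
    """
    return round(billable_limit / (1 - hit_rate))

# Starter plan: 100K billable misses at a 90% cache hit rate
capacity = monthly_crawler_capacity(100_000, 0.90)  # -> 1,000,000 requests
```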
Get Started
Make AI systems understand your content
Enable AI optimization in minutes. No code changes required -- just point your domain and let our AI engine do the rest.
Start Optimizing
14-day free trial -- No credit card required