# Running a Weekly SEO Audit on Autopilot with Claude Code and Metrifyr
Combine Claude Code scheduled tasks, Anthropic skills and the Metrifyr MCP server to run a full SEO + GEO audit of your site every week, without touching a keyboard.
SEO audits are the kind of work you keep meaning to do and never actually get around to. Pull GSC numbers, cross-reference with GA4, run PageSpeed Insights, compare to last month, spot the pages that dropped two positions last week: it's tedious, repetitive, and exactly the kind of thing an LLM with live API access should be doing for you on a timer.
This post walks through an actual setup I run for pixelden.io, my indie pixel-art browser-game platform. Every Monday morning, Claude Code wakes up on its own, pulls live data from Metrifyr, runs a 10-phase audit using Anthropic's SEO skills library, and leaves me a structured report. I read it with my coffee.
Here's the full recipe.
## The three pieces
Three pieces of infrastructure make this work:
- **Claude Code scheduled tasks**: register a cron-style trigger that fires a prompt at Claude Code on a schedule. No local daemon, no GitHub Actions cron, no "my Mac has to be on". Anthropic runs it for you.
- **Anthropic skills library**: a set of structured, reusable skills that give Claude step-by-step instructions for common tasks. For SEO, the community library `aaron-he-zhu/seo-geo-claude-skills` gives you 20 skills covering keyword research, technical SEO, Core Web Vitals, schema markup, content quality and GEO (Generative Engine Optimization).
- **Metrifyr MCP server**: live Google Analytics 4, Search Console, AdSense and Tag Manager data plus SEO tools (PageSpeed Insights, Google Trends, SEO audit). Connect it once to your AI client and every tool is available as an MCP function call.
The skills tell Claude how to think. Metrifyr gives Claude what to think about. Claude Code scheduled tasks tell it when to think.
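Under the hood, every Metrifyr tool is exposed over the Model Context Protocol, so each data pull in the audit is a JSON-RPC `tools/call` request that Claude Code issues for you. A hedged sketch of what one might look like for the Search Console tool (the argument names mirror the prompt later in this post and are illustrative, not Metrifyr's documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "gsc_search_analytics",
    "arguments": {
      "dimensions": ["query", "page"],
      "rowLimit": 50,
      "dateRange": "last28Days"
    }
  }
}
```

You never write this by hand; it's just useful to know what "available as an MCP function call" means on the wire.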
## Step 1: Connect Metrifyr to Claude Code
- Sign in at metrifyr.cloud with GitHub or Google.
- Create an API key from the dashboard.
- In Claude Code, add the Metrifyr MCP server, either via OAuth (recommended) or by pasting your API key.
- The connection now gives Claude access to ~70 Google Marketing tools: `ga4_run_report`, `gsc_search_analytics`, `psi_analyze`, `trends_interest_over_time` and friends.
As of Metrifyr v1.10.1, your MCP session lasts 180 days, so you set this up once and forget about it for six months.
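OAuth is the quickest route, but if you want the connection versioned alongside your project, Claude Code also reads a project-scoped `.mcp.json`. A sketch, with a placeholder endpoint URL (check the Metrifyr dashboard for the real one):

```json
{
  "mcpServers": {
    "metrifyr": {
      "type": "http",
      "url": "https://mcp.metrifyr.cloud/"
    }
  }
}
```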
## Step 2: Install the SEO skills library
Skills are published as npm packages. The SEO + GEO library installs into your Claude Code workspace with a single command:
```
npx skills add aaron-he-zhu/seo-geo-claude-skills -y --all
```

This gives you 20 structured skills. We'll use five of them in the audit: `keyword-research`, `technical-seo-checker`, `schema-markup-generator`, `geo-content-optimizer` and `content-quality-auditor`.
## Step 3: Write the scheduled prompt
Here's the real prompt I have scheduled to run every Monday at 7:00 CET. It produces a complete audit of pixelden.io in about 8–12 minutes.
The prompt is long; that's the point. Long, structured prompts are what make unattended runs reliable. You're not going to be there to steer Claude if it wanders, so the prompt has to carry the structure itself.
## SETUP
Before anything else, install the SEO & GEO skills library:
npx skills add aaron-he-zhu/seo-geo-claude-skills -y --all
Tool connector mapping for all skills (replace placeholders):
search console -> Metrifyr gsc_* tools
analytics -> Metrifyr ga4_* tools
page speed tool -> Metrifyr psi_analyze / psi_compare
SEO tool -> Metrifyr seo_audit
web crawler -> WebFetch tool (built into Claude Code)
trends tool -> Metrifyr trends_* tools
---
## PHASE 1 - Live data collection via Metrifyr
### 1a. Full SEO audit
Call Metrifyr `seo_audit` for https://www.pixelden.io/
Extract: PSI mobile + desktop, LCP, CLS, INP, top 5 opportunities,
GSC clicks/impressions/position (last 28d), indexed pages.
### 1b. GSC search analytics
`gsc_search_analytics` with dimensions=["query","page"], rowLimit=50,
dateRange=last28Days. Flag queries at positions 8-20 (quick wins),
pages with >100 impressions and <2% CTR, branded vs non-branded split.
### 1c. Index status
`gsc_list_sitemaps` + `gsc_inspect_url` on /, /games.
### 1d. Trends landscape
`trends_interest_over_time` for: pixel art games, browser games,
free online games, pixel art tower defense, indie browser games,
phaser games. geo=US, timeframe=today 3-m.
### 1e. GA4 traffic
`ga4_run_report` - last 28 days,
dimensions=["sessionDefaultChannelGroup","landingPage","deviceCategory"],
metrics=["sessions","bounceRate","averageSessionDuration","newUsers"].
---
## PHASE 2 - Keyword research (keyword-research skill)
Seeds: pixel art browser games, free indie browser games,
pixel art tower defense, phaser.js games, play pixel games online.
Run the 8-phase process: scope -> discovery -> variations -> intent
classification -> opportunity score -> GEO-check -> topic clusters
-> deliver top 10 + 5 GEO-priority keywords.
---
## PHASE 3 - Technical SEO (technical-seo-checker skill)
Cover: crawlability, indexability, Core Web Vitals (with actual
Metrifyr numbers), mobile-friendliness, HTTPS, URL structure,
structured data. Score each /10. Flag veto-level issues.
---
## PHASE 4 - Fixes (execute in order)
4a. Sitemap (app/sitemap.ts)
4b. robots.txt
4c. Root metadata (title, description, OG, Twitter)
4d. JSON-LD (schema-markup-generator skill): WebSite + Organization
on homepage, VideoGame + FAQPage on /games/crystal-maze-td
4e. Core Web Vitals: fix top 3 PSI opportunities, img width/height,
LCP fetchpriority, lazy loading below fold
4f. Canonical tags on every page
---
## PHASE 5 - GEO (geo-content-optimizer skill)
Homepage: add a quotable definition paragraph near the top using
the X-is-a-Y-that-Z pattern. /games: add an H2 FAQ-style section.
/games/crystal-maze-td: add 3 FAQ questions wrapped in FAQPage JSON-LD.
---
## PHASE 6 - Content quality (content-quality-auditor skill)
Run CORE-EEAT on homepage and Crystal Maze TD page.
Output dimension scores: C:XX O:XX R:XX E:XX E:XX A:XX T:XX.
Flag veto items T04, C01, R10.
---
## PHASE 7 - Final report
Data snapshot, keyword targets, files modified, CORE-EEAT scores,
technical SEO scores, issues fixed (severity + Y/N), remaining
manual actions, top 3 highest-impact changes with reasoning.
## HARD CONSTRAINTS
- Read every file before editing
- Do NOT break Phaser.js canvas
- Do NOT rewrite existing content, only add/fix/extend
- English only for user-facing content
- Every recommendation must cite a specific data point from Metrifyr

## Step 4: Schedule it
In Claude Code, open the scheduled tasks panel and create a new trigger:
- Cron: `0 7 * * 1` (every Monday at 07:00)
- Prompt: the full markdown above
- Workspace: your pixelden.io local checkout (so it can edit files if needed)
Claude Code now owns the job. Every Monday morning it runs the prompt against the latest Metrifyr data.
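To make Phase 1b concrete, the "quick win" criteria the prompt enforces boil down to two simple filters over GSC rows. A minimal Python sketch, assuming rows shaped like the `gsc_search_analytics` output (field names here are illustrative):

```python
def find_quick_wins(rows):
    """Phase 1b filters: queries at positions 8-20, and pages with
    >100 impressions but <2% CTR."""
    position_wins = [r for r in rows if 8 <= r["position"] <= 20]
    ctr_wins = [r for r in rows if r["impressions"] > 100 and r["ctr"] < 0.02]
    return position_wins, ctr_wins

# Illustrative rows in the shape GSC search analytics returns.
rows = [
    {"query": "pixel tower defense", "page": "/games/crystal-maze-td",
     "impressions": 412, "clicks": 6, "ctr": 6 / 412, "position": 11.3},
    {"query": "pixelden", "page": "/",
     "impressions": 900, "clicks": 210, "ctr": 210 / 900, "position": 1.2},
]
position_wins, ctr_wins = find_quick_wins(rows)
# The crystal-maze-td row trips both filters; the branded query trips neither.
```

This is exactly the kind of deterministic logic the skill carries so Claude doesn't have to improvise it at 7 AM.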
## What you actually get back
For my site, a typical run produces:
- PSI mobile 84, desktop 96 (with the exact LCP/CLS/INP numbers and whether they regressed)
- Top 10 keyword targets with opportunity scores tied to actual GSC impressions from the last 28 days, not an SEO tool's generic volume estimate
- CORE-EEAT dimension scores on the homepage and one game page
- A diff of every file it modified (sitemap, robots, metadata, JSON-LD) with one-line explanations
- Three "highest impact" recommendations with reasoning like "your /games/crystal-maze-td page has 412 impressions at position 11.3 over the last 28 days; a title rewrite targeting 'pixel tower defense' moves this to page 1"
And critically: every recommendation cites a real Metrifyr data point. The skills library enforces this as a hard constraint. No generic "improve your titles" advice.
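The reasoning behind a recommendation like the title-rewrite example is easy to sanity-check yourself. A rough sketch of the math, where the CTR-by-position benchmark is an illustrative assumption on my part (not a Metrifyr output; substitute a curve for your own niche):

```python
# Illustrative CTR benchmarks by SERP position (assumed, not measured).
ASSUMED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 5: 0.06, 8: 0.03, 11: 0.012, 15: 0.008}

def nearest_ctr(position):
    # Pick the benchmark bucket closest to the observed position.
    bucket = min(ASSUMED_CTR, key=lambda p: abs(p - position))
    return ASSUMED_CTR[bucket]

def estimated_click_gain(impressions, current_pos, target_pos):
    return round(impressions * (nearest_ctr(target_pos) - nearest_ctr(current_pos)))

# 412 impressions at position 11.3: what is the bottom of page one worth?
gain = estimated_click_gain(412, current_pos=11.3, target_pos=8)
```

Even a crude model like this makes the audit's prioritization legible instead of oracular.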
## Why this works
Three reasons.
**Structured skills beat prompt engineering.** Instead of writing a 5,000-word prompt and hoping Claude obeys it, the skills library encodes the entire SEO audit methodology as composable, reusable skills. You point at them, Claude follows them. Updates to the methodology come via `npx skills upgrade`.
**Live data beats frozen data.** The skills encode methodology, not data. Without Metrifyr, they would have to either rely on the AI's training data (frozen and wrong) or ask you for PSI scores and GSC numbers manually (which defeats the point). Metrifyr is the live-data substrate the skills run on.
**Unattended runs demand structure.** When you're sitting next to the AI, a loose prompt works fine; you steer it as it wanders. When Claude Code is running the prompt at 7 AM on a Monday while you're still asleep, there is no steering. The combination of hard phases, explicit tool mappings and data-citation requirements is what makes this work without you in the loop.
## Adapting it to your site
Three things to change:
- The URL and seed keywords. The prompt I pasted targets pixelden.io. Replace the URL in each Metrifyr tool call and replace the seed keywords list in Phase 2 with your niche.
- The schema blocks in Phase 4d. Pick the schema types that match your content: `Article`, `Product`, `LocalBusiness`, `SoftwareApplication`, etc. The `schema-markup-generator` skill handles the details.
- The GEO FAQ questions in Phase 5. Drive these from actual user queries surfaced by your GSC data in Phase 1b, not from imagination. That's the whole point.
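If you want to see what the FAQPage markup from Phase 5 looks like before trusting the skill with it, the structure is simple enough to build by hand. A minimal Python sketch (the question and answer are placeholders; drive the real ones from your GSC queries):

```python
import json

def faq_jsonld(qa_pairs):
    """Build schema.org FAQPage markup from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {"@type": "Question", "name": q,
             "acceptedAnswer": {"@type": "Answer", "text": a}}
            for q, a in qa_pairs
        ],
    }

markup = faq_jsonld([
    ("Is Crystal Maze TD free to play?",
     "Yes. It runs in the browser at no cost."),
])
script_tag = f'<script type="application/ld+json">{json.dumps(markup)}</script>'
```

Drop the resulting `script_tag` into the page head (or let the skill do it) and validate with Google's Rich Results Test.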
Once you've adapted it, schedule it weekly and forget about it. When your next audit lands in your inbox, fix the three highest-impact items it flags. Repeat every week. Your search rankings will drift upward while you focus on shipping actual product.
Next up on the Metrifyr blog: a deep dive on how the `ga4_compare_periods` tool turns a 40-line prompt into a one-line question, and how to build your own skills library for your team.
Want to try this yourself? Create a Metrifyr account (free beta), install the MCP server in Claude Code, and grab the SEO skills library. The whole setup takes about 15 minutes.