All sites up to date

Notifications

Alerts & blocked sites across all properties

Crawling website...

🕷️ Sending spider down the rabbit hole...

Discovering and analyzing all internal pages. This may take a moment.

System Metrics
Health Score +3.2%
Tracked Pages
Last crawl:
Pages crawled:
Open Issues
Changes (7d)
Search Console
Clicks (28d)
Impressions (28d)
Avg. CTR
Avg. Position
Inspected
Indexed
Core Web Vitals

No CrUX data yet

Core Web Vitals sync automatically from Google Search Console

Health Score Over Time

Daily average · Track your site's overall health trend

Issues by Category

Distribution of detected issues

Recent Changes

View All

Top Issues

View All

Pages Health Breakdown

62% Healthy
24% Needs Attention
14% Critical
URL | Title | Health | HTTP Status | Indexable | Last Crawled | Actions
8 Critical
11 Warnings
5 Notices
142 Resolved
0 Ignored
Googlebot Visits 3,482 +8.4%
Bingbot Visits 892 -2.1%
Other Bots 214 0%

Bot Crawl Activity

Most Crawled Pages

URL | Googlebot | Bingbot | Others | Total Crawls | Last Crawled

Upload Your Access Log File

Drag & drop your access.log file here, or click to browse

Supported formats: Combined Log Format, Common Log Format (.log, .txt)

Advantages of Log File Analysis

1

Download Access Logs

Download the access.log file from your web server (Apache, Nginx, IIS).

2

Upload to Analyzer

Drag and drop or browse to upload your log file for analysis.

3

Get Detailed Report

See which bots crawl your site, how status codes are distributed, and which URLs get crawled.
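The steps above reduce to parsing each line of the uploaded log. As a rough illustration (not BrainZMonitor's actual parser; the regex and the bot table are simplified assumptions), a Combined Log Format line can be split into the fields the report shows like this:

```python
import re

# One Combined Log Format line looks like:
# 66.249.66.1 - - [10/Oct/2025:13:55:36 +0000] "GET /page HTTP/1.1" 200 2326 \
#   "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<ua>[^"]*)"'
)

# Tiny bot table for illustration; a real analyzer matches many more tokens.
BOT_TOKENS = {"googlebot": "Googlebot", "bingbot": "Bingbot"}

def classify(line: str):
    """Return (path, status, bot_name) for one log line, or None if unparsable."""
    m = LOG_RE.match(line)
    if not m:
        return None
    ua = m.group("ua").lower()
    bot = next((name for token, name in BOT_TOKENS.items() if token in ua), "Other")
    return m.group("path"), int(m.group("status")), bot
```

Counting the `bot` values over a whole file yields the per-bot visit totals and most-crawled-pages tables shown above.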

Website Settings

Crawl Settings

Crawler Identity Override

Override the HTTP Host header and User-Agent sent during crawling. Use this when the site's security layer is blocking the crawler — enable the toggle, then enter the identity values that match what you've whitelisted on the server.

How to whitelist BrainZMonitor on your site

BrainZMonitor crawls with a rotating pool of browser-realistic User-Agents by default. If your WAF, CDN, or server-side firewall is blocking the crawler, share the following information with your developer or hosting team and ask them to whitelist it.

Option 1 — Whitelist by User-Agent string (recommended)

Ask your developer to allow requests whose User-Agent header contains the following identifier. This is the safest method and requires no IP-level changes.

BrainZMonitor/2.0 (+https://brainzmonitor.com/bot)

For Nginx: if ($http_user_agent ~* "BrainZMonitor") { ... }
For Apache (.htaccess): SetEnvIf User-Agent "BrainZMonitor" allow_bot
For Cloudflare WAF: add a rule — User Agent contains "BrainZMonitor" → Allow
For Wordfence / WordPress plugins: add BrainZMonitor to the bot whitelist
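Taken together, the one-liners above slot into a larger rule. As a hypothetical Nginx sketch (variable names, blocked tokens, and paths are placeholders, not BrainZMonitor output), a deny-by-default bot filter that lets the crawler through could look like:

```nginx
# Hypothetical site config; adapt names and tokens to your setup.
map $http_user_agent $blocked_ua {
    default            0;
    "~*BrainZMonitor"  0;   # explicit allow, listed before the block rules
    "~*(curl|wget)"    1;   # example tokens this site chooses to block
}

server {
    listen 80;
    server_name example.com;

    location / {
        if ($blocked_ua) { return 403; }   # reject blocked bots
        root /var/www/html;
    }
}
```

With `map`, regex entries are tested in the order they appear, so the explicit BrainZMonitor allow must come before any pattern that might also match it.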

Option 2 — Use a shared secret header

Set a custom User-Agent (or Host header) below that only you and your developer know. Configure your server to allow any request carrying that exact string. Then enable the override toggle and paste the same value here.

X-Crawler-Token: <your-secret>

Alternatively, simply set a recognizable User-Agent such as InternalSEOBot/1.0 MyCompany and whitelist that string on the server.
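On the server side, Option 2 needs a rule that recognizes the secret. A minimal Nginx sketch, assuming the X-Crawler-Token header from the example above (the token value and variable name are placeholders):

```nginx
# Hypothetical: flag requests that carry the agreed secret header.
map $http_x_crawler_token $trusted_crawler {
    default             0;
    "my-shared-secret"  1;   # must match the value configured in the override
}
```

Your existing bot-blocking rules would then be skipped whenever `$trusted_crawler` is `1`; on Cloudflare, the equivalent is a custom WAF rule that matches the header value and uses the Skip action.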

robots.txt — allow the crawler

Add the following block to your robots.txt to explicitly allow BrainZMonitor:

User-agent: BrainZMonitor
Allow: /

Option 3 — Whitelist by server IP (Cloudflare / WAF firewall rule)

Add the server's IP address to your WAF/CDN allowlist so all requests from BrainZMonitor are let through regardless of User-Agent.

Fetching…

For Cloudflare: Security → WAF → Tools → IP Access Rules → add this IP as Allow.
For Cloudflare Bot Management: Security → Bots → Configure → add a WAF Skip rule: IP Source Address equals <this IP> → Skip all managed rules.
For Nginx: allow <this IP>; in your server {} block.
For Apache (.htaccess): Require ip <this IP>

After whitelisting, use the Enable Custom Host / User-Agent toggle below only if you chose a custom identity (Option 2). For Option 1 or robots.txt, no override is needed — the crawler will use its default identity automatically.

Cache Bypass

Force the crawler to fetch fresh content by bypassing server-side caches (WP Rocket, LiteSpeed Cache, Cloudflare CDN, Varnish, etc.). Enable this if monitored changes aren't detected due to aggressive caching.

Bypass cache when crawling this site

Custom Session Cookies

Some sites (e.g. those using IP rate limiting or bot-detection services) block automated crawlers. You can paste session cookies from your browser here to help the crawler bypass those restrictions.

How to get cookies: Open the site in Chrome, press F12 → Application → Cookies, then copy the Name and Value pairs you need.
Format: name1=value1; name2=value2

Cookies are sent with every request to this site. Keep them private — anyone who has them can access the site as you.
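The `name1=value1; name2=value2` format is easy to get wrong when copying several cookies by hand. As an illustrative sanity check (not part of the product), the string splits like this:

```python
def parse_cookie_string(raw: str) -> dict:
    """Split 'name1=value1; name2=value2' into a dict, skipping malformed pairs."""
    cookies = {}
    for pair in raw.split(";"):
        name, sep, value = pair.strip().partition("=")
        if sep and name:
            # partition() splits on the first '=', so values that themselves
            # contain '=' (common in signed session cookies) stay intact
            cookies[name] = value
    return cookies
```

Anything that parses to an empty dict (no `=` at all, or empty names) would be sent as a useless Cookie header, so it is worth checking the string before saving.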

Slack Notifications

Paste a Slack Incoming Webhook URL to receive alert notifications in your Slack workspace. Any alert rule with Slack checked will post here. How to create a webhook →

Email Notifications

Enter an email address to receive all alert notifications by email. Every triggered alert rule will send a message to this address. Requires SMTP to be configured on the server (SMTP_HOST, SMTP_USER, SMTP_PASS env vars).

CMS Integration (Auto-Fix)

Connect your WordPress site to enable automatic fixing of SEO issues. Requires a WordPress Application Password (guide →).

The root URL of your WordPress site (wp-json must be accessible).
Generate in WordPress → Users → Your Profile → Application Passwords.
Enable automatic fixing after each crawl
Integration enabled
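For background, WordPress Application Passwords travel as standard HTTP Basic auth on REST requests, which is why wp-json must be reachable. A minimal sketch of the header such a request carries (the user, password, and URL are placeholders, and this is not BrainZMonitor's actual client):

```python
import base64

def wp_auth_header(user: str, app_password: str) -> dict:
    """Build the Basic auth header WordPress expects for Application Passwords.

    WordPress displays the password with spaces (e.g. 'abcd efgh ...'); they
    are part of the credential and can be sent as-is.
    """
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Usage sketch (placeholder URL and credentials):
#   requests.get("https://example.com/wp-json/wp/v2/posts",
#                headers=wp_auth_header("admin", "abcd efgh ijkl mnop"))
```

If a manual request like this returns 401, the Application Password or the wp-json route is the problem, not the integration settings.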

Google Search Console

Connect Google Search Console to enrich SEO data with real search performance metrics (clicks, impressions, CTR, position) and indexation status.

Click below to authorize BrainZMonitor to read your Search Console data.

Danger Zone

Initialize site data

Clear all crawled pages, issues, changes, alerts, and history for this site. The website definition and settings are kept, so future crawls start from scratch.

Delete this website

Permanently remove this website and all its crawled data, issues, changes, and alerts. This action cannot be undone.

Member | Role | Assigned sites | Last login | Actions
Loading team…
Type | Subject | Requestor | Status | Created | Actions
Loading…

Closed & Rejected

Type | Subject | Requestor | Status | Created | Actions
No closed or rejected requests.