Performance Audit
Analyze web performance with browse — Core Web Vitals, framework detection, coverage analysis, and actionable recommendations.
Overview
Browse includes a built-in performance analysis system that gives AI agents everything they need to diagnose and fix web performance issues. Seven commands work together:
- `browse perf-audit` — full performance audit in one command
- `browse perf-audit save` — save an audit report for later comparison
- `browse perf-audit compare` — compare a saved baseline against the current page or another audit
- `browse perf-audit list` — list saved audit reports
- `browse detect` — technology stack fingerprint
- `browse coverage start|stop` — JS/CSS code coverage
- `browse initscript` — pre-navigation script injection
Quick Start
The simplest workflow is two commands:
browse goto https://example.com
browse perf-audit

Or navigate and audit in one step:
browse perf-audit https://example.com

The audit reloads the page, collects Core Web Vitals, analyzes resources, measures code coverage, detects the tech stack, and returns a structured report with actionable recommendations.
perf-audit
browse perf-audit [url] [--no-coverage] [--no-detect] [--json]

Flags:
| Flag | Purpose |
|---|---|
| `--no-coverage` | Skip JS/CSS coverage collection (faster audit) |
| `--no-detect` | Skip technology stack detection |
| `--json` | Output raw JSON instead of formatted text |
The audit reloads the current page to collect fresh metrics. If a URL is provided, it navigates there first.
Report Sections
The text report contains up to 16 sections:
1. Core Web Vitals — TTFB, FCP, LCP, CLS, INP, and TBT rated against Google's thresholds (good / needs improvement / poor).
2. LCP Analysis — The Largest Contentful Paint element, its size, load duration, blocking resources, and critical path reconstruction.
3. Layout Shifts — Each layout shift with timestamp, shift value, reason (font swap, missing dimensions, dynamic content), and source element.
4. Long Tasks — Scripts grouped by domain with total blocking time (TBT) contribution and task count.
5. Resource Breakdown — Total page weight broken down by type (JS, CSS, images, fonts, media, API calls).
6. Render-Blocking — Synchronous scripts and blocking stylesheets that delay first paint.
7. Coverage Summary — JS and CSS files sorted by wasted bytes, showing total size, used percentage, and unused bytes.
8. Image Audit — Each image checked for format (WebP/AVIF), dimensions vs display size, lazy-load attribute, fetchpriority, srcset, and oversized delivery.
9. Font Audit — Each font family checked for font-display value, preload status, and FOIT/FOUT risk.
10. DOM Complexity — Total node count, maximum nesting depth, and largest subtree identification.
11. Stack Detection — Detected frameworks, SaaS platforms, build modes, and versions (powered by detect).
12. Third-Party Impact — Third-party domains ranked by size with script counts and category classifications.
13. Fixable vs Platform Constraints — Separates issues you can fix from things constrained by your hosting platform (e.g., Shopify's theme system, Wix's runtime).
14. Recommendations — Prioritized list of actionable fixes, including platform-specific advice when a SaaS platform is detected.
15. Warnings — Any partial failures or fallbacks during the audit.
16. Audit Timing — How long each phase took (reload, settle, collect, detection, coverage).
Example Output
$ browse perf-audit https://example.com
Core Web Vitals:
TTFB 45ms good
FCP 312ms good
LCP 520ms good
CLS 0.000 good
INP --- (no data)
TBT 0ms good
Resource Breakdown:
Total page weight: 11KB
Scripts (JS): 0 files, 0B
Styles (CSS): 0 files, 0B
Images: 0 files, 0B
Fonts: 0 files, 0B
DOM Complexity:
Nodes: 28
Depth: 4
Largest: body (16 nodes)
Audit completed in 1.2s (reload: 320ms, settle: 500ms, collect: 45ms)

Analytics-Heavy Pages
For pages that make continuous analytics requests (common on e-commerce sites), networkidle may never fire. The audit automatically detects this and falls back to the load event, adding a warning to the report:
Warnings:
- networkidle wait failed (timeout), fell back to load event

Saving and Comparing Audits
Save audit reports for later comparison — useful for tracking performance across deploys or verifying that fixes actually improved metrics.
Save
# Save with auto-generated name (hostname + date)
browse perf-audit save
# Audit saved: .browse/audits/example-com-2026-03-27.json
# Save with custom name
browse perf-audit save my-baseline

The full audit runs first, then the report is saved to .browse/audits/. You can also combine with a URL and flags:
browse perf-audit save pre-deploy https://staging.example.com --no-detect

Compare
Compare a saved baseline against the current page (live audit) or another saved report:
# Compare saved baseline against live audit of current page
browse perf-audit compare my-baseline
# Compare two saved audits
browse perf-audit compare before-deploy after-deploy
# JSON output for programmatic use
browse perf-audit compare my-baseline --json

The diff shows per-metric changes with regression/improvement detection:
Web Vitals:
Metric Baseline Current Delta Verdict
TTFB 120ms 180ms +60ms = unchanged
LCP 1800ms 2400ms +600ms ↑ regression
CLS 0.050 0.020 -0.030 ↓ improvement
TBT 80ms 90ms +10ms = unchanged
1 regression, 1 improvement, 2 unchanged

Regression thresholds are aligned with Web Vitals "good" boundaries:
| Metric | Threshold |
|---|---|
| TTFB | +100ms |
| FCP | +100ms |
| LCP | +200ms |
| CLS | +0.05 |
| TBT | +100ms |
| INP | +50ms |
Changes within the threshold are marked "unchanged" to avoid false positives from natural variance.
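The threshold table reduces to a small classification step. Below is a sketch in JavaScript, not the tool's actual implementation: it assumes the thresholds are applied symmetrically in the improvement direction, which the CLI may handle differently.

```javascript
// Sketch: classify a metric delta against the regression thresholds above.
// Assumption: symmetric thresholds for improvements; browse's real rule
// for the improvement direction may differ.
const THRESHOLDS = { TTFB: 100, FCP: 100, LCP: 200, CLS: 0.05, TBT: 100, INP: 50 };

function classifyDelta(metric, delta) {
  const t = THRESHOLDS[metric];
  if (t === undefined) throw new Error(`unknown metric: ${metric}`);
  if (delta > t) return "regression";   // worsened past the threshold
  if (delta < -t) return "improvement"; // improved past the threshold
  return "unchanged";                   // within natural variance
}
```

Under this sketch, an LCP delta of +600ms classifies as a regression while a TTFB delta of +60ms stays "unchanged", matching the sample diff above.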
List and Delete
# List all saved audits
browse perf-audit list
# Name Size Date
# pre-deploy 48KB 2026-03-27 10:30:02
# post-deploy 52KB 2026-03-27 14:15:18
# Delete a saved audit
browse perf-audit delete pre-deploy

Audit files are stored in .browse/audits/ as JSON.
detect
browse detect

Detects the technology stack of the current page in a single `page.evaluate()` call (typically under 200ms). No page reload required.
Detection Scope
- 108 frameworks across 12 categories: JavaScript (React, Vue, Angular, Svelte...), meta-frameworks (Next.js, Nuxt, Remix...), PHP (WordPress, Laravel...), Python (Django, Flask...), Ruby (Rails), Java/.NET, CSS frameworks, static site generators, mobile web, emerging frameworks, state management, and build tools
- 55 SaaS platforms across 6 categories: e-commerce (Shopify, Magento, WooCommerce...), website builders (Wix, Squarespace, Webflow...), CMS hosted, CMS headless, marketing, and hosting
- Infrastructure: CDN provider, HTTP protocol breakdown (h2/h3), compression by type, cache hit rate, Service Worker strategy, DOM complexity
- 88 third-party domain classifications (analytics, ads, social, CDN, payment, etc.)
Example Output
$ browse detect
Stack:
meta-framework Next.js 14.2.3 (production), router: app
js-framework React 18.2.0 (production)
css Tailwind CSS 3.4
Infrastructure:
CDN: Vercel (cache: HIT)
Protocol: h2 (h2: 95%, h1.1: 5%)
Compression: js: 100% compressed, css: 100% compressed
Cache rate: 72% (18/25 resources)
DNS origins: 3 unique (1 missing preconnect)
DOM: 1,247 nodes, depth 14, largest: main#content (892 nodes)

For the full list of detectable technologies, see the supported technologies reference in the CLI repository.
coverage
Measure how much of your JavaScript and CSS is actually used on the current page.
# Start collecting coverage
browse coverage start
# Navigate and interact with the page
browse goto https://example.com
browse click @e3
browse scroll down
# Stop and see results
browse coverage stop

Example Output
$ browse coverage stop
JavaScript:
vendor.js 450KB used: 120KB ( 27%) wasted: 330KB
app.js 180KB used: 95KB ( 53%) wasted: 85KB
analytics.js 45KB used: 8KB ( 18%) wasted: 37KB
[inline] 2KB used: 2KB (100%) wasted: 0B
Total 677KB used: 225KB ( 33%) wasted: 452KB
CSS:
styles.css 85KB used: 22KB ( 26%) wasted: 63KB
[inline] 3KB used: 1KB ( 33%) wasted: 2KB
Total 88KB used: 23KB ( 26%) wasted: 65KB
Grand Total: 765KB used: 248KB (32%) wasted: 517KB (68%)

Results are sorted by wasted bytes (largest waste first). Inline scripts and styles appear as [inline].
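The table above is just per-file totals sorted by waste. A sketch of how an agent might post-process coverage data follows; the `{ url, totalBytes, usedBytes }` entry shape is an assumption for illustration, not browse's actual JSON format.

```javascript
// Sketch: sort coverage entries by wasted bytes and compute aggregate use.
// The entry shape { url, totalBytes, usedBytes } is assumed, not browse's
// real output format.
function summarizeCoverage(entries) {
  const rows = entries
    .map(e => ({ ...e, wastedBytes: e.totalBytes - e.usedBytes }))
    .sort((a, b) => b.wastedBytes - a.wastedBytes); // largest waste first
  const total = rows.reduce(
    (acc, r) => ({
      totalBytes: acc.totalBytes + r.totalBytes,
      usedBytes: acc.usedBytes + r.usedBytes,
    }),
    { totalBytes: 0, usedBytes: 0 }
  );
  const usedPct = Math.round((total.usedBytes / total.totalBytes) * 100);
  return { rows, total, usedPct };
}
```

Feeding it the JavaScript figures from the example (vendor.js 450/120, app.js 180/95, analytics.js 45/8, inline 2/2, in KB) reproduces the 677KB total and 33% used figure shown above.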
initscript
Inject a script that runs before every page load. Useful for mocking APIs, injecting polyfills, or setting up custom observers before the page's own JavaScript executes.
# Set an init script
browse initscript set "window.__TEST_MODE = true;"
# View the current init script
browse initscript show
# Remove the init script
browse initscript clear

Key Behaviors
- The script runs before any page JavaScript, on every navigation
- Use an IIFE pattern for namespace safety: `(function() { ... })()`
- Survives device emulation (restored when context is recreated)
- Coexists with the domain filter (both use `addInitScript` internally)
- Clearing the script takes effect on the next context recreation (e.g., `emulate` or `restart`)
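The IIFE point is worth seeing in isolation: only the globals you deliberately export escape, while helpers stay private. A minimal sketch (the `__TEST_MODE` and `__BOOT_TIME` names are example choices, not anything browse defines):

```javascript
// Sketch: an init-script body wrapped in an IIFE. Only the deliberately
// exported globals escape; everything else stays private to the closure.
(function () {
  const startedAt = Date.now();    // private helper, never leaks
  globalThis.__TEST_MODE = true;   // intentional global for test hooks
  globalThis.__BOOT_TIME = startedAt;
})();
```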
Use Cases
Mock an API endpoint:
browse initscript set "(function() {
window.fetch = new Proxy(window.fetch, {
apply(target, thisArg, args) {
if (args[0] === '/api/feature-flags') {
return Promise.resolve(new Response(JSON.stringify({ darkMode: true })));
}
return Reflect.apply(target, thisArg, args);
}
});
})()"

Inject a performance observer:
browse initscript set "(function() {
new PerformanceObserver(list => {
for (const entry of list.getEntries()) {
console.log('Long task:', entry.duration, 'ms');
}
}).observe({ type: 'longtask', buffered: true });
})()"

Supported Technologies
The detection system covers:
- 108 frameworks across 12 categories (JS, meta-frameworks, PHP, Python, Ruby, Java/.NET, CSS, SSG, mobile, emerging, state management, build tools)
- 55 SaaS platforms across 6 categories (e-commerce, website builders, CMS hosted, CMS headless, marketing, hosting)
- 88 third-party domain classifications (analytics, ads, social, CDN, payment, monitoring, etc.)
- Infrastructure: CDN, protocol, compression, caching, Service Worker, DOM complexity
Using with AI Agents
The performance audit is designed for an AI agent workflow:
- Agent runs `browse perf-audit save before https://example.com` to establish a baseline
- Agent reads the report and the source code to understand the issues
- Agent fixes the identified issues (render-blocking scripts, oversized images, unused CSS, missing font-display, etc.)
- Agent runs `browse perf-audit compare before` to verify fixes worked — regressions are flagged automatically
- Agent repeats until all actionable items are resolved and no regressions remain
The report's "Fixable vs Platform Constraints" section tells the agent which issues are worth fixing and which are inherent to the hosting platform.
Tips
- Use `--no-coverage` for faster audits when you only need Web Vitals and detection
- Use `--json` for programmatic processing or when piping to other tools
- `perf-audit` reloads the page — run it after you've navigated to the target URL, or pass the URL as an argument
- For pages with continuous analytics, the audit automatically falls back from `networkidle` to the `load` event
- Run `detect` separately when you only need the tech stack (no reload, under 200ms)
- Coverage collection works across navigation — start it, browse multiple pages, then stop to see aggregate results
- Use `perf-audit save` before and after changes to track exactly what improved or regressed
- Use `perf-audit compare` in CI to catch performance regressions before deploy
- Saved audits are ~4-50KB JSON files — safe to keep many without disk concerns