# browse-geo skill
Generative Engine Optimization — multi-engine brand visibility monitoring across Google AI Overviews, Perplexity, and ChatGPT Search.
## Install

```bash
npx skills add https://github.com/ulpi-io/skills --skill browse-geo
```

Strongly recommended for Google coverage:

```bash
npm install camoufox-js
npx camoufox-js fetch
```

## Trigger
`/browse-geo <domain> [queries...]` — or invoked when the user asks for:
- "GEO monitoring", "generative engine optimization"
- "AI visibility tracking", "brand monitoring in AI search"
- "Where does my site get cited in AI answers?"
## What it does
Multi-query, multi-engine brand visibility monitoring. For each query × engine combination, the skill records whether the target domain appears, its position, citation count, and context. Produces a cross-engine visibility matrix.
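Conceptually, each query × engine observation and the resulting matrix can be sketched as follows (the `Visibility` fields, `build_matrix`, and the stub checker are illustrative names, not the skill's internals):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Visibility:
    """One query x engine observation (field names are illustrative)."""
    query: str
    engine: str
    cited: bool               # does the target domain appear at all?
    position: Optional[int]   # 1-based citation position, None if absent
    citations: int            # how many times the domain is cited

def build_matrix(domain, queries, engines, check):
    """Evaluate every query x engine combination with a checker callable."""
    return [check(domain, q, e) for q in queries for e in engines]

# Stub checker for illustration only: pretends the domain is cited
# first on Perplexity and absent everywhere else.
def fake_check(domain, query, engine):
    cited = engine == "perplexity"
    return Visibility(query, engine, cited, 1 if cited else None, int(cited))

matrix = build_matrix("example.com", ["best crm"], ["google", "perplexity"], fake_check)
```

In the real skill, the checker role is played by reading the browser snapshot for each engine rather than a stub.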
## Engines covered
| Engine | Method | Auth needed | Camoufox needed |
|---|---|---|---|
| Google AI Overviews | `browse goto https://www.google.com/search?q=<q>` | No | Yes |
| Perplexity | `browse goto https://www.perplexity.ai/search?q=<q>` | No | Yes (Cloudflare) |
| ChatGPT Search | `browse goto https://chatgpt.com` + fill/Enter | Yes (`--profile chatgpt`) | No |
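For the two URL-addressable engines, the search URL can be built by encoding the query into the templates from the table above (a minimal sketch; `ENGINE_URLS` and `search_url` are illustrative names, and ChatGPT Search is omitted because it needs an authenticated session plus fill/Enter rather than a query URL):

```python
from urllib.parse import quote_plus

# URL templates taken from the engines table.
ENGINE_URLS = {
    "google": "https://www.google.com/search?q={q}",
    "perplexity": "https://www.perplexity.ai/search?q={q}",
}

def search_url(engine: str, query: str) -> str:
    """Return the goto target for a URL-addressable engine."""
    return ENGINE_URLS[engine].format(q=quote_plus(query))

search_url("google", "best crm for startups")
# → https://www.google.com/search?q=best+crm+for+startups
```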
## Workflow

- Gather inputs — target domain, query list, engine selection (via AskUserQuestion)
- Google AI Overviews — `--runtime camoufox --headed`, `BROWSE_CONSENT_DISMISS=1`, snapshot + extract
- Perplexity — direct navigation, `wait --network-idle`, numbered citation analysis
- ChatGPT Search — `--profile chatgpt` for authenticated session, fill + press Enter, handle streaming response
- Visibility report — cross-engine matrix table with summary metrics
## Report output
- Visibility matrix: domain × engine × query
- Summary metrics: visibility rate (%), avg citation position, gap queries (where domain is absent), competing domains (who gets cited instead)
- Recommendations: which queries to target, which competitors to study
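The summary metrics reduce to simple aggregation over the matrix rows. A minimal sketch, assuming each row is a dict with `query`, `engine`, `cited`, and `position` keys (an illustrative shape, not the skill's actual report format):

```python
def summarize(rows):
    """Compute visibility rate, average citation position, and gap queries."""
    total = len(rows)
    cited = [r for r in rows if r["cited"]]
    visibility_rate = 100.0 * len(cited) / total if total else 0.0
    positions = [r["position"] for r in cited if r["position"] is not None]
    avg_position = sum(positions) / len(positions) if positions else None
    # Gap queries: the domain is absent across every engine for that query.
    gap_queries = sorted({r["query"] for r in rows} - {r["query"] for r in cited})
    return {"visibility_rate": visibility_rate,
            "avg_position": avg_position,
            "gap_queries": gap_queries}

# Example matrix: 2 queries x 2 engines, domain cited once.
rows = [
    {"query": "best crm",    "engine": "google",     "cited": True,  "position": 2},
    {"query": "best crm",    "engine": "perplexity", "cited": False, "position": None},
    {"query": "crm pricing", "engine": "google",     "cited": False, "position": None},
    {"query": "crm pricing", "engine": "perplexity", "cited": False, "position": None},
]
report = summarize(rows)
# → visibility rate 25.0%, avg position 2.0, gap query "crm pricing"
```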
## Key rules
- Camoufox is required for Google. Without it, you'll hit "unusual traffic" blocks within a few queries.
- Rate limit: add `browse wait --network-idle` between queries.
- No fabrication — the agent reads actual citations from the snapshot, never guesses.
- Results are point-in-time — AI engines change answers frequently. Rerun periodically.
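When driving runs from a script rather than the skill itself (which paces via `browse wait --network-idle`), the same rate-limiting idea can be approximated with a jittered pause between queries (an illustrative sketch; the function name and default delays are assumptions):

```python
import random
import time

def paced(queries, base=4.0, jitter=2.0, seed=None):
    """Yield queries, sleeping base + uniform(0, jitter) seconds between them."""
    rng = random.Random(seed)
    for i, query in enumerate(queries):
        if i:  # no pause before the first query
            time.sleep(base + rng.uniform(0.0, jitter))
        yield query
```

Usage: `for q in paced(["query 1", "query 2"]): ...` — randomized spacing looks less bot-like than a fixed interval.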
## Commands used

`goto`, `snapshot`, `text`, `wait`, `fill`, `press`
## Related

- `browse-aeo` — per-page AEO readiness analysis
- `browse-stealth` — anti-detection setup for Google/Perplexity