Use cases
Three real scenarios where ProceedGate stops runaway behavior. Each demo below calls the live API; click 11 or more times to trip loop detection.
A buggy prompt causes GPT-4 to keep calling itself, re-evaluating the same context. Each call costs ~$0.04. At 200 calls that's $8 before anyone notices.
An agent using model_call enters a reasoning loop — same prompt, same context, same output, repeat. The LLM API doesn't know it's a loop. It just bills per token.
Every model call hashes the prompt context. After more than 10 identical hashes in 60s, the check returns loop_detected and stops the agent cold, before it burns through your monthly API budget.
```typescript
const PG_KEY = process.env.PG_KEY;

async function pgCheck(agentId: string, taskHash: string) {
  const res = await fetch('https://governor.proceedgate.dev/v1/check', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${PG_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      agent_id: agentId,
      task_hash: taskHash,
      action: 'model_call',
    }),
  });
  if (!res.ok) throw new Error((await res.json()).error);
  return res.json(); // { allowed, proceed_token, zone, ... }
}

// Before every GPT-4 call: hash the prompt so identical contexts
// map to the same task_hash and the repeat counter can see the loop.
const gate = await pgCheck('reasoning-agent', sha256(prompt));
if (!gate.allowed) throw new Error('loop_detected');
```
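The snippet above calls a `sha256` helper that isn't shown. It is not part of the ProceedGate API; in Node it can be a few lines over `node:crypto`:

```typescript
import { createHash } from 'node:crypto';

// Hash arbitrary context (a prompt, a URL) into a stable hex task_hash.
function sha256(input: string): string {
  return createHash('sha256').update(input).digest('hex');
}
```

Identical contexts produce identical hashes, which is exactly what the governor's repeat counter keys on.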
A 403 or rate limit triggers automatic retries. Without a circuit breaker the scraper retries the same URL hundreds of times — each one counting against your quota.
An Apify actor hits a 403 and calls actor.retry(). The default retry config allows 8 attempts per request; across 100 URLs that's 800 wasted API calls before the job fails.
Each URL is hashed before fetching. After more than 10 retries of the same hash in 60s, loop_detected halts the retry storm and fires a webhook alert. The other 90 URLs continue normally.
```typescript
const PG_KEY = process.env.PG_KEY;

for (const url of urls) {
  // Hash the URL so repeated retries of the same page map to one task_hash.
  const res = await fetch('https://governor.proceedgate.dev/v1/check', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${PG_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      agent_id: 'apify-scraper',
      task_hash: sha256(url),
      action: 'tool_call',
    }),
  });
  if (res.status === 429) {
    // loop_detected: stop retrying this URL; the remaining URLs continue.
    console.warn(`loop_detected for ${url}, skipping`);
    continue;
  }
  const html = await (await fetch(url)).text();
}
```
A pipeline of 4 agents (Researcher → Analyst → Writer → Reviewer) shares one budget. If Researcher enters a loop, it shouldn't drain the credits for everyone else.
In CrewAI or LangGraph, agents share a workspace. One agent stuck in a disambiguation loop consumes the remaining budget, starving the rest of the pipeline before it can finish.
Each agent identifies itself via agentId. Loop detection is per-agent, but budget is shared per workspace. The stuck agent is blocked while the others continue unaffected.
```typescript
const PG_KEY = process.env.PG_KEY;

async function check(agentId: string, action: string, taskHash: string) {
  const res = await fetch('https://governor.proceedgate.dev/v1/check', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${PG_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      agent_id: agentId,
      task_hash: taskHash,
      action,
    }),
  });
  if (res.status === 429) throw new Error(`${agentId}: loop_detected`);
  return res.json();
}

// Each agent checks independently; the budget is shared per workspace.
await check('researcher', 'tool_call', sha256(query));
await check('analyst', 'model_call', sha256(data));
await check('writer', 'model_call', sha256(draft));
```
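The point of per-agent detection is that a loop_detected error from one agent should not abort the whole crew. A sketch of that pattern, with the network call stubbed out (the stub, not ProceedGate, decides who is blocked here):

```typescript
// Stub standing in for the real /v1/check round trip.
// 'researcher' is hard-coded as the looping agent for illustration.
async function check(agentId: string): Promise<void> {
  if (agentId === 'researcher') throw new Error(`${agentId}: loop_detected`);
}

async function runPipeline(agents: string[]): Promise<Record<string, string>> {
  const results: Record<string, string> = {};
  for (const agent of agents) {
    try {
      await check(agent);
      results[agent] = 'ran';
    } catch {
      // The blocked agent is skipped; the shared budget survives for the rest.
      results[agent] = 'blocked';
    }
  }
  return results;
}

console.log(await runPipeline(['researcher', 'analyst', 'writer', 'reviewer']));
```

Catching per agent, rather than letting one throw propagate, is what turns "loop detection is per-agent" into "the others continue unaffected."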
Feature matrix
All features are live in the free tier. No paid plan required to test loop detection or proceed tokens.
| Feature | LLM Gateway | Scraping | Multi-Agent |
|---|---|---|---|
| Loop detection (>10 identical / 60s) | ✓ | ✓ | ✓ |
| Gray zone (6–10 repeats) | ✓ | ✓ | ✓ |
| Signed proceed token (ES256) | ✓ | ✓ | ✓ |
| Per-workspace budget cap | ✓ | ✓ | ✓ |
| Per-agent loop detection (shared workspace) | — | — | ✓ |
| Webhook alert on loop block | ✓ | ✓ | ✓ |
| Agent reputation scoring | ✓ | ✓ | ✓ |
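"Signed proceed token (ES256)" means each allow decision is signed with ECDSA over the P-256 curve and SHA-256, so a downstream service can verify a token offline without calling the governor. ProceedGate's actual token format isn't documented here; the sketch below only illustrates the ES256 sign/verify mechanic, using a throwaway key pair and a made-up payload:

```typescript
import { generateKeyPairSync, createSign, createVerify } from 'node:crypto';

// Throwaway P-256 key pair (OpenSSL name: prime256v1).
// In practice you would pin the issuer's published public key instead.
const { publicKey, privateKey } = generateKeyPairSync('ec', {
  namedCurve: 'prime256v1',
});

// Hypothetical token payload, for illustration only.
const payload = JSON.stringify({ agent_id: 'reasoning-agent', zone: 'green' });

// Sign with the private key; verify with the public key.
const signature = createSign('SHA256').update(payload).sign(privateKey, 'base64url');
const valid = createVerify('SHA256').update(payload).verify(publicKey, signature, 'base64url');
```

Any change to the payload invalidates the signature, which is what lets a consumer trust a proceed token it received secondhand.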
No card required. The demos above use the real API — your integration will too.
Get your API key →