🎉 Free during beta — all features included

Stop prompt attacks
before they reach your LLM

One API call. Instant risk scores. Block prompt injection, jailbreaks, and system-prompt extraction in milliseconds — with full analytics and webhooks.

Drop-in SDK or raw HTTP — your choice

Python
pip install llmgateways
from llmgateways import wrap, PromptBlockedError
from openai import OpenAI

client = wrap(OpenAI(), api_key="lgk_...")

try:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Hello!"}],
    )
except PromptBlockedError as e:
    print("Blocked:", e.result.threats)
Node.js / TypeScript
npm i llmgateways
import { wrap, PromptBlockedError } from 'llmgateways';
import OpenAI from 'openai';

const client = wrap(new OpenAI(), { apiKey: 'lgk_...' });

try {
  const resp = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user',
                 content: 'Hello!' }],
  });
} catch (e) {
  if (e instanceof PromptBlockedError)
    console.log('Blocked:', e.result.threats);
}
cURL / raw HTTP
curl -X POST https://llmgateways-backend-nqpl7yvf3a-ew.a.run.app/api/v1/prompt/scan \
  -H "X-API-Key: lgk_your_key" -H "Content-Type: application/json" \
  -d '{"prompt": "Ignore all previous instructions..."}'

# Response (< 5ms)
{"risk_score": 0.82, "action": "block", "threats": ["prompt_injection", "jailbreak"], "latency_ms": 3}
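If you call the raw HTTP endpoint yourself, the JSON response above is all you need to act on. A minimal Python sketch (the endpoint URL, `X-API-Key` header, and response fields are taken from the cURL example; the `should_block` helper is illustrative, not part of the SDK):

```python
import json

# Sample response shape from the scan endpoint (see cURL example above).
sample = json.loads(
    '{"risk_score": 0.82, "action": "block", '
    '"threats": ["prompt_injection", "jailbreak"], "latency_ms": 3}'
)

def should_block(scan_result: dict) -> bool:
    """Return True when the gateway's verdict is to block the prompt."""
    return scan_result.get("action") == "block"

if should_block(sample):
    # Refuse to forward the prompt to your LLM and surface the threat labels.
    print("Blocked:", ", ".join(sample["threats"]))
```

In production you would POST the user's prompt to `/api/v1/prompt/scan` first, then only forward it to your LLM when `should_block` returns `False`.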

< 5ms latency

3-layer detection engine: pattern rules → semantic similarity → LLM judge. Most prompts decided in under 5ms.

78+ threat patterns

Covers prompt injection, DAN jailbreaks, system-prompt extraction, PII leakage, token smuggling and more.

Full analytics

Real-time dashboards, threat breakdowns by category, per-API-key stats and scan history.

Webhooks

Get notified instantly when a prompt is blocked. HMAC-signed payloads, configurable per event type.
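Verifying an HMAC-signed payload on your side might look like the sketch below. The SHA-256 scheme, hex encoding, and signature header handling are assumptions for illustration; check the webhook settings for the exact signing format your endpoint receives.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_hex: str, secret: str) -> bool:
    """Recompute the HMAC over the raw body and compare in constant time."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Example: a handler reads the raw request body plus the signature header
# (names here are illustrative), then rejects the request on mismatch.
body = b'{"event": "prompt.blocked"}'
sig = hmac.new(b"whsec_demo", body, hashlib.sha256).hexdigest()
print(verify_webhook(body, sig, "whsec_demo"))  # True for a valid signature
```

Always compare signatures with `hmac.compare_digest` rather than `==` to avoid timing side channels.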

Email alerts

Automatic alerts when your block rate spikes — catch active attacks or misconfigured integrations early.

Per-key custom rules

Block specific phrases and disable detection categories per API key. Rate limited to 100 req/min per key.
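To stay under the 100 req/min budget, you can throttle on the client side. A minimal sketch (the `RateLimiter` class is illustrative, not part of the SDK):

```python
import time

class RateLimiter:
    """Client-side throttle: space calls evenly under a per-minute budget."""

    def __init__(self, max_per_minute: int = 100):
        self.min_interval = 60.0 / max_per_minute  # seconds between calls
        self.last_call = 0.0

    def wait(self) -> None:
        """Sleep just long enough to respect the minimum call interval."""
        now = time.monotonic()
        sleep_for = self.min_interval - (now - self.last_call)
        if sleep_for > 0:
            time.sleep(sleep_for)
        self.last_call = time.monotonic()

limiter = RateLimiter(max_per_minute=100)
limiter.wait()  # call before each scan request
```

Spacing requests at 0.6s intervals keeps a single worker safely under the limit; with multiple workers, share one limiter or divide the budget between them.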

Ready to secure your LLM?

Free during beta. No credit card required.

Create free account →