Quickstart
Get a working proxy in one terminal session. No engine required to start.
Gateway mode (recommended starting point)
No detection engine. No Python. No extra dependencies. The proxy forwards all requests while giving you rate limiting, audit logging, and provider routing out of the box.
git clone https://github.com/promptshieldhq/promptshield-proxy
cd promptshield-proxy
cp .env.example .env
# edit .env: set PROMPTSHIELD_PROVIDER and your API key
make run
# proxy listening on :8080

Verify it is up:
curl -s http://localhost:8080/health
# {"status":"ok","service":"promptshield-proxy"}

Send your first request:
curl -s -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "Say hello"}]}'

That is it. Your app now has a proxy with full audit logging and rate limiting. Add the engine when you are ready for PII scanning.
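To make the rate-limiting idea concrete: proxies commonly implement it with a token bucket, which allows short bursts while enforcing a steady average rate. This is a concept sketch only, not promptshield-proxy's actual implementation:

```python
import time

class TokenBucket:
    """Concept sketch of token-bucket rate limiting (illustration only)."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # ~5 req/s, burst of 10
allowed = [bucket.allow() for _ in range(20)]
# The first burst passes; later calls are throttled until tokens refill.
```

The proxy applies the same principle per client, so a burst of traffic is absorbed up to the bucket capacity and anything beyond is rejected until the bucket refills.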
Full PII scanning and injection blocking are coming soon in a separate detection engine. The proxy works fully without it, but you won't get those security features until it ships. Star the repo and watch for the release.
What is next
- Configure your LLM provider: Gemini, OpenAI, Ollama, and more
- Customize the policy: decide what gets blocked, masked, or allowed
- Integrate your SDK: point base_url at the proxy, done
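Pointing an SDK at the proxy amounts to swapping the base URL. As a minimal stdlib sketch (the chat_request helper is hypothetical; the port and endpoint path come from the quickstart above):

```python
import json
import urllib.request

def chat_request(base_url, messages):
    """Build a POST request against the proxy's OpenAI-compatible endpoint."""
    body = json.dumps({"messages": messages}).encode()
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("http://localhost:8080", [{"role": "user", "content": "Say hello"}])
# To actually send it (requires the proxy to be running):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

Any OpenAI-compatible client works the same way: leave the request format alone and change only the host it talks to.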
PromptShield
A free, open-source proxy for every LLM call. Rate limiting, audit logging, provider routing, and Prometheus metrics out of the box. PII scanning and injection blocking coming soon.
Provider Setup
Route requests to Gemini, OpenAI, Ollama, or any OpenAI-compatible endpoint by setting two environment variables.
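As a sketch, a Gemini setup might look like this in .env. PROMPTSHIELD_PROVIDER appears in the quickstart; the API-key variable name below is an assumption, so check .env.example for the exact name:

```
PROMPTSHIELD_PROVIDER=gemini
# Key variable name is a guess; see .env.example for the real one
GEMINI_API_KEY=your-key-here
```

Swapping providers should then be a matter of changing these two values and restarting the proxy.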