Spendra is a financial governance platform for AI operations. It sits between company AI clients and paid provider APIs, reserves budget before requests reach a provider, records settled usage into a ledger, and exposes operational views for finance, platform, and engineering leaders. These docs are written for technical evaluators and implementation owners planning a governed Spendra deployment.
Documentation Index
Fetch the complete documentation index at: https://docs.cynsta.com/llms.txt
Use this file to discover all available pages before exploring further.
What Spendra governs
Spendra V1 governs traffic that passes through the Spendra gateway. The gateway is OpenAI-compatible for OpenAI Responses, Files, and Chat Completions workloads, so most OpenAI client integrations keep their existing SDK and change only the base URL and API key. Spendra also supports provider-specific Chat Completions routing for OpenAI, OpenRouter, Google Gemini API, Vertex AI, Azure OpenAI, and Anthropic through /v1/providers/{provider}/chat/completions.
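To illustrate the "change only the base URL and API key" point, here is a minimal sketch of an OpenAI-compatible Chat Completions request aimed at the gateway, built with the Python standard library. The base URL, model name, and key value are placeholders, not values published by Spendra; substitute your deployment's gateway URL and a Spendra-scoped key.

```python
import json
import urllib.request

# Hypothetical values -- replace with your deployment's gateway URL
# and a Spendra-scoped API key.
SPENDRA_BASE_URL = "https://spendra.example.com/v1"
SPENDRA_API_KEY = "sk-spendra-example"

# A standard OpenAI-style Chat Completions payload; nothing here is
# Spendra-specific except where the request is sent.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Build (but do not send) the request, to show the only two changes
# versus a direct OpenAI integration: the URL and the bearer token.
request = urllib.request.Request(
    url=f"{SPENDRA_BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {SPENDRA_API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
```

An existing OpenAI SDK integration would make the equivalent change by pointing the client's `base_url` at the gateway and supplying the scoped key.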
The platform tracks:
- Scoped API keys for employees, agents, and automation surfaces.
- Role capabilities for finance, IAM, platform, management, employee, and audit workflows.
- Provider-aware model scopes across OpenAI, OpenRouter, Google Gemini API, Vertex AI, Azure OpenAI, and Anthropic chat traffic.
- Hard-cap and soft-cap policies.
- Atomic budget reservations before provider calls.
- Settled spend events and booked ledger entries.
- Audit records for key, policy, budget, role, and organization changes.
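The reserve-then-settle flow behind the "atomic budget reservations" and "settled spend events" items above can be sketched in a few lines. This is an illustrative in-memory model, not Spendra's implementation: all class and method names are hypothetical, and the real platform persists reservations and ledger entries in Postgres.

```python
class BudgetSketch:
    """Hypothetical sketch of hard-cap enforcement via reserve-then-settle.
    Amounts are integer cents to avoid floating-point rounding."""

    def __init__(self, hard_cap_cents: int) -> None:
        self.hard_cap_cents = hard_cap_cents
        self.reserved_cents = 0  # held for in-flight provider calls
        self.settled_cents = 0   # booked into the ledger

    def reserve(self, estimate_cents: int) -> int:
        # Hard caps block spend *before* any provider traffic is sent.
        committed = self.settled_cents + self.reserved_cents
        if committed + estimate_cents > self.hard_cap_cents:
            raise RuntimeError("hard cap exceeded; request blocked")
        self.reserved_cents += estimate_cents
        return estimate_cents

    def settle(self, reserved_cents: int, actual_cents: int) -> None:
        # Release the reservation and book the actual usage.
        self.reserved_cents -= reserved_cents
        self.settled_cents += actual_cents
```

For example, with a 150-cent cap, reserving 100 cents and settling 80 leaves 70 cents of headroom, so a second 100-cent reservation is rejected before it reaches a provider.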
Runtime components
Spendra ships as three application processes plus a database/Auth project:
- Web dashboard: Next.js operational UI for finance, platform, and management workflows.
- API and gateway: Fastify service for management APIs, OpenAI-compatible gateway routes, and provider-specific chat routing.
- Worker: background processor for rollups, outbox processing, alerts, and reconciliation.
- Database/Auth: Postgres as the system of record with Supabase Auth in the managed V1 deployment model.
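The four runtime pieces above map naturally onto a container layout. The following Docker Compose fragment is a sketch only: the image names, ports, and environment variables are placeholders for illustration, not Spendra's published artifacts.

```yaml
# Illustrative only -- image names, ports, and env vars are placeholders.
services:
  web:
    image: spendra/web            # Next.js operational dashboard
    ports: ["3000:3000"]
  api:
    image: spendra/api            # Fastify management APIs + gateway routes
    ports: ["8080:8080"]
    depends_on: [db]
  worker:
    image: spendra/worker         # rollups, outbox, alerts, reconciliation
    depends_on: [db]
  db:
    image: postgres:16            # system of record
    environment:
      POSTGRES_PASSWORD: change-me
```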
Evaluation checklist
Before a pilot, confirm:
- Which traffic must be governed by Spendra.
- Which provider account owns upstream AI spend.
- Which teams, projects, employees, and agents need scoped keys.
- Which hard caps must block spend before provider traffic.
- Which finance users need ledger, audit, and export access.
- Which environment will host the web, API, worker, database, and secrets.
Documentation sections
- Architecture: how requests, reservations, settlement, ledgering, and workers fit together.
- On-prem deployment: deployment prerequisites, runtime boundaries, secrets, database/Auth, and health checks.
- Gateway integration: OpenAI-compatible client setup and supported routes.
- Governance model: organizations, roles, hierarchy, policies, budgets, allowlists, audit, and ledger semantics.
- Operations: migrations, backups, monitoring, verification, and troubleshooting.