Sub-processors
Last updated: 5 May 2026
Stellan Compliance uses the following third-party services to operate the platform. Each one is bound by a Data Processing Agreement (DPA) with terms equivalent to or stricter than our own commitments to you.
We commit to giving paid customers at least 30 days' advance notice by email before adding or replacing any sub-processor. Demo-tier users are notified by an updated date stamp on this page.
Current sub-processors
| Service | Purpose | Region | Data shared | DPA |
|---|---|---|---|---|
| Neon | PostgreSQL database (application data, embeddings) | Frankfurt — eu-central-1 | Tenant metadata, documents, sign-offs, audit log, vector embeddings | Link |
| Supabase | Authentication and session management | Frankfurt — eu-central-1 | Email address, hashed password, login timestamps, IP address | Link |
| Vercel | Application hosting (Next.js) | Anycast edge; SSR pinned to Frankfurt | Request logs (no body), deployment artifacts | Link |
| Anthropic | Large language model inference (Compliance + Writing agents) | United States (SCCs); EU via Bedrock Frankfurt for paid tier | Document text and prompts during agent runs; zero-retention enterprise terms | Link |
| OpenAI | Embeddings (text-embedding-3-small) for semantic search | United States (SCCs); EU via Azure West Europe for paid tier | Document text chunks during ingestion; training disabled by default on API | Link |
| Loopia AB | Transactional email (SMTP) | Sweden | Recipient email address, message subject and body | Link |
What about customer storage?
Stellan's production architecture is zero-storage / bring-your-own-bucket — paid customers connect their own GDrive, SharePoint, S3, or Azure Blob, and master document binaries never leave their infrastructure. Those storage providers are not Stellan sub-processors; they are the customer's own existing vendor relationships.
On the demo tier, document text is cached locally in our Neon database (listed above) so that prospects can evaluate the product without wiring up a real connector.
What about LLM training?
Anthropic and OpenAI are configured with training explicitly disabled. Anthropic's enterprise terms provide for zero retention; the OpenAI API does not train on submitted content by default. No customer content is used to train any third-party model.
BYO-LLM (paid tier)
Paid tenants can opt into bring-your-own-LLM, in which case Stellan calls the model using the customer's own API key, under the customer's existing relationship with that AI vendor. In that mode the LLM provider is no longer a Stellan sub-processor — it is the customer's direct vendor, already covered by the customer's own DPA.
Contact
Questions about sub-processors: privacy@stellan.app.