Sub-processors

Last updated: 5 May 2026

Stellan Compliance uses the following third-party services to operate the platform. Each one is bound by a Data Processing Agreement (DPA) with terms equivalent to or stricter than our own commitments to you.

We commit to giving paid customers at least 30 days' advance notice by email before adding or replacing any sub-processor. Demo-tier users are notified by an updated date stamp on this page.

Current sub-processors

| Service | Purpose | Region | Data shared | DPA |
| --- | --- | --- | --- | --- |
| Neon | PostgreSQL database (application data, embeddings) | Frankfurt (eu-central-1) | Tenant metadata, documents, sign-offs, audit log, vector embeddings | Link |
| Supabase | Authentication and session management | Frankfurt (eu-central-1) | Email address, hashed password, login timestamps, IP address | Link |
| Vercel | Application hosting (Next.js) | Anycast edge; SSR pinned to Frankfurt | Request logs (no body), deployment artifacts | Link |
| Anthropic | Large language model inference (Compliance + Writing agents) | United States (SCCs); EU via Bedrock Frankfurt for paid tier | Document text and prompts during agent runs; zero-retention enterprise terms | Link |
| OpenAI | Embeddings (text-embedding-3-small) for semantic search | United States (SCCs); EU via Azure West Europe for paid tier | Document text chunks during ingestion; training disabled by default on API | Link |
| Loopia AB | Transactional email (SMTP) | Sweden | Recipient email address, message subject and body | Link |

What about customer storage?

Stellan's production architecture is zero-storage / bring-your-own-bucket: paid customers connect their own Google Drive, SharePoint, S3, or Azure Blob storage, and master document binaries never leave their infrastructure. Those storage providers are not Stellan sub-processors; they are the customer's own existing vendor relationships.

On the demo tier, document text is cached locally in our Neon database (listed above) so that prospects can evaluate the product without wiring up a real connector.

What about LLM training?

Anthropic and OpenAI are configured with training explicitly disabled. Anthropic's enterprise terms provide zero retention, and training on OpenAI API traffic is off by default. No customer content is used to train any third-party model.

BYO-LLM (paid tier)

Paid tenants can opt into bring-your-own-LLM, in which case Stellan calls the LLM using the customer's own API key against their existing AI vendor relationship. In that mode the LLM provider is no longer a Stellan sub-processor — it is the customer's direct vendor, already covered by their own DPA.

Contact

Questions about sub-processors: privacy@stellan.app.