Grove's execution model is predictable by design: each workflow run takes an explicit input, traverses a directed graph of nodes, and produces an explicit output. The graph is defined up front and cannot change mid-run. Every data dependency is visible in the definition. For non-conversational workloads — extraction, classification, enrichment, multi-step analysis — there is no implicit state between runs.
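The fixed-graph model above can be sketched in a few lines. This is an illustrative toy, not Grove's actual API: the `Node` class, `run_workflow` function, and the two-step pipeline are all hypothetical, chosen only to show that every dependency is declared up front and a node sees nothing but its declared inputs.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Node:
    # A node declares its dependencies explicitly; there is no other way
    # for data to reach it. (Hypothetical sketch, not Grove's real API.)
    name: str
    fn: Callable[..., Any]
    deps: List[str] = field(default_factory=list)

def run_workflow(nodes: List[Node], inputs: Dict[str, Any]) -> Dict[str, Any]:
    # Execute the fixed DAG in dependency order. No hidden state: the only
    # values in scope are the run's inputs and prior nodes' outputs.
    results: Dict[str, Any] = dict(inputs)
    resolved = set(inputs)
    pending = list(nodes)
    while pending:
        ready = [n for n in pending if all(d in resolved for d in n.deps)]
        if not ready:
            raise ValueError("cycle or missing dependency in graph")
        for n in ready:
            results[n.name] = n.fn(*(results[d] for d in n.deps))
            resolved.add(n.name)
            pending.remove(n)
    return results

# Example: a two-step extraction -> classification pipeline.
out = run_workflow(
    [Node("extract", lambda doc: doc.split(), deps=["doc"]),
     Node("classify", lambda toks: "long" if len(toks) > 3 else "short",
          deps=["extract"])],
    {"doc": "invoice from acme corp"},
)
# out["classify"] -> "long"
```

Because the graph is fixed before the run starts, the set of data flows is fully auditable from the definition alone.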
Air-Gap Compatible
Grove runs as a self-contained Kubernetes deployment within your VPC. The orchestration engine, database, and all workflow state remain inside your infrastructure perimeter.
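A self-contained deployment of this shape might look like the fragment below. This is an illustrative sketch only: the names, namespace, image path, and environment variable are placeholders, not Grove's published manifests.

```yaml
# Hypothetical manifest sketch. Image is pulled from an in-VPC registry;
# state lives in an in-cluster database, so nothing crosses the perimeter.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grove-engine
  namespace: grove
spec:
  replicas: 2
  selector:
    matchLabels: {app: grove-engine}
  template:
    metadata:
      labels: {app: grove-engine}
    spec:
      containers:
        - name: engine
          image: registry.internal/grove/engine:1.0
          env:
            - name: DATABASE_URL   # in-cluster Postgres; workflow state never leaves
              value: postgres://grove-db.grove.svc:5432/grove
```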
Your LLM Keys, Your Contracts
API keys for LLM providers are passed per-request via HTTP headers. You use your own enterprise agreements with Anthropic, OpenAI, or Google. We are not a party to the LLM inference chain — your data flows directly from your infrastructure to your provider.
Self-Hosted Model Support
For workflows processing the most sensitive data, route specific nodes to self-hosted models running within your VPC. Different nodes in the same workflow can use different providers — cloud APIs for non-sensitive tasks, self-hosted models for everything else.
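Per-node routing can be pictured as a table from node name to provider. The node names, provider labels, model name, and endpoint below are all hypothetical; the sketch shows one safe design choice: unrouted nodes fall back to the self-hosted default, so data only leaves the VPC when a node is explicitly routed out.

```python
# Illustrative routing table; not Grove's actual configuration schema.
ROUTES = {
    "summarize_public_docs": {"provider": "anthropic",
                              "model": "claude-sonnet-4"},
    "extract_patient_fields": {"provider": "self_hosted",
                               "endpoint": "http://llm.internal.svc:8000"},
}

def resolve_route(node: str) -> dict:
    # Default-deny posture: anything not explicitly routed to a cloud API
    # stays on the in-VPC model.
    return ROUTES.get(node, {"provider": "self_hosted"})
```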
No Implicit State
A workflow run is self-contained. It receives its inputs, executes its nodes, and returns its outputs. Nothing carries over to the next run except the persisted audit trail. Conversational state is an optional, opt-in layer — most workflows do not use it at all.
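The stateless contract can be sketched as a pure function plus an append-only log. This is a toy model, not Grove's implementation: the `run` helper and `audit_trail` list are hypothetical, but they capture the guarantee that runs never read each other's state, so identical inputs yield identical outputs.

```python
# Hypothetical sketch: each run is a pure function of its inputs. The only
# persistence is the audit record, which is written but never read back.
audit_trail: list = []

def run(workflow, inputs: dict) -> dict:
    outputs = workflow(inputs)
    audit_trail.append({"inputs": inputs, "outputs": outputs})
    return outputs

double = lambda x: {"y": x["x"] * 2}
first = run(double, {"x": 2})
second = run(double, {"x": 2})
# first == second: nothing carried over between the two runs.
```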