Why Compose, not Kubernetes
Though Compose isn't designed for massive-scale orchestration, the applications hosted by most institutions rarely require more than modest scaling. The real advantage of Compose is the developer experience. Because the exact same orchestration runs in both development and production, with only minor environmental tweaks, you can reliably mirror production on your local machine. This provides built-in deployment safety long before your CI pipeline runs a single test. We could have spent resources building Kubernetes operators for various LAC-GLAM stacks instead of creating sitectl. But sitectl was a deliberate choice: it lets institutions adopt open-source projects without hiring a Kubernetes administrator or absorbing the heavy operational overhead of a Kubernetes cluster. The goal was to keep infrastructure complexity from blocking adoption of open-source software.

Why not just use Docker Contexts?
While Docker's native context feature handles basic Docker daemon connections, sitectl is purpose-built for Compose projects and adds:
Remote operations
SFTP file operations and clearer SSH error handling beyond what Docker’s own context system exposes.
Container utilities
General helpers to resolve service names to containers, extract secrets and env vars for exec commands, and inspect container network details.
Compose-first design
Automatically sets the equivalent of DOCKER_HOST, COMPOSE_PROJECT_NAME, COMPOSE_FILE, and COMPOSE_ENV_FILES from the active sitectl context.

Plugin model
Plugins extend sitectl without requiring changes to the core binary. The core binary discovers plugins by name convention (sitectl-<plugin> on $PATH) and delegates commands to them. This means:
- Stack-specific logic stays in the plugin, not in core
- Plugins can be installed and upgraded independently
- Plugins can include other plugins — ISLE includes Drupal, so operators of ISLE sites get Drupal commands automatically
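As a rough sketch (not the actual sitectl source), discovery by name convention amounts to joining the `sitectl-` prefix to the plugin name, looking it up on $PATH, and re-executing with stdio wired through so the plugin behaves like a built-in subcommand:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// pluginBinary maps a plugin name to the binary name the core looks
// for on $PATH, per the sitectl-<plugin> convention.
func pluginBinary(name string) string {
	return "sitectl-" + name
}

// delegate looks up the plugin binary and, if found, runs it with the
// remaining arguments, passing stdio through to the plugin process.
func delegate(plugin string, args []string) error {
	path, err := exec.LookPath(pluginBinary(plugin))
	if err != nil {
		return fmt.Errorf("unknown command %q: no %s on $PATH", plugin, pluginBinary(plugin))
	}
	cmd := exec.Command(path, args...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Demonstrates only the naming convention, not a real dispatch.
	fmt.Println(pluginBinary("isle")) // prints "sitectl-isle"
}
```

Because lookup happens at invocation time, installing or upgrading a plugin binary is enough for the core to pick it up; no re-registration step is needed.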
SDK runner interfaces
Where a core sitectl command needs plugin-specific behavior, the SDK defines a typed runner interface. The plugin implements the interface, registers it, and the SDK generates the hidden protocol command. This gives plugin authors a structured, type-safe extension point rather than requiring them to write raw Cobra commands. Current runner interfaces:

| Interface | Registered by | Hidden command | User command |
|---|---|---|---|
| DebugHandler | RegisterDebugHandler | __debug | sitectl debug |
| DeployRunner | RegisterDeployRunner | __deploy | sitectl deploy |
| ConvergeRunner | RegisterConvergeRunner | __converge | sitectl converge |
| SetRunner | RegisterSetRunner | __set | sitectl set |
| ValidateRunner | RegisterValidateRunner | __validate | sitectl validate |
sitectl validate is distinct from the others: core runs its own validators first, then captures the plugin’s __validate output (YAML-encoded results), and merges everything before rendering. The plugin does not render; it just returns structured data.
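A minimal sketch of that shape, with the YAML encode/decode across the process boundary elided and the Result fields invented for illustration (the real SDK's types will differ):

```go
package main

import "fmt"

// Result is a hypothetical validation result; the real SDK's fields
// may differ.
type Result struct {
	Name    string
	Passed  bool
	Message string
}

// ValidateRunner is the typed extension point: the plugin returns
// structured data and never renders it itself.
type ValidateRunner interface {
	Validate() ([]Result, error)
}

var validateRunner ValidateRunner

// RegisterValidateRunner is the registration hook; in the real SDK,
// registering also generates the hidden __validate protocol command.
func RegisterValidateRunner(r ValidateRunner) { validateRunner = r }

// runValidate shows the merge order described above: core validators
// run first, then the plugin's structured results are appended, and
// only the merged set is rendered.
func runValidate(coreResults []Result) ([]Result, error) {
	merged := append([]Result{}, coreResults...)
	if validateRunner != nil {
		pluginResults, err := validateRunner.Validate()
		if err != nil {
			return nil, err
		}
		merged = append(merged, pluginResults...)
	}
	return merged, nil
}

// drupalValidator is a toy plugin-side implementation.
type drupalValidator struct{}

func (drupalValidator) Validate() ([]Result, error) {
	return []Result{{Name: "settings.php", Passed: true}}, nil
}

func main() {
	RegisterValidateRunner(drupalValidator{})
	merged, _ := runValidate([]Result{{Name: "compose file", Passed: true}})
	fmt.Println(len(merged)) // prints 2: one core result, one plugin result
}
```

Keeping rendering in core means every validator's output, core or plugin, is formatted consistently for the operator.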
Component model
The component model was designed to solve two problems:

- Initial setup: making it easy to start a site with the right capabilities enabled, without manually editing files
- Incremental adoption: existing sites can adopt new upstream capabilities by turning a component on, rather than hand-editing files and hoping nothing breaks
The sitectl set and sitectl converge commands are the operator-facing surface for the component model. sitectl component describe, sitectl component set, and sitectl component reconcile remain as lower-level commands and as the hidden protocol layer that the top-level commands delegate to.
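The delegation can be pictured as a simple lookup from the operator-facing command to its lower-level counterpart. The pairing below is my inference from the command names (set to component set, converge to component reconcile), not something the text confirms:

```go
package main

import "fmt"

// delegation is an assumed mapping from operator-facing commands to the
// lower-level component commands they delegate to; the exact pairing is
// inferred from the command names and may not match sitectl exactly.
var delegation = map[string]string{
	"set":      "component set",
	"converge": "component reconcile",
}

// lowerLevel resolves a top-level command to its delegate, reporting
// whether the command participates in the component model at all.
func lowerLevel(cmd string) (string, bool) {
	target, ok := delegation[cmd]
	return target, ok
}

func main() {
	for _, cmd := range []string{"set", "converge"} {
		if target, ok := lowerLevel(cmd); ok {
			fmt.Printf("sitectl %s -> sitectl %s\n", cmd, target)
		}
	}
}
```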
See Component development for how to define a new component.
