AWS Company Rollout¶
Use this guide when the question is not just "how do I install agent-bom?" but
"how would a company deploy agent-bom in AWS/EKS for fleet, gateway, proxy, and
control-plane workflows?"
This is a reference rollout path for self-hosted agent-bom on AWS:
- the company platform team still owns the AWS account, VPC, EKS cluster, ingress, cert-manager, and shared controllers
- `agent-bom` owns the product-specific baseline around that platform: Postgres, IRSA, backup bucket, auth secrets, Helm release, fleet onboarding, gateway rollout, and proxy deployment patterns
For the concrete runtime gateway discovery path after fleet and cluster scans have populated remote MCPs, see Gateway Auto-Discovery From the Control Plane.
agent-bom stays one product with two deployable images:
- `agentbom/agent-bom` for the scanner, API, jobs, proxy, gateway, and workers
- `agentbom/agent-bom-ui` for the browser dashboard
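If images flow through an internal registry mirror, pulling both up front confirms they are reachable before any cluster work. No tag is pinned here, so Docker resolves `latest`; pin the release tag your rollout standardizes on.

docker pull agentbom/agent-bom
docker pull agentbom/agent-bom-ui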
Use Deployment Overview first if you still need to choose a path. Use Deploy In Your Own AWS / EKS Infrastructure for the paved production rollout. This page is the deeper AWS reference once that decision is already made.
Reference Entry Points¶
Pilot on one workstation:
curl -fsSL https://raw.githubusercontent.com/msaad00/agent-bom/main/deploy/docker-compose.pilot.yml -o docker-compose.pilot.yml
docker compose -f docker-compose.pilot.yml up -d
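Before treating the pilot as healthy, confirm the containers actually came up; this uses standard Docker Compose subcommands and assumes nothing about the compose file beyond its name.

docker compose -f docker-compose.pilot.yml ps
docker compose -f docker-compose.pilot.yml logs --tail=50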
Full self-hosted AWS / EKS reference rollout:
export AWS_REGION="<your-aws-region>"
scripts/deploy/install-eks-reference.sh \
--create-cluster \
--cluster-name corp-ai \
--region "$AWS_REGION" \
--hostname agent-bom.internal.example.com \
--enable-gateway
If the company already has an EKS platform, reuse it with the same installer:
export AWS_REGION="<your-aws-region>"
scripts/deploy/install-eks-reference.sh \
--cluster-name corp-ai \
--region "$AWS_REGION" \
--hostname agent-bom.internal.example.com \
--enable-gateway
If you want browser operators behind corporate SSO on day 1, add OIDC at install time:
export AWS_REGION="<your-aws-region>"
scripts/deploy/install-eks-reference.sh \
--cluster-name corp-ai \
--region "$AWS_REGION" \
--hostname agent-bom.internal.example.com \
--oidc-issuer https://idp.example.com \
--oidc-audience agent-bom
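Before wiring `--oidc-issuer` into the installer, a sanity check against the issuer's standard OIDC discovery endpoint catches typos early; any compliant IdP serves this well-known metadata document.

curl -fsS https://idp.example.com/.well-known/openid-configuration \
  | python3 -m json.tool | head -20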
What The Installer Owns¶
The reference installer at `scripts/deploy/install-eks-reference.sh` is intentionally opinionated:
- Optionally creates a reference EKS cluster with `eksctl`
- Applies the `agent-bom` AWS baseline Terraform module
- Creates the product secrets needed by the Helm release
- Installs the packaged Helm chart with production profile defaults
- Prints next-step commands for fleet onboarding, gateway rollout, and post-deploy verification
It does not try to replace a customer's full AWS platform stack. Keep these as platform-owned:
- corporate VPC topology and networking policy
- DNS and ingress controller strategy
- cert-manager and certificate issuance
- shared logging, SIEM, and OTLP collectors
- ExternalSecrets controller or other shared secret operators
- organization-wide IAM, SCP, and account guardrails
Deployment Shape¶
flowchart LR
subgraph AWS["Customer AWS account"]
subgraph Platform["Company platform-owned layer"]
VPC["VPC / subnets / route tables"]
EKS["EKS cluster"]
Ingress["Ingress controller + DNS + TLS"]
Controllers["Shared controllers<br/>cert-manager / ExternalSecrets / observability"]
end
subgraph AgentBom["agent-bom product-owned layer"]
TF["AWS baseline<br/>Terraform module"]
RDS["RDS Postgres"]
S3["S3 backup bucket"]
IAM["IRSA roles"]
Secrets["Secrets Manager"]
Helm["agent-bom Helm release"]
API["API/runtime image"]
UI["UI image"]
Jobs["Scan jobs / workers"]
Gateway["Optional gateway"]
end
end
VPC --> EKS
Ingress --> EKS
Controllers --> EKS
TF --> RDS
TF --> S3
TF --> IAM
TF --> Secrets
TF --> Helm
Helm --> API
Helm --> UI
Helm --> Jobs
Helm --> Gateway
API --> RDS
Jobs --> RDS
API --> S3
API --> Secrets
API --> IAM
This is the clean ownership model:
- platform team provides a compliant EKS landing zone
- the `agent-bom` installer wires the product-specific AWS and Kubernetes pieces on top
- security/platform operators onboard endpoints and selected MCP runtimes after the control plane is live
Product Surfaces In A Company EKS Rollout¶
These are the product surfaces a real enterprise rollout usually cares about:
- control plane: API, UI, auth, graph, findings, remediation, audit, policy
- scan and discovery: repos, images, IaC, MCP configs, skills, cluster and cloud surfaces
- fleet: endpoint and collector inventory pushed into the control plane
- gateway: shared remote MCP traffic, policy, and audit in the cluster
- proxy: laptop or sidecar runtime enforcement where inline MCP inspection is needed
Gateway and proxy are core product features. In practice they are deployed
selectively, by workload and traffic path, not to every process in the
environment on day 1.
Runtime Flow For Fleet, Proxy, And Gateway¶
flowchart LR
subgraph Endpoints["Developer laptops / workstations"]
IDE["Cursor / Claude / IDE"]
Proxy["agent-bom proxy"]
Fleet["Fleet sync push"]
end
subgraph Cluster["Customer EKS"]
UI["Browser UI"]
API["Control-plane API"]
GW["Gateway"]
Jobs["Scheduled scans / workers"]
ProxySidecar["Optional proxy sidecars"]
DB["Postgres"]
end
subgraph Upstreams["Remote MCP / registries / cloud APIs"]
MCP["Remote MCP upstreams"]
Cloud["Cloud / repo / image targets"]
end
IDE --> Proxy
Proxy -->|policy pull + audit push| API
Proxy -->|runtime MCP relay| GW
Fleet -->|/v1/fleet/sync| API
GW --> MCP
Jobs --> Cloud
Jobs --> API
ProxySidecar --> GW
API --> DB
GW --> API
UI --> API
What this means in practice:
- developer endpoints can push fleet inventory and use `agent-bom proxy` as a local MCP wrapper (a minimal push sketch follows this list)
- selected MCP workloads in-cluster can run with `agent-bom proxy` sidecars
- the gateway centralizes policy/audit for shared remote upstreams
- the control plane persists findings, graph, fleet state, auth, and audit inside the customer's infrastructure
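The fleet push is an HTTP POST to `/v1/fleet/sync`. The shape below is illustrative only: the bearer-style auth header and the JSON field names are assumptions, not the documented wire format; in practice the collector generated by `agent-bom proxy-bootstrap` builds and pushes the real payload.

# Illustrative only; header scheme and payload fields are assumptions.
curl -fsS -X POST https://agent-bom.internal.example.com/v1/fleet/sync \
  -H "Authorization: Bearer $AGENT_BOM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"hostname": "dev-laptop-01", "agents": []}'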
Scenario Matrix¶
| Company need | Deploy | What becomes visible immediately |
|---|---|---|
| Know which MCPs employees are running | control plane + scans + fleet | endpoints, agents, MCP servers, transports, command or URL, declared tools, auth mode, credential-backed env vars |
| Review risky MCP package exposure | control plane + scans + fleet | package context, vuln context, graph links, blast radius, exposed tools and credentials |
| Govern shared remote MCP traffic | control plane + gateway | shared upstream MCP inventory, gateway policy/audit, remote MCP control plane surfaces |
| Enforce inline on selected workloads | control plane + selected proxy deployment | workload-local runtime evidence, inline blocks, local audit push, selected sidecar/laptop inspection |
| Run the full self-hosted platform | control plane + scans + fleet + selected gateway + selected proxy | one correlated operator plane across discovery, inventory, runtime, graph, findings, and audit |
Recommended Rollout Sequence¶
1. Company platform baseline¶
Start with one of these shapes:
- existing EKS platform: preferred for real companies
- reference EKS cluster from the installer: good for evaluation, pilot, and demos
Before installing agent-bom, confirm the following (a minimal pre-flight sketch follows this list):
- `kubectl` access to the target cluster
- ingress controller strategy is known
- DNS / hostname decision is known, or accept port-forward for first bring-up
- AWS account permissions can create RDS, S3, IAM roles, and Secrets Manager entries
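This sketch uses only stock AWS and Kubernetes tooling to confirm cluster access and the caller identity; nothing here is agent-bom-specific.

kubectl config current-context
kubectl get nodes
aws sts get-caller-identity --query Arn --output text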
2. Product-specific AWS baseline¶
Run the reference installer or the Terraform module directly:
export AWS_REGION="<your-aws-region>"
scripts/deploy/install-eks-reference.sh \
--cluster-name corp-ai \
--region "$AWS_REGION" \
--hostname agent-bom.internal.example.com \
--enable-gateway
The installer now does two important safety checks before it mutates anything:
- verifies the minimum supported `aws`, `kubectl`, `helm`, `eksctl`, and `terraform`/`tofu` versions (the version commands are shown below)
- rejects OIDC installs without a stable `--hostname` same-origin entrypoint
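To run the same tool checks yourself before invoking the installer, the version commands are standard; the minimum supported versions live in the installer script itself.

aws --version
kubectl version --client
helm version --short
eksctl version
terraform version   # or: tofu version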
Under the hood this uses the baseline in `deploy/terraform/aws/baseline` to create:
- RDS Postgres for the control plane
- S3 backup bucket
- IRSA roles for scan and backup jobs
- Secrets Manager containers for DB/auth wiring
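If you would rather drive that module directly instead of through the installer, the flow is ordinary Terraform. The variable name below is a placeholder for illustration, not the module's actual interface; check the module's own variable definitions for the real inputs.

cd deploy/terraform/aws/baseline
terraform init
# "region" is a placeholder variable name, not necessarily the module's input.
terraform plan -var "region=$AWS_REGION" -out baseline.tfplan
terraform apply baseline.tfplan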
3. Helm release on EKS¶
The installer then applies the production Helm profile plus generated overrides:
- UI and API/runtime images behind one same-origin entrypoint
- scheduled jobs and workers
- optional gateway
- auth and DB secrets wired from generated values
For manual control, use the packaged chart directly:
helm upgrade --install agent-bom deploy/helm/agent-bom \
--namespace agent-bom --create-namespace \
-f deploy/helm/agent-bom/examples/eks-production-values.yaml
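Whichever path installs the release, a quick status pass with stock `helm` and `kubectl` confirms the chart landed before moving on to verification:

helm -n agent-bom status agent-bom
kubectl -n agent-bom get pods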
See also: Packaged API + UI Control Plane.
Post-Deploy Verification¶
After install, run the reference verification script before onboarding employees or shared upstream MCPs:
export AWS_REGION="<your-aws-region>"
scripts/deploy/verify-eks-reference.sh \
--cluster-name corp-ai \
--region "$AWS_REGION" \
--namespace agent-bom \
--release agent-bom \
--base-url https://agent-bom.internal.example.com \
--api-key "$AGENT_BOM_API_KEY" \
--check-gateway
That check is intentionally narrow and release-focused:
- refresh kubeconfig for the target cluster
- confirm the Helm release exists
- wait for API and UI rollouts
- optionally verify the gateway rollout
- hit `/healthz`
- confirm the UI root is reachable
- verify `/v1/auth/debug` with the operator API key when provided
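The scripted checks can also be reproduced by hand. The paths come from the list above; the bearer-style auth header is an assumption, so match whatever header scheme your deployment's API keys actually use.

curl -fsS https://agent-bom.internal.example.com/healthz
# Auth header scheme shown here is an assumption, not the documented one.
curl -fsS -H "Authorization: Bearer $AGENT_BOM_API_KEY" \
  https://agent-bom.internal.example.com/v1/auth/debug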
4. Endpoint and runtime onboarding¶
After the control plane is live, onboard people and workloads, not just pods:
- use `agent-bom proxy-bootstrap` to generate endpoint onboarding bundles
- package the generated bundle into `.pkg`/`.msi` artifacts when IT needs managed rollout instead of ad hoc shell execution
- point MCP clients at the local `agent-bom proxy` wrapper
- enable fleet sync for workstation visibility
- add proxy sidecars only to workloads that need inline MCP policy enforcement
- enable the gateway when you need shared upstream policy and audit
Typical endpoint bootstrap:
agent-bom proxy-bootstrap \
--bundle-dir ./agent-bom-endpoint-bundle \
--control-plane-url https://agent-bom.internal.example.com \
--control-plane-token <api-key> \
--push-url https://agent-bom.internal.example.com/v1/fleet/sync \
--push-api-key <api-key>
For packaged endpoint rollout, reuse that same generated bundle:
bash scripts/build-pkg.sh \
--bundle-dir ./agent-bom-endpoint-bundle \
--output ./dist/agent-bom-endpoint.pkg \
--dry-run
./scripts/build-msi.ps1 `
-BundleDir .\agent-bom-endpoint-bundle `
-OutputPath .\dist\agent-bom-endpoint.msi `
-DryRun
The bundle also ships Jamf, Kandji, and Intune wrapper scripts plus a
Homebrew formula renderer for organizations that distribute agent-bom
through a managed tap instead of direct package upload.
python3 scripts/render_homebrew_formula.py \
--version 0.85.0 \
--url https://github.com/msaad00/agent-bom/archive/refs/tags/v0.85.0.tar.gz \
--sha256 <release-sha256>
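From there, installing from the rendered formula is standard Homebrew; the `agent-bom.rb` filename is an assumption about the renderer's output, useful mainly as a local smoke test before publishing to a managed tap.

# Assumes the renderer wrote agent-bom.rb to the current directory.
brew install --formula ./agent-bom.rb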
What Operators Get After Deploy¶
The goal is not just "pods are running." The goal is one coherent operator plane:
- `/` for findings, graph, remediation, and operator workflows
- `/fleet` for workstation and collector inventory
- `/audit` for signed audit and auth workflows
- `/gateway` and runtime views for policy/audit surfaces when enabled
- one deployment story for pilot and production instead of two unrelated stacks
Dry-Run And Ownership Notes¶
The reference installer supports `--dry-run` so teams can see the generated
Terraform root, Helm values, and operator summary before any apply:
export AWS_REGION="<your-aws-region>"
scripts/deploy/install-eks-reference.sh \
--cluster-name corp-ai \
--region "$AWS_REGION" \
--hostname agent-bom.internal.example.com \
--enable-gateway \
--dry-run
Use that mode when:
- security wants to review what the installer owns
- platform wants to compare the reference shape to their internal landing zone
- you want to adapt the installer into your own wrappers without changing the product model
Recommended Positioning¶
Use this wording consistently:
`agent-bom` is one self-hosted control plane for AI and MCP supply-chain security. In AWS/EKS, the company platform owns the cluster and shared controllers; `agent-bom` owns the product-specific baseline, Helm release, fleet onboarding, and optional gateway/runtime surfaces.
That keeps the architecture honest and the deployment story simple.