Local Quick Start (basebox.local)

This is the shortest evaluation path for basebox on a prepared single-node Kubernetes cluster.

Use this mode for demos, validation, internal testing, and first local installs.

Not for Production

Please note that the quick start installation is not intended for production use.

See Using Helm Charts for a production grade installation.

What This Installs

  • one hostname: basebox.local
  • local/self-managed TLS mode
  • automatic bootstrap secrets
  • automatic database clusters
  • automatic OIDC bootstrap
  • default local storage class: local-path

This is a quick-start topology, not a production topology.

For public domains, cert-manager, or customer-provided TLS, use Using Helm Charts.

Prerequisites

  • Kubernetes cluster already prepared
  • kubectl and helm installed on your workstation
  • CloudNativePG operator installed on the cluster
  • ingress controller installed on the cluster
  • NVIDIA GPU support installed on the cluster for inference and RAG services
  • the workstation browser resolves basebox.local to your ingress IP or node IP
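The prerequisites above can be sanity-checked before installing. A minimal preflight sketch follows; the CloudNativePG CRD name is standard, but the GPU resource column assumes the NVIDIA device plugin exposes nvidia.com/gpu:

```shell
# Tools on the workstation
command -v kubectl >/dev/null || echo "kubectl missing"
command -v helm >/dev/null || echo "helm missing"

# CloudNativePG registers the clusters.postgresql.cnpg.io CRD when installed
kubectl get crd clusters.postgresql.cnpg.io >/dev/null || echo "CloudNativePG not installed"

# At least one ingress class must exist for the basebox.local ingress
kubectl get ingressclass

# GPU capacity (assumes the NVIDIA device plugin exposes nvidia.com/gpu)
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
```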

Step 1: Set Kubeconfig

export KUBECONFIG="<PATH_TO_KUBECONFIG>"
kubectl cluster-info
kubectl get nodes -o wide

Step 2: Point basebox.local to the Cluster

On the workstation where you open the browser, add an /etc/hosts entry:

sudo sh -c 'printf "\n<INGRESS_OR_NODE_IP> basebox.local\n" >> /etc/hosts'

Verify that the name resolves (ping confirms resolution only, not that the ingress is reachable):

ping basebox.local
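If you are unsure which IP to put in /etc/hosts, the ingress controller's service usually reports it. The service name and namespace below assume a standard ingress-nginx install; adjust for your environment:

```shell
# External / load-balancer IP of the ingress controller (assumed: ingress-nginx)
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' && echo

# Fallback for bare-metal single-node setups: use the node's internal IP
kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}' && echo
```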

Step 3: Optional OCI Registry Login

helm registry login gitea.basebox.health -u pacman
# If this prompts for a password, use ee4d3d1bad18cae07a1817701d2281f6fbf8aa2f

If anonymous OCI pull works in your environment, this step is optional.

Step 4: Install basebox

helm upgrade --install basebox oci://gitea.basebox.health/basebox-distribution/helm/basebox.ai \
  --version 0.3.13 \
  -n basebox \
  --create-namespace \
  --wait \
  --timeout 120m \
  --set global.domain=basebox.local \
  --set global.tls.mode=local \
  --set global.tls.secretName=basebox-local-tls

What this does:

  • creates the basebox namespace
  • creates bootstrap credentials automatically
  • creates the local TLS secret automatically if it does not already exist
  • installs the full basebox stack
  • configures OIDC for https://basebox.local

Step 5: Wait for Pods and Bootstrap Job

kubectl -n basebox get pods
kubectl -n basebox wait --for=condition=complete job/idp-keycloak-bootstrap --timeout=10m

Expected:

  • service pods become Running
  • idp-keycloak-bootstrap completes successfully

Step 6: Get Login Credentials

Primary UI login:

kubectl -n basebox get secret basebox-admin-secret \
  -o jsonpath='{.data.ADMIN_EMAIL}' | base64 -d && echo

kubectl -n basebox get secret basebox-admin-secret \
  -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d && echo

Keycloak admin password:

kubectl -n basebox get secret keycloak-admin-secret \
  -o jsonpath='{.data.KEYCLOAK_ADMIN_PASSWORD}' | base64 -d && echo

Unless overridden at install time, the default admin email is:

admin@basebox.local
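The commands above pull base64-encoded values out of Kubernetes secrets and decode them. The decode pattern itself can be sanity-checked without a cluster:

```shell
# Kubernetes secrets store values base64-encoded; `base64 -d` recovers the plaintext
printf 'admin@basebox.local' | base64
# -> YWRtaW5AYmFzZWJveC5sb2NhbA==
printf 'YWRtaW5AYmFzZWJveC5sb2NhbA==' | base64 -d && echo
# -> admin@basebox.local
```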

Step 7: Smoke Checks

kubectl -n basebox get ingress

curl -k -X POST https://basebox.local/graphql \
  -H 'Content-Type: application/json' \
  -H 'X-Realm: primary' \
  --data-binary '{"query":"query { __typename }"}'

Expected response:

{"data":{"__typename":"Query"}}

Open in browser:

https://basebox.local

Certificate Note

By default, local mode creates a self-signed TLS secret automatically.

This is enough for installation and backend OIDC bootstrap, but your browser will show a certificate warning unless you replace the secret with a locally trusted certificate.

If you want a trusted local browser certificate, create basebox-local-tls yourself before install, for example with mkcert. The chart will keep the existing secret and not overwrite it.
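As a sketch, a locally trusted certificate can be created with mkcert before running Step 4. This assumes mkcert is installed on the workstation; the namespace must exist before the secret can be created in it:

```shell
# Install the local mkcert CA into the system/browser trust stores
mkcert -install

# Issue a certificate for basebox.local
# (writes ./basebox.local.pem and ./basebox.local-key.pem)
mkcert basebox.local

# Create the namespace and TLS secret ahead of the Helm install;
# the chart keeps an existing basebox-local-tls secret instead of overwriting it
kubectl create namespace basebox
kubectl -n basebox create secret tls basebox-local-tls \
  --cert=basebox.local.pem \
  --key=basebox.local-key.pem
```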

Troubleshooting

  • https://basebox.local/graphql returns 404: check /etc/hosts and ingress IP alignment
  • inference pod stays Pending: check GPU resource availability on the node
  • browser login page fails after install: check that basebox.local resolves to the same ingress you installed to
  • image pulls fail on cluster nodes: check runtime registry access on the node side
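For each of these, the following kubectl commands are a useful starting point; the label selector for the inference pod is an assumption, so adjust it to the actual pod labels in your install:

```shell
# Confirm which host/IP the basebox ingress expects basebox.local to hit
kubectl -n basebox get ingress -o wide

# Recent events often explain Pending pods and failed image pulls
kubectl -n basebox get events --sort-by=.lastTimestamp | tail -n 20

# For a Pending inference pod, the Events section shows scheduling failures
# (e.g. insufficient nvidia.com/gpu); the label below is a placeholder
kubectl -n basebox describe pod -l app=inference
```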