Local Testing
The demo can be run on a local cluster created with either of the following tools:

- Kind
- Minikube

The demo requires access to Mission Control via an ingress configuration. To route traffic into the kind cluster, the cluster must be created with port bindings to the host.
kind.config
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
  - containerPort: 443
    hostPort: 8443
    protocol: TCP
```
A single-node cluster is provisioned, hosting both the control plane and the workloads. Configure the hostPort bindings to free ports on the host, in this case 8080 and 8443.
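If you are not sure whether those ports are free, one way to check (assuming lsof is available; the port numbers are simply the ones chosen above) is:

```bash
# Check whether anything is already listening on the chosen host ports
lsof -nP -iTCP:8080 -sTCP:LISTEN
lsof -nP -iTCP:8443 -sTCP:LISTEN
```

If either command prints a process, pick different hostPort values in kind.config.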
Provision the kind cluster with:

```bash
kind create cluster --config kind.config
```

Verify the cluster is running with:

```bash
kubectl get nodes
```
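The node can take a short while to become Ready; if you want to block until it is (an optional convenience, not required by the demo), you can use:

```bash
# Wait until every node in the cluster reports the Ready condition
kubectl wait --for=condition=Ready node --all --timeout=120s
```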
Install the ingress-nginx controller with:

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
```

Confirm that the ingress controller pod is running with:

```bash
kubectl get pods -n ingress-nginx
```
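To wait for the controller to become ready before continuing, the label selector below assumes the manifest applied above, which labels the controller pod with app.kubernetes.io/component=controller:

```bash
# Block until the ingress-nginx controller pod is Ready
kubectl wait --namespace ingress-nginx \
  --for=condition=Ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```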
Alternatively, for Minikube, start the cluster and enable the ingress addon:

```bash
minikube start
minikube addons configure ingress
minikube addons enable ingress
```
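The addon runs the same ingress-nginx controller; as a quick check (recent Minikube releases deploy it into the ingress-nginx namespace, older ones used kube-system):

```bash
# Confirm the addon is enabled and its controller pod is running
minikube addons list | grep ingress
kubectl get pods -n ingress-nginx
```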
nip.io is a wildcard DNS service that resolves any hostname containing an IP address to that IP, e.g.

```console
❯ nslookup 127.0.0.1.nip.io
Address: 127.0.0.1
```

By using nip.io you can access Mission Control without any further networking or DNS configuration.
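The 127.0.0.1 address assumes the kind setup above, where the ingress ports are mapped to localhost. If you are running Minikube instead, the cluster has its own IP, so you would substitute that into the nip.io hostname, for example:

```bash
# On Minikube the ingress is reached via the cluster IP rather than localhost
echo "$(minikube ip).nip.io"
```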
Create a values.yaml that sets the UI host to the nip.io address and reduces the resource requests to fit on a small local cluster:

values.yaml
```yaml
global:
  ui:
    host: 127.0.0.1.nip.io

resources:
  requests:
    cpu: 10m
    memory: 100Mi

db:
  conf:
    shared_buffers: 128MB
    work_mem: 10MB
  resources:
    requests:
      cpu: 10m
      memory: 256Mi

canary-checker:
  resources:
    requests:
      cpu: 10m
      memory: 128Mi

config-db:
  resources:
    requests:
      cpu: 10m
      memory: 128Mi
```
Add the Flanksource helm repository and install the chart:

```bash
helm repo add flanksource https://flanksource.github.io/charts
helm repo update

helm install mission-control \
  flanksource/mission-control \
  -n mission-control \
  --create-namespace \
  --wait \
  -f values.yaml
```
See values.yaml for more options.
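The --wait flag blocks until the workloads are up, but as a sanity check you can list the pods in the namespace created above:

```bash
# All mission-control pods should eventually reach the Running state
kubectl get pods -n mission-control
```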
The default username is admin@local and the password can be retrieved with:

```bash
kubectl get secret mission-control-admin-password \
  -n mission-control \
  --template='{{.data.password | base64decode}}'
```
You can then go to https://127.0.0.1.nip.io:8443 to log in.

This example uses a self-signed certificate created by nginx; for anything beyond local testing we recommend using cert-manager.io.
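As a rough sketch only, and assuming cert-manager is already installed in the cluster (it is not part of this demo), a minimal self-signed ClusterIssuer that an ingress could reference looks like this:

```bash
# Hypothetical example: a ClusterIssuer that issues self-signed certificates.
# Assumes the cert-manager CRDs are already installed.
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
EOF
```

An ingress would then reference the issuer via the cert-manager.io/cluster-issuer annotation; see the cert-manager documentation for production-grade issuers such as ACME/Let's Encrypt.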
Create a file containing canary definitions, for example:

canaries.yaml
```yaml
apiVersion: canaries.flanksource.com/v1
kind: Canary
metadata:
  name: http
spec:
  interval: 30
  http:
    - url: https://httpstat.us/200
      name: 'httpstat healthy'
```
and apply it to the cluster with:

```bash
kubectl apply -f canaries.yaml
```

Navigate to the Health tab to see the check running.
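You can also check the status from the command line; the Canary resource records the result of its checks, so (assuming the canary was applied to your current namespace as above) the following should list it:

```bash
# List Canary resources and their latest check status
kubectl get canaries
```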