
Installation Guide

The Voyager operator can be installed via a script or as a Helm chart.

Using Helm 3

Voyager can be installed via Helm 3.x or later versions using the chart from the AppsCode Charts Repository. To install the chart with the release name voyager-operator:

$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm search repo appscode/voyager --version v12.0.0
NAME              CHART VERSION APP VERSION DESCRIPTION
appscode/voyager  v12.0.0    v12.0.0  Voyager by AppsCode - Secure HAProxy Ingress Controller...

# Set the provider variable to match your cluster; supported values:
# provider=acs
# provider=aks
# provider=aws
# provider=azure
# provider=baremetal
# provider=gce
# provider=gke
# provider=minikube
# provider=openstack
# provider=metallb
# provider=digitalocean
# provider=linode

$ helm install voyager-operator appscode/voyager --version v12.0.0 \
  --namespace kube-system \
  --set cloudProvider=$provider
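Once the install command completes, you can confirm the release with Helm. The release and namespace names below match the command above.

```shell
# Show the status of the voyager-operator release (Helm 3 syntax)
$ helm status voyager-operator --namespace kube-system

# Or list all releases in the kube-system namespace
$ helm ls --namespace kube-system
```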

To see the detailed configuration options, visit here.

Using Helm 2

Voyager can be installed via Helm 2.9.x or later versions using the chart from the AppsCode Charts Repository. To install the chart with the release name voyager-operator:

$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm search appscode/voyager --version v12.0.0
NAME              CHART VERSION APP VERSION DESCRIPTION
appscode/voyager  v12.0.0    v12.0.0  Voyager by AppsCode - Secure HAProxy Ingress Controller...

# Set the provider variable to match your cluster; supported values:
# provider=acs
# provider=aks
# provider=aws
# provider=azure
# provider=baremetal
# provider=gce
# provider=gke
# provider=minikube
# provider=openstack
# provider=metallb
# provider=digitalocean
# provider=linode

$ helm install appscode/voyager --name voyager-operator --version v12.0.0 \
  --namespace kube-system \
  --set cloudProvider=$provider

To see the detailed configuration options, visit here.

Using YAML

If you prefer not to use Helm, you can generate YAMLs from the Voyager operator chart and deploy them using kubectl. Here we are going to show the procedure using Helm 3.

$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm search repo appscode/voyager --version v12.0.0
NAME              CHART VERSION APP VERSION DESCRIPTION
appscode/voyager  v12.0.0    v12.0.0  Voyager by AppsCode - Secure HAProxy Ingress Controller...

# Set the provider variable to match your cluster; supported values:
# provider=acs
# provider=aks
# provider=aws
# provider=azure
# provider=baremetal
# provider=gce
# provider=gke
# provider=minikube
# provider=openstack
# provider=metallb
# provider=digitalocean
# provider=linode

$ helm template voyager-operator appscode/voyager --version v12.0.0 \
  --namespace kube-system \
  --no-hooks \
  --set cloudProvider=$provider | kubectl apply -f -

To see the detailed configuration options, visit here.

Installing in GKE Cluster

If you are installing Voyager on a GKE cluster, you will need cluster admin permissions to install the Voyager operator. Run the following command to grant admin permission to your account.

$ kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value core/account)"

Installing in Minikube

Voyager can be used in Minikube using --provider=minikube. In Minikube, a LoadBalancer type ingress will only be assigned a NodePort.
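For example, the Helm 3 install command from above with the provider set for Minikube:

```shell
# Install the operator with the Minikube cloud provider
$ helm install voyager-operator appscode/voyager --version v12.0.0 \
  --namespace kube-system \
  --set cloudProvider=minikube
```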

Installing in Baremetal Cluster

Voyager works great in bare metal clusters. To install, set --provider=baremetal. In a bare metal cluster, LoadBalancer type ingress is not supported. You can use NodePort, HostPort, or Internal ingress objects instead.
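As a sketch, a NodePort type Voyager Ingress can be requested with the ingress.appscode.com/type annotation; the host and backend service name below are placeholders, not part of any real deployment:

```shell
$ kubectl apply -f - <<EOF
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    # Expose via NodePort instead of LoadBalancer
    ingress.appscode.com/type: NodePort
spec:
  rules:
  - host: voyager.example.com      # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: test-service   # placeholder backend service
          servicePort: "80"
EOF
```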

Installing in Baremetal Cluster with MetalLB

Follow the instructions for installing on a bare metal cluster, but specify metallb as the provider. Then install MetalLB following the instructions here. Now you can use LoadBalancer type ingress in bare metal clusters.

Installing in DigitalOcean Cluster

To use LoadBalancer type ingress in a DigitalOcean cluster, install the Kubernetes cloud controller manager for DigitalOcean. Otherwise, set the cloud provider to baremetal.

Installing in Linode Cluster

To use LoadBalancer type ingress in a Linode cluster, install the Kubernetes cloud controller manager for Linode. Otherwise, set the cloud provider to baremetal.

Verify installation

To check if Voyager operator pods have started, run the following command:

$ kubectl get pods --all-namespaces -l app=voyager --watch

Once the operator pods are running, you can cancel the above command by typing Ctrl+C.

Now, to confirm CRD groups have been registered by the operator, run the following command:

$ kubectl get crd -l app=voyager

Now, you are ready to create your first ingress using Voyager.

Configuring RBAC

Voyager creates two CRDs: Ingress and Certificate. The Voyager installer will create two user-facing cluster roles:

ClusterRole            Aggregates To  Description
appscode:voyager:edit  admin, edit    Allows edit access to Voyager CRDs, intended to be granted within a namespace using a RoleBinding.
appscode:voyager:view  view           Allows read-only access to Voyager CRDs, intended to be granted within a namespace using a RoleBinding.

These user-facing roles support the ClusterRole aggregation feature in Kubernetes 1.9 or later clusters.
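For example, to grant a hypothetical user jane edit access to Voyager CRDs in a demo namespace, bind the appscode:voyager:edit cluster role with a RoleBinding:

```shell
# Namespace-scoped grant of the aggregated edit role (user name is a placeholder)
$ kubectl create rolebinding voyager-edit-jane \
  --namespace demo \
  --clusterrole=appscode:voyager:edit \
  --user=jane
```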

Using kubectl

Since Voyager uses its own CRDs, you need to use the full resource kind to find them with kubectl.

# List all voyager ingress
$ kubectl get ingress.voyager.appscode.com --all-namespaces

# List voyager ingress for a namespace
$ kubectl get ingress.voyager.appscode.com -n <namespace>

# Get Ingress YAML
$ kubectl get ingress.voyager.appscode.com -n <namespace> <ingress-name> -o yaml

# Describe Ingress. Very useful to debug problems.
$ kubectl describe ingress.voyager.appscode.com -n <namespace> <ingress-name>

Detect Voyager version

To detect the Voyager version, exec into the operator pod and run the voyager version command.

$ POD_NAMESPACE=kube-system
$ POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app=voyager -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- voyager version

Version = v12.0.0
VersionStrategy = tag
Os = alpine
Arch = amd64
CommitHash = ab0b38d8f5d5b4b4508768a594a9d98f2c76abd8
GitBranch = release-4.0
GitTag = v12.0.0
CommitTimestamp = 2017-10-08T12:45:26