Specify NodePort
If you are using a NodePort or LoadBalancer type Ingress, a NodePort or LoadBalancer type Service, respectively, is used to expose the HAProxy pods. If no node port is specified for a HAProxy Service port, Kubernetes will randomly assign one for you.
Since 3.2.0, you have the option to specify a NodePort for each HAProxy Service port. This guarantees that the port will not change as you make changes to the Ingress object. If you specify nothing, Kubernetes will auto-assign one as before.
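In the Ingress spec, the node port sits alongside the rule's port. A minimal fragment of an HTTP rule (the values here are illustrative):

```yaml
http:
  port: '8989'       # port the HAProxy Service listens on
  nodePort: '32666'  # fixed node port; must fall in the cluster's node port range (default 30000-32767)
  paths:
  - path: /t1
    backend:
      serviceName: test-server
      servicePort: '80'
```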
Ingress Example
First, create a test-server deployment and expose it via a Service:
$ kubectl run test-server --image=gcr.io/google_containers/echoserver:1.8
deployment "test-server" created
$ kubectl expose deployment test-server --type=LoadBalancer --port=80 --target-port=8080
service "test-server" exposed
Then create the ingress:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    ingress.appscode.com/type: NodePort
spec:
  rules:
  - host: one.example.com
    http:
      port: '8989'
      nodePort: '32666'
      paths:
      - path: /t1
        backend:
          serviceName: test-server
          servicePort: '80'
      - path: /t2
        backend:
          serviceName: test-server
          servicePort: '80'
  - host: other.example.com
    http:
      port: '8989'
      nodePort: '32666'
      paths:
      - backend:
          serviceName: test-server
          servicePort: '80'
  - host: appscode.example.com
    tcp:
      port: '4343'
      nodePort: '32667'
      backend:
        serviceName: test-server
        servicePort: '80'
Since the ingress.appscode.com/type: NodePort annotation is used, this Ingress exposes the HAProxy pods via a NodePort type Service. The Service listens on ports 8989 and 4343 for incoming connections; these ports map to the specified node ports (32666 and 32667), and any request arriving on them is passed to the desired backend.
$ kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
po/test-server-68ddc845cd-x7dtv 1/1 Running 0 23h
po/voyager-test-ingress-77cc5d54d-sgzkv 1/1 Running 0 18s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
svc/test-server LoadBalancer 10.105.13.31 <pending> 80:30390/TCP 1d
svc/voyager-test-ingress NodePort 10.106.53.141 <none> 8989:32666/TCP,4343:32667/TCP 26m
$ kubectl get svc voyager-test-ingress -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    ingress.appscode.com/last-applied-annotation-keys: ""
    ingress.appscode.com/origin-api-schema: voyager.appscode.com/v1beta1
    ingress.appscode.com/origin-name: test-ingress
  creationTimestamp: 2018-02-15T03:51:06Z
  name: voyager-test-ingress
  namespace: default
  ownerReferences:
  - apiVersion: voyager.appscode.com/v1beta1
    blockOwnerDeletion: true
    kind: Ingress
    name: test-ingress
    uid: 73203752-1203-11e8-b2d5-080027eaa7b2
  resourceVersion: "65769"
  selfLink: /api/v1/namespaces/default/services/voyager-test-ingress
  uid: 732a5322-1203-11e8-b2d5-080027eaa7b2
spec:
  clusterIP: 10.106.53.141
  externalTrafficPolicy: Cluster
  ports:
  - name: tcp-8989
    nodePort: 32666
    port: 8989
    protocol: TCP
    targetPort: 8989
  - name: tcp-4343
    nodePort: 32667
    port: 4343
    protocol: TCP
    targetPort: 4343
  selector:
    origin: voyager
    origin-api-group: voyager.appscode.com
    origin-name: test-ingress
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Now, if you check the HAProxy configuration generated by Voyager, you should see something like below:
# HAProxy configuration generated by https://github.com/appscode/voyager
# DO NOT EDIT!
global
  daemon
  stats socket /tmp/haproxy
  server-state-file global
  server-state-base /var/state/haproxy/
  # log using a syslog socket
  log /dev/log local0 info
  log /dev/log local0 notice
  tune.ssl.default-dh-param 2048
  ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
  hard-stop-after 30s
defaults
  log global
  # https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-option%20abortonclose
  # https://github.com/voyagermesh/voyager/pull/403
  option dontlognull
  option http-server-close
  # Timeout values
  timeout client 50s
  timeout client-fin 50s
  timeout connect 5s
  timeout server 50s
  timeout tunnel 50s
  # Configure error files
  # default traffic mode is http
  # mode is overwritten in case of tcp services
  mode http
frontend http-0_0_0_0-8989
  bind *:8989
  mode http
  option httplog
  option forwardfor
  acl is_proxy_https hdr(X-Forwarded-Proto) https
  acl acl_other.example.com hdr(host) -i other.example.com:8989
  use_backend test-server.default:80-2bdf8f33305898e39d66486c50d39fc1 if acl_other.example.com
  acl acl_one.example.com hdr(host) -i one.example.com:8989
  acl acl_one.example.com:t2 path_beg /t2
  use_backend test-server.default:80-64e63a31b2e805238363fc7982c38f12 if acl_one.example.com acl_one.example.com:t2
  acl acl_one.example.com:t1 path_beg /t1
  use_backend test-server.default:80-6c5cadcbfcb85a324f0cf5c4654dd952 if acl_one.example.com acl_one.example.com:t1
backend test-server.default:80-2bdf8f33305898e39d66486c50d39fc1
  server pod-test-server-68ddc845cd-x7dtv 172.17.0.4:8080
backend test-server.default:80-64e63a31b2e805238363fc7982c38f12
  server pod-test-server-68ddc845cd-x7dtv 172.17.0.4:8080
backend test-server.default:80-6c5cadcbfcb85a324f0cf5c4654dd952
  server pod-test-server-68ddc845cd-x7dtv 172.17.0.4:8080
frontend tcp-0_0_0_0-4343
  bind *:4343
  mode tcp
  default_backend test-server.default:80-b700282ee9f2823f5b4ad3452658a791
backend test-server.default:80-b700282ee9f2823f5b4ad3452658a791
  mode tcp
  server pod-test-server-68ddc845cd-x7dtv 172.17.0.4:8080
Port 8989 serves two separate hosts, one.example.com and other.example.com, and one.example.com has two paths, /t1 and /t2. Since they are all exposed via the same HTTP port, they must all use the same NodePort.
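To verify routing from outside the cluster, you can curl the node ports directly. A sketch, where the node IP 192.168.99.100 is a placeholder for one of your own nodes (e.g. from kubectl get nodes -o wide):

```shell
# Hypothetical node address; substitute one of your cluster's node IPs.
NODE_IP=192.168.99.100

# HTTP rules share nodePort 32666; HAProxy routes on the Host header and path.
curl -sf --max-time 3 -H 'Host: one.example.com' "http://$NODE_IP:32666/t1" || echo "one.example.com/t1: no response"
curl -sf --max-time 3 -H 'Host: other.example.com' "http://$NODE_IP:32666/" || echo "other.example.com: no response"

# The TCP rule on port 4343 uses nodePort 32667; in tcp mode there is no Host-based routing.
curl -sf --max-time 3 "http://$NODE_IP:32667/" || echo "tcp 4343: no response"
```

The Host header matters for the HTTP rules: HAProxy picks a backend using the host ACLs shown in the generated configuration above.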