Installing Traefik Enterprise Edition on Kubernetes¶
Kubernetes Knowledge
Assistance with configuring or setting up a Kubernetes cluster is not included in this guide. If you need more information about Kubernetes, start with the official Kubernetes documentation.
Requirements¶
- A Kubernetes cluster:
  - Supported versions: 1.13 to 1.18
  - kubectl properly configured, with the ability to create and manage namespaces and their resources.
  - A default StorageClass configured. TraefikEE uses StatefulSets with persistent volumes enabled, and relies on the default StorageClass of the cluster. Instructions to set up a default storage class can be found in the Kubernetes documentation; see also the example after this list.
- Controller pods can reach https://v3.license.containous.cloud.
- The teectl binary is installed, for cluster management.
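If the cluster already has a StorageClass but none is marked as default, it can be flagged as such with the standard Kubernetes annotation. A minimal sketch, assuming an existing class named standard (replace it with a name reported by kubectl get storageclass):
kubectl get storageclass
# Mark the existing class "standard" as the cluster default
kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'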
Installing¶
This guide provides a detailed explanation of the install process. For a lighter approach, please refer to the getting started guide.
Generating the teectl Configuration¶
To start the installation, teectl must first be configured:
teectl setup --kubernetes
This command creates a new teectl configuration under ~/.config/traefikee/default.yaml and generates a bundle.zip, which carries a set of assets needed for the installation.
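As a quick sanity check, the generated configuration file can be listed; the path is the one mentioned above (the location of bundle.zip depends on where the command was run and on the --output option described below):
ls -l ~/.config/traefikee/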
To further configure the teectl setup, please refer to the reference documentation.
Sensitive Information
The generated bundle.zip carries sensitive information and is required to generate installation manifests. It should be stored in a secure place.
Managing Multiple Clusters from a Single Workstation
To manage multiple clusters from a single workstation, please refer to the Managing Multiple Clusters guide.
Writing the Bundle at a Different Path
teectl setup comes with an --output option that specifies the output path in which to create the bundle.
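For example, a sketch assuming --output accepts a target directory (the ./bundles path is illustrative):
# Write the generated bundle under ./bundles instead of the default location
teectl setup --kubernetes --output ./bundles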
Generating the Installation Manifest¶
The next step is to generate the installation manifest based on the bundle generated previously:
# In this example, we generate a manifest for an installation with 3 controllers and 3 proxies.
teectl setup gen --controllers=3 --proxies=3 --license="your-traefikee-license" > traefikee-manifest.yaml
The generated manifest describes all the resources required to run a cluster in Kubernetes, including:
- A traefikee namespace where all resources are created
- Traefik custom resource definitions to create IngressRoutes and related resources
- The controller StatefulSet
- The proxy Deployment
- The controller headless Service
- The proxy LoadBalancer Service
To further configure the generated manifest, please refer to the reference documentation.
Sensitive Information
The generated manifest carries sensitive information.
It should be stored in a secure place or deleted after installing, as it can be regenerated.
Using a Bundle in a Different Path
If the generated bundle is located in a different path, use the --bundle option of the gen command.
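For example, assuming the bundle was previously written to ./bundles/bundle.zip (an illustrative path):
teectl setup gen --bundle ./bundles/bundle.zip --controllers=3 --proxies=3 --license="your-traefikee-license" > traefikee-manifest.yaml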
One Line Installation
To directly install your cluster without writing the generated manifest on a file, please use:
teectl setup gen --controllers=3 --proxies=3 --license="license" | kubectl apply -f -
Customizing the Manifest (Optional)¶
There is a section dedicated to customizing the manifest file, with some common scenarios that require it.
Rootless Image
A rootless image is available. For more information, please refer to this section.
Deploying the Cluster¶
Once the manifest is reviewed and ready, the next step is to deploy the cluster:
kubectl apply -f traefikee-manifest.yaml
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.containo.us unchanged
customresourcedefinition.apiextensions.k8s.io/middlewares.traefik.containo.us unchanged
customresourcedefinition.apiextensions.k8s.io/tlsoptions.traefik.containo.us unchanged
customresourcedefinition.apiextensions.k8s.io/ingressroutetcps.traefik.containo.us unchanged
customresourcedefinition.apiextensions.k8s.io/traefikservices.traefik.containo.us unchanged
namespace/traefikee created
serviceaccount/default-svc-acc created
clusterrole.rbac.authorization.k8s.io/default-svc-acc-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/default-svc-acc unchanged
secret/default-mtls created
service/default-ctrl-svc created
statefulset.apps/default-controller created
service/default-proxy-svc created
deployment.apps/default-proxy created
Monitor the installation progress using kubectl:
kubectl -n traefikee get pods
NAME READY STATUS RESTARTS AGE
default-controller-0 1/1 Running 0 32s
default-controller-1 1/1 Running 0 32s
default-controller-2 1/1 Running 0 32s
default-proxy-78877d77d9-bf4w2 1/1 Running 0 32s
default-proxy-78877d77d9-m5v9b 1/1 Running 0 32s
default-proxy-78877d77d9-z9pxg 1/1 Running 0 32s
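If a pod stays in a non-Running state, its logs and events usually indicate the cause. For instance, using the pod names from the example output above:
kubectl -n traefikee logs default-controller-0
kubectl -n traefikee describe pod default-controller-0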
When all the pods are running, ensure that the cluster is properly installed using teectl get nodes:
teectl get nodes
ID NAME STATUS ROLE
7tw8nppypruy0iqgfyqf2jvz5 default-controller-1 Ready Controller
i8lmlridgrihknvfpp9a4ckyo default-proxy-78877d77d9-z9pxg Ready Proxy
j8e9luok9ksoj35v80mjasies default-proxy-78877d77d9-bf4w2 Ready Proxy
lufljo7vjyqptmsczfcxb4ljm default-controller-2 Ready Controller
rb74v1d548petedlqny2n76cm default-controller-0 Ready Controller (Leader)
t3zbk2d66k31xsv0qacwnlo5b default-proxy-78877d77d9-m5v9b Ready Proxy
Applying a Static Configuration¶
A cluster is created without any default configuration. To allow the controller to listen to a provider and proxies to manage incoming traffic, it is necessary to apply a static configuration.
The following example defines two entrypoints (listening on ports 80 and 443) and enables the Kubernetes IngressRoute (CRD) provider. It is shown in both TOML and YAML formats:
[providers.kubernetesCRD]
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.websecure]
address = ":443"
---
providers:
kubernetesCRD: {}
entryPoints:
web:
address: ":80"
websecure:
address: ":443"
Apply the configuration using the command that matches the format of your file:
teectl apply --file="./static.toml"
teectl apply --file="./static.yaml"
At any time, it is possible to get the currently applied static configuration of a cluster using:
teectl get static-config
---
configuration:
global:
checkNewVersion: true
serversTransport:
maxIdleConnsPerHost: 200
entryPoints:
web:
address: :80
transport:
lifeCycle:
graceTimeOut: 10s
respondingTimeouts:
idleTimeout: 3m0s
forwardedHeaders: {}
websecure:
address: :443
transport:
lifeCycle:
graceTimeOut: 10s
respondingTimeouts:
idleTimeout: 3m0s
forwardedHeaders: {}
providers:
providersThrottleDuration: 2s
kubernetesCRD: {}
cluster:
cleanup:
gracePeriod: 1h0m0s
Deploying a Test Service¶
To validate your setup, it is possible to deploy a test application using the following manifest:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: whoami
spec:
replicas: 1
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: containous/whoami:v1.5.0
---
apiVersion: v1
kind: Service
metadata:
name: whoami
labels:
app: whoami
spec:
type: ClusterIP
ports:
- port: 80
name: whoami
selector:
app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: whoami
spec:
entryPoints:
- web
routes:
- match: Path(`/whoami`)
kind: Rule
services:
- name: whoami
namespace: default
port: 80
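Save the manifest to a file and apply it with kubectl (the file name whoami.yaml is arbitrary):
kubectl apply -f whoami.yaml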
Then access the application using the following command:
curl <your-cluster-hostname-or-ip>/whoami
Hostname: 06c13be38c61
IP: 127.0.0.1
IP: 172.17.0.3
RemoteAddr: 172.17.0.1:33684
GET / HTTP/1.1
Host: localhost
User-Agent: curl/7.68.0
Accept: */*
Congratulations! Your TraefikEE cluster is ready.
What's Next?
Now that the cluster is ready, we recommend reading the various operating guides to dive into all features that TraefikEE provides.
We also recommend getting familiar with the various concepts of TraefikEE.
Installing on Openshift¶
Requirements¶
- OpenShift: 4.1.7, with the oc admin tools from openshift-client
- cluster-admin privileges to manage Security Context Constraints
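Before starting, it can help to confirm that the oc client is available and that the current user can manage Security Context Constraints. A quick check, assuming the client is already logged in to the cluster:
oc version
oc auth can-i create securitycontextconstraints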
Security Context Constraint¶
From OpenShift version 3.0 onwards, Security Context Constraints give a cluster-admin the ability to control permissions for pods.
However, because the default Security Context Constraints do not allow binding privileged ports (under 1024), a custom one is required in order to install TraefikEE.
Here is an example of a Security Context Constraint that a cluster-admin can set on a user, Service Account, or Group.
---
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
annotations:
kubernetes.io/description: traefikee-scc provides all features of the restricted SCC
but allows users to run with any UID and any GID.
name: traefikee-scc
priority: 10
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities:
- NET_BIND_SERVICE
defaultAddCapabilities: null
fsGroup:
type: RunAsAny
groups:
- system:authenticated
readOnlyRootFilesystem: false
requiredDropCapabilities:
- MKNOD
runAsUser:
type: RunAsAny
seLinuxContext:
type: MustRunAs
supplementalGroups:
type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
To declare the Security Context Constraint in the OpenShift cluster:
kubectl apply -f ./traefikee-scc.yaml
Then, associate the Security Context Constraint with the user default of the destination namespace (here traefikee):
oc adm policy add-scc-to-user traefikee-scc -z default -n traefikee
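To double-check the association, the SCC can be inspected; the describe output lists the users and groups it applies to:
oc describe scc traefikee-scc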
Finally, follow the Kubernetes installation procedure described above.