
Installing with the CLI

Kubernetes Knowledge

Assistance with configuring or setting up a Kubernetes cluster is not included in this guide. If you need more information about Kubernetes, start with the official Kubernetes documentation.

Requirements

  • A Kubernetes cluster:
    • Traefik Enterprise follows the Kubernetes support policy and supports at least the three latest minor versions of Kubernetes (v1.23 or greater for the current version of Traefik Enterprise). General functionality cannot be guaranteed for older versions.
    • kubectl properly configured, with the ability to create and manage namespaces and their resources.
    • A default StorageClass configured. Traefik Enterprise uses StatefulSets with persistent volumes enabled and relies on the cluster's default StorageClass. Instructions to set up a default StorageClass can be found in the Kubernetes documentation.
  • Controller pods must be able to reach https://v4.license.containous.cloud.
  • The teectl binary is installed, for cluster management.
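
The cluster-side requirements above can be sanity-checked with standard kubectl commands before starting (these checks are illustrative, not part of teectl):

```shell
# Verify kubectl is configured and allowed to manage namespaces:
kubectl auth can-i create namespaces

# Verify a default StorageClass exists (look for "(default)" next to its name):
kubectl get storageclass
```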

Installing

This guide provides a detailed explanation of the install process. For a lighter approach, please refer to the getting started guide.

Generating the teectl Configuration

To start the installation, teectl must first be configured:

teectl setup --kubernetes

This command creates a new teectl configuration under ~/.config/traefikee/default.yaml and generates a bundle.zip containing the assets required for the installation.

To further configure the teectl setup, please refer to the reference documentation.

Sensitive Information

The generated bundle.zip carries sensitive information and is required to generate installation manifests. It should be stored in a secure place.

Managing Multiple Clusters from a Single Workstation

To manage multiple clusters from a single workstation, please refer to the Managing Multiple Clusters guide.

Writing the Bundle at a Different Path

teectl setup comes with an --output option that specifies the output path in which to create the bundle.
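
For example, to write the bundle to a custom location (the path below is illustrative, assuming --output accepts a file path):

```shell
# Write the setup bundle to a custom location instead of the default path:
teectl setup --kubernetes --output /secure/location/bundle.zip
```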

Generating the Installation Manifest

Installing with Service Mesh Option

The service mesh is not installed by default. Enabling it requires commands in addition to those described in this section. For more information, see the Service Mesh section.

The next step is to generate the installation manifest based on the bundle generated previously:

# In this example, we generate a manifest for an installation with 3 controllers and 3 ingress proxies.
teectl setup gen --controllers=3 --proxies=3 --license="your-traefikee-license" > traefikee-manifest.yaml
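
Before applying it, the generated manifest can be validated without touching the cluster using a standard kubectl client-side dry run:

```shell
# Validate the generated manifest without creating any resources:
kubectl apply --dry-run=client -f traefikee-manifest.yaml
```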

The generated manifest describes all the resources required to run a cluster in Kubernetes.

To further configure the generated manifest, please refer to the reference documentation.

Sensitive Information

The generated manifest carries sensitive information. It should be stored in a secure place or deleted after installing, as it can be regenerated.

Using a Bundle in a Different Path

If the generated bundle is located in a different path, use the --bundle option with teectl setup gen.
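
For example, pointing teectl setup gen at a bundle stored elsewhere (the bundle path below is illustrative):

```shell
# Generate the manifest from a bundle stored outside the current directory:
teectl setup gen --controllers=3 --proxies=3 --license="your-traefikee-license" \
  --bundle /secure/location/bundle.zip > traefikee-manifest.yaml
```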

One-Line Installation

To directly install your cluster without writing the generated manifest to a file, please use:

teectl setup gen --controllers=3 --proxies=3 --license="your-traefikee-license" | kubectl apply -f -

Customizing the Manifest (Optional)

There is a section dedicated to customizing the manifest file, with some common scenarios that require it.

Rootless Image

A rootless image is available. For more information, please refer to this section.

Service Mesh

The service mesh is not enabled by default. To generate a manifest that includes the DaemonSet for the mesh proxies, you must provide the --mesh flag to teectl setup gen, like so:

# In this example, we generate a manifest for an installation with 3 controllers, 3 ingress proxies and a mesh proxy per node.
teectl setup gen --controllers=3 --proxies=3 --mesh --license="your-traefikee-license" > traefikee-manifest.yaml

KubeDNS

If your cluster is using KubeDNS – which is the default in GKE, for example – you must pass the --mesh.kubedns option to teectl setup gen if you wish to enable the service mesh. This option generates Kubernetes objects to install CoreDNS in the Traefik Enterprise namespace. The KubeDNS ConfigMap will then be patched using stubDomains to delegate service mesh resolution to CoreDNS.
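
Combining the two options described above, a mesh-enabled installation on a KubeDNS-based cluster looks like:

```shell
# Generate a manifest with the service mesh and KubeDNS support enabled:
teectl setup gen --controllers=3 --proxies=3 --mesh --mesh.kubedns \
  --license="your-traefikee-license" > traefikee-manifest.yaml
```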

DNS Patching

The service mesh provided by Traefik Enterprise is opt-in by default: a service uses the mesh only if it chooses to. To do so, it has to contact other services using a URL that follows the pattern {service}.{namespace}.maesh.

In order to resolve these URLs, the DNS server configuration has to be patched using the following command:

teectl setup patch-dns
...
DNS patched successfully

Cluster DNS Requirements

DNS patching is supported on CoreDNS and KubeDNS when installed as the cluster DNS provider (versions 1.3+).
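
One way to check which DNS provider the cluster runs (a heuristic, since both CoreDNS and KubeDNS deployments commonly carry the k8s-app=kube-dns label) is to inspect the DNS pods' images:

```shell
# List the cluster DNS pods and their images to identify CoreDNS vs KubeDNS:
kubectl -n kube-system get pods -l k8s-app=kube-dns \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```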

Deploying the Cluster

Once the manifest is reviewed and ready, the next step is to deploy the cluster:

kubectl apply -f traefikee-manifest.yaml
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/middlewares.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/ingressroutetcps.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/ingressrouteudps.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/tlsoptions.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/tlsstores.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/traefikservices.traefik.containo.us created
namespace/traefikee created
serviceaccount/default-svc-acc created
clusterrole.rbac.authorization.k8s.io/default-svc-acc-role created
clusterrolebinding.rbac.authorization.k8s.io/default-svc-acc created
role.rbac.authorization.k8s.io/default-svc-acc-role-traefikee created
rolebinding.rbac.authorization.k8s.io/default-svc-acc-traefikee created
secret/default-mtls created
service/default-ctrl-svc created
statefulset.apps/default-controller created
service/default-proxy-svc created
deployment.apps/default-proxy created

With the service mesh enabled (--mesh), the same resources are created, along with the following additional ones:

configmap/tcp-state-table created
configmap/udp-state-table created
daemonset.apps/default-mesh-proxy created
poddisruptionbudget.policy/default-mesh-proxy created
serviceaccount/default-maesh-proxy created
customresourcedefinition.apiextensions.k8s.io/traffictargets.access.smi-spec.io created
customresourcedefinition.apiextensions.k8s.io/httproutegroups.specs.smi-spec.io created
customresourcedefinition.apiextensions.k8s.io/tcproutes.specs.smi-spec.io created
customresourcedefinition.apiextensions.k8s.io/trafficsplits.split.smi-spec.io created

Monitor the installation progress using kubectl:

kubectl -n traefikee get pods
NAME                             READY   STATUS    RESTARTS   AGE
default-controller-1             1/1     Running   0          44s
default-controller-0             1/1     Running   0          44s
default-controller-2             1/1     Running   0          44s
default-proxy-6f488c84c5-cx9wj   1/1     Running   0          44s
default-proxy-6f488c84c5-b2c9d   1/1     Running   0          44s
default-proxy-6f488c84c5-2zwb7   1/1     Running   0          44s

With the service mesh enabled, the listing also includes the mesh proxy pods:

NAME                             READY   STATUS    RESTARTS   AGE
svclb-default-proxy-svc-xqjjj    2/2     Running   0          44s
default-controller-1             1/1     Running   0          44s
default-controller-0             1/1     Running   0          44s
default-controller-2             1/1     Running   0          44s
default-mesh-proxy-rqd64         1/1     Running   0          44s
default-proxy-6f488c84c5-cx9wj   1/1     Running   0          44s
default-proxy-6f488c84c5-b2c9d   1/1     Running   0          44s
default-proxy-6f488c84c5-2zwb7   1/1     Running   0          44s

When all the pods are running, ensure that the cluster is properly installed using teectl get nodes:

teectl get nodes
ID                         NAME                            STATUS  ROLE
3l5xt87fkc2ztlqlkwcpavuev  default-proxy-6f488c84c5-cx9wj  Ready   Proxy / Ingress
52sje29l1zreu1h319vabtzmx  default-controller-1            Ready   Controller
c5j53krue2avv77ajr8h5bcoz  default-controller-0            Ready   Controller (Leader)
yjtz8kvnsgmqmuycup69vx180  default-proxy-6f488c84c5-2zwb7  Ready   Proxy / Ingress
yo4cycxshnuazwvmrfjtowugw  default-proxy-6f488c84c5-b2c9d  Ready   Proxy / Ingress
yqz838gxifzoh0czugxju2r4p  default-controller-2            Ready   Controller

With the service mesh enabled, the mesh proxies also appear as cluster nodes:

ID                         NAME                            STATUS  ROLE
3l5xt87fkc2ztlqlkwcpavuev  default-proxy-6f488c84c5-cx9wj  Ready   Proxy / Ingress
52sje29l1zreu1h319vabtzmx  default-controller-1            Ready   Controller
c5j53krue2avv77ajr8h5bcoz  default-controller-0            Ready   Controller (Leader)
kegs3jdet7g08ckxjb3jxgi32  default-mesh-proxy-rqd64        Ready   Proxy / Mesh
yjtz8kvnsgmqmuycup69vx180  default-proxy-6f488c84c5-2zwb7  Ready   Proxy / Ingress
yo4cycxshnuazwvmrfjtowugw  default-proxy-6f488c84c5-b2c9d  Ready   Proxy / Ingress
yqz838gxifzoh0czugxju2r4p  default-controller-2            Ready   Controller

Applying a Static Configuration

A cluster is created without any default configuration. To allow the controller to listen to a provider and proxies to manage incoming traffic, it is necessary to apply a static configuration.

The following example defines two entry points (listening on ports 80 and 443) and enables the Kubernetes IngressRoute provider.

---
providers:
  kubernetesCRD: {}

entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

# Uncomment the following value to enable the service mesh
#mesh: {}

The equivalent configuration in TOML:

[providers.kubernetesCRD]

[entryPoints]
  [entryPoints.web]
    address = ":80"
  [entryPoints.websecure]
    address = ":443"

# Uncomment the following value to enable the service mesh
#[mesh]

Apply the configuration using one of the following commands, depending on the chosen format:

teectl apply --file="./static.yaml"
teectl apply --file="./static.toml"

At any time, it is possible to get the currently applied static configuration of a cluster using:

teectl get static-config
---
configuration:
  global:
    checkNewVersion: true
  serversTransport:
    maxIdleConnsPerHost: 200
  entryPoints:
    web:
      address: :80
      transport:
        lifeCycle:
          graceTimeOut: 10s
        respondingTimeouts:
          idleTimeout: 3m0s
      forwardedHeaders: {}
    websecure:
      address: :443
      transport:
        lifeCycle:
          graceTimeOut: 10s
        respondingTimeouts:
          idleTimeout: 3m0s
      forwardedHeaders: {}
  providers:
    providersThrottleDuration: 2s
    kubernetesCRD: {}
cluster:
  cleanup:
    gracePeriod: 1h0m0s
mesh:
  defaultMode: http
  httpPortLimit: 10
  tcpPortLimit: 25
  udpPortLimit: 25

Deploying an Ingress Test Service

To validate your setup, it is possible to deploy a test application using the following manifests:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:v1.6.1
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  type: ClusterIP
  ports:
    - port: 80
      name: whoami
  selector:
    app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  entryPoints:
    - web
  routes:
    - match: Path(`/whoami`)
      kind: Rule
      services:
        - name: whoami
          namespace: default
          port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
    - name: client
      image: giantswarm/tiny-tools:3.9
      command:
        - "sleep"
        - "36000"
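
The manifests above can be saved to a file and applied with kubectl (the filename whoami.yaml is just an example):

```shell
# Apply the test application manifests:
kubectl apply -f whoami.yaml

# Wait until the application and the client pod are ready:
kubectl rollout status deployment/whoami
kubectl wait --for=condition=Ready pod/client
```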

Then access the application using the following command:

curl <your-cluster-hostname-or-ip>/whoami
Hostname: whoami-57bcbf7487-bkls7
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.17
IP: fe80::cc09:6cff:fe1b:678d
RemoteAddr: 10.42.0.14:45778
GET /whoami HTTP/1.1
Host: 172.18.0.2
User-Agent: curl/7.65.3
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.42.0.1
X-Forwarded-Host: 172.18.0.2
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: default-proxy-6f488c84c5-b2c9d
X-Real-Ip: 10.42.0.1

Access the application inside the cluster using the service mesh:

kubectl exec -it pod/client -- curl whoami.default.maesh
Hostname: whoami-57bcbf7487-bkls7
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.17
IP: fe80::cc09:6cff:fe1b:678d
RemoteAddr: 10.42.0.15:48300
GET / HTTP/1.1
Host: whoami.default.maesh
User-Agent: curl/7.64.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.42.0.18
X-Forwarded-Host: whoami.default.maesh
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: default-mesh-proxy-rqd64
X-Real-Ip: 10.42.0.18

Congratulations! Your Traefik Enterprise cluster is ready.

What's Next?

Now that the cluster is ready, we recommend reading the various operating guides to dive into all features that Traefik Enterprise provides.

We also recommend getting familiar with the various concepts of Traefik Enterprise.

Installing on OpenShift

Requirements

  • OpenShift 4.1.7, with the oc admin tools from openshift-client
  • cluster-admin privileges to manage Security Context Constraint

Security Context Constraints

From OpenShift version 3.0 onwards, security context constraints give the cluster admin the ability to control permissions for pods.

However, because the default security context constraints do not allow binding privileged ports (below port 1024), a custom one is required in order to install Traefik Enterprise.

Here is an example of a security context constraint that a cluster admin can assign to a user, service account, or group:

---
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: traefikee-scc provides all features of the restricted SCC
      but allows users to run with any UID and any GID.
  name: traefikee-scc
priority: 10

allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities:
- NET_BIND_SERVICE
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups:
- system:authenticated
readOnlyRootFilesystem: false
requiredDropCapabilities:
- MKNOD
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret

To declare the security context constraint in the OpenShift cluster:

kubectl apply -f ./traefikee-scc.yaml

Then, associate the security context constraint with the default service account of the destination namespace (here, traefikee):

oc adm policy add-scc-to-user traefikee-scc -z default -n traefikee
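
A quick way to confirm the constraint is in place (oc get/describe are standard OpenShift commands; depending on the OpenShift version, the user association may be stored as a role binding rather than in the SCC's users field):

```shell
# Confirm the custom SCC exists and inspect its settings:
oc get scc traefikee-scc
oc describe scc traefikee-scc
```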

And then follow the usual installation procedure for Kubernetes.