Frequently Asked Questions
How many controllers do you need for high availability?
A multi-controller cluster is healthy as long as the nodes can reach a quorum to elect a new leader in case of failure. The quorum is (N/2) + 1, where N is the initial number of controllers. If the number of healthy nodes drops below this quorum, the cluster must be recovered before it can resume normal operations.
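For example, a cluster that starts with three controllers needs at least two healthy controllers to keep a quorum, so it tolerates the loss of one; a five-controller cluster needs at least three and tolerates the loss of two.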
What happens if the cluster is unhealthy?
Proxies will continue to route requests based on the last known configuration.
Why can't I access ports 80 or 443 after a successful installation?
Even after installation is complete, TraefikEE won't listen on any ports on the proxies until a static configuration specifying entry-points is applied.
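As an illustration, a minimal static configuration that opens ports 80 and 443 could look like the sketch below (this assumes the YAML configuration format; the entry-point names web and websecure are only examples):

```yaml
# Illustrative static configuration: the proxies only start listening
# once entry-points like these are applied to the cluster.
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
```

Once a configuration declaring entry-points is applied, the proxies begin listening on the declared ports.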
How can I recover if my controller(s) died unexpectedly?
If you are using some kind of persistent volume on the host / pod / container, you can boot the controller up again and watch the logs to see whether the state was automatically recovered. If the cluster data was lost, please refer to the Backup and Restore section for more recovery options.
Can I update the static / dynamic configuration without downtime?
You can update the cluster configuration without losing requests, as long as the entry-points themselves are not changed.
Can I run multiple providers on the same TraefikEE cluster?
Yes, you can enable more than one provider in your static configuration.
Is the File provider supported on multi controller clusters?
Yes. The only limitation is that the configuration files have to be replicated on every host running a controller.
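As a sketch, a static configuration enabling both a Kubernetes provider and the File provider might look like the following (assuming Traefik v2-style YAML; the provider names and directory path are only examples, and the directory must exist on every controller host):

```yaml
providers:
  # Watch Kubernetes Ingress resources.
  kubernetesIngress: {}
  # Load extra dynamic configuration from files.
  # With multiple controllers, this directory must be replicated
  # on every host running a controller.
  file:
    directory: /etc/traefik/dynamic
    watch: true
```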
What installation method is best?
We recommend installing with teectl, even when customization is needed, as it will generate all the required manifest files for your platform.
Manual installation is required for on-premise users.
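As a rough sketch of the recommended flow on Kubernetes (the exact subcommands and flags depend on your TraefikEE version, so treat this as an assumption and check teectl --help):

```sh
# Generate the installation bundle for a Kubernetes cluster (illustrative flags).
teectl setup --kubernetes

# Render the generated manifests and apply them to the cluster.
teectl setup gen | kubectl apply -f -
```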
Why is it trying to start / use a traefik entrypoint when there is none in my static config?
Traefik has the concept of a default entrypoint to use for internal services, like the API or Ping, when they are enabled but no entrypoint is specified.
Why is my entrypoint conflicting with the traefik internal entrypoint?
The traefik default internal entrypoint uses port ':8080'. When setting up your own custom entrypoint on the same port, make sure you are not using the traefik internal one by explicitly specifying the entrypoint value on internal services like the API, Metrics, Dashboard, and Ping.
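For instance, if you define your own entry-point on port 8080, you can point internal services at it explicitly so the implicit traefik entry-point is never used. A rough sketch in YAML, assuming Traefik v2-style options (the entry-point name admin is only an example, and option names can differ between versions):

```yaml
entryPoints:
  admin:
    address: ":8080"

# Point internal services at the custom entry-point instead of relying
# on the implicit "traefik" one (illustrative; check your version's reference).
ping:
  entryPoint: admin
metrics:
  prometheus:
    entryPoint: admin
```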
Why do my proxies show up as new nodes in the cluster after every restart?
To ensure cluster consistency, the TraefikEE proxies are configured to always start from a clean state. This means they will get new node IDs inside the cluster and will show up in the Dashboard and in CLI queries as new nodes. Their old entries will be removed from the cluster by the controller(s) after a grace period, by default 1 hour.