Distributed RateLimit
The Distributed RateLimit middleware ensures that requests are limited over time throughout your cluster and not only on an individual proxy.
It is based on the Token Bucket algorithm.
Install Redis
In Traefik Hub API Gateway, the Distributed RateLimit middleware requires you to install a Redis instance in your cluster.
# Install Redis with the Bitnami Helm chart.
helm install my-release oci://registry-1.docker.io/bitnamicharts/redis

# Retrieve the password generated for the Redis release.
kubectl get secret --namespace default my-release-redis -o jsonpath="{.data.redis-password}" | base64 -d

# Create the application namespace and copy the Redis password into a Secret
# that the middleware can reference.
kubectl create ns apps
kubectl create secret generic redis --from-literal=password=$(kubectl get secret --namespace default my-release-redis -o jsonpath="{.data.redis-password}" | base64 -d) -n apps
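If you want, you can check that the Secret referenced by the middleware exists in the `apps` namespace before going further (an optional sanity check, not part of the setup itself):

```shell
# Should print the same password as the one generated by the Helm release.
kubectl get secret redis -n apps -o jsonpath="{.data.password}" | base64 -d
```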
Connection parameters for your Redis server are configured on the Middleware itself (see the store.redis section in the example below).
The following Redis modes are supported:
- Single instance mode
- Redis Cluster
- Redis Sentinel
For more information about Redis, we recommend the official Redis documentation.
If you use Redis in single instance mode or Redis Sentinel, you can configure the `database` field. This value is not taken into account if you use Redis Cluster (only database 0 is available); in that case, a warning is displayed and the value is ignored.
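For instance, a single-instance store configuration pointing at a non-default database could look like the following excerpt (a sketch only: the database value of 1 is illustrative, and the surrounding fields are taken from the full example below; check the reference page for the exact field list):

```yaml
store:
  redis:
    endpoints:
      - my-release-redis-master.default.svc.cluster.local:6379
    password: urn:k8s:secret:redis:password
    database: 1 # ignored (with a warning) when the endpoints belong to a Redis Cluster
```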
Distributed Rate Limit
The complete example below is made of the following resources:
- Middleware Distributed Rate Limit
- IngressRoute
- Service & Deployment
- TLS Certificate
# Here, we configure a rate-limit middleware where each IP address gets its own bucket.
# Buckets are refilled at a rate of 100 tokens per second, up to a maximum capacity of 200 tokens.
# This allows a client to send, on average, 100 requests per second, with temporary bursts of up to 200 requests.
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: ratelimit
  namespace: apps
spec:
  plugin:
    distributedRateLimit:
      burst: 200
      denyOnError: false
      limit: 100
      period: 1s
      responseHeaders: true
      sourceCriterion:
        ipStrategy: {}
      store:
        redis:
          endpoints:
            - my-release-redis-master.default.svc.cluster.local:6379
          password: urn:k8s:secret:redis:password
          timeout: 500ms
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: applications-apigateway-rl
  namespace: apps
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Path(`/hello-ratelimit`)
      services:
        - name: whoami
          port: 80
      middlewares:
        - name: ratelimit
          namespace: apps
  tls:
    secretName: "supersecret"
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami
  namespace: apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: apps
  labels:
    app: whoami
spec:
  type: ClusterIP
  ports:
    - port: 80
      name: whoami
  selector:
    app: whoami
apiVersion: v1
kind: Secret
metadata:
  name: supersecret
  namespace: apps
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0=
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0=
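Once the resources are applied, you can exercise the route from a shell. This is only a quick check: `<GATEWAY_IP>` is a placeholder for however you expose the `websecure` entrypoint, and `-k` is only needed because the example certificate is self-signed. With `responseHeaders: true`, the responses should carry rate-limit headers, and if you send requests faster than the configured rate, the excess requests should be rejected (typically with an HTTP 429 status):

```shell
# Send a burst of requests and summarize the returned HTTP status codes.
for i in $(seq 1 300); do
  curl -sk -o /dev/null -w "%{http_code}\n" https://<GATEWAY_IP>/hello-ratelimit
done | sort | uniq -c
```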
Advanced options, such as the Redis store configuration options, are described in the reference page.
If you have only one instance of Traefik Hub API Gateway deployed, or if you don't want to share the rate-limit bucket between your instances, you can use the RateLimit middleware instead.
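As a rough, non-distributed counterpart to the example above, such a middleware could look like the sketch below (the name is arbitrary, and the `average`/`period`/`burst` values then apply to each gateway instance separately; see the RateLimit reference for all options):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: ratelimit-local
  namespace: apps
spec:
  rateLimit:
    average: 100
    period: 1s
    burst: 200
    sourceCriterion:
      ipStrategy: {}
```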
Related Content
- See the full options of the distributed ratelimit middleware in the dedicated section.
- See the full options of the ratelimit middleware in the dedicated section.
- See how to secure your API access using OIDC.