Redis Broker
The Redis Broker uses a backing Redis instance to store events, providing durability that is suitable for production use cases where event loss is not acceptable during crashes or restarts.
TriggerMesh greatly simplifies the management of the Redis instance. It is also possible to point the Broker to any Redis instance, such as a managed instance from a cloud provider.
With tmctl:
Work in progress
This component is not yet available with tmctl. By default, Brokers created with tmctl are Memory Brokers. However, when you export your configuration to Kubernetes manifests using tmctl dump, TriggerMesh will export a Redis Broker, thereby providing message durability by default. You can switch to a Memory Broker if you prefer by updating the exported manifest.
TriggerMesh will add support for the Redis Broker in tmctl in the future.
On Kubernetes:
apiVersion: eventing.triggermesh.io/v1alpha1
kind: RedisBroker
metadata:
  name: <broker instance name>
spec:
  redis:
    connection: <Provides a connection to an external Redis instance. Optional>
      url: <Redis URL. Required>
      username: <Redis username, referenced using a Kubernetes secret>
        secretKeyRef:
          name: <Kubernetes secret name>
          key: <Kubernetes secret key>
      password: <Redis password, referenced using a Kubernetes secret>
        secretKeyRef:
          name: <Kubernetes secret name>
          key: <Kubernetes secret key>
      tlsEnabled: <boolean that indicates if the Redis server is TLS protected. Optional, defaults to false>
      tlsSkipVerify: <boolean that skips verifying TLS certificates. Optional, defaults to false>
    stream: <Redis stream name. Optional, defaults to a combination of namespace and broker name>
    streamMaxLen: <maximum number of items the Redis stream can host. Optional, defaults to unlimited>
  broker:
    port: <HTTP port for ingesting events>
    observability:
      valueFromConfigMap: <Kubernetes ConfigMap that contains observability configuration>
The only RedisBroker specific parameters are:
- spec.redis.connection: when not provided, the Broker spins up a managed Redis Deployment. However, for production scenarios that require HA and hardened security it is recommended to provide the connection to a user-managed Redis instance.
- spec.redis.stream: the Redis stream name to be used by the Broker. If it doesn't exist, the Broker will create it.
- spec.redis.streamMaxLen: the maximum number of elements that the stream will contain.
The spec.broker section contains generic Broker parameters:
- spec.broker.port: the port that the Broker service will listen on. Optional, defaults to port 80.
- spec.broker.observability: can be set to the name of a ConfigMap in the same namespace that contains observability settings (documentation coming soon). This parameter is optional.
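For illustration, here is what a RedisBroker that connects to an external, user-managed Redis instance could look like. The instance URL, the prod-redis-credentials Secret, and the stream settings below are hypothetical values chosen only to show how the fields fit together.

apiVersion: eventing.triggermesh.io/v1alpha1
kind: RedisBroker
metadata:
  name: prod-broker
spec:
  redis:
    connection:
      # Hypothetical external Redis endpoint.
      url: redis://my-redis.example.com:6379
      username:
        secretKeyRef:
          name: prod-redis-credentials   # hypothetical Secret holding the Redis credentials
          key: username
      password:
        secretKeyRef:
          name: prod-redis-credentials
          key: password
      tlsEnabled: true
    stream: prod-broker-stream
    streamMaxLen: 1000
  broker:
    port: 80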
Getting Started with the Redis Broker on Kubernetes
In this introductory tutorial we are going to set up a RedisBroker with a Trigger that sends events to a Target endpoint and, if delivery fails, to a Dead Letter Sink endpoint.
Instructions
TriggerMesh Core includes two components:
- RedisBroker, which uses a backing Redis instance to store events and routes them via Triggers.
- Trigger, which subscribes to events and pushes them to your targets.
Events must conform to the CloudEvents spec using the HTTP protocol binding.
Create a RedisBroker named demo.
kubectl apply -f https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/getting-started-redis/broker.yaml
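The referenced broker.yaml boils down to a minimal RedisBroker without an external Redis connection, so the Broker spins up its own managed Redis Deployment; the actual file may contain additional fields:

apiVersion: eventing.triggermesh.io/v1alpha1
kind: RedisBroker
metadata:
  name: demo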
Wait until the RedisBroker is ready. Its status will report the URL where events can be ingested.
kubectl get redisbroker demo
NAME   URL                                               AGE   READY   REASON
demo   http://demo-rb-broker.default.svc.cluster.local   10s   True
To be able to use the broker we will create a Pod that allows us to send events from inside the Kubernetes cluster.
kubectl apply -f https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/common/curl.yaml
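The curl.yaml manifest just runs a long-lived Pod with curl available. A minimal sketch, assuming the curlimages/curl image (the actual manifest may differ):

apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: curlimages/curl    # any image that ships curl works here
    command: ["sleep", "infinity"]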
We can now send events to the broker address by issuing curl commands. The response for an ingested event should be HTTP 200, which means the broker has received it and will try to deliver it to the configured Triggers.
kubectl exec -ti curl -- curl -v http://demo-rb-broker.default.svc.cluster.local/ \
-X POST \
-H "Ce-Id: 1234-abcd" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: demo.type1" \
-H "Ce-Source: curl" \
-H "Content-Type: application/json" \
-d '{"test1":"no trigger configured yet"}'
We haven't configured any Trigger yet, which means ingested events will not be delivered anywhere. event_display is a CloudEvents consumer that logs received events to the console. We will create two instances of event_display: one as the target for consumed events and another one for the Dead Letter Sink.
A Dead Letter Sink, abbreviated DLS, is a destination that consumes events that a subscription was not able to deliver to the intended target.
# Target service
kubectl apply -f https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/common/display-target.yaml
# DLS service
kubectl apply -f https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/common/display-deadlettersink.yaml
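Both manifests deploy an event_display consumer behind a Kubernetes Service that the Trigger can address. A sketch of the target variant, assuming Knative's event_display image (the real manifests may differ in image and details), looks roughly like this; the DLS variant is identical except for the display-deadlettersink names and labels:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: display-target
spec:
  replicas: 1
  selector:
    matchLabels:
      app: display-target
  template:
    metadata:
      labels:
        app: display-target
    spec:
      containers:
      - name: event-display
        # Assumed image: Knative's event_display, which logs received CloudEvents to stdout.
        image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: display-target
spec:
  selector:
    app: display-target
  ports:
  - port: 80
    targetPort: 8080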
The Trigger object configures the broker to consume events and send them to a target. The Trigger object can include filters that select which events should be forwarded to the target, and delivery options to configure retries and fallback targets when the event cannot be delivered.
kubectl apply -f https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/getting-started-redis/trigger.yaml
The Trigger created above filters CloudEvents containing the type: demo.type1 attribute and delivers them to the display-target service. If delivery fails, it will issue 3 retries and then forward the CloudEvent to the display-deadlettersink service.
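Put together, a Trigger equivalent to the one in trigger.yaml could be sketched as follows. The field names below (filters, target, delivery) reflect our reading of the TriggerMesh Trigger API and may differ slightly from the actual manifest, which remains the authoritative reference:

apiVersion: eventing.triggermesh.io/v1alpha1
kind: Trigger
metadata:
  name: demo-trigger
spec:
  broker:
    group: eventing.triggermesh.io
    kind: RedisBroker
    name: demo
  filters:
  - exact:
      type: demo.type1          # only events with this type attribute are delivered
  target:
    ref:
      apiVersion: v1
      kind: Service
      name: display-target
  delivery:
    retry: 3                    # retries before giving up on the target
    deadLetterSink:
      ref:
        apiVersion: v1
        kind: Service
        name: display-deadlettersink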
Using the curl Pod again we can send this CloudEvent to the broker.
kubectl exec -ti curl -- curl -v http://demo-rb-broker.default.svc.cluster.local/ \
-X POST \
-H "Ce-Id: 1234-abcd" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: demo.type1" \
-H "Ce-Source: curl" \
-H "Content-Type: application/json" \
-d '{"test 2":"message for display target"}'
The target display Pod will show the delivered event.
kubectl logs -l app=display-target --tail 100
☁️ cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: demo.type1
  source: curl
  id: 1234-abcd
  datacontenttype: application/json
Extensions,
  triggermeshbackendid: 1666613846441-0
Data,
  {
    "test 2": "message for display target"
  }
To simulate a target failure we will delete the target display service, making the Trigger point to a non-existing endpoint:
kubectl delete -f https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/common/display-target.yaml
Any event that passes the filter will be sent to the target and, upon failure, will be delivered to the DLS.
kubectl exec -ti curl -- curl -v http://demo-rb-broker.default.svc.cluster.local/ \
-X POST \
-H "Ce-Id: 1234-abcd" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: demo.type1" \
-H "Ce-Source: curl" \
-H "Content-Type: application/json" \
-d '{"test 3":"not delivered, will be sent to DLS"}'
kubectl logs -l app=display-deadlettersink --tail 100
☁️ cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: demo.type1
  source: curl
  id: 1234-abcd
  datacontenttype: application/json
Extensions,
  triggermeshbackendid: 1666613846441-0
Data,
  {
    "test 3": "not delivered, will be sent to DLS"
  }
Clean Up
To clean up the getting started guide, delete each of the created assets:
# Removal of display-target not in this list, since it was deleted previously.
kubectl delete -f \
https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/getting-started-redis/trigger.yaml,\
https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/common/display-deadlettersink.yaml,\
https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/getting-started-redis/broker.yaml,\
https://raw.githubusercontent.com/triggermesh/triggermesh-core/main/docs/assets/manifests/common/curl.yaml