
Amazon S3 target

Sends events to Amazon S3.

With tmctl:

tmctl create target awss3 --arn <arn> --auth.credentials.accessKeyID <access key> --auth.credentials.secretAccessKey <secret key>

On Kubernetes:


apiVersion: v1
kind: Secret
metadata:
  name: aws
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "<AWS Access Key ID>"
  AWS_SECRET_ACCESS_KEY: "<AWS Secret Access Key>"


apiVersion: targets.triggermesh.io/v1alpha1
kind: AWSS3Target
metadata:
  name: triggermesh-aws-s3-test
spec:
  arn: arn:aws:s3:::bucket
  auth:
    credentials:
      accessKeyID:
        valueFromSecret:
          name: aws
          key: AWS_ACCESS_KEY_ID
      secretAccessKey:
        valueFromSecret:
          name: aws
          key: AWS_SECRET_ACCESS_KEY

Alternatively, you can use an IAM role for authentication instead of an access key and secret. This is supported on Amazon EKS only:

  auth:
    iamrole: arn:aws:iam::123456789012:role/foo

To set up an IAM role for service accounts, refer to the official AWS documentation.

An optional toggle flag indicates whether the full CloudEvent should be stored in the S3 bucket. By default this is disabled, meaning only the event payload is stored.

The target accepts events of any type. Events of type io.triggermesh.awss3.object.put are treated specially: their payload body is stored regardless of the Discard CloudEvent context attributes setting.

The Amazon S3 bucket key used to store the event is taken from the ce-subject attribute. If ce-subject is not set, the default key is built as ce-type/ce-source/ce-time.
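The key-selection rule above can be sketched as follows. This is a minimal illustration of the documented behavior, not the target's actual implementation; the function name is an assumption.

```python
# Illustrative sketch of the documented S3 key scheme: use ce-subject if
# present, otherwise join ce-type, ce-source and ce-time with slashes.
def default_s3_key(attributes: dict) -> str:
    """Return the bucket key for a CloudEvent's context attributes."""
    subject = attributes.get("subject")
    if subject:
        return subject
    return "/".join(attributes[a] for a in ("type", "source", "time"))

key = default_s3_key({
    "type": "io.triggermesh.awss3.object.put",
    "source": "my.source",
    "time": "2023-01-01T00:00:00Z",
})
print(key)  # io.triggermesh.awss3.object.put/my.source/2023-01-01T00:00:00Z
```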

Attributes for the put operation are:

  • type: io.triggermesh.awss3.object.put
  • subject: string, the key to use within the bucket assigned to the target
  • data: the payload to store

Responds with events with the following attributes:

  • type
  • source: arn:aws:s3:..., the S3 bucket ARN as configured for the target
  • data: a JSON response from the target invocation containing the ETag associated with the request

See the Kubernetes object reference for more details.


Prerequisite(s):

  • AWS API key and secret
  • ARN for the S3 bucket to store the event

The ARN for the S3 bucket must include the account number and region of a pre-defined access point.
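The difference between the two ARN shapes can be sketched by splitting on the ARN's colon-separated fields: a plain bucket ARN leaves the region and account fields empty, while an access-point ARN carries both. The access-point ARN below is a hypothetical example and the function name is an assumption.

```python
# Sketch: split an AWS ARN into its six colon-separated fields.
def parse_arn(arn: str) -> dict:
    parts = arn.split(":", 5)
    keys = ("prefix", "partition", "service", "region", "account", "resource")
    return dict(zip(keys, parts))

bucket = parse_arn("arn:aws:s3:::bucket")
access_point = parse_arn("arn:aws:s3:us-west-2:123456789012:accesspoint/example")
print(bucket["region"], bucket["account"])          # both empty strings
print(access_point["region"], access_point["account"])
```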

For more information about using Amazon S3, please refer to the AWS documentation.