Spring Cloud AWS simplifies using AWS managed services in Spring and Spring Boot applications. It offers a convenient way to interact with AWS-provided services using well-known Spring idioms and APIs.
3.3.0-RC1
Spring Cloud AWS is released under the non-restrictive Apache 2.0 license. If you would like to contribute to this section of the documentation, or if you find an error, please find the source code and issue trackers for the project at GitHub.
1. Using Amazon Web Services
AWS provides a Java SDK to issue requests for all the services provided by the Amazon Web Services platform. While the SDK offers all functionality available on AWS, there is a considerable amount of low-level code needed to use it in a Spring idiomatic way. Spring Cloud AWS provides application developers with ready-to-use Spring-based modules to consume the most popular AWS services and avoid low-level code as much as possible.
Thanks to Spring Cloud AWS modularity, you can include only the dependencies relevant to the particular AWS service you want to integrate with.
It also simplifies creating any non-integrated AWS SDK client by auto-configuring region and credentials providers.
That being said, it is a perfectly valid option to use the AWS SDK without Spring Cloud AWS.
Note that Spring provides support for other AWS services in separate projects.
2. Getting Started
This section describes how to get up to speed with Spring Cloud AWS libraries.
2.1. Bill of Materials
The Spring Cloud AWS Bill of Materials (BOM) declares the recommended versions of all the dependencies used by a given release of Spring Cloud AWS. Using the BOM from your application’s build script avoids the need for you to specify and maintain the dependency versions yourself. Instead, the version of the BOM you’re using determines the utilised dependency versions. It also ensures that you’re using supported and tested versions of the dependencies by default, unless you choose to override them.
If you’re a Maven user, you can use the BOM by adding the following to your pom.xml file:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-dependencies</artifactId>
<version>{project-version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
Gradle users can also use the Spring Cloud AWS BOM by leveraging Gradle (5.0+) native support for declaring dependency constraints using a Maven BOM. This is implemented by adding a 'platform' dependency handler method to the dependencies section of your Gradle build script. As shown in the snippet below this can then be followed by version-less declarations of the Starter Dependencies for the one or more framework modules you wish to use, e.g. SQS.
dependencies {
implementation platform("io.awspring.cloud:spring-cloud-aws-dependencies:${springCloudAwsVersion}")
// Replace the following with the starter dependencies of specific modules you wish to use
implementation 'io.awspring.cloud:spring-cloud-aws-starter-sqs'
}
2.2. Starter Dependencies
Spring Cloud AWS offers starter dependencies through Maven to easily depend on different modules of the library. Each starter contains all the dependencies and transitive dependencies needed to begin using their corresponding Spring Cloud AWS module.
For example, if you wish to write a Spring application with S3, you would include the spring-cloud-aws-starter-s3 dependency in your project. You do not need to include the underlying spring-cloud-aws-s3 dependency, because the starter dependency includes it.
A summary of these artifacts is provided below.
Spring Cloud AWS Starter | Description | Maven Artifact Name |
---|---|---|
Core | Automatically configures authentication and region selection | io.awspring.cloud:spring-cloud-aws-starter |
DynamoDB | Provides integrations with DynamoDB | io.awspring.cloud:spring-cloud-aws-starter-dynamodb |
S3 | Provides integrations with S3 | io.awspring.cloud:spring-cloud-aws-starter-s3 |
SES | Provides integrations with SES | io.awspring.cloud:spring-cloud-aws-starter-ses |
SNS | Provides integrations with SNS | io.awspring.cloud:spring-cloud-aws-starter-sns |
SQS | Provides integrations with SQS | io.awspring.cloud:spring-cloud-aws-starter-sqs |
IMDS | Automatically loads EC2 instance metadata when running within an EC2-based environment | io.awspring.cloud:spring-cloud-aws-starter-imds |
Parameter Store | Provides integrations with AWS Parameter Store | io.awspring.cloud:spring-cloud-aws-starter-parameter-store |
Secrets Manager | Provides integrations with AWS Secrets Manager | io.awspring.cloud:spring-cloud-aws-starter-secrets-manager |
2.3. Choosing AWS SDK version
The AWS SDK is released more frequently than Spring Cloud AWS. If you need to use a newer version of the SDK than the one configured by Spring Cloud AWS, add the SDK BOM to the dependency management section making sure it is declared before any other BOM dependency that configures AWS SDK dependencies.
<dependencyManagement>
<dependencies>
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>bom</artifactId>
<version>${aws-java-sdk.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
3. Spring Cloud AWS Core
Each Spring Cloud AWS module uses AwsCredentialsProvider and AwsRegionProvider to get the AWS region and access credentials.
Spring Cloud AWS provides a Spring Boot starter to auto-configure the core components.
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-starter</artifactId>
</dependency>
3.1. Credentials
software.amazon.awssdk.auth.credentials.AwsCredentialsProvider is a functional interface that returns the credentials to authenticate and authorize calls to AWS services.
public interface AwsCredentialsProvider {
AwsCredentials resolveCredentials();
}
There are 3 ways in which the AwsCredentialsProvider in Spring Cloud AWS can be configured:
- DefaultCredentialsProvider
- StsWebIdentityTokenFileCredentialsProvider - recommended for EKS
- Custom AwsCredentialsProvider
If you are having problems with configuring credentials, consider enabling debug logging for more info:
logging.level.io.awspring.cloud=debug
3.1.1. DefaultCredentialsProvider
By default, the Spring Cloud AWS starter auto-configures a DefaultCredentialsProvider, which looks for AWS credentials in this order:
- Java system properties - aws.accessKeyId and aws.secretAccessKey
- Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
- Web identity token credentials from system properties or environment variables
- Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI
- Credentials delivered through the Amazon EC2 container service if the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable is set and the security manager has permission to access the variable
- Instance profile credentials delivered through the Amazon EC2 metadata service
If it does not serve your project needs, this behavior can be changed by setting additional properties:
property | example | description |
---|---|---|
spring.cloud.aws.credentials.access-key | AKIAIOSFODNN7EXAMPLE | The access key to be used with a static provider |
spring.cloud.aws.credentials.secret-key | wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY | The secret key to be used with a static provider |
spring.cloud.aws.credentials.instance-profile | true | Configures an InstanceProfileCredentialsProvider with no further configuration |
spring.cloud.aws.credentials.profile.name | default | The name of a configuration profile in the specified configuration file |
spring.cloud.aws.credentials.profile.path | | The file path where the profile configuration file is located. Defaults to ~/.aws/credentials |
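For example, a static credentials provider can be configured with just the two static-provider properties from the table above (the key values are the placeholders shown in the table):
spring.cloud.aws.credentials.access-key=AKIAIOSFODNN7EXAMPLE
spring.cloud.aws.credentials.secret-key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY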
3.1.2. StsWebIdentityTokenFileCredentialsProvider
The StsWebIdentityTokenFileCredentialsProvider allows your application to assume an AWS IAM Role using a web identity token file, which is especially useful in Kubernetes and AWS EKS environments.
Prerequisites
- Create a role that you want to assume.
- Create a web identity token file for your application.
In EKS, please follow this guide to set up service accounts: docs.aws.amazon.com/eks/latest/userguide/pod-configuration.html
The StsWebIdentityTokenFileCredentialsProvider support is optional, so you need to include the following Maven dependency:
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>sts</artifactId>
</dependency>
Configuring
In EKS, no additional configuration is required as the service account already configures the correct environment variables; however, they can be overridden.
The STS credentials configuration supports the following properties:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.aws.credentials.sts.role-arn | ARN of IAM role associated with STS. | No | |
spring.cloud.aws.credentials.sts.web-identity-token-file | Absolute path to the web identity token file that will be used by credentials provider. | No | |
spring.cloud.aws.credentials.sts.async-credentials-update | Enables provider to asynchronously fetch credentials in the background. | No | |
spring.cloud.aws.credentials.sts.role-session-name | Role session name that will be used by credentials provider. | No | |
3.1.3. Custom AwsCredentialsProvider
It is also possible to configure a custom AwsCredentialsProvider bean, which will prevent Spring Cloud AWS from auto-configuring a credentials provider:
@Configuration
class CustomCredentialsProviderConfiguration {
@Bean
public AwsCredentialsProvider customAwsCredentialsProvider() {
return new CustomAWSCredentialsProvider();
}
}
3.2. Region
software.amazon.awssdk.regions.providers.AwsRegionProvider is a functional interface that returns the region AWS clients issue requests to.
public interface AwsRegionProvider {
Region getRegion();
}
By default, the Spring Cloud AWS starter auto-configures a DefaultAwsRegionProviderChain, which resolves the AWS region in this order:
- Check the aws.region system property for the region.
- Check the AWS_REGION environment variable for the region.
- Check the {user.home}/.aws/credentials and {user.home}/.aws/config files for the region.
- If running in EC2, check the EC2 metadata service for the region.
If it does not serve your project needs, this behavior can be changed by setting additional properties:
property | example | description |
---|---|---|
spring.cloud.aws.region.static | eu-west-1 | A static value for the region used by auto-configured AWS clients |
spring.cloud.aws.region.instance-profile | true | Configures an InstanceProfileRegionProvider with no further configuration |
spring.cloud.aws.region.profile.name | default | The name of a configuration profile in the specified configuration file |
spring.cloud.aws.region.profile.path | | The file path where the profile configuration file is located. Defaults to ~/.aws/credentials |
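For example, to pin all auto-configured clients to a single region with the static property from the table above:
spring.cloud.aws.region.static=eu-west-1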
It is also possible to configure a custom AwsRegionProvider bean, which will prevent Spring Cloud AWS from auto-configuring a region provider:
@Configuration
class CustomRegionProviderConfiguration {
@Bean
public AwsRegionProvider customRegionProvider() {
return new CustomRegionProvider();
}
}
3.3. Endpoint
To simplify using services with AWS compatible APIs, or running applications against LocalStack, it is possible to configure an endpoint set on all auto-configured AWS clients:
property | example | description |
---|---|---|
spring.cloud.aws.endpoint | http://localhost:4566 | Endpoint URL applied to auto-configured AWS clients |
3.4. Customizing AWS Clients
Properties cover the most common configuration needs. When more advanced configuration is required, Spring Cloud AWS offers a set of customizer interfaces that can be implemented to customize AWS clients.
There are two types of AWS clients - synchronous and asynchronous. Each Spring Cloud AWS integration uses one or the other type:
client type | integrations |
---|---|
synchronous | DynamoDB, SES, SNS, Parameter Store, Secrets Manager, S3 |
asynchronous | SQS, CloudWatch |
To customize every synchronous client, provide a bean of type AwsSyncClientCustomizer. For example:
@Bean
AwsSyncClientCustomizer awsSyncClientCustomizer() {
return builder -> {
builder.httpClient(ApacheHttpClient.builder().connectionTimeout(Duration.ofSeconds(1)).build());
};
}
To customize every asynchronous client, provide a bean of type AwsAsyncClientCustomizer. For example:
@Bean
AwsAsyncClientCustomizer awsAsyncClientCustomizer() {
return builder -> {
builder.httpClient(NettyNioAsyncHttpClient.builder().connectionTimeout(Duration.ofSeconds(1)).build());
};
}
There can be multiple customizer beans present in a single application context and all of them will be used to configure AWS clients. If the order of customizers matters, use the @Order annotation on the customizer beans.
Client-specific customizations can be applied through client-specific customizer interfaces (for example S3ClientCustomizer for S3). See the integrations documentation for details.
3.5. GraalVM Native Image
Since version 3.3.0 the framework provides experimental support for GraalVM Native Image builds.
Known issues are:
- in the DynamoDB integration, StaticTableSchema must be used instead of DynamicTableSchema (see github.com/aws/aws-sdk-java-v2/issues/2445)
- in the S3 integration, when working with the CRT client, the following guide must be followed: github.com/awslabs/aws-crt-java?tab=readme-ov-file#graalvm-support
4. DynamoDb Integration
DynamoDb is a fully managed, serverless, key/value NoSQL database designed for high performance. A Spring Boot starter is provided to auto-configure DynamoDb integration beans.
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-starter-dynamodb</artifactId>
</dependency>
The DynamoDb integration only provides autoconfiguration and a simple DynamoDbOperations abstraction which can be used to communicate with DynamoDb, build repositories, and so on.
4.1. DynamoDbOperations
The starter automatically configures and registers a DynamoDbOperations bean providing higher level abstractions for working with DynamoDb.
DynamoDbTemplate - the default DynamoDbOperations implementation - is built on top of DynamoDbEnhancedClient and uses annotations provided by AWS.
The list of supported annotations can be found here.
DynamoDbEnhancedClient supports a programming model similar to JPA - where a class is turned into an entity through applying certain annotations:
@DynamoDbBean
public class Person {
private UUID id;
private String name;
private String lastName;
@DynamoDbPartitionKey
public UUID getId() {
return id;
}
...
}
DynamoDbTemplate provides methods to perform typical CRUD operations on such entities, plus it adds convenience methods for querying DynamoDb:
...
@Autowired
DynamoDbTemplate dynamoDbTemplate;
...
Person person = new Person(...)
dynamoDbTemplate.save(person);
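Continuing the snippet above, a minimal sketch of reading the entity back and deleting it - assuming the load(Key, Class) and delete(Object) methods of DynamoDbOperations and that the partition key is the person's id:
// load the entity by its partition key (the id annotated with @DynamoDbPartitionKey above)
Person loaded = dynamoDbTemplate.load(Key.builder().partitionValue(person.getId().toString()).build(), Person.class);
// remove the entity again
dynamoDbTemplate.delete(person);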
4.1.1. Resolving Table Name
To resolve a table name for an entity, DynamoDbTemplate uses a bean of type DynamoDbTableNameResolver. The default implementation turns an entity class name into its snake case representation, with the possibility of using an optional table name prefix and suffix. Specify the spring.cloud.aws.dynamodb.table-prefix and spring.cloud.aws.dynamodb.table-suffix properties to provide a table name prefix and suffix. The prefix is prepended to the table name and the suffix is appended to it. For example, if spring.cloud.aws.dynamodb.table-prefix is configured as foo_, spring.cloud.aws.dynamodb.table-suffix is configured as _foo2, and the entity class is Person, then the default implementation resolves the table name as foo_person_foo2. You can configure both properties, only one of them, or none. If you specify neither spring.cloud.aws.dynamodb.table-prefix nor spring.cloud.aws.dynamodb.table-suffix, the table name is resolved as person.
To use a custom implementation, declare a bean of type DynamoDbTableNameResolver and it will get injected into DynamoDbTemplate automatically during auto-configuration.
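A minimal sketch of such a custom resolver, assuming the single generic resolve(Class) method on DynamoDbTableNameResolver and a hypothetical my_app_ prefix:
@Configuration
class CustomTableNameResolverConfiguration {
    @Bean
    DynamoDbTableNameResolver customTableNameResolver() {
        // resolve every entity class to a lower-cased table name with a hypothetical "my_app_" prefix
        return new DynamoDbTableNameResolver() {
            @Override
            public <T> String resolve(Class<T> entityClass) {
                return "my_app_" + entityClass.getSimpleName().toLowerCase();
            }
        };
    }
}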
4.1.2. Resolving Table Schema
To resolve a table schema for an entity, DynamoDbTemplate uses a bean of type DynamoDbTableSchemaResolver. The default implementation caches TableSchema objects in an internal map.
To use a custom implementation, declare a bean of type DynamoDbTableSchemaResolver and it will get injected into DynamoDbTemplate automatically during auto-configuration.
To register a custom table schema for a DynamoDB entity, a bean of type TableSchema should be created:
@Configuration
public class MyTableSchemaConfiguration {
@Bean
public TableSchema<MyEntity> myEntityTableSchema() {
// create and return a TableSchema object for the MyEntity class
}
}
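For instance, a minimal sketch of that bean body, assuming MyEntity is a @DynamoDbBean-annotated class, is to derive the schema from the bean annotations with the enhanced client's factory method:
@Bean
public TableSchema<MyEntity> myEntityTableSchema() {
    // derive the schema from the @DynamoDbBean annotations on MyEntity
    return TableSchema.fromBean(MyEntity.class);
}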
Because of a classloader-related issue in the AWS SDK DynamoDB Enhanced Client, to use the Spring Cloud AWS DynamoDB module together with Spring Boot DevTools you must create a custom table schema resolver and define the schema using StaticTableSchema.
4.2. Using DynamoDb Client
Autoconfiguration automatically configures a DynamoDbClient, which can be used for low-level API calls, and a DynamoDbEnhancedClient if DynamoDbOperations is not enough.
If the autoconfigured DynamoDbClient or DynamoDbEnhancedClient bean configuration does not meet your needs, it can be replaced by creating your own custom bean.
public class DynamoDbService {
private final DynamoDbClient dynamoDbClient;
public DynamoDbService(DynamoDbClient dynamoDbClient) {
this.dynamoDbClient = dynamoDbClient;
}
void deletePerson(String uuid) {
dynamoDbClient.deleteItem(DeleteItemRequest.builder().key(Collections.singletonMap("uuid", AttributeValue.builder().s(uuid).build())).build());
}
}
4.3. Using DynamoDB Accelerator
The starter automatically configures and registers a software.amazon.dax.ClusterDaxClient bean if it finds the following dependency added to the project:
<dependency>
<groupId>software.amazon.dax</groupId>
<artifactId>amazon-dax-client</artifactId>
</dependency>
4.4. Configuration
The Spring Boot Starter for DynamoDb provides the following configuration options:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.aws.dynamodb.enabled | Enables the DynamoDb integration. | No | true |
spring.cloud.aws.dynamodb.endpoint | Configures endpoint used by DynamoDbClient. | No | |
spring.cloud.aws.dynamodb.region | Configures region used by DynamoDbClient. | No | |
spring.cloud.aws.dynamodb.table-prefix | Table name prefix used by the default DynamoDbTableNameResolver. | No | |
spring.cloud.aws.dynamodb.table-suffix | Table name suffix used by the default DynamoDbTableNameResolver. | No | |
spring.cloud.aws.dynamodb.dax.idle-timeout-millis | Timeout for idle connections with the DAX cluster. | No | |
spring.cloud.aws.dynamodb.dax.url | DAX cluster endpoint. | Yes | |
spring.cloud.aws.dynamodb.dax.connection-ttl-millis | Connection time to live. | No | |
spring.cloud.aws.dynamodb.dax.connect-timeout-millis | Connection timeout. | No | |
spring.cloud.aws.dynamodb.dax.request-timeout-millis | Request timeout for connections with the DAX cluster. | No | |
spring.cloud.aws.dynamodb.dax.write-retries | Number of times to retry writes, initial try is not counted. | No | |
spring.cloud.aws.dynamodb.dax.read-retries | Number of times to retry reads, initial try is not counted. | No | |
spring.cloud.aws.dynamodb.dax.cluster-update-interval-millis | Interval between polling of cluster members for membership changes. | No | |
spring.cloud.aws.dynamodb.dax.endpoint-refresh-timeout-millis | Timeout for endpoint refresh. | No | |
spring.cloud.aws.dynamodb.dax.max-concurrency | Maximum number of concurrent requests. | No | 1000 |
spring.cloud.aws.dynamodb.dax.max-pending-connection-acquires | Maximum number of pending connections to acquire. | No | 10000 |
spring.cloud.aws.dynamodb.dax.skip-host-name-verification | Skips hostname verification in url. | No | |
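For example, the table name prefix and suffix described in Resolving Table Name are plain configuration properties:
spring.cloud.aws.dynamodb.table-prefix=foo_
spring.cloud.aws.dynamodb.table-suffix=_foo2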
4.5. Client Customization
DynamoDbClient can be further customized by providing a bean of type DynamoDbClientCustomizer:
@Bean
DynamoDbClientCustomizer customizer() {
return builder -> {
builder.overrideConfiguration(builder.overrideConfiguration().copy(c -> {
c.apiCallTimeout(Duration.ofMillis(1500));
}));
};
}
DynamoDbClientCustomizer is a functional interface that enables configuring DynamoDbClientBuilder before the DynamoDbClient is built in auto-configuration.
There can be multiple DynamoDbClientCustomizer beans present in a single application context. The @Order(..) annotation can be used to define the order of execution.
Note that DynamoDbClientCustomizer beans are applied after AwsSyncClientCustomizer beans and can therefore overwrite previously set configurations.
4.6. IAM Permissions
Because the required permissions depend on how you use the DynamoDb integration, and a least-privilege model should be followed, providing a fixed list of IAM policies here would be pointless. To check which IAM actions DynamoDb uses and which ones your application needs, please check IAM policies.
5. S3 Integration
S3 allows storing files in the cloud. A Spring Boot starter is provided to auto-configure the various S3 integration related components.
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-starter-s3</artifactId>
</dependency>
5.1. Using S3 client
The starter automatically configures and registers an S3Client bean in the Spring application context. The S3Client bean can be used to perform operations on S3 buckets and objects.
@Component
class S3ClientSample {
private final S3Client s3Client;
S3ClientSample(S3Client s3Client) {
this.s3Client = s3Client;
}
void readFile() throws IOException {
ResponseInputStream<GetObjectResponse> response = s3Client.getObject(
request -> request.bucket("bucket-name").key("file-name.txt"));
String fileContent = StreamUtils.copyToString(response, StandardCharsets.UTF_8);
System.out.println(fileContent);
}
}
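Uploading works the same way through the plain SDK API; a minimal sketch (bucket and key are placeholders, RequestBody is software.amazon.awssdk.core.sync.RequestBody):
void writeFile() {
    // upload a small String payload to the given bucket and key
    s3Client.putObject(request -> request.bucket("bucket-name").key("file-name.txt"),
            RequestBody.fromString("file content"));
}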
5.2. Using S3TransferManager and CRT-based S3 Client
AWS launched a high-level file transfer utility called Transfer Manager, and a CRT-based S3 client.
The starter automatically configures and registers a software.amazon.awssdk.transfer.s3.S3TransferManager bean if the following dependency is added to the project:
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>s3-transfer-manager</artifactId>
</dependency>
Transfer Manager works best with the CRT-based S3 client. To auto-configure a CRT-based S3AsyncClient, add the following dependency to your project:
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>aws-crt-client</artifactId>
</dependency>
When no S3AsyncClient bean is created, the default S3AsyncClient created through the AWS SDK is used. To benefit from maximum throughput, multipart upload/download, and resumable file upload, consider using the CRT-based S3AsyncClient.
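A minimal sketch of an upload through the injected S3TransferManager, assuming the consumer-style builder overloads of the AWS SDK (bucket, key, and local path are placeholders):
FileUpload upload = transferManager.uploadFile(request -> request
        .source(Paths.get("/tmp/local-file.txt"))
        .putObjectRequest(put -> put.bucket("bucket-name").key("remote-file.txt")));
// wait for the transfer to complete
upload.completionFuture().join();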
5.3. S3 Objects as Spring Resources
Spring Resources are an abstraction for a number of low-level resources, such as file system files, classpath files, servlet context-relative files, etc.
Spring Cloud AWS adds a new resource type: an S3Resource object.
The Spring Resource Abstraction for S3 allows S3 objects to be accessed by their S3 URL using the @Value annotation:
@Value("s3://[S3_BUCKET_NAME]/[FILE_NAME]")
private Resource s3Resource;
…or the Spring application context
SpringApplication.run(...).getResource("s3://[S3_BUCKET_NAME]/[FILE_NAME]");
This creates a Resource object that can be used to read the file, among other possible operations.
It is also possible to write to a Resource, although a WritableResource is required.
@Value("s3://[S3_BUCKET_NAME]/[FILE_NAME]")
private Resource s3Resource;
...
try (OutputStream os = ((WritableResource) s3Resource).getOutputStream()) {
os.write("content".getBytes());
}
To work with the Resource as an S3 resource, cast it to io.awspring.cloud.s3.S3Resource.
Using S3Resource directly lets you set the S3 object metadata.
@Value("s3://[S3_BUCKET_NAME]/[FILE_NAME]")
private Resource s3Resource;
...
ObjectMetadata objectMetadata = ObjectMetadata.builder()
.contentType("application/json")
.serverSideEncryption(ServerSideEncryption.AES256)
.build();
s3Resource.setObjectMetadata(objectMetadata);
try (OutputStream outputStream = s3Resource.getOutputStream()) {
outputStream.write("content".getBytes(StandardCharsets.UTF_8));
}
5.4. S3 Client Side Encryption
AWS offers an encryption library integrated into the S3 client, called S3EncryptionClient (docs.aws.amazon.com/amazon-s3-encryption-client/latest/developerguide/what-is-s3-encryption-client.html). With the encryption client, your files are encrypted before they are sent to the S3 bucket.
To auto-configure the encryption client, add the following dependency.
<dependency>
<groupId>software.amazon.encryption.s3</groupId>
<artifactId>amazon-s3-encryption-client-java</artifactId>
</dependency>
Three types of encryption are supported.
- To configure encryption via a KMS key, set spring.cloud.aws.s3.encryption.keyId to the KMS key ARN, and this key will be used to encrypt your files. The following dependency is also required.
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>kms</artifactId>
<optional>true</optional>
</dependency>
- Asymmetric encryption is possible via RSA. To enable it, implement io.awspring.cloud.autoconfigure.s3.S3RsaProvider. Note: you will have to manage storing the private and public keys yourself, otherwise you won't be able to decrypt the data later. Example of a simple S3RsaProvider:
public class MyRsaProvider implements S3RsaProvider {
@Override
public KeyPair generateKeyPair() {
try {
// fetch key pair from secure location such as Secrets Manager
// access to KeyPair is required to decrypt objects when fetching, so it is advised to keep them stored securely
}
catch (Exception e) {
return null;
}
}
}
- The last option is a symmetric algorithm, which is possible via io.awspring.cloud.autoconfigure.s3.S3AesProvider. Note: you will have to manage storing the private key yourself. Example of a simple S3AesProvider:
public class MyAesProvider implements S3AesProvider {
@Override
public SecretKey generateSecretKey() {
try {
// fetch secret key from secure location such as Secrets Manager
// access to secret key is required to decrypt objects when fetching, so it is advised to keep them stored securely
}
catch (Exception e) {
return null;
}
}
}
5.4.1. S3 Output Stream
Under the hood, S3Resource uses an io.awspring.cloud.s3.InMemoryBufferingS3OutputStream by default. When data is written to the resource, it gets sent to S3 using a multipart upload.
If a network error occurs during upload, S3Client has a built-in retry mechanism that will retry each failed part. If the upload fails after retries, the multipart upload gets aborted and S3Resource throws io.awspring.cloud.s3.S3Exception.
If the InMemoryBufferingS3OutputStream behavior does not fit your needs, you can use io.awspring.cloud.s3.DiskBufferingS3OutputStream by defining a bean of type DiskBufferingS3OutputStreamProvider, which will override the default output stream provider.
With DiskBufferingS3OutputStream, when data is written to the resource, it is first stored on disk in a tmp directory of the OS. Once the stream gets closed, the file gets uploaded using the S3Client#putObject method.
If a network error occurs during upload, S3Client has a built-in retry mechanism. If the upload fails after retries, S3Resource throws an io.awspring.cloud.s3.UploadFailedException containing the file location in a temporary directory in the file system.
try (OutputStream outputStream = s3Resource.getOutputStream()) {
outputStream.write("content".getBytes(StandardCharsets.UTF_8));
} catch (UploadFailedException e) {
// e.getPath contains a file location in temporary folder
}
If you are using the S3TransferManager, the default implementation will switch to io.awspring.cloud.s3.TransferManagerS3OutputStream. This OutputStream also uses a temporary file written to disk before uploading to S3, but it may be faster as it uses multipart upload under the hood.
5.4.2. Searching resources
The Spring resource loader also supports collecting resources based on an Ant-style path specification. Spring Cloud AWS offers the same support to resolve resources within a bucket and even across buckets. The actual resource loader needs to be wrapped with the Spring Cloud AWS one in order to search for S3 buckets; for non-S3 locations the resource loader falls back to the original one. The next example shows resource resolution using different patterns.
public class SimpleResourceLoadingBean {
    private ResourcePatternResolver resourcePatternResolver;
    @Autowired
    public void setupResolver(S3Client s3Client, ApplicationContext applicationContext) {
        this.resourcePatternResolver = new S3PathMatchingResourcePatternResolver(s3Client, applicationContext);
    }
    public void resolveAndLoad() throws IOException {
        Resource[] allTxtFilesInFolder = this.resourcePatternResolver.getResources("s3://bucket/name/*.txt");
        Resource[] allTxtFilesInBucket = this.resourcePatternResolver.getResources("s3://bucket/**/*.txt");
        Resource[] allTxtFilesGlobally = this.resourcePatternResolver.getResources("s3://**/*.txt");
    }
}
Resolving resources throughout all buckets can be very time consuming depending on the number of buckets a user owns.
5.5. Using S3 Access grants
Sometimes access control over S3 bucket contents needs to be fine-grained. Since IAM policies and S3 policies are limited to 10 KB in size, S3 Access Grants solves this by allowing fine-grained access control over the content of a bucket.
To use S3 Access Grants out of the box with S3Client and S3Template, add the following plugin:
<dependency>
<groupId>software.amazon.s3.accessgrants</groupId>
<artifactId>aws-s3-accessgrants-java-plugin</artifactId>
</dependency>
5.6. Using S3Template
Spring Cloud AWS provides a higher-level abstraction on top of S3Client, with methods for the most common use cases when working with S3.
On top of self-explanatory methods for creating and deleting buckets, S3Template provides simple methods for uploading and downloading files:
@Autowired
private S3Template s3Template;
InputStream is = ...
// uploading file without metadata
s3Template.upload(BUCKET, "file.txt", is);
// uploading file with metadata
s3Template.upload(BUCKET, "file.txt", is, ObjectMetadata.builder().contentType("text/plain").build());
Another feature of S3Template is the ability to generate signed URLs for getting/putting S3 objects in a single method call.
URL signedGetUrl = s3Template.createSignedGetURL("bucket_name", "file.txt", Duration.ofMinutes(5));
S3Template also allows storing and retrieving Java objects.
Person p = new Person("John", "Doe");
s3Template.store(BUCKET, "person.json", p);
Person loadedPerson = s3Template.read(BUCKET, "person.json", Person.class);
By default, if Jackson is on the classpath, S3Template uses an ObjectMapper-based Jackson2JsonS3ObjectConverter to convert from an S3 object to a Java object and vice versa.
This behavior can be overwritten by providing a custom bean of type S3ObjectConverter.
5.7. Determining S3 Objects Content Type
All S3 objects stored through S3Template, S3Resource, or S3OutputStream automatically get a contentType property set on the S3 object metadata, based on the S3 object key (file name).
By default, PropertiesS3ObjectContentTypeResolver - a component supporting over 800 file extensions - is responsible for content type resolution.
If this content type resolution does not meet your needs, you can provide a custom bean of type S3ObjectContentTypeResolver which will be automatically used in all components responsible for uploading files.
5.8. Configuration
The Spring Boot Starter for S3 provides the following configuration options:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.aws.s3.enabled | Enables the S3 integration. | No | true |
spring.cloud.aws.s3.endpoint | Configures endpoint used by S3Client. | No | |
spring.cloud.aws.s3.region | Configures region used by S3Client. | No | |
spring.cloud.aws.s3.accelerate-mode-enabled | Option to enable using the accelerate endpoint when accessing S3. Accelerate endpoints allow faster transfer of objects by using Amazon CloudFront’s globally distributed edge locations. | No | |
spring.cloud.aws.s3.checksum-validation-enabled | Option to disable doing a validation of the checksum of an object stored in S3. | No | |
spring.cloud.aws.s3.chunked-encoding-enabled | Option to enable using chunked encoding when signing the request payload. | No | |
spring.cloud.aws.s3.path-style-access-enabled | Option to enable using path style access for accessing S3 objects instead of DNS style access. DNS style access is preferred as it will result in better load balancing when accessing S3. | No | |
spring.cloud.aws.s3.use-arn-region-enabled | If an S3 resource ARN is passed in as the target of an S3 operation that has a different region to the one the client was configured with, this flag must be set to 'true' to permit the client to make a cross-region call to the region specified in the ARN, otherwise an exception will be thrown. | No | |
spring.cloud.aws.s3.crt.minimum-part-size-in-bytes | Sets the minimum part size for transfer parts. Decreasing the minimum part size causes multipart transfer to be split into a larger number of smaller parts. Setting this value too low has a negative effect on transfer speeds, causing extra latency and network communication for each part. | No | |
spring.cloud.aws.s3.crt.initial-read-buffer-size-in-bytes | Configure the starting buffer size the client will use to buffer the parts downloaded from S3. Maintain a larger window to keep up a high download throughput; parts cannot download in parallel unless the window is large enough to hold multiple parts. Maintain a smaller window to limit the amount of data buffered in memory. | No | |
spring.cloud.aws.s3.crt.target-throughput-in-gbps | The target throughput for transfer requests. Higher value means more S3 connections will be opened. Whether the transfer manager can achieve the configured target throughput depends on various factors such as the network bandwidth of the environment. | No | |
spring.cloud.aws.s3.crt.max-concurrency | Specifies the maximum number of S3 connections that should be established during transfer. | No | |
spring.cloud.aws.s3.transfer-manager.max-depth | Specifies the maximum number of levels of directories to visit. | No | |
spring.cloud.aws.s3.transfer-manager.follow-symbolic-links | Specifies whether to follow symbolic links when traversing the file tree. | No | |
5.9. Client Customization
S3Client can be further customized by providing a bean of type S3ClientCustomizer:
@Bean
S3ClientCustomizer customizer() {
return builder -> {
builder.overrideConfiguration(builder.overrideConfiguration().copy(c -> {
c.apiCallTimeout(Duration.ofMillis(1500));
}));
};
}
S3ClientCustomizer is a functional interface that enables configuring S3ClientBuilder before the S3Client is built in auto-configuration.
There can be multiple S3ClientCustomizer beans present in a single application context. The @Order(..) annotation can be used to define the order of execution.
Note that S3ClientCustomizer beans are applied after AwsSyncClientCustomizer beans and can therefore overwrite previously set configurations.
5.10. Loading External Configuration
Just like Spring Boot supports configuring the application through application.properties stored in the file system, the Spring Cloud AWS S3 integration extends this capability with fetching application configuration from an S3 bucket through the spring.config.import property.
For example, assuming that there is a file config.properties in a bucket named bucket-name, to include it as Spring Boot configuration, add the following property to application.properties or application.yml:
spring.config.import=aws-s3:/bucket-name/config.properties
If a file with the given name does not exist in S3, the application will fail to start. If the file configuration is not required for the application, and it should continue to start up even when the file configuration is missing, add optional before the prefix:
spring.config.import=optional:aws-s3:/bucket-name/config.properties
To load multiple files, separate their names with ;:
spring.config.import=aws-s3:/bucket-name/config.properties;/another-name/config.yml
If some files are required, and other ones are optional, list them as separate entries in the spring.config.import property:
spring.config.import[0]=optional:aws-s3:/bucket-name/config.properties
spring.config.import[1]=aws-s3:/another-name/config.yml
Fetched file configuration can be referenced with @Value, bound to @ConfigurationProperties classes, or referenced in the application.properties file.
JSON, Java Properties, and YAML configuration file formats are supported.
Files resolved with spring.config.import can also be referenced in application.properties.
For example, with a file config.json containing the following JSON structure:
{
"url": "someUrl"
}
a spring.config.import entry is added to application.properties:
spring.config.import=aws-s3:/bucket-name/config.json
File configuration values can be referenced by JSON key names:
@Value("${url}"
private String url;
5.11. Customizing S3Client
To use a custom S3Client in spring.config.import, provide an implementation of BootstrapRegistryInitializer. For example:
package com.app;
public class S3ClientBootstrapConfiguration implements BootstrapRegistryInitializer {
@Override
public void initialize(BootstrapRegistry registry) {
registry.register(S3Client.class, context -> {
AwsCredentialsProvider awsCredentialsProvider = StaticCredentialsProvider.create(AwsBasicCredentials.create("yourAccessKey", "yourSecretKey"));
return S3Client.builder().credentialsProvider(awsCredentialsProvider).region(Region.EU_WEST_2).build();
});
}
}
Note that this class must be listed under the org.springframework.boot.BootstrapRegistryInitializer key in META-INF/spring.factories:
org.springframework.boot.BootstrapRegistryInitializer=com.app.S3ClientBootstrapConfiguration
If you want to use the autoconfigured S3Client but change the underlying SDK client or ClientOverrideConfiguration, you will need to register a bean of type S3ClientCustomizer. Auto-configuration will then configure the S3Client bean with the provided values, for example:
package com.app;
class S3ClientBootstrapConfiguration implements BootstrapRegistryInitializer {
@Override
public void initialize(BootstrapRegistry registry) {
registry.register(S3ClientCustomizer.class, context -> (builder -> {
builder.overrideConfiguration(builder.overrideConfiguration().copy(c -> {
c.apiCallTimeout(Duration.ofMillis(2001));
}));
}));
}
}
5.12. PropertySource Reload
Some applications may need to detect changes on external property sources and update their internal status to reflect the new configuration. The reload feature of Spring Cloud AWS S3 config import integration is able to trigger an application reload when a related file value changes.
By default, this feature is disabled. You can enable it by using the spring.cloud.aws.s3.config.reload.strategy configuration property (for example, in the application.properties file) and adding the following dependencies.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-commons</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-context</artifactId>
</dependency>
The following levels of reload are supported (by setting the spring.cloud.aws.s3.config.reload.strategy property):
- refresh (default): Only configuration beans annotated with @ConfigurationProperties or @RefreshScope are reloaded. This reload level leverages the refresh feature of Spring Cloud Context.
- restart_context: the whole Spring ApplicationContext is gracefully restarted. Beans are recreated with the new configuration. In order for the restart context functionality to work properly you must enable and expose the restart actuator endpoint:
management:
  endpoint:
    restart:
      enabled: true
  endpoints:
    web:
      exposure:
        include: restart
Assuming that the reload feature is enabled with default settings (refresh mode), the following bean is refreshed when the file changes:
@Configuration
@ConfigurationProperties(prefix = "bean")
public class MyConfig {
    private String message = "a message that can be changed live";
    // getter and setters
}
To see that changes effectively happen, you can create another bean that prints the message periodically, as follows:
@Component
public class MyBean {
@Autowired
private MyConfig config;
@Scheduled(fixedDelay = 5000)
public void hello() {
System.out.println("The message is: " + config.getMessage());
}
}
The reload feature periodically re-creates the configuration from the S3 file to see if it has changed.
You can configure the polling period by using the spring.cloud.aws.s3.config.reload.period property (the default value is 1 minute).
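For example, a sketch enabling the default refresh strategy with a shorter polling period (the period uses the standard Spring Boot duration format):
spring.cloud.aws.s3.config.reload.strategy=refresh
spring.cloud.aws.s3.config.reload.period=PT30S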
5.13. Configuration
The Spring Boot Starter for S3 provides the following configuration options:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.aws.s3.config.enabled | Enables the S3 config import integration. | No | true |
spring.cloud.aws.s3.config.reload.strategy | The strategy to use when firing a reload (refresh or restart_context). | No | |
spring.cloud.aws.s3.config.reload.period | The period for verifying changes. | No | 1 minute |
spring.cloud.aws.s3.config.reload.max-wait-for-restart | The maximum time between the detection of changes in property source and the application context restart when the restart_context strategy is used. | No | |
5.14. IAM Permissions
The following IAM permissions are required by Spring Cloud AWS:
Downloading files | s3:GetObject |
Searching files | s3:ListBucket |
Uploading files | s3:PutObject |
Sample IAM policy granting access to the spring-cloud-aws-demo bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::spring-cloud-aws-demo"
},
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::spring-cloud-aws-demo/*"
},
{
"Effect": "Allow",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::spring-cloud-aws-demo/*"
}
]
}
6. SES Integration
Spring has built-in support to send e-mails based on the Java Mail API, avoiding static method calls while using the Java Mail API and thus supporting the testability of an application. Spring Cloud AWS supports Amazon SES as an implementation of the Spring Mail abstraction.
As a result, Spring Cloud AWS users can decide to use the Spring Cloud AWS implementation of the Amazon SES service or use the standard Java Mail API based implementation that sends e-mails via SMTP to Amazon SES.
It is preferred to use the Spring Cloud AWS implementation instead of SMTP, mainly for performance reasons. Spring Cloud AWS uses one API call to send a mail message, while the SMTP protocol makes multiple requests (EHLO, MAIL FROM, RCPT TO, DATA, QUIT) until it sends an e-mail.
A Spring Boot starter is provided to auto-configure SES integration beans.
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-starter-ses</artifactId>
</dependency>
6.1. Sending simple mails
Application developers can inject the MailSender into their application code and directly send simple text based e-mail messages. The sample below demonstrates the creation of a simple mail message.
class MailSendingService {
private final MailSender mailSender;
public MailSendingService(MailSender mailSender) {
this.mailSender = mailSender;
}
public void sendMailMessage() {
SimpleMailMessage simpleMailMessage = new SimpleMailMessage();
simpleMailMessage.setFrom("foo@bar.com");
simpleMailMessage.setTo("bar@baz.com");
simpleMailMessage.setSubject("test subject");
simpleMailMessage.setText("test content");
this.mailSender.send(simpleMailMessage);
}
}
6.2. Sending attachments and/or HTML e-mails
Sending attachments with e-mail or HTML e-mails requires MIME messages to be created and sent. In order to create MIME messages, the Java Mail API dependency and an implementation need to be on the classpath. Spring Cloud AWS will detect the dependency and create an org.springframework.mail.javamail.JavaMailSender implementation that allows creating and building MIME messages and sending them. Dependencies for the Java Mail API and an implementation are the only needed configuration changes, as shown below.
<dependency>
<groupId>jakarta.mail</groupId>
<artifactId>jakarta.mail-api</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.angus</groupId>
<artifactId>jakarta.mail</artifactId>
</dependency>
Even though there is a dependency on the Java Mail API, the Amazon SES API is still used underneath to send mail messages. There is no SMTP setup required on the Amazon AWS side.
Sending the mail requires the application developer to use the JavaMailSender to send an e-mail as shown in the example below.
class MailSendingService {
private final JavaMailSender mailSender;
public MailSendingService(JavaMailSender mailSender) {
this.mailSender = mailSender;
}
public void sendMailMessage() {
this.mailSender.send(new MimeMessagePreparator() {
@Override
public void prepare(MimeMessage mimeMessage) throws Exception {
MimeMessageHelper helper =
new MimeMessageHelper(mimeMessage, true, "UTF-8");
helper.addTo("foo@bar.com");
helper.setFrom("bar@baz.com");
helper.addAttachment("test.txt", ...);
helper.setSubject("test subject with attachment");
helper.setText("mime body", false);
}
});
}
}
6.3. Authenticating e-mails
To avoid any spam attacks on the Amazon SES mail service, applications without production access must verify each e-mail receiver, otherwise the mail sender will throw a software.amazon.awssdk.services.ses.model.MessageRejectedException.
Production access can be requested and removes the need for mail address verification.
6.4. Configuration
The Spring Boot Starter for SES provides the following configuration options:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.aws.ses.enabled | Enables the SES integration. | No | true |
spring.cloud.aws.ses.endpoint | Configures endpoint used by SesClient. | No | |
spring.cloud.aws.ses.region | Configures region used by SesClient. | No | |
spring.cloud.aws.ses.source-arn | Configures source ARN, used only for sending authorization. | No | |
spring.cloud.aws.ses.from-arn | Configures from ARN, used only for sending authorization in the SendRawEmail operation. | No | |
spring.cloud.aws.ses.configuration-set-name | The configuration set name used for every message. | No | |
Amazon SES is not available in all regions of the Amazon Web Services cloud. An application hosted and operated in a region that does not support the mail service will produce an error while using the mail service, so the region must be overridden for the mail sender configuration. The example below shows a typical combination where a region (EU-CENTRAL-1) that does not provide an SES service is overridden so the client uses a valid region (EU-WEST-1).
sourceArn is the ARN of the identity that is associated with the sending authorization policy. For information about when to use this parameter, see the Amazon SES Developer Guide.
fromArn is the ARN of the identity that is associated with the sending authorization policy that permits you to specify a particular "From" address in the header of the raw email. For information about when to use this parameter, see the Amazon SES Developer Guide.
configurationSetName sets the configuration set name on the mail sender level and applies to every mail. For information about when to use this parameter, see Using configuration sets in Amazon SES.
spring.cloud.aws.ses.region=eu-west-1
spring.cloud.aws.ses.source-arn=arn:aws:ses:eu-west-1:123456789012:identity/example.com
spring.cloud.aws.ses.from-arn=arn:aws:ses:eu-west-1:123456789012:identity/example.com
spring.cloud.aws.ses.configuration-set-name=ConfigSet
6.5. Client Customization
SesClient
can be further customized by providing a bean of type SesClientCustomizer
:
@Bean
SesClientCustomizer customizer() {
return builder -> {
builder.overrideConfiguration(builder.overrideConfiguration().copy(c -> {
c.apiCallTimeout(Duration.ofMillis(1500));
}));
};
}
SesClientCustomizer is a functional interface that enables configuring SesClientBuilder before the SesClient is built in auto-configuration.
There can be multiple SesClientCustomizer beans present in a single application context. The @Order(..) annotation can be used to define the order of execution.
Note that SesClientCustomizer beans are applied after AwsSyncClientCustomizer beans and can therefore overwrite previously set configurations.
6.6. IAM Permissions
The following IAM permissions are required by Spring Cloud AWS:
Send e-mail without attachment | ses:SendEmail |
Send e-mail with attachment | ses:SendRawEmail |
Sample IAM policy granting access to SES:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ses:SendEmail",
"ses:SendRawEmail"
],
"Resource": "arn:aws:ses:your:arn"
}
]
}
7. SNS Integration
SNS is a pub/sub messaging service that allows clients to publish notifications to a particular topic. A Spring Boot starter is provided to auto-configure SNS integration beans.
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-starter-sns</artifactId>
</dependency>
7.1. Sending Notifications
7.1.1. SNS Template
The starter automatically configures and registers an SnsTemplate bean providing higher level abstractions for sending SNS notifications.
SnsTemplate implements Spring Messaging abstractions, making it easy to combine with other messaging technologies compatible with Spring Messaging.
It supports sending notifications with a payload of type:
- String - using org.springframework.messaging.converter.StringMessageConverter
- Object - which gets serialized to JSON using org.springframework.messaging.converter.MappingJackson2MessageConverter and Jackson’s com.fasterxml.jackson.databind.ObjectMapper autoconfigured by Spring Boot.
Additionally, it exposes a handful of methods supporting org.springframework.messaging.Message.
class NotificationService {
private final SnsTemplate snsTemplate;
NotificationService(SnsTemplate snsTemplate) {
this.snsTemplate = snsTemplate;
}
void sendNotification() {
// sends String payload
snsTemplate.sendNotification("topic-arn", "payload", "subject");
// sends object serialized to JSON
snsTemplate.sendNotification("topic-arn", new Person("John", "Doe"), "subject");
// sends a Spring Messaging Message
Message<String> message = MessageBuilder.withPayload("payload")
.setHeader("header-name", "header-value")
.build();
snsTemplate.send("topic-arn", message);
}
}
If the autoconfigured converters do not meet your needs, you can provide a custom SnsTemplate bean with a message converter of your choice.
When sending an SNS notification, it is required to provide a topic ARN. Spring Cloud AWS simplifies this and allows providing a topic name instead, provided that a topic with that name has already been created. Otherwise, Spring Cloud AWS will attempt to create a topic with this name on the first call.
The behavior of resolving a topic ARN by a topic name can be altered by providing a custom bean of type io.awspring.cloud.sns.core.TopicArnResolver.
If resolving the topic name through a create-topic call is not possible, you can configure a bean of type io.awspring.cloud.sns.core.TopicsListingTopicArnResolver, as shown below; auto-configuration will then configure SnsTemplate with it.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.services.sns.SnsClient;
import io.awspring.cloud.sns.core.TopicArnResolver;
import io.awspring.cloud.sns.core.TopicsListingTopicArnResolver;

@Configuration
public class TopicArnResolverConfiguration {

    @Bean
    public TopicArnResolver topicArnResolver(SnsClient snsClient) {
        return new TopicsListingTopicArnResolver(snsClient);
    }
}
However, when using the topic ARN in your application, SnsTemplate provides a topicExists method to validate the existence of the SNS topic at application startup.
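A minimal sketch of such a startup check, assuming topicExists returns a boolean (the topic ARN is a placeholder):
@Bean
ApplicationRunner verifyTopic(SnsTemplate snsTemplate) {
    return args -> {
        // fail fast if the topic this application publishes to does not exist
        if (!snsTemplate.topicExists("arn:aws:sns:eu-west-1:123456789012:my-topic")) {
            throw new IllegalStateException("Expected SNS topic is missing");
        }
    };
}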
7.1.2. SNS Operations
Because of Spring Messaging compatibility, SnsTemplate exposes many methods that you may not need if you don’t need the Spring Messaging abstractions.
In such a case, we recommend using SnsOperations - an interface implemented by SnsTemplate - that exposes a convenient method for sending SNS notifications, including support for FIFO topics.
class NotificationService {
private final SnsOperations snsOperations;
NotificationService(SnsOperations snsOperations) {
this.snsOperations = snsOperations;
}
void sendNotification() {
SnsNotification<Person> notification = SnsNotification.builder(new Person("John", "Doe"))
.deduplicationId("..")
.groupId("..")
.build();
snsOperations.sendNotification("topic-arn", notification);
}
}
7.2. Sending SMS Messages
The starter automatically configures and registers an SnsSmsTemplate bean providing higher level abstractions for sending SMS messages to an SNS topic or directly to a phone number.
Both SnsSmsTemplate#send and SnsSmsTemplate#sendToTopic take an optional parameter SmsMessageAttributes that provides a fluent, type safe interface for setting MessageAttributes.
class NotificationService {
private SnsSmsTemplate smsTemplate;
NotificationService(SnsSmsTemplate smsTemplate) {
this.smsTemplate = smsTemplate;
}
void sendSms() {
smsTemplate.send("+1XXX5550100", "the message", SmsMessageAttributes.builder()
.smsType(SmsType.PROMOTIONAL).senderID("mySenderID").maxPrice("0.50").build());
}
}
7.3. Using SNS Client
To have access to all lower level SNS operations, we recommend using SnsClient from the AWS SDK. The SnsClient bean is autoconfigured by SnsAutoConfiguration.
If the autoconfigured SnsClient bean configuration does not meet your needs, it can be replaced by creating a custom bean of type SnsClient.
class NotificationService {
private final SnsClient snsClient;
public NotificationService(SnsClient snsClient) {
this.snsClient = snsClient;
}
void sendNotification() {
snsClient.publish(request -> request.topicArn("sns-topic-arn").message("payload"));
}
}
7.4. Annotation-driven HTTP notification endpoint
SNS supports multiple endpoint types (SQS, Email, HTTP, HTTPS); Spring Cloud AWS provides support for HTTP(S) endpoints. SNS sends three types of requests to an HTTP topic listener endpoint, and for each of them an annotation is provided:
- Subscription request → @NotificationSubscriptionMapping
- Notification request → @NotificationMessageMapping
- Unsubscription request → @NotificationUnsubscribeConfirmationMapping
HTTP endpoints are based on Spring MVC controllers. Spring Cloud AWS added some custom argument resolvers to extract the message and subject out of the notification requests.
Example of integration:
@Controller
@RequestMapping("/topicName")
public class NotificationTestController {
@NotificationSubscriptionMapping
public void handleSubscriptionMessage(NotificationStatus status) {
//We subscribe to start receive the message
status.confirmSubscription();
}
@NotificationMessageMapping
public void handleNotificationMessage(@NotificationSubject String subject, @NotificationMessage String message) {
// ...
}
@NotificationUnsubscribeConfirmationMapping
public void handleUnsubscribeMessage(NotificationStatus status) {
//e.g. the client has been unsubscribed and we want to "re-subscribe"
status.confirmSubscription();
}
}
7.5. Configuration
The Spring Boot Starter for SNS provides the following configuration options:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.aws.sns.enabled | Enables the SNS integration. | No | true |
spring.cloud.aws.sns.endpoint | Configures endpoint used by SnsClient. | No | |
spring.cloud.aws.sns.region | Configures region used by SnsClient. | No | |
7.6. Client Customization
SnsClient
can be further customized by providing a bean of type SnsClientCustomizer
:
@Bean
SnsClientCustomizer customizer() {
return builder -> {
builder.overrideConfiguration(builder.overrideConfiguration().copy(c -> {
c.apiCallTimeout(Duration.ofMillis(1500));
}));
};
}
SnsClientCustomizer is a functional interface that enables configuring SnsClientBuilder before the SnsClient is built in auto-configuration.
There can be multiple SnsClientCustomizer beans present in a single application context. The @Order(..) annotation can be used to define the order of execution.
Note that SnsClientCustomizer beans are applied after AwsSyncClientCustomizer beans and can therefore overwrite previously set configurations.
7.7. IAM Permissions
The following IAM permissions are required by Spring Cloud AWS:
To publish notification to topic | sns:Publish |
To publish notification you will also need | sns:CreateTopic |
To use Annotation-driven HTTP notification endpoint | sns:ConfirmSubscription |
For resolving topic name to ARN | sns:ListTopics |
For validating topic existence by ARN | sns:GetTopicAttributes |
Sample IAM policy granting access to SNS:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sns:Publish",
"sns:ConfirmSubscription",
"sns:GetTopicAttributes"
],
"Resource": "yourArn"
},
{
"Effect": "Allow",
"Action": "sns:ListTopics",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "sns:CreateTopic",
"Resource": "*"
}
]
}
8. SQS Integration
Amazon Simple Queue Service is a messaging service that provides point-to-point communication with queues.
Spring Cloud AWS SQS integration offers support to receive and send messages using common Spring abstractions such as @SqsListener, MessageListenerContainer and MessageListenerContainerFactory.
Compared to JMS or other message services, Amazon SQS has limitations that should be taken into consideration.
- Amazon SQS allows only String payloads, so any Object must be transformed into a String representation. Spring Cloud AWS has dedicated support to transfer Java objects with Amazon SQS messages by converting them to JSON.
- Amazon SQS has a maximum message size of 256 KB per message, so bigger messages will fail to be sent.
A Spring Boot starter is provided to auto-configure SQS integration beans. Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-starter-sqs</artifactId>
</dependency>
8.1. Sample Listener Application
Below is a minimal sample application leveraging auto-configuration from Spring Boot.
@SpringBootApplication
public class SqsApplication {
public static void main(String[] args) {
SpringApplication.run(SqsApplication.class, args);
}
@SqsListener("myQueue")
public void listen(String message) {
System.out.println(message);
}
}
Without Spring Boot, it’s necessary to import the SqsBootstrapConfiguration class in a @Configuration class, as well as declare a SqsMessageListenerContainerFactory bean.
public class Listener {
@SqsListener("myQueue")
public void listen(String message) {
System.out.println(message);
}
}
@Import(SqsBootstrapConfiguration.class)
@Configuration
public class SQSConfiguration {
@Bean
public SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory() {
return SqsMessageListenerContainerFactory
.builder()
.sqsAsyncClient(sqsAsyncClient())
.build();
}
@Bean
public SqsAsyncClient sqsAsyncClient() {
return SqsAsyncClient.builder().build();
}
@Bean
public Listener listener() {
return new Listener();
}
}
8.2. Sending Messages
Spring Cloud AWS
provides the SqsTemplate
to send messages to SQS
.
8.2.1. SqsTemplate
When using Spring Boot
and autoconfiguration, a SqsTemplate
instance is autowired by default in case no other template bean is found in the context.
This template instance is backed by the autoconfigured SqsAsyncClient
, with any configurations provided.
SqsTemplate
instances are immutable and thread-safe.
The endpoint to which the message will be sent can be either the queue name or URL. |
8.2.2. Creating a SqsTemplate Instance
SqsTemplate
implements two Operations
interfaces: SqsOperations
contains blocking methods, and SqsAsyncOperations
contains async methods that return CompletableFuture
instances.
If only sync or async operations are to be used, the corresponding interface can be used to narrow the available methods.
The following methods can be used to create new SqsTemplate
instances with default options:
SqsTemplate template = SqsTemplate.newTemplate(sqsAsyncClient);
SqsOperations blockingTemplate = SqsTemplate.newSyncTemplate(sqsAsyncClient);
SqsAsyncOperations asyncTemplate = SqsTemplate.newAsyncTemplate(sqsAsyncClient);
The returned object is always an SqsTemplate; the separate methods exist only for the convenience of the narrower interface return type.
|
In case more complex configuration is required, a builder is also provided, and a set of options:
SqsTemplate template = SqsTemplate.builder()
.sqsAsyncClient(this.asyncClient)
.configure(options -> options
.acknowledgementMode(TemplateAcknowledgementMode.MANUAL)
.defaultEndpointName("my-queue"))
.build();
The builder also offers the buildSyncTemplate()
method to return the template as SqsOperations
, and buildAsyncTemplate()
to return it as SqsAsyncOperations
.
8.2.3. Template Options
The following options can be configured through the options
object.
The defaults are applied in case no other value is provided as a parameter in the operation method.
TemplateOptions Descriptions
Name | Type | Default | Description |
---|---|---|---|
|
TemplateAcknowledgementMode |
TemplateAcknowledgementMode #ACKNOWLEDGE_ON_RECEIVE |
Whether messages should be acknowledged by the template after being received.
Messages can be acknowledged later by using the |
|
SendBatchFailureStrategy |
SendBatchFailureStrategy #THROW |
Whether a |
|
Duration |
10 seconds |
The default maximum time to wait for messages when performing a receive request to SQS. See SqsTemplate for more information. |
|
Integer |
10 |
The default maximum of messages to be returned by a receive request to SQS. See SqsTemplate for more information. |
|
String |
blank |
The default endpoint name to be used by the template. See SqsTemplate for more information. |
|
Class |
null |
The default class to which payloads should be converted.
Note that messages sent with the |
|
String, Object |
empty |
Set a single header to be added to all messages received by this template. |
|
Map<String, Object> |
empty |
Set headers to be added to all messages received by this template. |
|
QueueNotFoundStrategy |
QueueNotFoundStrategy #CREATE |
Set the strategy to use in case a queue is not found.
With |
|
Collection<AttributeNames> |
empty |
Set the queue attribute names that will be retrieved.
Such attributes are available as |
|
Collection<String> |
All |
Set the message attribute names that will be retrieved with messages on receive operations.
Such attributes are available as |
|
Collection <MessageSystemAttributeName> |
All |
Set the message system attribute names that will be retrieved with messages on receive operations.
Such attributes are available as |
|
ContentBasedDeduplication |
ContentBasedDeduplication #AUTO |
Set the ContentBasedDeduplication queue attribute value of the queues the template is sending messages to.
With |
8.2.4. Sending Messages
There are a number of available methods to send messages to SQS queues using the SqsTemplate
.
The following methods are available through the SqsOperations
interface, with the respective async
counterparts available in the SqsAsyncOperations
.
// Send a message to the configured default endpoint.
SendResult<T> send(T payload);
// Send a message to the provided queue with the given payload.
SendResult<T> send(String queue, T payload);
// Send the given Message to the provided queue.
SendResult<T> send(String queue, Message<T> message);
// Send a message with the provided options.
SendResult<T> send(Consumer<SqsSendOptions> to);
// Send a batch of Messages to the provided queue
SendResult.Batch<T> sendMany(String queue, Collection<Message<T>> messages);
To send a collection of objects, it is recommended to use sendMany(String queue, Collection<Message<T>> messages) to optimize throughput.
To send a collection of objects in a single message, the collection must be wrapped in an object.
|
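To illustrate wrapping a collection in a single message, here is a minimal sketch; the Order and OrderBatch types are hypothetical and only serve to show the wrapper pattern:
// Hypothetical payload types - a single SQS message carries a single payload, so the list is wrapped
public record Order(String id, int quantity) {}
public record OrderBatch(List<Order> orders) {}
// The whole collection is serialized to JSON and sent as one message
SendResult<OrderBatch> result = template.send("myQueue",
        new OrderBatch(List.of(new Order("order-1", 2), new Order("order-2", 5))));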
An example using the options
variant follows:
SendResult<String> result = template.send(to -> to.queue("myQueue")
.payload("myPayload")
.header("myHeaderName", "myHeaderValue")
.headers(Map.of("myOtherHeaderName", "myOtherHeaderValue"))
.delaySeconds(10)
);
To send messages to a Fifo queue, the options include messageDeduplicationId and messageGroupId properties.
If messageGroupId is not provided, a random UUID is generated by the framework.
If messageDeduplicationId is not provided and content deduplication is disabled on AWS, a random UUID is generated.
The generated values can be retrieved in the headers of the Message contained in the SendResult object.
|
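For example, a send to a Fifo queue might look like the following sketch; the messageGroupId and messageDeduplicationId option names follow the description above and should be checked against the SqsSendOptions API:
SendResult<String> result = template.send(to -> to.queue("myQueue.fifo")
        .payload("myPayload")
        .messageGroupId("my-message-group")
        .messageDeduplicationId("my-unique-deduplication-id")
);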
SendResult
The SendResult
record contains useful information on the send operation.
public record SendResult<T>(UUID messageId, String endpoint, Message<T> message, Map<String, Object> additionalInformation) {
public record Batch<T>(Collection<SendResult<T>> successful, Collection<SendResult.Failed<T>> failed) {}
public record Failed<T> (String errorMessage, String endpoint, Message<T> message, Map<String, Object> additionalInformation) {}
}
When the send operation is successful, the SendResult
object is created with:
-
the
messageId
returned from SQS
for the message -
the
endpoint
the message was sent to -
the
Message
instance that was sent, with any additional headers that might have been added by the framework -
an
additionalInformation
map with the sequenceNumber
generated for the message in Fifo
queues.
When the send operation fails for single message operations, a MessagingOperationFailedException
containing the message is thrown.
For Batch
send operations, a SendResult.Batch
object is returned.
This object contains a Collection
of successful
and failed
results.
In case there are messages that failed to be sent within a batch, corresponding SendResult.Failed
objects are generated.
The SendResult.Failed
object contains:
-
the
errorMessage
returned by SQS -
the
endpoint
the message was to be sent to -
the
Message
instance that the framework attempted to send, with any additional headers that might have been added by the framework -
an
additionalInformation
map with the code
and senderFault
parameters returned by SQS.
By default, if there’s at least one failed message in a send batch operation, a SendBatchOperationFailedException
will be thrown.
Such exception contains a SendResult.Batch<?>
property containing both successful and failed messages.
This behavior can be configured using the sendBatchFailureHandlingStrategy
option when creating the template.
If SendBatchFailureStrategy#DO_NOT_THROW
is configured, no exception is thrown and a SendResult.Batch
object containing both successful and failed messages is returned.
For convenience, the additionalInformation
parameters can be found as constants in the SqsTemplateParameters
class.
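As an example, a batch send and inspection of its results might look like the following sketch, assuming the template was built with SendBatchFailureStrategy#DO_NOT_THROW so failures are returned rather than thrown:
List<Message<String>> messages = List.of(
        MessageBuilder.withPayload("first payload").build(),
        MessageBuilder.withPayload("second payload").build());
SendResult.Batch<String> results = template.sendMany("myQueue", messages);
// Successful and failed sends are reported separately
results.successful().forEach(success -> System.out.println("Sent " + success.messageId()));
results.failed().forEach(failure -> System.out.println("Failed: " + failure.errorMessage()));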
8.2.5. Template Message Conversion
Message conversion by default is handled by a SqsMessagingMessageConverter
instance, which contains:
-
SqsHeaderMapper
for mapping headers to and from messageAttributes
-
CompositeMessageConverter
with a StringMessageConverter
and a MappingJackson2MessageConverter
for converting payloads to and from JSON.
A custom MessagingMessageConverter
implementation can be provided in the SqsTemplate.builder()
:
SqsOperations template = SqsTemplate
.builder()
.sqsAsyncClient(sqsAsyncClient)
.messageConverter(converter)
.buildSyncTemplate();
The default SqsMessagingMessageConverter
instance can also be configured in the builder:
SqsOperations template = SqsTemplate
.builder()
.sqsAsyncClient(sqsAsyncClient)
.configureDefaultConverter(converter -> {
converter.setObjectMapper(objectMapper);
converter.setHeaderMapper(headerMapper);
converter.setPayloadTypeHeader("my-custom-type-header");
}
)
.buildSyncTemplate();
Specifying a Payload Class for Receive Operations
By default, the SqsTemplate
adds a header with name JavaType
containing the fully qualified name of the payload class to all messages sent.
Such header is used in receive operations by the SqsTemplate
, SqsMessageListenerContainer
and @SqsListener
to identify the class to which the payload should be deserialized.
This behavior can be configured in the SqsMessagingMessageConverter
using the setPayloadTypeHeaderValueFunction
method.
The function receives a Message
object and returns a String
with the value to be used in the header, the payload’s class FQCN
by default.
If null
is returned by the function, no header with type information is added.
The typeHeaderName
can be configured using the setPayloadTypeHeader
method.
In case type mapping information is not available, the payload class can be specified either in the Template Options or in the receive()
method variants:
Optional<Message<SampleRecord>> receivedMessage = template
.receive(queue, SampleRecord.class);
8.3. Receiving Messages
The framework offers the following options to receive messages from a queue.
8.3.1. SqsTemplate
The SqsTemplate
offers convenient methods to receive messages from Standard
and Fifo
SQS queues.
These methods are separated into two interfaces that are implemented by SqsTemplate
: SqsOperations
and SqsAsyncOperations
.
If only sync
or async
operations are to be used, using the specific interface can narrow down the methods.
See SqsTemplate for more information on the interfaces, Creating a SqsTemplate Instance and Template Options.
The following methods are available through the SqsOperations
interface, with the respective async
counterparts available in the SqsAsyncOperations
.
// Receive a message from the configured default endpoint and options.
Optional<Message<?>> receive();
// Receive a message from the provided queue and convert the payload to the provided class.
<T> Optional<Message<T>> receive(String queue, Class<T> payloadClass);
// Receive a message with the provided options.
Optional<Message<?>> receive(Consumer<SqsReceiveOptions> from);
// Receive a message with the provided options and convert the payload to the provided class.
<T> Optional<Message<T>> receive(Consumer<SqsReceiveOptions> from, Class<T> payloadClass);
// Receive a batch of messages from the configured default endpoint and options.
Collection<Message<?>> receiveMany();
// Receive a batch of messages from the provided queue and convert the payloads to the provided class.
<T> Collection<Message<T>> receiveMany(String queue, Class<T> payloadClass);
// Receive a batch of messages with the provided options.
Collection<Message<?>> receiveMany(Consumer<SqsReceiveOptions> from);
// Receive a batch of messages with the provided options and convert the payloads to the provided class.
<T> Collection<Message<T>> receiveMany(Consumer<SqsReceiveOptions> from, Class<T> payloadClass);
The following is an example for receiving a message with options:
Optional<Message<SampleRecord>> receivedMessage = template
.receive(from -> from.queue("my-queue")
.visibilityTimeout(Duration.ofSeconds(10))
.pollTimeout(Duration.ofSeconds(5))
.additionalHeaders(Map.of("my-custom-header-name", "my-custom-header-value")),
SampleRecord.class);
To receive messages from a Fifo queue, the options include a receiveRequestAttemptId parameter.
If no such parameter is provided, a random one is generated.
|
SqsTemplate Acknowledgement
The SqsTemplate
by default acknowledges all received messages, which can be changed by setting TemplateAcknowledgementMode.MANUAL
in the template options:
SqsTemplate.builder().configure(options -> options.acknowledgementMode(TemplateAcknowledgementMode.MANUAL));
If an error occurs during acknowledgement, a SqsAcknowledgementException
is thrown, containing both the messages that were successfully acknowledged and those which failed.
To acknowledge messages received with MANUAL
acknowledgement, the Acknowledgement#acknowledge
and Acknowledgement#acknowledgeAsync
methods can be used.
8.3.2. Message Listeners
To receive messages in a manually created container, a MessageListener
or AsyncMessageListener
must be provided.
Both interfaces come with single message
and a batch
methods.
These are functional interfaces and a lambda or method reference can be provided for the single message methods.
Single message / batch modes and message payload conversion can be configured via SqsContainerOptions
.
See Message Conversion and Payload Deserialization for more information.
@FunctionalInterface
public interface MessageListener<T> {
void onMessage(Message<T> message);
default void onMessage(Collection<Message<T>> messages) {
throw new UnsupportedOperationException("Batch not implemented by this MessageListener");
}
}
@FunctionalInterface
public interface AsyncMessageListener<T> {
CompletableFuture<Void> onMessage(Message<T> message);
default CompletableFuture<Void> onMessage(Collection<Message<T>> messages) {
return CompletableFutures
.failedFuture(new UnsupportedOperationException("Batch not implemented by this AsyncMessageListener"));
}
}
8.3.3. SqsMessageListenerContainer
The MessageListenerContainer
manages the entire message lifecycle, from polling to processing to acknowledging.
It can be instantiated directly, using a SqsMessageListenerContainerFactory
, or using @SqsListener
annotations.
If declared as a @Bean
, the Spring
context will manage its lifecycle, starting the container on application startup and stopping it on application shutdown.
See Container Lifecycle for more information.
It implements the MessageListenerContainer
interface:
public interface MessageListenerContainer<T> extends SmartLifecycle {
String getId();
void setId(String id);
void setMessageListener(MessageListener<T> messageListener);
void setAsyncMessageListener(AsyncMessageListener<T> asyncMessageListener);
}
The generic parameter <T> stands for the payload type of messages to be consumed by this container.
This allows ensuring at compile-time that all components used with the container are for the same type.
If more than one payload type is to be used by the same container or factory, simply type it as Object .
This type is not considered for payload conversion.
|
A container can be instantiated in a familiar Spring way in a @Configuration
annotated class.
For example:
@Bean
MessageListenerContainer<Object> listenerContainer(SqsAsyncClient sqsAsyncClient) {
SqsMessageListenerContainer<Object> container = new SqsMessageListenerContainer<>(sqsAsyncClient);
container.setMessageListener(System.out::println);
container.setQueueNames("myTestQueue");
return container;
}
This framework also provides a convenient Builder
that allows a different approach, such as:
@Bean
MessageListenerContainer<Object> listenerContainer(SqsAsyncClient sqsAsyncClient) {
return SqsMessageListenerContainer
.builder()
.sqsAsyncClient(sqsAsyncClient)
.messageListener(System.out::println)
.queueNames("myTestQueue")
.build();
}
The container’s lifecycle can also be managed manually:
void myMethod(SqsAsyncClient sqsAsyncClient) {
SqsMessageListenerContainer<Object> container = SqsMessageListenerContainer
.builder()
.sqsAsyncClient(sqsAsyncClient)
.messageListener(System.out::println)
.queueNames("myTestQueue")
.build();
container.start();
container.stop();
}
8.3.4. SqsMessageListenerContainerFactory
A MessageListenerContainerFactory
can be used to create MessageListenerContainer
instances, both directly or through @SqsListener
annotations.
It can be created in a familiar Spring
way, such as:
@Bean
SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory(SqsAsyncClient sqsAsyncClient) {
SqsMessageListenerContainerFactory<Object> factory = new SqsMessageListenerContainerFactory<>();
factory.setSqsAsyncClient(sqsAsyncClient);
return factory;
}
Or through the Builder
:
@Bean
SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory(SqsAsyncClient sqsAsyncClient) {
return SqsMessageListenerContainerFactory
.builder()
.sqsAsyncClient(sqsAsyncClient)
.build();
}
When the SqsAsyncClient instance is set on the factory this way, all containers created by the factory share the same SqsAsyncClient instance.
For high-throughput applications, a Supplier<SqsAsyncClient> can be provided instead through the factory's setSqsAsyncClientSupplier or the builder's sqsAsyncSupplier methods, in which case each container receives its own SqsAsyncClient instance.
Alternatively, a single SqsAsyncClient instance can itself be configured for higher throughput. See the AWS documentation for more information on the tradeoffs of each approach.
|
The factory can also be used to create a container directly, such as:
@Bean
MessageListenerContainer<Object> myListenerContainer(SqsAsyncClient sqsAsyncClient) {
return SqsMessageListenerContainerFactory
.builder()
.sqsAsyncClient(sqsAsyncClient)
.messageListener(System.out::println)
.build()
.createContainer("myQueue");
}
8.3.5. @SqsListener Annotation
The simplest way to consume SQS
messages is by annotating a method in a @Component
class with the @SqsListener
annotation.
The framework will then create the MessageListenerContainer
and set a MessagingMessageListenerAdapter
to invoke the method when a message is received.
When using Spring Boot
with auto-configuration
, no configuration is necessary.
Most attributes on the annotation can be resolved from SpEL (#{…})
or property placeholders (${…})
.
Queue Names
One or more queues can be specified in the annotation through the queueNames
or value
properties - there’s no distinction between the two properties.
Instead of queue names, queue URLs can also be provided. Using URLs instead of queue names can result in slightly faster startup times since it prevents the framework from looking up the queue URL when the containers start.
@SqsListener({"${my.queue.url}", "myOtherQueue"})
public void listenTwoQueues(String message) {
System.out.println(message);
}
Any number of @SqsListener
annotations can be used in a bean class, and each annotated method will be handled by a separate MessageListenerContainer
.
Queues declared in the same annotation will share the container, though each will have separate throughput and acknowledgement controls. |
SNS Messages
Since 3.1.1, when receiving SNS messages through the @SqsListener
, the message includes all attributes of the SnsNotification
. To receive only the Message
part of the payload, you can use the @SnsNotificationMessage
annotation.
For handling individual message processing, the @SnsNotificationMessage annotation can be used in the following manner:
@SqsListener("my-queue")
public void listen(@SnsNotificationMessage Pojo pojo) {
System.out.println(pojo.field);
}
For batch message processing, use the @SnsNotificationMessage annotation with a List<Pojo> parameter.
@SqsListener("my-queue")
public void listen(@SnsNotificationMessage List<Pojo> pojos) {
System.out.println(pojos.size());
}
Specifying a MessageListenerContainerFactory
A MessageListenerContainerFactory
can be specified through the factory
property.
Such factory will then be used to create the container for the annotated method.
If not specified, a factory with the defaultSqsListenerContainerFactory
name will be looked up.
For changing this default name, see Global Configuration for @SqsListeners.
@SqsListener(queueNames = "myQueue", factory = "myFactory")
public void listen(String message) {
System.out.println(message);
}
When using a Spring Boot
application with auto-configuration
, a default factory is provided if there are no other factory beans declared in the context.
Other Annotation Properties
The following properties can be specified in the @SqsListener
annotation.
Such properties override the equivalent SqsContainerOptions
for the resulting MessageListenerContainer
.
-
id
- Specify the resulting container’s id. This can be used for fetching the container from the MessageListenerContainerRegistry
, and is used by the container and its components for general logging and thread naming. -
maxConcurrentMessages
- Set the maximum number of messages that can be inflight
at any given moment. See Message Processing Throughput for more information. -
pollTimeoutSeconds
- Set the maximum time to wait before a poll returns from SQS. Note that if there are messages available the call may return earlier than this setting. -
messageVisibilitySeconds
- Set the minimum visibility for the messages retrieved in a poll. Note that for FIFO
single message listener methods, this visibility is applied to the whole batch before each message is sent to the listener. See FIFO Support for more information. -
acknowledgementMode
- Set the acknowledgement mode for the container. If any value is set, it will take precedence over the acknowledgement mode defined for the container factory options. See Acknowledgement Mode for more information.
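A listener combining several of these properties might look like the following sketch; the values are illustrative, and the attributes are assumed to be String-typed since they can be resolved from SpEL and property placeholders:
@SqsListener(queueNames = "myQueue", id = "my-container",
        maxConcurrentMessages = "5", pollTimeoutSeconds = "15",
        messageVisibilitySeconds = "30", acknowledgementMode = "ALWAYS")
public void listen(String message) {
    System.out.println(message);
}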
Listener Method Arguments
A number of possible argument types are allowed in the listener method’s signature.
-
MyPojo
- POJO types are automatically deserialized from JSON. -
Message<MyPojo>
- Provides aMessage<MyPojo>
instance with the deserialized payload and MessageHeaders
. -
List<MyPojo>
- Enables batch mode and receives the batch that was polled from SQS. -
List<Message<MyPojo>>
- Enables batch mode and receives the batch that was polled from SQS along with headers. -
@Header(String headerName)
- provides the specified header. -
@Headers
- provides the MessageHeaders
or a Map<String, Object>
-
Acknowledgement
- provides methods for manually acknowledging messages for single message listeners. AcknowledgementMode must be set to MANUAL
(see Acknowledging Messages) -
BatchAcknowledgement
- provides methods for manually acknowledging partial or whole message batches for batch listeners. AcknowledgementMode must be set to MANUAL
(see Acknowledging Messages) -
Visibility
- provides the changeTo()
method that enables changing the message’s visibility to the provided value. -
BatchVisibility
- provides changeTo()
methods that enable changing the visibility of partial or whole message batches to the provided value. -
QueueAttributes
- provides queue attributes for the queue that received the message. See Retrieving Attributes from SQS for how to specify the queue attributes that will be fetched from SQS
-
software.amazon.awssdk.services.sqs.model.Message
- provides the original Message
from SQS
To receive a collection of objects in a single message, the collection must be wrapped in an object. See Sending Messages. |
Here’s a sample with many arguments:
@SqsListener("${my-queue-name}")
public void listen(Message<MyPojo> message, MyPojo pojo, MessageHeaders headers, Acknowledgement ack, Visibility visibility, QueueAttributes queueAttributes, software.amazon.awssdk.services.sqs.model.Message originalMessage) {
Assert.notNull(message);
Assert.notNull(pojo);
Assert.notNull(headers);
Assert.notNull(ack);
Assert.notNull(visibility);
Assert.notNull(queueAttributes);
Assert.notNull(originalMessage);
}
Batch listeners support a single List<MyPojo> and List<Message<MyPojo>> method arguments, and optional BatchAcknowledgement (or AsyncBatchAcknowledgement ) and BatchVisibility arguments.
MessageHeaders should be extracted from the Message instances through the getHeaders() method.
|
8.3.6. Batch Processing
All message processing interfaces have both single message
and batch
methods.
This means the same set of components can be used to process both single messages and batches, sharing logic where applicable.
When batch mode is enabled, the framework will serve the entire result of a poll to the listener.
If a value greater than 10 is provided for maxMessagesPerPoll
, the result of multiple polls will be combined and up to the respective amount of messages will be served to the listener.
To enable batch processing using @SqsListener
, a single List<MyPojo>
or List<Message<MyPojo>>
method argument should be provided in the listener method.
The listener method can also have:
- an optional BatchAcknowledgement
argument for AcknowledgementMode.MANUAL
- an optional BatchVisibility
argument
Alternatively, SqsContainerOptions
can be set to ListenerMode.BATCH
in the SqsContainerOptions
in the factory or container.
The same factory can be used to create both single message and batch containers for @SqsListener methods.
|
In case the same factory is shared by both delivery methods, any supplied ErrorHandler , MessageInterceptor or MessageListener should implement the proper methods.
|
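A minimal batch listener might look like the following sketch, assuming a MyPojo payload type:
@SqsListener("myQueue")
public void listen(List<MyPojo> messages) {
    // The whole polled batch is delivered in a single invocation
    System.out.println("Received batch with " + messages.size() + " messages");
    messages.forEach(System.out::println);
}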
8.3.7. Container Options
Each MessageListenerContainer
can have a different set of options.
MessageListenerContainerFactory
instances have a SqsContainerOptions.Builder
instance property that is used as a template for the containers it creates.
Both factory and container offer a configure
method that can be used to change the options:
@Bean
SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory(SqsAsyncClient sqsAsyncClient) {
return SqsMessageListenerContainerFactory
.builder()
.configure(options -> options
.messagesPerPoll(5)
.pollTimeout(Duration.ofSeconds(10)))
.sqsAsyncClient(sqsAsyncClient)
.build();
}
@Bean
MessageListenerContainer<Object> listenerContainer(SqsAsyncClient sqsAsyncClient) {
return SqsMessageListenerContainer
.builder()
.configure(options -> options
.messagesPerPoll(5)
.pollTimeout(Duration.ofSeconds(10)))
.sqsAsyncClient(sqsAsyncClient)
.messageListener(System.out::println)
.queueNames("myTestQueue")
.build();
}
The SqsContainerOptions
instance is immutable and can be retrieved via the container.getContainerOptions()
method.
If more complex configurations are necessary, the toBuilder
and fromBuilder
methods provide ways to create a new copy of the options, and then set it back to the factory or container:
void myMethod(MessageListenerContainer<Object> container) {
SqsContainerOptions.Builder modifiedOptions = container.getContainerOptions()
.toBuilder()
.pollTimeout(Duration.ofSeconds(5))
.shutdownTimeout(Duration.ofSeconds(20));
container.configure(options -> options.fromBuilder(modifiedOptions));
}
A copy of the options can also be created with containerOptions.createCopy()
or containerOptionsBuilder.createCopy()
.
Using Auto-Configuration
The Spring Boot Starter for SQS provides the following auto-configuration properties:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.aws.sqs.enabled | Enables the SQS integration. | No | true |
spring.cloud.aws.sqs.endpoint | Configures endpoint used by SqsAsyncClient. | No | |
spring.cloud.aws.sqs.region | Configures region used by SqsAsyncClient. | No | |
spring.cloud.aws.sqs.listener.max-concurrent-messages | Maximum number of inflight messages per queue. | No | 10 |
spring.cloud.aws.sqs.listener.max-messages-per-poll | Maximum number of messages to be received per poll. | No | 10 |
spring.cloud.aws.sqs.listener.poll-timeout | Maximum amount of time to wait for messages in a poll. | No | 10 seconds |
spring.cloud.aws.sqs.queue-not-found-strategy | The strategy to be used by SqsTemplate and SqsListeners when a queue does not exist. | No | CREATE |
SqsContainerOptions Descriptions
Property | Range | Default | Description |
---|---|---|---|
1 - |
10 |
The maximum number of messages from each queue that can be processed simultaneously in this container. This number will be used for defining the thread pool size for the container following (maxConcurrentMessages * number of queues). For batching acknowledgements a message is considered as no longer inflight when it’s handed to the acknowledgement queue. See Acknowledging Messages. |
|
1 - |
10 |
The maximum number of messages that will be received by a poll to a SQS queue in this container. If a value greater than 10 is provided, the result of multiple polls will be combined, which can be useful for batch listeners. See AWS documentation for more information. |
|
1 - 10 seconds |
10 seconds |
The maximum duration for a poll to a SQS queue before returning empty. Longer polls decrease the chance of empty polls when messages are available. See AWS documentation for more information. |
|
1 - 10 seconds |
10 seconds |
The maximum time the framework will wait for permits to be available for a queue before attempting the next poll.
After that period, the framework will try to perform a partial acquire with the available permits, resulting in a poll for less than |
|
Any valid |
|
The back off policy to be applied when a polling thread throws an error. The default is an exponential policy with a delay of |
|
|
true, false |
true |
Determines whether the container should start automatically. When set to false the container will not launch on startup, requiring manual intervention to start it. See Container Lifecycle. |
|
0 - undefined |
20 seconds |
The amount of time the container will wait for a queue to complete message processing before attempting to forcefully shutdown. See Container Lifecycle. |
|
0 - undefined |
20 seconds |
The amount of time the container will wait for acknowledgements to complete for a queue after message processing has ended. See Container Lifecycle. |
|
|
|
Configures the backpressure strategy to be used by the container. See Configuring BackPressureMode. |
|
|
|
Configures whether this container will use |
|
|
Empty list |
Configures the |
|
|
|
Configures the |
|
|
|
Configures the |
|
|
|
Specifies how messages from FIFO queues should be grouped when retrieved by the container when listener
mode is |
|
|
|
Configures the |
|
|
|
Configures the processing outcomes that will trigger automatic acknowledging of messages. See Acknowledging Messages. |
|
0 - undefined |
|
Configures the interval between acknowledges for batching.
Set to |
|
0 - undefined |
|
Configures the minimal amount of messages in the acknowledgement queue to trigger acknowledgement of a batch.
Set to zero along with |
|
|
|
Configures the order acknowledgements should be made. Fifo queues can be acknowledged in parallel for immediate acknowledgement since the next message for a message group will only start being processed after the previous one has been acknowledged. See Acknowledging Messages. |
|
|
|
Provides a |
|
|
|
Provides a |
|
|
|
Specify the message visibility duration for messages polled in this container.
For |
|
|
|
Configures the behavior when a queue is not found at container startup. See Container Lifecycle. |
8.3.8. Retrieving Attributes from SQS
QueueAttributes
, MessageAttributes
and MessageSystemAttributes
can be retrieved from SQS.
These can be configured using the SqsContainerOptions
queueAttributeNames
, messageAttributeNames
and messageSystemAttributeNames
methods.
QueueAttributes
for a queue are retrieved when containers start, and can be looked up by adding the QueueAttributes
method parameter in a @SqsListener
method, or by getting the SqsHeaders.SQS_QUEUE_ATTRIBUTES_HEADER
header.
MessageAttributes
and MessageSystemAttributes
are retrieved with each message, and are mapped to message headers.
Those can be retrieved with @Header
parameters, or directly in the Message
.
The message headers are prefixed with SqsHeaders.SQS_MA_HEADER_PREFIX
("Sqs_MA_") for message attributes and
SqsHeaders.SQS_MSA_HEADER_PREFIX
("Sqs_MSA_") for message system attributes.
By default, no QueueAttributes and ALL MessageAttributes and MessageSystemAttributes are retrieved.
|
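For example, a listener reading the queue attributes and a custom message attribute might look like the following sketch; myAttribute is a hypothetical message attribute name, mapped to a header with the Sqs_MA_ prefix described above:
@SqsListener("myQueue")
public void listen(String payload,
        QueueAttributes queueAttributes,
        @Header("Sqs_MA_myAttribute") String myAttribute) {
    // QueueAttributes are resolved at container startup
    System.out.println(queueAttributes);
    // Message attributes are retrieved with each message and mapped to headers
    System.out.println(myAttribute);
}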
8.3.9. Container Lifecycle
The MessageListenerContainer
interface extends SmartLifecycle
, which provides methods to control the container’s lifecycle.
Containers created from @SqsListener
annotations are registered in a MessageListenerContainerRegistry
bean that is registered by the framework.
The containers themselves are not Spring-managed beans, and the registry is responsible for managing these containers' lifecycle at application startup and shutdown.
The DefaultListenerContainerRegistry implementation provided by the framework allows the phase value to be set through the setPhase method. The default value is MessageListenerContainer.DEFAULT_PHASE.
|
At startup, the containers will make requests to SQS
to retrieve the queues' URLs for the provided queue names or ARNs, and to retrieve QueueAttributes
if so configured.
Providing queue URLs instead of names and not requesting queue attributes can result in slightly better startup times since there’s no need for such requests.
If retrieving the queue url fails due to the queue not existing, the framework can be configured to either create the queue or fail.
If a URL is provided instead of a queue name the framework will not make this request at startup, and thus if the queue does not exist it will fail at runtime.
This configuration is available in SqsContainerOptions queueNotFoundStrategy.
|
At shutdown, by default containers will wait for all polling, processing and acknowledging operations to finish, up to SqsContainerOptions.getShutdownTimeout()
.
After this period, operations will be canceled and the container will attempt to forcefully shutdown.
Containers as Spring Beans
Manually created containers can be registered as beans, e.g. by declaring a @Bean
in a @Configuration
annotated class.
In these cases the containers lifecycle will be managed by the Spring
context at application startup and shutdown.
@Bean
MessageListenerContainer<Object> listenerContainer(SqsAsyncClient sqsAsyncClient) {
return SqsMessageListenerContainer
.builder()
.sqsAsyncClient(sqsAsyncClient)
.messageListener(System.out::println)
.queueNames("myTestQueue")
.build();
}
The SqsMessageListenerContainer.builder() allows specifying the SmartLifecycle.phase, overriding the default value defined in MessageListenerContainer.DEFAULT_PHASE.
|
Retrieving Containers from the Registry
Containers can be retrieved by fetching the MessageListenerContainerRegistry
bean from the application context and using the getListenerContainers
and getContainerById
methods.
Then lifecycle methods can be used to start and stop instances.
@Autowired
MessageListenerContainerRegistry registry;
public void myLifecycleMethod() {
MessageListenerContainer container = registry.getContainerById("myId");
container.stop();
container.start();
}
Lifecycle Execution
By default, all lifecycle actions performed by the MessageListenerContainerRegistry
and internally by the MessageListenerContainer
instances are executed in parallel.
This behavior can be disabled by setting LifecycleHandler.get().setParallelLifecycle(false)
.
Spring-managed MessageListenerContainer beans' lifecycle actions are always performed sequentially.
|
8.3.10. FIFO Support
FIFO
SQS queues are fully supported for receiving messages - queues with names that end in .fifo
will automatically be set up as such.
-
Messages are polled with a
receiveRequestAttemptId
, and the received batch of messages is split according to the message`sMessageGroupId
. -
Each message from a given group will then be processed in order, while each group is processed in parallel.
-
To receive messages from multiple groups in a
batch
, setfifoBatchGroupingStrategy
toPROCESS_MULTIPLE_GROUPS_IN_SAME_BATCH
inSqsContainerOptions
. -
If processing fails for a message, the following messages from the same message group are discarded so they will be served again after their
message visibility
expires. -
Messages which were already successfully processed and acknowledged will not be served again.
-
FIFO
queues also have different defaults for acknowledging messages, see Acknowledgement Defaults for more information. -
If a
message visibility
is set through@SqsListener
orSqsContainerOptions
, visibility will be extended for all messages in the message group before each message is processed.
A MessageListenerContainer can either have only Standard queues or FIFO queues - not both.
This is valid both for manually created containers and @SqsListener annotated methods.
|
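As a sketch of the batch grouping strategy mentioned in the list above, a factory for FIFO batch listeners could be configured as follows; the listenerMode and fifoBatchGroupingStrategy builder option names are assumptions based on the options described in this chapter:
@Bean
SqsMessageListenerContainerFactory<Object> fifoBatchFactory(SqsAsyncClient sqsAsyncClient) {
    return SqsMessageListenerContainerFactory
            .builder()
            .configure(options -> options
                    // Deliver the polled batch to the listener instead of single messages
                    .listenerMode(ListenerMode.BATCH)
                    // Allow messages from multiple message groups in the same batch
                    .fifoBatchGroupingStrategy(FifoBatchGroupingStrategy.PROCESS_MULTIPLE_GROUPS_IN_SAME_BATCH))
            .sqsAsyncClient(sqsAsyncClient)
            .build();
}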
8.4. Message Interceptor
The framework offers the MessageInterceptor
and the AsyncMessageInterceptor
interfaces:
public interface MessageInterceptor<T> {
default Message<T> intercept(Message<T> message) {
return message;
}
default Collection<Message<T>> intercept(Collection<Message<T>> messages) {
return messages;
}
default void afterProcessing(Message<T> message, Throwable t) {
}
default void afterProcessing(Collection<Message<T>> messages, Throwable t) {
}
}
public interface AsyncMessageInterceptor<T> {
default CompletableFuture<Message<T>> intercept(Message<T> message) {
return CompletableFuture.completedFuture(message);
}
default CompletableFuture<Collection<Message<T>>> intercept(Collection<Message<T>> messages) {
return CompletableFuture.completedFuture(messages);
}
default CompletableFuture<Void> afterProcessing(Message<T> message, Throwable t) {
return CompletableFuture.completedFuture(null);
}
default CompletableFuture<Void> afterProcessing(Collection<Message<T>> messages, Throwable t) {
return CompletableFuture.completedFuture(null);
}
}
When using the auto-configured factory, simply declare a @Bean
and the interceptor will be set:
@Bean
public MessageInterceptor<Object> messageInterceptor() {
return new MessageInterceptor<Object>() {
@Override
public Message<Object> intercept(Message<Object> message) {
return MessageBuilder
.fromMessage(message)
.setHeader("newHeader", "newValue")
.build();
}
};
}
Alternatively, implementations can be set in the MessageListenerContainerFactory
or directly in the MessageListenerContainer
:
@Bean
public SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory() {
return SqsMessageListenerContainerFactory
.builder()
.sqsAsyncClientSupplier(BaseSqsIntegrationTest::createAsyncClient)
.messageInterceptor(new MessageInterceptor<Object>() {
@Override
public Message<Object> intercept(Message<Object> message) {
return MessageBuilder
.fromMessage(message)
.setHeader("newHeader", "newValue")
.build();
}
})
.build();
}
Multiple interceptors can be added to the same factory or container. |
The intercept
methods are executed before
a message is processed, and a different message can be returned.
In case a different message is returned, it’s important to add the SqsHeaders.SQS_RECEIPT_HANDLE_HEADER with the value of the original receipt handle so the original message is acknowledged after processing.
Also, a SqsHeaders.SQS_MESSAGE_ID_HEADER must always be present.
|
The intercept methods must not return null.
|
The afterProcessing
methods are executed after the message is processed and the ErrorHandler
is invoked, but before the message is acknowledged.
8.5. Error Handling
By default, messages that have an error thrown by the listener will not be acknowledged, and the message can be polled again after visibility timeout
expires.
Alternatively, the framework offers the ErrorHandler
and AsyncErrorHandler
interfaces, which are invoked after a listener execution fails.
public interface ErrorHandler<T> {
default void handle(Message<T> message, Throwable t) {
}
default void handle(Collection<Message<T>> messages, Throwable t) {
}
}
public interface AsyncErrorHandler<T> {
default CompletableFuture<Void> handle(Message<T> message, Throwable t) {
return CompletableFutures.failedFuture(t);
}
default CompletableFuture<Void> handle(Collection<Message<T>> messages, Throwable t) {
return CompletableFutures.failedFuture(t);
}
}
When using the auto-configured factory, simply declare a @Bean
and the error handler will be set:
@Bean
public ErrorHandler<Object> errorHandler() {
return new ErrorHandler<Object>() {
@Override
public void handle(Message<Object> message, Throwable t) {
// error handling logic
// throw if the message should not be acknowledged
}
};
}
Alternatively, implementations can be set in the MessageListenerContainerFactory
or directly in the MessageListenerContainer
:
@Bean
public SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory() {
return SqsMessageListenerContainerFactory
.builder()
.sqsAsyncClientSupplier(BaseSqsIntegrationTest::createAsyncClient)
.errorHandler(new ErrorHandler<Object>() {
@Override
public void handle(Message<Object> message, Throwable t) {
// error handling logic
}
})
.build();
}
If the error handler execution succeeds, i.e. no error is thrown from the error handler, the message is considered to be recovered and is acknowledged according to the acknowledgement configuration.
If the message should not be acknowledged and the ON_SUCCESS acknowledgement mode is set, it’s important to propagate the error.
For simply executing an action in case of errors, an interceptor should be used instead, checking the presence of the throwable argument for detecting a failed execution.
|
8.6. Message Conversion and Payload Deserialization
Payloads are automatically deserialized from JSON
for @SqsListener
annotated methods using a MappingJackson2MessageConverter
.
When using Spring Boot’s auto-configuration, if there’s a single ObjectMapper in Spring Context, such object mapper will be used for converting messages.
This includes the one provided by Spring Boot’s auto-configuration itself.
For configuring a different ObjectMapper , see Global Configuration for @SqsListeners.
|
For manually created MessageListeners
, MessageInterceptor
and ErrorHandler
components, or more fine-grained conversion such as using interfaces
or inheritance
in listener methods, type mapping is required for payload deserialization.
By default, the framework looks for a MessageHeader
named Sqs_MA_JavaType
containing the fully qualified class name (FQCN
) for which the payload should be deserialized to.
If such header is found, the message is automatically deserialized to the provided class.
Further configuration can be achieved by providing a configured MessagingMessageConverter
instance in the SqsContainerOptions
.
If type mapping is setup or type information is added to the headers, payloads are deserialized right after the message is polled.
Otherwise, for @SqsListener annotated methods, payloads are deserialized right before the message is sent to the listener.
For providing custom MessageConverter instances to be used by @SqsListener methods, see Global Configuration for @SqsListeners
|
8.6.1. Configuring a MessagingMessageConverter
The framework provides the SqsMessagingMessageConverter
, which implements the MessagingMessageConverter
interface.
public interface MessagingMessageConverter<S> {
Message<?> toMessagingMessage(S source);
S fromMessagingMessage(Message<?> message);
}
The default header-based type mapping can be configured to use a different header name by using the setPayloadTypeHeader
method.
It is also possible not to include payload type information in the header by using the doNotSendPayloadTypeHeader
method.
More complex mapping can be achieved by using the setPayloadTypeMapper
method, which overrides the default header-based mapping.
This method receives a Function<Message<?>, Class<?>> payloadTypeMapper
that will be applied to incoming messages.
The default MappingJackson2MessageConverter
can be replaced by using the setPayloadMessageConverter
method.
The framework also provides the SqsHeaderMapper
, which implements the HeaderMapper
interface and is invoked by the SqsMessagingMessageConverter
.
To provide a different HeaderMapper
implementation, use the setHeaderMapper
method.
An example of such configuration follows:
// Create converter instance
SqsMessagingMessageConverter messageConverter = new SqsMessagingMessageConverter();
// Configure Type Header
messageConverter.setPayloadTypeHeader("myTypeHeader");
// Do not send Type Header
messageConverter.doNotSendPayloadTypeHeader();
// Configure Header Mapper
SqsHeaderMapper headerMapper = new SqsHeaderMapper();
headerMapper.setAdditionalHeadersFunction(((sqsMessage, accessor) -> {
accessor.setHeader("myCustomHeader", "myValue");
return accessor.toMessageHeaders();
}));
messageConverter.setHeaderMapper(headerMapper);
// Configure Payload Converter
MappingJackson2MessageConverter payloadConverter = new MappingJackson2MessageConverter();
payloadConverter.setPrettyPrint(true);
messageConverter.setPayloadMessageConverter(payloadConverter);
// Set MessageConverter to the factory or container
factory.configure(options -> options.messageConverter(messageConverter));
8.6.2. Interfaces and Subclasses in Listener Methods
Interfaces and subclasses can be used in @SqsListener
annotated methods by configuring a type mapper
:
messageConverter.setPayloadTypeMapper(message -> {
String eventTypeHeader = message.getHeaders().get("myEventTypeHeader", String.class);
return "eventTypeA".equals(eventTypeHeader)
? MyTypeA.class
: MyTypeB.class;
});
And then, in the listener method:
@SpringBootApplication
public class SqsApplication {
public static void main(String[] args) {
SpringApplication.run(SqsApplication.class, args);
}
// Retrieve the converted payload
@SqsListener("myQueue")
public void listen(MyInterface message) {
System.out.println(message);
}
// Or retrieve a Message with the converted payload
@SqsListener("myOtherQueue")
public void listen(Message<MyInterface> message) {
System.out.println(message);
}
}
8.7. Acknowledging Messages
In SQS
acknowledging a message is the same as deleting the message from the queue.
A number of Acknowledgement
strategies are available and can be configured via SqsContainerOptions
.
Optionally, a callback action can be added to be executed after either a successful or failed acknowledgement.
Here’s an example of a possible configuration:
@Bean
SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory(SqsAsyncClient sqsAsyncClient) {
return SqsMessageListenerContainerFactory
.builder()
.configure(options -> options
.acknowledgementMode(AcknowledgementMode.ALWAYS)
.acknowledgementInterval(Duration.ofSeconds(3))
.acknowledgementThreshold(5)
.acknowledgementOrdering(AcknowledgementOrdering.ORDERED)
)
.sqsAsyncClient(sqsAsyncClient)
.build();
}
Each option is explained in the following sections.
All options are available for both single message and batch message listeners.
|
8.7.1. Acknowledgement Mode
-
ON_SUCCESS
- Acknowledges a message or batch of messages after successful processing. -
ALWAYS
- Acknowledges a message or batch of messages after processing returns success or error. -
MANUAL
- The framework won’t acknowledge messages automatically andAcknowledgement
objects can be received in the listener method.
The Acknowledgement
strategy can be configured in the SqsContainerOptions
or in the @SqsListener
annotation.
8.7.2. Acknowledgement Batching
The acknowledgementInterval
and acknowledgementThreshold
options enable acknowledgement batching.
Acknowledgements will be executed after either the amount of time specified in the interval
or the number of messages to acknowledge reaches the threshold
.
Setting acknowledgementInterval
to Duration.ZERO
will disable the periodic acknowledgement, which will be executed only when the number of messages to acknowledge reaches the specified acknowledgementThreshold
.
Setting acknowledgementThreshold
to 0
will disable acknowledging per number of messages, and messages will be acknowledged only on the specified acknowledgementInterval.
When using acknowledgement batching messages stay inflight for SQS purposes until their respective batch is acknowledged. MessageVisibility should be taken into consideration when configuring this strategy.
|
Immediate Acknowledging
Setting both acknowledgementInterval
and acknowledgementThreshold
to Duration.ZERO
and 0
respectively enables Immediate Acknowledging
.
With this configuration, messages are acknowledged sequentially after being processed, and the message is only considered processed after the message is successfully acknowledged.
If an immediate acknowledging triggers an error, message processing is considered failed and will be retried after the specified visibilityTimeout .
|
8.7.3. Manual Acknowledgement
Acknowledgements can be handled manually by setting AcknowledgementMode.MANUAL
in the SqsContainerOptions
.
Manual acknowledgement can be used in conjunction with acknowledgement batching - the message will be queued for acknowledgement but won’t be executed until one of the acknowledgement thresholds is reached.
It can also be used in conjunction with immediate acknowledgement.
The Acknowledgement#acknowledge
and Acknowledgement#acknowledgeAsync
methods are also available to acknowledge messages received in MANUAL
acknowledgement mode.
The following arguments can be used in listener methods to manually acknowledge:
Acknowledgement
The Acknowledgement
interface can be used to acknowledge messages in ListenerMode.SINGLE_MESSAGE
.
public interface Acknowledgement {
/**
* Acknowledge the message.
*/
void acknowledge();
/**
* Asynchronously acknowledge the message.
*/
CompletableFuture<Void> acknowledgeAsync();
}
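A manual acknowledgement listener might look like the following sketch, using the acknowledgementMode annotation attribute described in Other Annotation Properties:
@SqsListener(queueNames = "myQueue", acknowledgementMode = "MANUAL")
public void listen(Message<MyPojo> message, Acknowledgement acknowledgement) {
    System.out.println(message.getPayload());
    // Acknowledge (delete) the message only after processing succeeds
    acknowledgement.acknowledge();
}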
BatchAcknowledgement
The BatchAcknowledgement
interface can be used to acknowledge messages in ListenerMode.BATCH
.
The acknowledge(Collection<Message<T>>)
method enables acknowledging partial batches.
public interface BatchAcknowledgement<T> {
/**
* Acknowledge all messages from the batch.
*/
void acknowledge();
/**
* Asynchronously acknowledge all messages from the batch.
*/
CompletableFuture<Void> acknowledgeAsync();
/**
* Acknowledge the provided messages.
*/
void acknowledge(Collection<Message<T>> messagesToAcknowledge);
/**
* Asynchronously acknowledge the provided messages.
*/
CompletableFuture<Void> acknowledgeAsync(Collection<Message<T>> messagesToAcknowledge);
}
8.7.4. Acknowledgement Ordering
-
PARALLEL
- Acknowledges the messages as soon as one of the above criteria is met - many acknowledgement calls can be made in parallel. -
ORDERED
- One batch of acknowledgements will be executed after the previous one is completed, ensuringFIFO
ordering forbatching
acknowledgements. -
ORDERED_BY_GROUP
- One batch of acknowledgements will be executed after the previous one for the same group is completed, ensuringFIFO
ordering of acknowledgements with parallelism between message groups. Only available forFIFO
queues.
8.7.5. Acknowledgement Defaults
The defaults for acknowledging differ for Standard
and FIFO
SQS queues.
Standard SQS
-
Acknowledgement Interval: One second
-
Acknowledgement Threshold: Ten messages
-
Acknowledgement Ordering:
PARALLEL
FIFO SQS
-
Acknowledgement Interval: Zero (Immediate)
-
Acknowledgement Threshold: Zero (Immediate)
-
Acknowledgement Ordering:
PARALLEL
if immediate acknowledgement,ORDERED
if batching is enabled (one or both above defaults are overridden).
PARALLEL is the default for FIFO because ordering is guaranteed for processing.
This assures no messages from a given MessageGroup will be polled until the previous batch is acknowledged.
|
8.7.6. Acknowledgement Result Callback
The framework offers the AcknowledgementResultCallback
and AsyncAcknowledgementResultCallback
interfaces that can be added to a SqsMessageListenerContainer
or SqsMessageListenerContainerFactory
.
Implementations of these interfaces are executed after an acknowledgement execution completes with either success or failure.
public interface AcknowledgementResultCallback<T> {
default void onSuccess(Collection<Message<T>> messages) {
}
default void onFailure(Collection<Message<T>> messages, Throwable t) {
}
}
public interface AsyncAcknowledgementResultCallback<T> {
default CompletableFuture<Void> onSuccess(Collection<Message<T>> messages) {
return CompletableFuture.completedFuture(null);
}
default CompletableFuture<Void> onFailure(Collection<Message<T>> messages, Throwable t) {
return CompletableFuture.completedFuture(null);
}
}
@Bean
public SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory(SqsAsyncClient sqsAsyncClient) {
return SqsMessageListenerContainerFactory
.builder()
.sqsAsyncClient(sqsAsyncClient)
.acknowledgementResultCallback(getAcknowledgementResultCallback())
.build();
}
When immediate acknowledgement is set, as is the default for FIFO queues, the callback will be executed before the next message in the batch is processed, and next message processing will wait for the callback completion.
This can be useful for taking action such as retrying to delete the messages, or stopping the container to prevent duplicate processing in case an acknowledgement fails in a FIFO queue.
For batched, parallel acknowledgements, as is the default for Standard queues, the callback execution happens asynchronously.
|
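A minimal callback implementation might look like the following sketch; the class and logger names are illustrative:
public class LoggingAcknowledgementResultCallback implements AcknowledgementResultCallback<Object> {
    private static final Logger logger = LoggerFactory.getLogger(LoggingAcknowledgementResultCallback.class);
    @Override
    public void onSuccess(Collection<Message<Object>> messages) {
        logger.debug("Acknowledged {} messages", messages.size());
    }
    @Override
    public void onFailure(Collection<Message<Object>> messages, Throwable t) {
        logger.error("Failed to acknowledge {} messages", messages.size(), t);
    }
}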
8.8. Global Configuration for @SqsListeners
A set of configurations can be set for all containers from @SqsListener
by providing SqsListenerConfigurer
beans.
@FunctionalInterface
public interface SqsListenerConfigurer {
void configure(EndpointRegistrar registrar);
}
The following attributes can be configured in the registrar:
-
setMessageHandlerMethodFactory
- provide a different factory to be used to create theinvocableHandlerMethod
instances that wrap the listener methods. -
setListenerContainerRegistry
- provide a differentMessageListenerContainerRegistry
implementation to be used to register theMessageListenerContainers
-
setMessageListenerContainerRegistryBeanName
- provide a different bean name to be used to retrieve theMessageListenerContainerRegistry
-
setObjectMapper
- set the ObjectMapper
instance that will be used to deserialize payloads in listener methods. See Message Conversion and Payload Deserialization for more information on where this is used. -
setValidator
- set the Validator
instance that will be used for payload validation in listener methods. -
manageMessageConverters
- gives access to the list of message converters that will be used to convert messages. By default, StringMessageConverter
, SimpleMessageConverter
and MappingJackson2MessageConverter
are used. -
manageArgumentResolvers
- gives access to the list of argument resolvers that will be used to resolve the listener method arguments. The order of resolvers is important - PayloadMethodArgumentResolver
should generally be last since it’s used as default.
A simple example would be:
@Bean
SqsListenerConfigurer configurer(ObjectMapper objectMapper) {
return registrar -> registrar.setObjectMapper(objectMapper);
}
Any number of SqsListenerConfigurer beans can be registered in the context.
All instances will be looked up at application startup and iterated through.
|
8.9. Message Processing Throughput
The following options are available for tuning the application’s throughput.
When a configuration is available both in the SqsContainerOptions
and @SqsListener
annotation, the annotation value takes precedence, if any.
8.9.1. SqsContainerOptions and @SqsListener
properties
maxConcurrentMessages
Can be set in either the SqsContainerOptions
or the @SqsListener
annotation.
Represents the maximum number of messages being processed by the container at a given time.
Defaults to 10.
This value is enforced per queue, meaning the number of inflight messages in a container can be up to (number of queues in container * maxConcurrentMessages).
When using acknowledgement batching, a message is considered as no longer inflight when it’s delivered to the acknowledgement queue. In this case, the actual number of inflight messages on AWS SQS console can be higher than the configured value. When using immediate acknowledgement, a message is considered as no longer inflight after it’s been acknowledged or throws an error. |
maxMessagesPerPoll
Set in SqsContainerOptions
or the @SqsListener
annotation.
Represents the maximum number of messages returned by a single poll to a SQS queue, to a maximum of 10.
This value has to be less than or equal to maxConcurrentMessages
.
Defaults to 10.
Note that even if the queue has more messages, a poll can return less messages than specified. See the AWS documentation for more information.
pollTimeout
Can be set in either the SqsContainerOptions
or the @SqsListener
annotation.
Represents the maximum duration of a poll.
Higher values represent long polls
and increase the probability of receiving full batches of messages.
Defaults to 10 seconds.
maxDelayBetweenPolls
Set in SqsContainerOptions
.
Represents the maximum amount of time the container will wait for maxMessagesPerPoll
permits to be available before trying to acquire a partial batch if so configured.
This wait is applied per queue and one queue has no interference in another in this regard.
Defaults to 10 seconds.
pollBackOffPolicy
Since 3.2 it’s possible to specify a BackOffPolicy
which will be applied when a polling thread throws an exception.
The default policy is an exponential back off with a delay of 1000ms, a 2.0 multiplier, and a 10000ms maximum delay.
Note that in highly concurrent environments with many polling threads it may happen that a successful poll cancels the next scheduled backoff before it happens, and as such no back offs need to be executed.
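These options can also be set programmatically when defining the container factory. The following is a minimal sketch (the values are illustrative) of overriding the default factory with tuned throughput options:
@Bean
SqsMessageListenerContainerFactory<Object> defaultSqsListenerContainerFactory(SqsAsyncClient sqsAsyncClient) {
    return SqsMessageListenerContainerFactory.builder()
        .configure(options -> options
            .maxConcurrentMessages(20)
            .maxMessagesPerPoll(10)
            .pollTimeout(Duration.ofSeconds(10))
            .maxDelayBetweenPolls(Duration.ofSeconds(5)))
        .sqsAsyncClient(sqsAsyncClient)
        .build();
}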
8.9.2. Default Polling Behavior
By default, the framework starts all queues in low throughput mode
, where it will perform one poll for messages at a time.
When a poll returns at least one message, the queue enters a high throughput mode
where it will try to fulfill maxConcurrentMessages
messages by making (maxConcurrentMessages / maxMessagesPerPoll) parallel polls to the queue.
Any poll that returns no messages will trigger a low throughput mode
again, until at least one message is returned, triggering high throughput mode
again, and so forth.
After maxDelayBetweenPolls
, if maxMessagesPerPoll
permits are not available, it’ll poll for the difference, i.e. as many messages as have been processed so far, if any.
E.g. Let’s consider a scenario where the container is configured for: maxConcurrentMessages
= 20, maxMessagesPerPoll
= 10, maxDelayBetweenPolls
= 5 seconds, and a pollTimeout
= 10 seconds.
The container starts in low throughput mode
, meaning it’ll attempt a single poll for 10 messages.
If any messages are returned, it’ll switch to high throughput mode
, and will make up to 2 simultaneous polls for 10 messages each.
If all 20 messages are retrieved, it’ll not attempt any more polls until messages are processed.
If after the 5 seconds for maxDelayBetweenPolls
6 messages have been processed, the framework will poll for the 6 messages.
If the queue is depleted and a poll returns no messages, it’ll enter low throughput
mode again and perform only one poll at a time.
8.9.3. Configuring BackPressureMode
The following BackPressureMode
values can be set in SqsContainerOptions
to configure polling behavior:
- AUTO - The default mode, as described in the previous section.
- ALWAYS_POLL_MAX_MESSAGES - Disables partial batch polling, i.e. if the container is configured for 10 messages per poll, it'll wait for 10 messages to be processed before attempting to poll for the next 10 messages. Useful for optimizing for fewer polls at the expense of throughput.
- FIXED_HIGH_THROUGHPUT - Disables low throughput mode, while still attempting partial batch polling as described in the previous section. Useful for really high throughput scenarios where the risk of making parallel polls to an idle queue is preferable to an eventual switch to low throughput mode.
The AUTO setting should be balanced for most use cases, including high throughput ones.
|
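As a minimal sketch (all other settings left at their defaults), the back pressure mode can be set on the container factory options:
@Bean
SqsMessageListenerContainerFactory<Object> highThroughputFactory(SqsAsyncClient sqsAsyncClient) {
    return SqsMessageListenerContainerFactory.builder()
        // disable low throughput mode; parallel polls are always attempted
        .configure(options -> options.backPressureMode(BackPressureMode.FIXED_HIGH_THROUGHPUT))
        .sqsAsyncClient(sqsAsyncClient)
        .build();
}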
8.10. Blocking and Non-Blocking (Async) Components
The SQS integration leverages the CompletableFuture
-based async capabilities of AWS SDK 2.0
to deliver a fully non-blocking infrastructure.
All processing involved in polling for messages, changing message visibilities and acknowledging messages is done in an async, non-blocking fashion. This allows a higher overall throughput for the application.
When a MessageListener, MessageInterceptor, or ErrorHandler implementation is set to a MessageListenerContainer or MessageListenerContainerFactory, it is adapted by the framework. This way, blocking and non-blocking components can be used in conjunction with each other.
Listener methods annotated with @SqsListener
can either return a simple value, e.g. void
, or a CompletableFuture<Void>
.
The listener method will then be wrapped in either a MessagingMessageListenerAdapter
or a AsyncMessagingMessageListenerAdapter
respectively.
In order to achieve higher throughput, it's encouraged that the async variants are used, at least for simpler logic in message listeners, interceptors, and error handlers.
|
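As a minimal sketch, an async listener method could look like the following (the queue name and processing logic are illustrative):
@Component
public class AsyncMessageListener {

    @SqsListener("sample-async-queue")
    public CompletableFuture<Void> listen(String payload) {
        // return a future; the framework acknowledges the message when it completes successfully
        return CompletableFuture.runAsync(() -> process(payload));
    }

    private void process(String payload) {
        // non-blocking or lightweight processing
    }
}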
8.10.1. Threading and Blocking Components
Message processing always starts in a framework thread from the default or provided TaskExecutor
.
If an async component is invoked and the execution returns to the framework on a different thread, such thread will be used until a blocking
component is found, when the execution switches back to a TaskExecutor
thread to avoid blocking i.e. SqsAsyncClient
or HttpClient
threads.
If by the time the execution reaches a blocking
component it’s already on a framework thread, it remains in the same thread to avoid excessive thread allocation and hopping.
When using async methods it’s critical not to block the incoming thread, which might be very detrimental to overall performance.
If thread-blocking logic has to be used, the blocking logic should be executed on another thread, e.g. using CompletableFuture.supplyAsync(() -> myLogic(), myExecutor).
Otherwise, a sync interface should be used.
|
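A minimal sketch of offloading blocking work from an async listener, assuming a hypothetical myLogic method and a dedicated myExecutor, could look like this:
@SqsListener("sample-queue")
public CompletableFuture<Void> listen(String payload) {
    // execute the thread-blocking logic on a dedicated executor instead of the incoming thread
    return CompletableFuture.supplyAsync(() -> myLogic(payload), myExecutor)
        .thenAccept(result -> { /* handle result without blocking */ });
}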
8.10.2. Providing a TaskExecutor
The default TaskExecutor is a ThreadPoolTaskExecutor, and a different executor can be provided through the componentsTaskExecutor option in the SqsContainerOptions.
When providing a custom executor, it’s important that it’s configured to support all threads that will be created, which should be (maxConcurrentMessages * total number of queues).
To avoid unnecessary thread hopping between blocking components, a MessageExecutionThreadFactory MUST be set to the executor.
|
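A minimal sketch of providing a custom executor, assuming the componentsTaskExecutor option in SqsContainerOptions, could look like this:
@Bean
SqsMessageListenerContainerFactory<Object> factoryWithCustomExecutor(SqsAsyncClient sqsAsyncClient) {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    // MessageExecutionThreadFactory avoids unnecessary thread hopping between blocking components
    executor.setThreadFactory(new MessageExecutionThreadFactory());
    // size the pool for (maxConcurrentMessages * total number of queues)
    executor.setCorePoolSize(20);
    executor.setMaxPoolSize(20);
    executor.afterPropertiesSet();
    return SqsMessageListenerContainerFactory.builder()
        .configure(options -> options.componentsTaskExecutor(executor))
        .sqsAsyncClient(sqsAsyncClient)
        .build();
}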
8.11. Client Customization
SqsAsyncClient
can be further customized by providing a bean of type SqsAsyncClientCustomizer
:
@Bean
SqsAsyncClientCustomizer customizer() {
return builder -> {
builder.overrideConfiguration(builder.overrideConfiguration().copy(c -> {
c.apiCallTimeout(Duration.ofMillis(1500));
}));
};
}
|
SqsAsyncClientCustomizer
is a functional interface that enables configuring SqsAsyncClientBuilder
before the SqsAsyncClient
is built in auto-configuration.
There can be multiple SqsAsyncClientCustomizer
beans present in single application context. @Order(..)
annotation can be used to define the order of the execution.
Note that SqsAsyncClientCustomizer
beans are applied after AwsAsyncClientCustomizer
beans and therefore can overwrite previously set configurations.
8.12. IAM Permissions
The following IAM permissions are required by Spring Cloud AWS SQS:
Action | IAM permission |
---|---|
Send message to queue | sqs:SendMessage |
Receive message from queue | sqs:ReceiveMessage |
Delete message from queue | sqs:DeleteMessage |
Use @SqsListener with SimpleMessageListenerContainerFactory | sqs:GetQueueAttributes |
Use @SqsListener with a queue name instead of an ARN | sqs:GetQueueUrl |
Sample IAM policy granting access to SQS:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sqs:DeleteMessage",
"sqs:ReceiveMessage",
"sqs:SendMessage",
"sqs:GetQueueAttributes",
"sqs:GetQueueUrl"
],
"Resource": "yourARN"
}
9. Instance Metadata Service Integration
Spring Cloud AWS applications can use the Instance Metadata Service (IMDS) to acquire EC2 instance metadata when running within an EC2-based compute environment. This metadata can be used for a wide variety of reasons, including detecting the availability zone, public IP address, MAC address, and so on. When available, properties can be referenced using the @Value annotation:
@Value("${placement/availability-zone}") String availabilityZone;
@Value("${public-ipv4}") String publicIPAddress;
@Value("${mac}") String macAddress;
A full list of instance metadata tags is available in the AWS reference documentation at AWS EC2 User Guide - Instance Metadata Categories. Spring Cloud AWS always retrieves the "latest" categories of metadata and removes the prefix so that "/latest/meta-data/instance-id" is available as "instance-id". The "spring.cloud.aws" prefix is also omitted.
9.1. Enabling
To enable instance metadata, add the spring-cloud-aws-starter-imds starter.
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-starter-imds</artifactId>
</dependency>
This adds the software.amazon.awssdk/imds dependency to the classpath, which is used to query the IMDS. Depending on resources, metadata loading can add a half-second delay to application start time. Loading can be explicitly disabled by setting the spring.cloud.aws.imds.enabled property:
spring.cloud.aws.imds.enabled=false
Instance metadata is generally available on any EC2-based compute environment, which includes EC2, Elastic Beanstalk, Elastic Container Service (ECS), Elastic Kubernetes Service (EKS), etc. It is not available in non-EC2 environments such as Lambda or Fargate. Even within EC2-based compute environments instance metadata may be disabled or may be subject to an internal firewall which prohibits it. Whenever instance metadata is unavailable, including when running on a local environment, the autoconfiguration process silently ignores its absence.
9.2. Considerations
Instance metadata is retrieved on a best effort basis and not all keys are always available. For example, the "ipv6" key would only be present if IPv6 addresses were being used, "public-hostname" would only be available for instances running in public subnets with DNS hostnames enabled.
Instance metadata is retrieved at application start time and is not updated as the application runs. Both IMDS v1 and v2 are supported. Certain keys / ranges are not retrieved, including "block-device-mapping/*", "events/*", "iam/security-credentials/*", "network/interfaces/*", "public-keys/*", "spot/*", for various reasons including security. For example, some keys such as "spot/termination-time" are only reliable if polled on an interval; presenting their static values obtained at startup time would be deceptive. If you have such a requirement, consider polling the key yourself using the Ec2MetadataClient from the SDK:
...
@Autowired Ec2MetadataClient client;
client.get("/latest/meta-data/spot/termination-time");
10. Secrets Manager Integration
Secrets Manager helps to protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
Spring Cloud AWS adds support for loading configuration properties from Secrets Manager through Spring Boot config import feature.
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-starter-secrets-manager</artifactId>
</dependency>
10.1. Loading External Configuration
To fetch secrets from Secrets Manager and add them to Spring’s environment properties, add spring.config.import
property to application.properties
:
For example, assuming that the secret name in Secrets Manager is /secrets/database-secrets
:
spring.config.import=aws-secretsmanager:/secrets/database-secrets
If a secret with the given name does not exist in Secrets Manager, the application will fail to start. If the secret value is not required for the application, and it should continue to start up even when the secret is missing, add optional:
before the prefix:
spring.config.import=optional:aws-secretsmanager:/secrets/database-secrets
To load multiple secrets, separate their names with ;
:
spring.config.import=aws-secretsmanager:/secrets/database-secrets;/secrets/webclient-secrets
If you have the requirement to load secrets from another AWS account you can achieve this using the secret’s ARN.
spring.config.import=aws-secretsmanager:arn:aws:secretsmanager:eu-central-1:0123456789:secret:secrets/database-secrets
If some secrets are required, and other ones are optional, list them as separate entries in the spring.config.import
property:
spring.config.import[0]=aws-secretsmanager:/secrets/required-secret
spring.config.import[1]=optional:aws-secretsmanager:/secrets/optional-secret
Fetched secrets can be referenced with @Value
, bound to @ConfigurationProperties
classes, or referenced in application.properties
file.
10.1.1. Using Key-Value (JSON) Secrets
Secrets resolved with spring.config.import can also be referenced in application.properties.
When the content of SecretString is in JSON format, all top-level JSON keys are added as properties to the Spring Environment.
For example, with a file mycreds.json containing the following JSON structure:
{
"username": "saanvi",
"password": "EXAMPLE-PASSWORD"
}
The secret is created with the following command:
$ aws secretsmanager create-secret --name /secrets/database-secrets --secret-string file://mycreds.json
spring.config.import
entry is added to application.properties
:
spring.config.import=aws-secretsmanager:/secrets/database-secrets
Secret values can be referenced by JSON key names:
@Value("${username}"
private String username;
@Value("${password}"
private String password;
10.1.2. Using plain text secrets
If a SecretString is plain text or if you are using SecretBinary, use the secret name to retrieve its value.
For example, we will use a JDBC URL saved as a plain text secret with the name /secrets/prod/jdbc-url:
$ aws secretsmanager create-secret --name /secrets/prod/jdbc-url --secret-string jdbc:url
spring.config.import
entry is added to application.properties
:
spring.config.import=aws-secretsmanager:/secrets/prod/jdbc-url
Secret value can be retrieved by referencing secret name:
spring.datasource.url=${jdbc-url}
10.1.3. Adding prefix to property keys
To avoid property key collisions it is possible to configure a property key prefix that gets added to each resolved property from a secret.
As an example, let's consider the following JSON secret with the name /secrets/database-secrets
:
{
"username": "saanvi",
"password": "EXAMPLE-PASSWORD"
}
By default, username
and password
properties will be added to the Spring environment. To add a prefix to property keys configure spring.config.import
property with ?prefix=
added to the secret name:
spring.config.import=optional:aws-secretsmanager:/secrets/database-secrets?prefix=db.
With such config, properties db.username
and db.password
are added to the Spring environment.
Prefixes are added as-is to all property names returned by Secrets Manager. If you want key names to be separated with a dot between the prefix and key name, make sure to add a trailing dot to the prefix. |
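As a sketch, the prefixed properties could then be bound to a @ConfigurationProperties class (the class name is illustrative):
@ConfigurationProperties(prefix = "db")
public class DatabaseProperties {

    private String username;
    private String password;

    // getters and setters
}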
10.2. Using SecretsManagerClient
The starter automatically configures and registers a SecretsManagerClient
bean in the Spring application context. The SecretsManagerClient
bean can be used to create or retrieve secrets imperatively.
...
@Autowired
private SecretsManagerClient secretsManagerClient;
...
secretsManagerClient.createSecret(CreateSecretRequest.builder().name(name).secretString(secret).build());
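Secrets can be retrieved imperatively in a similar way; a minimal sketch using the SDK's GetSecretValueRequest (the secret name is illustrative):
GetSecretValueResponse response = secretsManagerClient.getSecretValue(
    GetSecretValueRequest.builder().secretId("/secrets/database-secrets").build());
String secretString = response.secretString();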
10.3. Customizing SecretsManagerClient
To use custom SecretsManagerClient
in spring.config.import
, provide an implementation of BootstrapRegistryInitializer
. For example:
package com.app;
public class SecretsManagerBootstrapConfiguration implements BootstrapRegistryInitializer {
@Override
public void initialize(BootstrapRegistry registry) {
registry.register(SecretsManagerClient.class, context -> {
AwsCredentialsProvider awsCredentialsProvider = StaticCredentialsProvider.create(AwsBasicCredentials.create("yourAccessKey", "yourSecretKey"));
return SecretsManagerClient.builder().credentialsProvider(awsCredentialsProvider).region(Region.EU_WEST_2).build();
});
}
}
Note that this class must be listed under org.springframework.boot.BootstrapRegistryInitializer
key in META-INF/spring.factories
:
org.springframework.boot.BootstrapRegistryInitializer=com.app.SecretsManagerBootstrapConfiguration
If you want to use the autoconfigured SecretsManagerClient but change the underlying SDK client or the ClientOverrideConfiguration, you will need to register a bean of type SecretsManagerClientCustomizer.
Autoconfiguration will then configure the SecretsManagerClient bean with the provided values, for example:
package com.app;
class SecretsManagerBootstrapConfiguration implements BootstrapRegistryInitializer {
@Override
public void initialize(BootstrapRegistry registry) {
registry.register(SecretsManagerClientCustomizer.class, context -> (builder -> {
builder.overrideConfiguration(builder.overrideConfiguration().copy(c -> {
c.apiCallTimeout(Duration.ofMillis(2001));
}));
}));
}
}
10.4. PropertySource Reload
Some applications may need to detect changes on external property sources and update their internal status to reflect the new configuration. The reload feature of Spring Cloud AWS Secrets Manager integration is able to trigger an application reload when a related secret value changes.
By default, this feature is disabled. You can enable it by using the spring.cloud.aws.secretsmanager.reload.strategy
configuration property (for example, in the application.properties
file) and adding the following dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-commons</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-context</artifactId>
</dependency>
The following levels of reload are supported (by setting the spring.cloud.aws.secretsmanager.reload.strategy
property):
- refresh (default): Only configuration beans annotated with @ConfigurationProperties or @RefreshScope are reloaded. This reload level leverages the refresh feature of Spring Cloud Context.
- restart_context: the whole Spring ApplicationContext is gracefully restarted. Beans are recreated with the new configuration. In order for the restart context functionality to work properly you must enable and expose the restart actuator endpoint:
management:
  endpoint:
    restart:
      enabled: true
  endpoints:
    web:
      exposure:
        include: restart
Assuming that the reload feature is enabled with default settings (refresh
mode), the following bean is refreshed when the secret changes:
@Configuration
@ConfigurationProperties(prefix = "bean")
public class MyConfig {

    private String message = "a message that can be changed live";

    // getters and setters
}
To see that changes effectively happen, you can create another bean that prints the message periodically, as follows:
@Component
public class MyBean {
@Autowired
private MyConfig config;
@Scheduled(fixedDelay = 5000)
public void hello() {
System.out.println("The message is: " + config.getMessage());
}
}
The reload feature periodically re-creates the configuration from Secrets Manager to see if it has changed.
You can configure the polling period by using the spring.cloud.aws.secretsmanager.reload.period property (default value is 1 minute).
10.5. Configuration
The Spring Boot Starter for Secrets Manager provides the following configuration options:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.aws.secretsmanager.enabled | Enables the Secrets Manager integration. | No | true |
spring.cloud.aws.secretsmanager.endpoint | Configures the endpoint used by the Secrets Manager client. | No | |
spring.cloud.aws.secretsmanager.region | Configures the region used by the Secrets Manager client. | No | |
spring.cloud.aws.secretsmanager.reload.strategy | The strategy to use when firing a reload (refresh or restart_context). | No | refresh |
spring.cloud.aws.secretsmanager.reload.period | The period for verifying changes. | No | 1m |
spring.cloud.aws.secretsmanager.reload.max-wait-for-restart | The maximum time between the detection of changes in property source and the application context restart when the restart_context strategy is used. | No | |
10.6. IAM Permissions
The following IAM permissions are required by Spring Cloud AWS:
Get secret value | secretsmanager:GetSecretValue |
Sample IAM policy granting access to Secrets Manager:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "secretsmanager:GetSecretValue",
"Resource": "yourArn"
}
]
}
11. Parameter Store Integration
Spring Cloud AWS adds support for loading configuration properties from Parameter Store through Spring Boot config import feature.
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-starter-parameter-store</artifactId>
</dependency>
11.1. Loading External Configuration
To fetch parameters from Parameter Store and add them to Spring’s environment properties, add spring.config.import
property to application.properties
:
For example, assuming that the parameters in Parameter Store are stored under path /config/spring/
:
Parameter Name | Parameter Value |
---|---|
|
|
|
Once spring.config.import
statement is added:
spring.config.import=aws-parameterstore:/config/spring/
Two parameters are added to the environment: message and httpUrl.
If a given path in Parameter Store does not exist, the application will fail to start. If the parameters retrieved from Parameter Store are not required for the application, and it should continue to start up even when the path is missing, add optional:
before the prefix:
spring.config.import=optional:aws-parameterstore:/config/spring/
To load parameters from multiple paths, separate their names with ;
:
spring.config.import=aws-parameterstore:/config/spring/;/config/app/
If some parameters are required, and other ones are optional, list them as separate entries in the spring.config.import
property:
spring.config.import[0]=aws-parameterstore:/config/spring/
spring.config.import[1]=optional:aws-parameterstore:/config/optional-params/
If you add indexed parameter names such as /config/application/cloud.aws.stack_0_.name
, /config/application/cloud.aws.stack_1_.name
, … to Parameter Store,
these will become accessible as array properties cloud.aws.stack[0].name
, cloud.aws.stack[1].name
, … in Spring.
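As a sketch, such indexed parameters could be bound to a list in a @ConfigurationProperties class (the class and nested type names are illustrative):
@Configuration
@ConfigurationProperties(prefix = "cloud.aws")
public class StackProperties {

    private List<Stack> stack = new ArrayList<>();

    public List<Stack> getStack() { return stack; }
    public void setStack(List<Stack> stack) { this.stack = stack; }

    public static class Stack {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
}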
11.1.1. Adding prefix to property keys
To avoid property key collisions it is possible to configure a property key prefix that gets added to each resolved parameter.
As an example, assuming the following parameters are stored under path /config/my-datasource/
:
Parameter Name | Parameter Value |
---|---|
|
|
|
|
By default, url
and username
properties will be added to the Spring environment. To add a prefix to property keys configure spring.config.import
property with ?prefix=
added to the parameter path:
spring.config.import=aws-parameterstore:/config/my-datasource/?prefix=spring.datasource.
With such config, properties spring.datasource.url
and spring.datasource.username
are added to the Spring environment.
Prefixes are added as-is to all property names returned by Parameter Store. If you want key names to be separated with a dot between the prefix and key name, make sure to add a trailing dot to the prefix. |
11.2. Using SsmClient
The starter automatically configures and registers a SsmClient
bean in the Spring application context. The SsmClient
bean can be used to create or retrieve parameters from Parameter Store.
...
@Autowired
private SsmClient ssmClient;
...
ssmClient.getParametersByPath(request -> request.path("/config/spring/")).parameters();
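Creating a parameter works the same way; a minimal sketch using the SDK's putParameter (the parameter name and value are illustrative):
ssmClient.putParameter(request -> request.name("/config/spring/message")
    .value("Welcome")
    .type(ParameterType.STRING)
    .overwrite(true));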
11.3. Customizing SsmClient
To use custom SsmClient
in spring.config.import
, provide an implementation of BootstrapRegistryInitializer
. For example:
package com.app;
class ParameterStoreBootstrapConfiguration implements BootstrapRegistryInitializer {
@Override
public void initialize(BootstrapRegistry registry) {
registry.register(SsmClient.class, context -> {
AwsCredentialsProvider awsCredentialsProvider = StaticCredentialsProvider.create(AwsBasicCredentials.create("yourAccessKey", "yourSecretKey"));
return SsmClient.builder().credentialsProvider(awsCredentialsProvider).region(Region.EU_WEST_2).build();
});
}
}
Note that this class must be listed under org.springframework.boot.BootstrapRegistryInitializer
key in META-INF/spring.factories
:
org.springframework.boot.BootstrapRegistryInitializer=com.app.ParameterStoreBootstrapConfiguration
If you want to use the autoconfigured SsmClient but change the underlying SDK client or the ClientOverrideConfiguration, you will need to register a bean of type SsmClientCustomizer.
Autoconfiguration will then configure the SsmClient bean with the provided values, for example:
package com.app;
class ParameterStoreBootstrapConfiguration implements BootstrapRegistryInitializer {
@Override
public void initialize(BootstrapRegistry registry) {
registry.register(SsmClientCustomizer.class, context -> (builder -> {
builder.overrideConfiguration(builder.overrideConfiguration().copy(c -> {
c.apiCallTimeout(Duration.ofMillis(2000));
}));
}));
}
}
11.4. PropertySource Reload
Some applications may need to detect changes on external property sources and update their internal status to reflect the new configuration. The reload feature of Spring Cloud AWS Parameter Store integration is able to trigger an application reload when a related parameter value changes.
By default, this feature is disabled. You can enable it by using the spring.cloud.aws.parameterstore.reload.strategy
configuration property (for example, in the application.properties
file) and adding the following dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-commons</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-context</artifactId>
</dependency>
The following levels of reload are supported (by setting the spring.cloud.aws.parameterstore.reload.strategy
property):
- refresh (default): Only configuration beans annotated with @ConfigurationProperties or @RefreshScope are reloaded. This reload level leverages the refresh feature of Spring Cloud Context.
- restart_context: the whole Spring ApplicationContext is gracefully restarted. Beans are recreated with the new configuration. In order for the restart context functionality to work properly you must enable and expose the restart actuator endpoint:
management:
  endpoint:
    restart:
      enabled: true
  endpoints:
    web:
      exposure:
        include: restart
Assuming that the reload feature is enabled with default settings (refresh
mode), the following bean is refreshed when a parameter changes:
@Configuration
@ConfigurationProperties(prefix = "bean")
public class MyConfig {

    private String message = "a message that can be changed live";

    // getters and setters
}
To see that changes effectively happen, you can create another bean that prints the message periodically, as follows:
@Component
public class MyBean {
@Autowired
private MyConfig config;
@Scheduled(fixedDelay = 5000)
public void hello() {
System.out.println("The message is: " + config.getMessage());
}
}
The reload feature periodically re-creates the configuration from Parameter Store to see if it has changed.
You can configure the polling period by using the spring.cloud.aws.parameterstore.reload.period property (default value is 1 minute).
11.5. Configuration
The Spring Boot Starter for Parameter Store provides the following configuration options:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.aws.parameterstore.enabled | Enables the Parameter Store integration. | No | true |
spring.cloud.aws.parameterstore.endpoint | Configures the endpoint used by the Parameter Store client. | No | |
spring.cloud.aws.parameterstore.region | Configures the region used by the Parameter Store client. | No | |
spring.cloud.aws.parameterstore.reload.strategy | The strategy to use when firing a reload (refresh or restart_context). | No | refresh |
spring.cloud.aws.parameterstore.reload.period | The period for verifying changes. | No | 1m |
spring.cloud.aws.parameterstore.reload.max-wait-for-restart | The maximum time between the detection of changes in property source and the application context restart when the restart_context strategy is used. | No | |
11.6. IAM Permissions
The following IAM permissions are required by Spring Cloud AWS:
Get parameters from a specific path | ssm:GetParametersByPath |
Sample IAM policy granting access to Parameter Store:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ssm:GetParametersByPath",
"Resource": "yourArn"
}
]
}
12. CloudWatch Metrics
Spring Cloud AWS provides Spring Boot auto-configuration for Micrometer CloudWatch integration.
To send metrics to CloudWatch, add a dependency to the micrometer-registry-cloudwatch2
module:
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-cloudwatch2</artifactId>
</dependency>
Additionally, CloudWatch integration requires a value provided for management.cloudwatch.metrics.export.namespace
configuration property.
The following configuration properties are available to configure the CloudWatch integration:
property | default | description |
---|---|---|
management.cloudwatch.metrics.export.namespace | | The namespace which will be used when sending metrics to CloudWatch. This property is needed and must not be null. |
management.cloudwatch.metrics.export.step | 1m | The interval at which metrics are sent to CloudWatch. The default is 1 minute. |
spring.cloud.aws.cloudwatch.enabled | true | If CloudWatch integration should be enabled. This property should likely be set to false for a local development profile. |
spring.cloud.aws.cloudwatch.endpoint | | Overrides the default endpoint. |
spring.cloud.aws.cloudwatch.region | | The specific region for CloudWatch integration. |
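Once the registry is configured, metrics are recorded through the regular Micrometer API and published to CloudWatch under the configured namespace; a minimal sketch (the meter name is illustrative):
@Component
public class OrderMetrics {

    private final Counter ordersCreated;

    public OrderMetrics(MeterRegistry meterRegistry) {
        // counters registered here are exported to CloudWatch by the auto-configured registry
        this.ordersCreated = meterRegistry.counter("orders.created");
    }

    public void orderCreated() {
        ordersCreated.increment();
    }
}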
12.1. Client Customization
CloudWatchAsyncClient
can be further customized by providing a bean of type CloudWatchAsyncClientCustomizer
:
@Bean
CloudWatchAsyncClientCustomizer customizer() {
return builder -> {
builder.overrideConfiguration(builder.overrideConfiguration().copy(c -> {
c.apiCallTimeout(Duration.ofMillis(1500));
}));
};
}
|
CloudWatchAsyncClientCustomizer
is a functional interface that enables configuring CloudWatchAsyncClientBuilder
before the CloudWatchAsyncClient
is built in auto-configuration.
There can be multiple CloudWatchAsyncClientCustomizer
beans present in single application context. @Order(..)
annotation can be used to define the order of the execution.
Note that CloudWatchAsyncClientCustomizer
beans are applied after AwsAsyncClientCustomizer
beans and therefore can overwrite previously set configurations.
13. Spring Modulith Integration
Spring Cloud AWS comes with support for Spring Modulith externalized events.
The integration with Spring Modulith provides the capability to send externalized events to SNS and SQS. Read more about externalizing events in the Spring Modulith Reference Guide.
13.1. SNS
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-modulith-events-sns</artifactId>
</dependency>
The logical routing key will be used as the SNS message group id. When a routing key is set, the SNS topic is required to be configured as a FIFO topic with content-based deduplication enabled.
13.2. SQS
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-modulith-events-sqs</artifactId>
</dependency>
The logical routing key will be used as the SQS message group id. When a routing key is set, the SQS queue is required to be configured as a FIFO queue.
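A minimal sketch of an externalized event, assuming a hypothetical OrderCompleted event and an order-events queue, could look like this:
@Externalized("order-events")
record OrderCompleted(String orderId) {
}

@Service
class OrderService {

    private final ApplicationEventPublisher events;

    OrderService(ApplicationEventPublisher events) {
        this.events = events;
    }

    @Transactional
    public void complete(String orderId) {
        // the event is externalized to the configured queue after the transaction commits
        events.publishEvent(new OrderCompleted(orderId));
    }
}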
14. Testing
Spring Cloud AWS provides utilities for LocalStack Container that simplify using Testcontainers LocalStack module with Spring Cloud AWS based projects.
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-testcontainers</artifactId>
</dependency>
14.1. Service Connection
@ServiceConnection for the LocalStack container simplifies configuring a Spring Cloud AWS based project to point to LocalStack instead of real AWS.
Once Spring Cloud AWS detects a LocalStackContainer bean annotated with @ServiceConnection in application test code, it will automatically configure the region and credentials to point to the LocalStack container.
@Bean
@ServiceConnection
LocalStackContainer localStackContainer() {
return new LocalStackContainer(DockerImageName.parse("localstack/localstack:<version>"));
}
To understand in depth how service connection works, follow Spring Boot Reference Guide on this topic.
14.2. Using AWS Clients with LocalStack
Spring Cloud AWS provides LocalstackAwsClientFactory
that simplifies creating AWS clients pointing to LocalStack, when there is a need to configure an AWS client outside of Spring application context:
@Testcontainers
class LocalstackAwsClientFactoryTest {
@Container
private LocalStackContainer localStackContainer = new LocalStackContainer(
DockerImageName.parse("localstack/localstack:3.8.1"));
@Test
void aTest() {
LocalstackAwsClientFactory factory = new LocalstackAwsClientFactory(localStackContainer);
S3Client s3Client = factory.create(S3Client.builder());
// ...
}
}
15. Docker Compose
Spring Cloud AWS provides Docker Compose support for LocalStack docker images which simplifies local development of Spring Cloud AWS based projects.
Maven coordinates, using Spring Cloud AWS BOM:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-docker-compose</artifactId>
</dependency>
For more information about Spring Boot Docker Compose support, please refer to the official Spring Boot documentation.
15.1. Example docker-compose.yaml file
services:
localstack:
image: localstack/localstack
environment:
AWS_ACCESS_KEY_ID: noop
AWS_SECRET_ACCESS_KEY: noop
AWS_DEFAULT_REGION: eu-central-1
ports:
- "4566:4566"
AWS_ACCESS_KEY_ID
, AWS_SECRET_ACCESS_KEY
and AWS_DEFAULT_REGION
are required environment variables to ensure proper integration.
16. Migration from 2.x to 3.x
Migration guide is work in progress. |
Properties that have changed from 2.x to 3.x are listed below:
Version 2.x | Version 3.x |
---|---|
|
|
|
|
|
|
|
|
|
|
|
|
Properties that have been removed in 3.x are listed below:
- Cognito Properties
  - spring.cloud.aws.security.cognito.app-client-id
  - spring.cloud.aws.security.cognito.user-pool-id
  - spring.cloud.aws.security.algorithm
  - spring.cloud.aws.security.region
  - spring.cloud.aws.security.enabled
The same behaviour can be enabled using Spring Boot integration with Spring Security OAuth2 support:
spring.security.oauth2.resourceserver.jwt.issuer-uri=http://127.0.0.1:4566/us-east-1_f865f8979c4d4361b6af703db533dbb4
spring.security.oauth2.resourceserver.jwt.jwk-set-uri=http://127.0.0.1:4566/us-east-1_f865f8979c4d4361b6af703db533dbb4/.well-known/jwks.json
The example above sets the URIs to LocalStack's Cognito server.
16.1. Parameter store migration
In 2.x, parameter store items were resolved using a combination of the following settings:
- aws.paramstore.prefix (defaulted to /config)
- aws.paramstore.defaultContext for shared config items
- aws.paramstore.name for application-specific config items
For example, if all you had done was set aws.paramstore.enabled=true, and your spring.application.name was my-service, then parameter store items could be resolved from either /config/application (for shared config items) or /config/my-service (for application-specific config items).
If you overrode one or more of the settings, then there could potentially be other prefixes where the application would look for parameters.
In 3.x, the prefix is defined by the spring.config.import
property, and is no longer made up of a "prefix" or "name" component. For example, if reading properties for a service called my-service
, the parameter store should be configured like this:
# Still needed for other parts of spring boot
spring.application.name=my-service
# Needed for spring-cloud-aws-starter-parameter-store
spring.config.import=aws-parameterstore:/config/my-service/
To load multiple parameters from multiple paths, separate their names by ;
. For example, this would replicate a 2.x configuration that utilized both shared config items and application specific config items
spring.config.import=aws-parameterstore:/config/application/;/config/my-service/
The paths specified after the aws-parameterstore: scheme are used as-is for resolving parameter names. If you want parameter names to be separated with a slash between the prefix and property name, make sure to add a trailing slash to the prefix.
|
16.2. SQS Integration migration
In Spring Cloud AWS 2.x, when a collection of objects was passed as an argument to send or receive payloads, the framework serialized and deserialized the list in a single message. In Spring Cloud AWS 3.x, the way to send and receive object collections has changed, see Sending Messages.
17. Configuration properties
To see the list of all Spring Cloud AWS related configuration properties please check the Appendix page.