Working with CSFLE with AWS KMS in a Java Spring Boot application running on AWS Lambda

I have a Spring Boot application running on AWS Lambda with Java 17, MongoDB Atlas 7.0 and the MongoDB Java driver 4.11. I want to apply CSFLE on certain fields, using AWS KMS as the master key provider. An IAM role with access to the key has been created, and the same role is assigned to the AWS Lambda.

However, I am clueless about how to provide AWS KMS to the Mongo client. I went through some articles (mentioned below), but I still don't understand how to configure AWS KMS as the KMS provider. In some examples it is mentioned that we need to provide an AWS access key and secret key, but I can't do that, as I want to go with role-based access.

I had a look at the articles below:

A code snippet showing how CSFLE can be implemented with AWS KMS would be really helpful.

Hi @Vishal_Jamdade and thank you for your question!

If you are using a Lambda, please make sure to cache the connection correctly so you don't create a new connection to MongoDB each time the Lambda is triggered.
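As a sketch of that caching pattern (the names here are illustrative, not from the driver; in a real handler the cached object would be the `MongoClient` created from your connection string):

```java
import java.util.function.Supplier;

// Illustrative container-level cache: the expensive client is created once
// per Lambda container (cold start) and reused on every warm invocation.
// In real code the cached object would be a com.mongodb.client.MongoClient
// built via MongoClients.create(CONNECTION_STR).
final class ClientCache {
    private static Object cachedClient;

    static synchronized Object getOrCreate(Supplier<Object> factory) {
        if (cachedClient == null) {
            cachedClient = factory.get(); // runs only on the first invocation
        }
        return cachedClient;
    }
}
```

Because Lambda freezes and thaws the same container between invocations, static state like this survives across warm starts, so the connection pool is reused.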

The AWS KMS config is supposed to be done in the code when you create the kmsProviders.
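To make that concrete for the role-based case: in recent driver versions, leaving the inner "aws" document empty tells the driver to fetch AWS credentials on demand from the environment, which on Lambda means the execution role's temporary credentials. A minimal sketch of just the kmsProviders part (plain maps, nothing else configured):

```java
import java.util.HashMap;
import java.util.Map;

final class KmsConfig {
    // Sketch: an empty "aws" document asks the driver to resolve AWS
    // credentials on demand — on Lambda, the execution role's temporary
    // credentials exposed through AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY /
    // AWS_SESSION_TOKEN, which the runtime injects. No static access key or
    // secret key is hard-coded anywhere.
    static Map<String, Map<String, Object>> kmsProviders() {
        Map<String, Map<String, Object>> kmsProviders = new HashMap<>();
        kmsProviders.put("aws", new HashMap<>());
        return kmsProviders;
    }
}
```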

Check this doc:

There is also this note in the doc that I think is your solution:



Hi @MaBeuLux88 ,

Thanks for responding. I have the below code snippet for creating the ClientEncryption object.

public static ClientEncryption clientEncryption() {
        LOGGER.info("=> Creating the MongoDB Key Vault Client.");
        MongoClientSettings mcs = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString(CONNECTION_STR))
                .build();
        ClientEncryptionSettings ces = ClientEncryptionSettings.builder()
                .keyVaultMongoClientSettings(mcs)
                .keyVaultNamespace(KEY_VAULT_NS) // e.g. "encryption.__keyVault"
                .kmsProviders(new HashMap<>() {{
                    put("aws", new HashMap<>() {{
                        put("key", "arn:aws:kms:ap-south-1:85906757657:key/6a86s21c-889c-428f-adf4-3jhdf67f212c6");
                        put("provider", new BsonString("aws"));
                        put("region", new BsonString("ap-south-1"));
                    }});
                }})
                .build();
        ClientEncryption clientEncryption = ClientEncryptions.create(ces);
        LOGGER.info("Created the MongoDB Key Vault Client.");
        return clientEncryption;
}

And I am getting the below exception. (Here I don't want to pass an access key and secret key; instead I expect it to access the KMS key using the role-based access provided.)

	at org.springframework.web.servlet.FrameworkServlet.processRequest(
	at org.springframework.web.servlet.FrameworkServlet.doGet(
	at jakarta.servlet.http.HttpServlet.service(
	at org.springframework.web.servlet.FrameworkServlet.service(
	at jakarta.servlet.http.HttpServlet.service(
	at com.amazonaws.serverless.proxy.internal.servlet.FilterChainManager$ServletExecutionFilter.doFilter(
	at com.amazonaws.serverless.proxy.internal.servlet.FilterChainHolder.doFilter(
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(
	at com.amazonaws.serverless.proxy.internal.servlet.FilterChainHolder.doFilter(
	at com.amazonaws.serverless.proxy.internal.servlet.AwsLambdaServletContainerHandler.doFilter(
	at com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler.handleRequest(
	at com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler.handleRequest(
	at com.amazonaws.serverless.proxy.internal.LambdaContainerHandler.proxy(
	at com.amazonaws.serverless.proxy.internal.LambdaContainerHandler.proxyStream(
	at com.niyo.serverless.nsdl.pb.customer.StreamLambdaHandler.handleRequest(
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(Unknown Source)
	at java.base/java.lang.reflect.Method.invoke(Unknown Source)
Caused by: com.mongodb.crypt.capi.MongoCryptException: expected UTF-8 aws.accessKeyId

I figured out what I was doing wrong. I thought the kmsProviders map also held master key details like the ARN and AWS region. However, the KMS provider entry only tells the driver which KMS system to use; the key details need to be sent to the Mongo client separately. Below is the code snippet that worked for me.

MongoClientSettings mcs = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString(CONNECTION_STR))
                .build();
        ClientEncryptionSettings ces = ClientEncryptionSettings.builder()
                .keyVaultMongoClientSettings(mcs)
                .keyVaultNamespace(KEY_VAULT_NS)
                .kmsProviders(new HashMap<>() {{
                    // Empty map: credentials come from the Lambda role's environment.
                    put("aws", new HashMap<>());
                }})
                .build();
        ClientEncryption clientEncryption = ClientEncryptions.create(ces);

        BsonDocument masterKeyProperties = new BsonDocument();
        masterKeyProperties.put("provider", new BsonString(KMS_PROVIDER));
        masterKeyProperties.put("key", new BsonString(kmsArn));
        masterKeyProperties.put("region", new BsonString(AWS_REGION));

        EncryptedEntity encryptedEntity = new EncryptedEntity("user", "user", User.class, "userDEK");
        String dekName = encryptedEntity.getDekName();
        DataKeyOptions dko = new DataKeyOptions().keyAltNames(of(dekName)).masterKey(masterKeyProperties);
        dataKeyId = clientEncryption.createDataKey(KMS_PROVIDER, dko);


I'm glad you found a solution, because I never tried to use third-party KMS providers myself.

Happy encoding!

While my previous problem is solved, I have started facing a new problem, this time related to schema generation.

The generated schema has keyId as null, and I have no clue why the keyId is missing. It causes the shared library to throw this exception:

Caused by: com.mongodb.MongoClientException: Exception in encryption library: csfle "analyze_query" failed: Encryption schema 'keyId' array elements must have type BinData, found null [Error 2, code 51088]

The generated schema is as below:

{
  "encryptMetadata": {
    "keyId": [null]
  },
  "type": "object",
  "properties": {
    "pan": {
      "encrypt": {
        "bsonType": "string",
        "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
      }
    }
  }
}

Did you implement the SpEL evaluation correctly?

I think so, although I am quite new to SpEL evaluation. I also want to understand what exactly keyId is in the schema. Is it the id of the DEK? If yes, should the DEK be created before the schema is evaluated?

Also sharing my Entity class and extension class.

import lombok.*;

import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Encrypted;

@Encrypted(keyId = "#{mongocrypt.keyId(#target)}")
public class User {

    private String id;

    @Indexed(unique = true)
    private String customerId;

    private String custFirstName;

    @Encrypted(algorithm = "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic")
    private String pan;

    private String gender;
}

import com.mongodb.MongoNamespace;
import org.bson.BsonDocument;
import org.bson.json.JsonWriterSettings;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.data.mongodb.core.MongoJsonSchemaCreator;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Map;

import static java.util.stream.Collectors.toMap;

@Service
public class SchemaServiceImpl implements SchemaService {

    private static final Logger LOGGER = LoggerFactory.getLogger(SchemaServiceImpl.class);
    private Map<MongoNamespace, BsonDocument> schemasMap;

    public Map<MongoNamespace, BsonDocument> generateSchemasMap(MongoJsonSchemaCreator schemaCreator) {
        LOGGER.info("=> Generating schema map.");
        List<EncryptedEntity> encryptedEntities = EncryptedCollectionsConfiguration.encryptedEntities;
        return schemasMap = encryptedEntities.stream()
                .collect(toMap(EncryptedEntity::getNamespace,
                        e -> generateSchema(schemaCreator, e.getEntityClass())));
    }

    public Map<MongoNamespace, BsonDocument> getSchemasMap() {
        return schemasMap;
    }

    private BsonDocument generateSchema(MongoJsonSchemaCreator schemaCreator, Class<?> entityClass) {
        BsonDocument schema = schemaCreator.filter(MongoJsonSchemaCreator.encryptedOnly())
                .createSchemaFor(entityClass)
                .schemaDocument()
                .toBsonDocument();
        LOGGER.info("=> JSON Schema for {}:\n{}", entityClass.getSimpleName(),
                schema.toJson(JsonWriterSettings.builder().indent(true).build()));
        return schema;
    }
}
I could figure out the above issue too. Explanation below:

The find operation on the key vault collection was failing, hence the DEK creation / retrieval was also failing. As a result, the schema didn't have the key id either.

The failure was happening because the AWS Lambda role only had dbAdmin access; my understanding of the dbAdmin role was wrong. The entire process involves read / write operations as well as schema creation. Schema creation requires the dbAdmin role, while the other read / write operations require the readWrite role.

So there was no issue in the code. After assigning both the dbAdmin and readWrite MongoDB roles to the AWS Lambda role, things seem to be working fine.

@MaBeuLux88 Your blog post was very helpful for me to set up the entire thing; thanks for your responses. Details about the required MongoDB permissions could also be added to the post. Sharing the link here again if anyone is looking for it: How to Implement Client-Side Field Level Encryption (CSFLE) in Java with Spring Data MongoDB | MongoDB


Wow, that's awesome :smiley: and I'm really glad you found this helpful. It was not trivial to create all this content.

I'll add a note in the blog post to make sure people assign both the dbAdmin and readWrite roles when running in AWS.