Hello Team,
We are using the Spark Mongo connector to write data from our Databricks Delta Lake to MongoDB.
When we use the Spark write mode “append”, we see that if the _id from the DataFrame already exists in MongoDB, the whole existing document gets replaced with the new document from the DataFrame.
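For reference, this is roughly how we write today (a minimal sketch assuming the 10.x connector’s “mongodb” source; the table name, URI, database, and collection below are placeholders, not our real ones):

```python
# Sketch of our current write from Delta Lake to MongoDB.
# Assumes the MongoDB Spark connector 10.x; names/URI are placeholders.
df = spark.read.table("my_delta_table")  # placeholder Delta table

(df.write
    .format("mongodb")
    .mode("append")
    .option("spark.mongodb.write.connection.uri", "mongodb://host:27017")
    .option("spark.mongodb.write.database", "mydb")
    .option("spark.mongodb.write.collection", "mycoll")
    .save())
```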
Instead, we would like to merge the documents, i.e., add new elements to the array fields of the existing MongoDB documents from Spark, similar to a Databricks Delta Lake upsert (MERGE).
Is this possible with the Spark Mongo Connector?
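To make the intent concrete, the kind of merge we have in mind would look like the per-document upsert below, sketched with PyMongo inside foreachPartition (untested; the URI, database, collection, and the "tags" array field are placeholders):

```python
from pymongo import MongoClient

def merge_partition(rows):
    # One client per partition; URI, database, and collection are placeholders.
    client = MongoClient("mongodb://host:27017")
    coll = client["mydb"]["mycoll"]
    try:
        for row in rows:
            doc = row.asDict()
            # Pull out the array field so it is appended to, not overwritten.
            tags = doc.pop("tags", []) or []
            # Upsert on _id: $set the scalar fields, append new array elements
            # without duplicates via $addToSet + $each.
            coll.update_one(
                {"_id": doc["_id"]},
                {
                    "$set": {k: v for k, v in doc.items() if k != "_id"},
                    "$addToSet": {"tags": {"$each": tags}},
                },
                upsert=True,
            )
    finally:
        client.close()

df.foreachPartition(merge_partition)
```

This is only to illustrate the desired outcome; ideally we would get this behavior directly from the connector’s write options rather than hand-rolling the updates.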
Regards,
Puviarasu S.