
The following example creates a DataFrame from a JSON file and saves it to the MongoDB collection specified in your SparkConf:

// Read example.json into a DataFrame
Dataset<Row> df = spark.read().format("json").load("example.json");

// Write to the collection configured in SparkConf,
// replacing any existing documents
df.write().format("mongodb").mode("overwrite").save();

The MongoDB Connector for Spark supports the following save modes:

  • append
  • overwrite

To learn more about save modes, see the Spark SQL Guide.
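As a sketch of the other supported mode, the snippet below appends the DataFrame's rows to the collection instead of replacing its contents. It assumes a DataFrame named `df` as in the example above; the connection URI, database, and collection values are illustrative placeholders, shown here as write options rather than SparkConf settings:

```java
// Append rows to the target collection; existing documents are preserved.
// The URI, database, and collection names below are placeholders.
df.write()
    .format("mongodb")
    .mode("append")
    .option("connection.uri", "mongodb://localhost:27017")
    .option("database", "test")
    .option("collection", "example")
    .save();
```

With `append`, the DataFrame's rows are added alongside any documents already in the collection; `overwrite` drops the existing data before writing.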

© 2022 MongoDB, Inc.
