Error when using spark connector 10.0.4 | java.lang.NoSuchMethodError: com.mongodb.client.MongoIterable.cursor

I am trying to use the MongoDB Spark Connector 10.0.4 and am getting the error below. These are the dependency versions I am using:

spark-core_2.12  3.0.2
mongodb-driver-sync 4.7.1
mongo-java-driver 3.12.11
spark-streaming_2.12 3.1.2
mongo-spark-connector 10.0.4
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (192.168.1.41 executor driver): org.apache.spark.SparkException: Data read failed
	at org.apache.spark.sql.errors.QueryExecutionErrors$.failedToReadDataError(QueryExecutionErrors.scala:1839)
	at org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.next(ContinuousQueuedDataReader.scala:106)
	at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:102)
	at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:94)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:576)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
	at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.$anonfun$compute$1(ContinuousWriteRDD.scala:60)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1538)
	at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:91)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:136)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.NoSuchMethodError: com.mongodb.client.MongoIterable.cursor()Lcom/mongodb/client/MongoCursor;
	at com.mongodb.spark.sql.connector.read.MongoStreamPartitionReader.getCursor(MongoStreamPartitionReader.java:184)
	at com.mongodb.spark.sql.connector.read.MongoStreamPartitionReader.withCursor(MongoStreamPartitionReader.java:196)
	at com.mongodb.spark.sql.connector.read.MongoStreamPartitionReader.tryNext(MongoStreamPartitionReader.java:137)
	at com.mongodb.spark.sql.connector.read.MongoStreamPartitionReader.next(MongoStreamPartitionReader.java:112)
	at org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader$DataReaderThread.run(ContinuousQueuedDataReader.scala:146)
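
The NoSuchMethodError means that whichever com.mongodb.client.MongoIterable class is loaded at runtime does not expose the cursor() method the connector calls, which usually points at an older driver jar shadowing mongodb-driver-sync 4.7.1 on the classpath (note that the dependency list above mixes the legacy mongo-java-driver with the 4.x sync driver). Below is a minimal diagnostic sketch under that assumption; the DriverCheck class is hypothetical and not part of the project.

```java
// Hypothetical diagnostic: print which jar supplies com.mongodb.client.MongoIterable
// and whether it has the cursor() method that the connector's
// MongoStreamPartitionReader calls.
import com.mongodb.client.MongoIterable;

public class DriverCheck {
    public static void main(String[] args) {
        // Which jar did the class actually come from?
        System.out.println("MongoIterable loaded from: "
                + MongoIterable.class.getProtectionDomain().getCodeSource().getLocation());
        try {
            MongoIterable.class.getMethod("cursor");
            System.out.println("cursor() is present; this classpath should work with connector 10.x.");
        } catch (NoSuchMethodException e) {
            System.out.println("cursor() is missing; an older driver jar is likely shadowing mongodb-driver-sync.");
        }
    }
}
```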

Can you share the code that generated this error?

This may be a dependency resolution issue with Gradle. I tried to reproduce the above issue in a standalone repo so I could share the code. When doing that, I was able to run the example in Java; however, it printed values like the ones below. Only _id has values, and those look like random values.

+----+--------------------+----+----+----+----+
|col4|                 _id|col0|col1|col2|col3|
+----+--------------------+----+----+----+----+
|null|{"_data": "826347…  |null|null|null|null|
+----+--------------------+----+----+----+----+
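
From what I can tell, output like this appears when the stream carries raw change-stream events (hence the _data resume-token-like value in _id) instead of the documents themselves, and streaming reads need an explicit schema. A rough sketch of how I would expect the read to look; the connection details and field names are assumptions, and the option spellings follow the 10.x documentation but are untested here:

```java
// Sketch: stream from MongoDB with an explicit schema and publish only the full
// document, so col0..col4 are populated instead of raw change-event fields.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class MongoStreamRead {
    public static Dataset<Row> readStream(SparkSession spark) {
        // Assumed field names, taken from the output above.
        StructType schema = new StructType()
                .add("_id", DataTypes.StringType)
                .add("col0", DataTypes.StringType)
                .add("col1", DataTypes.StringType)
                .add("col2", DataTypes.StringType)
                .add("col3", DataTypes.StringType)
                .add("col4", DataTypes.StringType);

        return spark.readStream()
                .format("mongodb")
                // Placeholder connection details.
                .option("spark.mongodb.connection.uri", "mongodb://localhost:27017")
                .option("spark.mongodb.database", "test")
                .option("spark.mongodb.collection", "source")
                // Without this, the stream carries raw change-stream events,
                // which is why only _id showed values.
                .option("change.stream.publish.full.document.only", "true")
                .schema(schema)
                .load();
    }
}
```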

I followed the example below:

The issue above is now resolved. However, when writing the stream read from MongoDB out through Spark Structured Streaming (to a Kafka sink), I am getting the error below. Code is attached for reference.

Exception in thread "main" org.apache.spark.sql.AnalysisException: Required attribute 'value' not found
	at org.apache.spark.sql.kafka010.KafkaWriter$.validateQuery(KafkaWriter.scala:59)
	at org.apache.spark.sql.kafka010.KafkaStreamingWrite.<init>(KafkaStreamingWrite.scala:42)
	at org.apache.spark.sql.kafka010.KafkaSourceProvider$KafkaTable$$anon$2.buildForStreaming(KafkaSourceProvider.scala:411)
	at org.apache.spark.sql.execution.streaming.StreamExecution.createStreamingWrite(StreamExecution.scala:625)
	at org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution.<init>(ContinuousExecution.scala:93)
	at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:302)
	at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:359)
	at org.apache.spark.sql.streaming.DataStreamWriter.startQuery(DataStreamWriter.scala:466)
	at org.apache.spark.sql.streaming.DataStreamWriter.startInternal(DataStreamWriter.scala:456)
	at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:301)
	at org.sample.test.SparkJob.getQuery(SparkJob.java:55)
	at org.sample.test.SparkJob.run(SparkJob.java:44)
	at org.sample.test.SparkJobMain.main(SparkJobMain.java:8)

javacode.txt (2.6 KB)
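
The AnalysisException comes from the Kafka sink's validation: the Kafka writer requires the outgoing DataFrame to expose a value column (with optional key and topic columns). A rough sketch of reshaping the MongoDB stream before writeStream; the class name, topic, bootstrap servers, and checkpoint path are placeholders rather than the attached code:

```java
// Sketch: map the MongoDB stream into the shape the Kafka sink expects,
// i.e. a single 'value' column, which is what "Required attribute 'value'
// not found" is complaining about.
import java.util.concurrent.TimeoutException;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.streaming.StreamingQuery;

public class MongoToKafka {
    public static StreamingQuery start(Dataset<Row> mongoStream) throws TimeoutException {
        // Serialize the whole row as JSON into a single 'value' column.
        Dataset<Row> kafkaReady = mongoStream.selectExpr("to_json(struct(*)) AS value");

        return kafkaReady.writeStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("topic", "mongo-events")
                .option("checkpointLocation", "/tmp/mongo-to-kafka-checkpoint")
                .start();
    }
}
```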

Could you post your build.sbt file, please? Thanks. I am having some issues building a Scala project with the mongo-spark connector. PySpark does work, so it is specific to sbt.