DecodeException when creating mongo connector

When I create a MongoDB connector in OpenShift, I get a DecodeException:
Failed to decode:No content to map due to end-of-input
at [Source: (io.netty.buffer.ByteBufInputStream); line 1, column: 0]
reason: DecodeException

My connector jar file is located in /opt/kafka/plugins, and when I specify the connector class,
it is recognized as a valid connector type, yet I still get this error…
Has anyone encountered this issue?

I can make some guesses, but I am not completely sure. Some Kafka connector builds include all the dependencies, such as Avro; I'm not sure whether the build you used bundles its dependencies, and I'm not sure which libraries are available out of the box on OpenShift.

I have tried using the newest mongo plugin (mongo-kafka-1.5.1-all.jar),
and now the sink connector works properly, but the source connector still throws the same decoding exception…
Any ideas why the sink works and the source does not?

I haven’t received a reply yet;
if you have any idea what causes my error (only in the source, not in the sink), that would be great.

Hi @ori_iro,

Please post the full stack trace, as that may provide some insight into the cause.

Also what version of Kafka and what version of MongoDB are you using?


I am using Kafka version 2.5 and MongoDB version 4.4.0,
and my connector plugin is version 1.5.1. My stack trace:

javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig 
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl( 
	at org.glassfish.jersey.servlet.WebComponent.service( 
	at org.glassfish.jersey.servlet.ServletContainer.service( 
	at org.glassfish.jersey.servlet.ServletContainer.service( 
	at org.glassfish.jersey.servlet.ServletContainer.service( 
	at org.eclipse.jetty.servlet.ServletHolder.handle( 
	at org.eclipse.jetty.servlet.ServletHandler.doHandle( 
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle( 
	at org.eclipse.jetty.server.session.SessionHandler.doHandle( 
	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle( 
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle( 
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope( 
	at org.eclipse.jetty.servlet.ServletHandler.doScope( 
	at org.eclipse.jetty.server.session.SessionHandler.doScope( 
	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope( 
	at org.eclipse.jetty.server.handler.ContextHandler.doScope( 
	at org.eclipse.jetty.server.handler.ScopedHandler.handle( 
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle( 
	at org.eclipse.jetty.server.handler.StatisticsHandler.handle( 
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle( 
	at org.eclipse.jetty.server.Server.handle( 
	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1( 
	at org.eclipse.jetty.server.HttpChannel.dispatch( 
	at org.eclipse.jetty.server.HttpChannel.handle( 
	at org.eclipse.jetty.server.HttpConnection.onFillable( 
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask( 
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce( 
	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce( 
	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ 
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob( 
	at org.eclipse.jetty.util.thread.QueuedThreadPool$ 
	at java.base/ 
Caused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig 
	at org.glassfish.jersey.servlet.internal.ResponseWriter.rethrow( 
	at org.glassfish.jersey.servlet.internal.ResponseWriter.failure( 
	at org.glassfish.jersey.server.ServerRuntime$Responder.process( 
	at org.glassfish.jersey.server.ServerRuntime$ 
	at org.glassfish.jersey.internal.Errors$ 
	at org.glassfish.jersey.internal.Errors$ 
	at org.glassfish.jersey.internal.Errors.process( 
	at org.glassfish.jersey.internal.Errors.process( 
	at org.glassfish.jersey.internal.Errors.process( 
	at org.glassfish.jersey.process.internal.RequestScope.runInScope( 
	at org.glassfish.jersey.server.ServerRuntime.process( 
	at org.glassfish.jersey.server.ApplicationHandler.handle( 
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl( 
	... 35 more 

I have tried adding the mongodb-driver-sync jar file and it didn't work.
My sink connector works but the source connector does not…

I forgot to tag you @Ross_Lawley @Robert_Walters

@ori_iro, that's strange - there's nothing Kafka-related in the stack trace, which I'd expect to see.

The sink and source connector share the same jar, so I don’t know what is happening here to prevent it being loaded for one and not the other.

There is a lot of Jetty/Jersey code; is the error coming from a web UI? Also, the error doesn't appear to be related to the initial decode error; has that been fixed?

Finally, in the 1.6.0 release the jar packaging was updated, so mongo-kafka-connect-1.6.0-all.jar now contains all the dependencies needed for non-Confluent Kafka Connect installations. It includes the Avro dependencies as well as the driver. Does updating to mongo-kafka-connect-1.6.0-all.jar work?



I’m working on OpenShift 4.5 with the Strimzi 0.18 operator.
The logs are taken from the Kafka Connect cluster (from a console).
The decode exception is reported on the connector (with state Not Ready and reason DecodeException).
Something interesting happened when I tried to add the Avro and MongoDB driver jar files to the same plugin path as the mongo-connect jar file:
suddenly the sink connector stopped working and showed the same decode exception in the connector status (both the sink and the source now show the same exception).
What could cause the MongoSourceConfig initialization error?
I will also try the new connector.

OK, it looks like there are multiple issues here.

I think the first step would be to clean up the classpath: remove any jars that were added (mongodb-driver-sync, avro, etc.). Then add the 1.6.0-all jar and see what errors (if any) occur.
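As a concrete sketch of that cleanup, using a scratch directory to stand in for /opt/kafka/plugins (the dependency jar versions here are made up):

```shell
# Scratch directory standing in for /opt/kafka/plugins
PLUGINS="$(mktemp -d)"

# Simulate the current state: individually added dependency jars plus the uber jar
# (the driver/avro version numbers below are assumptions, not from the thread)
touch "$PLUGINS/mongodb-driver-sync-4.2.3.jar" \
      "$PLUGINS/avro-1.10.2.jar" \
      "$PLUGINS/mongo-kafka-connect-1.6.0-all.jar"

# Remove the individually added dependency jars; the -all jar already bundles them
rm -f "$PLUGINS"/mongodb-driver-sync-*.jar "$PLUGINS"/avro-*.jar

# Only the uber jar should remain
ls "$PLUGINS"
```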



@Ross_Lawley @Robert_Walters I tried the 1.6.0 connector and the problem is the same - the sink connector works and the source connector does not, with reason DecodeException.
The logs are much the same, with one additional error:
Caused by: java.lang.ClassNotFoundException: org.apache.avro.Schema
at java.base/
at java.base/java.lang.ClassLoader.loadClass(
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(
at java.base/java.lang.ClassLoader.loadClass(
… 15 more

I added avro-1.3.2.jar to /opt/kafka/plugins (where the mongo plugin is) and nothing changed. (I'm not sure whether Avro is needed, and if it is, whether the Avro version matters.)

In my Dockerfile I take the Kafka 2.5.0 image, create a plugins directory (/opt/kafka/plugins),
and copy the mongo and avro plugins into that directory.
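A sketch of the Dockerfile described above; the base image tag and the non-root user id are assumptions, not confirmed in this thread:

```dockerfile
# Base image tag is an assumption (Strimzi 0.18 / Kafka 2.5.0 era)
FROM strimzi/kafka:0.18.0-kafka-2.5.0
USER root
RUN mkdir -p /opt/kafka/plugins
# Copy the connector and Avro jars into the plugin directory
COPY mongo-kafka-connect-1.6.0-all.jar /opt/kafka/plugins/
COPY avro-1.3.2.jar /opt/kafka/plugins/
# Drop back to the image's non-root user (uid is an assumption)
USER 1001
```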

Hi @ori_iro,

Apologies, this seems to be harder to solve than it needs to be! All the errors seem to be classpath related, which indicates something is missing from the classpath for the connector. Kafka uses a per-connector class loader, so I think that may be why just putting the dependency jars in /opt/kafka/plugins/ isn't working.

There are 3 different 1.6.0 connector jar files:

I hope you will just need mongo-kafka-connect-1.6.0-all.jar, as it contains all the non-Kafka dependencies needed for a non-Confluent Kafka install.

I think the classpath for connectors works as follows: /opt/kafka/plugins/<connector name>/lib/<connector's classpath>

So putting the jar file here will hopefully work: /opt/kafka/plugins/mongodb-kafka-connect/lib/mongo-kafka-connect-all.jar, as it will then have all the dependencies it needs within its own class loader.
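A quick sketch of that per-connector layout, again using a scratch directory to stand in for /opt/kafka/plugins (the connector directory name here is only a suggestion):

```shell
# Scratch directory standing in for /opt/kafka/plugins
PLUGIN_ROOT="$(mktemp -d)"

# Each connector gets its own directory, loaded by its own class loader:
#   <plugin root>/<connector name>/lib/<jars>
mkdir -p "$PLUGIN_ROOT/mongodb-kafka-connect/lib"
touch "$PLUGIN_ROOT/mongodb-kafka-connect/lib/mongo-kafka-connect-1.6.0-all.jar"

# List the resulting layout
find "$PLUGIN_ROOT" -name '*.jar'
```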

Let me know if that solves the issue.



@Ross_Lawley The DecodeException no longer appears after I put the jar file under /opt/kafka/plugins/mongodb-kafka-connect-mongodb-1.6.0.
Now the sink connector works, and the source connector status is Ready and it appears to be working, but no data has been transferred to my topics…
These are my connector configurations:
database: myDatabase
collection: myCollection
connection.uri: *******
key.converter.schemas.enable: false
value.converter.schemas.enable: false
topic.prefix: myTopic
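For reference, these settings correspond to the JSON body you would POST to the Kafka Connect REST API at /connectors; the connector name below is an assumption, and the connection URI stays redacted:

```json
{
  "name": "mongo-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "database": "myDatabase",
    "collection": "myCollection",
    "connection.uri": "*******",
    "key.converter.schemas.enable": "false",
    "value.converter.schemas.enable": "false",
    "topic.prefix": "myTopic"
  }
}
```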

Hi @ori_iro,

Glad to hear you managed to get the jar loaded. Did it not work when added here: /opt/kafka/plugins/mongodb-kafka-connect/lib/mongo-kafka-connect-all.jar?

I’m not sure what your source configuration is, so it's impossible to tell why nothing is being published to the topic. The next place to look would be the logs, to see if they report anything.


@Ross_Lawley It also works with the path you mentioned.
I misunderstood the topic.prefix field: I thought it was a prefix for existing topics that I wanted the data transferred to, rather than the prefix of the prefix.db.collection topic name.
I appreciate the help, thank you very much!
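To illustrate the naming rule with the values from the config above:

```shell
# Values taken from the connector config posted earlier in the thread
TOPIC_PREFIX="myTopic"
DB="myDatabase"
COLL="myCollection"

# The source connector publishes to <topic.prefix>.<database>.<collection>
echo "${TOPIC_PREFIX}.${DB}.${COLL}"   # → myTopic.myDatabase.myCollection
```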