NullPointerException when creating Kafka sink connector

Hi,

I am getting a NullPointerException when creating a Kafka sink connector. The details are below. Can anyone help me figure out what's missing here?

  1. Using the latest version of Confluent Platform.
  2. Installed the MongoDB Kafka Connector plugin using Confluent Hub.
  3. Creating the connector via the Connect REST API. The request and payload are below; a rough curl sketch of how I am sending it follows the stack trace.
    POST http://localhost:8083/connectors
{
  "name": "orgunitsinc",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "tasks.max": "1",
    "topics": "orgunits",
    "connection.uri": "mongodb://localhost:32771",
    "database": "smartconnect",
    "collection": "orgunits",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": false,
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": false
  }
}
  4. Here is the exception:
[2020-11-12 19:50:09,467] INFO Cluster created with settings {hosts=[localhost:32771], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} (org.mongodb.driver.cluster:71)
[2020-11-12 19:50:09,479] INFO Opened connection [connectionId{localValue:2, serverValue:16}] to localhost:32771 (org.mongodb.driver.connection:71)
[2020-11-12 19:50:09,482] INFO Monitor thread successfully connected to server with description ServerDescription{address=localhost:32771, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 4, 1]}, minWireVersion=0, maxWireVersion=9, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=2095612} (org.mongodb.driver.cluster:71)
[2020-11-12 19:50:09,483] ERROR Uncaught exception in REST call to /connectors (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)
java.lang.NullPointerException
    at org.apache.kafka.connect.runtime.WorkerConfigDecorator$MutableConfigInfos.lambda$removeAllWithName$0(WorkerConfigDecorator.java:295)
    at org.apache.kafka.connect.runtime.WorkerConfigDecorator$MutableConfigInfos.removeAll(WorkerConfigDecorator.java:305)
    at org.apache.kafka.connect.runtime.WorkerConfigDecorator$MutableConfigInfos.removeAllWithName(WorkerConfigDecorator.java:294)
    at org.apache.kafka.connect.runtime.WorkerConfigDecorator$DecorationPattern.filterValidationResults(WorkerConfigDecorator.java:432)
    at org.apache.kafka.connect.runtime.WorkerConfigDecorator.lambda$decorateValidationResult$5(WorkerConfigDecorator.java:273)
    at java.util.Collections$SingletonList.forEach(Collections.java:4822)
    at org.apache.kafka.connect.runtime.WorkerConfigDecorator.decorateValidationResult(WorkerConfigDecorator.java:273)
    at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:392)
    at org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$1(AbstractHerder.java:326)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
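
For completeness, here is roughly how I am sending the request; the payload above is saved locally as orgunits-sink.json (the filename is just my own choice):

    curl -X POST -H "Content-Type: application/json" \
         --data @orgunits-sink.json \
         http://localhost:8083/connectors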

Hi @venkat_utla,

I'm not sure what is going on or why the /connectors REST call is throwing an NPE. Could I ask you to open a bug ticket on the project's Jira page?

That way I can investigate and keep you in the loop with any findings. It could be a connector issue, a Kafka Connect issue, or some other integration issue. It's definitely a bug somewhere, and I'm happy to help resolve it or find a workaround for you.

Ross

I'm also experiencing this exact issue. Did you resolve it in the end, @venkat_utla?

I wonder if it's related to Confluent Platform 6.0 / Apache Kafka 2.6. This looks like the same issue: https://github.com/NovatecConsulting/showcase-kafka-iot-emob/issues/6

Another user experiencing the same issue:

Hi,

I solved the problem by specifying the security protocol:
"confluent.topic.security.protocol": "PLAINTEXT"
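
To show where it goes: in my case it sat inside the connector's "config" block alongside the other sink settings. Reusing the config from the original post as an example, with everything else unchanged:

{
  "name": "orgunitsinc",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "tasks.max": "1",
    "topics": "orgunits",
    "connection.uri": "mongodb://localhost:32771",
    "database": "smartconnect",
    "collection": "orgunits",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": false,
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": false,
    "confluent.topic.security.protocol": "PLAINTEXT"
  }
}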

Manuel

Where and how should this setting "confluent.topic.security.protocol": "PLAINTEXT" be set?
And should the value be PLAINTEXT or PLAIN?

In the end, I got my sink connector working against an Atlas cluster after I installed Confluent Platform 6.1.0 and version 1.3.0 of the MongoDB Kafka connector instead of 1.4.0.
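
In case it helps anyone else: if the connector was installed through Confluent Hub, the older version can be pinned explicitly. Assuming the standard Hub coordinates for the MongoDB connector, something like:

    confluent-hub install mongodb/kafka-connect-mongodb:1.3.0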