Cannot insert all documents to the MongoDB cloud cluster: Prematurely reached end of stream

I’m using JDK/JRE version 1.8.0_192.
When I’m inserting documents into the MongoDB cloud cluster, I always get this error:

Caused by: com.mongodb.MongoSocketReadException: Prematurely reached end of stream
	at com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(
	at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(
	at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(
	at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(
	at com.mongodb.internal.connection.CommandHelper.sendAndReceive(
	at com.mongodb.internal.connection.CommandHelper.executeCommand(
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(
	at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(
	at com.mongodb.internal.connection.DefaultConnectionPool$
	at com.mongodb.internal.connection.DefaultConnectionPool.get(
	at com.mongodb.internal.connection.DefaultConnectionPool.get(
	at com.mongodb.internal.connection.DefaultServer.getConnection(
	at com.mongodb.internal.binding.ClusterBinding$ClusterBindingConnectionSource.getConnection(
	at com.mongodb.client.internal.ClientSessionBinding$SessionBindingConnectionSource.getConnection(
	at com.mongodb.internal.operation.FindOperation$
	at com.mongodb.internal.operation.FindOperation$
	at com.mongodb.internal.operation.OperationHelper.withReadConnectionSource(
	at com.mongodb.internal.operation.FindOperation.execute(
	at com.mongodb.internal.operation.FindOperation.execute(
	at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(
	at com.mongodb.client.internal.FindIterableImpl.first(
	at gui.Controller.cloudData(
	at gui.Controller.lambda$pushAction$1(
	at java.util.ArrayList.forEach(
	at gui.Controller.pushAction(
	... 58 more

I’ve tried different connection strings for older drivers, like:


and without the maxIdleTimeMS parameter:

For newer drivers, like:

and like:

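For reference, the connection strings themselves were not preserved in this post. Purely as an illustration (every host, user, and password below is a placeholder, not taken from this thread), Atlas connection strings in the legacy and DNS-seedlist formats typically look like:

```
# Legacy format (older drivers), with the maxIdleTimeMS option mentioned above
mongodb://user:password@cluster0-shard-00-00.example.mongodb.net:27017,cluster0-shard-00-01.example.mongodb.net:27017,cluster0-shard-00-02.example.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin&maxIdleTimeMS=40000

# DNS seedlist format (newer drivers)
mongodb+srv://user:password@cluster0.example.mongodb.net/test?retryWrites=true&w=majority
```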

It is worth noting that I can insert half of the documents, but then this error occurs.

And nothing helps here.

This looks like an SSL connection issue… check whether or not the JRE’s SSL keystore is set properly.


Thanks for the answer, coderkid. Can you elaborate a bit on how to check it properly?
I’m not sure, but maybe these limit messages would be helpful here.

UPD: I suppose I’ve found, at least, the place where the truststore is located:


And inside I have something like:

To clarify a bit:

I’ve tried everything that I’ve already mentioned here:

as you can see in my post above.

My Network Access settings look like:

Network Access

Also, I can’t provide any logs, because:

“M0 Free Tier and M2/M5 shared clusters do not provide downloadable logs.”


Hi @invzbl3,

The fact that you can connect and insert for a period of time could be an indication that it’s not related to the keystore.

Looking at both of these log lines:

Caused by: com.mongodb.MongoSocketReadException: Prematurely reached end of stream
at com.mongodb.client.internal.FindIterableImpl.first(

It seems that you are looping through an iterable from find(), and within the loop you’re inserting documents. If so, it is possible that each iteration of the loop is taking too long, so that either the find cursor has timed out or the server the cursor is connected to was disconnected.

Could you provide more information about how you’re inserting the documents, with a code snippet example?



Hi, @wan, thanks for the answer. My code looks like:

 public void data(MongoCollection<Document> collection, String title, String country) {

        Document found = collection.find(new Document("title", title)
                .append("country", country)).first();

        if (found == null) {
            collection.insertOne(new Document("title", title)
                .append("country", country));
        }
    }
The stacktrace of the error’s cause points me to this line:

.append("country", country)).first();

By the same logic, I had this exact error using a code snippet like the one here:

So far I have inserted about 50 documents, instead of the 176 that I planned, using this snippet.

Thanks for the code snippet.
This function is called within a loop, right? Also, are you running this in concurrent/parallel operations?

Could you check, in the Atlas cluster monitoring views, the number of connections and operations displayed around the time you’re getting the error? See also Monitor a Cluster for information on how to view the metrics.

I’m suspecting that you could be hitting the shared clusters limitations.



@wan, yes, you’re right about the loop. To correct myself a bit: the method in my program is named cloudData, not data (as I wrote earlier in the snippet), so the stacktrace is correct here.

Journal is my custom class whose instances I’m adding to a list.

So first I declare an instance variable:
private List<Journal> journalList;

and then I’m using forEach to call this method with getters, like:

journalList.forEach(journal -> cloudData(collection, journal.getTitle(), journal.getCountry()));

I don’t think it’s related to concurrent/parallel operations, because I’m simply calling the same function until all documents from the list are added to the MongoDB tier cluster, one by one.

My metrics look like:
Logical Size | Network | Opcounters | Operation Execution Times


Also I’m getting an alert like:

Can you tell me, please: can I somehow control/regulate the connections limit from the connection string or something? I suppose it could be the solution here if I add something like a connection limiter as a parameter, while still being able to insert everything I need into the cluster.

P.S. I’ve tried increasing the maxIdleTimeMS parameter up to 100000 or 1000000 milliseconds instead of 40000, but it doesn’t change the overall situation; the same error appears.

Depending on your use case, there are a number of ways that you could try.

You could try reducing the connection pool limit using the maxPoolSize parameter (the default is 100). Alternatively, you could change the connection pool limit via ConnectionPoolSettings.Builder().maxSize(int).
If you reduce the connection pool size, you need to be mindful of the time limit a thread may wait for a connection to become available. The default is 2 minutes; see also ConnectionPoolSettings.Builder().maxWaitTime(long, TimeUnit).
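Both approaches can be sketched roughly as follows, assuming a 4.x Java driver; the URI, host, and credentials below are placeholders, not values from this thread:

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

import java.util.concurrent.TimeUnit;

public class PoolSettingsExample {
    public static void main(String[] args) {
        // Option 1: limit the pool directly in the connection string
        // (placeholder host and credentials)
        ConnectionString uri = new ConnectionString(
                "mongodb+srv://user:password@cluster0.example.mongodb.net/test"
                        + "?maxPoolSize=50");

        // Option 2: override pool size and wait time via the settings builder
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(uri)
                .applyToConnectionPoolSettings(pool -> pool
                        .maxSize(50)                        // lower than the default 100
                        .maxWaitTime(3, TimeUnit.MINUTES))  // default is 2 minutes
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            // use the client here...
        }
    }
}
```

Values set via applyToConnectionPoolSettings take effect after applyConnectionString, so the builder is a convenient place for options you cannot (or would rather not) encode in the URI.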

In addition to the above, try to restructure your code. Instead of inserting documents one by one, you could utilise bulk write operations. Also, looking at your example method cloudData, where you try to find a document and insert it if it doesn’t exist: try performing an update with upsert whenever possible.
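The find-then-insert pattern in cloudData can be collapsed into a single upsert round-trip; a minimal sketch (the method and field names follow the earlier snippet, everything else is an assumption):

```java
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOptions;
import org.bson.Document;

public class UpsertExample {
    // Replaces find().first() + insertOne() with one atomic upsert:
    // if a document with this title/country exists, it is replaced;
    // otherwise it is inserted.
    public static void cloudData(MongoCollection<Document> collection,
                                 String title, String country) {
        collection.replaceOne(
                Filters.and(Filters.eq("title", title),
                            Filters.eq("country", country)),
                new Document("title", title).append("country", country),
                new ReplaceOptions().upsert(true));
    }
}
```

Besides halving the round-trips, the upsert avoids the race where two loops both see "not found" and insert a duplicate.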

Depending on your use case, you could also consider upgrading to a dedicated cluster. See also Free, Shared and Dedicated Cluster Comparison.



Thanks for the valuable answer, @wan. I’ve changed my code to:

public void cloudData(MongoCollection<Document> collection, String title, String country) {
    List<WriteModel<Document>> updates = Collections.singletonList(
            new ReplaceOneModel<>(
                    new Document("title", title) // find part
                            .append("country", country),
                    new Document("title", title) // update part
                            .append("country", country),
                    new ReplaceOptions().upsert(true)));
    collection.bulkWrite(updates);
}
It also works fine, but the same issue appears after half of the documents are inserted.

Also, I’ve already tried adding maxPoolSize and reducing the value to 80 and 50 instead of the default 100, in a format like:


unfortunately to no avail; the error is still there:
Caused by: com.mongodb.MongoSocketReadException: Prematurely reached end of stream

Probably, yes; the most likely option, as I see it, is really to pay for an upgrade.

P.S. I’ll try to combine maxPoolSize and maxWaitTime, as you mentioned, but I’m not sure whether that’s possible purely via the connection string.

Finally solved. I should call mongodb.MongoClient.connect once, not on each request. So I’ve restructured my code a bit to open the connection once instead of each time I insert a specific document.
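In the Java driver, the equivalent of that fix is to build one MongoClient (which owns the connection pool) and reuse it for every insert, rather than constructing a client per document. A minimal sketch, with a placeholder URI and collection names assumed for illustration:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Arrays;

public class SingleClientExample {
    public static void main(String[] args) {
        // Create the client ONCE; it maintains the connection pool internally.
        try (MongoClient client = MongoClients.create(
                "mongodb+srv://user:password@cluster0.example.mongodb.net/test")) {
            MongoCollection<Document> collection =
                    client.getDatabase("test").getCollection("journals");

            // Reuse the same client/collection for every document in the loop
            // (Arrays.asList keeps this Java 8 compatible).
            for (Document doc : Arrays.asList(
                    new Document("title", "A").append("country", "US"),
                    new Document("title", "B").append("country", "DE"))) {
                collection.insertOne(doc);
            }
        } // the client (and its pool) is closed once, at the end
    }
}
```

Opening a new client per insert repeatedly tears down and rebuilds pooled TLS connections, which is a plausible source of both the connection-limit alerts and the "Prematurely reached end of stream" resets described above.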

Thanks for the help, everyone.
