Bad $lookup performance

I have the following data:
collection1:
{ "_id": "ff273fc0-b19a-4ed4-8b84-16971a39a672", "V1": "c1_0" }

collection2:
{ "_id": "d2598fbf-3772-4f11-b353-32e0eebb3ef3", "Id2": "ff273fc0-b19a-4ed4-8b84-16971a39a672", "V1": "c2_0" }

collection4:
{ "_id": "0af96153-0871-49e5-9f0d-cb22c4f2f5f8", "Id4": "d2598fbf-3772-4f11-b353-32e0eebb3ef3", "V1": "c4_0" }

I have indexes on collection2.Id2 and collection4.Id4.
I have approximately 1 million records in collection1, 3 million in collection2, and 10 million in collection4.
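For anyone who wants to reproduce the setup, the documents can be generated along these lines (a minimal Python sketch, assuming uuid4 string ids as in my samples above; the fan-out counts are just small placeholders, not my real 1M/3M/10M ratios):

```python
import uuid

def make_docs(n_c1=3, per_c1=3, per_c2=3):
    """Generate linked sample documents matching the schema above.

    collection2.Id2 references collection1._id,
    collection4.Id4 references collection2._id.
    """
    c1, c2, c4 = [], [], []
    for i in range(n_c1):
        id1 = str(uuid.uuid4())
        c1.append({"_id": id1, "V1": f"c1_{i}"})
        for j in range(per_c1):
            id2 = str(uuid.uuid4())
            c2.append({"_id": id2, "Id2": id1, "V1": f"c2_{j}"})
            for k in range(per_c2):
                c4.append({"_id": str(uuid.uuid4()), "Id4": id2, "V1": f"c4_{k}"})
    return c1, c2, c4
```

The resulting lists can then be bulk-inserted into the three collections with the driver of your choice.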

On collection1 I run this aggregation pipeline:

[
  { "$lookup": { "from": "collection2",
                 "localField": "_id",
                 "foreignField": "Id2",
                 "as": "c2" } },
  { "$unwind": { "path": "$c2", "preserveNullAndEmptyArrays": true } },
  { "$lookup": { "from": "collection4",
                 "localField": "c2._id",
                 "foreignField": "Id4",
                 "as": "c2.c4" } },
  { "$unwind": { "path": "$c2.c4", "preserveNullAndEmptyArrays": true } },
  { "$count": "c" }
]

When I run this pipeline (from .NET code, but that doesn't matter), it doesn't finish within 5 minutes; I didn't wait to see how long it would actually take.
The exact same setup in PostgreSQL, with the same indexes and an inner join between the 3 tables, runs in roughly 30 seconds.
I do the unwinds because it's the only way I know to obtain a "join" across the 3 collections: c1 - one to many - c2 - one to many - c4.
I am using MongoDB 6 Community.
I know the usual advice is to filter the data first and run $lookup on far fewer records.
But keeping the high-level logic without filtering, can I improve the performance of this MongoDB aggregate?
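For comparison, the "filter first" shape I mean would look like the sketch below (the $match on V1 is a made-up example predicate, not something my real workload has; the pipeline is shown as plain Python dicts as you would pass them to a driver's aggregate call):

```python
# "Filter first" variant: a $match stage in front so the $lookups
# run on far fewer collection1 documents.
# The V1 filter is a hypothetical example, not my real predicate.
filtered_pipeline = [
    {"$match": {"V1": "c1_0"}},  # hypothetical selective filter
    {"$lookup": {"from": "collection2", "localField": "_id",
                 "foreignField": "Id2", "as": "c2"}},
    {"$unwind": {"path": "$c2", "preserveNullAndEmptyArrays": True}},
    {"$lookup": {"from": "collection4", "localField": "c2._id",
                 "foreignField": "Id4", "as": "c2.c4"}},
    {"$unwind": {"path": "$c2.c4", "preserveNullAndEmptyArrays": True}},
    {"$count": "c"},
]
```

My question is about the unfiltered case, though, where every collection1 document goes through both lookups.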