Need help with really slow aggregation

Alright, so I have two tables that I need to join on a condition. I have a massive aggregation that works flawlessly, at around 20-30ms per query over 15M+ records. However, I want to add another stage that matches documents in a second table. I've tried a $lookup stage, but it's really slow: my 20-30ms queries now take 20+ seconds with it added. Below is a sample model of the Post collection.

// Posts table
{
  "_id": {
    "$oid": "64e04965f31b0b617fd1168a"
  },
  "_t": "Post",
  "PostText": "",
  "PostIncrementId": "mLYGr9"
...and more omitted objects
}

Below is a sample model of the Relationships collection, where I log users that liked a post.

// Relationships table
{
  "_id": {
    "$oid": "64fc90ff4a40b8da206f94ce"
  },
  "_t": "SocialRelationships",
  "TargetId": "OUCPy0BU", // PostIncrementId on Post model
  "RelationshipType": 1, // action type, 1 = like
  "UserUniqueId": "sample-user-id" // user who did the action
}

Now, how do I get all Posts that have been liked by the user with the ID sample-user-id, using the Relationships table?
This is what my current aggregation looks like; it takes more than 20 seconds to run. I need it to run in under 1 second, or better still, under 500ms. I am writing in C#, but a solution in any programming language will suffice.


            // aggregationPipeline is a List<BsonDocument> of aggregations 
            // for the Posts table
            aggregationPipeline.Add(
                new BsonDocument
                {
                    {
                        "$lookup", new BsonDocument
                        {
                            { "from", "relationships" },
                            { "localField", "PostIncrementId" },
                            { "foreignField", "TargetId" },
                            { "as", "user_likes" } // user_likes should be array
                        }
                    }
                }
            );

            aggregationPipeline.Add(new BsonDocument
            {
                {
                    "$match", new BsonDocument
                    {
                        {
                            "$and", new BsonArray
                            {
                                new BsonDocument
                                {
                                    {
                                        "user_likes", new BsonDocument
                                        {
                                            {
                                                "$elemMatch", new BsonDocument
                                                {
                                                    { "UserUniqueId", "sample-user-id" },
                                                    { "RelationshipType", 1 }
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            });

I may be misunderstanding this, but as opposed to joining likes onto posts and then filtering down to likes from one user, could you flip it: find the likes by that user first, and then join THAT onto the posts collection?

With an appropriate index or two that should be very fast. I'm sure you have a load of other stages that are actually doing work that may prevent this, or I've misread the code fragments.
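For what it's worth, a minimal sketch of that reversal with the C# driver. Collection names ("relationships", "posts") and the `relationshipsCollection` variable are assumptions based on your snippets; adjust to your real names:

```csharp
// Sketch only: run the aggregation against Relationships, not Posts.
// Assumes a compound index on the Relationships collection, e.g.:
//   { UserUniqueId: 1, RelationshipType: 1, TargetId: 1 }
// and an index on posts.PostIncrementId for the $lookup.
var reversedPipeline = new List<BsonDocument>
{
    // 1) Cheap, index-backed filter: only this user's likes
    new BsonDocument("$match", new BsonDocument
    {
        { "UserUniqueId", "sample-user-id" },
        { "RelationshipType", 1 }
    }),

    // 2) Join the (now small) set of likes onto Posts
    new BsonDocument("$lookup", new BsonDocument
    {
        { "from", "posts" },
        { "localField", "TargetId" },
        { "foreignField", "PostIncrementId" },
        { "as", "post" }
    }),

    // 3) Flatten to one document per liked post
    new BsonDocument("$unwind", "$post"),
    new BsonDocument("$replaceRoot", new BsonDocument("newRoot", "$post"))
};

var likedPosts = relationshipsCollection
    .Aggregate(PipelineDefinition<BsonDocument, BsonDocument>.Create(reversedPipeline))
    .ToList();
```

The key point is that the selective $match runs first against an index, so the $lookup only has to join a handful of documents instead of scanning 15M+ posts and filtering afterwards.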

We can’t see the rest of the query or sample documents, but have you run an explain to see why it’s now taking so much longer?

Perhaps you could provide the index information for the collections and the explain result.
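If it helps, one way to get both from the C# driver is to wrap the existing pipeline in an explain command and list the indexes. The "posts" collection name here is an assumption from your snippets:

```csharp
// Sketch: ask the server why the pipeline is slow. executionStats shows
// per-stage timings and whether a COLLSCAN is happening inside $lookup.
var explain = database.RunCommand<BsonDocument>(new BsonDocument
{
    { "explain", new BsonDocument
        {
            { "aggregate", "posts" },
            { "pipeline", new BsonArray(aggregationPipeline) },
            { "cursor", new BsonDocument() }
        }
    },
    { "verbosity", "executionStats" }
});

// Existing indexes on the collection being joined
var indexes = database.GetCollection<BsonDocument>("relationships")
    .Indexes.List().ToList();
```

Pasting the explain output and the index list would make it much easier to see whether the $lookup is falling back to a collection scan per post.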