I come from a classic relational database background, and MongoDB does require an adjustment to your thinking: less a slight shift, more a 90-degree one.
So I don’t look at find/delete/update etc. as lacking features. Like most things, it is about using the right tool for the job; some things are easier to do in the client language, where you can change the data.
When I first looked at MongoDB 3+ years ago, I think I remember reading that the aggregation pipeline was developed separately, so it didn’t sit in the core of MongoDB alongside find, update, etc. That has the benefit that it can be maintained independently, without impacting other parts of the code base. Of course that comes with limitations, but it also means it can be fine-tuned for server-side performance.
Aggregate is about keeping the heavier data manipulation on the server and reducing network traffic. I tended to look at it as a reporting tool initially, but that undersells what you can achieve with it, and now I would use it for much more complex work.
I think the strength of aggregate is the ability to pipeline data, from one operation into another, into another, reusing pipeline operators: grouping, joining, splitting, rejoining, etc.
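To make the pipeline idea concrete, here is a toy, in-memory sketch in Python: each stage takes a list of documents and returns a new list, so stages feed one into the next. The stage names mirror MongoDB's $match/$group/$sort, but the collection, field names, and implementation are all made up for illustration; this is the concept, not how the server actually does it.

```python
from itertools import groupby

def match(docs, predicate):
    # like $match: keep only documents passing the predicate
    return [d for d in docs if predicate(d)]

def group(docs, key, total_field):
    # like $group with a $sum accumulator: one output doc per key
    docs = sorted(docs, key=lambda d: d[key])
    return [
        {"_id": k, "total": sum(d[total_field] for d in grp)}
        for k, grp in groupby(docs, key=lambda d: d[key])
    ]

def sort(docs, key, reverse=False):
    # like $sort
    return sorted(docs, key=lambda d: d[key], reverse=reverse)

orders = [
    {"status": "shipped", "customer": "ann", "amount": 10},
    {"status": "pending", "customer": "bob", "amount": 99},
    {"status": "shipped", "customer": "ann", "amount": 5},
    {"status": "shipped", "customer": "bob", "amount": 7},
]

# Pipeline: filter -> group -> sort, each stage consuming the previous one's output
result = sort(
    group(match(orders, lambda d: d["status"] == "shipped"),
          key="customer", total_field="amount"),
    key="total", reverse=True)
print(result)  # → [{'_id': 'ann', 'total': 15}, {'_id': 'bob', 'total': 7}]
```

In real MongoDB the whole chain is one array of stage documents passed to aggregate(), and crucially it all runs server-side, which is where the network-traffic saving comes from.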
I think you will appreciate it more as you move through the course.
For MongoDB, it is about how your data is read: what data best sits together, not about normalising the crap out of it.
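To put the "what data best sits together" point in concrete terms, here is a hypothetical example (document shape and field names are invented for illustration): where a normalised relational design would split an order and its line items into two tables joined on order_id, a read-optimised MongoDB document can embed the items, so one document read serves the whole order page.

```python
# Hypothetical read-optimised order document: the line items are embedded
# because they are always read together with the order, so no join is needed.
order = {
    "_id": 1001,
    "customer": "ann",
    "items": [
        {"sku": "A1", "qty": 2, "price": 5.00},
        {"sku": "B7", "qty": 1, "price": 12.50},
    ],
}

# Everything the order page needs comes from the one document:
order_total = sum(i["qty"] * i["price"] for i in order["items"])
print(order_total)  # → 22.5
```

The trade-off, of course, is that if the embedded data is also updated independently or shared across documents, referencing may still be the better choice; the deciding factor is the read pattern.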
NOTE - have a look at the aggregation feature in Compass (next to the schema view), which provides a GUI view of the aggregation pipeline. I don’t know if it is as fully formed as the shell command, but I do anticipate using it to re-review my data.
I will also be looking to see whether the Cosmos DB extension for Visual Studio Code will help with IntelliSense for MongoDB methods.