Setup for a large time-series logging collection


I’m not a complete newbie when it comes to using Mongo, but I’m out of my league when it comes to planning the setup/sharding for a somewhat heavier workload.
I’d appreciate it if anyone could share their thoughts, or details of an existing setup handling a workload similar to mine:
600 MB of raw text logs per hour will be shipped to Mongo via Serilog; this works out to around 0.5 billion log events/documents per month in a time-series collection (planned retention: 6 months).
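For reference, this is roughly the shape of the create command I’m planning, sketched as a plain Python dict (the collection name `log_events` and the field names `ts` and `source` are placeholders I made up, and the TTL value assumes 6 × 30-day months):

```python
# Retention: ~6 months, approximated as 6 x 30 days.
SIX_MONTHS_SECONDS = 6 * 30 * 24 * 3600

# The raw "create" command a driver would send for the time-series
# collection; "ts" and "source" are placeholder field names.
create_log_collection = {
    "create": "log_events",
    "timeseries": {
        "timeField": "ts",       # event timestamp written by Serilog
        "metaField": "source",   # per-host/per-logger metadata, drives bucketing
        "granularity": "seconds",
    },
    "expireAfterSeconds": SIX_MONTHS_SECONDS,  # TTL-style retention
}
```

My understanding is that the `metaField` choice matters a lot for bucketing efficiency at this volume, so pointers on that are welcome too.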
Assuming storage itself is not an issue, other than that the data will outgrow RAM very quickly and by a large factor, what else should I look out for to make this project viable? This database will serve as an analytics platform, so waiting a bit for a query to return is not a big deal (within reason).
Current projections put the raw data generated at roughly 2.5 TB over 6 months.
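For what it’s worth, the back-of-envelope math behind that figure (assuming 30-day months, decimal units):

```python
MB_PER_HOUR = 600
HOURS_PER_MONTH = 24 * 30          # assuming 30-day months
MONTHS = 6
DOCS_PER_MONTH = 500_000_000       # ~0.5 billion events/month

raw_gb_per_month = MB_PER_HOUR * HOURS_PER_MONTH / 1000   # ~432 GB/month
raw_tb_total = raw_gb_per_month * MONTHS / 1000           # ~2.6 TB over 6 months
avg_doc_bytes = raw_gb_per_month * 1e9 / DOCS_PER_MONTH   # ~864 bytes/event
```

So roughly 432 GB/month of raw text at an average of under 1 KB per event, before any compression from the time-series bucketing.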
Can I keep all 3 billion expected log entries in one time-series collection, or is it advisable to split it into year-month archives? How would splitting affect queries that cross a month boundary?
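To make the month-boundary concern concrete, here is the kind of aggregation I’d expect a split-archive layout to force, sketched as pipeline documents in Python (the collection names `logs_2024_01`/`logs_2024_02` and the `ts` field are hypothetical):

```python
from datetime import datetime, timezone

# A query window that crosses the Jan/Feb border.
start = datetime(2024, 1, 28, tzinfo=timezone.utc)
end = datetime(2024, 2, 3, tzinfo=timezone.utc)
time_filter = {"ts": {"$gte": start, "$lt": end}}

# Single collection: one plain range query is enough.
single_collection_query = time_filter

# Year-month archives: the same window needs $unionWith to stitch
# the two monthly collections back together (names are hypothetical).
split_collection_pipeline = [
    {"$match": time_filter},  # runs against logs_2024_01
    {"$unionWith": {
        "coll": "logs_2024_02",
        "pipeline": [{"$match": time_filter}],
    }},
]
```

If the single-collection route can handle 3 billion documents with sensible index/RAM behavior, I’d rather avoid the extra `$unionWith` plumbing in every cross-month query.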

Thanks for any shared data.