Newbie with basic questions

I'm the project lead and front-end developer on a website that runs lots of complicated searches over lots of variables. Think OkCupid for politics, but considerably more complicated. A common search might be: “show how I rate against the 50 people running for office in my area, based on my answers to any of 2,000 questions, their answers, how organizations that took matching questions rated them, and how I match those organizations.” Some of our current searches/calculations are already really slow, and we have entered maybe a tenth of the data we will have in a couple of years. The concept behind the project: if Google and Facebook can use big data to sell us stuff, we can use big data to fix democracy. We will probably have a few websites hitting the same database.
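
To make that concrete, here is a rough sketch of the core matching idea in plain Python. All names here are made up, and the real calculation also weights questions and folds in organization ratings; this is just the simplest version of the score:

```python
def match_score(voter_answers, politician_answers):
    """Percent agreement on the questions both sides answered.

    Both arguments are hypothetical dicts mapping question id -> answer.
    """
    shared = set(voter_answers) & set(politician_answers)
    if not shared:
        return None  # no overlapping questions, no score
    agree = sum(1 for q in shared if voter_answers[q] == politician_answers[q])
    return 100.0 * agree / len(shared)

# match_score({1: "yes", 2: "no"}, {1: "yes", 3: "no"}) -> 100.0
# (question 1 is the only shared question, and the answers agree)
```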

We are about to redesign the db, and I have some basic questions:

  1. I was told that MongoDB used not to confirm whether a write actually succeeded, but that it now does. Someone else told me that this log (the journal, I assume) is unreliable. If writing the same data to multiple places can be trusted, it would speed things up to have collections dedicated to certain tasks (I sketch what I mean below, after the list).

  2. How much time do we lose querying multiple collections versus one really long collection with many fields and arrays in each document? Is there any rule of thumb for when breaking things into smaller pieces, or combining them, makes sense? (The two shapes I am weighing are sketched below.)

  3. (this one is really basic) Does an index store the documents' object ids? For example, if I knew that voters in NJ mostly searched within NJ, and I had one collection with politicians from all across America, would making 50 per-state indexes save me time, or would the search still have to scan every “state” field to pick out the NJ politicians before it does the rest of the compound search? (Sketched below.)

  4. Does having an index make a query as fast as if the indexed data lived in its own separate collection? (Sketched below.)

  5. Is there a way of tricking an incredibly large collection into having more than the 64-index limit? (Sketched below.)
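
For question 1, here is what I understand “confirmed writes” to mean, sketched with the pymongo driver (the database and collection names are made up):

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

db = MongoClient("mongodb://localhost:27017").votematch  # hypothetical db

# Modern drivers acknowledge writes by default (w=1).  Requesting
# j=True additionally waits until the write reaches the on-disk
# journal, which is the durability guarantee I am asking about.
durable = db.politicians.with_options(
    write_concern=WriteConcern(w=1, j=True)
)
result = durable.insert_one({"name": "Jane Roe", "state": "NJ"})
print(result.acknowledged)  # True once the server confirms the write
```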
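
For question 2, these are the two shapes I am weighing. Embedding everything means one query; splitting things up means join-style logic in the application, i.e. extra round trips (names again made up):

```python
from pymongo import MongoClient

db = MongoClient().votematch  # hypothetical db

# Shape 1: one big document per politician, everything embedded.
embedded = {
    "name": "Jane Roe",
    "state": "NJ",
    "answers": [{"q": 17, "a": "yes"}, {"q": 42, "a": "no"}],
    "org_ratings": [{"org": "League X", "rating": 87}],
}
# A single query fetches the whole thing:
politician = db.politicians.find_one({"name": "Jane Roe"})

# Shape 2: data split across collections and linked by ids.
# Reading it back costs one query per collection:
politician = db.politicians.find_one({"name": "Jane Roe"})
ratings = list(db.ratings.find({"politician_id": politician["_id"]}))
```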
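
For question 3, what I am really asking is whether one compound index with state first gives me the “50 state indexes” for free:

```python
from pymongo import MongoClient, ASCENDING

db = MongoClient().votematch  # hypothetical db

# One compound index with "state" as its first key.  A query on
# {"state": "NJ", ...} jumps straight to the NJ range of the index
# instead of checking the "state" field of every document in America.
db.politicians.create_index([("state", ASCENDING), ("office", ASCENDING)])

nj_senators = db.politicians.find({"state": "NJ", "office": "senate"})

# explain() reports whether the index was used for the query.
print(nj_senators.explain())
```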
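
For question 4, the closest thing I have found to “the index acting like its own table” is a covered query, where every field the query filters on and returns lives in the index, so the documents themselves are never read:

```python
from pymongo import MongoClient, ASCENDING

db = MongoClient().votematch  # hypothetical db

db.politicians.create_index([("state", ASCENDING), ("name", ASCENDING)])

# The filter and the projection both stay inside the indexed fields,
# and _id is excluded, so the server can answer entirely from the
# index without touching a single document.
nj_names = db.politicians.find(
    {"state": "NJ"},
    {"_id": 0, "state": 1, "name": 1},
)
```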
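
For question 5, the only “trick” I have seen suggested is restructuring documents so that one multikey index serves many lookups, rather than keeping one index per field:

```python
from pymongo import MongoClient, ASCENDING

db = MongoClient().votematch  # hypothetical db

# Instead of 2,000 separate question fields (which would invite
# 2,000 indexes), store answers as an array of {q, a} pairs...
db.politicians.insert_one({
    "name": "Jane Roe",
    "answers": [{"q": 17, "a": "yes"}, {"q": 42, "a": "no"}],
})

# ...so a single multikey compound index covers queries on any
# question number, well under the 64-index ceiling.
db.politicians.create_index([("answers.q", ASCENDING), ("answers.a", ASCENDING)])

agree_on_17 = db.politicians.find(
    {"answers": {"$elemMatch": {"q": 17, "a": "yes"}}}
)
```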

Thanks.