We all think our jobs are hard. But if you’re a recruiter, you know just how tough it is to place people into those jobs: the average response rate to recruiters is an abysmal 7%. Enter Interseller, a fast-growing NYC-based SaaS company in the recruiting tech space.
For this episode of #BuiltWithMongoDB, we go behind the scenes in recruiting technology with Steven Lu, co-founder and CEO of Interseller.
How did you pick this problem to work on?
While working as an engineer, I helped teach and recruit many other tech professionals. That’s when I realized that good engineers don’t find jobs. They get poached. Sourcing is essential for assembling great teams, but with the low industry response rate, I knew we needed a new solution.
I started looking into recruiting technology and was frankly surprised by how outdated the solutions were. We began by addressing three parts of sourcing:
- Data Management
We serve about 4,000 recruiters, 75% of whom use us every single day. Our customers include Squarespace, Honey, and Compass. Overall, about 2 million candidates have responded through our platform, lifting the average response rate from the industry's 7% to between 40% and 60%. We aim to close candidates within 21 days.
How did you decide to have Interseller #BuiltWithMongoDB?
Like any engineer, I hate database migrations. I hate having to build around the database rather than the database building around my product. I remember using MongoDB at Compass in 2012—we were a MongoDB shop.
After that, I went to another company that was using SQL and a relational database and I felt we were constantly being blocked by database migrations. I had to depend on our CTO to run the database migration before I could merge anything. I have such bad memories from that experience. I would rather have my engineering team push things faster than have to wait on the database side.
Our release schedule is really short: as a startup, you have to keep pumping things out, and if half your time is spent on database migration, you won’t be able to serve customers. That’s why MongoDB Atlas is so core to our business. It’s reliable, and I don’t have to deal with database versions.
Looking to build something cool? Get started with the MongoDB for Startups program.
Vector Search and Dedicated Search Nodes: Now in General Availability
Today we’re excited to take the next step in adding even more value to the Atlas platform with the general availability (GA) release of both Atlas Vector Search and Search Nodes. Since announcing Atlas Vector Search and dedicated infrastructure with Search Nodes in public preview, we’ve seen continued excitement and demand for additional workloads on vector-optimized Search Nodes. This new level of scalability and performance ensures workload isolation and the ability to better optimize resources for vector search use cases.

Atlas Vector Search allows developers to build intelligent applications powered by semantic search and generative AI over any type of data. It solves the challenge of returning relevant results even when users don’t know exactly what they’re looking for, using machine learning models to find similar results across almost any type of data. Within just five months of being announced in public preview, Atlas Vector Search has already received the highest developer net promoter score (NPS), a measure of how likely someone is to recommend a solution to someone else, and is the second most widely used vector database, according to Retool’s State of AI report.

There are two key use cases for Atlas Vector Search when building next-gen applications:
- Semantic search: searching and finding relevant results in unstructured data, based on semantic similarity
- Retrieval-augmented generation (RAG): augmenting the reasoning capabilities of LLMs with feeds of your own real-time data to create generative AI apps uniquely tailored to the demands of your business

Atlas Vector Search unlocks the full potential of your data, whether structured or unstructured, taking advantage of the rise of AI and LLMs to solve critical business challenges.
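As a minimal sketch of what the semantic-search use case looks like in practice, the snippet below constructs a `$vectorSearch` aggregation pipeline in Python. The index name `vector_index`, the `embedding` field, and the projected `title` field are illustrative assumptions, not details from this post; the `$vectorSearch` stage itself and the `vectorSearchScore` metadata are part of the Atlas Vector Search API.

```python
# Sketch: building a $vectorSearch aggregation pipeline for semantic search.
# The index name "vector_index" and fields "embedding"/"title" are assumptions.

def build_vector_search_pipeline(query_vector, limit=5, num_candidates=100):
    """Return an aggregation pipeline that finds the documents whose
    stored embeddings are most similar to the query embedding."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",          # hypothetical index name
                "path": "embedding",              # field holding the vectors
                "queryVector": query_vector,      # embedding of the user's query
                "numCandidates": num_candidates,  # ANN candidates to consider
                "limit": limit,                   # number of results to return
            }
        },
        # Surface the similarity score alongside each matched document.
        {
            "$project": {
                "_id": 0,
                "title": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

pipeline = build_vector_search_pipeline([0.1, 0.2, 0.3])
```

In a live application you would pass this pipeline to `collection.aggregate(pipeline)` via a driver such as PyMongo against an Atlas cluster; here we only build the stages to show their shape.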
This is possible because Vector Search is part of the MongoDB Atlas developer data platform, which starts with our flexible document data model and a unified API that provides one consistent experience. To ensure you unlock the most value possible from Atlas Vector Search, we have cultivated a robust ecosystem of AI integrations, allowing developers to build with their favorite LLMs and frameworks.

Our ecosystem of AI integrations for Atlas Vector Search

To learn more about Atlas Vector Search, watch our short video or jump right into the tutorial.

Atlas Vector Search also takes advantage of our new dedicated Search Nodes architecture, which makes it easier to provision the right level of resources for specific workload needs. Search Nodes provide dedicated infrastructure for Atlas Search and Vector Search workloads, allowing you to optimize compute resources and scale search fully independently of the database. They deliver better performance at scale through workload isolation, higher availability, and more efficient resource usage; in some cases we’ve seen 60% faster query times for users' workloads by leveraging concurrent querying in Search Nodes.

In addition to the compute-heavy Search Nodes we provided in the public preview, the GA release includes a memory-optimized, low-CPU option that is well suited for Vector Search in production. Because your database and search workloads no longer share the same infrastructure, resource contention and the service interruptions it could cause are a thing of the past.

Coupled architecture (left) compared with the decoupled Search Node architecture (right)

We see this as the next evolution of our architecture for both Atlas Search and Vector Search, furthering the value provided by the MongoDB developer data platform.
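To make the vector-optimized setup above concrete, here is a sketch of an Atlas Vector Search index definition, expressed as the JSON body you would submit when creating a `vectorSearch`-type index. The `embedding` and `category` paths and the 1536-dimension choice (typical of some popular embedding models) are illustrative assumptions; the `vector` and `filter` field types and the `similarity` options are part of the documented index format.

```python
# Sketch: an Atlas Vector Search index definition. Field paths ("embedding",
# "category") and numDimensions=1536 are assumed values for illustration.

def vector_index_definition(num_dimensions=1536, similarity="cosine"):
    """Return the definition body for a 'vectorSearch' type Atlas index."""
    return {
        "fields": [
            {
                "type": "vector",
                "path": "embedding",              # document field with vectors
                "numDimensions": num_dimensions,  # must match the embedding model
                "similarity": similarity,         # cosine | euclidean | dotProduct
            },
            # Optional filter field for pre-filtering by metadata.
            {"type": "filter", "path": "category"},
        ]
    }

definition = vector_index_definition()
```

The dimension count must match the embedding model used to populate the `embedding` field, and the similarity function should match how that model's vectors are meant to be compared.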
Search Nodes are currently available on AWS single-region clusters, with Google Cloud and Azure coming soon; in the meantime, customers on Google Cloud and Microsoft Azure can continue using shared infrastructure. Read our initial announcement blog post for the steps to turn on Search Nodes today, or jump right into the tutorial. Both of these features are available today for production use. We can’t wait to see what you build, and please reach out to us with any questions.