How to Build a Search Service in Java

Erik Hatcher • 10 min read • Published Feb 13, 2024 • Updated Apr 23, 2024
Search • Java • Atlas
We need to code our way from the search box to our search index. Performing a search and rendering the results in a presentable fashion is not, in itself, a tricky endeavor: Send the user’s query to the search server, and translate the response data into some user interface technology. However, there are important issues to address along the way, such as security, error handling, and performance, and these concerns deserve isolation and control.
A typical three-tier system has a presentation layer that sends user requests to a middle layer, or application server, which interfaces with backend data services. These tiers separate concerns so that each can focus on its own responsibilities.
Three-tier architecture
If you’ve built an application to manage a database collection, you’ve no doubt implemented Create-Read-Update-Delete (CRUD) facilities that isolate the business logic in a middle application tier.
Search is a somewhat different type of service in that it is read-only, is accessed very frequently, must respond quickly to be useful, and generally returns more than just documents. Additional metadata returned with search results commonly includes keyword highlighting, document scores, faceting, and the number of results found. Also, searches often match far more documents than can reasonably be presented, so pagination and filtered searches are necessary features.
Our search service provides the three-tier benefits outlined above in these ways:
  • Security: The database connection string is isolated into the service environment. Parameters are validated and sanitized. The client/user cannot request a large number of results or do deep paging.
  • Scalability: The service is stateless and could easily be deployed multiple times and load-balanced.
  • Faster deployment: Service endpoints can be versioned and kept running while enhanced versions are deployed. Behavior can be modified without necessarily affecting either the presentation tier or the database and search index configurations.
In this article, we are going to detail an HTTP Java search service designed to be called from a presentation tier, and in turn, it translates the request into an aggregation pipeline that queries our Atlas data tier. This is purely a service implementation, with no end-user UI; the user interface is left as an exercise for the reader. In other words, the author has deep experience providing search services to user interfaces but is not a UI developer himself. :)

Prerequisites

The code for this article lives in the GitHub repository.
This project was built using:
  • Gradle 8.5
  • Java 21
Standard Java and servlet APIs are used and should work as-is or port easily to later Java versions.
To run the examples provided here, the Atlas sample data needs to be loaded and a movies_index, as described below, created on the sample_mflix.movies collection. If you’re new to Atlas Search, a good starting point is Using Atlas Search from Java.

Search service design

The front-end presentation layer provides a search box, renders search results, and supplies sorting, pagination, and filtering controls. A middle tier, via an HTTP request, validates and translates the search request parameters into an aggregation pipeline specification that is then sent to the data tier.
A search service needs to be fast, scalable, and handle these basic parameters:
  • The query itself: This is what the user entered into the search box.
  • Number of results to return: Often, only 10 or so results are needed at a time.
  • Starting point of the search results: This allows the pagination of search results.
Also, a performant query should search and return only a small number of fields, though the fields returned need not be the same fields searched. For example, when searching movies, you might want to search the fullplot field but not return its potentially large text for presentation. Or, you may want to include the year a movie was released in the results without searching the year field.
Additionally, a search service must provide a way to constrain search results to, say, a specific category, genre, or cast member, without affecting the relevancy ordering of results. This filtering capability could also be used to enforce access control, and a service layer is an ideal place to add such constraints that the presentation tier can rely on rather than manage.

Search service interface

Let’s now concretely define the service interface based on the design. Our goal is to support a request, such as find “Music” genre movies for the query “purple rain” against the title and plot fields, returning only five results at a time that only include the fields title, genres, plot, and year. That request from our presentation layer’s perspective is this HTTP GET request:
http://service_host:8080/search?q=purple%20rain&limit=5&skip=0&project=title,genres,plot,year&search=title,plot&filter=genres:Music
These parameters, along with a debug parameter, are detailed below:
  • q: A full-text query, typically the value entered by the user into a search box.
  • search: A comma-separated list of fields to search across using the query (q) parameter.
  • limit: Return at most this number of results, constrained to a maximum of 25.
  • skip: Return the results starting after this number of results (up to the limit number of results), with a maximum of 100 results skipped.
  • project: A comma-separated list of fields to return for each document. Add _id if it is needed. _score is a “pseudo-field” used to include the computed relevancy score.
  • filter: Uses <field name>:<exact value> syntax; zero or more filter parameters are supported.
  • debug: If true, include the full aggregation pipeline .explain() output in the response as well.
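To make this concrete, here is a minimal sketch of how a presentation tier might issue the example request using Java’s built-in java.net.http client. The host, port, and parameter values are the ones assumed in the example above; adjust them for your deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SearchClientExample {
    public static void main(String[] args) throws Exception {
        // Example request from above; substitute your service host and port
        String url = "http://localhost:8080/search"
                + "?q=purple%20rain&limit=5&skip=0"
                + "&project=title,genres,plot,year"
                + "&search=title,plot&filter=genres:Music";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // The service responds with the JSON structure described in the next section
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}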

Returned results

Given the specified request, let’s define the response JSON structure to return the requested (project) fields of the matching documents in a docs array. In addition, the search service returns a request section showing both the explicit and implicit parameters used to build the Atlas $search pipeline and a meta section that will return the total count of matching documents. This structure is entirely our design, not meant to be a direct pass-through of the aggregation pipeline response, allowing us to isolate, manipulate, and map the response as it best fits our presentation tier’s needs.
{
  "request": {
    "q": "purple rain",
    "skip": 0,
    "limit": 5,
    "search": "title,plot",
    "project": "title,genres,plot,year",
    "filter": [
      "genres:Music"
    ]
  },
  "docs": [
    {
      "plot": "A young musician, tormented by an abusive situation at home, must contend with a rival singer, a burgeoning romance and his own dissatisfied band as his star begins to rise.",
      "genres": [
        "Drama",
        "Music",
        "Musical"
      ],
      "title": "Purple Rain",
      "year": 1984
    },
    {
      "plot": "Graffiti Bridge is the unofficial sequel to Purple Rain. In this movie, The Kid and Morris Day are still competitors and each runs a club of his own. They make a bet about who writes the ...",
      "genres": [
        "Drama",
        "Music",
        "Musical"
      ],
      "title": "Graffiti Bridge",
      "year": 1990
    }
  ],
  "meta": [
    {
      "count": {
        "total": 2
      }
    }
  ]
}

Search service implementation

Code! That’s where it’s at. Keeping things as straightforward as possible so that our implementation is useful for every front-end technology, we’re implementing an HTTP service that works with standard GET request parameters and returns easily digestible JSON. And Java is our language of choice here, so let’s get to it. Coding is an opinionated endeavor, so we acknowledge that there are various ways to do this in Java and other languages — here’s one opinionated (and experienced) way to go about it.
To run with the configuration presented here, a good starting point is to get up and running with the examples from the article Using Atlas Search from Java. Once you’ve got that running, create a new index, called movies_index, with a custom index configuration as specified in the following JSON:
{
  "analyzer": "lucene.english",
  "searchAnalyzer": "lucene.english",
  "mappings": {
    "dynamic": true,
    "fields": {
      "cast": [
        {
          "type": "token"
        },
        {
          "type": "string"
        }
      ],
      "genres": [
        {
          "type": "token"
        },
        {
          "type": "string"
        }
      ]
    }
  }
}
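The index can be created through the Atlas UI, but as a convenience, here is a sketch of creating it programmatically, assuming a recent Java driver (4.11 or later) that provides the createSearchIndex helper:

// Sketch: create movies_index programmatically (assumes Java driver 4.11+,
// which added MongoCollection.createSearchIndex)
String indexDefinition = """
    {
      "analyzer": "lucene.english",
      "searchAnalyzer": "lucene.english",
      "mappings": {
        "dynamic": true,
        "fields": {
          "cast":   [ { "type": "token" }, { "type": "string" } ],
          "genres": [ { "type": "token" }, { "type": "string" } ]
        }
      }
    }
    """;
collection.createSearchIndex("movies_index", org.bson.Document.parse(indexDefinition));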
Here’s the skeleton of the implementation, a standard doGet servlet entry point, grabbing all the parameters we’ve specified:
public class SearchServlet extends HttpServlet {
    private MongoCollection<Document> collection;
    private String indexName;

    private Logger logger;

    // ...

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        String q = request.getParameter("q");
        String searchFieldsValue = request.getParameter("search");
        String limitValue = request.getParameter("limit");
        String skipValue = request.getParameter("skip");
        String projectFieldsValue = request.getParameter("project");
        String debugValue = request.getParameter("debug");
        String[] filters = request.getParameterMap().get("filter");

        // ...
    }
}
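Before building any pipeline stages, the raw parameter values get validated and constrained. The checks below are an illustrative sketch rather than verbatim repository code, but they enforce the bounds from the parameter table above (limit capped at 25, skip capped at 100) and the requirement that q and search be present:

// Illustrative validation sketch (not verbatim repository code)
if (q == null || q.isBlank() || searchFieldsValue == null) {
    response.sendError(HttpServletResponse.SC_BAD_REQUEST, "`q` and `search` parameters are required");
    return;
}

int limit = 10; // assumed default page size
try {
    if (limitValue != null) {
        limit = Math.min(Integer.parseInt(limitValue), 25); // cap at 25 results
    }
} catch (NumberFormatException e) {
    response.sendError(HttpServletResponse.SC_BAD_REQUEST, "`limit` must be a number");
    return;
}

int skip = 0;
try {
    if (skipValue != null) {
        skip = Math.min(Integer.parseInt(skipValue), 100); // no deep paging beyond 100
    }
} catch (NumberFormatException e) {
    response.sendError(HttpServletResponse.SC_BAD_REQUEST, "`skip` must be a number");
    return;
}

boolean debug = Boolean.parseBoolean(debugValue);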
Notice that a few instance variables have been defined, which get initialized in the standard servlet init method from values specified in the web.xml deployment descriptor, as well as the ATLAS_URI environment variable:
@Override
public void init(ServletConfig config) throws ServletException {
    super.init(config);

    logger = Logger.getLogger(config.getServletName());

    String uri = System.getenv("ATLAS_URI");
    if (uri == null) {
        throw new ServletException("ATLAS_URI must be specified");
    }

    String databaseName = config.getInitParameter("database");
    String collectionName = config.getInitParameter("collection");
    indexName = config.getInitParameter("index");

    MongoClient mongo_client = MongoClients.create(uri);
    MongoDatabase database = mongo_client.getDatabase(databaseName);
    collection = database.getCollection(collectionName);

    logger.info("Servlet " + config.getServletName() + " initialized: " + databaseName + " / " + collectionName + " / " + indexName);
}
For the best protection of our ATLAS_URI connection string, we define it in the environment so that it’s not hard-coded nor visible within the application itself, other than at initialization. The database, collection, and index names, on the other hand, are specified in the standard web.xml deployment descriptor, which allows us to define endpoints for each index that we want to support. Here’s a basic web.xml definition:
<web-app>
  <servlet>
    <servlet-name>SearchServlet</servlet-name>
    <servlet-class>com.mongodb.atlas.SearchServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
    <!--
      The connection string must be defined in the
      `ATLAS_URI` environment variable
    -->
    <init-param>
      <param-name>database</param-name>
      <param-value>sample_mflix</param-value>
    </init-param>
    <init-param>
      <param-name>collection</param-name>
      <param-value>movies</param-value>
    </init-param>
    <init-param>
      <param-name>index</param-name>
      <param-value>movies_index</param-value>
    </init-param>
  </servlet>

  <servlet-mapping>
    <servlet-name>SearchServlet</servlet-name>
    <url-pattern>/search</url-pattern>
  </servlet-mapping>
</web-app>

GETting the search results

Requesting search results is a stateless operation with no side effects on the database, and it works nicely as a straightforward HTTP GET request since the query itself should not be a very long string. Our front-end tier can constrain the length appropriately. Larger requests could be supported by switching to POST/doPost, if needed.

Aggregation pipeline behind the scenes

Ultimately, to support the information we want returned (as shown in the example response above), the example request gets transformed into this aggregation pipeline:
[
  {
    "$search": {
      "compound": {
        "must": [
          {
            "text": {
              "query": "purple rain",
              "path": [
                "title",
                "plot"
              ]
            }
          }
        ],
        "filter": [
          {
            "equals": {
              "path": "genres",
              "value": "Music"
            }
          }
        ]
      },
      "index": "movies_index",
      "count": {
        "type": "total"
      }
    }
  },
  {
    "$facet": {
      "docs": [
        {
          "$skip": 0
        },
        {
          "$limit": 5
        },
        {
          "$project": {
            "title": 1,
            "genres": 1,
            "plot": 1,
            "year": 1,
            "_id": 0
          }
        }
      ],
      "meta": [
        {
          "$replaceWith": "$$SEARCH_META"
        },
        {
          "$limit": 1
        }
      ]
    }
  }
]
There are a few aspects to this generated aggregation pipeline worth explaining further:
  • The query (q) is translated into a text operator over the specified search fields. Both of those parameters are required in this implementation.
  • filter parameters are translated into non-scoring filter clauses using the equals operator (a sketch of this translation follows this list). The equals operator requires string fields to be indexed as a token type; this is why you see the genres and cast fields set up to be both string and token types. Those two fields can be searched full-text-wise (via the text or other string-type supporting operators) or used as exact match equals filters.
  • The count of matching documents is requested in $search, which is returned within the $$SEARCH_META aggregation variable. Since this metadata is not specific to a document, it needs special handling to be returned from the aggregation call to our search server. This is why the $facet stage is leveraged, so that this information is pulled into a meta section of our service’s response.
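Translating the raw filter=field:value parameters into those equals clauses might look like the following sketch. It leans on the driver’s SearchOperator.of escape hatch to express an operator as a raw document; a dedicated equals builder, if your driver version offers one, would work just as well:

// Sketch: build non-scoring equals filter clauses from `filter` parameters
List<SearchOperator> filterOperators = new ArrayList<>();
if (filters != null) {
    for (String filter : filters) {
        int colon = filter.indexOf(':');
        if (colon == -1) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST,
                "filter must use <field name>:<exact value> syntax");
            return;
        }
        String fieldName = filter.substring(0, colon);
        String value = filter.substring(colon + 1);

        // SearchOperator.of wraps a raw operator document
        filterOperators.add(SearchOperator.of(
            new Document("equals",
                new Document("path", fieldName).append("value", value))));
    }
}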
The use of $facet is a bit of a tricky trick, which gives our aggregation pipeline response room for future expansion too.
The $facet aggregation stage is, confusingly, named the same as the Atlas Search facet collector. Search result facets give a group label and the count of that group within the matching search results. For example, faceting on genres (which requires an index configuration adjustment from the example here) would provide, in addition to the documents matching the search criteria, a list of all genres within those search results and a count of how many of each. Adding the facet operator to this search service is on the roadmap mentioned below.

$search in code

Given a query (q), a list of search fields (search), and filters (zero or more filter parameters), building the $search stage programmatically is straightforward using the Java driver’s convenience methods:
// $search
List<SearchPath> searchPath = new ArrayList<>();
for (String search_field : searchFields) {
    searchPath.add(SearchPath.fieldPath(search_field));
}

CompoundSearchOperator operator = SearchOperator.compound()
    .must(List.of(SearchOperator.text(searchPath, List.of(q))));
if (filterOperators.size() > 0)
    operator = operator.filter(filterOperators);

Bson searchStage = Aggregates.search(
    operator,
    SearchOptions.searchOptions()
        .option("scoreDetails", debug)
        .index(indexName)
        .count(SearchCount.total())
);
We’ve added the scoreDetails feature of Atlas Search when debug=true, allowing us to introspect the gory Lucene scoring details only when desired; requesting score details is a slight performance hit and is generally only useful for troubleshooting.

Field projection

The last interesting bit of our service implementation entails field projection. Returning the _id field, or not, requires special handling. Our service code looks for the presence of _id in the project parameter and explicitly turns it off if not specified. We have also added a facility to include the document’s computed relevancy score, if desired, by looking for a special _score pseudo-field specified in the project parameter. Programmatically building the projection stage looks like this:
List<String> projectFields = new ArrayList<>();
if (projectFieldsValue != null) {
    projectFields.addAll(List.of(projectFieldsValue.split(",")));
}

boolean include_id = false;
if (projectFields.contains("_id")) {
    include_id = true;
    projectFields.remove("_id");
}

boolean includeScore = false;
if (projectFields.contains("_score")) {
    includeScore = true;
    projectFields.remove("_score");
}

// ...

// $project
List<Bson> projections = new ArrayList<>();
if (projectFieldsValue != null) {
    // Don't add _id inclusion or exclusion if no `project` parameter specified
    projections.add(Projections.include(projectFields));
    if (include_id) {
        projections.add(Projections.include("_id"));
    } else {
        projections.add(Projections.excludeId());
    }
}
if (debug) {
    projections.add(Projections.meta("_scoreDetails", "searchScoreDetails"));
}
if (includeScore) {
    projections.add(Projections.metaSearchScore("_score"));
}
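The other stage handed to the aggregation call in the next section is facetStage. Here’s a sketch of assembling it with the driver’s Aggregates.facet and Facet helpers so that it mirrors the generated pipeline shown earlier, using the skip, limit, and projections values built above (this mirrors, rather than quotes, the repository code):

// Sketch: assemble the $facet stage (requires com.mongodb.client.model.Facet)
List<Bson> docsPipeline = new ArrayList<>();
docsPipeline.add(Aggregates.skip(skip));
docsPipeline.add(Aggregates.limit(limit));
if (!projections.isEmpty()) {
    docsPipeline.add(Aggregates.project(Projections.fields(projections)));
}

Bson facetStage = Aggregates.facet(
    new Facet("docs", docsPipeline),
    new Facet("meta",
        Aggregates.replaceWith("$$SEARCH_META"),
        Aggregates.limit(1))
);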

Aggregating and responding

After the parameter wrangling and stage building, the rest is pretty straightforward: We build the full pipeline, make our call to Atlas, build a JSON response, and return it to the calling client. The only unique thing here is adding the .explain() call when debug=true so that our client can see the full picture of what happened from the Atlas perspective:
AggregateIterable<Document> aggregationResults = collection.aggregate(List.of(
    searchStage,
    facetStage
));

Document responseDoc = new Document();
responseDoc.put("request", new Document()
    .append("q", q)
    .append("skip", skip)
    .append("limit", limit)
    .append("search", searchFieldsValue)
    .append("project", projectFieldsValue)
    .append("filter", filters == null ? Collections.EMPTY_LIST : List.of(filters)));

if (debug) {
    responseDoc.put("debug", aggregationResults.explain().toBsonDocument());
}

// When using $facet stage, only one "document" is returned,
// containing the keys specified above: "docs" and "meta"
Document results = aggregationResults.first();
if (results != null) {
    for (String s : results.keySet()) {
        responseDoc.put(s, results.get(s));
    }
}

response.setContentType("application/json");
PrintWriter writer = response.getWriter();
writer.println(responseDoc.toJson());
writer.close();

logger.info(request.getServletPath() + "?" + request.getQueryString());

Taking it to production

This is a standard Java servlet extension designed to run in Tomcat, Jetty, or other servlet API-compliant containers. The build uses Gretty, which allows a developer to start this example Java search service with either the jettyRun or tomcatRun task.
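For example, a local development run might look like the following, with the connection string supplied via the environment (placeholder values shown):

ATLAS_URI="mongodb+srv://<username>:<password>@<cluster-host>/" ./gradlew jettyRun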
In order to build a distribution that can be deployed to a production environment, run:
./gradlew buildProduct

Future roadmap

Our search service, as is, is robust enough for basic search use cases, but there is room for improvement. Here are some ideas for the future evolution of the service:
  • Add negative filters. Currently, we support positive filters with the filter=field:value parameter. A negative filter could have a minus sign in front. For example, to exclude “Drama” movies, support for filter=-genres:Drama could be implemented.
  • Support highlighting, to return snippets of field values that match query terms.
  • Implement faceting.
  • And so on… see the issues list for additional ideas and to add your own.
And with the service layer being a middle tier that can be independently deployed without necessarily having to make front-end or data-tier changes, some of these can be added without requiring changes in those layers.

Conclusion

Implementing a middle-tier search service provides numerous benefits from security, to scalability, to being able to isolate changes and deployments independent of the presentation tier and other search clients. Additionally, a search service allows clients to easily leverage sophisticated search capabilities using standard HTTP and JSON techniques.
For the fundamentals of using Java with Atlas Search, check out Using Atlas Search from Java. As you begin leveraging Atlas Search, be sure to check out the Query Analytics feature to assist in improving your search results.